Book contents
- Frontmatter
- Contents
- List of figures
- List of examples
- Acknowledgements
- Preface
- Glossary of selected evaluation terms
- 1 Introduction
- 2 Compilation: setting the right foundations
- 3 Composition: designing for needs
- 4 Conducting process evaluation
- 5 Conducting economic evaluation
- 6 Conducting impact evaluation
- 7 Analysis, reporting and communications
- 8 Emerging challenges for evaluation and evaluators
- References
- Annex A The ROTUR framework for managing evaluation expectations
- Annex B Ready reckoner guide to experimentation choices in impact evaluation
- Index
- Social Research Association Shorts
7 - Analysis, reporting and communications
Published online by Cambridge University Press: 05 April 2022
Summary
• The role of reporting and maximising the opportunities for use of evaluation evidence
• Choosing the most appropriate deliverables and evaluation outputs
• Demonstrating reliability and validity of the evaluation evidence
• Building confidence and credibility in the evaluation; handling negative findings
• Going beyond reporting and harnessing wider approaches to communications
Introduction
Evaluation evidence does not speak for itself. Its end point is usually seen as some form of summative reporting, often a combination of written and oral outputs. Doing this effectively involves juggling competing demands: sifting and condensing a multiplicity of evidence, with the necessary health warnings, while producing a narrative that is readily understood by (and useful to) decision makers. This is quite a balancing act, but ‘good’ reporting goes much further than this if the value of evaluation findings for decision-making is to be maximised. This chapter also looks at some of those parallel needs and at how evaluators can play an active role in building both confidence in the evidence and its credibility.
Reporting and informing decision-making
Evaluation evidence is rarely the sole influence on users. Even evidence-based policy (and practice) is seldom guided wholly by evaluation, research or systematic evidence. Figure 7.1 shows that decision makers are subject to multiple influences. Most of these influences are not evidence-based, so the odds are often stacked against evaluators’ evidence playing a leading role.
To improve these odds, and to start to raise the profile of evidence in decision-making, evaluators need to go further than end reporting and play an active role in helping users to unpick the evidence and its implications. This is not a universally accepted idea. Even within the evaluation community, some see their role as ending with a signed-off report, leaving users to put in all the effort of appreciating the implications of the evidence for their decision-making. Users may also endorse this narrow perspective and feel more comfortable with the evaluation role ending with factual reporting. However, my own experience as both a user and an evaluator shows that ‘reporting’ responsibilities often need to go the extra mile if the evaluation evidence is to be used effectively.
- Type: Chapter
- Information: Demystifying Evaluation: Practical Approaches for Researchers and Users, pp. 133–156. Publisher: Bristol University Press. Print publication year: 2017.