Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgements
- Abbreviations
- Glossary
- 1 Introduction: user studies for digital library development
- PART 1 SETTING THE SCENE
- PART 2 METHODS EXPLAINED AND ILLUSTRATED
- 6 Questionnaires, interviews and focus groups as means for user engagement with evaluation of digital libraries
- 7 Expert evaluation methods
- 8 Evidence of user behaviour: deep log analysis
- 9 An eye-tracking approach to the evaluation of digital libraries
- 10 Personas
- PART 3 USER STUDIES IN THE DIGITAL LIBRARY UNIVERSE: WHAT ELSE NEEDS TO BE CONSIDERED?
- PART 4 USER STUDIES ACROSS THE CULTURAL HERITAGE SECTOR
- PART 5 PUTTING IT ALL TOGETHER
- Index
7 - Expert evaluation methods
from PART 2 - METHODS EXPLAINED AND ILLUSTRATED
Published online by Cambridge University Press: 08 June 2018
Summary
An expert is a person who has made all the mistakes that can be made in a very narrow field.
(Niels Bohr)
Introduction
Comprehensive, generalizable evaluations of digital libraries (DLs) are rare. Where evaluation does occur, it is generally minimal. Saracevic (2004) analysed around 80 evaluations and concluded that, both in scientific research and in practice, thorough evaluations of DLs are the exception rather than the rule. The complexity of DL systems is one reason for this: examining them in their entirety is not straightforward, and even when we attempt to do so, we lack scientifically accepted concepts, approaches and models. Another reason is the level of funding allocated to evaluation in DL projects. Evaluation is always a ‘must have’ stated by the funder, but is rarely supported by adequate resources. These reasons are, unfortunately, still valid at the time of writing.
In this chapter the method of expert evaluation is presented and shown to be one possible way of addressing such problems. Expert evaluations are heuristic or qualitative in nature, as opposed to quantitative evaluations, which aim to provide statistically significant results. Heuristic evaluations (Nielsen, 1994) are common in usability engineering, where user interfaces are evaluated by a small group of experts on the basis of their conformity to certain usability principles (heuristics). Expert evaluations differ from other types of heuristic evaluation in that they lack predefined heuristics: the experts are free to provide any comment, on the assumption that their views will be informed ones. Examples of the ten general heuristics defined by Nielsen include visibility of system status, error recognition and recovery, error prevention, user control and freedom (support for undo and redo), and aesthetic and minimalist design.
The advantages of such an expert evaluation are fast and cost-effective results, in contrast to more expensive forms of user study, which require a larger number of participants in order to produce representative results. We can further distinguish between two types of expert evaluation: in the first, the experts themselves are the evaluators, conducting the evaluation and providing the results; in the second, the experts are monitored by evaluators, who lead the evaluation and assess the results.
User Studies for Digital Library Development, pp. 75–84. Publisher: Facet. Print publication year: 2012.