
IDENTIFICATION AND REVIEW OF COST-EFFECTIVENESS MODEL PARAMETERS: A QUALITATIVE STUDY

Published online by Cambridge University Press: 04 August 2014

Eva Kaltenthaler
Affiliation:
School of Health and Related Research (ScHARR), University of Sheffield
Munira Essat
Affiliation:
School of Health and Related Research (ScHARR), University of Sheffield
Paul Tappenden
Affiliation:
School of Health and Related Research (ScHARR), University of Sheffield
Suzy Paisley
Affiliation:
School of Health and Related Research (ScHARR), University of Sheffield

Abstract

Objectives: Health economic models are developed as part of the health technology assessment (HTA) process to determine whether health interventions represent good value for money. These models are often used to directly inform healthcare decision making and policy. Populating a model's parameters requires types of information beyond clinical effectiveness evidence. The purpose of this research study was to explore issues concerned with the identification and use of information in the development of such models.

Methods: Three focus groups were held in February 2011 at the University of Sheffield with thirteen UK HTA experts. Attendees included health economic modelers, information specialists, and systematic reviewers. Qualitative framework analysis was used to analyze the focus group data.

Results: Six key themes, with related sub-themes, were identified: decisions and judgments; searching methods; selection and rapid review of evidence; team communication; modeler experience and clinical input; and reporting methods. There was considerable overlap between themes.

Conclusions: Key issues raised by the respondents included the need for effective communication and teamwork throughout the model development process, the importance of using clinical experts, and the need for transparent reporting of methods and decisions.

Type: Methods
Copyright © Cambridge University Press 2014

The development of a health economic model typically forms part of the health technology assessment (HTA) process. Health economic models require information in addition to clinical efficacy data, including evidence relating to relevant comparators, health utilities, resource use, and costs, among others. Sources of evidence may include randomized controlled trials (RCTs), observational evidence and other clinical studies, disease registers, elicitation of expert clinical judgment, existing cost-effectiveness models, routine data sources, and health valuation studies. The way in which these information needs are identified and used can have a fundamental impact on the results of the model (1) and, therefore, on healthcare decisions and resulting policies. There is often a lack of transparency associated with how information needs are met in the development of cost-effectiveness models. Drummond et al. (2) found that much of the relevant data for estimating quality-adjusted life-years (QALYs) were not contained in the systematic review for the HTA, and that the chosen method for summarizing the clinical data can inhibit the assessment of economic benefit.
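For orientation only (this is the standard health economics formulation, not something specific to this study): the headline output of such a model is typically the incremental cost-effectiveness ratio (ICER), so any parameter feeding either the cost side or the QALY side of the calculation can shift the final estimate:

\[
\text{ICER} = \frac{C_{\text{intervention}} - C_{\text{comparator}}}{E_{\text{intervention}} - E_{\text{comparator}}}, \qquad E = \sum_{t} u_t \, \Delta t,
\]

where \(C\) denotes expected discounted costs, \(E\) expected discounted QALYs, and \(u_t\) the health-state utility weight applied over the time interval \(\Delta t\). Comparator data, utilities, resource use, and costs all enter this calculation directly, which is why their identification matters.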

Although some of the issues around the identification and review of evidence for models have been discussed previously (1;3–8), there remains very little formal guidance on best practice in this area. It is not possible to review all evidence systematically to inform a health economic model, and choices need to be made about how evidence is synthesized and used. Briggs et al. (9) recommend that analysts should conform to the broad principles of evidence-based medicine and avoid “cherry picking” the best single source of evidence. While there is a need for transparent and reproducible methods, there are also time and resource constraints which can influence how the model development process operates. Chilcott et al. (10) suggest that a potential source of errors in health technology assessment models is the separation of the information gathering, reviewing, and modeling functions, while Drummond et al. (2) suggest that some of the problems associated with model development could be reduced if evidence requirements were discussed at an early stage.

Kaltenthaler, Tappenden, and Paisley (11;12) have previously investigated issues relating to the conceptualization of cost-effectiveness models and the identification and review of evidence to inform models. The aim of this study is to present the findings from a series of focus groups held with UK HTA experts to explore some of the issues and concerns associated with the identification and review of evidence used in the development of cost-effectiveness models. These findings are part of the evidence used to inform a recent NICE Decision Support Unit Technical Support Document (TSD) (13).

MATERIALS AND METHODS

Three focus groups were held as part of a workshop with thirteen HTA experts from UK universities in February 2011. We used focus groups because they are considered an appropriate data collection method when research aims to explore the degree of consensus on a topic and to investigate complex behaviors (14). They are particularly suited to the study of attitudes and experiences (15). Interviews were not chosen because interaction between participants was considered important (16). A subtle realist perspective was adopted (17). The participants were a purposive sample, chosen to represent a variety of specialisms with a range of perspectives, and included seven modelers, one health economist, one statistician, two information specialists, and two systematic reviewers. Ethical approval for the focus groups was obtained from the University of Sheffield Research Ethics Committee. The focus groups were facilitator-led (E.K.) and were digitally recorded, with the recordings transcribed verbatim. Framework analysis (16) was used to develop a thematic framework, and the qualitative data were classified and organized into key themes and sub-themes. The initial step was familiarization with the transcribed data. Data were then coded and the conceptual framework was developed. Coding was checked by a second reviewer. Some themes were identified in advance and others were emergent. The topic guide for the focus groups is shown in Table 1. The discussions were open, and participants were invited to discuss other relevant points that had not been included in the guide. Areas of agreement among participants are reported, as are alternative views where these were expressed.
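As a purely illustrative aside on the charting step of framework analysis (the themes, participants, and excerpts below are invented placeholders, not the study's data), coded excerpts are typically organized into a matrix indexed by case and theme, which is straightforward to sketch in code:

```python
# Illustrative charting step of framework analysis: coded excerpts are
# organized into a matrix indexed by participant and theme. All themes
# and excerpts here are invented placeholders, not the study's data.
from collections import defaultdict

coded_excerpts = [
    ("Modeller A", "searching methods", "hard to capture needs in one query"),
    ("Statistician", "decisions and judgments", "whose judgement is it?"),
    ("Modeller A", "decisions and judgments", "criteria shifted as we went"),
]

# participant -> theme -> list of excerpts
framework = defaultdict(lambda: defaultdict(list))
for participant, theme, excerpt in coded_excerpts:
    framework[participant][theme].append(excerpt)

# Reading across a row gives one participant's view of every theme;
# reading down a "column" compares participants within a theme.
for participant, themes in framework.items():
    for theme, excerpts in themes.items():
        print(f"{participant:12s} | {theme:24s} | {'; '.join(excerpts)}")
```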

Table 1. Focus Group Topic Guide

RESULTS

Six themes were identified from the focus group transcripts. There was overlap between the themes, as many of the issues raised were interrelated. The themes and related sub-themes are shown in Table 2.

Table 2. Focus Group Themes and Related Sub-themes

Theme 1: Many Decisions and Judgments Are Necessary during Model Development

The focus groups highlighted that a large number of decisions and judgments are made by modelers during the process of model development, especially relating to the relevance and appropriateness of evidence to the decision problem.

“So if you set up five criteria [for selection of parameter estimates] and then you end up judging what you choose on the basis of six criteria, only two of which were of the five you started out with, then I think it tells you something about how much judgement is required in the process . . .” (Modeller)

Some participants expressed concern that different modelers may make different judgments when representing the same part of reality. Participants were more comfortable if the judgments about what should be considered relevant to a particular decision problem did not rest solely with the individual developing the model but were instead made jointly by modelers, decision makers, health professionals, and other stakeholders. Failure to reflect conflicting views among stakeholders may lead to models which represent a contextually naïve and uninformed basis for decision making.

“There is also an issue of whose judgement it is - is it down to the modeller to make that judgement or should it be in conjunction with other people in the team or a clinical expert?” (Statistician)

Planning was regarded as a key element of model development. This could take the form of a protocol or project plan that clearly states the methodology for the model, the information needs, and the key roles and responsibilities of all team members. Most respondents considered that having the evidence requirements set out before model implementation allowed the whole team to identify potentially useful information and helped to keep track of changes made during the process.

“I think that it would be useful for us to map out a list of things that you might consider . . . pathways in this particular way or problem structuring methods may have some role in identifying what should be included in a model . . .” (Modeller)

The modelers in the focus groups discussed the use of existing models and indicated that the main reason they used these during model development was to critique them, so that they could avoid repeating the same mistakes and, in turn, produce a better model.

“I don't review models for their results, I don't review them to find things that I like – I review them to find things to avoid” (Modeller)

Although the use of existing models was considered potentially useful, the modelers suggested that they should be used with caution and not relied upon without considerable scrutiny: the appropriateness or credibility of an existing model may be questionable, and there may be a gap between the decision problem that the model was originally developed to address and the decision problem currently under consideration.

“Well, you have to be a bit careful with that, don't you? Sometimes you trace it back and you find somebody just thought that number up about twenty years ago, and everyone's used it and they’ve all done that since.” (Modeller)

There was variability between participants in their approaches to conceptual model development. Respondents discussed several approaches, including documenting proposed model structures, developing mock-up models in Microsoft Excel, sketching potential structures, and producing written interpretations of the evidence. The participants all agreed that having a draft or conceptual model would help to develop a common understanding of the evidence requirements among those involved in model development. It would help to ensure that health professionals understood how the model would capture the impact of the interventions under consideration on costs and health outcomes, that the proposed model was clinically relevant and met the needs of the decision maker, and that there was an explicit platform for considering and debating alternative model structures and other model development decisions before implementation.

“You’ve got to sort of get a feel of what the model is doing, not just looking at a whole lot of numbers on the spread sheet. So that's why I’m a big believer in a back-of-an-envelope version of the model, which forces you to really understand what's going on in the model.” (Modeller)
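To make the “back-of-an-envelope” idea concrete, here is a minimal sketch of what such a draft might look like, written as a two-state Markov cohort model in Python. This is not a model described by the participants; the states, transition probabilities, costs, and utilities are all hypothetical placeholders:

```python
# Minimal "back-of-envelope" cost-effectiveness sketch: a two-state
# (alive/dead) Markov cohort model with yearly cycles. All numbers
# are hypothetical placeholders.

def run_model(p_death, cost_per_cycle, utility, n_cycles=40, discount=0.035):
    """Return (discounted cost, discounted QALYs) for one strategy."""
    alive = 1.0                  # proportion of the cohort still alive
    cost = qalys = 0.0
    for t in range(n_cycles):
        d = 1.0 / (1.0 + discount) ** t   # discount factor for cycle t
        cost += alive * cost_per_cycle * d
        qalys += alive * utility * d      # one cycle = one year of life
        alive *= 1.0 - p_death            # transition into the dead state
    return cost, qalys

# Hypothetical inputs: the new intervention lowers annual mortality and
# improves quality of life, but costs more per year than the comparator.
c0, e0 = run_model(p_death=0.10, cost_per_cycle=1_000, utility=0.70)
c1, e1 = run_model(p_death=0.08, cost_per_cycle=3_000, utility=0.72)

icer = (c1 - c0) / (e1 - e0)   # incremental cost per QALY gained
print(f"extra cost {c1 - c0:,.0f}, QALYs gained {e1 - e0:.2f}, "
      f"ICER {icer:,.0f} per QALY")
```

Even a toy of this size forces the team to agree on states, cycle length, and exactly which parameters will need evidence, which is the point the quote above makes.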

Furthermore, the participants felt that, where possible, alternative model development choices should be tested to assess their impact on the model results.

“.. my view of structural uncertainty is it's where you’re equivocal about whether one set of assumptions is superior to another. . .I wouldn't expect someone to build 10 different models, but I’d expect them to consider what else they could have.” (Modeller)
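In the same spirit, alternative structural assumptions can often be compared by running variants of the same draft model rather than building ten different models. A hedged sketch that reuses run_model from the previous block; the “waning treatment effect” assumption and every number are invented for illustration:

```python
# Compare two structural assumptions about the new intervention:
# (a) its mortality benefit persists, versus (b) it wanes after ten
# cycles. Reuses run_model() from the previous sketch; all numbers
# are invented.

def run_model_waning(p_death_on, p_death_off, cost_per_cycle, utility,
                     waning_at=10, n_cycles=40, discount=0.035):
    alive, cost, qalys = 1.0, 0.0, 0.0
    for t in range(n_cycles):
        p_death = p_death_on if t < waning_at else p_death_off
        d = 1.0 / (1.0 + discount) ** t
        cost += alive * cost_per_cycle * d
        qalys += alive * utility * d
        alive *= 1.0 - p_death
    return cost, qalys

c0, e0 = run_model(p_death=0.10, cost_per_cycle=1_000, utility=0.70)
variants = {
    "benefit persists": run_model(0.08, 3_000, 0.72),
    "benefit wanes":    run_model_waning(0.08, 0.10, 3_000, 0.72),
}
for label, (c1, e1) in variants.items():
    print(f"{label}: ICER {(c1 - c0) / (e1 - e0):,.0f} per QALY")
```

If the decision flips between variants, the structural choice matters and deserves scrutiny; if not, the equivocal assumption can be reported and set aside.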

There was a broad view among the respondents that both conceptual models and existing models have a part to play in model development, and that the many decisions and judgments needed during the model development process should involve the whole team.

Theme 2: Searching Methods Appropriate to Modeling Are Required

One common concern raised was the need for appropriate searching methods to ensure the retrieval of relevant evidence for the model. Cost-effectiveness models have multiple information needs requiring different types of evidence drawn from multiple, disparate information sources. The participants considered it difficult to capture these information needs in a single search query, and exhaustive search methods such as those used in systematic reviews may not be feasible.

“. . .it's relatively easy to find cost studies, it's relatively easy to find RCTs, it's not so easy to find adverse events-so I think maybe hints about how difficult it is to search for something like this in Medline ..might be helpful.” (Information specialist)

During the focus groups, respondents discussed the difficulties in finding evidence and the different approaches used for seeking information, including formal Medline and other database searching, contacting experts in the field, searching registries and administrative or routine data sources, snowballing references, following leads, semantic technology, text databases, and focused searching. However, most of these information retrieval processes are not made explicit.

“In terms of parameters, we frequently have parameters that are only ever reported incidentally, things like unusual or irregular adverse events, ..- how do you find them? I don't know any systematic way that could do that. It's pot luck really,..” (Modeller)

“. . . I’m sure that some of the things I’ve had to resort to to obtain particular estimates much more resemble investigative journalism than having anything to do with what I’m trained to do” (Modeller)

Participants expressed a need for guidance and advice on the factors that affect how evidence is identified: for example, which search techniques might maximize the rate of return of potentially relevant evidence, and which steps can make the process systematic, reproducible, and transparent.

“.. the identification of evidence is any information seeking process, which includes searching on Medline .. phoning people up or having a group of clinicians who advise .., you draw information from all of them and you use that information as evidence to support or justify or make decisions about the model to give it credibility. And as soon as you start to say, .. we just need guidance on searching, it suddenly excludes a lot of information seeking processes that are absolutely knitted in to the model development process” (Information specialist)

These findings show that evidence for models is identified from a variety of sources and that searching methods differ from those used in systematic reviews of clinical effectiveness.

Theme 3: Methods for the Selection and Rapid Review of Evidence Are Needed

The participants identified several reviewing-related issues, including the selection and prioritization of data, methods for reviewing, the use of hierarchies of evidence, and the assessment of evidence. These are covered in considerably more detail elsewhere (12).

According to the participants, the processes for selecting and prioritizing evidence used to inform parameter estimates need consideration. They also suggested that additional attention should be given to the reporting of parameters where the optimal selection among particular evidence sources was considered equivocal. The participants considered it important to prioritize parameters and to focus reviewing resources on those elements of the model most likely to impact on the results, bearing in mind that the importance of parameters can change during the course of the modeling process.
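One common way to operationalize “focus reviewing resources on the parameters most likely to impact on model results” (not a procedure the participants describe, but a standard technique) is a tornado-style one-way sensitivity analysis: vary each input over a plausible range and rank inputs by how far the result swings. A minimal sketch with an invented net-benefit model:

```python
# Crude one-way sensitivity analysis to prioritize reviewing effort:
# vary each parameter +/-20% and rank parameters by the resulting
# swing in the model output. Model and numbers are hypothetical.

def net_benefit(params, wtp=20_000):
    """Incremental net monetary benefit of a hypothetical intervention."""
    qaly_gain = params["effect"] * params["utility"] * params["duration"]
    extra_cost = params["unit_cost"] * params["duration"]
    return wtp * qaly_gain - extra_cost

base = {"effect": 0.05, "utility": 0.7, "duration": 10, "unit_cost": 500}

swings = {}
for name in base:
    results = [net_benefit({**base, name: base[name] * f}) for f in (0.8, 1.2)]
    swings[name] = max(results) - min(results)   # width of the swing

# Parameters with the widest swings deserve the most reviewing effort;
# note the ranking can change as other inputs are refined, matching the
# participants' point that parameter importance shifts during modeling.
for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{name:10s} swing in net benefit: {swing:,.0f}")
```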

Owing to the time and resource constraints within the HTA process, all participants considered rapid review methods potentially useful for identifying and selecting evidence. As the use of rapid review methods risks missing relevant information, the participants considered it essential that methods were reported transparently, including the potential limitations of the chosen methods.

According to the participants, a wide range of types of evidence is used to populate models, and hierarchies of evidence sources, as suggested by Coyle et al. (18), may be useful as a means of judging the quality of individual parameter estimates and as an aid to the study selection process. To incorporate the quality of individual studies into the selection process, one participant suggested that the Grading of Recommendations Assessment, Development and Evaluation (GRADE) (19) system may be a potentially useful framework for rating the quality of evidence from all potential sources of data used to populate model parameters.

With regard to study selection and assessment of evidence, participants suggested that relevance or applicability could be assessed first to speed up the evidence selection process.

“.. actually if you look at applicability or relevance first, in a time constrained scenario, often that fatally rules out a huge bunch of stuff that you don't need to look at.” (Health economist)

In summary, reviewing resources should be focused on the most important model parameters. Relevance should be considered initially in evidence selection.

Theme 4: Communication among Team Members Is Essential

An important, recurrent, and cross-cutting theme, on which there was clear agreement among focus group participants, was that effective communication across the whole project team is crucial to producing a credible and robust model. The team might include modelers, clinical experts, decision makers, information specialists, systematic reviewers, and others. The group agreed that all parties involved should have a common understanding of the model development process. This could be achieved by recruiting clinical experts early in the process and involving them in developing the conceptual model; by writing protocols and highlighting evidence needs; and by holding regular meetings at which the whole team presents and discusses the model. Meetings should use nontechnical language easily understood by clinical experts and other team members. The sharing of internal draft reports was also considered important.

“. . .always having clinical experts, and not just one who is local . . ., but a panel who are available .. very early. You .. involve them in the model development process, which again will also be a fairly whole-team endeavour initially, so that the people who are doing the systematic review of effectiveness studies will be party to most of those conversations about the conceptual modelling of what the model structure might eventually look like.” (Modeller)

According to the participants, engaging with experts and other researchers can serve as a face validity check, helping to ensure that important information has not been missed, that the most appropriate parameter estimates are used, and that the opportunity for errors is reduced.

“this is another reason why it's important to have, as well as clinical experts, .. other people who are researchers or have worked in the area, so at least after you’ve made some of these choices, then you can do .. reality checks and say, .. is there anything we’ve missed, are there any other recent or grey literature studies that might provide an alternative estimate . . .?” (Statistician)

An issue raised during the focus groups was a lack of communication and understanding between modelers and information specialists, which could inadvertently lead to the development of ineffective search strategies. The participants considered it important that modelers engage with the information specialist and keep them updated on the progress of model development and its information needs, so that an effective and focused search strategy can be developed.

“the ..ability to go back to the information scientists and say this is more specifically what I’m looking for, but likewise also once the preliminary modelling results come back, .. to say we don't need to do this search anymore because the parameter that we thought it was going to inform is not going to affect the way this particular model works . . .” (Modeller)

All participants considered regular communication across the whole project team useful to ensure a common understanding and avoid mistakes.

Theme 5: Previous Modeler Experience and Clinical Input Are Both Important

Input from clinical experts was considered crucial both for developing the model structure and for forming an understanding of the decision problem.

“When we are presented with a new disease area, often the first time . . .. You are missing things all the way down the line. If you had. . .somebody else's expertise .. to draw on, you could make a much better job of it.” (Modeller)

In addition, the opinions of clinicians and of the wider group of people involved in caring for patients were considered essential, as they can help provide parameter estimates or identify alternative sources of evidence (including unpublished literature). Furthermore, clinical experts and other researchers can serve as a reality check to ensure that important information has not been missed and that the estimates used are appropriate.

“one of our . . . clinical experts. . . pointed out that viral resistance was not included in our model. And we’d reviewed every single model of prophylaxis – none of those included it, and we’d reviewed the clinical evidence base, and that wasn't there ..and it completely changed the results.” (Modeller)

The group thought that the modeler's previous knowledge and experience of the disease area were extremely important, particularly in a time- and resource-constrained scenario.

“If you’ve been working in that field, and built up a network in it, then you are a long way ahead, and you have what you need.” (Modeller)

All participants believed that using clinical experts ensures an understanding of the decision problem and serves as a reality check. The previous experience of modelers in particular disease areas was also considered useful.

Theme 6: Reporting of Methods Should be Transparent

The focus group participants considered transparency to be an important issue. A view was expressed that there was rarely sufficient time to ensure that the methods used were reported adequately.

“I think transparency is one of the big things that is missing, isn't it? In the whole of modelling, anyway, there's the transparency issue.” (Modeller)

“. . .even if you have enough time to do that as an analyst you might not have enough time to write it up in a way that would make it more transparent.”(Modeller)

The group reflected that a comprehensive account of every evidence source used in the modeling process would be very time-consuming to produce and potentially difficult to read. Brief summary tables of the main inputs and sources of information could be used, with more detailed information presented in appendices.

The participants believed that it is important that decisions and judgments are clearly documented to ensure the credibility of the model, including the following: search strategies; selection and justification of studies and model parameters; structural assumptions; decisions relating to the prioritization of key information needs; acknowledgment of limitations and potential biases; and documentation of any deviations from the study protocol.

“It's more like the thought process you went through to get there, and what did you (do and what) did you decide not to do..” (Modeller)

Several participants believed that the language used in reporting model development and results should be clearly understood by health professionals and other individuals involved in the process. The model could be represented in both diagrammatic and textual forms using nontechnical, nonmathematical language.

“Again the language to describe that sort of thing would have to be very careful . . ..it needs to be more in terms of principles and things to consider than anything too prescriptive.” (Modeller)

In terms of model credibility, it was emphasized that the modeler needs to cater to the target audience and that there is a tension between accuracy and credibility.

“..but you know that the clinical audience . . . will expect to see that factor, because the literature has been going on about it for years, So I think we are playing lots of tensions to do with meeting the right audience, trading off accuracy versus credibility.” (Modeller)

The participants agreed that decisions made during the modeling process, especially potentially controversial ones, need to be transparently reported, and that summary tables may be useful. Key findings are presented in Table 3.

Table 3. Summary of Key Findings

DISCUSSION

This qualitative research study highlights several important issues in the identification and review of evidence for model parameters, as identified by experts in the field of health technology assessment. Six key themes were identified: decisions and judgments; searching methods; selection and rapid review of evidence; team communication; modeler experience and clinical input; and reporting methods. There was considerable overlap between the themes; for example, reviewing previous models was suggested as useful for informing searching approaches.

The participants in these focus groups had several years of experience in the field of cost-effectiveness modeling of health technologies. There was a high degree of commonality in the types of tensions that participants described in developing models, and a high degree of agreement on what was considered important and how issues should be addressed. Standards for undertaking qualitative research were adhered to (20). The findings from these focus groups helped to inform the development of a NICE Decision Support Unit TSD providing guidance on the identification and review of evidence for cost-effectiveness models (13), which goes some way toward suggesting options for good practice in identifying and reviewing evidence for use in cost-effectiveness models. These issues are of interest to all those working in health technology assessment internationally, including researchers and policy makers, as how evidence is identified and reviewed for use in cost-effectiveness models can have a substantial impact on health technology assessments and subsequent healthcare policy decisions.

There were some limitations to this research. Although saturation of the data was reached, the sample size was small; a larger sample may have produced more robust findings. Only UK participants were included in the focus groups, and researchers from other countries may have different views, potentially limiting the generalizability of the findings and their transferability to other settings. Only one method (focus groups) was used to collect data; interviews, questionnaires, or other qualitative research methods may have produced different results. Finally, only academic researchers participated in the focus groups, and it would therefore be useful in subsequent research to determine the views of industry, health outcomes and research agencies, and policy-making agencies, as these may be divergent.

There are many unanswered questions with respect to how to identify, review and select evidence to inform model parameters, hence several areas warrant further research. There is a need for accepted standards for documentation of decisions and the use of sources of evidence such as expert clinical opinion. There is also a need for the development of appropriate search and rapid review methods for the identification and review of evidence used in cost-effectiveness models.

CONCLUSIONS

The findings of this research highlight some of the important issues in the use of evidence in cost-effectiveness models. Consideration of these issues helps to make the model development process more transparent and easier to understand, thus facilitating healthcare decision making and health policy development.

CONTACT INFORMATION

Dr Eva Kaltenthaler, Reader in Health Technology Assessment, BSc, MSc, PhD; Dr Paul Tappenden, Reader in Health Economic Modelling, BA, MSc, PhD; Dr Suzy Paisley, Senior Research Fellow and Head of Information Resource Group, BA, MA, PhD; Dr Munira Essat, Research Associate in Systematic Reviewing, BSc, MSc, PhD; School of Health and Related Research (ScHARR), University of Sheffield, Regent Court, 30 Regent Street, Sheffield S1 4DA, United Kingdom.

CONFLICTS OF INTEREST

The authors have no conflicts of interest to declare.

REFERENCES

1. Coyle, D, Lee, K. Evidence-based economic evaluation: How the use of different data sources can impact results. In: Donaldson C, Mugford M, Vale L, eds. Evidence-based health economics. London: BMJ Books; 2002.
2. Drummond, MF, Iglesias, CP, Cooper, NJ. Systematic reviews and economic evaluations conducted for the National Institute for Health and Clinical Excellence in the United Kingdom: A game of two halves? Int J Technol Assess Health Care. 2008;24:146–150.
3. Cooper, N, Sutton, AJ, Ades, AE, et al. Use of evidence in economic decision models: Practical issues and methodological challenges. Health Econ. 2007;16:1277–1286.
4. Cooper, N, Coyle, D, Abrams, K, Mugford, M, Sutton, A. Use of evidence in decision models: An appraisal of health technology assessments in the UK since 1997. J Health Serv Res Policy. 2005;10:245–250.
5. Philips, Z, Ginnelly, L, Schulpher, M, et al. Review of guidelines for good practice in decision-analytic modelling in health technology assessment. Health Technol Assess. 2004;8:1–158.
6. Shemilt, I, Mugford, M, Byford, S, et al. Chapter 15: Incorporating economics evidence. In: Higgins JPT, Green S, eds. Cochrane handbook for systematic reviews of interventions. Chichester, UK: John Wiley and Sons; 2008.
7. Marsh, K. Chapter 2: The role of review and synthesis methods in decision models. In: Shemilt I, Mugford M, Vale L, Marsh K, Donaldson C, eds. Evidence-based decisions and economics: Health care, social welfare, education and criminal justice. Oxford: Wiley-Blackwell; 2010.
8. Paisley, S. Classification of evidence used in decision-analytic models of cost effectiveness: A content analysis of published reports. Int J Technol Assess Health Care. 2010;26:458–462.
9. Briggs, A, Fenwick, E, Karnon, J, et al. Model parameter estimation and uncertainty analysis: A report of the ISPOR-SMDM Modeling Good Research Practices Task Force-6 (DRAFT). 2012. http://www.ispor.org/workpaper/modeling_methods/model-parameter-estimation-uncertainty.asp (accessed July 10, 2012).
10. Chilcott, JB, Tappenden, P, Rawdin, A, et al. Avoiding and identifying errors in health technology assessment models: Qualitative study and methodological review. Health Technol Assess. 2010;14:1–152.
11. Kaltenthaler, E, Tappenden, P, Paisley, S. Reviewing the evidence used in cost-effectiveness models in health technology assessment: A qualitative investigation of current concerns and future research priorities. HEDS Discussion Paper 12/01; 2012. http://www.shef.ac.uk/polopoly_fs/1.165496!/file/12.01.pdf (accessed July 10, 2012).
12. Kaltenthaler, E, Tappenden, P, Paisley, S. Reviewing the evidence to inform the population of cost-effectiveness models within health technology assessments. Value Health. 2013;16:830–836.
13. Kaltenthaler, E, Tappenden, P, Paisley, S, Squires, H. NICE DSU Technical Support Document 13: Identifying and reviewing evidence to inform the conceptualisation and population of cost-effectiveness models. Report by the NICE Decision Support Unit, May 2011. http://www.nicedsu.org.uk/TSD%2013%20model%20parameters.pdf (accessed July 10, 2012).
14. Morgan, DL, Krueger, RA. When to use focus groups and why. In: Morgan DL, ed. Successful focus groups: Advancing the state of the art. Newbury Park: Sage Publications; 1993.
15. Kitzinger, J. Focus groups. In: Pope C, Mays N, eds. Qualitative research in health care. London: BMJ Books; 2009.
16. Ritchie, J, Spencer, L, O'Connor, W. Carrying out qualitative analysis. In: Ritchie J, Lewis J, eds. Qualitative research practice: A guide for social science students and researchers. London: Sage Publications; 2003.
17. Hammersley, M. Ethnography and realism. In: Huberman AM, Miles MB, eds. The qualitative researcher's companion. California: Sage; 2002.
18. Coyle, D, Lee, K, Cooper, N. Use of evidence in decision models. In: Shemilt I, Mugford M, Vale L, Marsh K, Donaldson C, eds. Evidence-based decisions and economics: Health care, social welfare, education and criminal justice. Oxford: Wiley-Blackwell; 2010.
19. Shemilt, I, Mugford, M, Vale, L, et al. Evidence synthesis, economics and public policy. Res Synth Methods. 2010;1:126–135.
20. Mays, N, Pope, C. Rigour and qualitative research. BMJ. 1995;311:109–112.