
What adult electrocardiogram (ECG) diagnoses and/or findings do residents in emergency medicine need to know?

Published online by Cambridge University Press:  10 March 2015

Catherine Patocka*
Affiliation:
McGill Emergency Medicine Residency Program, McGill University, Montreal, QC, Canada; McGill Centre for Medical Education, McGill University, Montreal, QC, Canada
Joel Turner
Affiliation:
Emergency Medicine Department, Jewish General Hospital, Montreal, QC, Canada
Jeffrey Wiseman
Affiliation:
McGill Centre for Medical Education, McGill University, Montreal, QC, Canada; McGill Department of Internal Medicine, McGill University, Montreal, QC, Canada
* Correspondence to: Dr. Catherine Patocka, Department of Emergency Medicine, University of Calgary, Foothills Medical Center, C231-1403 29 St. NW, Calgary, AB Canada, T2N 2T9; Email: [email protected]

Abstract

Objective

There is no evidence-based description of electrocardiogram (ECG) interpretation competencies for emergency medicine (EM) trainees. The first step in defining these competencies is to develop a prioritized list of adult ECG findings relevant to EM contexts. The purpose of this study was to categorize the importance of various adult ECG diagnoses and/or findings for the EM trainee.

Methods

We developed a list of potentially important adult ECG diagnoses/findings and conducted a Delphi opinion-soliciting process. Participants used a 4-point Likert scale to rate the importance of each diagnosis for EM trainees. Consensus was defined as a minimum of 75% agreement at the second round or later. In the absence of consensus, stability was defined as a shift of 20% or less after successive rounds.

Results

A purposive sample of 22 emergency physicians participated in the Delphi process, and 16 (72%) completed the process. Of those, 15 were from 11 different EM training programs across Canada, and one was an expert in EM electrocardiography. Overall, 78 diagnoses reached consensus, 42 achieved stability, and one diagnosis achieved neither consensus nor stability. Of 121 potentially important adult ECG diagnoses, 53 (44%) were considered “must know” diagnoses, 61 (50%) “should know” diagnoses, and 7 (6%) “nice to know” diagnoses.

Conclusion

We have categorized adult ECG diagnoses within an EM training context, knowledge of which may allow clinical EM teachers to establish educational priorities. This categorization will also facilitate the development of an educational framework to establish EM trainee competency in ECG interpretation.


Type
Original Research
Copyright
Copyright © Canadian Association of Emergency Physicians 2015 

Introduction

The electrocardiogram (ECG) is a non-invasive diagnostic tool that is regularly performed at the bedside in the emergency department (ED).[1] Electrocardiographic abnormalities may be the first indication of ischemia, metabolic disturbance, or life-threatening arrhythmias.[1] To ensure timely diagnosis and appropriate management of patients, emergency medicine (EM) physicians must provide rapid and reliable interpretation of ECGs, and EM programs must ensure their trainees’ competence in this skill.

There is a lack of evidence-based literature on the optimal techniques for learning, maintaining, and assessing competency in ECG interpretation.[2] In 2001, the American College of Cardiology and the American Heart Association (ACC/AHA) produced a Clinical Competence Statement on Electrocardiography that defined the minimum education, training, experience, and cognitive and technical skills necessary for the competent reading and interpretation of adult ECGs.[3] Unfortunately, this document was not evidence-based, did not address ECG interpretation in the EM context, and was not endorsed by the Society for Academic Emergency Medicine.[4]

Research in cognitive psychology and medical education suggests that ECG diagnosis requires two distinct reasoning systems: analytic and non-analytic.[5,6] The analytic reasoning system involves a controlled, systematic consideration of features and their relation to potential diagnoses. The non-analytic approach involves rapid processes such as pattern recognition, in which the correct diagnosis is recognized through its similarity to illness scripts: mental representations of clinical knowledge built from cases seen in the past.[7,8] Recent studies support an additive model of diagnostic reasoning that highlights the importance of both feature-oriented and similarity-based reasoning strategies, suggesting that trainees base their interpretation on what they have previously seen.[9]

Competence has been defined as the relationship between a person’s abilities and the tasks he or she is required to perform in a particular situation in the real world.[10] Sherbino et al. propose that emergency physicians’ competence should be: 1) based on abilities, 2) derived from a set of domains that define the field of EM, 3) measurable, and 4) specific to the EM context.[10,11]

Evidence suggests that EM residents possess the ability to interpret ECGs. A 2004 survey showed that the majority of residency program directors in the United States were “comfortable” or “very comfortable” with their senior residents’ ability to interpret ECGs.[12] However, these abilities have not yet been measured or specifically defined in the EM context.

To address the competence of EM residents in adult ECG interpretation, we must first identify the EM-specific adult ECG knowledge required by EM trainees. This study sought to define a list of adult ECG diagnoses and/or findings relevant to the EM training context.

Methods

After obtaining approval from the Institutional Review Board (IRB), the Delphi technique was used to develop consensus amongst a panel of EM residency program directors. The Delphi technique uses a series of questionnaires to aggregate opinions anonymously over a series of “rounds” conducted through electronic exchange. It is an iterative process that involves the systematic collection of judgments on a particular topic using sequential questionnaires interspersed with summarized information. In our study, a panel of participants was given a questionnaire (round 1) and asked to provide answers. For round 2, panelists were sent the results of round 1, which included the average rating of each item, panelist comments, and new items suggested by panel members. The panelists were asked to reconsider and potentially change their ratings, taking into account the ratings and comments of the other panel members, and to rate the new items. This process was repeated until consensus was reached.

Consensus was defined as a minimum of 75% agreement on the rating of any one item at round 2 or later. In the absence of consensus, stability of opinion was determined. Stability was measured as the consistency of answers between successive rounds of the study, and was defined as a shift of 20% or less between successive rounds. Because there is significant disagreement in the literature regarding measures of consensus and stability, these values were determined a priori and based on values used in similar studies.[13-16] Once an item achieved consensus or stability, it was removed from further rounds of the Delphi process.

The process was conducted in a “quasi-anonymous” manner: respondents’ identities were known to the primary investigator (PI) only, to allow for reminders and the provision of feedback in subsequent rounds. The participants’ ratings and opinions remained anonymous to the other members of the panel.
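
To make the study’s two stopping rules concrete, here is a minimal sketch (hypothetical code, not the authors’ actual analysis) of how an item could be checked against the pre-defined criteria: consensus as at least 75% of panelists agreeing on a single rating, and stability as a shift of 20% or less in the answer distribution between successive rounds.

```python
from collections import Counter

def has_consensus(ratings, threshold=0.75):
    """Consensus: at least `threshold` of panelists chose the same rating."""
    if not ratings:
        return False
    top_count = Counter(ratings).most_common(1)[0][1]
    return top_count / len(ratings) >= threshold

def is_stable(previous, current, max_shift=0.20):
    """Stability: no rating category's share of responses shifts by more
    than `max_shift` between two successive rounds."""
    prev_freq, curr_freq = Counter(previous), Counter(current)
    return all(
        abs(curr_freq[c] / len(current) - prev_freq[c] / len(previous)) <= max_shift
        for c in set(prev_freq) | set(curr_freq)
    )

# Hypothetical ratings for one item from 16 panelists (1 = not important ... 4 = must know)
round2 = [4, 4, 4, 3, 4, 4, 4, 4, 4, 4, 3, 4, 4, 4, 4, 4]
round3 = [4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 3, 4, 4, 4, 4, 4]
print(has_consensus(round2))      # True: 14/16 = 87.5% agree on "4"
print(is_stable(round2, round3))  # True: no category shifted by more than 20%
```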

Study design, setting and population

One goal of the study was to identify a panel of participants who would have knowledge of the topic being investigated, be reasonably impartial, and have an interest in the research topic. We identified a purposive sample that included all EM training program directors in Canada (from both Royal College of Physicians and Surgeons of Canada training programs and College of Family Physicians fellowship programs), as well as an EM physician with expertise in emergency electrocardiography. These potential participants were invited to serve as panelists in the online consensus-building process.

Sample size

A sample size of 31 participants was expected to provide representative information while maximizing the chances of an adequate response rate. The literature suggests that there is little benefit to panels that exceed 30 participants.[17] The study’s a priori minimum acceptable response rate was 10 participants.[18]

Questionnaire development

To develop an extensive list of possible ECG diagnoses and/or findings, one of the investigators (CP) conducted a detailed review of the published literature in cardiology and EM. The starting point for this search was the American Board of Internal Medicine 94-question/answer sheet for the ECG portion of the cardiovascular disease board examination.[19] This list was initially compared with several EM and cardiology textbooks, and missing diagnoses and/or findings were added.[20-23] The resulting list underwent further review and modification by two EM physicians with ≥5 years of clinical practice. Finally, in the first two rounds of the Delphi process, participants were asked whether they would add any other ECG diagnoses and/or findings to the list; those identified by participants were added to subsequent rounds. Each individual diagnosis and/or finding (including a brief description where necessary) was treated as a separate item on the Delphi questionnaire.

The Delphi questionnaire was developed using a web-based survey tool (FluidSurveys®, available at www.fluidsurveys.com). Panelists were asked to rate each item’s relevance to EM trainees on a 4-point Likert scale anchored by the following descriptors and assigned ratings:

1) It is not important for EM trainees to be able to identify this diagnosis (1 point)

2) It would be nice for EM trainees to be able to identify this diagnosis/finding (2 points)

3) EM trainees should be able to identify this diagnosis/finding (3 points)

4) EM trainees must be able to identify this diagnosis/finding (4 points)

Panelists had the option of choosing “I am unfamiliar with this diagnosis,” which was assigned a rating of zero points. Panel members were given the opportunity to comment on items and to add additional diagnoses at their discretion. The questionnaire was pilot tested with two EM physicians, and final edits were made based on their feedback.

Data collection

The online survey instrument was used to create, distribute, collect, and analyze responses. An introductory email and consent form were sent to all potential participants, and upon their agreement to participate, they were directed to round 1 of the Delphi questionnaire. Reminder invitations were sent at weekly intervals (three reminders per round). Participation in the panel was voluntary.

Data analysis

Statistical analysis of item ratings during each of the three rounds involved calculating central tendencies (mean, median, and mode) and levels of dispersion (standard deviation and inter-quartile range) for all items. Given that the results were ordinal data, medians and inter-quartile ranges were reported to panelists in rounds 2 and 3. For the purposes of the analysis, mean rating cutoffs for the four categories were arbitrarily defined: “EM trainees MUST be able to identify” (mean Likert rating >3.8), “EM trainees SHOULD be able to identify” (mean Likert rating 2.8-3.8), “it would be NICE for EM trainees to identify” (mean Likert rating 1.8 to <2.8), and “it is NOT IMPORTANT for EM trainees to identify” (mean Likert rating <1.8). Table 1 presents the number of diagnoses in each category. Repeated-measures analysis of variance was used to assess differences between the standard deviations of each round. Participant comments were not systematically analyzed.
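
As an illustration of these cutoffs, the following sketch (a hypothetical helper, not taken from the study’s analysis) maps an item’s mean Likert rating to its knowledge category:

```python
def categorize(mean_rating: float) -> str:
    """Map an item's mean Likert rating (0-4) to the study's a priori categories."""
    if mean_rating > 3.8:
        return "MUST be able to identify"
    if mean_rating >= 2.8:
        return "SHOULD be able to identify"    # 2.8 to 3.8 inclusive
    if mean_rating >= 1.8:
        return "NICE to be able to identify"   # 1.8 to <2.8
    return "NOT IMPORTANT to identify"         # <1.8

# Hypothetical mean ratings for four items:
for mean in (3.9, 3.1, 2.0, 1.2):
    print(f"{mean:.1f} -> {categorize(mean)}")
```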

Table 1 Diagnoses and/or findings in each rating category

Abbreviations: EM = emergency medicine; ECG = electrocardiogram.

Results

Of the 31 faculty members invited to participate in the Delphi process, 22 agreed, and 16 of the 22 (72%) panelists completed all three rounds of the consensus-gathering exercise. Except for one expert in EM electrocardiography, all panel members were program directors of accredited EM training programs in Canada and came from 11 different training programs. In round 1, a total of 118 adult ECG diagnoses and/or findings were rated. Panel members suggested the addition of three diagnoses, bringing the total number of items rated to 121 (Appendix A). The mean standard deviation across all 121 items decreased throughout the process: round 1=0.62, round 2=0.45, round 3=0.32. Analysis of variance revealed that the standard deviations of successive rounds were significantly different from each other. For 112 of the 121 items, the standard deviations decreased progressively over the three rounds; for the remaining nine items, they remained virtually unchanged. Overall, 78 diagnoses reached consensus (Table 2), 42 achieved stability, and one diagnosis achieved neither consensus nor stability (Appendix A).
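
The convergence reported here (a shrinking mean standard deviation across rounds) is straightforward to compute; the sketch below uses entirely hypothetical ratings to show the calculation of per-item standard deviations in each round and the mean SD across items.

```python
import statistics

# ratings[item] -> one list of panelist ratings (1-4) per round (hypothetical data)
ratings = {
    "Item A": [[4, 4, 3, 4, 3], [4, 4, 4, 4, 3], [4, 4, 4, 4, 4]],
    "Item B": [[2, 3, 1, 2, 3], [2, 2, 3, 2, 3], [2, 2, 2, 2, 3]],
}

for rnd in range(3):
    # standard deviation of each item's ratings in this round
    sds = [statistics.stdev(item_rounds[rnd]) for item_rounds in ratings.values()]
    print(f"Round {rnd + 1}: mean SD = {statistics.mean(sds):.2f}")
```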

Table 2 ECG diagnoses and/or findings achieving consensus

* Achieved neither consensus nor stability

Discussion

This study described the consensus opinion of Canadian academic EM residency program directors about which adult ECG diagnoses and/or findings EM physicians must know. The Delphi process allowed for repeated iterations of item rating and led to a reasonable consensus.[24] Experts do not yet agree on a single valid measure of consensus in Delphi studies. This study used a hierarchical process described by Dajani et al.,[25] which considered participant consensus first (the percentage of participants agreeing on a particular response), followed by stability of opinion in the absence of consensus (consistency of answers between successive rounds of the questionnaire without achieving the pre-defined criteria for consensus); this approach is consistent with other Delphi studies.[13-15,25,26] The demonstration of convergence (a progressive decrease in the range and standard deviation of responses as rounds progress[27]) suggested increasing panelist agreement and further supported the study conclusions. The anonymous nature of the process avoided the influence of strong personalities and other group dynamics.[26]

Overall, participants agreed that EM trainees must or should be able to identify the majority of the adult ECG findings and/or diagnoses. This finding was interesting for several reasons. It indicated that there was no EM-specific list of diagnoses and/or findings to use as the basis for a curriculum or assessment strategy, as had previously been suggested in the literature.[4] In fact, this study found that the knowledge expectations for EM trainees were similar to those of other practitioners (cardiologists or internal medicine specialists) who routinely interpret ECGs.[4] However, the study findings do not imply that EM ECG competency should be distilled to a checklist to be mastered, an arbitrary number of ECGs to be interpreted over the course of training, or the successful completion of a board examination.[3,4,28]

Defining EM competency in ECG interpretation requires the integration of ECG knowledge with the skills, judgment, and attitudes that are linked to the professional practice of EM.[29] The competency we should aim for is the ability of EM trainees to correctly interpret ECGs (regardless of the specific diagnosis and/or findings) and to facilitate care in the context of a patient’s clinical presentation, including the provision of time-sensitive interventions, resuscitation, or safe discharge home. The majority of EM education does (and should) occur in the context of professional practice, as competence is highly situational and cannot be separated from practice.[30] Over the course of their residency, EM trainees routinely interpret ECGs and make management decisions based on their interpretation under the supervision of qualified EM faculty members. Studies suggest that they do this well.[12,31,32]

The value of identifying a long, comprehensive list of ECG diagnoses and/or findings is that it suggests practice-based ED ECG instruction alone may be inadequate. Although trainees may encounter a significant number of the important diagnoses during their clinical rotations in the ED, it is unlikely that each trainee would encounter all of the diagnoses identified in this study. Armed with this knowledge, and with the understanding that a significant component of diagnostic reasoning is based on what a trainee has previously seen, it becomes clear that supplemental education may be required to develop the illness scripts needed for the practice of EM.[9] This supplemental education may be learner-driven, where an individual trainee uses the list of diagnoses and/or findings to gauge their learning and undertakes self-study to improve their knowledge in areas where they perceive deficiencies. Alternatively, educators can use the list to identify components that are routinely not well addressed by practice-based ECG education, knowledge of which can guide the development of educational strategies and/or assessments.

Limitations

There were several limitations to this study. Selection of the items for inclusion in the ratings list was carried out by a single individual; however, the likelihood that important references were missed was minimized by having other physicians and another investigator review the list, and by allowing panel participants to add any diagnosis or finding they felt was missing. A second limitation was the panel size of 22 participants, 16 of whom completed all three rounds of the Delphi process. Although a larger panel would have generated more data, it would also have increased the risk of losing participants to rater fatigue, and we believe that good consensus was achieved despite the smaller panel size. A final limitation concerns the selection of panel participants. EM program directors were chosen because of their clinical experience and their likelihood of completing the questionnaire, given their involvement in EM education. Although the group was heterogeneous in terms of geography and training program, as program directors they were all at risk of being directly affected by the results of this study, which may have biased their willingness to participate or their individual responses. Furthermore, their concentration in academic settings may have made them more likely to rate all diagnoses as potentially important rather than focusing on what is most clinically relevant to a practicing community EM physician. The authors intend to repeat a similar study in different populations to examine this possibility further.

Conclusion

We have categorized adult ECG findings and diagnoses within an EM training context. These findings have potentially important applications to EM trainee education. For example, this study can serve as a needs assessment to inform the design and development of a curriculum for adult ECG interpretation that is more attuned to the realities of Canadian EM practice, and to inform the way in which these competencies are assessed.

Acknowledgements

We are grateful to the educators who voluntarily served on the panel of raters and agreed to be part of the TRACE (TRAinee Competence in ECG interpretation) Consortium: Kirk Magee, Dalhousie University; Daniel Brouillard, Université Laval; Pierre Desaulniers, Université de Montréal; Ian Preyra, McMaster University; Joel Turner, McGill University; Wes Palatnick, University of Manitoba; Rob Woods, University of Saskatchewan; Sandy Dong, University of Alberta; Ian Rigby, University of Calgary; Brian Chung, University of British Columbia; Richard Kohn, McGill University; Christine Richardson, University of Western Ontario; M Greidanus, University of Calgary; Jennifer Puddy, University of Calgary; Patrick Ling, University of Saskatchewan; and Amal Mattu, University of Maryland. We would also like to acknowledge the following people for their help with this project: Dr. Ken Doyle (survey review) and Xiaoqing Xue (ANOVA).

Competing interests: None declared.

Supplementary material

To view supplementary material for this article, please visit http://dx.doi.org/10.1017/cem.2014.58

References

1. Fisch C. Evolution of the clinical electrocardiogram. J Am Coll Cardiol 1989;14(5):1127-1138.
2. Salerno SM, Alguire PC, Waxman HS. Competency in interpretation of 12-lead electrocardiograms: a summary and appraisal of published evidence. Ann Intern Med 2003;138(9):747-750.
3. Kadish AH, Buxton AE, Kennedy HL, et al. ACC/AHA clinical competence statement on electrocardiography and ambulatory electrocardiography. A report of the ACC/AHA/ACP-ASIM task force on clinical competence (ACC/AHA committee to develop a clinical competence statement on electrocardiography and ambulatory electrocardiography). J Am Coll Cardiol 2001;38(7):2091-2100.
4. Michelson EA, Brady WJ. Emergency physician interpretation of the electrocardiogram. Acad Emerg Med 2002;9(4):317.
5. Evans JS. Dual-processing accounts of reasoning, judgment, and social cognition. Annu Rev Psychol 2008;59:255-278.
6. Norman GR, Eva KW. Diagnostic error and clinical reasoning. Med Educ 2010;44(1):94-100.
7. Eva KW. What every teacher needs to know about clinical reasoning. Med Educ 2005;39(1):98-106.
8. Charlin B, Boshuizen HP, Custers EJ, et al. Scripts and clinical reasoning. Med Educ 2007;41(12):1178-1184.
9. Ark TK, Brooks LR, Eva KW. Giving learners the best of both worlds: do clinical teachers need to guard against teaching pattern recognition to novices? Acad Med 2006;81(4):405-409.
10. Epstein RM. Assessment in medical education. NEJM 2007;356(4):387.
11. Sherbino J, Bandiera G, Frank JR. Assessing competence in emergency medicine trainees: an overview of effective methodologies. CJEM 2008;10(4):365-371.
12. Pines JM, Perina DG, Brady WJ. Electrocardiogram interpretation training and competency assessment in emergency medicine residency programs. Acad Emerg Med 2004;11(9):982-984.
13. Holey EA, Feeley JL, Dixon J, et al. An exploration of the use of simple statistics to measure consensus and stability in Delphi studies. BMC Med Res Methodol 2007;7(1):52.
14. Penciner R, Langhan T, Lee R, et al. Using a Delphi process to establish consensus on emergency medicine clerkship competencies. Med Teach 2011;33(6):e333-e339.
15. Murry JW, Hammons JO. Delphi: a versatile methodology for conducting qualitative research. The Review of Higher Education 1995;18(4):423-436.
16. Scheibe M, Skutsch M, Schofer J. Experiments in Delphi methodology. In: Linstone HA, Turoff M, editors. The Delphi method: techniques and applications. Boston, MA: Addison-Wesley; 1975. p. 262-287.
17. Delbecq AL, Van de Ven AH, et al. Group techniques for program planning: a guide to nominal group and Delphi processes. Glenview, IL: Scott-Foresman; 1975.
18. Parenté FJ, Anderson-Parenté JK. Delphi inquiry systems. In: Wright G, Ayton P, editors. Judgmental forecasting. Chichester: Wiley; 1987. p. 129-156.
19. Auseon AJ, Schaal SF, Kolibash AJ, et al. Methods of teaching and evaluating electrocardiogram interpretation skills among cardiology fellowship programs in the United States. J Electrocardiol 2009;42(4):339-344.
20. Chan TC, Brady WJ, Harrigan RA, et al., editors. ECG in emergency medicine and acute care. Philadelphia: Mosby; 2005.
21. Marx JA, Hockberger RS, Walls RM, et al., editors. Rosen's emergency medicine: concepts and clinical practice. 7th ed. Philadelphia: Mosby; 2009.
22. Mattu A, Tabas JA, Barish RA, editors. Electrocardiography in emergency medicine. Dallas: American College of Emergency Physicians; 2007.
23. Surawicz B, Knilans T, editors. Chou's electrocardiography in clinical practice: adult and pediatric. 6th ed. Philadelphia: Saunders Elsevier; 2008.
24. Smith KS, Simpson RD. Validating teaching competencies for faculty members in higher education: a national study using the Delphi method. Innov High Educ 1995;19(3):223-234.
25. Dajani JS, Sincoff MZ, Talley WK. Stability and agreement criteria for the termination of Delphi studies. Technological Forecasting and Social Change 1979;13(1):83-90.
26. Williams PL, Webb C. The Delphi technique: a methodological discussion. J Adv Nurs 1994;19(1):180-186.
27. Greatorex J, Dexter T. An accessible analytical approach for investigating what happens between the rounds of a Delphi study. J Adv Nurs 2000;32(4):1016-1024.
28. Govaerts MJB. Educational competencies or education for professional competence? Med Educ 2008;42(3):234-236.
29. Frank JR, Snell LS, Cate OT, et al. Competency-based medical education: theory to practice. Med Teach 2010;32(8):638-645.
30. Dall'Alba G, Sandberg J. Educating for competence in professional practice. Instructional Science 1996;24(6):411-437.
31. Zappa MJ, Smith M, Li S. How well do emergency physicians interpret ECGs? Ann Emerg Med 1991;20(4):463.
32. Westdorp EJ, Gratton MC, Watson WA. Emergency department interpretation of electrocardiograms. Ann Emerg Med 1992;21(5):541-544.
