Although enormous strides have been made in congenital cardiology care, increasing attention to how we train fellows, and indeed their trainers, is a welcome development in the wider congenital cardiology community. Reference McMahon, Tretter and Redington1–Reference McMahon, Heying and Budts2 Despite several clear guidelines on training in North America, published under the umbrella of the Accreditation Council for Graduate Medical Education, 3–Reference Stout, Valente and Bartz8 and publications for general and specialist training from the Association for European Paediatric and Congenital Cardiology working groups, Reference Heying, Albert and Voges9–Reference Reinhardt, Hansmann and O’Sullivan15 there are limited data on the current status of teaching and assessment in different European centres. Assessment for learning is a recently developed concept that highlights the critical role of assessment in driving learning for trainees (Fig 1). Trainees often experience stress with the volume of new knowledge they encounter in paediatric cardiology fellowship training. Reference Brown, Binney, Gauthier and Blume16 This has prompted some innovative and effective approaches to assisting learning, such as echocardiography bootcamps. Reference Ceresnak, Axelrod and Sacks17 Guidelines have recently tried to take this into account and, whilst ensuring some structure, have tried not to be too restrictive in how they define training. Reference Abdulla18 Similarly, provision of feedback to trainees is fundamental to their setting goals and reaching the competences and capabilities required to advance on the entrustment scale. Reference McMahon, Tretter and Redington1
We hypothesised that there is marked variation in training and assessment techniques between different countries. The research questions in this study included 1) what instructional techniques are employed in the teaching of paediatric cardiology trainees across different European centres and countries, 2) what types of assessment of paediatric cardiology trainees are undertaken, and 3) how is feedback provided to trainees in such centres and countries?
Definitions
Workplace-based assessments assess a trainee’s professional skills and attitude and provide evidence of appropriate everyday clinical competences. They have high content validity through assessing actual performance in the workplace. Workplace-based assessments are promoted as an integral part of curriculum design and educational planning, in which teaching, learning, assessment, and feedback are closely integrated. Workplace-based assessments include the following:
Case-based discussion is a method for trainees to present and discuss their cases with trainers and obtain systematic and structured feedback. It is designed to assess decision-making and the application or use of medical knowledge in relation to patient care.
Directly observed procedures is a trainee-led method designed specifically to assess trainees’ competence in the day-to-day practical procedures that they undertake as part of their training, for example, echocardiography and right heart catheterisation.
Mini-clinical evaluation exercise (Mini-CEX) is a trainee-led snapshot of a trainee–patient interaction. It is designed for the trainer to provide feedback on skills essential to the provision of good clinical care by observing an actual clinical encounter. The setting is usually a clinic or ward, and the assessment is usually concerned with only one aspect of the clinical encounter, such as taking a history or one part of the clinical examination. The assessment is recorded on a standard proforma, and strengths, areas for development, and action points are identified.
Multisource feedback is a method of obtaining feedback in a structured form from staff associated with the trainee who have the opportunity to observe their practice.
Summative feedback is provided at the end of the learning process and serves to provide trainees with an overall assessment of their learning often with an associated grade.
Formative feedback is typically ungraded or low-stakes opportunities to measure trainee knowledge and skills. Formative feedback is ongoing and helps trainers to focus on trainee learning and trainees to better understand the limits of their own knowledge and how to improve.
Reliability refers to whether an assessment instrument gives the same results each time it is used in the same setting with the same type of subjects. Reliability essentially means consistent or dependable results. Reliability is a part of the assessment of validity.
Validity refers to how accurately a method measures what it is intended to measure.
CORE training refers to basic general paediatric cardiology training, while subspecialist training refers to training focussed on subspecialist areas such as interventional cardiac catheterisation, cardiac MRI, fetal imaging, or electrophysiology.
Entrustable professional activities are key tasks of a specialty or subspecialty that a trainee can be trusted to perform once sufficient competence has been demonstrated.
Methods and materials
In December 2020, a structured questionnaire was designed to ascertain the training and assessment of trainees in European training centres. After several iterations, the questionnaire was approved by two independent paediatric cardiologists, reviewed by the Association for European Paediatric and Congenital Cardiology council, and finalised. Association for European Paediatric and Congenital Cardiology training centres are usually registered on the website and are defined as centres capable of providing the core training that enables fellows to reach the competency/capability to work as an independent paediatric cardiologist. Most training centres are surgical centres, but some are medical centres with links to other surgical centres. Reference McMahon, Heying and Budts2
The questionnaire was circulated to all recorded training centres registered with the Association for European Paediatric and Congenital Cardiology (https://www.surveymonkey.com/r/DB7VSZB). We requested that the survey be completed, when possible, by either a training director or one of the cardiologists actively involved in the training and/or assessment of cardiology trainees/fellows. The questionnaire covered the number of training programmes, number of general congenital cardiology fellows (or trainees), teaching design, breakdown of training, assessment techniques, reviews, and feedback. Open-ended questions explored the strengths and weaknesses of each programme. Consent was obtained from Children’s Health Ireland, Crumlin, to conduct the survey.
Results
Of 95 centres invited to participate in the study, 46 (49%) responded (Fig 2). A complete dataset was available for 41 (43%) of these centres, from 19 countries. The breakdown of centres and countries is provided in Figure 2. There was a fellowship director in 36/41 (87%) centres and a structured training programme in 34/41 (83%) centres (Fig 3). The majority of respondents were either the training director or a trainer (a training cardiologist or an educationalist within the cardiology department). There were 26 male and 15 female respondents. Each centre is responsible for providing trainees with broad exposure to all the core areas of paediatric cardiology and for ensuring that they reach competency in delivering care for each of these areas, using each of the competences identified through different frameworks. Different national structures provide governance to training, often with an overall national lead for training (e.g., the United Kingdom Shape of Training programme). Reference McMahon, Heying and Budts2 However, within individual countries, there may be several different centres with very variable communication or interaction between them.
Training programmes in Europe
There was wide variation in the structure and duration of training programmes between countries. The median duration of training was 3 years (range 2–6 years). The median duration of core training was 3 years (range 1–5 years), with a median of 1 year (range 0–5 years) of advanced training. Although all programmes offered general cardiology training, advanced subspecialist training in imaging, electrophysiology, catheterisation, heart failure/transplant, and pulmonary hypertension was typically limited to larger centres.
Structure of teaching
The breakdown of instructional techniques is provided in Figure 4 and Supplemental Table S1. These include bedside teaching (41/41, 100%), didactic teaching (38/41, 93%), problem-based learning (28/41, 68%), journal club (31/41, 76%), fellows presenting in the multidisciplinary meeting (41/41, 100%), fellows reporting on echocardiograms (34/41, 83%), clinical simulation (17/41, 41%), echocardiography simulation (10/41, 24%), and catheterisation simulation (3/41, 7%).
Breakdown of training
The median duration of fellowship training was 3 years (range 2–6 years). The median duration of training in the outpatient department was 8 months (range 2–30 months), on the inpatient ward 6 months (range 3–40 months), in the echocardiography department 6 months (range 1–24 months), in catheterisation 3 months (range 0–12 months), in intensive care 4 months (range 0–18 months), in heart failure/transplant 1 month (range 0–6 months), in advanced imaging (MRI/CT) 1 month (range 0–6 months), in electrophysiology 1 month (range 0–6 months), and in adult CHD 2 months (range 0–12 months). Two programmes listed the duration of training as 36 months for each of echocardiography, inpatient, and outpatient care because fellows covered all three areas simultaneously. Three other programmes provided training in echocardiography throughout the full 36 months of training.
Numbers of procedures
The numbers of echocardiograms (transthoracic, transoesophageal, and foetal echocardiograms), cardiac catheterisation procedures, electrophysiology studies, balloon atrial septostomies, placement of temporary pacing wires, and pericardiocentesis performed during training are presented in Table 1. National guidelines are not always available for each of these procedures.
* United States training previously required a minimum of 300 transthoracic echocardiograms and 100 catheterisations (combined cardiac catheterisation and electrophysiology procedures). More recently, this has been replaced by a requirement to reach certain milestones during training.
Assessment
Fellow assessments comprised bedside assessment/case-based discussions (n = 27), mini-clinical evaluation exercise (mini-CEX) (n = 12), directly observed procedures (n = 12), oral examination (n = 16), long cases (n = 11), written essay questions (n = 6), multiple choice questions (n = 5), and objective structured clinical examination (n = 2). Entrustable professional activities were utilised in 10 (24%) centres. Data are shown in Figure 5 and Supplemental Table S2.
Feedback
There was significant variation in how feedback was delivered to trainees. Feedback was described as summative only in 17/41 (41%) centres, formative only in 12/41 (29%) centres, and a combination of formative and summative in 10/41 (24%) centres. Written feedback was provided in 10/41 (24%) centres. Some form of verbal feedback was most common, provided to trainees in 37/41 (90%) centres (Fig 6 and Supplemental Table S3).
Feedback was provided by the trainer in 34/41 (83%) centres, by any consultant cardiologist in 30/41 (73%) centres, by peers in 8/41 (20%) centres, by parents of patients in 7/41 (17%) centres, and by other allied health care professionals in 6/41 (15%) centres.
Discussion
Training to become a paediatric cardiologist in Europe varies markedly from one country to another, and although there is excellent training in many countries, the findings of this study clearly show that there is potential for improving the consistency of assessment and feedback to trainees. The findings support our hypothesis that assessment varies widely across different European centres. Feedback also takes many different forms. Even though 87% of centres have a training director and most provide structured supervision, instructional methods and assessment vary widely, and standardised techniques are used in only a minority of centres. When comparing centres within a single country, the data do not reveal a consistent pattern in training or assessment either. This leads us to conclude that the structure and assessment of training in Europe are mainly centre-dependent and probably influenced by the personal engagement of the training director.
Recently, it has become clear, using grounded theory, that the relationship between assessment and learning is complex. Reference Cilliers, Schuwirth and Herman19 The impact may be adverse if “passing the assessment is the only goal, resulting in poor learning styles, a grade culture of grade hunting and competitiveness and grade inflation.” Reference Cilliers, Schuwirth and Van der Vleuten20–Reference van Ryn, Hardeman and Phelan21 This can also result in “reductionism if there is poor feedback, misalignment with learning goals, non-meaningful aggregation of assessment data, inadequate longitudinal elements and if the assessment is treated like a tick-box exercise” (work-based assessments or objective structured clinical examinations). Reference Cilliers, Schuwirth and Van der Vleuten20 Learners build knowledge from an inner scaffolding of their individual and social experiences, emotions, will, aptitudes, beliefs, values, self-awareness, and purpose. Reference van Ryn, Hardeman and Phelan21 What learners understand from what they have learned is determined by how they understand things, who they are, and what they already know. Reference van Ryn, Hardeman and Phelan21
Although there is wide variation in the duration of trainees’ exposure to each of the areas of training, the majority of programmes offer broad exposure to the basics of paediatric cardiology, with several also offering subspecialist exposure. There is also wide variation between centres in both the assessment tools employed (workplace-based assessments, objective structured clinical examinations, etc.) and the number of procedures that trainees are expected to complete during their training. One wonders whether a greater focus on the quality of procedures, rather than their number, would be a more useful form of assessment. In the UK, the “Shape of Training” model has embraced a move towards a more competency/capability-based approach, with an annual review of competency progression in addition to multisource feedback on performance during different rotations.
Entrustable professional activities
A minority of centres assess trainees using entrustable professional activities. These are standalone tasks that can be “entrusted” to a learner under the supervision of a trained professional and were introduced by ten Cate. Reference ten Cate, Chen, Hoff, Peters, Bok and van der Schaaf22–Reference ten Cate, Balmer, Caretta-Weyer, Hatala, Hennus and West23 A recent review of entrustable professional activities in paediatric cardiology reported marked variation in how they are employed in the assessment of trainee entrustment level, as well as uncertainty over whether such instruments will bridge the gap between competency and clinical practice. Reference ten Cate and Schumacher24 Conflation of different competences can prove problematic with entrustable professional activities, and caution should be exercised before a widespread rollout to every paediatric cardiology training programme. Reference Kim, Tretter, Wilmot, Hahn, Redington and McMahon25 Increasingly, North American programmes are implementing entrustable professional activities as competency and milestone assessment tools.
Feedback
Lessons derived from medical education include that there can be no assessment without meaningful feedback. Reference Archer26–Reference Watling27 Feedback also takes many different forms between centres. Formative feedback has a far greater impact on complex skills than summative feedback and grades. Feedback is a dialogue, a continuous to-and-fro process. Reference McMahon, Gallagher, James, Deery, Rhodes and van Merriënboer28,Reference Brinkman, Geraghty and Lanphear29 Understanding feedback requires an integrated approach incorporating both the trainee and the training culture. The training culture fosters an environment which allows effective feedback to occur and the trainee to respond to it. Reference Watling, Driessen, van der Vleuten and Lingard30 The training culture should aspire to normalise feedback, promote a trusting trainer–trainee relationship, define clear performance goals, and ensure goal alignment for both trainee and trainer. Reference Watling, Driessen, van der Vleuten and Lingard30 Recent studies have reported that, if well implemented, feedback from workplace-based assessments, particularly multisource feedback, leads to a perceived positive effect on practice. Reference Saedon, Salleh, Balakrishnan, Imray and Saedon31 This should be part of a longitudinal assessment process. The majority of programmes in this study provided some form of feedback, primarily verbal.
Coaching
Coaching, a process of guiding the trainee towards improvement, is a particularly effective adjunct to providing feedback. Reference Landreville, Cheung, Frank and Richardson32 More recently, especially in Canadian centres, coaching has been incorporated into the training model for trainees. Coaching in the medical training environment has been conceptualised into two types. “Coaching in the Moment” relates to coaching between the trainer and trainee within the clinical practice environment and encompasses observation, feedback, and actionable suggestions for the improvement of performance. 33 The second type, “Coaching over Time,” occurs between the trainer and trainee outside of the clinical environment. Here, observation is primarily related to the trainee’s performance data collated over time. Feedback and suggestions for improving performance are critical components of each type of coaching. “Coaching over Time” is essential to guiding trainees in their development as competent cardiologists. 33
Programmatic assessment
The Association for European Paediatric and Congenital Cardiology has recently developed a certification examination in addition to a logbook of skills acquired. Although these are very welcome developments to ensure an equally high standard of education throughout Europe, the additional benefit of multiple forms of assessment combined with formative feedback cannot be overemphasised. Schuwirth and van der Vleuten first espoused the need for programmatic assessment in 2011. Reference Schuwirth and Van der Vleuten34 This has led to a broadened perspective on the “types of construct assessment tries to capture, the way information from various sources is collected and collated, the role of human judgement and the variety of psychometric methods to determine the quality of the assessment.” Reference Bok, van der Vleuten and de Jong35 A far richer narrative and clearer image of a trainee’s progress can be garnered through multiple different assessment tools at different time points, each with appropriate feedback to the trainee. Reference de Jong, Bok, Schellekens, Kremer, Jonker and van der Vleuten36 This should ensure high reliability (sampling) and validity (authenticity of the competences tested) of the assessments undertaken.
Using programmatic assessment, individual assessment points are maximised for learning and feedback value, while high-stakes decisions are determined only by an aggregation of many data points. Reference Van Der Vleuten, Schuwirth, Driessen, Govaerts and Heeneman37 This approach is very different from historical practices, in which high-stakes pass–fail decisions were based on single assessments, a limited number of assessment methods were employed, expert judgements were minimised, and often limited feedback was provided to trainees. Reference Van Der Vleuten, Schuwirth, Driessen, Govaerts and Heeneman37 A potential programme of assessment is provided in Table 2, but this could be tailored differently for each training centre according to the resources available. Programmatic assessment-for-learning can be applied to any part of the paediatric cardiology training continuum, provided that the underlying learning conception is constructivist. Reference Van Der Vleuten, Schuwirth, Driessen, Govaerts and Heeneman37
Final high-stakes assessment by committee reviewing electronic portfolio and feedback.
+ Required number of procedures.
+/− Progress to AEPC or national exit examination and certification.
Abbreviations: AEPC, Association for European Paediatric and Congenital Cardiology; ARCP, annual review of competency progression; DOPS, directly observed procedures; mini-CEX, mini-clinical evaluation exercise; OSCE, objective structured clinical examination; P, peer feedback; Par(t), parent (or patient) feedback; T, trainer feedback.
AEPC certification examination
Given the need to standardise training across Europe, the education committee of the AEPC developed a certification examination based on its training recommendations, which contributes partly towards AEPC certification. A logbook completes the certification process, with trainees signed off on their performance by their trainers at their local centres.
Common problems in training
Several delegates reported many positive aspects of training in European centres. Trainees themselves have reported high satisfaction with overall training in countries with well-established programmes. Reference Horst, Michel, Kubicki, Lang, Zschirnt and Moosmann38 One of the most frequently cited problems in training was limited time. Other weaknesses reported included too few fellows in the programme, lack of a formalised training structure (teaching/assessment), lack of standardisation of subspecialist services, smaller centres with limited capacity to deliver all subspecialist services, and lack of local access to a cardiac morphology course. Reference McMahon, Heying and Budts2 Trainees’ lack of access to all subspecialist services is a challenge for smaller programmes, and there is perhaps potential for trainees from smaller programmes to spend time in larger centres with greater exposure to subspecialist services.
Comparison with United States training and assessment
Training in the United States of America is well organised, with over 60 paediatric cardiology fellowship programmes now in existence (https://www.nrmp.org/fellowship-match-data/). Most cardiac programmes have a dedicated fellowship director who meets regularly with trainees and monitors their progress in reaching the six competences promoted by the Accreditation Council for Graduate Medical Education. Monthly evaluations of trainee performance in different rotations are provided. Knowledge-based assessments are undertaken in several programmes at different stages during the training year. Trainees meet with the fellowship director every 6 to 12 months to evaluate how they are progressing and whether they are reaching important competences. Reference McMahon, Heying and Budts2
One of the challenges for European programmes is significant resource limitation, especially in terms of faculty. Medium-sized US programmes often have larger faculties than European centres. In addition, many also have a dedicated fellowship director who is formally trained as an educationalist or has some degree of educationalist training. This enables faculty to spend more time training and to undertake more comprehensive fellow assessments. Increased resources for faculty and educationalist training, often a challenge for smaller European centres, are critical to implementing an effective training framework. Reference McMahon, Heying and Budts2
Limitations
This study attempted to review the different pedagogical techniques and forms of assessment used across Europe. Although 46 centres (49% of those surveyed) participated, the remainder did not respond to the invitation; this response rate of approximately one half may have biased the overall findings of the study. We tried, when possible, to survey the training director, an educationalist, or a cardiologist actively involved in training at each centre. We did not provide definitions of workplace-based assessments, as we anticipated that those involved in training could differentiate between assessments, for example, directly observed procedures as distinct from case-based discussions. However, there may be subtle differences in how trainers understand and apply these assessments. The study did not address the impact of different educational techniques, or of the quality of training assessment, on the clinical competence of trainees at the end of their training.
Conclusion
In conclusion, there is marked variation in the assessment of paediatric cardiology trainees across different centres in Europe. Assessment is not a box-ticking exercise but should aim to assess clinical competence as a global construct. Given resource limitations, we must be pragmatic in how we implement the assessment of training. However, encouraging training centres to move towards a competency/capability-based programmatic assessment model, using multiple assessment techniques at different time points with multisource feedback, may promote assessment for learning for paediatric cardiology trainees.
Supplementary material
The supplementary material for this article can be found at https://doi.org/10.1017/S1047951123003098.
Acknowledgements
We are grateful to the AEPC council for their support in undertaking this project. Ms. Linda Bosschers provided enormous assistance in finalising questionnaires and coordinating the project.
We are grateful to the following doctors for contributing data from their respective centres:
I. Michel-Behnke, T. Podnar, G. Tulzer, M. Gewillig, K. Vandekerckhove, J. Janousek, L. Idorn, K. Juul, J. Pihkala, F. Berger, C. Apitz, A. Uebing, T. Kriebel, P. Ewert, A. Eicken, H.G. Kehl, N. Haas, K. Brockmeier, I. Papagiannis, S. Rammos, M. Russo, G. Di Salvo, G. Santoro, I. Lubaua, O. Kinciniene, R. Sudikiene, R. Vankeviciene, S. Sendzikaite, N. Blom, RMF Berger, R. Anjos, J. Moreira, T. Rodica, F. Amalia, M. Shkolnikova, N. Hakacova, P. Elfstrom, H. Wåhlander, B. Donner, E. Valsangiacomo Buechel, M. Beghetti, I. Cansaran Tanidir, E. Tutar, T. Uçar, B. Tsai-Goodman, T. Thomson, F. BuʼLock, N. Dedieu, A. Toscano, R. Fiszer, W. Helbing, O. Baspinar, and T. Meşe
Financial support
This research received no specific grant from any funding agency, commercial, or not-for-profit sectors.
Competing interests
There are none to report.
Ethical standard
Approval of the above study was obtained from the Ethics Department at Children’s Health Ireland (CHI), Crumlin, Dublin, Ireland.