
The mini-PAT as a multi-source feedback tool for trainees in child and adolescent psychiatry: Assessing whether it is fit for purpose

Published online by Cambridge University Press: 02 January 2018

Gill Salmon*
Affiliation:
Cwm Taf University Health Board, UK
Lesley Pugsley
Affiliation:
Cardiff University, UK
*
Correspondence to Gill Salmon ([email protected])

Summary

This paper discusses the research supporting the use of multi-source feedback (MSF) for doctors and describes the mini-Peer Assessment Tool (mini-PAT), the MSF instrument currently used to assess trainees in child and adolescent psychiatry. The relevance of issues raised in the literature about MSF tools in general is examined in relation to trainees in child and adolescent psychiatry, as is the appropriateness of the mini-PAT for this group. Suggestions for change, including modifications to existing MSF tools or the development of a specialty-specific MSF instrument, are offered.

Type
Education & Training
Creative Commons
Creative Commons License - CC BY
This is an open-access article published by the Royal College of Psychiatrists and distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © 2017 The Author

Multi-source feedback (MSF) can motivate doctors to improve and change their practice. 1,2 It gives doctors an overview of how others see them and compares this with their own view as well as with the results of their peer group. 3 MSF evolved in Canada and the USA out of a public demand for accountability to patients, as well as an acceptance that assessments examining clinical decision-making and medical expertise do not address other essential competencies, such as interpersonal skills, professionalism and communication. 3 MSF tools were originally designed to be formative, that is, to lead to awareness of and improvements in performance through feedback. More recently, however, they are being used for summative purposes, namely to provide information for revalidation and the annual review of competence progression (ARCP), which determines whether a trainee is considered fit to proceed with their training. As such, MSF tools need to be sufficiently reliable and valid. Reliability refers to the reproducibility of assessment measures or scores over repeated tests under identical conditions, and validity refers to the degree of confidence that an assessment measures what it is intended to measure. An associated term, feasibility, is a measure of whether an assessment instrument is practical, realistic and sensible given the circumstances and context. 4

Research on the use of MSF for doctors

Ramsey et al 5 published a landmark study showing that it was feasible for internal medicine physicians to obtain peer assessments about their humanistic qualities, clinical practice and communication skills. They also came to important conclusions about the reliability of MSF – for example, that 11 peer ratings were needed to ensure a reliability coefficient of 0.7 (the minimum acceptable for workplace-based assessments (WPBAs)) and that the results were not substantially affected by the relationship between the rater and the person being rated, nor by the method used to select the raters. The findings of this study also suggested that a doctor's medical knowledge (determined by examination marks) was not predictive of how peers subsequently rated their interpersonal relationships or communication skills.
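
The article does not restate how Ramsey et al arrived at the figure of 11 raters, but the standard psychometric route to such estimates is the Spearman-Brown prophecy formula; the sketch below is illustrative rather than a reconstruction of their analysis. If a single rating has reliability ρ1, the predicted reliability of the mean of k ratings is:

\[ \rho_k = \frac{k\,\rho_1}{1 + (k - 1)\,\rho_1} \]

Setting ρk = 0.7 with k = 11 implies a single-rater reliability of about 0.175. The formula also explains a pattern noted throughout the MSF literature: reliability climbs steeply over the first few raters and then flattens, so each assessor beyond roughly a dozen adds little.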

The finding that reliable and valid MSF questionnaires can be developed and are feasible to use for assessing doctors has been replicated across settings and specialties. 2,6–9 A number of systematic reviews have also been published, all of which conclude that MSF has high reliability, validity and feasibility as a method of assessing communication skills, collegiality, humanism and professionalism in doctors. 10–15

The mini-PAT as an MSF tool

The mini-PAT is used by the Royal College of Psychiatrists as an MSF instrument for trainees. It is well known because of its widespread use in the Foundation Programme. 16,17 The mini-PAT was derived from the Sheffield Peer Review Assessment Tool (SPRAT) following a mapping exercise against the foundation curriculum, 6 thus ensuring its content validity. The SPRAT contains 24 questions assessing a doctor's competencies and professional attributes, and it maps directly on to General Medical Council (GMC) standards of good medical practice, 18 again establishing its content validity. These standards include good clinical care, maintaining good medical practice, teaching and training, appraising and assessing, relationships with patients and working with colleagues. The SPRAT was the first MSF tool validated in the UK for use by paediatric consultants as part of their appraisal. 19 It has also been shown to be reliable, needing as few as four raters to determine whether a doctor is in difficulty (more in borderline situations), and feasible, taking only 5–6 minutes to complete, with good return rates (more than 70%). 20 It can also discriminate between more and less experienced trainees. 21

In developing the mini-PAT, nine questions which did not map on to the curriculum for the Foundation Programme were removed from the SPRAT. These included questions relating to the management of complex patients and leadership. One question about probity and health was added, while the free-text element and six-point scale (where 1 indicates 'very poor' and 6 indicates 'very good') remained unchanged. 6 The resulting mini-PAT was thought to reflect the importance for foundation doctors of developing communication skills, team work and other humanistic qualities in relation to patient care, in addition to their medical knowledge. 3

In his critical analysis of the mini-PAT, while accepting its content validity and feasibility, Abdulla stated that it 'lacks sufficient field evaluation and has not gone through any stringent criteria that are required for the validation of an assessment tool'. 3 Data on the reliability and validity of 693 mini-PAT assessments on 553 foundation year 1 and 2 (F1/F2) doctors have subsequently been published. 6 The mean scores of the two groups were found to be significantly different when using the same criterion standard (i.e. expectation for F2 completion), with 19.6% of F1s and 5.6% of F2s being assessed as borderline or below the expectations for F2 completion. This was used as evidence of internal standardisation and construct validity, as was the finding that the trainees scored higher in the domains of working with colleagues and relationships with patients compared with the clinical skills domains. Overall, 53% of F1 doctors and 74% of F2 doctors could have been assessed by no more than 8 assessors based on their mean scores. Factor analysis revealed that the two main factors were humanistic qualities and clinical performance. The authors concluded that the mini-PAT was a valid and reliable MSF tool for assessing foundation doctors.
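
The paper does not spell out how the 'no more than 8 assessors' figure was derived; the usual approach in WPBA studies of this kind, assumed in the sketch below, is to ask how many raters are needed before the 95% confidence interval around a trainee's mean score stops crossing the criterion standard:

\[ \bar{x} \pm 1.96 \times \frac{s}{\sqrt{n}} \]

where x̄ is the trainee's mean score, s the standard deviation of the ratings and n the number of assessors. A trainee whose interval at n = 8 lies wholly above (or below) the criterion can be classified with that many raters, whereas a trainee whose mean sits near the criterion needs more, consistent with the earlier observation that borderline cases require extra assessors.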

Use of the mini-PAT in child and adolescent psychiatry training

In child and adolescent psychiatry, the process for using the mini-PAT is as follows: twice a year, the trainee provides contact details of between 8 and 12 co-workers who see them frequently and in a range of situations. These people and the trainee then complete the mini-PAT online. Presumably based on the findings of Archer et al, 6 it is suggested that at least 8 forms must be completed to ensure the assessment is reliable. There is, however, no research specifically related to the mini-PAT on the minimum number of assessors required to give a valid result. 3 The form uses a 6-point Likert-type rating scale. Trainees are rated according to the standard expected at each stage of training. A score of 4 corresponds to the expected standard, with higher or lower scores suggesting the trainee's performance is better or worse. 22 The responses are analysed centrally and a report is then sent to the trainee's educational supervisor, who delivers the feedback in person. 23
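
As an illustration of the scoring logic just described, the short Python sketch below aggregates a set of mini-PAT-style ratings; the assessor labels, scores and warning thresholds are invented for the example, and this is not the College's actual centralised analysis.

```python
from statistics import mean, stdev

# Invented example: each assessor returns a list of ratings on the 1-6 scale.
# In the real process, forms are completed online and analysed centrally.
ratings = {
    "consultant": [4, 5, 4, 4],
    "ward_nurse": [5, 5, 4, 5],
    "peer_trainee": [4, 4, 3, 4],
    "social_worker": [5, 4, 4, 4],
    "psychologist": [4, 4, 4, 5],
    "team_manager": [3, 4, 4, 4],
    "junior_doctor": [5, 5, 5, 4],
    "administrator": [4, 4, 5, 4],
}

EXPECTED_STANDARD = 4  # a score of 4 = the standard expected at this stage of training
MIN_FORMS = 8          # minimum completed forms suggested for a reliable assessment


def summarise(ratings):
    """Print a simple mini-PAT-style summary for one trainee."""
    if len(ratings) < MIN_FORMS:
        print(f"Warning: only {len(ratings)} forms returned (minimum {MIN_FORMS}).")
    assessor_means = [mean(scores) for scores in ratings.values()]
    overall = mean(assessor_means)
    spread = stdev(assessor_means)
    print(f"Overall mean {overall:.2f} (SD {spread:.2f}); expected standard is {EXPECTED_STANDARD}.")
    if overall < EXPECTED_STANDARD:
        print("Below the expected standard: flag for discussion at feedback.")


summarise(ratings)
```

In practice, of course, the centrally produced report also breaks scores down by domain and collates free-text comments, which carry much of the formative value discussed below.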

Potential issues with using MSF tools

Several issues that have been identified in relation to the use of MSF tools for medical practitioners in general are also relevant to their use in child and adolescent psychiatry. One is the trainee's choice of rater. Although several authors have found that MSF assessment is not necessarily biased by allowing the doctor to select their own raters, 5,24,25 others have found that factors such as the seniority, gender and profession of raters can significantly influence the assessment. For example, Archer et al 21 found that consultant raters using the SPRAT gave significantly lower mean scores to paediatric trainees than more junior doctors did; similarly, Bullock et al 26 found that consultants and senior nurses were more likely than peers or administrators to give 'concern' ratings when assessing junior doctors. Thus, there is a trend for assessors to become more critical with increasing seniority. When considering the mini-PAT, Archer et al 6 found that assessors' scores were affected by their occupation, the length of time the trainee had been working with them, and the working environment. They suggested standardising the number of consultants used as raters by each trainee. These findings support the need for more detailed guidance on rater selection from the Royal College of Psychiatrists; trainees are currently only advised that raters be chosen from a broad range of co-workers. 4 In addition, Abdulla 3 suggests that selection bias can be reduced if the list of raters is discussed and agreed beforehand with the trainee's supervisor.

Measurement errors, such as central tendency bias and the halo effect, can also occur and are particularly likely when behaviours that cannot easily be observed are being assessed. 27 A particular issue for non-doctor raters is knowing what standard to expect of a doctor at that stage of training. In an attempt to reduce measurement errors, Abdulla 3 suggests better education for mini-PAT raters; this could be provided by the Royal College of Psychiatrists as part of its online mini-PAT package.

It has been shown that doctors' self-assessments do not correlate well with peer or patient ratings. 7,28 Violato & Lockyer 29 studied psychiatrists, internal medicine physicians and paediatricians, and found that all were inaccurate in assessing their own performance: psychiatrists rated by peers as being in the bottom quartile saw themselves as 'average', whereas psychiatrists in the top quartile significantly underrated themselves. This suggests that poorly performing doctors often lack insight; they may not accept negative feedback from others, querying its validity. 30 Overeem et al 31 advise that trained facilitators should encourage trainees to reflect on MSF results and help them set concrete goals for improvement. Offering coaching to help trainees identify their strengths and weaknesses may help facilitate changes in performance. 32 Making the feedback highly structured can help trainees acknowledge feedback from all sources, rather than just the medical scores which they tend to value more. 5,33–35 Although taking the mean of the scores may be the most reliable approach, 36 attention should also be given to the free-text comments, which might highlight specific performance issues and may also make the feedback more acceptable. 35 These findings highlight the importance of the MSF feedback process, which should include the development of a relevant action plan in collaboration with the doctor.

It has been proposed that a single, generic MSF tool be used in the UK. 37 Research supporting this includes Violato & Lockyer's 29,38 studies of the use of one MSF tool for internal medicine physicians, paediatricians and psychiatrists. Although they found no specialty differences in response rates or reliability, it is notable that, while items clustered into the same four factors across the specialties, the most discriminating factor for psychiatry was communication, whereas for the other two specialties it was patient management. By contrast, Mackillop et al 39 evaluated the use of a generic MSF tool across specialties and concluded that, although the generic content was appropriate for most specialties, some would benefit from specialty-specific content.

Does the mini-PAT suit the needs of trainees in child and adolescent psychiatry?

In child and adolescent psychiatry, the mini-PAT is currently used to assess trainees. Although the mini-PAT has content validity for foundation doctors, having been mapped against their curriculum, this does not necessarily mean it is also a valid tool for other grades or for use across specialties. In deriving the mini-PAT from the SPRAT, some questions were removed, namely those relating to the management of complex patients and leadership. 6 However, these items are highly relevant to trainees in child and adolescent psychiatry. Davies et al 40 modified the SPRAT for trainees in histopathology following a blueprinting exercise against the histopathology curriculum to establish content validity. They concluded that specialty-specific MSF is feasible and achieves satisfactory reliability. A similar approach, blueprinting the SPRAT against the child and adolescent psychiatry competency-based curriculum, 41 could therefore be considered. The SPRAT also requires fewer raters than the mini-PAT for the results to be sufficiently reliable, 6 adding to its potential suitability for child psychiatry trainees, who often work in small teams.

Alternatively, a specialty-specific MSF instrument for child and adolescent psychiatry trainees could be developed, to reflect the differences in their practice compared with other specialties and the greater importance placed on communication, interpersonal skills, emotional intelligence and relationship building. 4 Tools taking these attributes into account have been developed for use with consultant psychiatrists and have been found to be feasible to use as well as reliable and valid. 42,43 The child and adolescent psychiatry competency-based curriculum 41 gives details of intended learning outcomes (ILOs), which are either mandatory or selective, some of which tap into these areas. The ILOs range from those that are predominantly clinical (e.g. managing emergencies (mandatory), paediatric psychopharmacology (mandatory) and paediatric liaison (selective)) to those that focus on more humanistic skills (e.g. professionalism (mandatory) and establishing and maintaining therapeutic relationships with children, adolescents and families (mandatory)). The ILO on professionalism includes: 'practicing Child and Adolescent Psychiatry in a professional and ethical manner; child and family centred practice; understanding the impact of stigma and other barriers to accessing mental health services and inter-professional and multi-agency working'. 41 Some of the necessary associated skills which trainees are expected to attain include: supervising junior psychiatric staff, working with colleagues within the team and with other agencies to keep the child's needs central, and acting as an advocate for the child. There is scope to develop this area of the curriculum even further; the American Board of Pediatrics (ABP) has published guidelines for the teaching and evaluation of professionalism in paediatric residency programmes 44 as well as standards of professional behaviour against which paediatricians, including those in training, can be evaluated. 45 Both are of relevance to child and adolescent psychiatrists.

If developed, a child and adolescent psychiatry specialty-specific MSF instrument would need to map on to the relevant ILOs. It could also include feedback from patients and families (which is not currently routinely collected as part of the WPBAs) to reflect the need to balance the views of the child (who is the patient) with those of their carers.

Conclusions

MSF tools such as the mini-PAT can provide reliable and valid information on areas of a trainee's performance, such as communication skills and other humanistic qualities affecting patient care, for which other forms of assessment, such as written examinations, are unhelpful. MSF tools were generally designed for formative assessment, which remains their predominant strength. They are most appropriately used within a portfolio of other WPBAs and can help in making decisions about a doctor's fitness to practise or to continue training. 46 Rater bias and measurement error could be reduced by offering more detailed guidance to trainees in their choice of raters, as well as to raters in the use of the tool. Measurement error could also be reduced by encouraging trainees to obtain more returns than the minimum of eight recommended by the Royal College of Psychiatrists. 3 The quality of the feedback given to the trainee is also important, and educational supervisors would benefit from training in this area.

Although the mini-PAT is used widely across specialties, it has only been properly evaluated for use with foundation doctors. Interested researchers, clinicians or educationalists might now want to consider developing a modified version of the SPRAT, or a specialty-specific MSF tool, that is more appropriate to the needs of trainees in child and adolescent psychiatry. Such a tool would reflect the differences in their day-to-day practice compared with that of other trainees, but would need to be mapped to the curriculum and evaluated in practice to ensure content validity and reliability.

Footnotes

Declaration of interest

None.

References

1 Holsgrove, G. Multisource feedback (360-degree assessment). In Workplace-Based Assessments in Psychiatry (eds Bhugra, D, Malik, A, Brown, N): 65–9. Royal College of Psychiatrists, 2007.
2 Lipner, RS, Blank, LL, Leas, BF, Fortna, GS. The value of patient and peer ratings in recertification. Acad Med 2002; 77: 64–6.
3 Abdulla, A. A critical analysis of mini peer assessment tool (mini-PAT). J R Soc Med 2008; 101: 22–6.
4 Fitch, C, Malik, A, Lelliott, P, Bhugra, D, Andiappan, M. Assessing psychiatric competencies: what does the literature tell us about workplace-based assessment? Adv Psychiatr Treat 2008; 14: 122–30.
5 Ramsey, PG, Wenrich, MD, Carline, JD, Inui, TS, Larson, EB, LoGerfo, JP. Use of peer ratings to evaluate physician performance. JAMA 1993; 269: 1655–60.
6 Archer, J, Norcini, J, Southgate, L, Heard, S, Davies, H. Mini-PAT (Peer Assessment Tool): a valid component of a national assessment programme in the UK. Adv Health Sci Educ Theory Pract 2008; 13: 181–92.
7 Hall, W, Violato, C, Lewkonia, R, Lockyer, J, Fidler, H, Toews, J, et al. Assessment of physician performance in Alberta: The Physician Achievement Review. CMAJ 1999; 161: 52–7.
8 Lockyer, J, Blackmore, D, Fidler, H, Crutcher, R, Salter, B, Shaw, K, et al. A study of a multisource feedback system for international medical graduates holding defined licenses. Med Educ 2006; 40: 340–7.
9 Violato, C, Lockyer, J, Fidler, H. Multisource feedback: a method of assessing surgical practice. BMJ 2003; 326: 546–8.
10 Evans, R, Elwyn, G, Edwards, A. A review of instruments for peer assessment of physicians. BMJ 2004; 328: 1240–3.
11 Travaglia, J, Debono, D. Peer Review in Medicine: A Comprehensive Review of the Literature. Centre for Clinical Governance Research in Health, University of New South Wales, 2009.
12 Dubinsky, I, Jennings, K, Greengarten, M, Brans, A. 360-degree physician assessment. Healthc Q 2010; 13: 71–6.
13 Andrews, JJ, Violato, C, Ansari, A, Donnon, T, Pugliese, G. Assessing psychologists in practice: lessons from health professionals using multisource feedback. Prof Psychol Res Pract 2013; 44: 193–207.
14 Khalifa, K, Ansari, A, Violato, C, Donnon, T. Multisource feedback to assess surgical practice: a systematic review. J Surg Educ 2013; 70: 475–86.
15 Donnon, T, Ansari, A, Alawi, A, Violato, C. The reliability, validity, and feasibility of multisource feedback physician assessment: a systematic review. Acad Med 2014; 89: 511–6.
16 Davies, H, Archer, J, Southgate, L, Norcini, J. Initial evaluation of the first year of the Foundation Assessment Programme. Med Educ 2009; 43: 74–81.
17 Carr, S. The Foundation Programme assessment tools: an opportunity to enhance feedback to trainees? Postgrad Med J 2006; 82: 576–9.
18 General Medical Council. Good Medical Practice. GMC, 2001.
19 Archer, J, Norcini, J, Davies, H. Use of SPRAT for peer review of paediatricians in training. BMJ 2005; 330: 1251–3.
20 Davis, H, Archer, J. Multi source feedback: development and practical aspects. Clin Teach 2005; 2: 77–81.
21 Archer, J, McGraw, M, Davies, H. Assuring the validity of multisource feedback in a national programme. Arch Dis Child 2010; 95: 330–5.
22 Searle, G, Holsgrove, G, Brown, N. Trainees' Guide to Workplace Based Assessment (Version 2.0). Royal College of Psychiatrists, 2007 (http://www.rcpsych.ac.uk/pdf/Trainees_Guide_to_WPBA_070629.pdf).
23 Holsgrove, G. Guide to the mini-Peer Assessment Tool (mini-PAT): An Introduction for Trainees, Assessors and Educational Supervisors. Royal College of Psychiatrists, 2006.
24 Violato, C, Marini, A, Toews, J, Lockyer, J, Fidler, H. Feasibility and psychometric properties of using peers, consulting physicians, co-workers and patients to assess physicians. Acad Med 1997; 72: S82–4.
25 Durning, SJ, Cation, LJ, Markert, RJ, Pangaro, LN. Assessing the reliability and validity of the mini-clinical evaluation exercise for internal medicine residency training. Acad Med 2002; 77: 900–4.
26 Bullock, AD, Hassell, A, Markham, WA, Wall, D, Whitehouse, A. How ratings may vary by staff group in multisource feedback assessment of junior doctors. Med Educ 2009; 43: 516–20.
27 Johnson, D, Cujec, B. Comparison of self, nurse, and physician assessment of residents rotating through an intensive care unit. Crit Care Med 1998; 26: 1811–6.
28 Thomas, PA, Gebo, KA, Hellmann, DB. A pilot study of peer review in residency training. J Gen Intern Med 1999; 14: 551–4.
29 Violato, C, Lockyer, J. Self and peer assessment of pediatricians, psychiatrists and medicine specialists: implications for self-directed learning. Adv Health Sci Educ Theory Pract 2006; 11: 235–44.
30 Sargeant, J, Mann, K, Sinclair, D, van der Vleuten, C, Metsemakers, J. Challenges in multisource feedback: intended and unintended outcomes. Med Educ 2007; 41: 583–91.
31 Overeem, K, Wollersheim, H, Driessen, E, Lombarts, K, van de Ven, G, Grol, R, et al. Doctors' perceptions of why 360-degree feedback does (not) work: a qualitative study. Med Educ 2009; 43: 874–82.
32 Miller, A, Archer, J. Impact of workplace based assessment on doctors' education and performance: a systematic review. BMJ 2010; 341: c5064.
33 Wenrich, MD, Carline, JD, Giles, LM, Ramsey, PG. Ratings of the performance of practicing internists by hospital-based registered nurses. Acad Med 1993; 68: 680–7.
34 Higgins, RSD, Bridges, J, Burke, JM, O'Donnell, MA, Cohen, N, Wilkes, SB. Implementing the ACGME general competencies in a cardiothoracic surgery residency program using 360-degree feedback. Ann Thorac Surg 2004; 77: 12–7.
35 Ferguson, J, Wakeling, J, Bowie, P. Factors influencing the effectiveness of multisource feedback in improving professional practice of medical doctors: a systematic review. BMC Med Educ 2014; 14: 76.
36 Wilkinson, JR, Crossley, JG, Wragg, A, Mills, P, Cowan, G, Wade, W. Implementing workplace-based assessment across medical specialties in the United Kingdom. Med Educ 2008; 42: 364–73.
37 Donaldson, L. Trust, Assurance and Safety – The Regulation of Health Professionals in the 21st Century. Department of Health, 2007.
38 Violato, C, Lockyer, J. An examination of the appropriateness of using a common peer assessment instrument to assess physician skills across specialties. Acad Med 2004; 79: S5–8.
39 Mackillop, L, Crossley, J, Vivekananda-Schmidt, P, Wade, W, Armitage, M. A single generic multisource feedback tool for revalidation of all career-grade doctors: does one size fit all? Med Teach 2011; 33: e75–83.
40 Davies, H, Archer, J, Bateman, A, Dewar, S, Crossley, J, Grant, J, et al. Specialty-specific multisource feedback: assuring validity, informing training. Med Educ 2008; 42: 1014–20.
41 Royal College of Psychiatrists. A Competency Based Curriculum for Specialist Training in Psychiatry: Specialists in Child and Adolescent Psychiatry. Royal College of Psychiatrists, 2013.
42 Lelliott, P, Williams, R, Mears, A, Andiappan, M, Owen, H, Reading, P, et al. Questionnaire for 360-degree assessment of consultant psychiatrists: development and psychometric properties. Br J Psychiatry 2008; 193: 156–60.
43 Violato, C, Lockyer, J, Fidler, H. Assessment of psychiatrists in practice through multisource feedback. Can J Psychiatry 2008; 53: 525–33.
44 American Board of Pediatrics. Appendix F: Professionalism. In Program Director's Guide to the ABP: Resident Evaluation, Tracking and Certification. American Board of Pediatrics, 2003.
45 Fallat, M, Glover, G, American Academy of Pediatrics, Committee on Bioethics. Professionalism in pediatrics. Pediatrics 2007; 120: e1123–33.
46 Wright, C, Richards, S, Hill, J, Roberts, M, Norman, G, Greco, M, et al. Multisource feedback in evaluating the performance of doctors: the example of the UK General Medical Council patient and colleague questionnaires. Acad Med 2012; 87: 1668–78.