Validity of a cardiology fellow performance assessment: reliability and associations with standardized examinations and awards

Michael W. Cullen, Kyle W. Klarich, Kristine M. Baldwin, Gregory J. Engstler, Jay Mandrekar, Christopher G. Scott, Thomas J. Beckman

Research output: Contribution to journal › Article › peer-review

Abstract

Background: Most work on the validity of clinical assessments for measuring learner performance in graduate medical education has occurred at the residency level. Minimal research exists on the validity of clinical assessments for measuring learner performance in advanced subspecialties. We sought to determine the validity characteristics of assessment scores for fellows in cardiology, the largest subspecialty of internal medicine. Validity evidence included item content, internal consistency reliability, and associations between faculty-of-fellow clinical assessments and other pertinent variables.

Methods: This was a retrospective validation study exploring the content, internal structure, and relations-to-other-variables domains of validity evidence for scores on faculty-of-fellow clinical assessments that include the 10-item Mayo Cardiology Fellows Assessment (MCFA-10). Participants included 7 cardiology fellowship classes. The MCFA-10 item content comprised questions previously validated in the assessment of internal medicine residents. Internal structure evidence was assessed through Cronbach's α. The outcome for relations-to-other-variables evidence was the overall mean faculty-of-fellow assessment score (scale 1–5). Independent variables included common measures of fellow performance.

Findings: Participants included 65 cardiology fellows. The overall mean ± standard deviation faculty-of-fellow assessment score was 4.07 ± 0.18. Content evidence for the MCFA-10 scores was based on published literature and core competencies. Cronbach's α was 0.98, suggesting high internal consistency reliability and offering evidence for internal structure validity. In multivariable analysis to provide relations-to-other-variables evidence, mean assessment scores were independently associated with in-training examination scores (beta = 0.088 per 10-point increase; p = 0.05) and receipt of a departmental or institutional award (beta = 0.152; p = 0.001). Assessment scores were not associated with educational conference attendance, compliance with completion of required evaluations, faculty appointment upon completion of training, or performance on the board certification exam. R² for the multivariable model was 0.25.

Conclusions: These findings provide sound validity evidence establishing item content, internal consistency reliability, and associations with other variables for faculty-of-fellow clinical assessment scores that include MCFA-10 items during cardiology fellowship. Relations-to-other-variables evidence included associations of assessment scores with performance on the in-training examination and receipt of competitive awards. These data support the utility of the MCFA-10 as a measure of performance during cardiology training and could serve as the foundation for future research on the assessment of subspecialty learners.
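As a rough illustration of the internal-consistency statistic reported above, Cronbach's α can be computed from an items-by-raters score matrix as k/(k−1) · (1 − Σ item variances / total-score variance). The sketch below uses only the Python standard library and entirely hypothetical ratings (the real study used 10 MCFA-10 items across 65 fellows; the data here are invented for demonstration):

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for a score matrix: rows = fellows, columns = items."""
    k = len(scores[0])                                  # number of items
    item_vars = [pvariance(col) for col in zip(*scores)]  # per-item variance
    total_var = pvariance([sum(row) for row in scores])   # variance of total scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 1-5 ratings: 4 fellows x 3 items (not the study's data)
ratings = [
    [4, 4, 5],
    [3, 3, 4],
    [5, 5, 5],
    [4, 4, 4],
]
print(round(cronbach_alpha(ratings), 2))  # prints 0.92
```

Values near 1 indicate that the items move together across fellows, which is the sense in which the reported α of 0.98 supports internal structure validity.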

Original language: English (US)
Article number: 177
Journal: BMC Medical Education
Volume: 22
Issue number: 1
DOIs
State: Published - Dec 2022

Keywords

  • Assessment
  • Cardiology fellowship
  • Evaluation
  • Training
  • Validity evidence

ASJC Scopus subject areas

  • Education
