Medical education
-
In-training evaluation (ITE) is used to assess resident competencies in clinical settings. This assessment is documented in an In-Training Evaluation Report (ITER). Unfortunately, the quality of these reports can be questionable, and training programmes to improve report quality are therefore common. The Completed Clinical Evaluation Report Rating (CCERR) was developed to assess the quality of completed reports and has been shown to do so reliably, enabling the evaluation of such programmes. However, the CCERR is a resource-intensive instrument, which may limit its use. The purpose of this study was to create a screening measure (Proxy-CCERR) that can predict the CCERR outcome in a less resource-intensive manner. ⋯ CCERR scores can be modelled in a highly predictive manner, and the predictive variables can be extracted easily in an automated process. Because this model is less resource-intensive than the CCERR, it becomes feasible to provide feedback from ITER training programmes to large groups of supervisors and institutions, and even to build automated feedback systems based on Proxy-CCERR scores.
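The abstract does not specify the Proxy-CCERR predictors or model form. Purely as an illustration of the general approach, the sketch below assumes hypothetical automatically extractable report features (comment word count, number of competencies with narrative comments, number of specific recommendations) and a simple linear regression against rater-assigned CCERR scores; the study's actual variables and model may differ.

```python
# Illustrative sketch only: feature names and data are hypothetical examples of
# variables that might be extracted automatically from completed ITERs; the
# actual Proxy-CCERR predictors and model are not described in the abstract.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical features per report:
# [comment word count, competencies with narrative comments, specific recommendations]
X = np.array([
    [120, 5, 2],
    [ 30, 1, 0],
    [250, 7, 4],
    [ 80, 3, 1],
    [180, 6, 3],
    [ 45, 2, 0],
])
# Hypothetical mean CCERR scores assigned to the same reports by trained raters
y = np.array([3.8, 1.9, 4.5, 2.7, 4.1, 2.2])

# Fit the proxy model and report in-sample fit
model = LinearRegression().fit(X, y)
print("R^2 on training data:", round(model.score(X, y), 3))

# Generate a Proxy-CCERR prediction for a newly completed report
new_report = np.array([[150, 4, 2]])
print("Predicted CCERR score:", round(model.predict(new_report)[0], 2))
```

Because features like these can be computed without human raters, a model of this kind could, in principle, score every completed ITER and feed automated feedback to supervisors at scale, which is the use case the abstract describes.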
-
The shift from a time-based to a competency-based framework in medical education has created a need for frequent formative assessments. Many educational programmes use some form of written progress test to identify areas of strength and weakness and to promote continuous improvement in their learners. However, the role of performance-based assessments, such as objective structured clinical examinations (OSCEs), in progress testing remains unclear. ⋯ Scores showed high reliability and differed significantly by year of training. This supports the validity of using OSCE scores as markers of progress in learners at different levels of training. Future studies will focus on assessing individual progress on the OSCE over time.
-
Working effectively in interprofessional teams is a core competency for all health care professionals, yet there is a paucity of instruments with which to assess the associated skills. Published medical teamwork skills assessment tools focus primarily on high-acuity situations, such as cardiopulmonary arrests and crisis events in operating rooms, and may not generalise to non-high-acuity environments, such as in-patient wards and out-patient clinics. ⋯ Our study delineates essential elements of teamwork in low-acuity settings, including desirable attributes of team members, thus laying the foundation for the development of an individual teamwork skills assessment tool.