Medical education
-
Review / Meta-Analysis
Debriefing for technology-enhanced simulation: a systematic review and meta-analysis.
Debriefing is a common feature of technology-enhanced simulation (TES) education. However, evidence for its effectiveness remains unclear. We sought to characterise how debriefing is reported in the TES literature, identify debriefing features that are associated with improved outcomes, and evaluate the effectiveness of debriefing when combined with TES. ⋯ Limited evidence suggests that video-assisted debriefing yields outcomes similar to those of non-video-assisted debriefing. Other debriefing design features show mixed or non-significant results. As debriefing characteristics are usually incompletely reported, future debriefing research should describe all the key debriefing characteristics along with their associated descriptors.
-
Randomized Controlled Trial
Dyad practice is efficient practice: a randomised bronchoscopy simulation study.
Medical simulation training requires effective and efficient training strategies. Dyad practice may be a strategy worth pursuing because it has proven both effective and efficient in motor skill learning. In dyad practice, two participants collaborate in learning a task that each will eventually perform individually. To explore the effects of dyad practice in a medical simulation setting, this study compared the effectiveness and efficiency of dyad practice and individual practice in learning bronchoscopy through simulation-based training. ⋯ Individual practice and dyad practice did not differ in their effectiveness for the acquisition of bronchoscopy skills through supervised simulation training. However, dyad practice proved more efficient: two participants practising as a dyad each learned as much as a participant practising individually, yet the dyad required the same instructor resources and training time as a single learner.
-
In-training evaluation (ITE) is used to assess resident competencies in clinical settings. This assessment is documented in an In-Training Evaluation Report (ITER). Unfortunately, the quality of these reports can be questionable, and training programmes to improve report quality are therefore common. The Completed Clinical Evaluation Report Rating (CCERR) was developed to assess the quality of completed reports and has been shown to do so reliably, enabling the evaluation of these programmes. However, the CCERR is a resource-intensive instrument, which may limit its use. The purpose of this study was to create a screening measure (Proxy-CCERR) that predicts the CCERR outcome in a less resource-intensive manner. ⋯ It is possible to model CCERR scores in a highly predictive manner, and the predictive variables can be extracted easily in an automated process. Because this model is less resource-intensive than the CCERR, it can provide feedback from ITER training programmes to large groups of supervisors and institutions, and even support automated feedback systems based on Proxy-CCERR scores.
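
For illustration only, the sketch below shows how such a screening model might look: a few automatically extractable ITER features (comment count, comment length, vocabulary size, scale completeness) fed into an ordinary linear regression that predicts the expert CCERR score. The abstract does not specify the actual predictors or model, so every feature name and the choice of regression here are assumptions, not the study's method.

# Minimal sketch of a Proxy-CCERR-style screening model. The feature set and
# the use of linear regression are hypothetical illustrations; the study's
# actual predictors and modelling approach are not described in the abstract.

from dataclasses import dataclass
from typing import List
import numpy as np
from sklearn.linear_model import LinearRegression

@dataclass
class ITER:
    comments: List[str]      # free-text comments on the report
    scale_items_used: int    # number of rating-scale items completed

def extract_features(report: ITER) -> List[float]:
    """Automatically extractable proxies for report quality (hypothetical)."""
    words = " ".join(report.comments).split()
    return [
        float(len(report.comments)),                      # comment fields filled in
        float(len(words)),                                # total comment length in words
        float(len(set(w.lower() for w in words))),        # vocabulary size
        float(report.scale_items_used),                   # completeness of rating scales
    ]

def fit_proxy_ccerr(reports: List[ITER], ccerr_scores: List[float]) -> LinearRegression:
    """Fit a screening model that predicts expert CCERR ratings from cheap features."""
    X = np.array([extract_features(r) for r in reports])
    y = np.array(ccerr_scores)
    return LinearRegression().fit(X, y)

# Usage: estimate a CCERR score for a new report without a full expert rating.
# model = fit_proxy_ccerr(training_reports, training_ccerr_scores)
# predicted = model.predict([extract_features(new_report)])

A model of this kind would only be useful as a screen: reports flagged with low predicted scores could then be reviewed with the full CCERR, which is the resource-saving idea the abstract describes.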