Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare
-
Randomized Controlled Trial
The effectiveness of video-assisted debriefing versus oral debriefing alone at improving neonatal resuscitation performance: a randomized trial.
Debriefing is a critical component of effective simulation-based medical education, yet the optimal format in which to conduct it is unknown. Video review has been promoted as a means of enhancing debriefing, and video-assisted debriefing is widely used in simulation training. However, few empirical studies have evaluated its impact, and their results have been mixed. The objective of this study was to compare the effectiveness of video-assisted debriefing with that of oral debriefing alone at improving performance in neonatal resuscitation. ⋯ Using this study design, we failed to show a significant educational benefit of video-assisted debriefing. Although our results suggest that video-assisted debriefing may not offer a significant advantage over oral debriefing alone, exactly why this is the case remains unclear. Further research is needed to define the optimal role of video review during simulation debriefing in neonatal resuscitation.
-
Comparative Study
Motion capture measures variability in laryngoscopic movement during endotracheal intubation: a preliminary report.
Success rates with emergent endotracheal intubation (ETI) improve with increasing provider experience, yet few objective metrics exist to quantify differences in ETI technique between providers of varying skill levels. We tested the feasibility of using motion capture videography to quantify variability in the motions of the left hand and the laryngoscope among providers of varying experience. ⋯ Motion analysis can detect interprovider differences in hand and laryngoscope movements during ETI, which may be related to provider experience. This technology has the potential to objectively measure training and skill in ETI.
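The report does not describe its analysis pipeline, but a minimal Python sketch of how variability metrics of this kind can be derived from marker trajectories is shown below. The function names, the synthetic random-walk data, and the choice of path length and point-wise positional spread as metrics are illustrative assumptions, not the authors' method.

    import numpy as np

    def path_length(traj):
        # Total distance traveled by a tracked marker.
        # traj: (N, 3) array of x, y, z positions sampled over time.
        return np.linalg.norm(np.diff(traj, axis=0), axis=1).sum()

    def trajectory_variability(trials):
        # Mean point-wise spread of position across repeated attempts.
        # trials: list of (N, 3) arrays resampled to a common length N.
        stacked = np.stack(trials)             # (attempts, N, 3)
        return np.std(stacked, axis=0).mean()  # average positional std dev

    # Hypothetical data standing in for real capture output:
    # three intubation attempts, 100 samples each.
    rng = np.random.default_rng(0)
    attempts = [rng.standard_normal((100, 3)).cumsum(axis=0) for _ in range(3)]
    print(f"path length, attempt 1: {path_length(attempts[0]):.1f}")
    print(f"inter-attempt variability: {trajectory_variability(attempts):.2f}")

Lower inter-attempt variability for experienced providers than for novices would be consistent with the interprovider differences the authors report, though the metrics actually used in the study may differ.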
-
Defining valid, reliable, defensible, and generalizable standards for the evaluation of learner performance is a key issue in assessing both baseline competence and mastery in medical education. Before setting these standards of performance, however, the reliability of the scores yielded by a grading tool must be assessed. Accordingly, the purpose of this study was to assess the reliability of scores generated from a set of grading checklists used by nonexpert raters during simulations of American Heart Association (AHA) Megacodes. ⋯ We have shown that our checklists can yield reliable scores, are appropriate for use by nonexpert raters, and can be used for continuous assessment of team leader performance throughout the review of a simulated Megacode. These checklists may be more appropriate for use by advanced cardiac life support instructors during Megacode assessments than the current tools provided by the AHA.
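The abstract does not name the reliability statistic used. As a purely illustrative sketch, chance-corrected agreement between two raters on binary checklist items can be computed with Cohen's kappa, as below; the rater data are hypothetical.

    import numpy as np

    def cohens_kappa(r1, r2):
        # Chance-corrected agreement between two raters' binary item scores.
        r1, r2 = np.asarray(r1), np.asarray(r2)
        p_obs = np.mean(r1 == r2)                 # observed agreement
        p1, p2 = r1.mean(), r2.mean()             # each rater's rate of 1s
        p_chance = p1 * p2 + (1 - p1) * (1 - p2)  # agreement expected by chance
        return (p_obs - p_chance) / (1 - p_chance)

    # Hypothetical scores: two nonexpert raters marking the same
    # ten checklist items as performed (1) or not performed (0).
    rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
    rater_b = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
    print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # 0.52 here

Studies with more than two raters or non-binary item scores more often report an intraclass correlation coefficient instead, and the study here may well have done so.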