Teaching and Learning in Medicine
-
SGEA 2015 CONFERENCE ABSTRACT (EDITED). Evaluating Interprofessional Teamwork During a Large-Scale Simulation. Courtney West, Karen Landry, Anna Graham, and Lori Graham. CONSTRUCT: This study investigated the multidimensional measurement of interprofessional (IPE) teamwork as part of large-scale simulation training. ⋯ Multidimensional, competency-based instruments appear to provide a robust view of IPE teamwork; however, challenges remain. Due to the large scale of the simulation exercise, observation-based assessment did not function as well as self- and standardized patient-based assessment. To promote greater variation in observer assessments during future Disaster Day simulations, we plan to adjust the rating scale from "not observed," "observed," and "not applicable" to a 4-point scale and reexamine interrater reliability.
-
Randomized Controlled Trial
Use of a checklist during observation of a simulated cardiac arrest scenario does not improve time to CPR and defibrillation over observation alone for subsequent scenarios.
Immersive simulation is a common mode of education for medical students. Observation of clinical simulations prior to participation is believed to be beneficial, though it is often a passive process; active observation may be more beneficial. ⋯ Observation alone led to improved performance in the management of subsequent simulated cardiac arrests. The active use of a simple skills-based checklist during observation did not appear to improve performance beyond passive observation alone.
-
The recognition and management of acutely unwell surgical patients is an important skill, yet one to which medical students have little exposure. ⋯ Feedback from students was very positive and indicated that a workshop taught by surgical trainees improved medical students' confidence, self-perceived competence, and knowledge in the assessment and management of acutely unwell surgical patients.
-
CGEA 2015 CONFERENCE ABSTRACT (EDITED). A Novel Approach to Assessing Professionalism in Preclinical Medical Students Using Paired Self- and Peer Evaluations. Amanda R. Emke, Steven Cheng, and Carolyn Dufault. CONSTRUCT: This study sought to assess the professionalism of 2nd-year medical students in the context of team-based learning. ⋯ When used as a professionalism assessment within team-based learning, stand-alone and simultaneous peer and self-assessments are highly correlated within individuals across different courses. However, although stand-alone self-assessment is a significant predictor of the self-assessment made at the time of assessing one's peers, average peer assessment does not predict self-assessment. To explore this lack of predictive power, we classified students into four subgroups based on relative deviation from the median peer and self-assessment scores. Group membership was stable for all subgroups except the one initially sorted into the high self-assessment/low peer assessment subgroup. Members of that subgroup tended to move into the low self-assessment/low peer assessment group at T2, suggesting they became more accurate self-assessors over time. A small number of individuals remained in the group that consistently rated themselves highly while their peers rated them poorly. Future studies will track these students to determine whether similar deviations from accurate professional self-assessment persist into the clinical years. In addition, because students who failed to complete self-assessments had significantly lower peer assessment scores than counterparts who completed them, these students may also be at risk for similar professionalism concerns in the clinical years; follow-up studies will examine this possibility.
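The median-split subgrouping described in the abstract above can be sketched as follows. This is an illustrative reconstruction only: the student labels and scores are invented, and the cut rule (a score at or above the cohort median counts as "high") is an assumption not specified in the abstract.

```python
# Hypothetical sketch of the four-subgroup classification based on
# relative deviation from median peer and self-assessment scores.
# All names and scores below are invented for illustration.
from statistics import median

students = {
    "A": {"self": 4.5, "peer": 4.2},
    "B": {"self": 3.0, "peer": 4.4},
    "C": {"self": 4.8, "peer": 2.9},  # rates self highly, rated low by peers
    "D": {"self": 2.7, "peer": 3.1},
}

# Cohort medians serve as the split points for each dimension.
self_med = median(s["self"] for s in students.values())
peer_med = median(s["peer"] for s in students.values())

def subgroup(scores):
    """Label a student by where their self- and peer-assessment
    scores fall relative to the cohort medians (assumed cut:
    >= median counts as 'high')."""
    self_side = "high self" if scores["self"] >= self_med else "low self"
    peer_side = "high peer" if scores["peer"] >= peer_med else "low peer"
    return f"{self_side}/{peer_side}"

groups = {name: subgroup(s) for name, s in students.items()}
# Student "C" lands in the high self/low peer subgroup, the group
# the study flags for follow-up.
```

Tracking membership in these quadrants at two time points (T1 and T2) would then show the stability, or movement, reported in the abstract.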