Teaching and learning in medicine
-
Problem: To achieve their potential in medical and biomedical careers, students (scholars) from under-resourced backgrounds must build sophisticated skills and develop confidence and professionalism. To flourish in an advanced educational system that may be unfamiliar, these scholars also need networks of mentors and role models. These challenges can affect scholars at multiple stages of their education. ⋯ Analysis of the early phases of the CSM initiative demonstrates such outcomes are attainable. Lessons Learned: This program provides experiences in which scholars develop and practice core competencies essential for developing their self-identity as scientists and professionals. The most important lesson learned is that mentorship teams must be highly dynamic, flexible, thoughtful, and personal in responding to the wide range of challenges and obstacles that scholars from under-resourced backgrounds must overcome to achieve career success.
-
Issue: The physical examination has been in decline for many years and poorer skills contribute to medical errors and adverse events. Diagnostic error is also increasing with the complexity of medicine. Comparing the physical examination in Ireland and the United States with a focus on education, assessment, culture, and health systems may provide insight into the decline of the physical exam in the United States, uncover possible strategies to improve clinical skills, and limit diagnostic error. ⋯ However, steps to introduce a culture of assessment to drive learning are being introduced. One area Ireland could learn from the United States is incorporating more technology into the bedside exam. Enhanced physical examination skills in both countries could reduce reliance on expensive investigations and improve diagnostic accuracy.
-
Theory: We used two theoretical frameworks for this study: a) experiential learning, whereby learners construct new knowledge based on prior experience, and learning grows out of a continuous process of reconstructing experience, and b) deliberate practice, whereby the use of testing (test-enhanced learning) promotes learning and produces better long-term retention. Hypothesis: We hypothesized that moving the USMLE Step 1 exam to follow the clerkship year would provide students with a context for basic science learning that may enhance exam performance. We also hypothesized that examination performance variables, specifically National Board of Medical Examiners (NBME) Customized Basic Science Examinations and NBME subject examinations in clinical disciplines, would account for a moderate to large amount of the variance in Step 1 scores. ⋯ Overall, 66.4% of the variance in Step 1 scores after the clerkship year was explained by: the mean score on fourteen pre-clerkship customized NBME exams (p < 0.01, 57.0% R2); performance on the surgery NBME subject exam (p < 0.01, 3.0% R2); the pediatrics NBME subject exam (p < 0.01, 2.0% R2); the Comprehensive Basic Science Self-Assessment (p < 0.01, 2.0% R2); the internal medicine NBME subject exam (p < 0.01, 0.03% R2); the pre-clerkship Integrated Clinical Skills score (p < 0.01, 0.05% R2); and the pre-matriculation MCAT (p < 0.01, 0.01% R2). Conclusion: In our institution, nearly two-thirds of the variance in performance on Step 1 taken after the clerkship year was explained mainly by pre-clerkship variables, with a smaller contribution from clerkship measures. Further study is needed to uncover the specific aspects of the clerkship experience that might contribute to success on high-stakes licensing examinations.
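The variance decomposition above reflects a hierarchical regression, in which predictors are added to an ordinary least-squares model one at a time and each predictor's contribution is the increment in R2. A minimal sketch of that procedure, using synthetic data and hypothetical predictor names (not the study's actual dataset or analysis code):

```python
# Hedged sketch: incremental R^2 from stepwise addition of predictors
# to an OLS model. All data here are synthetic; the predictor roles
# (pre-clerkship exam mean, two subject exams) are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Columns: hypothetical pre-clerkship exam mean, two NBME subject exams
X = rng.normal(size=(n, 3))
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + 0.15 * X[:, 2] + rng.normal(scale=0.5, size=n)

def r_squared(X_sub: np.ndarray, y: np.ndarray) -> float:
    """R^2 of an OLS fit with an intercept term."""
    A = np.column_stack([np.ones(len(y)), X_sub])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - resid.var() / y.var()

prev = 0.0
for k in range(1, X.shape[1] + 1):
    r2 = r_squared(X[:, :k], y)
    print(f"after predictor {k}: cumulative R^2 = {r2:.3f}, increment = {r2 - prev:.3f}")
    prev = r2
```

Because each increment depends on the order in which predictors enter the model, the reported percentages sum (approximately) to the total R2 only under the specific entry order used in the study.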
-
Construct: The construct addressed in this study is assessment of advanced communication skills among senior medical students. Background: The question of who should assess participants during objective structured clinical examinations (OSCEs) has been debated, and options discussed in the literature have included peer, self, standardized patient, and faculty assessment models. What is not known is whether same-level peer-assisted learning can be used for formative assessment of advanced communication skills when no faculty, standardized patients, or other trained assessors are involved in providing feedback. ⋯ Findings: There was fair to good overall agreement among self, same-level peer, standardized patient, and faculty assessment of advanced communication skills. Relative to faculty, peer and standardized patient assessors overestimated advanced communication skills, while self-assessments underestimated skills. Conclusions: Self- and same-level peer-assessment may be a viable alternative to faculty assessment for a formative OSCE on advanced communication skills for senior medical students.
-
Construct: This study seeks to determine validity evidence for the Quality of Assessment for Learning score (QuAL score), which was created to evaluate short qualitative comments that are tied to specific scores entered into a workplace-based assessment, common within the competency-based medical education (CBME) context. Background: In the age of CBME, qualitative comments play an important role in clarifying the quantitative scores rendered by observers at the bedside. Currently there are few practical tools that evaluate mixed data (e.g. associated score-and-comment data), other than the comprehensive Completed Clinical Evaluation Report Rating tool (CCERR), which was originally derived to rate end-of-rotation reports. ⋯ The CCERR performed similarly, correlating with perceived faculty utility (r = 0.67, p < 0.001) and resident utility (r = 0.79, p < 0.001). Conclusions: The QuAL score is a reliable rating score that correlates well with perceptions of utility. The QuAL score may be useful for rating shorter comments generated by workplace-based assessments.