Advances in Health Sciences Education: Theory and Practice
-
Adv Health Sci Educ Theory Pract · Aug 2006
Accreditation of undergraduate and graduate medical education: how do the standards contribute to quality?
Accreditation organizations such as the Liaison Committee on Medical Education (LCME), the Royal College of Physicians and Surgeons of Canada (RCPSC), and the Accreditation Council for Graduate Medical Education (ACGME) are charged with the difficult task of evaluating the educational quality of medical education programs in North America. Traditionally, accreditation has relied on a quantitative rather than qualitative judgment of the educational facilities, resources, and teaching provided by programs. The focus is on educational processes, but how these processes contribute to outcomes remains unclear. As medical education moves toward outcome-based education grounded in a broad, context-based concept of competence, the accreditation paradigm should change accordingly.
-
Adv Health Sci Educ Theory Pract · Aug 2006
Modeling the problem-based learning preferences of McMaster University undergraduate medical students using a discrete choice conjoint experiment.
To use methods from the field of marketing research to involve students in the redesign of McMaster University's small-group, problem-based undergraduate medical education program. ⋯ Most students preferred a small-group, web-supported, problem-based learning approach led by content experts who facilitated group process. Students favored a program in which tutorial group problems, clinical skills training sessions, and the patients selected for clerkship activities were more closely linked to core curriculum concepts.
-
Adv Health Sci Educ Theory Pract · May 2006
Differential effects of two types of formative assessment in predicting performance of first-year medical students.
Formative assessments are systematically designed instructional interventions to assess and provide feedback on students' strengths and weaknesses in the course of teaching and learning. Despite their known benefits to student attitudes and learning, medical school curricula have been slow to integrate such assessments. This study investigates how performance on two different modes of formative assessment relates to each other and to performance on summative assessments in an integrated medical-school environment. ⋯ A latent variable underlying achievement on open-book formative assessments was highly predictive of achievement on both open- and closed-book summative assessments, whereas a latent variable underlying closed-book assessments predicted performance only on the closed-book summative assessment. Formative assessments can serve as effective predictors of summative performance in medical school. Open-book, untimed assessments of higher-order processes appeared to be better predictors of overall summative performance than closed-book, timed assessments of factual recall and image recognition.
-
Adv Health Sci Educ Theory Pract · Jan 2005
Comparative Study: Does instructor evaluation by students using a web-based questionnaire impact instructor performance?
Student feedback is a valuable method for evaluating the quality of education. The objective of this study was to use a web-based questionnaire to evaluate the factors that may affect the ratings given by students and the impact of those ratings on instructors' teaching performance. ⋯ No significant improvement was found in the mean points of the total group. In the second year, only 16.4% of the instructors were affected positively.
-
Adv Health Sci Educ Theory Pract · Jan 2005
The effects of violating standard item writing principles on tests and students: the consequences of using flawed test items on achievement examinations in medical education.
The purpose of this research was to study the effects of violations of standard multiple-choice item-writing principles on test characteristics, student scores, and pass-fail outcomes. Four basic science examinations, administered to year-one and year-two medical students, were randomly selected for study. Test items were classified as either standard or flawed by three independent raters, blinded to all item performance data. ⋯ Item flaws had little effect on test score reliability or other psychometric quality indices. Results showed that flawed multiple-choice test items, which violate well-established, evidence-based principles of effective item writing, disadvantage some medical students. Item flaws introduce the systematic error of construct-irrelevant variance into assessments, thereby reducing the validity evidence for examinations and penalizing some examinees.