- Alison Walzak, Maria Bacchus, Jeffrey P Schaefer, Kelly Zarnke, Jennifer Glow, Charlene Brass, Kevin McLaughlin, and Irene W Y Ma.
- A. Walzak is clinical instructor, Department of Medicine, University of British Columbia, Victoria, British Columbia, Canada. M. Bacchus is associate professor, Department of Medicine, University of Calgary, Calgary, Alberta, Canada. J.P. Schaefer is clinical professor, Department of Medicine, University of Calgary, Calgary, Alberta, Canada. K. Zarnke is associate professor, Department of Medicine, University of Calgary, Calgary, Alberta, Canada. J. Glow is internal medicine residency program administrator, University of Calgary, Calgary, Alberta, Canada. C. Brass is internal medicine residency program assistant, University of Calgary, Calgary, Alberta, Canada. K. McLaughlin is associate professor, Department of Medicine, University of Calgary, Calgary, Alberta, Canada. I.W.Y. Ma is associate professor, Department of Medicine, University of Calgary, Calgary, Alberta, Canada.
- Acad Med. 2015 Aug 1;90(8):1100-8.
Purpose: To compare procedure-specific checklists and a global rating scale in assessing technical competence.

Method: Two trained raters used procedure-specific checklists and a global rating scale to independently evaluate 218 video-recorded performances of six bedside procedures of varying complexity for technical competence. The procedures were completed by 47 residents participating in a formative simulation-based objective structured clinical examination at the University of Calgary in 2011. Pass/fail (competent/not competent) decisions were based on an overall global assessment item on the global rating scale. Raters provided written comments on performances they deemed not competent. Checklist minimum passing levels were set using traditional standard-setting methods.

Results: For each procedure, the global rating scale demonstrated higher internal reliability and lower interrater reliability than the checklist. However, interrater reliability was almost perfect for decisions on competence using the overall global assessment (kappa range: 0.84-1.00). Clinically significant procedural errors were most often cited as reasons for ratings of not competent. Using checklist scores to diagnose competence demonstrated acceptable discrimination: the area under the curve ranged from 0.84 (95% CI 0.72-0.97) to 0.93 (95% CI 0.82-1.00). Checklist minimum passing levels demonstrated high sensitivity but low specificity for diagnosing competence.

Conclusions: Assessment using a global rating scale may be superior to assessment using a checklist for evaluation of technical competence. Traditional standard-setting methods may establish checklist cut scores with too-low specificity: high checklist scores did not rule out incompetence. The role of clinically significant errors in determining procedural competence should be further evaluated.
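The Results rest on three standard statistics: Cohen's kappa for interrater agreement on the pass/fail decision, the area under the ROC curve for how well checklist scores discriminate competent from not-competent performances, and the sensitivity/specificity of a checklist cut score. The sketch below is a generic illustration of how such quantities are typically computed for a single procedure; the data, the 14-point minimum passing level, and the use of scikit-learn are invented for illustration and are not the authors' analysis, software, or results.

```python
# Hypothetical illustration (not the study's code or data): computing kappa,
# AUC, and cut-score sensitivity/specificity for one procedure.
from sklearn.metrics import cohen_kappa_score, roc_auc_score, confusion_matrix

# Placeholder ratings for eight performances of a single procedure.
# 1 = "competent", 0 = "not competent" on the overall global assessment item.
rater1_competent = [1, 1, 0, 1, 0, 1, 1, 0]
rater2_competent = [1, 1, 0, 1, 0, 1, 0, 0]

# Interrater agreement on the competence decision
# (the abstract reports kappa between 0.84 and 1.00 across procedures).
kappa = cohen_kappa_score(rater1_competent, rater2_competent)

# Placeholder checklist totals for the same performances (items completed out of 20).
checklist_scores = [18, 17, 15, 16, 11, 19, 15, 8]

# Discrimination of the checklist score against the global pass/fail reference
# (the abstract reports AUCs of 0.84 to 0.93).
auc = roc_auc_score(rater1_competent, checklist_scores)

# Sensitivity/specificity of a hypothetical minimum passing level of 14/20.
mpl = 14
checklist_pass = [int(score >= mpl) for score in checklist_scores]
tn, fp, fn, tp = confusion_matrix(rater1_competent, checklist_pass).ravel()
sensitivity = tp / (tp + fn)  # competent performances that pass the cut score
specificity = tn / (tn + fp)  # not-competent performances that fail the cut score

print(f"kappa={kappa:.2f}, AUC={auc:.2f}, "
      f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```

In this toy example one not-competent performance still clears the cut score, which mirrors the paper's point that a high checklist score, and hence a traditionally derived minimum passing level, has high sensitivity but cannot by itself rule out incompetence.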