Medical education online · Jan 2011
A report on the piloting of a novel computer-based medical case simulation for teaching and formative assessment of diagnostic laboratory testing.
- Clarence D Kreiter, Thomas Haugen, Timothy Leaven, Christopher Goerdt, Nancy Rosenthal, William C McGaghie, and Fred Dee.
- Department of Pathology, University of Iowa Carver College of Medicine, Iowa City, 52242, USA. clarence-kreiter@uiowa.edu
- Med Educ Online. 2011 Jan 1;16.
Objectives: Insufficient attention has been given to how information from computer-based clinical case simulations is presented, collected, and scored. Research is needed on how best to design such simulations so that they yield valid performance assessment data and useful feedback for educational applications. This report describes a study of a new simulation format with design features aimed at improving both its formative assessment feedback and its educational function.
Methods: Case simulation software (LabCAPS) was developed to target a highly focused and well-defined measurement goal with a response format that allows objective scoring. Data from an eight-case computer-based performance assessment administered in a pilot study to 13 second-year medical students were analyzed using classical test theory and generalizability analysis. In addition, a similar analysis was conducted on an administration in a less controlled setting, but with a much larger sample (n = 143), within a clinical course that utilized two random case subsets from a library of 18 cases.
Results: Classical test theory case-level item analysis of the pilot assessment yielded an average case discrimination of 0.37, and all eight cases were positively discriminating (range = 0.11-0.56). Classical test theory coefficient alpha and the decision study showed the eight-case performance assessment to have an observed reliability of α = G = 0.70. The decision study further demonstrated that G = 0.80 could be attained with approximately 3 h and 15 min of testing. The less controlled educational application within a large medical class produced a somewhat lower reliability for eight cases (G = 0.53). Students gave high ratings to the logic of the simulation interface, its educational value, and the fidelity of the tasks.
Conclusions: LabCAPS software shows the potential to provide formative assessment of medical students' skill at diagnostic test ordering and to provide valid feedback to learners.
The perceived fidelity of the performance tasks and the statistical reliability findings support the validity of using the automated scores for formative assessment and learning. LabCAPS cases appear well designed for use as a scored assignment, for stimulating discussions in small group educational settings, for self-assessment, and for independent learning. Extension of the more highly controlled pilot assessment study with a larger sample will be needed to confirm its reliability in other assessment applications.
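The decision-study projection reported above (reliability of 0.70 for eight cases, with 0.80 reachable at roughly 3 h 15 min of testing) is consistent with a standard Spearman-Brown test-length calculation. A minimal sketch, assuming the textbook Spearman-Brown prophecy formula rather than the authors' exact variance-component estimates, and a hypothetical per-case testing time inferred from the reported totals:

```python
# Spearman-Brown prophecy: how much longer must a test be to move
# from an observed reliability to a target reliability?
# Assumptions (not from the paper): standard formula; per-case time
# back-calculated from the reported 3 h 15 min figure.

def spearman_brown_factor(g_current: float, g_target: float) -> float:
    """Return the test-length multiplier needed to raise reliability
    from g_current to g_target."""
    return (g_target * (1.0 - g_current)) / (g_current * (1.0 - g_target))

factor = spearman_brown_factor(0.70, 0.80)  # length multiplier, ~1.71
cases_needed = 8 * factor                   # ~13.7, i.e. about 14 cases

# At roughly 14 cases in 195 minutes, each case would take ~14 min,
# matching the reported "approximately 3 h and 15 min" of testing.
minutes_per_case = 195 / cases_needed
print(round(cases_needed, 1), round(minutes_per_case, 1))
```

This is only an illustrative consistency check on the published figures, not a reconstruction of the authors' generalizability analysis, which would start from the G-study variance components.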