Internal medicine journal · Jul 2022
Professionalism and clinical short answer question marking with machine learning.
- Antoinette Lam, Lydia Lam, Charlotte Blacketer, Roger Parnis, Kyle Franke, Morganne Wagner, David Wang, Yiran Tan, Lauren Oakden-Rayner, Steve Gallagher, Seth W Perry, Julio Licinio, Ian Symonds, Josephine Thomas, Paul Duggan, and Stephen Bacchi.
- University of Adelaide, Adelaide, South Australia, Australia.
- Intern Med J. 2022 Jul 1; 52 (7): 1268-1271.
Abstract
Machine learning may assist in medical student evaluation. This study involved scoring short answer questions administered at three centres. Bidirectional encoder representations from transformers (BERT) were particularly effective for professionalism question scoring (accuracy ranging from 41.6% to 92.5%). In the scoring of 3-mark professionalism questions, machine learning had lower classification accuracy than for clinical questions (P < 0.05). The role of machine learning in medical professionalism evaluation warrants further investigation.
© 2022 Royal Australasian College of Physicians.
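The abstract's headline figures are per-question classification accuracies, i.e. the fraction of answers where the model's mark matches the examiner's. The sketch below illustrates that computation only; the marks are invented for illustration, and this is not the authors' actual pipeline or data.

```python
# Hypothetical illustration: classification accuracy of model-predicted
# marks against examiner-assigned marks (0-3) for two question types.
# All mark values below are invented; they do not come from the study.

def accuracy(predicted, actual):
    """Fraction of questions where the predicted mark equals the examiner's mark."""
    assert len(predicted) == len(actual)
    matches = sum(p == a for p, a in zip(predicted, actual))
    return matches / len(predicted)

# Invented marks for a 3-mark professionalism question and a clinical question.
prof_pred   = [3, 2, 1, 0, 2, 3, 1, 2]
prof_actual = [3, 1, 1, 2, 2, 3, 0, 1]

clin_pred   = [2, 3, 3, 1, 0, 2, 3, 1]
clin_actual = [2, 3, 3, 1, 0, 2, 2, 1]

print(f"Professionalism accuracy: {accuracy(prof_pred, prof_actual):.1%}")  # 50.0%
print(f"Clinical accuracy:        {accuracy(clin_pred, clin_actual):.1%}")  # 87.5%
```

The study's finding is the direction of this comparison (lower accuracy on 3-mark professionalism questions than on clinical questions, P < 0.05); whether such a gap is significant for real data would require the authors' statistical test, which is not reproduced here.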