- Tom J Crijns, Stein J Janssen, Jacob T Davis, David Ring, Hugo B Sanchez, and Science of Variation Group.
- Department of Surgery and Perioperative Care, Dell Medical School, Health Discovery Building 6.706, 1701 Trinity St., Austin, TX 78723, USA. Electronic address: tom.crijns@austin.utexas.edu.
- Injury. 2018 Apr 1; 49 (4): 819-823.
Background: Radiographic fracture classification aids research on prognosis and treatment. AO/OTA classification into fracture type has been shown to be reliable, but further classification of fractures into subgroups reduces interobserver agreement and takes considerable practice and experience to master.

Questions/Purposes: We assessed: (1) differences between more and less experienced trauma surgeons based on hip fractures treated per year, years of experience, and the percentage of their time dedicated to trauma; (2) differences in interobserver agreement between classification into fracture type, group, and subgroup; and (3) differences in interobserver agreement when assessing fracture stability compared with classifying fractures into type, group, and subgroup.

Methods: This study used the Science of Variation Group to measure factors associated with variation in interobserver agreement on the classification of proximal femur fractures on radiographs according to the AO/OTA classification. We selected 30 anteroposterior radiographs from 1061 patients aged 55 years or older with an isolated fracture of the proximal femur, with a spectrum of fracture types proportional to the full database. Interobserver agreement was measured with Fleiss' kappa, and bootstrapping (resamples = 1000) was used to calculate the standard error, z statistic, and 95% confidence intervals. We compared the kappa values of more experienced surgeons with those of less experienced surgeons.

Results: There were no statistically significant differences in kappa values at any classification level (type, group, subgroup) between more and less experienced surgeons. When all surgeons were combined into one group, interobserver reliability was greatest for classifying fractures into type (kappa, 0.90; 95% CI, 0.83 to 0.97; p < 0.001), reflecting almost perfect agreement. When comparing kappa values between classes (type, group, subgroup), we found statistically significant differences between each class. Substantial agreement was found for the clinically relevant groups stable/unstable trochanteric, displaced/non-displaced femoral neck, and femoral head fractures (kappa, 0.60; 95% CI, 0.53 to 0.67; p < 0.001).

Conclusions: This study adds to a growing body of evidence that relatively simple distinctions are more reliable, independent of surgeon experience.
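The abstract names the statistical method (Fleiss' kappa with a subject-level bootstrap, resamples = 1000) but gives no implementation details. Below is a minimal sketch of that computation, assuming ratings are arranged as a subjects × categories count matrix with the same number of raters per subject; the toy data, function names, and choice of percentile confidence intervals are illustrative assumptions, not the authors' actual code.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for a (subjects x categories) matrix of rating counts.
    counts[i, j] = number of raters assigning subject i to category j;
    every row must sum to the same number of raters n."""
    counts = np.asarray(counts, dtype=float)
    N = counts.shape[0]
    n = counts[0].sum()                                         # raters per subject
    p_j = counts.sum(axis=0) / (N * n)                          # category proportions
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))   # per-subject agreement
    P_bar = P_i.mean()                                          # observed agreement
    P_e = np.square(p_j).sum()                                  # chance agreement
    return (P_bar - P_e) / (1 - P_e)

def bootstrap_kappa(counts, resamples=1000, seed=0):
    """Resample subjects (rows) to estimate SE, z statistic, and 95% CI."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts, dtype=float)
    N = counts.shape[0]
    boots = np.array([
        fleiss_kappa(counts[rng.integers(0, N, size=N)])
        for _ in range(resamples)
    ])
    kappa = fleiss_kappa(counts)
    se = boots.std(ddof=1)
    return kappa, se, kappa / se, np.percentile(boots, [2.5, 97.5])

# Hypothetical example: 30 radiographs, 3 categories, 5 raters each (made-up data).
rng = np.random.default_rng(42)
ratings = rng.multinomial(5, [0.6, 0.3, 0.1], size=30)
kappa, se, z, ci = bootstrap_kappa(ratings)
print(f"kappa={kappa:.2f}, SE={se:.2f}, z={z:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f})")
```

Percentile intervals are used here for simplicity; a normal-approximation interval (kappa ± 1.96 × SE) would also be consistent with the reported standard error and z statistic.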