• Spine · Jan 2006

    An examination of the reliability of a classification algorithm for subgrouping patients with low back pain.

    • Julie M Fritz, Gerard P Brennan, Shannon N Clifford, Stephen J Hunter, and Anne Thackeray.
    • Division of Physical Therapy, University of Utah, Intermountain Health Care, Salt Lake City, USA. julie.fritz@hsc.utah.edu
    • Spine. 2006 Jan 1; 31 (1): 77-82.

    Study Design: Test-retest design to examine interrater reliability.

    Objective: To examine the interrater reliability of individual examination items and a classification decision-making algorithm, using physical therapists with varying levels of experience.

    Summary of Background Data: Classifying patients based on clusters of examination findings has shown promise for improving outcomes. Examining the reliability of examination items and the classification decision-making algorithm may improve the reproducibility of classification methods.

    Methods: Patients with low back pain of less than 90 days' duration participating in a randomized trial were examined on separate days by different examiners. Interrater reliability of individual examination items important for classification was examined in clinically stable patients using kappa coefficients and intraclass correlation coefficients. The findings from the first examination were used to classify each patient with the decision-making algorithm by clinicians with varying amounts of experience. The reliability of the classification algorithm was examined with kappa coefficients.

    Results: A total of 123 patients participated (mean age 37.7 [+/-10.7] years, 44% female); 60 (49%) remained stable between examinations. Reliability of range of motion, of centralization/peripheralization judgments with flexion and extension, and of the instability test was moderate to excellent. Reliability of centralization/peripheralization judgments with repeated or sustained extension, and of aberrant movement judgments, was fair to poor. Overall agreement on classification decisions was 76% (kappa = 0.60, 95% confidence interval 0.56-0.64), with no significant differences based on level of experience.

    Conclusion: Reliability of the classification algorithm was good. Further research is needed to identify sources of disagreements and improve reproducibility.
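    The kappa coefficients reported above measure interrater agreement corrected for chance. As an illustrative sketch only (the subgroup labels and ratings below are hypothetical, not the study's data), Cohen's kappa for two raters classifying the same patients can be computed as:

    ```python
    # Illustrative sketch of Cohen's kappa for two raters assigning
    # patients to classification subgroups. Labels are hypothetical.
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two equal-length sequences of category labels."""
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)
        # Observed proportion of agreement
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Expected (chance) agreement from each rater's marginal frequencies
        counts_a = Counter(rater_a)
        counts_b = Counter(rater_b)
        expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
        return (observed - expected) / (1 - expected)

    # Hypothetical subgroup assignments by two examiners
    a = ["manip", "stab", "exercise", "traction", "manip", "stab"]
    b = ["manip", "stab", "exercise", "manip", "manip", "exercise"]
    print(round(cohens_kappa(a, b), 2))  # → 0.54
    ```

    Kappa discounts the agreement two raters would reach by guessing from their own base rates, which is why 76% raw agreement in the study corresponds to a lower kappa of 0.60.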


