Multicenter Study
Classification of Cardiopulmonary Resuscitation Chest Compression Patterns: Manual Versus Automated Approaches.
- Henry E Wang, Robert H Schmicker, Heather Herren, Siobhan Brown, John P Donnelly, Randal Gray, Sally Ragsdale, Andrew Gleeson, Adam Byers, Jamie Jasti, Christina Aguirre, Pam Owens, Joe Condle, and Brian Leroux.
- Department of Emergency Medicine, University of Alabama School of Medicine, Birmingham, AL.
- Acad Emerg Med. 2015 Feb 1;22(2):204-11.
Objectives: New chest compression detection technology allows the recording and graphical depiction of clinical cardiopulmonary resuscitation (CPR) chest compressions. The authors sought to determine the inter-rater reliability of chest compression pattern classification by human raters and to evaluate agreement between manual classifications and an automated computer classification.
Methods: This was an analysis of chest compression patterns from cardiac arrest patients enrolled in the ongoing Resuscitation Outcomes Consortium (ROC) Continuous Chest Compressions Trial. Thirty CPR process files from patients in the trial were selected. Using written guidelines, research coordinators from each of eight participating ROC sites classified each chest compression pattern as 30:2 chest compressions, continuous chest compressions (CCC), or indeterminate. A computer algorithm was also developed to classify each case automatically. Inter-rater agreement among the manual classifications was tested using Fleiss's kappa. The criterion standard was defined as the classification assigned by the majority of manual raters. Agreement between the automated classification and the criterion standard manual classification was also tested.
Results: The majority of the eight raters classified 12 chest compression patterns as 30:2, 12 as CCC, and six as indeterminate. Inter-rater agreement among manual classifications of chest compression patterns was κ = 0.62 (95% confidence interval [CI] = 0.49 to 0.74). The automated computer algorithm classified chest compression patterns as 30:2 (n = 15), CCC (n = 12), and indeterminate (n = 3). Agreement between automated and criterion standard manual classifications was κ = 0.84 (95% CI = 0.59 to 0.95).
Conclusions: In this study, good inter-rater agreement in the manual classification of CPR chest compression patterns was observed. Automated classification showed strong agreement with human ratings. These observations support the consistency of manual CPR pattern classification as well as the use of automated approaches to chest compression pattern analysis.
© 2015 by the Society for Academic Emergency Medicine.
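The agreement statistic reported above, Fleiss's kappa, can be computed directly from per-case counts of how many raters chose each category. The sketch below is a minimal, generic illustration of that calculation, not the authors' analysis code; the `fleiss_kappa` helper and the example ratings are assumptions made up for demonstration only.

```python
# Minimal sketch of Fleiss's kappa for multi-rater agreement.
# Not the study's analysis code; the example ratings are fabricated.
from collections import Counter

CATEGORIES = ["30:2", "CCC", "indeterminate"]

def fleiss_kappa(ratings, categories):
    """ratings: one list per case, each holding one label per rater."""
    n_cases = len(ratings)
    n_raters = len(ratings[0])

    # n_ij: number of raters assigning case i to category j
    counts = [[Counter(case)[c] for c in categories] for case in ratings]

    # Observed per-case agreement P_i, then mean observed agreement
    p_i = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ]
    p_bar = sum(p_i) / n_cases

    # Marginal category proportions and chance agreement P_e
    p_j = [sum(row[j] for row in counts) / (n_cases * n_raters)
           for j in range(len(categories))]
    p_e = sum(p * p for p in p_j)

    return (p_bar - p_e) / (1 - p_e)

# Hypothetical data: 4 cases rated by 8 raters (illustrative labels only).
example = [
    ["30:2"] * 8,
    ["CCC"] * 7 + ["indeterminate"],
    ["CCC"] * 8,
    ["indeterminate"] * 5 + ["30:2"] * 3,
]
print(round(fleiss_kappa(example, CATEGORIES), 2))  # ~0.69 for this toy data
```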