- T P Hofer and R A Hayward.
- Veterans Affairs Health Services Research and Development Program, Ann Arbor, MI, USA.
- Med Care. 1996 Aug 1; 34 (8): 737-53.
Objectives: Many groups involved in health care are interested in using external quality indicators, such as risk-adjusted mortality rates, to examine hospital quality. The authors evaluated the feasibility of using mortality rates for medical diagnoses to identify poor-quality hospitals.

Methods: A Monte Carlo simulation model was used to examine whether mortality rates could distinguish 172 average-quality hospitals from 19 poor-quality hospitals (5% versus 25% of deaths being preventable, respectively), using the largest diagnosis-related groups (DRGs) for cardiac, gastrointestinal, cerebrovascular, and pulmonary diseases, as well as an aggregate of all medical DRGs. Discharge counts and observed death rates for all 191 Michigan hospitals were obtained from the Michigan Inpatient Database. Positive predictive value (PPV), sensitivity, and area under the receiver operating characteristic curve were calculated for mortality outlier status as an indicator of poor-quality hospitals. Sensitivity analyses were performed under varying assumptions about the evaluation period, quality differences between hospitals, and unmeasured variability in hospital casemix.

Results: For individual DRG groups, mortality rates were a poor measure of quality, even under the optimistic assumption of perfect casemix adjustment. For acute myocardial infarction, high-mortality outlier status (using 2 years of data and a 0.05 probability cutoff) had a PPV of only 24%; thus, more than three fourths of hospitals labeled poor quality (high-mortality outliers) would actually be of average quality. Even when all medical DRGs were aggregated, with very large quality differences and perfect casemix adjustment still assumed, the sensitivity for detecting poor-quality hospitals was 35% and the PPV was 52%. Even in this extreme case, the PPV was very sensitive to the introduction of small amounts of unmeasured casemix difference between hospitals.

Conclusions: Although they may be useful for some surgical diagnoses, DRG-specific hospital mortality rates probably cannot accurately detect poor-quality outliers for medical diagnoses. Even when collapsed to all medical DRGs, hospital mortality rates seem unlikely to be accurate predictors of poor quality, and punitive measures based on high mortality rates would frequently penalize good or average hospitals.
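The simulation design described in the Methods can be sketched in a few lines. This is a minimal illustrative toy, not the authors' model: the hospital counts and preventable-death fractions (172 average vs. 19 poor hospitals; 5% vs. 25% of deaths preventable) come from the abstract, but the per-hospital discharge count, the baseline unavoidable death rate, and the use of a one-sided normal-approximation outlier test are assumptions made here for illustration.

```python
import math
import random

random.seed(0)

# Numbers from the abstract; DISCHARGES and BASE are illustrative assumptions.
N_AVG, N_POOR = 172, 19
DISCHARGES = 200                 # assumed discharges per hospital per period
BASE = 0.10                      # assumed unavoidable (non-preventable) death rate
RATE_AVG = BASE / (1 - 0.05)     # 5% of deaths preventable -> overall mortality
RATE_POOR = BASE / (1 - 0.25)    # 25% of deaths preventable

def simulate_deaths(rate, n=DISCHARGES):
    """Draw a binomial death count for one hospital."""
    return sum(random.random() < rate for _ in range(n))

def one_trial(cutoff_z=1.645):
    """Flag high-mortality outliers (one-sided p < 0.05, normal approximation)
    against the casemix-wide expected rate, i.e. perfect casemix adjustment."""
    expected = (N_AVG * RATE_AVG + N_POOR * RATE_POOR) / (N_AVG + N_POOR)
    sd = math.sqrt(DISCHARGES * expected * (1 - expected))
    tp = fp = 0
    hospitals = [(RATE_AVG, False)] * N_AVG + [(RATE_POOR, True)] * N_POOR
    for rate, truly_poor in hospitals:
        z = (simulate_deaths(rate) - DISCHARGES * expected) / sd
        if z > cutoff_z:         # labeled a high-mortality outlier
            tp += truly_poor
            fp += not truly_poor
    return tp, fp

TRIALS = 100
tp = fp = 0
for _ in range(TRIALS):
    t, f = one_trial()
    tp += t
    fp += f

ppv = tp / (tp + fp) if (tp + fp) else float("nan")
sensitivity = tp / (TRIALS * N_POOR)
print(f"PPV ~= {ppv:.2f}, sensitivity ~= {sensitivity:.2f}")
```

With toy parameters like these, the flagged set typically contains many average-quality hospitals, which is the qualitative point of the Results: even with perfect casemix adjustment, high-mortality outlier status has limited PPV when true quality differences affect only a modest share of deaths.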