Journal of Evaluation in Clinical Practice
-
Randomized clinical trials (RCTs) can be classified as explanatory or pragmatic. Currently, explanatory and pragmatic are considered the extremes of a continuum: many trials have some features of both explanatory and pragmatic RCTs. The Salford Chronic Obstructive Pulmonary Disease (COPD) trial was an open-label phase 3 RCT assessing an experimental product (fluticasone furoate-vilanterol) versus usual care. ⋯ It is clear that the Salford COPD trial had particular features, sharing some with explanatory phase 3 RCTs and some with pragmatic RCTs. This, however, is not enough to tag it as a "pragmatic" RCT providing "real-world" data. These terms should not be used when referring to pre-licensing RCTs unless they truly describe how the trial was conducted and the type of data gathered, something that, under current clinical trial regulations, will occur only in very rare circumstances.
-
Regardless of health issue, health sector, patient condition, or treatment modality, the chances are that provision is supported by "a guideline" making professionally endorsed recommendations on best practice. Against this background, research seeking to evaluate how effectively such guidance is followed has proliferated. These investigations paint a gloomy picture, with many a guideline prompting lip service, inattention, and even opposition. This predicament has prompted a further literature on how to improve the uptake of guidelines, and this paper considers how to draw together lessons from these inquiries. ⋯ Health care decision makers operate in systems that are awash with guidelines, but guidelines have only paper authority. Managers do not need a checklist of guideline pros and cons, because the fate of a guideline depends on its reception rather than its production. They do need decision support on how to engineer and reengineer guidelines so that they dovetail with evolving systems of health care delivery.
-
Decision curve analysis (DCA) is a widely used method for evaluating diagnostic tests and predictive models. It was developed based on expected utility theory (EUT) and has been reformulated using expected regret theory (ERG). Under certain circumstances, these two formulations yield different results. Here we describe these situations and explain the variation. ⋯ EUT and ERG DCA generate different results when treatment effects are taken into account. The magnitude of the difference depends on the effect of treatment and the disutilities associated with the disease and with treatment effects. This is important to realize, as current practice guidelines are uniformly based on EUT; the same recommendations can differ significantly if they are derived within the ERG framework.
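For readers unfamiliar with DCA, the quantity plotted on a decision curve under the EUT formulation is the net benefit at a threshold probability p_t, NB(p_t) = TP/n − (FP/n) · p_t/(1 − p_t). The sketch below is illustrative only: the function name and synthetic data are assumptions, not taken from the paper, and it shows the standard EUT net-benefit calculation rather than the ERG reformulation the abstract discusses.

```python
# Minimal sketch (assumed names and synthetic data): the standard EUT
# net-benefit formula used in decision curve analysis,
#   NB(p_t) = TP/n - (FP/n) * p_t / (1 - p_t),
# where patients with predicted risk >= p_t are "treated".

def net_benefit(outcomes, predicted_probs, p_t):
    """EUT net benefit of treating patients whose predicted risk >= p_t.

    outcomes: list of 0/1 disease indicators.
    predicted_probs: model-predicted risk for each patient.
    p_t: threshold probability (0 < p_t < 1).
    """
    n = len(outcomes)
    treated = [(y, p) for y, p in zip(outcomes, predicted_probs) if p >= p_t]
    tp = sum(y for y, _ in treated)   # diseased patients correctly treated
    fp = len(treated) - tp            # healthy patients treated unnecessarily
    return tp / n - (fp / n) * p_t / (1 - p_t)

# Synthetic example: 1 = disease present, 0 = absent.
outcomes = [1, 1, 0, 0, 1, 0, 0, 0]
probs    = [0.9, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1, 0.05]

# At p_t = 0.5, three patients are treated (tp = 2, fp = 1):
# NB = 2/8 - (1/8) * 0.5/0.5 = 0.125
print(net_benefit(outcomes, probs, 0.5))  # → 0.125
```

Comparing this curve against "treat all" and "treat none" strategies across a range of p_t values is what produces the familiar decision curve; the paper's point is that once treatment effects and their disutilities enter the calculation, the EUT and ERG versions of this quantity can diverge.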