Journal of Managed Care & Specialty Pharmacy
-
J Manag Care Spec Pharm · Mar 2014
Cost variability of suggested generic treatment alternatives under the Medicare Part D benefit.
The substitution of generic treatment alternatives for brand-name drugs is a strategy that can help lower Medicare beneficiaries' out-of-pocket costs. Beginning in 2011, Medicare beneficiaries reaching the coverage gap received a 50% discount on the full drug cost of brand-name medications and a 7% discount on generic medications filled during the gap. These discounts will increase each year until 2020, when beneficiaries will be responsible for 25% of total drug costs during the coverage gap. ⋯ Medicare beneficiaries can realize significant out-of-pocket cost savings by taking CMS-suggested generic treatment alternatives. However, because of the larger discounts on brand-name medications introduced by the recent changes reducing the coverage gap, the potential dollar savings from taking suggested generic treatment alternatives during the gap are less compelling and will decrease further as the subsidies increase.
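To illustrate the arithmetic behind this conclusion, the sketch below uses hypothetical drug prices (not figures from the study) to show how the 50% brand-name discount and 7% generic discount narrow the out-of-pocket gap between a brand-name drug and a generic alternative during the coverage gap, and how that gap narrows further as beneficiary cost sharing falls to 25% by 2020.

# Hypothetical illustration of coverage-gap out-of-pocket (OOP) costs;
# the prices below are invented for the example, not taken from the study.

def oop_cost(full_price, beneficiary_share):
    """Out-of-pocket cost in the gap, given the beneficiary's share of the full price."""
    return full_price * beneficiary_share

brand_price, generic_price = 200.00, 40.00   # hypothetical monthly prices

# Pre-2011: beneficiaries paid 100% of drug costs in the gap.
pre2011_savings = oop_cost(brand_price, 1.00) - oop_cost(generic_price, 1.00)

# 2011: 50% discount on brand-name drugs, 7% discount on generics.
savings_2011 = oop_cost(brand_price, 0.50) - oop_cost(generic_price, 0.93)

# 2020: beneficiaries responsible for 25% of total drug costs for both drug types.
savings_2020 = oop_cost(brand_price, 0.25) - oop_cost(generic_price, 0.25)

print(f"Savings from generic substitution, pre-2011: ${pre2011_savings:.2f}")
print(f"Savings from generic substitution, 2011:     ${savings_2011:.2f}")
print(f"Savings from generic substitution, 2020:     ${savings_2020:.2f}")
# Pre-2011: $160.00; 2011: $62.80; 2020: $40.00 -- the dollar savings shrink as
# the brand-name discount grows, which is why the generic switch becomes less compelling.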
-
J Manag Care Spec Pharm · Mar 2014
The GRACE checklist for rating the quality of observational studies of comparative effectiveness: a tale of hope and caution.
While there is growing demand for information about comparative effectiveness (CE), there is substantial debate about whether and when observational studies have sufficient quality to support decision making. ⋯ The 11-item GRACE checklist provides guidance for determining which observational studies of CE have used strong scientific methods and good data that are fit for purpose and merit consideration for decision making. The checklist contains a parsimonious set of elements that can be objectively assessed in published studies, and user testing shows that it can be successfully applied to studies of drugs, medical devices, and clinical and surgical interventions. Although no scoring is provided, study reports that rate relatively well across the checklist items merit in-depth examination to understand applicability, effect size, and the likelihood of residual bias. The current testing and validation efforts did not achieve clear discrimination between studies that are fit for purpose and those that are not, but we have identified a critical, though remediable, limitation in our approach: failing to specify a granular decision for evaluation, or to identify a single study objective in reports that included more than one, left reviewers with too broad an assessment challenge. We believe that future efforts will be more successful if reviewers are asked to focus on a specific objective or question. Despite the challenges encountered in this testing, an agreed-upon set of assessment elements, checklists, or scorecards is critical for the maturation of this field. Substantial resources will be expended on studies of real-world effectiveness, and if the rigor of these observational studies cannot be assessed, their impact will be suboptimal. Similarly, agreement on key elements of quality will ensure that budgets are appropriately directed toward those elements. Given the importance of this task and the lessons learned from these extensive validation and user-testing efforts, we are optimistic about the potential for improved assessments that can be used in diverse situations by people with a wide range of experience and training. Future testing would benefit from directing reviewers to address a single, granular research question (avoiding the problems that arose when the checklist was used to evaluate multiple objectives), from using other types of validation test sets, and from employing further multivariate analysis to determine whether any combination or sequence of item responses has particularly high predictive validity.