Journal of Clinical Epidemiology
-
Systematic reviewers disagree about the ability of observational studies to answer questions about the benefits or intended effects of pharmacotherapeutic, device, or procedural interventions. This study provides a framework for decision making on the inclusion of observational studies to assess benefits and intended effects in comparative effectiveness reviews (CERs). ⋯ Because it is unusual to find sufficient evidence from RCTs to answer all key questions concerning benefit or the balance of benefits and harms, comparative effectiveness reviewers should routinely assess the appropriateness of inclusion of observational studies for questions of benefit. Furthermore, reviewers should explicitly state the rationale for inclusion or exclusion of observational studies when conducting CERs.
-
Updating comparative effectiveness reviews: current efforts in AHRQ's Effective Health Care Program.
To review the current knowledge and efforts on updating systematic reviews (SRs) as applied to comparative effectiveness reviews (CERs). ⋯ CERs need to be regularly updated as new evidence is produced. Lack of attention to updating may lead to outdated and sometimes misleading conclusions that compromise health care and policy decisions. The article outlines several specific goals for future research, one of which is the development of efficient guidelines for updating CERs that are applicable across evidence-based practice centers.
-
To assess whether nominally statistically significant effects in meta-analyses of clinical trials are true and whether their magnitude is inflated. ⋯ Most meta-analyses with nominally significant results pertain to truly nonnull effects, but exceptions are not uncommon. The magnitude of observed effects, especially in meta-analyses with limited evidence, is often inflated.
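This abstract concerns nominally significant pooled effects. As background, the sketch below shows standard fixed-effect inverse-variance pooling, one common way a meta-analytic summary and its nominal significance are computed; the function name and the two-trial numbers are invented for illustration and are not taken from the study:

```python
import math

def pooled_fixed_effect(effects, std_errors):
    """Inverse-variance (fixed-effect) pooling of per-study effect estimates.

    Returns the pooled estimate, its standard error, and the z statistic
    used to judge nominal statistical significance.
    """
    weights = [1.0 / se ** 2 for se in std_errors]          # w_i = 1 / SE_i^2
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    z = pooled / pooled_se
    return pooled, pooled_se, z

# Two hypothetical trials: effect 0.5 (SE 0.2) and effect 0.3 (SE 0.1).
est, se, z = pooled_fixed_effect([0.5, 0.3], [0.2, 0.1])
# |z| > 1.96 corresponds to nominal significance at the two-sided 5% level.
```

Because the more precise study gets the larger weight, a pooled estimate can reach nominal significance even when individual trials do not, which is why the abstract's question of whether such effects are true and inflated matters.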
-
Rare diseases may be difficult to study through conventional research methods, but are amenable to study through certain uncommonly used designs. We sought to explain these designs and to provide a framework to assist researchers in identifying the most appropriate design for a given research question. ⋯ These techniques may facilitate research in rare diseases.
-
Analyses comparing randomized with nonrandomized clinical trials are hampered by the fact that the study populations are usually different. We aimed to compare randomized clinical trials (RCTs) and propensity score (PS) analyses in similar populations. ⋯ In our example, treatment effects of off-pump versus on-pump surgery from RCTs and PS analyses were very similar in a "meta-matched" population of studies, indicating that only a small residual bias remains in PS analyses.
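The study above compares effect estimates from RCTs with those from propensity score analyses. As an illustration of the matching step such analyses often rely on, here is a minimal greedy 1:1 nearest-neighbor sketch; the data, function names, and matching rule are invented for the example (in practice the propensity scores would come from a regression of treatment on observed covariates, and more refined matching algorithms exist):

```python
def match_on_propensity(treated, controls):
    """Greedy 1:1 nearest-neighbor matching on the propensity score,
    without replacement.  Each unit is a (propensity, outcome) pair.
    """
    available = list(controls)
    pairs = []
    for ps_t, y_t in treated:
        # Nearest remaining control by absolute propensity-score distance.
        j = min(range(len(available)), key=lambda k: abs(available[k][0] - ps_t))
        _, y_c = available.pop(j)
        pairs.append((y_t, y_c))
    return pairs

def matched_effect(pairs):
    """Average within-pair outcome difference (treated minus control)."""
    return sum(y_t - y_c for y_t, y_c in pairs) / len(pairs)

# Illustrative data: two treated units matched to the nearest of three controls.
treated = [(0.80, 5.0), (0.60, 4.0)]
controls = [(0.78, 4.5), (0.59, 3.8), (0.30, 2.0)]
effect = matched_effect(match_on_propensity(treated, controls))  # 0.35
```

By comparing each treated unit only with a control that had a similar probability of treatment, the matched difference approximates the contrast an RCT would estimate, under the assumption that all confounders are captured in the propensity model.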