Journal of Clinical Epidemiology
-
When direct and indirect estimates of treatment effects are coherent, network meta-analysis (NMA) estimates should have increased precision (narrower confidence or credible intervals than direct estimates alone), a key benefit of NMA. We have, however, observed cases of sparse networks in which combining direct and indirect estimates results in marked widening of the confidence intervals. In many cases, the assumption of common between-study heterogeneity across the network seems to be responsible for this counterintuitive result. ⋯ The result, however, may be spuriously wide confidence intervals for some of the comparisons in the network (and, in the Grading of Recommendations Assessment, Development, and Evaluation approach, inappropriately low ratings of the certainty of the evidence through rating down for serious imprecision). Systematic reviewers should be aware of the problem and plan sensitivity analyses that produce intuitively sensible confidence intervals. These sensitivity analyses may include using informative priors for the between-study heterogeneity parameter in a Bayesian framework and using fixed-effect models.
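To illustrate the mechanism described above (a sketch in Python with hypothetical numbers, not the authors' analysis), the example below pools three homogeneous direct trials by inverse-variance random effects and shows how borrowing a larger between-study heterogeneity value from a common-heterogeneity network model widens the confidence interval for the same data.

# Illustrative sketch with hypothetical numbers: inverse-variance random-effects
# pooling of one pairwise comparison under two values of the between-study
# heterogeneity tau^2 -- one consistent with the comparison's own homogeneous
# trials (near zero) and one borrowed from a common-heterogeneity network model.
import math

effects   = [0.10, 0.12, 0.08]   # hypothetical log odds ratios from three direct trials
variances = [0.04, 0.05, 0.03]   # hypothetical within-study variances

def pooled_ci(tau2):
    # Random-effects weights: 1 / (within-study variance + tau^2)
    weights = [1.0 / (v + tau2) for v in variances]
    est = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return est, est - 1.96 * se, est + 1.96 * se

print(pooled_ci(0.0))    # tau^2 near zero, as the homogeneous direct trials suggest: narrow CI
print(pooled_ci(0.25))   # larger tau^2 shared across a heterogeneous network: markedly wider CI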
-
To examine, through a cross-sectional survey, how well safety information was reported in drug systematic reviews predating the PRISMA harms checklist, and to explore factors associated with better reporting. ⋯ The reporting of safety information was poor for both Cochrane and non-Cochrane drug systematic reviews predating the PRISMA harms checklist. The findings suggest a strong need to use the PRISMA harms checklist when reporting safety in drug systematic reviews.
-
Diagnostic and prognostic prediction models often perform poorly when externally validated. We investigate how differences in the measurement of predictors across settings affect the discriminative power and transportability of a prediction model. ⋯ When a prediction model is applied in a setting different from the one in which it was developed, its discriminative ability can decrease or even increase if the magnitude or structure of the errors in predictor measurements differs between the two settings. This provides an important starting point for researchers to better understand how differences in measurement methods can affect the performance of a prediction model when it is externally validated or implemented in practice.
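A minimal simulation sketch (Python, with assumed effect sizes and error standard deviations, not the authors' study) makes the decrease case concrete under classical random measurement error: the same single-predictor model discriminates less well when the validation setting measures the predictor more noisily than the development setting.

# Minimal simulation with assumed parameters: the same prediction model loses
# discriminative ability when the predictor is measured with more random error
# in the validation setting than in the development setting.
import numpy as np

rng = np.random.default_rng(0)
n = 20000

def c_statistic(score, y):
    # Rank-based (Mann-Whitney) estimate of the area under the ROC curve
    order = np.argsort(score)
    ranks = np.empty(len(score))
    ranks[order] = np.arange(1, len(score) + 1)
    n1 = y.sum()
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * (len(y) - n1))

x_true = rng.normal(size=n)                           # true predictor value
y = rng.binomial(1, 1 / (1 + np.exp(-1.5 * x_true)))  # outcome driven by the true value

x_dev = x_true + rng.normal(scale=0.3, size=n)   # development setting: modest measurement error
x_val = x_true + rng.normal(scale=1.0, size=n)   # validation setting: noisier measurement

print("c-statistic, development-like measurement:", round(c_statistic(x_dev, y), 3))
print("c-statistic, noisier validation measurement:", round(c_statistic(x_val, y), 3))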