BMC Med Res Methodol
-
BMC Med Res Methodol · Jan 2013
Comparative Study: Modelling heterogeneity variances in multiple treatment comparison meta-analysis--are informative priors the better solution?
Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variances for all involved treatment comparisons are equal (i.e., the 'common variance' assumption). This approach 'borrows strength' for heterogeneity estimation across treatment comparisons and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. ⋯ MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability, or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice.
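The paper works in a Bayesian framework, but the common-variance trade-off it describes can be illustrated with a frequentist moment estimator: a minimal sketch, assuming the familiar DerSimonian-Laird estimate of the between-trial variance tau^2, computed per comparison versus pooled ("borrowing strength") across all comparisons. This is an illustration of the trade-off only, not the Bayesian models the paper evaluates.

```python
def dl_tau2(effects, variances):
    """DerSimonian-Laird moment estimate of the between-trial variance
    tau^2 for a single treatment comparison (effects: per-trial effect
    estimates, variances: their within-trial variances)."""
    k = len(effects)
    w = [1.0 / v for v in variances]                          # inverse-variance weights
    sw = sum(w)
    mu = sum(wi * y for wi, y in zip(w, effects)) / sw        # fixed-effect mean
    q = sum(wi * (y - mu) ** 2 for wi, y in zip(w, effects))  # Cochran's Q
    c = sw - sum(wi * wi for wi in w) / sw
    return max(0.0, (q - (k - 1)) / c)

def dl_tau2_common(comparisons):
    """'Common variance' analogue: pool Q, the degrees of freedom and the
    scaling constant C over all comparisons before solving for tau^2 --
    a moment-based illustration of how borrowing strength across sparse
    comparisons stabilises (and homogenises) the heterogeneity estimate."""
    q_tot = df_tot = c_tot = 0.0
    for effects, variances in comparisons:
        k = len(effects)
        w = [1.0 / v for v in variances]
        sw = sum(w)
        mu = sum(wi * y for wi, y in zip(w, effects)) / sw
        q_tot += sum(wi * (y - mu) ** 2 for wi, y in zip(w, effects))
        df_tot += k - 1
        c_tot += sw - sum(wi * wi for wi in w) / sw
    return max(0.0, (q_tot - df_tot) / c_tot)
```

With two-trial comparisons, the pooled estimate sits between the comparison-specific values: precise, but biased for any comparison whose true heterogeneity deviates from the average, which is exactly the coverage problem the abstract describes.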
-
BMC Med Res Methodol · Jan 2013
Trauma registry record linkage: methodological approach to benefit from complementary data using the example of the German Pelvic Injury Register and the TraumaRegister DGU(®).
In Germany, hospitals can deliver data from patients with pelvic fractures selectively or twofold to two different trauma registries, i.e. the German Pelvic Injury Register (PIR) and the TraumaRegister DGU(®) (TR). Both registers are anonymous and differ in composition and content. We describe the methodological approach of linking these registries and reidentifying twofold documented patients. The aim of the approach is to create an intersection set that benefit from complementary data of each registry, respectively. Furthermore, the concordance of data entry of some clinical variables entered in both registries was evaluated. ⋯ Individually, the PIR and the TR reflect a valid source for documenting injured patients, although the data reflect the emphasis of the particular registry. Linking the two registries enabled new insights into care of multiple-trauma patients with pelvic fractures even when linkage rates were poor. Future considerations and development of the registries should be done in close bilateral consultation with the aim of benefiting from complementary data and improving data concordance. It is also conceivable to integrate individual modules, e.g. a pelvic fracture module, into the TR likewise a modular system in the future.
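Because both registries are anonymous, re-identification of twofold-documented patients must rely on agreement across indirect key variables. A minimal sketch of deterministic linkage follows; the key set (`hospital_id`, `admission_year`, `age`, `sex`) is hypothetical, chosen only for illustration -- the actual PIR/TR matching variables are described in the full paper, not the abstract.

```python
def link_records(pir, tr, keys=("hospital_id", "admission_year", "age", "sex")):
    """Deterministic linkage sketch: pair anonymous records from two
    registries that agree on every linkage key, then merge each matched
    pair so the combined record carries the complementary fields of both
    sources. Key names here are hypothetical placeholders."""
    index = {}
    for rec in tr:  # index the TR records by their key tuple
        index.setdefault(tuple(rec[k] for k in keys), []).append(rec)
    linked = []
    for rec in pir:  # look each PIR record up in the TR index
        for match in index.get(tuple(rec[k] for k in keys), []):
            linked.append({**rec, **match})  # merged intersection record
    return linked
```

Real-world linkage of this kind is usually probabilistic rather than exact, since coarse anonymous keys produce both false matches and missed matches -- one reason the abstract reports poor linkage rates.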
-
BMC Med Res Methodol · Jan 2013
Selecting optimal screening items for delirium: an application of item response theory.
Delirium (acute confusion) is a common, morbid, and costly complication of acute illness in older adults. Yet researchers and clinicians lack short, efficient, and sensitive case-identification tools for delirium. Though the Confusion Assessment Method (CAM) is the most widely used algorithm for delirium, the existing assessments that operationalize the CAM algorithm may be too long or complicated for routine clinical use. Item response theory (IRT) models help facilitate the development of short screening tools for use in clinical applications or research studies. This study utilizes IRT to identify a reduced set of optimally performing screening indicators for the four CAM features of delirium. ⋯ We identified optimal indicators from a large item pool to screen for delirium. The selected indicators maintain fidelity to the clinical constructs of delirium while maximizing the psychometric information important for screening. This reduced item set facilitates the development of short screening tools suitable for use in clinical applications or research studies. This study represents the first step in establishing an item bank for delirium screening, with potential questions from which clinical researchers can select and tailor according to their research objectives.
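The IRT item-reduction idea can be sketched concretely: under a standard two-parameter logistic (2PL) model, each item's Fisher information at a latent trait level theta is I(theta) = a^2 * P(theta) * (1 - P(theta)), and a short screener keeps the items most informative near the screening threshold. The discrimination (a) and difficulty (b) values below are generic symbols, not the paper's estimates.

```python
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item at latent trait theta:
    I(theta) = a^2 * P * (1 - P), where P = 1 / (1 + exp(-a*(theta - b)))
    is the 2PL probability of endorsing the item."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def pick_top_items(items, theta, n):
    """Rank candidate (a, b) items by information at theta (e.g. near a
    screening cut-off) and keep the n most informative -- the core idea
    behind IRT-based reduction of a large item pool."""
    return sorted(items, key=lambda ab: -item_information(theta, *ab))[:n]
```

Information peaks where the item's difficulty matches theta and grows with the square of discrimination, which is why a few well-targeted, highly discriminating indicators can screen nearly as well as a long battery.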
-
BMC Med Res Methodol · Jan 2013
Direct risk standardisation: a new method for comparing casemix adjusted event rates using complex models.
Comparison of outcomes between populations or centres may be confounded by casemix differences, and standardisation is carried out to avoid this. However, when the casemix adjustment models are large and complex, direct standardisation has been described as "practically impossible", and indirect standardisation may lead to unfair comparisons. We propose a new method of directly standardising for risk rather than for casemix, which overcomes these problems. ⋯ Direct risk standardisation using our proposed method is as straightforward as conventional direct or indirect standardisation, always enables fair comparisons of performance, can use continuous casemix covariates, and in our examples had standard errors similar to those of the SMR. It should be preferred when there is a risk that conventional direct or indirect standardisation will lead to unfair comparisons.
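The abstract does not give the method's formulae, so the following is only a rough sketch of the underlying idea under stated assumptions: instead of standardising over casemix strata (impractical with many covariates), one can standardise over the one-dimensional distribution of model-predicted risk, here simplified to coarse risk bins. The conventional indirect comparator (the SMR) is included for contrast; the binning scheme is our simplification, not the paper's actual procedure.

```python
def smr(observed_events, expected_risks):
    """Indirect standardisation: observed events divided by the sum of
    model-predicted risks (the expected count under the casemix model)."""
    return observed_events / sum(expected_risks)

def bin_index(r, bins):
    """Index of the half-open risk interval [bins[i], bins[i+1]) holding r."""
    for i in range(len(bins) - 1):
        if bins[i] <= r < bins[i + 1]:
            return i
    return len(bins) - 2  # r equal to the upper bound falls in the last bin

def direct_risk_standardised_rate(risks, outcomes, ref_risks, bins):
    """Sketch of direct standardisation over predicted risk: take the
    centre's observed event rate within each risk bin and average those
    rates, weighted by the reference population's share of each bin."""
    k = len(bins) - 1
    n_centre, events = [0] * k, [0] * k
    for r, y in zip(risks, outcomes):
        i = bin_index(r, bins)
        n_centre[i] += 1
        events[i] += y
    ref_share = [0.0] * k
    for r in ref_risks:
        ref_share[bin_index(r, bins)] += 1.0 / len(ref_risks)
    return sum(ref_share[i] * events[i] / n_centre[i]
               for i in range(k) if n_centre[i])
```

Because the weighting uses a common reference risk distribution, two centres' standardised rates are directly comparable, which is the fairness property indirect standardisation can lack.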
-
BMC Med Res Methodol · Jan 2013
A comparison of Cohen's Kappa and Gwet's AC1 when calculating inter-rater reliability coefficients: a study conducted with personality disorder samples.
Rater agreement is important in clinical research, and Cohen's Kappa is a widely used method for assessing inter-rater reliability; however, there are well-documented statistical problems associated with the measure. In order to assess its utility, we evaluated it against Gwet's AC1 and compared the results. ⋯ Based on the different formulae used to calculate the level of chance-corrected agreement, Gwet's AC1 was shown to provide a more stable inter-rater reliability coefficient than Cohen's Kappa. It was also found to be less affected by prevalence and marginal probability than Cohen's Kappa, and should therefore be considered for use in inter-rater reliability analysis.
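The "different formulae" differ only in how chance agreement is computed, which a short sketch makes concrete: both coefficients are (pa - pe) / (1 - pe), but kappa derives pe from the product of the two raters' marginals, while AC1 uses the average marginal probability of each category.

```python
def _marginals(table):
    """Row and column marginal probabilities of a square q x q
    contingency table (rows: rater 1, columns: rater 2)."""
    n = sum(sum(row) for row in table)
    rows = [sum(row) / n for row in table]
    cols = [sum(row[j] for row in table) / n for j in range(len(table))]
    return rows, cols

def observed_agreement(table):
    n = sum(sum(row) for row in table)
    return sum(table[i][i] for i in range(len(table))) / n

def cohens_kappa(table):
    """Cohen's kappa: chance agreement pe = sum_k row_k * col_k,
    the product of the two raters' category marginals."""
    rows, cols = _marginals(table)
    pe = sum(r * c for r, c in zip(rows, cols))
    pa = observed_agreement(table)
    return (pa - pe) / (1 - pe)

def gwets_ac1(table):
    """Gwet's AC1: chance agreement from the average marginal
    pi_k = (row_k + col_k) / 2, via pe = sum_k pi_k*(1 - pi_k) / (q - 1)."""
    rows, cols = _marginals(table)
    q = len(table)
    pe = sum(((r + c) / 2) * (1 - (r + c) / 2)
             for r, c in zip(rows, cols)) / (q - 1)
    pa = observed_agreement(table)
    return (pa - pe) / (1 - pe)
```

On a skewed binary table such as [[80, 5], [5, 10]] (both raters mark the trait present 85% of the time), observed agreement is 0.90, yet kappa is about 0.61 while AC1 is about 0.87 -- the prevalence sensitivity the abstract describes.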