Pflege
-
Comparative Study
[Quality criteria of assessment scales--Cohen's kappa as a measure of interrater reliability (1)].
A widespread index for quantifying the interrater reliability of assessment instruments or procedures is the chance-corrected agreement measure "Cohen's Kappa". Although Cohen's Kappa has proven to be a common measure of agreement between two raters, it is increasingly being used uncritically. Many authors, but also readers, do not seem to take the central and paradoxical characteristics of Kappa into consideration when interpreting their results. ⋯ This is particularly the case when the results of an investigation show a high level of observer agreement but a low value of Cohen's Kappa, or when differences in the overall assessment even lead to an increase in Kappa. Only knowledge of these characteristics makes a meaningful presentation and interpretation of the results of reliability studies possible. In the second part of this paper, practicable alternatives to the problem areas and paradoxical characteristics of Kappa presented here will be shown.
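To illustrate the first of these paradoxes with a hypothetical example (the figures below are illustrative only and not taken from the study): suppose two raters classify n = 100 cases as positive or negative, with 85 cases rated positive by both, 5 rated positive only by rater A, 10 rated positive only by rater B, and 0 rated negative by both. Using the standard definition of chance-corrected agreement,

\[
\kappa = \frac{p_o - p_e}{1 - p_e}, \qquad
p_o = \frac{85 + 0}{100} = 0.85, \qquad
p_e = \frac{90}{100}\cdot\frac{95}{100} + \frac{10}{100}\cdot\frac{5}{100} = 0.86,
\]
\[
\kappa = \frac{0.85 - 0.86}{1 - 0.86} \approx -0.07 .
\]

Despite an observed agreement of 85%, Kappa is close to zero (here even slightly negative), because the strongly skewed marginal distributions push the expected chance agreement p_e above the observed agreement p_o.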