Statistics in Medicine · Apr 1998
Review: Kappa-like indices of observer agreement viewed from a latent class perspective.
It is common practice to assess the consistency of diagnostic ratings in terms of 'agreement beyond chance'. To explore the interpretation of such a term, we consider relevant statistical techniques such as Cohen's kappa and log-linear models for agreement on nominal ratings. ⋯ As a result, it is shown that Cohen's kappa may be an inadequate and biased index of chance-corrected agreement in studies of intra-observer as well as inter-observer consistency. We suggest a more critical use and interpretation of measures that gauge observer reliability by the amount of agreement beyond chance.
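For reference, Cohen's kappa compares the observed proportion of agreement p_o with the proportion p_e expected if the two raters' classifications were independent, kappa = (p_o - p_e) / (1 - p_e). The sketch below is a minimal illustration of that standard definition only, not of the latent class analysis discussed in the paper; the function name and example counts are hypothetical.

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa for a square contingency table of two raters.

    table[i, j] = number of subjects assigned category i by rater 1
    and category j by rater 2.
    """
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_obs = np.trace(table) / n              # observed agreement (diagonal)
    row_marg = table.sum(axis=1) / n         # rater 1 marginal proportions
    col_marg = table.sum(axis=0) / n         # rater 2 marginal proportions
    p_exp = np.dot(row_marg, col_marg)       # agreement expected by chance
    return (p_obs - p_exp) / (1.0 - p_exp)

# Hypothetical example: two raters classifying 100 cases into two categories
ratings = [[40, 10],
           [5, 45]]
print(cohens_kappa(ratings))  # (0.85 - 0.50) / (1 - 0.50) = 0.70
```

The paper's point is that p_e computed from the raters' marginal distributions may misrepresent "chance" agreement when the ratings arise from an underlying latent class structure, which is why kappa can be biased in both intra- and inter-observer studies.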