Medical education
-
Inter-rater agreement is essential in rating the clinical performance of doctors and other health professionals. The purpose of this study was to establish inter-rater agreement in categorising errors made by clinicians in the diagnostic process when using computerised decision support systems. ⋯ Raters can achieve good agreement in categorising errors provided they are given explicit scoring rules and do not rely solely upon clinical judgement. The kappa coefficient has limitations where the expected agreement between judges is high and variability is low. The use of two indices to assess agreement, analogous to test sensitivity and specificity, is recommended.
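The abstract does not define the two recommended indices, but indices "analogous to test sensitivity and specificity" are commonly the proportions of specific agreement (positive and negative agreement). The sketch below, with made-up counts for a hypothetical 2x2 cross-tabulation of two raters classifying cases as "error" or "no error", illustrates both these indices and the kappa limitation the abstract describes; it is not the study's own calculation.

```python
# Illustrative sketch: Cohen's kappa and the proportions of specific
# agreement for a hypothetical 2x2 cross-tabulation of two raters.
# Counts are invented for demonstration only.

def agreement_indices(a: int, b: int, c: int, d: int) -> dict:
    """a: both raters say error; b: rater 1 error, rater 2 no error;
    c: rater 1 no error, rater 2 error; d: both say no error."""
    n = a + b + c + d
    p_observed = (a + d) / n
    # Chance-expected agreement from each rater's marginal proportions.
    p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    kappa = (p_observed - p_expected) / (1 - p_expected)
    # Proportions of specific agreement, analogous to sensitivity and
    # specificity: agreement on "error" and agreement on "no error".
    p_pos = 2 * a / (2 * a + b + c)
    p_neg = 2 * d / (2 * d + b + c)
    return {"kappa": kappa, "p_pos": p_pos, "p_neg": p_neg}

# High expected agreement, low variability: both raters call almost every
# case "no error", so raw agreement is 94% yet kappa is only about 0.37,
# while the negative-agreement index stays high (about 0.97).
print(agreement_indices(a=2, b=3, c=3, d=92))
```

In this invented example the two specific-agreement indices make clear that raters agree well on the common "no error" category but poorly on the rare "error" category, information that a single kappa value obscures when variability is low.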