Models for assessing the extent of agreement between raters using kappa coefficients
In medical studies the quality of assessment is of great importance, and it is typically characterized by reliability. A fundamental aspect of interobserver reliability is evaluating the degree of agreement between independent judges examining the same objects. The paper evaluates selected interrater reliability measures and their interpretation. Cohen’s kappa and Scott’s coefficient are considered. We describe models and connections between coefficients defined for dichotomous and polytomous data....
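For orientation, the standard definitions of the two coefficients may be recalled (the notation below is illustrative and not necessarily that of the paper). Both are chance-corrected agreement measures of the form $\kappa = (p_o - p_e)/(1 - p_e)$, where $p_o$ is the observed proportion of agreement and $p_e$ the agreement expected by chance; Cohen's kappa estimates $p_e$ from the two raters' individual marginal distributions, whereas Scott's coefficient uses the pooled marginals:
\[
  \kappa \;=\; \frac{p_o - p_e}{1 - p_e},
  \qquad
  p_o \;=\; \sum_{i} p_{ii},
\]
\[
  \text{Cohen: } p_e \;=\; \sum_{i} p_{i\cdot}\, p_{\cdot i},
  \qquad
  \text{Scott: } p_e \;=\; \sum_{i} \left(\frac{p_{i\cdot} + p_{\cdot i}}{2}\right)^{2},
\]
where $p_{ij}$ is the proportion of objects placed in category $i$ by the first rater and category $j$ by the second, and $p_{i\cdot}$, $p_{\cdot i}$ are the corresponding row and column marginals.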