Models of assessing the extent of agreement between raters using kappa coefficients

Joanna Jarosz-Nowak

Mathematica Applicanda (2007)

  • Volume: 35, Issue: 49/08
  • ISSN: 1730-2668

Abstract

In medical studies, the quality of an assessment is of great importance and is typically characterized by its reliability. The fundamental aim of interobserver reliability is to evaluate the degree of agreement between independent judges examining the same objects. This paper evaluates several interrater reliability measures and their interpretation; Cohen’s kappa and Scott’s coefficient are considered. We describe models of, and connections between, coefficients defined for dichotomous and polytomous data. We show that the above-mentioned estimators of kappa for classification into more than two categories are weighted averages of the kappas of binary models defined for each category separately.
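The weighted-average property stated in the abstract is easy to check numerically. The following is a minimal Python sketch, not taken from the paper: it uses the textbook definitions of Cohen’s kappa and Scott’s coefficient, collapses a K-category cross-classification into "category i vs. rest" 2x2 tables, and verifies that the multi-category Cohen’s kappa coincides with the weighted average of the per-category binary kappas when the weights are w_i = p_{i+} + p_{+i} - 2 p_{i+} p_{+i} (a classical choice for which the identity holds; the function names and the example table are illustrative assumptions, not the paper’s notation).

import numpy as np

def cohen_kappa(table):
    # Cohen's kappa for a K x K table of counts: table[i, j] = number of
    # objects placed in category i by rater 1 and category j by rater 2.
    p = table / table.sum()
    row, col = p.sum(axis=1), p.sum(axis=0)  # marginals p_{i+}, p_{+i}
    p_o = np.trace(p)                        # observed agreement
    p_e = row @ col                          # chance agreement (product of marginals)
    return (p_o - p_e) / (1.0 - p_e)

def scott_coefficient(table):
    # Scott's coefficient: chance agreement uses the averaged marginals,
    # i.e. both raters are assumed to share one common marginal distribution.
    p = table / table.sum()
    m = (p.sum(axis=1) + p.sum(axis=0)) / 2.0
    return (np.trace(p) - m @ m) / (1.0 - m @ m)

def binary_kappas(table):
    # Collapse to "category i vs. rest" and return each 2x2 kappa with its weight.
    p = table / table.sum()
    row, col = p.sum(axis=1), p.sum(axis=0)
    kappas, weights = [], []
    for i in range(len(p)):
        o_i = 1 - row[i] - col[i] + 2 * p[i, i]          # observed agreement, 2x2 table
        e_i = 1 - row[i] - col[i] + 2 * row[i] * col[i]  # chance agreement, 2x2 table
        kappas.append((o_i - e_i) / (1.0 - e_i))
        weights.append(row[i] + col[i] - 2 * row[i] * col[i])
    return np.array(kappas), np.array(weights)

# Hypothetical 3-category ratings of 65 objects by two raters.
t = np.array([[20.0, 5.0, 1.0],
              [4.0, 15.0, 3.0],
              [2.0, 3.0, 12.0]])

k, w = binary_kappas(t)
print(cohen_kappa(t))            # multi-category kappa
print((w * k).sum() / w.sum())   # weighted average of binary kappas: same value

Running the sketch prints the same value twice, illustrating the identity for this example table. Note that Scott’s coefficient differs from Cohen’s kappa only in the chance-agreement term: the product of the two raters’ marginals is replaced by the square of their average.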

How to cite


Joanna Jarosz-Nowak. "Models of assessing the extent of agreement between raters using kappa coefficients." Mathematica Applicanda 35.49/08 (2007): null. <http://eudml.org/doc/292557>.

@article{JoannaJarosz2007,
abstract = {In medical studies, the quality of an assessment is of great importance and is typically characterized by its reliability. The fundamental aim of interobserver reliability is to evaluate the degree of agreement between independent judges examining the same objects. This paper evaluates several interrater reliability measures and their interpretation; Cohen’s kappa and Scott’s coefficient are considered. We describe models of, and connections between, coefficients defined for dichotomous and polytomous data. We show that the above-mentioned estimators of kappa for classification into more than two categories are weighted averages of the kappas of binary models defined for each category separately.},
author = {Joanna Jarosz-Nowak},
journal = {Mathematica Applicanda},
keywords = {Agreement, Cohen’s kappa, Scott’s coefficient},
language = {eng},
number = {49/08},
pages = {null},
title = {Models of assessing the extent of agreement between raters using kappa coefficients},
url = {http://eudml.org/doc/292557},
volume = {35},
year = {2007},
}

TY - JOUR
AU - Joanna Jarosz-Nowak
TI - Models of assessing the extent of agreement between raters using kappa coefficients
JO - Mathematica Applicanda
PY - 2007
VL - 35
IS - 49/08
SP - null
AB - In medical studies, the quality of an assessment is of great importance and is typically characterized by its reliability. The fundamental aim of interobserver reliability is to evaluate the degree of agreement between independent judges examining the same objects. This paper evaluates several interrater reliability measures and their interpretation; Cohen’s kappa and Scott’s coefficient are considered. We describe models of, and connections between, coefficients defined for dichotomous and polytomous data. We show that the above-mentioned estimators of kappa for classification into more than two categories are weighted averages of the kappas of binary models defined for each category separately.
LA - eng
KW - Agreement, Cohen’s kappa, Scott’s coefficient
UR - http://eudml.org/doc/292557
ER -
