Models of assessing the extent of agreement between raters using kappa coefficients
Mathematica Applicanda (2007)
- Volume: 35, Issue: 49/08
- ISSN: 1730-2668
How to cite
Joanna Jarosz-Nowak. "Models of assessing the extent of agreement between raters using kappa coefficients." Mathematica Applicanda 35.49/08 (2007): null. <http://eudml.org/doc/292557>.
@article{JoannaJarosz2007,
abstract = {In medical studies, the quality of assessment is of great importance; typically it is characterized by reliability. The essence of interobserver reliability is to evaluate the degree of agreement between independent judges examining the same objects. The paper evaluates selected interrater reliability measures and their interpretation. Cohen’s kappa and Scott’s coefficient are considered. We describe models and connections between coefficients defined for dichotomous and polytomous data. We show that the above-mentioned estimators of kappa for classification into more than two categories are weighted averages of the kappas in binary models defined for each category separately.},
author = {Joanna Jarosz-Nowak},
journal = {Mathematica Applicanda},
keywords = {Agreement, Cohen’s kappa, Scott’s coefficient},
language = {eng},
number = {49/08},
pages = {null},
title = {Models of assessing the extent of agreement between raters using kappa coefficients},
url = {http://eudml.org/doc/292557},
volume = {35},
year = {2007},
}
TY - JOUR
AU - Joanna Jarosz-Nowak
TI - Models of assessing the extent of agreement between raters using kappa coefficients
JO - Mathematica Applicanda
PY - 2007
VL - 35
IS - 49/08
SP - null
AB - In medical studies, the quality of assessment is of great importance; typically it is characterized by reliability. The essence of interobserver reliability is to evaluate the degree of agreement between independent judges examining the same objects. The paper evaluates selected interrater reliability measures and their interpretation. Cohen’s kappa and Scott’s coefficient are considered. We describe models and connections between coefficients defined for dichotomous and polytomous data. We show that the above-mentioned estimators of kappa for classification into more than two categories are weighted averages of the kappas in binary models defined for each category separately.
LA - eng
KW - Agreement, Cohen’s kappa, Scott’s coefficient
UR - http://eudml.org/doc/292557
ER -
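The abstract's central claim — that the multi-category kappa estimator is a weighted average of the binary kappas obtained by collapsing each category against the rest — can be checked numerically. Below is a minimal sketch (not code from the paper; the table values are invented for illustration) that computes Cohen's kappa, kappa = (p_o - p_e) / (1 - p_e), from a two-rater contingency table, then collapses each category to a "k vs. rest" table and confirms that the overall kappa equals the weighted average of the per-category binary kappas, with weights proportional to each collapsed table's chance disagreement.

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa for a K x K contingency table of two raters."""
    p = np.asarray(table, dtype=float)
    p = p / p.sum()                        # cell proportions
    po = np.trace(p)                       # observed agreement
    pe = p.sum(axis=1) @ p.sum(axis=0)     # agreement expected by chance
    return (po - pe) / (1 - pe)

def binary_kappas(table):
    """Collapse each category k to a 2x2 'k vs. rest' table.

    Returns the binary kappa for each k and its weight, which is the
    chance disagreement of the collapsed table.
    """
    p = np.asarray(table, dtype=float)
    p = p / p.sum()
    rows, cols = p.sum(axis=1), p.sum(axis=0)  # marginals of each rater
    kappas, weights = [], []
    for k in range(p.shape[0]):
        # numerator/denominator of the 2x2 kappa for category k
        num = 2 * (p[k, k] - rows[k] * cols[k])
        den = rows[k] + cols[k] - 2 * rows[k] * cols[k]
        kappas.append(num / den)
        weights.append(den)
    return np.array(kappas), np.array(weights)

# Hypothetical counts: rows = rater A's category, columns = rater B's
table = [[30, 5, 2],
         [4, 25, 6],
         [1, 7, 20]]

k_all = cohens_kappa(table)
k_bin, w = binary_kappas(table)

# Multi-category kappa equals the weighted average of the binary kappas.
assert abs(k_all - np.average(k_bin, weights=w)) < 1e-12
```

The identity holds because summing the binary numerators over k gives 2(p_o - p_e) and summing the denominators gives 2(1 - p_e), so the ratio of sums reproduces the multi-category kappa exactly.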