POLISH PSYCHOLOGICAL BULLETIN

1972 · Vol 17 (5) · p. 312 · Author(s): Josef Brožek

2003 · Vol 129 (4) · p. C2 · Author(s): No authorship indicated

2003 · Vol 129 (4) · p. 474 · Author(s): No authorship indicated

2003 · Vol 129 (3) · p. 338 · Author(s): No authorship indicated

2017 · Vol 5 (1) · p. 4 · Author(s): Sara Van Erp, Josine Verhagen, Raoul P. P. P. Grasman, Eric-Jan Wagenmakers

2002 · Vol 128 (6) · pp. 997-1004 · Author(s): Nancy Eisenberg, Marilyn S. Thompson, Susan Augir, Elizabeth Harris Stanley

Author(s): Daniel Klein

Despite its well-known weaknesses, researchers continue to choose the kappa coefficient (Cohen, 1960, Educational and Psychological Measurement 20: 37–46; Fleiss, 1971, Psychological Bulletin 76: 378–382) to quantify agreement among raters. Part of kappa's persistent popularity seems to arise from the lack of alternative agreement coefficients in statistical software packages such as Stata. In this article, I review Gwet's (2014, Handbook of Inter-Rater Reliability) recently developed framework of interrater agreement coefficients. This framework extends several agreement coefficients to handle any number of raters, any number of rating categories, any level of measurement, and missing values. I introduce the kappaetc command, which implements this framework in Stata.
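As a concrete illustration of the chance-correction idea behind these coefficients, the sketch below computes Cohen's kappa and Gwet's AC1 for two raters. This is a minimal Python reimplementation for exposition only, not the kappaetc command itself (which is written in Stata and also covers multiple raters, ordered categories, weighting, and missing values); the function names and the two-rater, nominal-scale restriction are my own simplifications.

    import numpy as np

    def cohen_kappa(r1, r2, categories):
        # Cohen's (1960) kappa: (p_o - p_e) / (1 - p_e), where chance
        # agreement p_e multiplies each rater's own marginal proportions.
        r1, r2 = np.asarray(r1), np.asarray(r2)
        p_o = np.mean(r1 == r2)  # observed agreement
        p_e = sum(np.mean(r1 == k) * np.mean(r2 == k) for k in categories)
        return (p_o - p_e) / (1 - p_e)

    def gwet_ac1(r1, r2, categories):
        # Gwet's (2014) AC1: same (p_o - p_e) / (1 - p_e) form, but chance
        # agreement is (1/(q-1)) * sum_k pi_k * (1 - pi_k), where pi_k is
        # the proportion of all ratings (pooled over raters) in category k.
        r1, r2 = np.asarray(r1), np.asarray(r2)
        q = len(categories)
        p_o = np.mean(r1 == r2)
        pi = np.array([(np.mean(r1 == k) + np.mean(r2 == k)) / 2
                       for k in categories])
        p_e = np.sum(pi * (1 - pi)) / (q - 1)
        return (p_o - p_e) / (1 - p_e)

    # Example: two raters classify eight subjects into categories 1 and 2.
    r1 = [1, 2, 2, 1, 1, 2, 1, 2]
    r2 = [1, 2, 2, 1, 2, 2, 1, 1]
    print(cohen_kappa(r1, r2, [1, 2]))  # kappa
    print(gwet_ac1(r1, r2, [1, 2]))     # AC1

With balanced category prevalences, as in this toy example, the two statistics coincide; they diverge when one category dominates, which is exactly where kappa's well-known paradoxes arise and where Gwet's chance correction is designed to remain better behaved.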

