CMAJ. Jul 5, 2005; 173(1): 15–16.
PMCID: PMC1167791
Kappa statistic
Christopher R. Carpenter
Assistant Professor, Division of Emergency Medicine, Washington University School of Medicine, St. Louis, Mo.
I would like to thank CMAJ and the Evidence-Based Medicine Teaching Tips Working Group for the teaching tips series, which is wonderfully useful to those of us who teach these basic concepts to residents and other physicians. Part 3 in the series, discussing the kappa statistic,1 raised a couple of points on which I would like to offer further comment, drawing on my own teaching experience.
Whereas Thomas McGinn and associates1 suggest that students construct 2 × 2 tables and calculate kappa from successively higher rates of positive calls (see tip 3 in the article), I have instead given students the raw data from actual small studies (with fewer than 25 subjects) and then asked them to construct the 2 × 2 table and calculate actual agreement and chance agreement using the multiplication rule.2 The multiplication rule can be used to calculate joint probability if 2 different events are independent of one another. Most situations that consumers of the medical literature will encounter involve analyzing the numbers provided in various data forms and then determining whether the level of agreement is both acceptable and consistent with the data presented. Rarely will readers need to assign a level of agreement and calculate the kappa statistic. Therefore, the method described here might be valuable as another means to calculate chance agreement and kappa on the basis of more realistic values.
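To illustrate the multiplication rule itself, here is a minimal sketch in Python using hypothetical marginal rates (not the data from this letter): if 2 raters decide independently, the chance that both call the same case positive is simply the product of their individual rates of positive calls.

# Multiplication rule for independent events: a small sketch with
# hypothetical rates (the worked example below uses actual data).
p_rater1_positive = 0.30  # rater 1 calls "positive" on 30% of cases
p_rater2_positive = 0.25  # rater 2 calls "positive" on 25% of cases

# If the two raters decided independently, the joint probability of
# both calling a case "positive" is the product of the marginal rates.
p_both_positive = p_rater1_positive * p_rater2_positive
print(p_both_positive)  # 0.075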
For example, our institution recently implemented the emergency severity index3 (ESI) for nursing triage. Given a random sample of 25 patients from month 1 after implementation and using the nurse administrator's ESI score as the second assessment, I asked residents to compare high-risk and lower-risk triage scores between the triage nurse and the nurse administrator. The resulting 2 × 2 table is completed as shown in Fig. 1, and calculation of chance agreement and kappa proceeds as follows:
Fig. 1: Agreement table for triage nurse and nurse administrator at the author's hospital, using the emergency severity index3 for nursing triage.
kappa = (observed agreement – expected agreement)/(1 – expected agreement)

High-risk assessments by nurse administrator: 11/25 = 0.44
High-risk assessments by triage nurse: 10/25 = 0.40
Lower-risk assessments by nurse administrator: 14/25 = 0.56
Lower-risk assessments by triage nurse: 15/25 = 0.60

Observed agreement = (9 + 13)/25 = 0.88

Expected agreement = (chance of agreement on a high-risk assessment) + (chance of agreement on a lower-risk assessment)
Chance of agreement on a high-risk assessment = 0.44 × 0.40 = 0.176
Chance of agreement on a lower-risk assessment = 0.56 × 0.60 = 0.336
Expected agreement by chance alone = 0.176 + 0.336 = 0.512

kappa = (0.88 – 0.512)/(1 – 0.512) = 0.368/0.488 = 0.75
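The same calculation is easily scripted. The following is a minimal sketch in plain Python; the function name kappa_2x2 and the table layout are my own illustrative choices, and the off-diagonal cell counts (1 and 2) are inferred from the marginal totals and the diagonal counts given above.

# 2 x 2 agreement table from Fig. 1.
# Rows: triage nurse (high-risk, lower-risk);
# columns: nurse administrator (high-risk, lower-risk).
table = [[9, 1],    # nurse high-risk:  9 admin high-risk, 1 admin lower-risk
         [2, 13]]   # nurse lower-risk: 2 admin high-risk, 13 admin lower-risk

def kappa_2x2(t):
    n = sum(sum(row) for row in t)                        # 25 patients
    observed = (t[0][0] + t[1][1]) / n                    # (9 + 13)/25 = 0.88
    row_p = [sum(row) / n for row in t]                   # triage nurse marginals
    col_p = [sum(col) / n for col in zip(*t)]             # administrator marginals
    expected = row_p[0] * col_p[0] + row_p[1] * col_p[1]  # 0.176 + 0.336 = 0.512
    return (observed - expected) / (1 - expected)

print(round(kappa_2x2(table), 2))  # 0.75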
Table 1 in both the teachers'1 and learners'4 versions of this article references Maclure and Willett5 as a source of the qualitative classification of kappa. My own review of that paper did not reveal any attempt to qualitatively assess kappa, but at least 3 other sources have done so.6,7,8 In my experience the most widely used classification for kappa is the last of these,8 which proposed the guidelines for interpreting kappa values as outlined in Table 1 in this letter.
Table 1: Guidelines for interpreting kappa values (from Byrt8).
1. McGinn T, Wyer PC, Newman TB, Keitz S, Leipzig R, Guyatt G, for the Evidence-Based Medicine Teaching Tips Working Group. Tips for teachers of evidence-based medicine: 3. Understanding and calculating kappa. CMAJ 2004;171(11): Online-1 to Online-9. Available: www.cmaj.ca/cgi/data/171/11/1369/DC1/1 (accessed 2005 Jan 10).
2. Dawson B, Trapp RG. Basic and clinical biostatistics. 3rd ed. New York: McGraw-Hill; 2001. p. 66-7, 115-7.
3. Wuerz RC, Milne LW, Eitel DR, Travers D, Gilboy N. Reliability and validity of a new five-level triage instrument. Acad Emerg Med 2000;7:236-42.
4. McGinn T, Wyer PC, Newman TB, Keitz S, Leipzig R, Guyatt G, for the Evidence-Based Medicine Teaching Tips Working Group. Tips for learners of evidence-based medicine: 3. Measures of observer variability (kappa statistic). CMAJ 2004;171(11):1369-73.
5. Maclure M, Willett WC. Misinterpretation and misuse of the kappa statistic. Am J Epidemiol 1987;126:161-9.
6. Altman DG. Practical statistics for medical research. London: Chapman and Hall; 1991.
7. Fleiss JL. Statistical methods for rates and proportions. 2nd ed. New York: John Wiley and Sons; 1981. p. 218.
8. Byrt T. How good is that agreement? [letter]. Epidemiology 1996;7:561.