iaa.kappa {UCS}		R Documentation

Inter-Annotator Agreement: Cohen's Kappa (iaa)

Description

Computes the kappa statistic (Cohen, 1960) as a measure of inter-annotator agreement on a binary variable between two annotators, together with a confidence interval according to Fleiss, Cohen & Everitt (1969). The data can be given either as a 2-by-2 contingency table in matrix form or as two parallel annotation vectors.
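
For orientation, the following is a minimal sketch of how the kappa point estimate is defined for a 2-by-2 table of agreement counts (the counts below are hypothetical and for illustration only; iaa.kappa additionally estimates the standard deviation and the confidence interval, which are not reproduced here):

## hypothetical agreement counts: rows = annotator 1, columns = annotator 2
tab <- matrix(c(40, 10,
                 5, 945), nrow=2, byrow=TRUE)
n <- sum(tab)
p.obs <- sum(diag(tab)) / n                      # observed agreement
p.exp <- sum(rowSums(tab) * colSums(tab)) / n^2  # agreement expected by chance
(p.obs - p.exp) / (1 - p.exp)                    # Cohen's kappa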

Usage

iaa.kappa(x, y=NULL, conf.level=0.95)

Arguments

x either a 2-by-2 contingency table in matrix form, or a vector of logicals (see the sketch below for the matrix form)
y a vector of logicals; ignored if x is a matrix
conf.level confidence level of the returned confidence interval (default: 0.95, corresponding to 95% confidence)
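
The same comparison can thus be run either on raw annotation vectors or on a pre-tabulated contingency table, e.g. (a hypothetical matrix of counts; the layout assumed here is annotator 1 in rows and annotator 2 in columns):

tab <- matrix(c(80, 12,
                 9, 899), nrow=2, byrow=TRUE)   # hypothetical counts
iaa.kappa(tab)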

Value

A data frame with a single row and the following variables:
kappa sample estimate of the kappa statistic
sd sample estimate of the standard deviation of the kappa statistic
kappa.min, kappa.max two-sided asymptotic confidence interval for the “true” kappa, based on a normal approximation with the estimated variance
A single-row data frame was chosen as the return structure because it prints nicely and because results from different comparisons can easily be combined with rbind, as shown below.
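
As a minimal sketch (assuming the UCS/R library is loaded so that iaa.kappa is available), several comparisons can be stacked into one table:

x1 <- runif(500) < 0.1; y1 <- runif(500) < 0.1   # two random annotators
x2 <- runif(500) < 0.2; y2 <- runif(500) < 0.2
rbind(iaa.kappa(x1, y1), iaa.kappa(x2, y2))      # one row per comparison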

References

Cohen, Jacob (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.

Fleiss, Joseph L.; Cohen, Jacob; Everitt, B. S. (1969). Large sample standard errors of kappa and weighted kappa. Psychological Bulletin, 72(5), 323–327.

See Also

iaa.pta

Examples

## kappa should be close to zero for random codings
p <- 0.1			# proportion of true positives
x <- runif(1000) < p		# 1000 candidates annotated randomly
y <- runif(1000) < p
iaa.kappa(x, y)
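
## Conversely, kappa should be close to one when the two codings differ
## in only a few places (a small sketch continuing the random annotation
## x from above)
flip <- sample(1000, 20)    # disagree on 20 of the 1000 candidates
z <- x
z[flip] <- !z[flip]
iaa.kappa(x, z)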

[Package UCS version 0.5]