accuracy (also called calibration) - extent of agreement between assigned risks and outcome prevalences among
those assigned the same risk.
area under ROC curve (AUC) - see concordance
attribute diagram - plot of outcome prevalences versus assigned risks. If a model is perfectly calibrated, these
points lie on the diagonal line y=x.
bias - square root of the mean-squared difference between assigned and true risks.
Brier score - mean squared difference between outcomes and assigned risks.
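For concreteness, the Brier score definition above can be sketched in a few lines of Python (the function name is illustrative):

```python
def brier_score(outcomes, risks):
    """Mean squared difference between binary outcomes (0/1) and assigned risks."""
    return sum((y - p) ** 2 for y, p in zip(outcomes, risks)) / len(outcomes)

# Two events predicted with high risk, one non-event predicted with high risk:
# ((1-0.9)^2 + (1-0.8)^2 + (0-0.7)^2) / 3 = 0.18
brier_score([1, 1, 0], [0.9, 0.8, 0.7])
```

Lower values indicate better overall agreement between assigned risks and observed outcomes.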
calibration - see accuracy
concordance (also called AUC or C-statistic) - probability that the assigned risk of a randomly selected
individual who develops the outcome exceeds that of a randomly selected individual who does not.
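The pairwise definition of concordance translates directly into code. A minimal sketch (tied risks counted as one half, a common convention that the entry above does not specify):

```python
def concordance(outcomes, risks):
    """P(assigned risk of a random case exceeds that of a random non-case).

    Enumerates all case/non-case pairs; ties count 1/2 (an assumed convention).
    """
    cases = [r for y, r in zip(outcomes, risks) if y == 1]
    noncases = [r for y, r in zip(outcomes, risks) if y == 0]
    total = 0.0
    for rc in cases:
        for rn in noncases:
            total += 1.0 if rc > rn else 0.5 if rc == rn else 0.0
    return total / (len(cases) * len(noncases))

# 4 case/non-case pairs: three concordant, one tied -> 3.5 / 4 = 0.875
concordance([1, 1, 0, 0], [0.9, 0.4, 0.4, 0.1])
```

A value of 0.5 corresponds to risks that carry no discriminating information; 1.0 means every case outranks every non-case.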
cross-classified (CC) model - obtained by cross-classifying individuals into cells according to the risks assigned
by each of two models, and assigning each member of a cell the outcome prevalence among those in that cell.
deterministic outcome – one that occurs with probability one in some members of a population and
probability zero in the remaining members.
Hosmer-Lemeshow (HL) statistic - standardized sum of squared differences between assigned risks and
outcome prevalences. Under the null hypothesis that a model is perfectly calibrated, the statistic has an
approximate chi-squared distribution.
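As a sketch of one common form of the HL statistic (assuming individuals are already grouped, with the per-group standardization shown in the comments; the grouping scheme and standardization are assumptions, not specified by the entry above):

```python
def hosmer_lemeshow(group_sizes, observed_events, mean_assigned_risks):
    """Sum over risk groups of (observed - expected)^2 / (expected * (1 - p)),
    where expected = n * p uses the group's mean assigned risk p."""
    stat = 0.0
    for n, o, p in zip(group_sizes, observed_events, mean_assigned_risks):
        expected = n * p
        stat += (o - expected) ** 2 / (expected * (1 - p))
    return stat

# One group of 100 with mean assigned risk 0.1 and exactly 10 events: statistic = 0
hosmer_lemeshow([100], [10], [0.1])
```

Large values signal disagreement between assigned risks and outcome prevalences, i.e. poor calibration.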
integrated discrimination - the square of the PO correlation coefficient, which equals the fraction R2 of
outcome variance explained by the model [14] and the Yates slope [12].
integrated discrimination improvement (IDI) - difference in squared PO correlation coefficients between two
models when one is obtained from the other by including additional covariates.
model variance - variance of outcome prevalences across risk groups of the model.
net reclassification index (NRI) – probability that an expanded model correctly reclassifies a person's risk
(relative to outcome occurrence) minus the corresponding probability of an incorrect classification.
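The NRI definition can be sketched as follows for the category-free case, where any upward or downward move in assigned risk counts as a reclassification (the category-free interpretation is an assumption; the entry above does not fix the reclassification scheme):

```python
def nri(outcomes, old_risks, new_risks):
    """Among events, upward risk moves are correct; among non-events,
    downward moves are correct. NRI is correct minus incorrect, summed
    over the two outcome groups (category-free form, assumed here)."""
    up_e = down_e = n_e = up_ne = down_ne = n_ne = 0
    for y, old, new in zip(outcomes, old_risks, new_risks):
        if y == 1:
            n_e += 1
            up_e += new > old
            down_e += new < old
        else:
            n_ne += 1
            up_ne += new > old
            down_ne += new < old
    return (up_e - down_e) / n_e + (down_ne - up_ne) / n_ne

# Expanded model raises the event's risk and lowers the non-event's risk:
nri([1, 0], [0.2, 0.3], [0.5, 0.1])
```

The statistic ranges from -2 to 2 in this form; 0 means the expanded model reclassifies correctly no more often than incorrectly.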
outcome prevalence – expected proportion of individuals in a group who develop the outcome, equal to the
mean risk of individuals in the group.
perfect model – assigns each individual the risk that would result if we knew all risk-determining covariates
and their joint effects on outcome probability.
precision (also called discrimination, resolution) – extent to which a model assigns different risks to individuals
with substantially different true risks.
precision loss – difference between Bernoulli variance of outcomes and the model variance.
prevalence-outcome (PO) correlation coefficient - correlation between actual outcomes and outcome
prevalences in assigned risk groups of a model.
reclassification calibration statistic – Hosmer-Lemeshow test statistic applied to the risk groups determined by
cross-classifying individuals according to risks assigned by two models.
receiver operating characteristic (ROC) curve - plot of points (x(r),y(r)) as r varies from 0 to 1, where x(r) is
the probability that the model assigns risks exceeding r to individuals who do not develop the outcome, and y(r)
is the corresponding probability for those who develop the outcome.
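A single point of the ROC curve, computed directly from this definition (a minimal sketch; function and variable names are illustrative):

```python
def roc_point(outcomes, risks, r):
    """Return (x(r), y(r)): x(r) is the fraction of non-events with assigned
    risk exceeding r; y(r) is the corresponding fraction among events."""
    events = [p for y, p in zip(outcomes, risks) if y == 1]
    nonevents = [p for y, p in zip(outcomes, risks) if y == 0]
    x = sum(p > r for p in nonevents) / len(nonevents)
    y = sum(p > r for p in events) / len(events)
    return x, y

# At r = 0.35 both events (risks 0.9, 0.4) exceed r and neither non-event
# (risks 0.3, 0.1) does, so the point is (0, 1):
roc_point([1, 1, 0, 0], [0.9, 0.4, 0.3, 0.1], 0.35)
```

Sweeping r from 1 down to 0 traces the curve from (0,0) to (1,1); the area under it equals the concordance defined above.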