Ann Inst Stat Math. Author manuscript; available in PMC 2010 September 7.
Published in final edited form as: Ann Inst Stat Math. 2010 February 1; 62(1): 11–35. doi: 10.1007/s10463-009-0258-9
PMCID: PMC2934856; NIHMSID: NIHMS206154

Latent Class Analysis Variable Selection

Abstract

We propose a method for selecting variables in latent class analysis, which is the most common model-based clustering method for discrete data. The method assesses a variable’s usefulness for clustering by comparing two models, given the clustering variables already selected. In one model the variable contributes information about cluster allocation beyond that contained in the already selected variables, and in the other model it does not. A headlong search algorithm is used to explore the model space and select clustering variables. In simulated datasets we found that the method selected the correct clustering variables, and also led to improvements in classification performance and in the accuracy of the choice of the number of classes. In two real datasets our method discovered the same group structure with fewer variables; in one of these, a dataset from the International HapMap Project consisting of 639 single nucleotide polymorphisms (SNPs) from 210 members of different groups, the same group structure was recovered with a much smaller number of SNPs.

Keywords: Bayes factor, BIC, Categorical data, Feature Selection, Model-based clustering, Single nucleotide polymorphism (SNP)

1 Introduction

Latent class analysis is used to discover groupings in multivariate categorical data. It models the data as a finite mixture of distributions, each one corresponding to a class (or cluster or group). Because of the underlying statistical model it is possible to determine the number of classes using model selection methods. But the modeling framework does not currently address the selection of the variables to be used; typically all variables are used in the model.

Selecting variables for latent class analysis can be desirable for several reasons. It can help interpretability of the model, and it can also make it possible to fit a model with a larger number of classes than would be possible with all the variables, for identifiability reasons. In general, removing unnecessary variables and parameters can also improve classification performance and the precision of parameter estimates.

In this paper we propose a method for selecting the variables to be used for clustering in latent class analysis. This is based on the method of Raftery and Dean (2006) for variable selection in model-based clustering of continuous variables. The method assesses a variable’s usefulness for clustering by comparing two models, given the clustering variables already selected. In one model the variable contributes information about cluster allocation beyond that contained in the already selected variables, and in the other model it does not. We then present a new search algorithm, based on Badsberg (1992), for exploring the space of possible models. The resulting method selects both the variables and the number of classes in the model.

In Section 2 we review some aspects of latent class analysis and in Section 3 we describe our variable selection methodology. In Section 4 we give results from simulated data and in Section 5 we give results for two real datasets, including one with a large number of variables and a much smaller number of data points. Issues arising with the method are discussed in Section 6.

2 Latent Class Analysis

2.1 Latent Class Analysis Model

Latent class analysis was proposed by Lazarsfeld (1950a,b) and Lazarsfeld and Henry (1968) and can be viewed as a special case of model-based clustering, for multivariate discrete data. Model-based clustering assumes that each observation comes from one of a number of classes, groups or subpopulations, and models each with its own probability distribution (Wolfe 1963; McLachlan and Peel 2000; Fraley and Raftery 2002). The overall population thus follows a finite mixture model, namely

\[ x \sim \sum_{g=1}^{G} \pi_g f_g(x), \]

where \(f_g\) is the density for group \(g\), \(G\) is the number of groups, \(0 < \pi_g < 1\) for all \(g\), and \(\sum_{g=1}^{G} \pi_g = 1\). Often, in practice, the \(f_g\) are from the same parametric family (as is the case in latent class analysis) and we can write the overall density as:

\[ x \sim \sum_{g=1}^{G} \pi_g f(x \mid \theta_g), \]

where θg is the set of parameters for the gth group.

In latent class analysis, the variables are usually assumed to be independent given knowledge of the group an observation came from, an assumption called local independence. Each variable within each group is then modeled with a multinomial density. The general density of a single variable x (with categories 1,…,d) given that it is in group g is then

\[ x \mid g \sim \prod_{j=1}^{d} p_{jg}^{1\{x=j\}}, \]

where 1{x = j} is the indicator function equal to 1 if the observation of the variable takes value j and 0 otherwise, pjg is the probability of the variable taking value j in group g, and d is the number of possible values or categories the variable can take.

Since we are assuming conditional independence, if we have k variables, their joint group density can be written as a product of their individual group densities. If we have x = (x1,…, xk), we can write the joint group density as:

\[ x \mid g \sim \prod_{i=1}^{k} \prod_{j=1}^{d_i} p_{ijg}^{1\{x_i=j\}}, \]

where 1{xi = j} is the indicator function equal to 1 if the observation of the ith variable takes value j and 0 otherwise, pijg is the probability of variable i taking value j in group g and di is the number of possible values or categories the ith variable can take. The overall density is then a weighted sum of these individual product densities, namely

\[ x \sim \sum_{g=1}^{G} \left( \pi_g \prod_{i=1}^{k} \prod_{j=1}^{d_i} p_{ijg}^{1\{x_i=j\}} \right), \]

where \(0 < \pi_g < 1\) for all \(g\) and \(\sum_{g=1}^{G} \pi_g = 1\).

The model parameters {pijg, πg : i = 1,…, k, j = 1,…, di, g = 1,…, G} can be estimated from the data (for a fixed value of G) by maximum likelihood, using the EM algorithm, the Newton-Raphson algorithm, or a hybrid of the two. These algorithms require starting values, which are usually randomly generated. Because the algorithms are not guaranteed to find a global maximum and are fairly dependent on good starting values, it is routine to generate a number of random starting values and use the best resulting solution. In Appendix B we present an adjusted method that is useful when an inordinately large number of starting values would be needed to get good estimates of the latent class models and G > 2.
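To make the estimation concrete, the following is a minimal EM sketch in Python for fitting this model by maximum likelihood with random restarts. It is illustrative only: the function name fit_lca, the smoothing constant, and the defaults are our own choices, not the authors' implementation, and the data X is assumed to be an n × k integer array with categories coded 0,…, di − 1.

```python
import numpy as np

def fit_lca(X, G, n_starts=5, n_iter=200, tol=1e-6, rng=None):
    """EM for a G-class latent class model; returns (loglik, pi, probs, z)."""
    rng = np.random.default_rng(rng)
    n, k = X.shape
    d = X.max(axis=0) + 1                          # categories per variable
    best = (-np.inf, None, None, None)
    for _ in range(n_starts):                      # random restarts
        z = rng.dirichlet(np.ones(G), size=n)      # random soft memberships
        prev = -np.inf
        for _ in range(n_iter):
            # M-step: mixing proportions and per-class category probabilities
            pi = z.mean(axis=0)
            probs = []
            for v in range(k):
                counts = np.vstack([z[X[:, v] == j].sum(axis=0) + 1e-10
                                    for j in range(d[v])]).T   # (G, d_v)
                probs.append(counts / counts.sum(axis=1, keepdims=True))
            # E-step: log pi_g + sum_v log p(x_v | g) under local independence
            logp = np.log(pi) + sum(np.log(p[:, X[:, v]]).T
                                    for v, p in enumerate(probs))
            m = logp.max(axis=1, keepdims=True)    # log-sum-exp for stability
            w = np.exp(logp - m)
            loglik = float((m.ravel() + np.log(w.sum(axis=1))).sum())
            z = w / w.sum(axis=1, keepdims=True)
            if loglik - prev < tol:                # stop once EM has converged
                break
            prev = loglik
        if loglik > best[0]:                       # keep the best restart
            best = (loglik, pi, probs, z)
    return best
```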

Goodman (1974) discussed the issue of checking whether a latent class model with a certain number of classes was identifiable for a given number of variables. A necessary condition for identifiability when there are G classes and k variables with numbers of categories d = (d1,…, dk) is

\[ \prod_{i=1}^{k} d_i > \left( \sum_{i=1}^{k} d_i - k + 1 \right) \times G. \]

This basically amounts to checking that there are enough pieces of information (or cell counts or pattern combinations) to estimate the number of parameters in the model. However, in practice, not all possible pattern combinations are observed (some or many cell counts may be zero) and so the actual information available may be less. When selecting the number of latent classes in the data, we consider only numbers of classes for which this necessary condition is satisfied.
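As an illustration, a small helper for this necessary condition might look as follows. This is a sketch: the function name and interface are ours, and the condition is necessary, not sufficient.

```python
import numpy as np

def max_identifiable_classes(d):
    """Largest G with prod(d_i) > (sum(d_i) - k + 1) * G, or 1 if none."""
    d = np.asarray(d)
    cells = int(np.prod(d))                  # distinct response patterns
    per_class = int(d.sum()) - len(d) + 1    # growth in parameter count per class
    return max((cells - 1) // per_class, 1)

print(max_identifiable_classes([2] * 13))    # e.g. thirteen binary variables
```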

For reviews of latent class analysis, see Clogg (1981), McCutcheon (1987), Clogg (1995) and Hagenaars and McCutcheon (2002).

2.2 Selecting the number of latent classes

Each different value of G, the number of latent classes, defines a different model for the data. A method is needed to select the number of latent classes present in the data. Since a statistical model for the data is used, model selection techniques can be applied to this question.

In order to choose the best number of classes for the data we need to choose the best model (and the related number of classes). Bayes factors (Kass and Raftery 1995) are used to compare these models.

The Bayes factor for comparing model Mi versus model Mj is equal to the ratio of the posterior odds for Mi versus Mj to the prior odds for Mi versus Mj. This reduces to the posterior odds when the prior model probabilities are equal. The general form for the Bayes factor is:

Bij=p(Y|Mi)p(Y|Mj),

where p(Y | Mi) is known as the integrated likelihood of model Mi (given data Y). It is called the integrated likelihood because it is obtained by integrating over all the model parameters, namely the mixture proportions and the group variable probabilities. Unfortunately the integrated likelihood is difficult to compute (it has no closed form) and some form of approximation is needed for calculating Bayes factors in practice.

In our approximation we use the Bayesian information criterion (BIC), which is very simple to compute. The BIC is defined by

\[ \mathrm{BIC} = 2 \times \log(\text{maximized likelihood}) - (\text{no. of parameters}) \times \log(n), \tag{1} \]

where n is the number of observations.

Twice the logarithm of the Bayes factor is approximately equal to the difference between the BIC values for the two models being compared. We choose the number of latent classes by recognizing that each different number of classes defines a model, which can then be compared to others using BIC. Keribin (1998) showed BIC to be consistent for the choice of the number of components in a mixture model under certain conditions, when all variables are relevant to the grouping. A rule of thumb for differences in BIC values is that a difference of less than 2 is viewed as barely worth mentioning, while a difference greater than 10 is seen as constituting strong evidence (Kass and Raftery 1995).
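In code, BIC for a fitted latent class model reduces to a one-liner once the free parameters are counted: (G − 1) mixing proportions plus, for each class, (di − 1) category probabilities per variable. A sketch under our own naming, assuming no constrained probabilities:

```python
import numpy as np

def lca_n_params(d, G):
    """(G - 1) mixing proportions plus G * sum_i (d_i - 1) probabilities."""
    return (G - 1) + G * int(sum(di - 1 for di in d))

def bic(loglik, n_params, n):
    """BIC as in (1); with this sign convention, larger is better."""
    return 2.0 * loglik - n_params * np.log(n)
```

Using the fit_lca sketch above, one would fit each identifiable G, compute its BIC, and keep the G with the largest value, treating differences above 10 as strong evidence.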

3 Variable Selection in Latent Class Analysis

3.1 Variable Selection Method

At any stage in the procedure we can partition the collection of variables into three sets: Y(clust), Y(?) and Y(other), where:

  • Y(clust) is the set of variables already selected as useful for clustering,
  • Y(?) is the variable(s) being considered for inclusion into/exclusion from Y(clust),
  • Y(other) is the set of all other variables.

Given this partition and the (unknown) clustering memberships z we can recast the question of the usefulness of Y(?) for clustering as a model selection question. The question becomes one of choosing between two different models, M1 which assumes that Y(?) is not useful for clustering, and M2 which assumes that it is.

The two models are specified as follows:

\[
\begin{aligned}
M_1:\quad p(Y \mid z) &= p(Y^{(clust)}, Y^{(?)}, Y^{(other)} \mid z) \\
&= p(Y^{(other)} \mid Y^{(?)}, Y^{(clust)})\, p(Y^{(?)})\, p(Y^{(clust)} \mid z) \\
M_2:\quad p(Y \mid z) &= p(Y^{(clust)}, Y^{(?)}, Y^{(other)} \mid z) \\
&= p(Y^{(other)} \mid Y^{(?)}, Y^{(clust)})\, p(Y^{(?)}, Y^{(clust)} \mid z) \\
&= p(Y^{(other)} \mid Y^{(?)}, Y^{(clust)})\, p(Y^{(?)} \mid z)\, p(Y^{(clust)} \mid z),
\end{aligned}
\tag{2}
\]

where z is the (unobserved) set of cluster memberships. Model M1 specifies that, given Y(clust), Y(?) is independent of the cluster memberships (defined by the unobserved variables z), that is, Y(?) gives no further information about the clustering. Model M2 implies that Y(?) does provide information about clustering membership, beyond that given just by Y(clust). The difference between the assumptions underlying the two models is illustrated in Figure 1, where arrows indicate dependency.

Figure 1
Graphical Representation of Models M1 and M2 for Latent Class Variable Selection. In model M1, the candidate set of additional clustering variables, Y(?), is independent of the cluster memberships, z, given the variables Y(clust) already in the model. ...

We assume that the remaining variables Y(other) are conditionally independent of the clustering given Y(clust) and Y(?) and belong to the same parametric family in both models.

This basically follows the approach used in Raftery and Dean (2006) for model-based clustering with continuous data and Gaussian clusters. One difference is that conditional independence of the variables was not assumed there, so that instead of p(Y(?)) in model M1 we had p(Y(?) | Y(clust)); that is, the assumption in model M1 there was only that, given the information in Y(clust), Y(?) had no additional clustering information, whereas here we assume full independence. Note that, unlike Figure 1 in Raftery and Dean (2006), there are no lines between the subsets of variables Y(clust) and Y(?) in our Figure 1, due to the conditional independence assumption.

Models M1 and M2 are compared via an approximation to the Bayes factor which allows the high-dimensional p(Y(other) | Y(clust), Y(?)) to cancel from the ratio. The Bayes factor, B12, for M1 against M2 based on the data Y is given by

\[ B_{12} = p(Y \mid M_1) / p(Y \mid M_2), \]

where p(Y | Mk) is the integrated likelihood of model Mk (k = 1, 2), namely

\[ p(Y \mid M_k) = \int p(Y \mid \theta_k, M_k)\, p(\theta_k \mid M_k)\, d\theta_k. \tag{3} \]

In (3), θk is the vector-valued parameter of model Mk, and p(θk | Mk) is its prior distribution (Kass and Raftery 1995).

Let us now consider the integrated likelihood of model M1, p(Y | M1) = p(Y(clust), Y(?), Y(other) | M1). From (2), the model M1 is specified by three probability distributions: the latent class model that specifies p(Y(clust) | θ1, M1), and the distributions p(Y(?) | θ1, M1) and p(Y(other) | Y(?), Y(clust), θ1, M1). We denote the parameter vectors that specify these three probability distributions by θ11, θ12, and θ13, and we assume that their prior distributions are independent. Then the integrated likelihood itself factors as follows:

\[ p(Y \mid M_1) = p(Y^{(other)} \mid Y^{(?)}, Y^{(clust)}, M_1)\, p(Y^{(?)} \mid M_1)\, p(Y^{(clust)} \mid M_1), \tag{4} \]

where

\[ p(Y^{(other)} \mid Y^{(?)}, Y^{(clust)}, M_1) = \int p(Y^{(other)} \mid Y^{(?)}, Y^{(clust)}, \theta_{13}, M_1)\, p(\theta_{13} \mid M_1)\, d\theta_{13}. \]

Similar results hold for p(Y(?)|M1) and p(Y(clust)|M1). Similarly, we obtain

\[ p(Y \mid M_2) = p(Y^{(other)} \mid Y^{(?)}, Y^{(clust)}, M_2)\, p(Y^{(?)}, Y^{(clust)} \mid M_2), \tag{5} \]

where p(Y(?), Y(clust)|M2) is the integrated likelihood for the latent class model for (Y(?), Y(clust)).

The prior distribution of the parameter, θ13, is assumed to be the same under M1 as under M2. It follows that

\[ p(Y^{(other)} \mid Y^{(?)}, Y^{(clust)}, M_2) = p(Y^{(other)} \mid Y^{(?)}, Y^{(clust)}, M_1). \]

We thus have

\[ B_{12} = \frac{p(Y^{(?)} \mid M_1)\, p(Y^{(clust)} \mid M_1)}{p(Y^{(?)}, Y^{(clust)} \mid M_2)}, \tag{6} \]

which has been greatly simplified by the cancellation of the factors involving the potentially high-dimensional Y(other). The integrated likelihoods in (6) are still hard to evaluate analytically, and so we approximate them using the BIC approximation of (1).
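A sketch of this BIC approximation to (6), building on the fit_lca, bic and lca_n_params sketches above. The helpers select_bic and bic_difference are our own illustrative names, clust_idx is assumed to be a list of column indices, and the range of G is assumed identifiable for both variable sets.

```python
import numpy as np

def select_bic(Xsub, G_max):
    """Best BIC over G = 2..G_max for a latent class model on Xsub."""
    d = Xsub.max(axis=0) + 1
    return max(bic(fit_lca(Xsub, G)[0], lca_n_params(d, G), len(Xsub))
               for G in range(2, G_max + 1))

def bic_difference(X, clust_idx, cand_idx, G_max):
    """Approximate BIC(M2) - BIC(M1); positive values favor including cand_idx."""
    n = len(X)
    # M2: the candidate variable clusters jointly with Y(clust)
    bic_m2 = select_bic(X[:, clust_idx + [cand_idx]], G_max)
    # M1: Y(clust) clusters; the candidate is a single overall multinomial
    counts = np.bincount(X[:, cand_idx])
    loglik1 = float((counts * np.log(np.clip(counts, 1, None) / n)).sum())
    bic_m1 = select_bic(X[:, clust_idx], G_max) + bic(loglik1, len(counts) - 1, n)
    return bic_m2 - bic_m1
```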

3.2 Headlong Search Algorithm

Given these models we need to find a method for creating partitions of the variables at each step. Initially we need enough variables to start Y(clust) so that a latent class model for G > 1 can be identified. If a latent class model on the set of all variables is identifiable for G > 1, we choose the largest number of classes that can be identified, and we then estimate the model. For each category of each variable, we then calculate the variance of its probability across groups. For each variable, we add up these variances and rank the variables according to this sum. The rationale is that variables with high values of this sum have high between-group variation in probability, and hence may be more useful for clustering.

Given this ranking we choose the top k* variables, where k* is the smallest number of variables that allows a latent class model with G > 1 to be identified. This is our starting Y(clust). The remaining variables keep this variability-based ordering, which determines the order in which they are proposed in the headlong algorithm.
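A sketch of this ranking, using the probs list returned by the fit_lca sketch above (each element a G × di array of estimated category probabilities):

```python
import numpy as np

def rank_variables(probs):
    """Order variables by V(i), the sum over categories of the across-class
    variance of the estimated category probabilities (decreasing order)."""
    scores = np.array([p.var(axis=0).sum() for p in probs])
    return np.argsort(scores)[::-1]   # the top-ranked variables seed Y(clust)
```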

If the above strategy is not possible, we instead proceed as follows. We calculate the minimum number of variables needed for identification of a latent class model with G > 1. We then select a number of random subsets each with this number of variables. Then for the initial Y(clust) we choose the variable set that gives the greatest overall average variance of categories’ probabilities across the groups (given the best latent class model identified). If the minimum number of variables is small enough, we enumerate all possible subsets to choose the best initial Y(clust), instead of sampling.

Once we have an initial set of clustering variables, Y(clust), we can proceed with the inclusion and exclusion steps of the headlong algorithm.

First we must define the constants upper and lower. The constant upper is the threshold above which a difference in BIC between models M2 and M1 results in a variable being included in Y(clust), and below which it results in the variable being excluded from Y(clust). The constant lower is the threshold below which the BIC difference results in a variable being removed from consideration for the rest of the procedure. A natural value for upper is 0: any positive BIC difference between models M2 and M1 is taken as evidence of a variable’s usefulness for clustering, and any negative difference as evidence of its lack of usefulness. A difference below lower is taken to indicate that a variable is unlikely ever to be useful as a clustering variable, and it is no longer even checked. In general a large negative number such as −100 (which by our rule of thumb constitutes strong evidence against) is a sensible value for lower.

  • Inclusion Step: Propose each variable in Y(other) singly in turn for Y(?). Calculate the difference in BIC for models M2 and M1 given the current Y(clust). If the variable’s BIC difference is:
    • between upper and lower, do not include in Y(clust) and return variable to the end of the list of variables in Y(other);
    • below lower, do not include in Y(clust) and remove variable from Y(other);
    • above upper, include variable in Y(clust) and stop inclusion step.
    If we reach the end of the list of variables in Y(other), the inclusion step is stopped.
  • Exclusion Step: Propose each variable in Y(clust) singly in turn for Y(?) (with the remaining variables of Y(clust), excluding the current Y(?), now playing the role of Y(clust) in M1 and M2). Calculate the difference in BIC for models M2 and M1. If the variable’s BIC difference is:
    • between upper and lower, exclude the variable from (the original) Y(clust) and return variable to the end of the list of variables in Y(other) and stop exclusion step;
    • below lower, exclude the variable from (the original) Y(clust) and from Y(other) and stop exclusion step;
    • above upper, do not exclude the variable from (the original) Y(clust).
    If we reach the end of the list of variables in Y(clust) the exclusion step is stopped.

If Y(clust) remains the same after consecutive inclusion and exclusion steps the headlong algorithm stops because it has converged.
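Putting the two steps together, a skeleton of the search might look as follows. This is a sketch reusing the bic_difference helper above; it does not guard the minimum identifiable size of Y(clust), which a full implementation would need.

```python
def headlong(X, clust, other, G_max, upper=0.0, lower=-100.0):
    """Alternate inclusion and exclusion steps until Y(clust) stabilizes."""
    changed = True
    while changed:
        changed = False
        for v in list(other):                       # inclusion step
            diff = bic_difference(X, clust, v, G_max)
            if diff > upper:                        # accept the first success
                clust.append(v); other.remove(v)
                changed = True
                break
            if diff < lower:                        # drop from consideration
                other.remove(v)
        for v in list(clust):                       # exclusion step
            rest = [u for u in clust if u != v]
            diff = bic_difference(X, rest, v, G_max)
            if diff < upper:                        # v no longer earns its keep
                clust.remove(v)
                if diff > lower:
                    other.append(v)                 # may be reconsidered later
                changed = True
                break
    return clust, other
```

Accepting the first variable that crosses upper, rather than the best one, is what distinguishes the headlong search from the greedy search of Raftery and Dean (2006).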

4 Simulated Data Results

4.1 Binary Simulated Data Example

Five hundred points were simulated from a two-class model satisfying the local independence assumption. There were four variables separating the classes (variables 1–4) and nine noise variables, i.e. variables that have the same probabilities in each class (variables 5–13). The actual model parameters are shown in Table 1.

Table 1
Model parameters used to generate binary data example

When we estimated the latent class model based on all thirteen variables, BIC selected a two-class model. Since we simulated the data and hence know the actual membership of each point, we can compare the correct classification with that produced by the model estimated using all the variables. The number of observations incorrectly classified by this model was 123. The number of observations that would be incorrectly classified by using the model with the actual parameter values is 110. The estimated parameters from the model with all variables are given in Table 2.

Table 2
Estimated parameters for the model involving all variables for the binary data example

The variables ordered according to the variability of their estimated probabilities (in decreasing order) are: 1, 3, 2, 4, 11, 7, 5, 6, 13, 9, 8, 10, 12. As expected, the first four variables are the clustering variables. We note that the difference between the true probabilities across groups is 0.4 for variable 1 and 0.3 for variables 2 to 4. Since variable 1 therefore gives better separation of the classes, we would expect it to be first in the list. The number of variables needed in order to estimate a latent class model with at least 2 classes is 3. So the starting clustering variables are {1, 3, 2}. The individual step results for the variable selection procedure starting with this set are given in Table 3.

Table 3
Results for each step of the variable selection procedure for the binary data example. Note that the third and fourth rows list the variables with the highest and lowest BIC differences respectively (i.e. all other variables were examined as well).

When clustering on the four selected variables only, BIC again chose 2 classes as the best fitting model. Comparing the classification of the observations based on the estimates from this model with the correct classification we found that 110 observations had been misclassified. This seems to be optimal given that this is also the error from classifying based on the actual model parameters. The estimated parameters from the model using only selected variables are given in Table 4.

Table 4
Estimated parameters for the model involving only the selected variables for the binary data example

4.2 Non-Binary Simulated Data Example

One thousand points were simulated from a three-class model satisfying the local independence assumption. There are four variables that separate the classes (variables 1–4) and six noise variables that have the same probabilities in each class (variables 5–10). The actual model parameters are given in Table 5 and Table 6. Several other sets of parameters were used to simulate similar datasets where the algorithm gave results similar to this example; results are omitted.

Table 5
Actual clustering parameters for the model with data from variables with different numbers of categories
Table 6
Actual non-clustering parameters for the model with data from variables with different numbers of categories

When we estimated the latent class model based on all ten variables, BIC selected a 2-class model; recall that the actual number of classes is 3. The difference between BIC values for a 2-class and a 3-class model based on all the variables was 68. Again, since we have simulated the data and know the true membership of each point, we can compare the partition given by the true classification with that produced by the 2-class model estimated using all the variables. A cross-tabulation of the true memberships versus the estimated memberships from the 2-class model with all variables is as follows:

                 Estimated class 1   Estimated class 2
True class 1           293                  25
True class 2            85                 324
True class 3           245                  28

The misclassification rate from the model with the actual parameters was 19.9%. If we match each true class to the best estimated class in the 2-class model with all variables we get a misclassification rate of 38.3%. If we assume that we knew the number of classes in advance to be 3 then the misclassification rate for the 3-class model with all variables is 25.7%. However this is knowledge that is not typically available in practice.

The variables ordered according to the variability of their estimated probabilities in the 2-class model (in decreasing order) were: 2, 3, 1, 4, 6, 9, 7, 10, 8, 5. The first four variables are the clustering variables. The number of variables needed in order to estimate a latent class model with at least 2 classes is 3. So the starting clustering variables were {2, 3, 1}. The individual step results for the variable selection procedure starting with this set are given in Table 7.

Table 7
Results for each step of the variable selection procedure for the data from variables with different numbers of categories. Note that the third and fourth rows list the variables with the highest and lowest BIC differences respectively (i.e. all others were examined as well).

When clustering on the four selected variables only, BIC this time chose 3 classes as the best fitting model. Comparing the partition obtained by classifying observations based on this model’s estimates with the correct partition, we found a misclassification rate of 23.8%. The estimated parameters from the model using only selected variables are given in Table 8.

Table 8
Estimated parameters for the model involving only the selected variables for the data from variables with different numbers of categories

The misclassification results are summarized in Table 9. In addition to the misclassification rate, we show the Rand Index (Rand 1971) and the Adjusted Rand Index (Hubert and Arabie 1985).

Table 9
Misclassification Summary for the data from variables with different numbers of categories. (c) indicates that the number of classes was constrained to this value in advance. Recall that the minimum misclassification rate from the model based on the actual parameter values was 19.9%.
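Both indices are available in scikit-learn, so comparing a true and an estimated partition reduces to a couple of calls. A toy illustration, assuming scikit-learn is installed; the partitions here are hypothetical, not the paper's data:

```python
from sklearn.metrics import adjusted_rand_score, rand_score

true_z = [0, 0, 0, 1, 1, 2]   # hypothetical true and estimated partitions
est_z = [0, 0, 1, 1, 1, 2]
print(rand_score(true_z, est_z), adjusted_rand_score(true_z, est_z))
```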

5 Real Data Examples

5.1 Hungarian Heart Disease Data

This dataset consists of five categorical variables from a larger dataset (with 10 other continuous variables) collected from the Hungarian Institute of Cardiology, Budapest, by Andras Janosi, M.D. (Detrano et al. 1989; Gennari et al. 1989). The outcome of interest is diagnosis of heart disease (angiographic disease status) into two categories: < 50% diameter narrowing and > 50% diameter narrowing in any major vessel. The original paper (Detrano et al. 1989) looked at the data in a supervised learning context and achieved a 77% accuracy rate. Originally there was information about 294 subjects, but 10 subjects had to be removed due to missing data. The five variables given are gender (male/female) [sex], chest pain type (typical angina/atypical angina/non-anginal pain/asymptomatic) [cp], fasting blood sugar > 120 mg/dl (true/false) [fbs], resting electrocardiographic results (normal/having ST-T wave abnormality/showing probable or definite left ventricular hypertrophy by Estes’ criteria) [restecg] and exercise induced angina (yes/no) [exang].

When BIC is used to select the number of classes in a latent class model with all of the variables, it decisively selects 2 (with a difference of at least 38 points between 2 classes and any other identifiable number of classes). When the variables are put in decreasing order of variance of estimated probabilities between classes the ordering is the following: cp, exang, sex, restecg and fbs.

Observations were classified into the group for which their estimated membership probability was greatest. The partition estimated by this method is compared with the clinical partition below:

            <50% narrowing    >50% narrowing
Class 1          134                13
Class 2           47                90

If class 1 is matched with the <50% class and class 2 with the >50% class there is a correct classification rate of 78.9%. This gives a sensitivity of 87.4% and a specificity of 74%.

The variable selection method chooses 3 variables: cp, exang and sex. BIC selects 2 classes for the latent class model on these variables. The partition given by this model is the same as the one given by the model with all variables. The largest difference in estimated group membership probabilities between the two latent class models is 0.1. The estimated model parameters for the variables common to both latent class models, and the mixing proportions, differ between models by at most 0.003. Both models have the same correct classification, sensitivity and specificity rates. Thus our method shows that it is possible to reduce the number of variables from 5 to 3 with no cost in terms of clustering.

The estimated parameters for the latent class model with all variables included are given in Table 10 and the estimated parameters for the latent class model with only the selected variables included are given in Table 11.

Table 10
Estimated parameters for the model involving all variables for Hungarian Heart Disease Data
Table 11
Estimated parameters for the model involving the selected variables for Hungarian Heart Disease Data

5.2 HapMap Data

The HapMap project (The International HapMap Consortium 2003) was set up to examine patterns of DNA sequence variation across human populations. A consortium with members including the United States, United Kingdom, Canada, Nigeria, Japan and China is attempting to identify chromosomal regions where genetic variants are shared across individuals. One of the most common types of these variants is the single nucleotide polymorphism (SNP). A SNP occurs when a single nucleotide (A, T, C or G) in the genome differs across individuals. If a particular locus has either A or G then these are called the two alleles. Most SNPs have only two alleles.

This dataset is from a random selection of 3,389 SNPs on 210 individuals (out of 4 million available in the HapMap public database). Of these, 801 had complete sets of measurements from all subjects, and a further subset of 639 SNPs had non-zero variability. Details of the populations and numbers of subjects are given in Table 12.

Table 12
Information on the subject populations for the HapMap data

There are two possible correct groupings of the data. The first one is into 3 groups: European (CEU), African (YRI) and Asian (CHB+JPT), and the second is into 4 groups: European (CEU), African (YRI), Japanese (JPT) and Chinese (CHB).

When all 639 SNPs are used to build latent class models, BIC selects the best number of classes as 3. The resulting estimated partition matches up exactly with the first 3-group partition. The variable selection procedure selects 413 SNPs as important to the clustering, reducing the number of variables by over a third. Using only the selected variables, BIC again selects a 3-class model whose estimated partition again gives perfect 3-group classification. The BIC values for models using both sets of data from 2 to 4 classes are given in Table 13. Note that comparing within rows in Table 13 is appropriate, but comparing between rows is not because different rows correspond to different datasets.

Table 13
BIC values for different sets of variables and different numbers of classes for the HapMap data.

The HapMap project is also interested in the position of SNPs that differ between populations, so we can look at the distribution of all 639 SNPs across the 22 chromosomes and compare it to the distribution of the selected SNPs. This is presented in Figure 2.

Figure 2
Distribution of SNPs for full and selected variable sets on the set of 22 chromosomes

Although the subset of SNPs that these data come from is a random sample, some SNPs may be close to each other on the same chromosome. Since genetic variants close to each other on a chromosome tend to be inherited together, the conditional independence assumption of LCA may not hold in this case. Incorporating these dependencies may be beneficial.

6 Discussion

We have proposed a method for selecting variables in latent class analysis. In our simulated datasets the method selected the correct variables, and this also led to improved classification and more accurate selection of the number of classes. In both real data examples, the data were classified equally accurately by the smaller set of variables selected by our method as by a larger set. The HapMap data provided an example of the “n ≪ p” type, and there our method reduced the number of variables (SNPs in that case) by over a third without any degradation in classification performance.

In general it appears to be a better idea to select variables before estimating the clustering model in both the discrete and continuous cases. We have seen that inclusion of noise variables can degrade the accuracy of both model estimation and choice of the number of clusters.

In terms of estimation of the model, including variables with no cluster structure can either smear out separated clusters/classes or introduce spurious classes. Without extra knowledge it is difficult to know in advance which will happen. From the simulations and datasets presented here, as well as others, it appears that these problems are most likely to occur when separation between the classes is poor.

The headlong search algorithm is different from the greedy search algorithm described in Raftery and Dean (2006) in two ways:

  1. The best variable (in terms of the BIC difference) is not necessarily selected in each inclusion and exclusion step in the headlong search.
  2. It is possible that some variables are not looked at in any step after a certain point in the headlong algorithm (after being removed from consideration).

The headlong search is substantially faster than the greedy search and in spite of point 2 above, usually gives comparable or sometimes better results (perhaps due to the local nature of the search).

Galimberti and Soffritti (2006) considered the problem of finding multiple cluster structures in latent class analysis. In this problem the data are divided into subsets, each of which obeys a different latent class model. The models in the different subsets may include different variables. This is a somewhat different problem from the one we address here, but it also involves a kind of variable selection in latent class analysis.

Keribin (1998) showed that BIC is consistent for the choice of the number of components in a mixture model under certain conditions, notably assuming that all variables are relevant to the clustering. Empirical evidence suggests that when noise or irrelevant variables are present, BIC is less likely to select the correct number of classes. Rusakov and Geiger (2005) examined the correctness of the BIC approximation in the specific case of binary variables with two classes in a naive Bayes network (which is equivalent to a 2-class latent class model satisfying local independence). They found that although the traditional BIC penalty term of (# of parameters) × log(# of observations) (or half this, depending on the definition) is correct at regular points of the data space, it is not correct at singularity points, with two different types of singularity points requiring two adjusted versions of the penalty term. The singularity points are those parameter sets that could arise from a naive Bayes model with all but at most two links removed (type 1), and those that could arise from a model with all links removed, representing a set of mutually independent variables (type 2). Similarly, when redundant or irrelevant variables are included (which is closely related to the two types of singularity points), they found that the two adjusted penalty terms were correct. These issues with clustering in the presence of noise variables reinforce the arguments for variable selection in latent class analysis.

Acknowledgments

Both authors were supported by NIH grant 8 R01 EB002137-02. Raftery was also supported by NICHD grant 1 R01HD O54511, NSF grant IIS0534094 and NSF grant ATM0724721. The authors would like to thank Matthew Stephens and Paul Scheet for their preparation of the HapMap data for this paper.

Appendix A: Headlong Search Algorithm for Variable Selection in Latent Class Analysis

Here we give a more complete description of the headlong search variable selection and clustering algorithm for the case of discrete data modeled by conditionally independent multinomially distributed groups. Note that for each latent class model fitted in this algorithm, one must run a number of random starts to find the best estimate of the model (in terms of BIC). We recommend at least 5 for small to medium problems, but for bigger problems hundreds may be needed to get a decent model estimate. The issue of getting good starting values without generating multiple random starts is dealt with in Appendix B.

  • Choose Gmax, the maximum number of clusters/classes to be considered for the data; make sure that this number is identifiable for your data. Define the constants upper (default 0) and lower (default −100): upper is the threshold above which the BIC difference between models M2 and M1 results in a variable being included in Y(clust) (and below which it results in the variable being excluded from Y(clust)); lower is the threshold below which the BIC difference results in a variable being removed from consideration for the rest of the procedure.
  • First step : One way of choosing the initial clustering variable set is to estimate a latent class model with at least 2 classes using all variables (if more classes are identifiable, estimate all identifiable numbers of classes and choose the best number via BIC). Order the variables by the variability of their estimated probabilities across classes. Choose the smallest number of top-ranked variables that allows at least a 2-class model to be identified; this is the initial Y(clust). We do not require the BIC difference between clustering and a single-class model for this Y(clust) to be positive at this point, because the algorithm needs a set of starting variables; they can be removed later if they are not truly clustering variables.
    Specifically we estimate the {pijg : i = 1,…, k, j = 1,…, di, g = 1,…, G}, where k is the number of variables, di is the number of categories for the ith variable and G is the number of classes. For each variable i we calculate \( V(i) = \sum_{j=1}^{d_i} \operatorname{Var}_g(p_{ijg}) \). We order the variables in decreasing order of V(i): y(1), y(2), …, y(k), and find m, the minimum number of top variables that will identify a latent class model with G ≥ 2.
    \[ Y^{(clust)} = \{y_{(1)}, y_{(2)}, \ldots, y_{(m)}\}, \qquad Y^{(other)} = \{y_{(m+1)}, \ldots, y_{(k)}\}. \]
    If the previous method is not possible (the data cannot identify a latent class model for G > 1), then split the variables randomly into subsets, each with enough variables to identify a latent class model for at least 2 classes. For each subset, estimate the latent class model (G ≥ 2) and the single-class (G = 1) model and calculate both BICs, and choose as the initial Y(clust) the subset with the highest difference between the latent class model BIC and the 1-class model BIC.
    Specifically look at the list of numbers of categories d = (d1,…, dk) and work out the minimum number of variables m that allows a latent class model for G ≥ 2 to be identified. Split the variables into S subsets with at least m variables in each. For each set Ys, s = 1,…, S estimate:
    \[ \mathrm{BIC}_{\mathrm{diff}}(Y_s) = \mathrm{BIC}_{\mathrm{clust}}(Y_s) - \mathrm{BIC}_{\mathrm{not\ clust}}(Y_s), \]
    where \( \mathrm{BIC}_{\mathrm{clust}}(Y_s) = \max_{2 \le G \le G_{\max}^{s}} \{ \mathrm{BIC}_G(Y_s) \} \), with BICG(Ys) being the BIC given in (1) for the latent class model for Ys with G classes and \(G_{\max}^{s}\) being the maximum number of identifiable classes for Ys, and BICnot clust(Ys) = BIC1(Ys).
    We choose the best variable subset, Ys1, such that
    \[ s_1 = \arg\max_{s\,:\,Y_s \subset Y} \mathrm{BIC}_{\mathrm{diff}}(Y_s), \]
    and create
    \[ Y^{(clust)} = Y_{s_1} \quad \text{and} \quad Y^{(other)} = Y \setminus Y_{s_1}, \]
    where \( Y \setminus Y_{s_1} \) denotes the set of variables Y excluding the subset \( Y_{s_1} \).
  • Second step : Next we look at each variable in Y(other) singly, in order, as the new variable under consideration for inclusion into Y(clust). For each variable we compute the difference between two quantities: the BIC for clustering on the variables selected in the first step plus the new variable (maximized over numbers of classes from 2 up to Gmax), and the sum of the BIC for clustering on the variables chosen in the first step and the BIC for the single-class latent class model for the new variable. If this difference is less than lower, the variable is removed from consideration for the rest of the procedure and we check the next variable. Once a difference is greater than upper, we stop and include that variable in the set of clustering variables. If no variable has a difference greater than upper, we include the variable with the largest difference; a variable is forced in at this stage to provide one final extra starting variable.
    Specifically, we split Y(other) into its variables y1,…, yD2. For each j in 1,…,D2 until BICdiff(yj) > upper, we compute the approximation to the Bayes factor in (6) by
    \[ \mathrm{BIC}_{\mathrm{diff}}(y_j) = \mathrm{BIC}_{\mathrm{clust}}(y_j) - \mathrm{BIC}_{\mathrm{not\ clust}}(y_j), \]
    where \( \mathrm{BIC}_{\mathrm{clust}}(y_j) = \max_{2 \le G \le G_{\max}^{j}} \{ \mathrm{BIC}_G(Y^{(clust)}, y_j) \} \), with BICG(Y(clust), yj) being the BIC given in (1) for the latent class clustering model for the dataset including both the previously selected variables (contained in Y(clust)) and the new variable yj with G classes, and BICnot clust(yj) = BICreg + BICclust(Y(clust)), where BICreg is BIC1(yj) and BICclust(Y(clust)) is the BIC for the latent class clustering model with only the currently selected variables in Y(clust).
    We choose the first variable, yj2, such that
    \[ \mathrm{BIC}_{\mathrm{diff}}(y_{j_2}) > \text{upper}, \]
    or if no such j2 exists,
    \[ j_2 = \arg\max_{j\,:\,y_j \in Y^{(other)}} \mathrm{BIC}_{\mathrm{diff}}(y_j), \]
    and create
    \[ Y^{(clust)} = Y^{(clust)} \cup y_{j_2} \quad \text{and} \quad Y^{(other)} = Y^{(other)} \setminus y_{j_2}, \]
    where \( Y^{(clust)} \cup y_{j_2} \) denotes the set of variables including those in Y(clust) and the variable yj2.
  • General Step [Inclusion part] : Each variable in Y(other) is proposed singly (in order), until the difference between the BIC for clustering with this variable included in the set of currently selected clustering variables (maximized over numbers of classes from 2 up to Gmax) and the sum of the BIC for the clustering with only the currently selected clustering variables and the BIC for the single-class latent class model of the new variable, is greater than upper. The variable with BIC difference greater than upper is then included in the set of clustering variables and the step ends. Any variable whose BIC difference is less than lower is removed from consideration for the rest of the procedure. If no variable has BIC difference greater than upper, no new variable is included in the set of clustering variables.
    Specifically, at step t we split Y(other) into its variables y1,…, yDt. For j in 1,…,Dt we compute the approximation to the Bayes factor in (6) by
    \[ \mathrm{BIC}_{\mathrm{diff}}(y_j) = \mathrm{BIC}_{\mathrm{clust}}(y_j) - \mathrm{BIC}_{\mathrm{not\ clust}}(y_j), \tag{7} \]
    where \( \mathrm{BIC}_{\mathrm{clust}}(y_j) = \max_{2 \le G \le G_{\max}^{j}} \{ \mathrm{BIC}_G(Y^{(clust)}, y_j) \} \), with BICG(Y(clust), yj) being the BIC given in (1) for the latent class clustering model for the dataset including both the previously selected variables (contained in Y(clust)) and the new variable yj with G classes, and BICnot clust(yj) = BICreg + BICclust(Y(clust)), where BICreg is the BIC for the single-class latent class model for variable yj and BICclust(Y(clust)) is the BIC for the clustering with only the currently selected variables in Y(clust).
    We check whether BICdiff(yj) > upper; if so, we stop and set
    \[ Y^{(clust)} = Y^{(clust)} \cup y_j \quad \text{and} \quad Y^{(other)} = Y^{(other)} \setminus y_j. \]
    If not, we increment j and re-calculate BICdiff(yj); any variable with BICdiff(yj) < lower is removed from Y(other) and from all further consideration.
    If no j has BICdiff(yj) > upper, Y(clust) and Y(other) are left unchanged.
  • General Step [Removal part] : Each variable in Y(clust) is proposed singly (in order), until the difference between the BIC for clustering with this variable included in the set of currently selected clustering variables (maximized over numbers of classes from 2 up to Gmax) and the sum of the BIC for the clustering with only the other currently selected clustering variables (not including the variable under consideration) and the BIC for the single-class latent class model of the variable under consideration, is less than upper. The variable with BIC difference less than upper is then removed from the set of clustering variables and the step ends. If its difference is greater than lower, the variable is returned to the end of the list of variables in Y(other); if not, it is removed entirely from consideration for the rest of the procedure. If no variable has BIC difference less than upper, no variable is excluded from the current set of clustering variables.
    In terms of equations for step t + 1, we split Y(clust) into its variables y1, …, yDt+1. For each j in 1,…,Dt+1 we compute the approximation to the Bayes factor in (6) by
    \[ \mathrm{BIC}_{\mathrm{diff}}(y_j) = \mathrm{BIC}_{\mathrm{clust}} - \mathrm{BIC}_{\mathrm{not\ clust}}(y_j), \]
    where \( \mathrm{BIC}_{\mathrm{clust}} = \max_{2 \le G \le G_{\max}} \{ \mathrm{BIC}_G(Y^{(clust)}) \} \), with BICG(Y(clust)) being the BIC given in (1) for the latent class clustering model for the previously selected variables (contained in Y(clust)) with G classes, and BICnot clust(yj) = BICreg + BICclust(Y(clust)\yj), where BICreg is the BIC for the single-class latent class model for variable yj and BICclust(Y(clust)\yj) is the BIC for the clustering with all the currently selected variables in Y(clust) except for yj.
    We check whether BICdiff(yj) < upper; if so, we stop and set
    \[ Y^{(clust)} = Y^{(clust)} \setminus y_j, \quad \text{and} \quad Y^{(other)} = Y^{(other)} \cup y_j \ \text{ if } \text{lower} < \mathrm{BIC}_{\mathrm{diff}}(y_j) < \text{upper}. \]
    (When BICdiff(yj) < lower, the variable is removed from both Y(clust) and Y(other), i.e. from all further consideration.) If not, we increment j and re-calculate BICdiff(yj).
    If no j has BICdiff(yj) < upper, Y(clust) and Y(other) are left unchanged.
  • After the first and second steps the general step is iterated until consecutive inclusion and removal proposals are rejected. At this point the algorithm stops as any further proposals will be the same ones already rejected.

Appendix B: Starting Values for Latent Class Analysis in the Headlong Search Algorithm

In the previous appendix we discussed the details of the headlong algorithm for latent class variable selection. In each step, multiple latent class models are estimated for different sets of variables and numbers of classes. So far we have only mentioned that starting values are generated randomly several times for each model, and the best of the resulting estimated models (in terms of BIC/likelihood) is chosen as the single estimate for a particular latent class model. This means that for each different dataset and each different number of classes we must generate random starting values and estimate the model via EM numerous times. For datasets with reasonable numbers of variables this is not too computationally expensive, but for more complex datasets it is burdensome. Also, with increasing numbers of observations, variables and/or classes, more random starts are needed to have any confidence in finding the global maximum likelihood for the model, as the likelihood surface becomes more complex, with increasing numbers of local maxima.

Because of the stepwise nature of the algorithm we can use models estimated before to give good starting values for new models. By starting values here we mean the matrix z of conditional probabilities of membership in the different classes for each observation.

At the end of each step (either inclusion or exclusion) we have a set of currently selected clustering variables. At some point in the step we have estimated the latent class model for this set over a range of classes (or sometimes just one, 2 classes) and chosen the model with the number of classes that gives us the highest BIC. We can call this model LCAcurrent and the number of classes in this best model for the current set of clustering variables Gcurrent. We can also save the z matrix for this model and call it zcurrent.

In the next step we will be looking either at models for Y(clust) with a new additional variable (inclusion step) or at models for Y(clust) leaving out one of the current clustering variables (exclusion step). A reasonable starting z matrix for models involving the new dataset (which is either a sub- or super-set of the old one) and Gcurrent classes is zcurrent, because the dataset will have changed by only one variable. So instead of randomly generating multiple z matrices (or other starting parameters) to try to find the global maximum likelihood for our latent class model, we simply use what we believe to be a good starting z matrix, which will hopefully be reasonably close to the global maximum in the new likelihood space.

However, we may still wish to have good starting values for the new dataset with different numbers of classes, Gcurrent ± c. But our zcurrent will be an n × Gcurrent matrix (where n is the number of observations) and we need n × (Gcurrent ± c) matrices. How can we sensibly create a new matrix with c more/less columns given our zcurrent?

We will look at the case for +1 and −1 separately (the analogue for general +c and −c should be obvious). It will be rare in practice to need more than ±1 at each step as the number of identifiable classes will only generally increase fairly slowly with the number of variables selected.

For −1 we want to reduce the number of columns of zcurrent by 1. A sensible way to do this is to collapse the two closest classes (in terms of Euclidean distance in the parameter space). We calculate the distances between the classes’ estimated parameters/probabilities from LCAcurrent and select the closest two. We then simply remove the two columns corresponding to those classes from zcurrent and replace them with one column equal to their row-wise sum. This is our new starting z matrix for the model with Gcurrent − 1 classes. In terms of a single observation with probability p1 of being in the first chosen class and probability p2 of being in the second, we are saying the observation has probability p1 + p2 of being in the new class created by amalgamating the two, i.e. the observation is in the new class if it is in either of the old classes. Note that, if we wish, we can weight the distances by the mixing proportions, making it more likely that we join smaller close classes.
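A sketch of this collapse, with z the n × Gcurrent membership matrix and probs the per-variable G × di probability arrays from the earlier fit_lca sketch; merge_closest is our own illustrative name, and the optional weighting by mixing proportions follows the remark above:

```python
import numpy as np
from itertools import combinations

def merge_closest(z, probs, mix=None):
    """Warm start with one class fewer: sum the z columns of the two classes
    whose parameter vectors are closest in Euclidean distance."""
    G = z.shape[1]
    theta = np.hstack(probs)                       # (G, sum_i d_i) parameters
    def dist(pair):
        g, h = pair
        w = mix[g] * mix[h] if mix is not None else 1.0
        return w * np.linalg.norm(theta[g] - theta[h])
    a, b = min(combinations(range(G), 2), key=dist)
    merged = z[:, a] + z[:, b]                     # P(in class a or class b)
    keep = [g for g in range(G) if g not in (a, b)]
    return np.column_stack([z[:, keep], merged])   # n x (G - 1) start values
```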

For −c we can use the resulting matrix from the process described in the previous paragraph to estimate the model for Gcurrent − 1 classes and then reduce the resulting estimated z from this model by one column in the same fashion, continuing on in the same way until we have removed c columns.

For +1 we want to increase the number of columns of zcurrent by 1. An obvious way to do this is to split a class in two. We choose the largest class (in terms of mixing proportions), remove the corresponding column from zcurrent, call it w, and estimate a two-class latent class model using the data points weighted by w. Obviously we have returned to the problem of needing starting values for estimating this 2-class model. However, a small number of randomly generated starts, say 5, for this number of classes will usually yield an estimated model achieving the global maximum likelihood, and this is usually not too computationally expensive. Once we have the 2-class estimate of the z matrix, called z2, we multiply it row-wise by w and add the resulting two columns to the original zcurrent (less the removed column), giving a starting z matrix for estimating the Gcurrent + 1 class model. We can think of w as the conditional probability of an observation being in the old selected class, and z2 as the probability of the observation being in either of the two new sub-classes given that it was in the old class.
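A sketch of this split. A properly weighted EM is what the text describes; as a crude stand-in, which we flag as our own assumption, the 2-class model below is fitted to rows resampled in proportion to w, and its parameters are then used to compute subclass responsibilities for all rows (fit_lca is the earlier sketch):

```python
import numpy as np

def split_largest(X, z, rng=None):
    """Warm start with one extra class: split the largest class in two."""
    g = int(z.sum(axis=0).argmax())                 # largest class by weight
    w = z[:, g]                                     # P(class g) per observation
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(X), size=len(X), p=w / w.sum())
    _, pi2, probs2, _ = fit_lca(X[idx], G=2)        # crude weighted 2-class fit
    # subclass responsibilities for all rows, then scale by w
    d = X.max(axis=0) + 1
    logp = np.log(pi2)
    for v, p in enumerate(probs2):
        if p.shape[1] < d[v]:                       # pad categories unseen in resample
            p = np.hstack([p, np.full((2, d[v] - p.shape[1]), 1e-10)])
        logp = logp + np.log(p[:, X[:, v]]).T
    z2 = np.exp(logp - logp.max(axis=1, keepdims=True))
    z2 = z2 / z2.sum(axis=1, keepdims=True) * w[:, None]
    keep = [h for h in range(z.shape[1]) if h != g]
    return np.column_stack([z[:, keep], z2])        # n x (G + 1) start values
```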

Again for +c we can use the resulting matrix from the process described in the previous paragraph to estimate the model for Gcurrent + 1 classes and then increase the resulting estimated z from this model by one column in the same fashion, continuing on in the same way until we have added c columns.

References

  • Badsberg JH. Model search in contingency tables by CoCo. In: Dodge Y, Whittaker J, editors. Computational Statistics. Volume 1. 1992. pp. 251–256.
  • Clogg CC. New developments in latent structure analysis. In: Jackson DJ, Borgatta EF, editors. Factor Analysis and Measurement in Sociological Research. Beverly Hills: Sage; 1981. pp. 215–246.
  • Clogg CC. Latent class models. In: Arminger G, Clogg CC, Sobel ME, editors. Handbook of Statistical Modeling for the Social and Behavioral Sciences. New York: Plenum; 1995. pp. 311–360.
  • Detrano R, Janosi A, Steinbrunn W, Pfisterer M, Schmid J-J, Sandhu S, Guppy KH, Lee S, Froelicher V. International application of a new probability algorithm for the diagnosis of coronary artery disease. American Journal of Cardiology. 1989;64:304–310.
  • Fraley C, Raftery AE. Model-based clustering, discriminant analysis, and density estimation. Journal of the American Statistical Association. 2002;97:611–631.
  • Galimberti G, Soffritti G. Identifying multiple cluster structures through latent class models. In: Spiliopoulou M, Kruse R, Borgelt C, Nürnberger A, Gaul W, editors. From Data and Information Analysis to Knowledge Engineering. Berlin: Springer; 2006. pp. 174–181.
  • Gennari JH, Langley P, Fisher D. Models of incremental concept formation. Artificial Intelligence. 1989;40:11–61.
  • Goodman LA. Exploratory latent structure analysis using both identifiable and unidentifiable models. Biometrika. 1974;61:215–231.
  • Hagenaars JA, McCutcheon AL. Applied Latent Class Analysis. Cambridge, U.K: Cambridge University Press; 2002.
  • Hubert L, Arabie P. Comparing partitions. Journal of Classification. 1985;2:193–218.
  • Kass RE, Raftery AE. Bayes factors. Journal of the American Statistical Association. 1995;90:773–795.
  • Keribin C. Consistent estimate of the order of mixture models. Comptes Rendus de l’Académie des Sciences, Série I-Mathématique. 1998;326:243–248.
  • Lazarsfeld PF. The logical and mathematical foundations of latent structure analysis, Chapter 10. In: Stouffer SA, editor. Measurement and Prediction, Volume IV of The American Soldier: Studies in Social Psychology in World War II. Princeton University Press; 1950a.
  • Lazarsfeld PF. Some latent structures, Chapter 11. In: Stouffer SA, editor. Measurement and Prediction, Volume IV of The American Soldier: Studies in Social Psychology in World War II. Princeton University Press; 1950b.
  • Lazarsfeld PF, Henry NW. Latent Structure Analysis. Boston: Houghton Mifflin; 1968.
  • McCutcheon AL. Latent Class Analysis. Newbury Park, Calif: Sage; 1987.
  • McLachlan GJ, Peel D. Finite Mixture Models. New York: Wiley; 2000.
  • Raftery AE, Dean N. Variable selection for model-based clustering. Journal of the American Statistical Association. 2006;101:168–178.
  • Rand WM. Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association. 1971;66:846–850.
  • Rusakov D, Geiger D. Asymptotic model selection for naive Bayesian networks. Journal of Machine Learning Research. 2005;6:1–35.
  • The International HapMap Consortium. The International HapMap Project. Nature. 2003;426:789–796.
  • Wolfe JH. Master’s thesis. Berkeley: University of California; 1963. Object cluster analysis of social areas.