Biometrics. Author manuscript; available in PMC 2013 December 1.

Published in final edited form as: Biometrics, published online 2012 September 24. doi: 10.1111/j.1541-0420.2012.01789.x

PMCID: PMC3530625. NIHMSID: NIHMS384105

The publisher's final edited version of this article is available at Biometrics.

In diagnostic medicine, estimating the diagnostic accuracy of a group of raters or medical tests relative to the gold standard is often the primary goal. When a gold standard is absent, latent class models where the unknown gold standard test is treated as a latent variable are often used. However, these models have been criticized in the literature from both a conceptual and a robustness perspective. As an alternative, we propose an approach where we exploit an imperfect reference standard with unknown diagnostic accuracy and conduct sensitivity analysis by varying this accuracy over scientifically reasonable ranges. In this article, a latent class model with crossed random effects is proposed for estimating the diagnostic accuracy of regional obstetrics and gynecological (OB/GYN) physicians in diagnosing endometriosis. To avoid the pitfalls of models without a gold standard, we exploit the diagnostic results of a group of OB/GYN physicians with an international reputation for the diagnosis of endometriosis. We construct an ordinal reference standard based on the discordance among these international experts and propose a mechanism for conducting sensitivity analysis relative to the unknown diagnostic accuracy among them. A Monte-Carlo EM algorithm is proposed for parameter estimation and a BIC-type model selection procedure is presented. Through simulations and data analysis we show that this new approach provides a useful alternative to traditional latent class modeling approaches used in this setting.

The motivation for this statistical research comes from the Physician Reliability Study (PRS) (Schliep et al., 2012) that investigated the diagnosis of endometriosis for various types of physicians by using different combinations of clinical information. Endometriosis is a gynecological medical condition in which cells from the lining of the uterus appear and flourish outside the uterine cavity, most commonly on the ovaries. The diagnosis of endometriosis can be complicated and there is no consensus in the field on what constitutes the gold standard (e.g., Brosens and Brosens, 2000). Of interest in the PRS is estimating the diagnostic accuracy of a group of regional obstetrics and gynecological (OB/GYN) physicians (R-OB/GYNs) in terms of their endometriosis diagnosis. The R-OB/GYNs were presented with digital images of the uterus and adnexal structure of participants that were taken during laparoscopies and were asked to make a diagnosis of endometriosis. Since these R-OB/GYNs see patients daily and are affiliated with the same medical center, an assessment of their diagnostic accuracy can help the medical center in designing specific training programs to improve their diagnosis.

The diagnostic accuracy of raters or medical tests is of considerable interest in many public health and biomedical fields (Zou, McClish, and Obuchowski, 2002; Pepe, 2003; Hui and Walter, 1980). The estimation of diagnostic accuracy is straightforward when the true disease status is known. In many cases such as for endometriosis, however, a gold standard is not available. Methods have been proposed to estimate diagnostic accuracy without a gold standard using latent class models for which the true disease status is considered to be a latent variable (Hui and Walter, 1980; Hui and Zhou, 1998). Qu, Tan, and Kutner (1996) proposed a random-effects latent class model that introduces conditional dependence between tests with normally distributed random effects. Albert et al. (2001) proposed a latent class model with a finite mixture structure to account for dependence between tests. More recently, it has been shown that latent class models for estimating diagnostic accuracy may be problematic in many practical situations (Albert and Dodd, 2004; Pepe and Janes, 2006). Specifically, Albert and Dodd (2004) showed that with a small number of binary tests, estimates of diagnostic accuracy are biased under a misspecified dependence structure; yet in many situations it is nearly impossible to distinguish between models with different dependence structures from observed data.

Between the two extremes of no gold standard on anyone and a gold standard on all individuals, there are situations where a gold standard does not exist but some imperfect information is available. When there is no gold standard, the best available reference tests can be employed to aid the estimation of the diagnostic accuracy of new tests. These best available reference tests may themselves be subject to small error, and are therefore called an imperfect reference standard. In the motivating PRS example, there are diagnostic results on the same subjects from a group of international expert (IE) OB/GYN physicians. These IEs all received specialized training in laparoscopic surgery, accrued extensive clinical and research experience in diagnosing and treating endometriosis, and have international reputations in the field. A scatterplot showing the correlation between the IE and R-OB/GYN ratings is included in Web Appendix A of the web-based supplementary materials. In this paper, we propose new methodology to estimate the R-OB/GYNs’ diagnostic accuracy by exploiting the IEs’ diagnostic results in the PRS.

Valenstein (1990), Begg (1987), and Qu and Hadgu (1998) have discussed the bias in estimating diagnostic accuracy using an imperfect reference standard. Using both analytical and simulation techniques, Albert (2009) showed that, with the aid of an imperfect reference standard with high sensitivity and specificity, inferences on diagnostic accuracy are robust to misspecification of the conditional dependence between tests. However, this approach assumes that the diagnostic accuracy of the imperfect reference standard is known or can be estimated from other data sources. In some cases, no gold standard exists and it is impossible to obtain estimates of the diagnostic accuracy of the imperfect reference standard relative to the gold standard from other studies. In this situation, we show how multiple expert raters or more definitive tests can be used along with a sensitivity analysis to estimate the diagnostic accuracy of other raters or tests.

Our proposed approach for estimating the average diagnostic accuracy among R-OB/GYNs makes use of the latent class model as in the aforementioned literature for models without a gold standard. Since each physician examines each subject in the PRS, we develop an approach where R-OB/GYNs are random and crossed with subjects. To exploit the IEs’ diagnostic results, we construct an ordinal composite imperfect reference standard from the individual diagnostic results of the IEs. We first assume that we know the diagnostic accuracy of the imperfect reference standard and proceed with estimating the diagnostic accuracy of the R-OB/GYNs. In this step, the value of the diagnostic accuracy of the imperfect reference standard is chosen such that the corresponding posterior probability of the latent disease status given the observed ordinal reference standard is reasonable. We then vary this choice widely within a scientifically sensible range of the posterior probabilities in order to assess the robustness of the estimated diagnostic accuracy of the R-OB/GYNs. To handle the computational challenge arising from the crossed random effects, we develop a Monte-Carlo EM algorithm for parameter estimation. We investigate the robustness of the proposed latent class model with respect to the misspecification of the dependence structure between tests. We show that, without the imperfect reference standard (i.e., IE reviewers), estimates of diagnostic parameters are biased under a misspecified dependence structure. Moreover, the model selection criterion (Ibrahim, Zhu, and Tang, 2008) has difficulty distinguishing the various competing models. However, with the aid of the imperfect ordinal standard, (i) estimates of diagnostic accuracy are nearly unbiased, even when the dependence structure between the R-OB/GYN tests is misspecified or when the assumed diagnostic accuracy of the imperfect reference standard deviates from the truth in a reasonable way, and (ii) we are able to distinguish between competing models for the dependence between R-OB/GYNs.

In Section 2, we propose a latent class modeling approach for estimating the diagnostic accuracy of the R-OB/GYNs that exploits the IEs by constructing an ordinal reference standard. In Section 3, we investigate the bias from misspecifying the random effects structure with and without the use of the imperfect reference standard. In Section 4, we apply the proposed model to data from the PRS in the diagnosis of endometriosis. A discussion follows in Section 5.

Let $Y_{ij}$ denote the binary diagnostic result of endometriosis ($Y_{ij}=1$ for a positive diagnosis and $Y_{ij}=0$ for a negative diagnosis) of the $i$th subject by the $j$th R-OB/GYN, $i=1,\cdots,I$, $j=1,\cdots,J$, and let $D_i$ denote the subject's latent true disease status. We assume the probit model

$$P(Y_{ij}=1\mid D_i=d_i,b_i,c_j)=\Phi\left(\beta_{d_i}+\sigma_{d_i}b_i+\tau_{d_i}c_j\right),\qquad \sigma_{d_i},\tau_{d_i}>0,$$

(1)

where $b_i$ is the subject-specific random effect with probability density function (p.d.f.) $g_1$, and $c_j$ is the rater-specific random effect with p.d.f. $g_2$.

In contrast to the latent class models with a Gaussian random effect in Qu, Tan, and Kutner (1996) and Albert et al. (2001), where the rater sensitivity and specificity were treated as fixed effects, (1) can be used to estimate the average sensitivity and specificity across the population of regional physicians as
${S}_{e}=\Phi(\beta_1/\sqrt{1+\sigma_1^2+\tau_1^2})$ and
${S}_{p}=\Phi(-\beta_0/\sqrt{1+\sigma_0^2+\tau_0^2})$ under the normality assumption on $b_i$ and $c_j$. To allow more flexible distributions, we also consider a two-component normal mixture for the random effects,

$$g_m(x)=\lambda_m\,\phi(x;\mu_{1m},\nu_{1m}^2)+(1-\lambda_m)\,\phi(x;\mu_{2m},\nu_{2m}^2),\qquad m=1,2,$$

(2)

where $\lambda_m$ is the probability of membership in the first mixture component, $0\le\lambda_m\le 1$, and $\phi(x;\mu,\nu^2)$ denotes the normal density with mean $\mu$ and variance $\nu^2$.
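Under the normality assumption, the population-averaged sensitivity and specificity above have the closed form $\Phi(\beta/\sqrt{1+\sigma^2+\tau^2})$. A minimal sketch of this marginalization, with hypothetical parameter values (not estimates from the paper):

```python
from math import sqrt
from statistics import NormalDist

def marginal_accuracy(beta, sigma, tau):
    """Marginalize the probit model Phi(beta + sigma*b + tau*c) over
    independent standard-normal random effects b and c; the closed form
    is Phi(beta / sqrt(1 + sigma^2 + tau^2))."""
    return NormalDist().cdf(beta / sqrt(1.0 + sigma**2 + tau**2))

# Hypothetical parameter values (for illustration only):
beta1, sigma1, tau1 = 1.5, 0.5, 0.8    # disease-positive class (d = 1)
beta0, sigma0, tau0 = -1.2, 0.4, 0.6   # disease-negative class (d = 0)

Se = marginal_accuracy(beta1, sigma1, tau1)         # Phi(beta_1 / sqrt(...))
Sp = 1.0 - marginal_accuracy(beta0, sigma0, tau0)   # Phi(-beta_0 / sqrt(...))
```

Note that larger random-effect variances shrink the marginal accuracy toward 1/2 for a fixed $\beta$, reflecting heterogeneity across subjects and raters.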

The likelihood of (1) is complicated by the two crossed random effects $b_i$ and $c_j$. Integrating over the random effects and summing over the possible latent disease statuses gives

$$L(\theta\mid y)=\int\!\!\int\;\sum_{d_1=0}^{1}\cdots\sum_{d_I=0}^{1}\left[\prod_{i=1}^{I}\prod_{j=1}^{J}\left\{\Phi\left(\beta_{d_i}+\sigma_{d_i}b_i+\tau_{d_i}c_j\right)\right\}^{y_{ij}}\left\{1-\Phi\left(\beta_{d_i}+\sigma_{d_i}b_i+\tau_{d_i}c_j\right)\right\}^{1-y_{ij}}\right]\prod_{i=1}^{I}\pi_{d_i}\,g_1(b_i)\,\mathrm{d}b_i\prod_{j=1}^{J}g_2(c_j)\,\mathrm{d}c_j.$$

(3)

The likelihood (3) involves high-dimensional integration and summation, which is difficult to evaluate by numerical approximation. As a consequence, a Monte-Carlo EM algorithm is presented in Web Appendix B of the web-based supplementary materials to obtain maximum likelihood estimates from (3).
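The Monte-Carlo flavor of the computation can be illustrated directly: because the random effects are mutually independent, averaging the bracketed product over joint draws of $b_i$ and $c_j$ gives an unbiased estimate of (3). The sketch below assumes standard-normal random effects and uses tiny hypothetical data; the paper's actual estimation uses a Monte-Carlo EM algorithm (Web Appendix B), not this brute-force average:

```python
import random
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal CDF

def mc_likelihood(y, params, n_mc=500, seed=1):
    """Plain Monte-Carlo approximation of likelihood (3): average, over
    draws of the crossed random effects, of the complete-data probability
    summed over each subject's latent disease status."""
    rng = random.Random(seed)
    beta, sigma, tau, pi1 = params      # beta/sigma/tau: dicts keyed by d
    I, J = len(y), len(y[0])
    total = 0.0
    for _ in range(n_mc):
        c = [rng.gauss(0, 1) for _ in range(J)]   # rater effects c_j
        prod_over_subjects = 1.0
        for i in range(I):
            b = rng.gauss(0, 1)                   # subject effect b_i
            lik_i = 0.0
            for d, pi_d in ((0, 1.0 - pi1), (1, pi1)):
                p = pi_d
                for j in range(J):
                    mu = Phi(beta[d] + sigma[d] * b + tau[d] * c[j])
                    p *= mu if y[i][j] == 1 else 1.0 - mu
                lik_i += p
            prod_over_subjects *= lik_i
        total += prod_over_subjects
    return total / n_mc

# Tiny hypothetical dataset: 2 subjects rated by 2 raters.
y = [[1, 1], [0, 0]]
params = ({0: -1.0, 1: 1.0}, {0: 0.5, 1: 0.5}, {0: 0.5, 1: 0.5}, 0.5)
approx_lik = mc_likelihood(y, params)
```

This estimator is far too noisy for direct maximization at realistic $I$ and $J$, which is precisely why an EM formulation with Monte-Carlo E-steps is preferred.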

In addition to the eight R-OB/GYNs, four IEs provided diagnoses of endometriosis for each subject in the PRS. These IEs are well-known OB/GYN physicians in the field and are expected to have better diagnostic accuracy than other physicians. Although a gold standard does not exist for endometriosis, the diagnostic results from these IEs can be used to construct an imperfect reference standard to improve the estimation of the average diagnostic accuracy of the R-OB/GYNs. Specifically, an ordinal imperfect reference standard can be constructed from the multiple IE binary ratings. Generally, suppose there are $L$ binary IE ratings ${\stackrel{~}{T}}_{i}^{(l)}$ for the $i$th subject, $l=1,\cdots,L$. We propose to use the sum of these binary ratings as the imperfect reference standard ${T}_{i}={\sum}_{l=1}^{L}{\stackrel{~}{T}}_{i}^{(l)}$, where $T_i$ takes values $0,1,\cdots,L$. The likelihood incorporating the imperfect reference standard is

$$L(\theta\mid y,t)=\sum_{d_1=0}^{1}\cdots\sum_{d_I=0}^{1}\left\{P(Y=y\mid T=t,D=d)\prod_{i=1}^{I}S^{T}_{t_i d_i}\,\pi_{d_i}\right\},$$

(4)

where $S^{T}_{t_i d_i}=P(T_i=t_i\mid D_i=d_i)$ characterizes the diagnostic accuracy of the imperfect reference standard relative to the true disease status. In this section, we will assume $S^{T}_{t_i d_i}$ is known. However, for endometriosis there is no established gold standard available for estimating these quantities. When no gold standard exists, we propose a methodological approach for conducting a sensitivity analysis in Section 2.3.
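Constructing the ordinal reference standard described above is a one-line summation of the expert ratings; a small sketch with hypothetical IE ratings:

```python
def ordinal_reference_standard(ie_ratings):
    """Sum L binary expert ratings into an ordinal reference standard
    T in {0, 1, ..., L} for each subject (Section 2.2)."""
    return [sum(subject) for subject in ie_ratings]

# Four IEs rating three subjects (hypothetical data):
ratings = [
    [1, 1, 1, 1],  # all four IEs positive -> T = 4
    [1, 0, 1, 0],  # discordant           -> T = 2
    [0, 0, 0, 0],  # all negative         -> T = 0
]
T = ordinal_reference_standard(ratings)
```

Using only the sum discards which experts disagreed, which is exactly what makes the later sensitivity analysis tractable without modeling the dependence among the IEs.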

To incorporate the information of the imperfect reference standard, we consider the model

$$P(Y_{ij}=1\mid T_i=t_i,D_i=d_i,b_i,c_j)=\Phi\left(\beta_{d_i}+\sigma_{d_i}b_i+\tau_{d_i}c_j\right),\qquad \sigma_{d_i},\tau_{d_i}>0,$$

(5)

where we make the assumption that the observed ratings do not depend on the imperfect reference standard given the true disease status. This is a natural assumption that usually holds in practice. The assumption can be relaxed by allowing $\beta$ to depend on $T_i$ (i.e., replacing $\beta_{d_i}$ with a parameter indexed by both $d_i$ and $t_i$). The corresponding likelihood is

$$L(\theta)=\int\!\!\int\;\sum_{d_1=0}^{1}\cdots\sum_{d_I=0}^{1}\left[\prod_{i=1}^{I}\prod_{j=1}^{J}\left\{\Phi\left(\beta_{d_i}+\sigma_{d_i}b_i+\tau_{d_i}c_j\right)\right\}^{y_{ij}}\left\{1-\Phi\left(\beta_{d_i}+\sigma_{d_i}b_i+\tau_{d_i}c_j\right)\right\}^{1-y_{ij}}\right]\prod_{i=1}^{I}S^{T}_{t_i d_i}\,\pi_{d_i}\,g_1(b_i)\,\mathrm{d}b_i\prod_{j=1}^{J}g_2(c_j)\,\mathrm{d}c_j.$$

(6)

A Monte-Carlo EM algorithm is used to obtain maximum likelihood estimates of $\beta_{d_i}$, $\sigma_{d_i}$, and $\tau_{d_i}$, $d_i=0$ or 1, by maximizing (6).

There are special cases of the proposed methodology. When $S^{T}_{L1}=S^{T}_{00}=1$, it reduces to the case in which the true disease status is observed. For diagnosing endometriosis in the PRS, this corresponds to the scenario where all four IEs report positive results if the subject has endometriosis and all report negative results if the subject does not (i.e., the IEs have perfect classifications). When $S^{T}_{t_i 1}=S^{T}_{t_i 0}$ for $t_i=0,1,\cdots,L$, the imperfect reference standard carries no information about the true disease status, and the model reduces to the latent class model without a reference standard.

As stated in Section 2.2, we construct the imperfect reference standard *T* by using the diagnostic results from the four IEs in the PRS. In this application, we do not know the diagnostic accuracy of the IEs and, consequently, do not know the diagnostic accuracy of the constructed imperfect reference standard. Hence, we discuss how we conduct sensitivity analysis for diagnostic accuracy estimation of the R-OB/GYNs by varying the diagnostic accuracy of the imperfect reference standard over a wide range of reasonable values. In particular, we assume the following polychotomous logit model,

$$\log\frac{S^{T}_{t_i 1}}{S^{T}_{L1}}=\gamma_0+\gamma_1 t_i+\gamma_2 t_i^2,\qquad t_i=0,1,\cdots,L-1,$$

(7)

and a symmetrically defined $S^{T}_{t_i 0}$. Note here that $S^{T}_{41}$ is determined by the constraint $\sum_{t_i=0}^{4}S^{T}_{t_i 1}=1$. Denoting $S^{D}_{d_i t_i}=P(D_i=d_i\mid T_i=t_i)$, then by Bayes rule,

$$S^{D}_{d_i t_i}=P(D_i=d_i\mid T_i=t_i)=\frac{S^{T}_{t_i d_i}\,\pi_{d_i}}{\sum_{d=0}^{1}S^{T}_{t_i d}\,\pi_d},$$

(8)

which characterizes the posterior probability of disease given the observed imperfect reference standard. We focus on the sum of the IE ratings, rather than their individual values, since this provides a simple and intuitive approach for sensitivity analysis that does not require specifying the dependence structure between the multiple IE ratings. With regard to choosing the parameters of (7), we suggest that they be chosen so that $S^{D}_{04}$ and $S^{D}_{10}$ are both close to zero (i.e., the probability that a woman is truly negative (positive) for endometriosis but all IEs independently diagnose her as positive (negative) is reasonably assumed to be zero). Figure 1 shows the posterior probability of having disease for different values of $\gamma_2$ in (8), where $\gamma_0=-4.5$, $\gamma_1=0.1$, and $\pi_1=0.70$. The impact of changing $\gamma_2$ on the estimation of the diagnostic accuracy of the regional physicians is examined in both the simulation and application sections.

Figure 1. Posterior probability of having disease for different values of $\gamma_2$ in equations (8) and (7). $\gamma_0$ and $\gamma_1$ are assumed known with $\gamma_0=-4.5$, $\gamma_1=0.1$.
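The posterior probabilities behind such a figure follow directly from Bayes rule (8) once the reference-standard distributions and the prevalence are specified. A minimal sketch, using hypothetical $S^{T}$ values (not ones generated from the paper's model (7)):

```python
def posterior_disease_prob(s_t1, s_t0, pi1):
    """Bayes rule (8): P(D=1 | T=t) for each ordinal value t, from the
    reference-standard distributions S^T_{t,1} = P(T=t | D=1) and
    S^T_{t,0} = P(T=t | D=0), and the disease prevalence pi1."""
    return [p1 * pi1 / (p1 * pi1 + p0 * (1.0 - pi1))
            for p1, p0 in zip(s_t1, s_t0)]

# Hypothetical accuracy of four IEs (T = 0, ..., 4), for illustration:
s_t1 = [0.01, 0.03, 0.06, 0.20, 0.70]  # given disease: mass near T = 4
s_t0 = [0.70, 0.20, 0.06, 0.03, 0.01]  # given no disease: mass near T = 0
post = posterior_disease_prob(s_t1, s_t0, pi1=0.70)
# post rises monotonically from near 0 (T = 0) toward 1 (T = 4)
```

Varying the assumed $S^{T}$ values and recomputing `post` is exactly the kind of sensitivity analysis described in the text.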

In the simulation and discussion sections as well as the web-based supplementary materials, we show that the proposed approach for exploiting the IEs and conducting sensitivity analysis is robust to (i) the assumed dependence structure among the R-OB/GYN ratings and (ii) the exchangeability assumption implicit in using the sum of the IE ratings as an imperfect reference standard.

The simulation datasets were generated from the latent class models with random effects that follow a mixture normal distribution (“true model”) and were fit to models with either normal or mixture normal random effects (“working model”). All simulations were conducted with 500 replications, each with 100 subjects ($I=100$). The average sensitivity and specificity and the disease prevalence over the 500 replications, along with their standard deviations, are shown with five ($J=5$) or ten ($J=10$) raters in Table 1.

Table 1(A) presents the results when there is no gold or imperfect reference standard. The results show that, without the aid of a gold or imperfect reference standard, there is sizable bias in estimation when the random effects (both between subjects and between raters) follow mixture normal distributions in the true model and normal distributions in the working model. The averages of estimated mean sensitivity and specificity are both 0.81 for 5 raters, and 0.82 and 0.83 for 10 raters, respectively, compared to the true sensitivity of 0.88 and true specificity of 0.87. Moreover, we are unable to distinguish between the true and the misspecified models by using the model selection criterion IC_{H(0),Q} (Ibrahim, Zhu, and Tang, 2008); the percentages of replications in which IC_{H(0),Q} selected the true model are reported in Table 1(A).

In the case when an ordinal imperfect reference standard is used and its diagnostic accuracy is known, we assume that the imperfect reference standard $T_i$ is ordinal, taking values $0,1,\cdots,4$, and that the true disease status $D_i$ is binary.

The results in Table 1(B) show how incorporating an imperfect reference standard improves the robustness of the estimation, assuming that its diagnostic accuracy is known. However, in most practical situations, we do not know the diagnostic accuracy of the imperfect reference standard. In the PRS, we construct the imperfect reference standard from four IEs, but have no information about its diagnostic accuracy. Thus, it is of interest to investigate the robustness of the proposed latent class model to reasonable misspecification of the diagnostic accuracy of the imperfect reference standard. For this reason, we repeated the simulation study in Table 1(B), but with a misspecified diagnostic accuracy ($\gamma_0=-4.5$, $\gamma_1=0.1$, and $\gamma_2=0.1$ in (7)) for the imperfect reference standard. The resulting posterior disease probabilities given $t_i=0,1,2,3,4$ were 0.02, 0.46, 0.70, 0.86, and 1.00, respectively. Table 1(C) shows that even with the misspecified diagnostic accuracy of the imperfect reference standard, the estimates of sensitivity and specificity are still nearly unbiased. Further, we are still able to distinguish between the true and misspecified models by using the model selection criterion IC_{H(0),Q}.

Although Table 1 shows the robustness of the proposed latent class model in estimating diagnostic accuracy with the imperfect reference standard, we also investigate the sensitivity of the proposed model to different values of the diagnostic accuracy of the imperfect reference standard. Specifically, we estimate sensitivity, specificity, and prevalence for various values of the diagnostic accuracy of the imperfect reference standard by varying $\gamma_2$ in the parameterization given in (7). These simulations were again conducted with 500 replications of 100 individuals ($I=100$). Table 2 shows the average estimated sensitivity and specificity in the population with five ($J=5$) or ten ($J=10$) raters under scenarios in which the true imperfect reference standard was generated from (7) with $\gamma_0=-4.5$, $\gamma_1=0.1$, and $\gamma_2=0.2$, while misspecified imperfect reference standards with $\gamma_0=-4.5$, $\gamma_1=0.1$, but $\gamma_2=-0.1$, 0, 0.1, 0.15, 0.25, and 0.3 were used in fitting. Severe departures were not considered here, because investigators are unlikely to use imperfect reference standards of unreasonably poor quality. The simulation datasets were generated from latent class models whose random effects follow a mixture normal distribution and were fit to models with normal random effects, but with the aid of the imperfect reference standard with an incorrectly specified diagnostic accuracy. As shown in Table 2, the robustness of the latent class model holds in all scenarios in which the diagnostic accuracy of the imperfect reference standard is misspecified. Thus, the simulations demonstrate that exploiting the ordinal imperfect reference standard (e.g., a group of expert raters) and performing a sensitivity analysis provides a more robust solution than latent class models without a gold or imperfect reference standard for estimating diagnostic accuracy. Further, it is easier to distinguish between competing models for the dependence between the experimental raters (e.g., R-OB/GYNs) when we exploit the ordinal imperfect reference standard.

Table 2. Simulation results for sensitivity and specificity with incorrectly specified imperfect reference standards. The random effects of the true models and working models follow a mixture normal distribution and a normal distribution, respectively.

One alternative to using the IE ratings as a reference standard is to incorporate both R-OB/GYNs and IEs into the latent class approach and fit a conventional model without a gold standard. We show the inherent problem with this alternative approach by considering the following model, modified from (1):

$$P(Y_{ij}=1\mid D_i=d_i,b_i,c_j)=\Phi\left(\beta_{d_i}+\alpha_{d_i}E_j+\sigma_{d_i}b_i+\tau_{d_i}c_j\right),\qquad \sigma_{d_i},\tau_{d_i}>0.$$

(9)

Equation (9) is (1) with the addition of $E_j$ as an indicator variable for IEs, so that both types of ratings are included in the latent class model. To investigate the performance of (9) in estimating the sensitivity and specificity of the R-OB/GYNs, we conducted a simulation study under this model.

The proposed methodology is applied to data from the PRS in the diagnosis of endometriosis. In this study, eight R-OB/GYNs diagnosed 79 subjects for endometriosis based on digital images from laparoscopies. Our interest is in obtaining the average sensitivity and specificity of the R-OB/GYNs in the diagnosis of endometriosis.

We estimated the diagnostic accuracy of the R-OB/GYNs using the proposed latent class model with crossed random effects, under two random effects distributions for the between-subject and between-rater variation: a normal distribution and a mixture normal distribution. Table 3 shows the overall estimates of prevalence, sensitivity, and specificity for the diagnosis of endometriosis, as well as the IC_{H(0),Q} values of the fitted models. Bootstrap standard errors based on 500 replications are also presented under each model. When no imperfect reference standard information is incorporated (Table 3(A)), estimates of diagnostic accuracy are close across models with the different random effects structures. For example, the sensitivity is 0.96 under the normal random effects distribution and 0.93 under the mixture normal random effects distribution. The IC_{H(0),Q} values of the two models are also close, providing little basis for choosing between them.

Table 3. The estimates (standard errors) of sensitivity, specificity, and disease prevalence, and IC_{H(0),Q} values of the models for the example of endometriosis diagnosis: (A) no gold or imperfect reference standard, (B) with an imperfect reference standard.

Due to the lack of a gold standard for diagnosing endometriosis in the PRS, the ratings from the four IEs are used to construct an imperfect reference standard as stated in Section 2.2. This imperfect reference standard is ordinal, with $T_i=0$ representing all four IEs diagnosing no disease for the $i$th subject and $T_i=4$ representing all four IEs diagnosing disease.

We further examined the robustness of the estimates from the proposed latent class model with respect to different assumed diagnostic accuracies of the imperfect reference standard using (7). Figure 2 shows estimates of sensitivity and specificity, along with disease prevalence, for the latent class model that has normal random effects and incorporates the different imperfect reference standard information. The diagnostic accuracy of the imperfect reference standard was generated from (7) with $\gamma_0=-4.5$, $\gamma_1=0.1$, and $\gamma_2$ changing from 0 to 0.5 by 0.01. Varying $\gamma_2$ corresponds to varying the posterior probabilities $S^{D}_{1 t_i}$ as shown in Figure 1. The overall estimates of sensitivity and specificity for the R-OB/GYN physicians are nearly identical across $\gamma_2$. Thus, estimation of the average sensitivity and specificity for the R-OB/GYNs is insensitive over a wide range of scientifically reasonable values of the diagnostic accuracy of the imperfect reference standard (derived from the IEs’ diagnoses).

Estimating diagnostic accuracy without a gold standard is a challenging problem that has received substantial recent attention. Most of these methods involve latent class models where the true disease state is considered latent. However, these approaches have received criticism from a conceptual (Pepe and Janes, 2007) and robustness (Albert and Dodd, 2004) perspective.

In practical situations, we are still left with the problem of whether it is possible to estimate diagnostic accuracy when there is no gold standard. This was the motivation in the PRS, where an important goal was estimating the sensitivity and specificity of the R-OB/GYNs at a Utah site in diagnosing endometriosis from intra-uterine digital images taken during laparoscopic surgeries. Fortunately, we have additional information from a set of four IEs, which we can exploit to estimate the diagnostic accuracy of endometriosis diagnosis among typical obstetrics and gynecology physicians. The methodology assumes that we know the diagnostic accuracy of the IEs, and we perform sensitivity analysis by varying this diagnostic accuracy in scientifically sensible ways. For this application, we are confident that when all raters agree there is only a negligible probability of misclassification. Hence, we primarily focus on varying the diagnostic accuracy for situations where the IEs differ. Through simulations and data analysis we show that the diagnostic accuracy estimation is remarkably robust to reasonable misspecification of the assumed diagnostic accuracy of the IEs as well as to the distributional assumptions on the random effects for modeling the dependence among R-OB/GYNs.

The robustness for estimating the diagnostic accuracy of the R-OB/GYNs to the assumed accuracy for the IEs is in part due to the low frequency of discordance in IEs’ ratings, and the reasonable assumption that there is no misclassification when all IEs agree. We expect that estimation will be more sensitive to variation in the diagnostic accuracy of the imperfect standard when there is less consensus among the IEs. Our approach will still be useful in this case since we can report ranges of diagnostic accuracy estimates of R-OB/GYNs corresponding to ranges in the assumed diagnostic accuracy of the IEs.

This work has important implications for diagnostic accuracy studies, suggesting that exploiting expert raters or multiple high quality tests may provide a good approach for estimating the diagnostic accuracy of other raters or tests. In particular, we showed that these types of expert raters or high quality tests can greatly improve the robustness of the estimation with respect to misspecification of the random effects distributions in latent class models. Thus, this approach provides a robust alternative to traditional latent class models for estimating diagnostic accuracy without a gold standard.

The use of IE data is one of the key aspects in the application of the proposed methodology. First, the fact that the inferences are not sensitive to the parameters of the polychotomous logit model is in part due to $S^{D}_{14}\approx 1$ and $S^{D}_{00}\approx 1$ for all parameters considered. This is sensible, since it would seem very unlikely that one would have a positive (negative) gold standard when all the IEs were negative (positive). One would need a large enough group of IEs to have this confidence, with this number being application dependent; as a general rule, we recommend a minimum of four expert ratings. Second, we conducted simulation studies to investigate the performance of the proposed method when the IEs examine only a subset of the patients. Estimates of the sensitivity and specificity of the R-OB/GYNs are nearly unbiased under a misspecified dependence structure when the IEs examine 80% and 50% of the patients; when the proportion of examined patients decreased to 20%, the estimates have substantially more bias. Third, we conducted simulation studies to investigate the performance of the proposed method when the patients are not examined by all IEs. The simulations indicate that the estimates of the sensitivity and specificity of the R-OB/GYNs are robust to dependence misspecification. Therefore, we suggest that, when the patients are not examined by all IEs, the proposed method can still function very well with appropriate imputation for the missingness. See Web Appendix D in the web-based supplementary materials for simulation results.

The use of the proposed polychotomous logit model for the imperfect reference standard provides nearly unbiased estimation of the diagnostic accuracy and disease prevalence, regardless of whether or not the IEs are exchangeable (exchangeability of the four IEs means that, for any combination of $e_1$, $e_2$, $e_3$, and $e_4$,
$P({\stackrel{~}{T}}_{i}^{(1)}={e}_{1},{\stackrel{~}{T}}_{i}^{(2)}={e}_{2},{\stackrel{~}{T}}_{i}^{(3)}={e}_{3},{\stackrel{~}{T}}_{i}^{(4)}={e}_{4}\mid {D}_{i}={d}_{i})=P({T}_{i}={\sum}_{l=1}^{4}{e}_{l}\mid {D}_{i}={d}_{i})$, where $d_i=0$ or 1). Also, in (5), we assume independence between the random effects and the imperfect reference standard given the true disease status.
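The exchangeability condition in the parenthetical can be checked numerically in its simplest form: when the ratings are i.i.d. Bernoulli given disease status (an illustrative special case, not the paper's model), every 0/1 pattern with the same sum has the same probability:

```python
from itertools import product

def pattern_probs_iid(p, L=4):
    """Joint probability of every 0/1 rating pattern when the L expert
    ratings are i.i.d. Bernoulli(p) given disease status -- an
    exchangeable special case in which each pattern's probability
    depends only on its sum."""
    return {pattern: p**sum(pattern) * (1 - p)**(L - sum(pattern))
            for pattern in product((0, 1), repeat=L)}

probs = pattern_probs_iid(0.9)

# Group patterns by their sum; exchangeability means exactly one
# probability value per group.
by_sum = {}
for pattern, pr in probs.items():
    by_sum.setdefault(sum(pattern), set()).add(pr)
exchangeable = all(len(v) == 1 for v in by_sum.values())
```

When raters differ in accuracy (e.g., distinct Bernoulli probabilities), patterns with the same sum no longer share a probability, which is the non-exchangeable situation the robustness claim covers.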

The focus of this work was on estimating diagnostic accuracy measures such as sensitivity, specificity, and prevalence. Other work in the area of diagnostic testing has criticized these measures in favor of positive and negative predictive values (Moons et al., 1997). These alternative measures can easily be computed from the estimated sensitivity, specificity, and prevalence discussed in this paper. The analysis of the PRS data with the new methodology shows that the average R-OB/GYN sensitivity is high (≈0.93) while the average specificity is rather low (≈0.77). This is important new information, since it suggests that R-OB/GYNs are over-diagnosing endometriosis in regular clinical practice.
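The conversion to predictive values mentioned here is a direct application of Bayes rule; a sketch using the quoted point estimates and an assumed prevalence (the 0.40 figure is illustrative, not an estimate from the PRS):

```python
def predictive_values(se, sp, prev):
    """Positive and negative predictive values from sensitivity,
    specificity, and prevalence via Bayes rule."""
    ppv = se * prev / (se * prev + (1.0 - sp) * (1.0 - prev))
    npv = sp * (1.0 - prev) / (sp * (1.0 - prev) + (1.0 - se) * prev)
    return ppv, npv

# Point estimates quoted above (Se ~ 0.93, Sp ~ 0.77) with an assumed,
# illustrative prevalence of 0.40:
ppv, npv = predictive_values(0.93, 0.77, 0.40)
```

With a specificity of only 0.77, the positive predictive value stays well below the sensitivity, which is one way to quantify the over-diagnosis concern raised in the text.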

We thank the Editor, the Associate Editor, and the anonymous reviewer for their thoughtful and constructive comments, which have led to an improved article. This research was supported by the Intramural Research Program of the National Institutes of Health, *Eunice Kennedy Shriver* National Institute of Child Health and Human Development. We thank the Center for Information Technology at the National Institutes of Health for providing access to the high-performance computational capabilities of the Biowulf Linux cluster.

Web Appendices referenced in Sections 1, 2, 3, and 5 are available with this paper at the Biometrics website on Wiley Online Library.

- Albert PS. Estimating diagnostic accuracy of multiple binary tests with an imperfect reference standard. Statistics in Medicine. 2009;28:780–797.
- Albert PS, Dodd LE. A cautionary note on the robustness of latent class models for estimating diagnostic error without a gold standard. Biometrics. 2004;60:427–435.
- Albert PS, McShane LM, Shih JH; the U.S. National Cancer Institute Bladder Tumor Marker Network. Latent class modeling approaches for assessing diagnostic error without a gold standard: with applications to p53 immunohistochemical assays in bladder tumors. Biometrics. 2001;57:610–619.
- Begg CB. Biases in the assessment of diagnostic tests. Statistics in Medicine. 1987;6:411–423.
- Brosens IA, Brosens JJ. Is laparoscopy the gold standard for the diagnosis of endometriosis? European Journal of Obstetrics & Gynecology and Reproductive Biology. 2000;88:117–119.
- Buck Louis GM, Hediger ML, Peterson CM, Croughan M, Sundaram R, Stanford J, Chen Z, Fujimoto VY, Varner MW, Trumble A, Giudice LC; ENDO Study Working Group. Incidence of endometriosis by study population and diagnostic method: the ENDO study. Fertility and Sterility. 2011;96:360–365.
- Hui SL, Walter SD. Estimating the error rates of diagnostic tests. Biometrics. 1980;36:167–171.
- Hui SL, Zhou XH. Evaluation of diagnostic tests without a gold standard. Statistical Methods in Medical Research. 1998;7:354–370.
- Ibrahim JG, Zhu H, Tang N. Model selection criteria for missing-data problems using the EM algorithm. Journal of the American Statistical Association. 2008;103:1648–1658.
- Moons KG, Van Es GA, Deckers JW, Habbema JD, Grobbee DE. Limitations of sensitivity, specificity, likelihood ratio, and Bayes' theorem in assessing diagnostic probabilities: a clinical example. Epidemiology. 1997;8:12–17.
- Pepe MS. The Statistical Evaluation of Medical Tests for Classification and Prediction. New York: Oxford University Press; 2003.
- Pepe MS, Janes H. Insights into latent class analysis of diagnostic test performance. Biostatistics. 2006;8:474–484.
- Qu Y, Hadgu A. A model for evaluating sensitivity and specificity for correlated diagnostic tests in efficacy studies with an imperfect reference standard. Journal of the American Statistical Association. 1998;93:920–928.
- Qu Y, Tan M, Kutner MH. Random effects models in latent class analysis for evaluating accuracy of diagnostic tests. Biometrics. 1996;52:797–810.
- Schliep K, Stanford JB, Chen Z, Zhang B, Dorais JK, Johnstone EB, Hammoud AO, Varner MW, Buck Louis GM, Peterson CM; on behalf of the ENDO Study Working Group. Interrater and intrarater reliability in the diagnosis and staging of endometriosis. 2012. To appear in Obstetrics & Gynecology.
- Valenstein PN. Evaluating diagnostic tests with imperfect standards. American Journal of Clinical Pathology. 1990;93:252–258.
- Zhou XH, McClish DK, Obuchowski NA. Statistical Methods in Diagnostic Accuracy. New York: Wiley; 2002.