
Paediatr Perinat Epidemiol. Author manuscript; available in PMC 2009 September 23.


PMCID: PMC2749284

NIHMSID: NIHMS133531

Division of Epidemiology, Statistics and Prevention Research, National Institute of Child Health and Human Development, National Institutes of Health, Rockville, MD, USA

Correspondence: Enrique F. Schisterman, Division of Epidemiology, Statistics and Prevention Research, National Institute of Child Health and Human Development, 6100 Executive Boulevard, 7B03N, Rockville MD 20852, USA. E-mail: schistee@mail.nih.gov


Pooling of biological specimens has been utilised as a cost-efficient sampling strategy, but cost is not the only limiting factor in biomarker development and evaluation. We examine the effect of different biospecimen sampling strategies for exposure assessments that cannot be measured below a detection threshold (DT). The paper compares the use of pooled samples with a randomly selected sample from a cohort in order to evaluate the efficiency of parameter estimates.

The proposed approach shows that a pooling design is more efficient than a random sample strategy under certain circumstances. Moreover, because pooling minimises the amount of information lost below the DT, the use of pooled data is preferable (in a context of a parametric estimation) to using all available individual measurements, for certain values of the DT. We propose a combined design, which applies pooled and unpooled biospecimens, in order to capture the strengths of the different sampling strategies and overcome instrument limitations (i.e. DT). Several Monte Carlo simulations and an example based on actual biomarker data illustrate the results of the article.

The use of biomarkers for exposure assessment is common in epidemiology. The power gained by using a large sample of individuals must be weighed against the cost of performing many assays. After reproducibility and variability are established for a biomarker, financial constraints usually limit further evaluation to small sets of samples. For example, the cost of a single assay measuring polychlorinated biphenyls (PCBs) can reach $1000, so only small studies have been able to examine, for example, whether PCBs are associated with cancer or endometriosis.^{1}^{,}^{2} However, the imprecision of the results limits the conclusions that can be drawn about the suggested associations.

Currently, two different approaches have been suggested to evaluate expensive biomarkers. Suppose we have biological specimens from a patient population *A* of size *N*, *A* = {*A*_{1}, *A*_{2}, …, *A _{N}*}, with test results *X* = {*X*_{1}, *X*_{2}, …, *X _{N}*}.

Alternatively, a pooling strategy may be employed where two or more specimens are physically combined into a single ‘pooled’ unit for analysis. Thus, a greater portion of the population is assayed for the same price compared with the random sampling approach. The amount of information per assay increases so the number of assays needed to achieve equivalent information decreases.^{3}^{–}^{6} Formally, the samples from patient population *A* are randomly combined into *n* = *N*/*p* pooled specimens of size *p*. The *n* pooled assays are considered the average of the contributing individual results, i.e.

$$\begin{array}{l}{X}^{(p)}=\{{X}_{1}^{(p)},{X}_{2}^{(p)},\dots ,{X}_{n}^{(p)}\}\\ =\{\frac{1}{p}({X}_{{k}_{11}}+\dots +{X}_{{k}_{1p}}),\frac{1}{p}({X}_{{k}_{21}}+\dots +{X}_{{k}_{2p}}),\\ \dots ,\frac{1}{p}({X}_{{k}_{n1}}+\dots +{X}_{{k}_{np}})\},\end{array}$$

(1.1)

where {*k*_{*j*1}, …, *k*_{*jp*}}, *j* = 1, …, *n*, are the randomly selected indices of the individual specimens combined into the *j*th pooled sample.
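As an illustrative sketch of definition (1.1), the random grouping and averaging can be implemented in a few lines (the function name `pool_specimens` and the use of NumPy are our own, not from the paper):

```python
import numpy as np

def pool_specimens(x, p, seed=None):
    """Randomly combine N individual biomarker values into n = N/p pools of
    size p; each pooled value is the mean of its contributors (definition 1.1)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n_pools = len(x) // p
    # Random assignment of specimens to pools (any remainder is dropped).
    idx = rng.permutation(len(x))[: n_pools * p].reshape(n_pools, p)
    return x[idx].mean(axis=1)

# Six specimens pooled in pairs (p = 2) yield three pooled measurements.
pooled = pool_specimens([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], p=2, seed=0)
print(len(pooled))  # 3
```

Because every specimen enters exactly one pool of equal size, the mean of the pooled values equals the overall sample mean, while their variance shrinks by a factor of *p*.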

The concept of pooling biospecimens can be utilised in population-based epidemiological studies to explore the relationship between biomarker levels and outcome. The method's primary goal is to establish distributional parameters for a specific biomarker. Consequently, pooling can be seen as a primary tool for case–control and cohort studies exploring discrete outcomes. The technique has been explored extensively in the literature, starting with publications related to cost-efficient syphilis testing of World War II recruits.^{7} Weinberg and Umbach introduced pooling to estimate odds ratios for case–control studies.^{8} Faraggi *et al.*^{3} and Liu and Schisterman^{4}^{,}^{5} examined the effect of pooling on inference for the area under the Receiver Operating Characteristic curve.

Cost is not the only limiting factor in biomarker evaluation; instrument sensitivity may also be problematic. A common complexity arises when some participants have biomarker levels below the detection threshold (DT).^{9} Under these circumstances, biomarker values at or above the DT are measured and reported, but values below the DT are unobservable. Formally, instead of *X*, we observe *Z* = {*Z*_{1}, *Z*_{2}, …}, such that

$${Z}_{i}=\{\begin{array}{ll}{X}_{i},\hfill & \text{if}\phantom{\rule{0.16667em}{0ex}}{X}_{i}\ge \text{d};\hfill \\ \text{Not}\phantom{\rule{0.16667em}{0ex}}\text{Available}(\text{N}/\text{A}),\hfill & \text{if}\phantom{\rule{0.16667em}{0ex}}{X}_{i}<\text{d},\hfill \end{array}$$

(1.2)

where *d* is the value of the DT. Similarly, for the pooling design, we observe
${Z}^{(p)}=\{{Z}_{1}^{(p)},\dots ,{Z}_{n}^{(p)}\}$, where

$${Z}_{i}^{(p)}=\{\begin{array}{ll}{X}_{i}^{(p)},\hfill & \text{if}\phantom{\rule{0.16667em}{0ex}}{X}_{i}^{(p)}\ge \text{d};\hfill \\ \text{N}/\text{A},\hfill & \text{if}\phantom{\rule{0.16667em}{0ex}}{X}_{i}^{(p)}<\text{d}.\hfill \end{array}$$

(1.3)
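A minimal sketch of the censoring rules (1.2) and (1.3), using `NaN` to stand in for *N*/*A* (the helper name `censor_at_dt` is ours):

```python
import numpy as np

def censor_at_dt(x, d):
    """Apply (1.2)/(1.3): values at or above the DT d are reported,
    values below it are returned as NaN (standing in for N/A)."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= d, x, np.nan)

z = censor_at_dt([3.1, 3.5, 4.0, 2.0], d=3.0)
print(z)  # the last value, 2.0, becomes nan
```

The same function applies unchanged to pooled values, since (1.3) censors $X_i^{(p)}$ at the same threshold *d*.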

A variety of approaches have been used to analyse data with a lower DT. Substitution of *d*/*2* or
$d/\sqrt{2}$ for observations below the DT has been previously described.^{9}^{–}^{11} These values are based on the assumption of a normal (*d*/*2*) or lognormal (
$d/\sqrt{2}$) distribution.^{12} Lubin and colleagues proposed multiple imputation based on bootstrapping when the exposure distribution function is known.^{13} Recent work shows that substitution of *E*(*X* | *X* < *d*) for data below the DT allows for unbiased estimation of linear and, under certain conditions, logistic regression parameters.^{12} Schisterman and colleagues have shown that unbiased estimates may also be obtained non-parametrically if data below the DT are replaced by zero for no intercept models and by an estimator of *E*(*X* | *X* ≥ *d*) for intercept models.^{12}
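The simple substitution rules described above can be sketched as follows (the function name and its `assume` flag are illustrative; the *E*(*X* | *X* < *d*) substitution would require a distributional fit and is not shown):

```python
import numpy as np

def substitute_below_dt(z, d, assume="normal"):
    """Replace N/A values (NaN) with the conventional fill-in:
    d/2 under a normal assumption, d/sqrt(2) under a lognormal one."""
    fill = d / 2.0 if assume == "normal" else d / np.sqrt(2.0)
    z = np.asarray(z, dtype=float)
    return np.where(np.isnan(z), fill, z)

z = np.array([3.1, 3.5, 4.0, np.nan])
print(substitute_below_dt(z, d=3.0))                      # NaN -> 1.5
print(substitute_below_dt(z, d=3.0, assume="lognormal"))  # NaN -> ~2.12
```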

The main objective of this paper is to examine parameter estimation and efficiency of the pooling approach compared with the random sampling approach for assays with a DT. In Section 2, we compare the numerical (quantifiable) information available and efficiency of each sampling scheme. In Section 3, we propose a mixed (unpooling–pooling) design, which takes advantage of the strengths of each approach. In Section 4, we present maximum likelihood techniques to be utilised with the different designs. In Section 5 we illustrate methods to account for pooling and random measurement error and in Section 6 we present our conclusions.

The efficiencies of pooling and random sampling are compared to determine which design yields more numerical information. Efficiency, here, weighs the available information against the inherent limitations of each design. For clarity, we assume that *X* has a normal distribution. However, the conclusions of this section hold for most commonly used distributions, including the gamma.

Figure 1a plots the density function of the standard normally distributed biomarker *X* (μ = 0, σ = 1) with a DT at *d* = −1. The shaded area corresponds to values of *X* below the DT where missing values would be reported. The unshaded area corresponds to reportable numerical values of *X*. In this case, as Pr{*X*_{1} < −1} ≈ 0.16, the expected proportion of observations below the DT is approximately 16%. Pooling the specimens reduces the effective variance of biomarker *X*, i.e. by definition (1.1) the variance of the pooled samples is
$var({X}^{(p)})={\sigma}_{X}^{2}/p$ and the mean is 0.^{3} For the pooled samples, assuming *p* = *2*,
$Pr\{{X}_{1}^{(p)}<-1\}\approx 0.08$ so approximately 8% of the pooled observations are below the DT as shown in Fig. 1c. Thus, the expected number of unobserved test results from the random sample design is about twice (
$(n\times Pr\{{X}_{1}<d\})/(n\times Pr\{{X}_{1}^{(p)}<d\})\approx 0.16/0.08=2$) the expected number of *N*/*A*s under the pooling strategy.
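The two tail probabilities can be checked directly (a sketch assuming SciPy; the values μ = 0, σ = 1, *d* = −1, *p* = 2 are those of Fig. 1):

```python
from scipy.stats import norm

mu, sigma, d, p = 0.0, 1.0, -1.0, 2

# Individual specimens: X ~ N(0, 1), so Pr{X < -1} is about 0.16.
pr_unpooled = norm.cdf(d, loc=mu, scale=sigma)
# Pooled pairs: X^(2) ~ N(0, 1/2), so Pr{X^(2) < -1} is about 0.08.
pr_pooled = norm.cdf(d, loc=mu, scale=sigma / p ** 0.5)

print(round(pr_unpooled, 3), round(pr_pooled, 3))  # 0.159 0.079
print(pr_unpooled / pr_pooled)                     # ratio of expected N/As, about 2
```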

Normally distributed data constrained by a detection threshold (shaded area represents unobserved data). DT, detection threshold.

The rationale for pooling in this case is to take advantage of the statistical properties of averages through physical implementation, i.e. the value of pooled specimens is the mean of the individual biomarker values.

The pooling strategy provides more numerical observations than the random sampling approach with equivalent initial sample size, because the pooled distribution *X*^{(}^{p}^{)} with
$var({X}^{(p)})={\sigma}_{X}^{2}/p$ is more concentrated around the expectation μ = 0. Hence, the pooling strategy is more efficient than the random sample in estimating the mean and variance. Note that if *d* = −∞ the maximum likelihood estimators of μ based on full data *Z* and pooled data *Z*^{(}^{p}^{)} have equal efficiency.^{3} In Situation 1, the ratio of the expected number of numerically observed test results of set *Z* to that of set *Z*^{(}^{p}^{)} is
$(N\times Pr\{{X}_{1}>d\})/(n\times Pr\{{X}_{1}^{(p)}>d\})\le N/n$, i.e. the number of numerically observed results from *Z*^{(}^{p}^{)} increases relative to the observed numerical elements of *Z*. Moreover, although
$(N\times Pr\{{X}_{1}>d\})/(n\times Pr\{{X}_{1}^{(p)}>d\})>1$, we cannot conclude that the observed pooled data *Z*^{(}^{p}^{)} has less numerical information than the full data set *Z*.

Consider an example with four unpooled individual specimens: *X*_{1} = 3.1, *X*_{2} = 3.5, *X*_{3} = 4.0 and *X*_{4} = 2.0. If DT = 3.0, *Z* = {3.1, 3.5, 4.0, N/A}, where N/A signifies a value below the DT. If p = 2, the pooled samples include only two numerical observations:

$${X}_{1}^{(2)}=\frac{1}{2}({X}_{1}+{X}_{2})=3.3,\phantom{\rule{0.38889em}{0ex}}{X}_{2}^{(2)}=\frac{1}{2}({X}_{3}+{X}_{4})=3.0,$$

yielding *Z*^{(2)} = {3.3, 3.0}. In this example, the value of *X*_{4} is not ignored by the pooled data, which are less affected by the DT than the full sample.
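The four-specimen example can be reproduced directly (plain Python, with `None` standing in for N/A; the rounding only suppresses floating-point noise):

```python
x = [3.1, 3.5, 4.0, 2.0]
d = 3.0

# Unpooled: censor each specimen at the DT.
z = [xi if xi >= d else None for xi in x]

# Pooled with p = 2: average the pairs, then censor.
x_pooled = [round((x[0] + x[1]) / 2, 10), round((x[2] + x[3]) / 2, 10)]
z_pooled = [xi if xi >= d else None for xi in x_pooled]

print(z)         # [3.1, 3.5, 4.0, None] -- one N/A
print(z_pooled)  # [3.3, 3.0] -- both pools observed; X_4 contributes via its pool
```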

Figures 1b and 1d depict the case in which the DT lies above the mean of *X*, reversing the situation previously described. If the DT is located at 1, for example, more pooled samples than unpooled samples will have values below the DT. As shown in Fig. 1b, the amount of unobserved data (shaded area) is smaller in the unpooled data than in the pooled data. Hence, pooling is beneficial when the DT is below the mean and detrimental when it is above the mean.

Nevertheless, in Situation 2, the pooling strategy might still be more efficient than random sampling. Intuitively, the pooled observations might be more informative than the unpooled observations because each pooled observation is based on more than one test result.

When the DT is much greater than the mean biomarker value, the pooling strategy is completely inefficient because the pooled data are based upon substantially less numerical information than a random sample of unpooled data.

In order to demonstrate the conclusions from Situations 1–3 with respect to sample size, we generated a random sample *X*_{1}, …, *X _{N}*

Consider the situation where an assay is relatively inexpensive and could be measured for every participant. As previously stated, numerical values are not assigned when *X* is below the DT. However, knowledge of the data below the DT is important for inference. Richardson and Ciampi suggested imputing *E*(*X* | *X* < *d*) for values below the DT.^{14} Because cost is not an impediment in this example, we propose to assay the individual specimens and then pool the specimens and assay the pooled samples as well. As described in Section 2, when an individual specimen with a value less than the DT is pooled, the pooled sample may have a numerical result. Therefore, the individual’s *X* may be back-calculated (reconstructed) using the pooled results and the individual results from the other samples in the pool. The combination of pooling results with traditional unpooled measurements can produce numerical results for the maximum number of study participants, including some below the DT.

In this discussion we use *p* = *2* without loss of generality. Consider an individual *k* with an unpooled value *X _{k}* < *d*.

By the combined application unpooled–pooled strategies, the value of some observations less than the DT can be calculated allowing *E*(*X* | *X* < *d*) to be non-parametrically estimated using the method proposed by Richardson and Ciampi.^{14} We call the proposed technique the pooled–unpooled resampling design.
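With *p* = 2, the back-calculation is a simple rearrangement of the pool average; a sketch (the helper name `reconstruct` is ours):

```python
def reconstruct(pooled_value, observed_members, p=2):
    """Back-calculate an individual value below the DT from the observed
    pooled value and the other observed member(s) of its pool:
    X_k = p * X^(p) - sum(other members)."""
    return p * pooled_value - sum(observed_members)

# The pool of X_3 = 4.0 and an unobserved X_4 (< DT = 3.0) measured 3.0,
# so X_4 can be recovered exactly (ignoring pooling error).
x4 = reconstruct(3.0, [4.0], p=2)
print(x4)  # 2.0
```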

Having introduced the pooled–unpooled hybrid design, we can utilise maximum likelihood to estimate the unknown parameters of a biomarker's distribution. Consider a biomarker that follows *X _{i}* ~ *N*(μ, σ^{2}).

Efficiency of the maximum likelihood estimators:
$\underset{N\to \infty}{lim}log(N\times var(\widehat{\mu}))$ and
$\underset{N\to \infty}{lim}log(N\times var(\widehat{\sigma}))$ are plotted in graphs (a) and (b) respectively.

The estimates of μ and σ based on the pooled data are more efficient than those based upon the random sample as long as *d* < μ. However, if *d* ≥ μ, then the pooling strategy is not recommended.

We can estimate parameters based on data following a gamma distribution in a similar manner.^{16} The gamma shape parameter of the pooled data is *p* times the shape parameter of the unpooled data, and the scale parameter of the pooled data is 1/*p* times the scale parameter of the unpooled data.^{3} The conclusions for the gamma case are similar to those for the normal.
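The stated shape and scale relations can be checked by simulation (a sketch with assumed parameters *k* = 4 and θ = 0.5; adjacent rather than random grouping is used for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)
k, theta, p, N = 4.0, 0.5, 2, 200_000

x = rng.gamma(shape=k, scale=theta, size=N)
x_pooled = x.reshape(-1, p).mean(axis=1)

# Pooled shape = p*k and pooled scale = theta/p, so the mean k*theta
# is unchanged while the variance shrinks from k*theta^2 to k*theta^2/p.
print(x_pooled.mean(), k * theta)                  # both ~ 2.0
print(x_pooled.var(), (p * k) * (theta / p) ** 2)  # both ~ 0.5
```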

The likelihood function in Section 4.1 is composed of two parts: one related to *N*/*A*-observed data (where *X* < *d*) and a second for numerically observed data (where *X* ≥ *d*); estimation for pooled–unpooled resampled data involves three kinds of data. The first sample (*S*_{1}) contains only *N*/*A* elements. Test results in this sample were initially below the DT and have not been reconstructed by applying the pooling resampling. Thus, for all *k* = 1, …, *N*, we have

$$\begin{array}{l}Pr\{{X}_{k}\in {S}_{1}\}\underset{\text{for}\phantom{\rule{0.16667em}{0ex}}\text{some}\phantom{\rule{0.16667em}{0ex}}l=1,\dots ,N}{=}Pr\{{X}_{l}\notin {S}_{1},{X}_{k}<d,\\ \frac{{X}_{k}+{X}_{l}}{2}<d\}+Pr\{{X}_{l}\in {S}_{1},{X}_{k}<d,\\ \frac{{X}_{k}+{X}_{l}}{2}<d\}=Pr\left\{{X}_{k}<d,\frac{{X}_{k}+{X}_{l}}{2}<d\right\}\\ =Pr\left\{{X}_{1}<d,\frac{{X}_{1}+{X}_{2}}{2}<d\right\}\end{array}$$

(3.1)

The second sample (*S*_{2}) has reconstructed elements. Test results in this sample were initially below the DT and have been reconstructed by applying the pooling resampling. Therefore, elements of set *S*_{2} have distribution function

$$\begin{array}{l}{F}_{R}(u)=Pr\{{X}_{k}<u,{X}_{k}\in {S}_{2}\}\underset{\text{for}\phantom{\rule{0.16667em}{0ex}}\text{some}\phantom{\rule{0.16667em}{0ex}}l=1,\dots ,N}{=}\\ Pr\{{X}_{k}<u,{X}_{k}<d,\frac{{X}_{k}+{X}_{l}}{2}>d\}\\ =Pr\{{X}_{1}<u,{X}_{1}<d,\frac{{X}_{1}+{X}_{2}}{2}>d\}.\end{array}$$

(3.2)

The last sample (*S*_{3}), as in Section 4.1, includes the numerically observed data. The likelihood function is a product of the densities that correspond to (3.1), (3.2) and the case where numerical results were initially observed. We describe the likelihood in detail in Appendix formula (A.4).
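The partition into *S*_{1}, *S*_{2} and *S*_{3} can be sketched for *p* = 2 as follows (the helper name and the adjacent pairing are our illustrative assumptions):

```python
def partition_samples(x, d, p=2):
    """Split individual values (grouped in order into pools of size p) into:
    S1 = below DT and pool also below DT (stays N/A),
    S2 = below DT but pool at/above DT (reconstructable),
    S3 = numerically observed (at/above DT)."""
    s1, s2, s3 = [], [], []
    for i in range(0, len(x) - p + 1, p):
        pool_mean = sum(x[i:i + p]) / p
        for xi in x[i:i + p]:
            if xi >= d:
                s3.append(xi)
            elif pool_mean >= d:
                s2.append(xi)   # recoverable via the pooled result
            else:
                s1.append(xi)   # remains N/A
    return s1, s2, s3

s1, s2, s3 = partition_samples([3.1, 3.5, 4.0, 2.0, 2.5, 2.0], d=3.0)
print(s1, s2, s3)  # [2.5, 2.0] [2.0] [3.1, 3.5, 4.0]
```

Note that with *p* = 2 a pool mean at or above *d* guarantees the other member was itself observed, so every *S*_{2} element is indeed reconstructable.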

To illustrate the proposed method, we generated a random sample {*X*_{1},…, *X _{N}*

Cholesterol measurements were collected for 10 normal volunteers at a medical centre. The mean and standard deviation for total cholesterol were estimated to be $\widehat{\mu}=200.73$ and $\widehat{\sigma}=51.72$ respectively. The specimens were then randomly paired and the pooled specimens were assayed. For the purpose of demonstration, we artificially created a threshold (DT = 150) such that some numerical values could not be observed. In Table 1, we show the individual and pooled cholesterol values with and without the DT.

In this example, 20% of the individual measurements are below the threshold, whereas no pooled observations are below the DT. Applying the maximum likelihood method to the unpooled data with the DT, the asymptotically unbiased mean and standard deviation were estimated to be $\widehat{\mu}=196.13$ and $\widehat{\sigma}=56.38$ respectively. Although more costly, by assaying both pooled and unpooled specimens, we can reasonably estimate values below the DT (Table 1). Moreover, using both the reconstructed data and the unpooled data above the DT, the mean and standard deviation are estimated to be $\widehat{\mu}=198.99$ and $\widehat{\sigma}=53.44$.
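The censored-normal likelihood of Section 4 (one Φ((*d* − μ)/σ) factor per N/A and one density factor per observed value; cf. (A.1)) can be maximised numerically. A sketch assuming SciPy, checked on simulated data rather than the cholesterol example:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def censored_normal_mle(z_observed, n_below, d):
    """MLE of (mu, sigma) from values observed at/above the DT plus
    a count of N/As below it."""
    z = np.asarray(z_observed, dtype=float)

    def neg_loglik(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)  # parameterise to keep sigma > 0
        ll = n_below * norm.logcdf(d, loc=mu, scale=sigma)
        ll += norm.logpdf(z, loc=mu, scale=sigma).sum()
        return -ll

    res = minimize(neg_loglik, x0=[z.mean(), np.log(z.std())])
    return res.x[0], float(np.exp(res.x[1]))

# Simulated check: N(200, 50^2) censored at d = 150, as in the example.
rng = np.random.default_rng(0)
x = rng.normal(200.0, 50.0, size=5000)
mu_hat, sigma_hat = censored_normal_mle(x[x >= 150.0], int((x < 150.0).sum()), 150.0)
print(mu_hat, sigma_hat)  # close to 200 and 50
```

Parameterising σ on the log scale avoids boundary problems during the unconstrained optimisation.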

Although definition (1.1) shows the theoretical notation for pooled data, practically, pooling biological specimens can lead to additive pooling errors. In this section we use the maximum likelihood method from Section 4 and revise definition (1.1) to

$$\begin{array}{l}{X}^{(p)}=\{{X}_{1}^{(p)},{X}_{2}^{(p)},\dots ,{X}_{n}^{(p)}\}\\ =\{\frac{1}{p}({X}_{{k}_{11}}+\dots +{X}_{{k}_{1p}})+{\epsilon}_{1},\\ \frac{1}{p}({X}_{{k}_{21}}+\dots +{X}_{{k}_{2p}})+{\epsilon}_{2},\\ \dots \frac{1}{p}({X}_{{k}_{n1}}+\dots +{X}_{{k}_{np}})+{\epsilon}_{n}\},\end{array}$$

(5.1)

where pooling errors *ε*_{1},…, *ε _{n}* are independent
$N(0,{\sigma}_{\epsilon}^{2})$ distributed random variables and
$X\sim N(\mu ,{\sigma}_{X}^{2})$. Definition (5.1) accounts for the pooling errors which were ignored by definition (1.1). In order to investigate the robustness of our approach for addressing pooling errors, we executed MC simulations. Formally, we assumed that only

- Random sampling: We randomly chose *X*_{1}, …, *X*_{*N*/*p*} from the full sample and observed *Z*_{1}, …, *Z*_{*N*/*p*} because of the DT. The mean of *X _{i}* was estimated using the likelihood approach on the truncated data {*Z*_{1}, …, *Z*_{*N*/*p*}}.
- Pooling: We randomly chose biospecimen sets of size *p* with ${X}_{i}^{(p)}=\frac{1}{p}{\displaystyle \sum _{j=p(i-1)+1}^{pi}}{X}_{j}+{\epsilon}_{i},i=1,\dots ,\frac{N}{p}$ and observed ${Z}_{i}^{(p)},i=1,\dots ,\frac{N}{p}$ by (1.3). Again, the mean of *X _{i}* was estimated using the MLE based on ${Z}_{1}^{(p)},\dots ,{Z}_{N/p}^{(p)}$.
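The data-generating side of these two designs, with pooling error as in definition (5.1), can be sketched as follows; the DT location *d* = 0 and error size σ_{ε} = 0.5 are assumed for illustration, and estimation via the MLE of Section 4 is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma_x, sigma_eps = 1.0, 2.0, 0.5  # X ~ N(1, 4); pooling error N(0, 0.25)
N, p, d = 1000, 2, 0.0

x = rng.normal(mu, sigma_x, size=N)

# Random sampling design: assay N/p randomly chosen individuals, censor at d.
z_random = x[rng.choice(N, size=N // p, replace=False)]
z_random = np.where(z_random >= d, z_random, np.nan)

# Pooling design (5.1): pool in groups of p, add pooling error, censor at d.
x_pooled = x.reshape(-1, p).mean(axis=1) + rng.normal(0.0, sigma_eps, size=N // p)
z_pooled = np.where(x_pooled >= d, x_pooled, np.nan)

# Same number of assays; typically fewer N/As under pooling since d < mu here.
print(np.isnan(z_random).mean(), np.isnan(z_pooled).mean())
```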

The accuracy of the estimators ($\widehat{\mu}$, ${\widehat{\sigma}}_{X}$) of (*μ*, σ* _{X}*) is indicated by their MC variances. We assumed a biomarker distribution
${X}_{i}\sim N(\mu =1,{\sigma}_{X}^{2}=4)$,

Logarithm of the Monte Carlo estimators of $E{(\widehat{\mu}-\mu )}^{2}$ and $E{({\widehat{\sigma}}_{X}-{\sigma}_{X})}^{2}$ [graphs (a) and (b) respectively], where the pooling error is in effect.

The figure suggests that the conclusions in Sections 2 and 4 are correct for μ-estimation up to σ* _{ε}* ≤ σ

In addition to pooling error, studies are also subjected to random measurement error as a function of instrument calibration. Random measurement error occurs as a result of random instrument variability. One can account for random measurement error in the pooled or random sample designs through the use of standard techniques previously developed in the literature.^{18}^{,}^{19} These techniques include utilising error models, regression calibration models, validation studies or replication data to estimate and adjust for random measurement error. In addition, while not explicitly described here, standard information reported by a laboratory such as the coefficient of variation for the biomarker and reliability of the assay can be included in these models.

In this paper, we examine pooling and random sampling as strategies to evaluate biospecimens with a DT. These types of data are common in epidemiological research and include two types of values: numerical and non-numerical (i.e. N/A). Because numerical values yield more information than missing data, it is a goal of any researcher to minimise the number of N/A observations. Accordingly, we have explored theoretical methods as well as simulations where a pooling design is more efficient than a random sample. In addition, we show that the efficiency of the pooling design is dictated by the location of the DT but is independent of the distributional assumptions (e.g. gamma, t-distribution, lognormal). For all distributions, there is a range of DTs where the pooling strategy is more efficient than a random sample, because the pooling design provides more numerical information for inference. In fact, in some cases pooling is more efficient than using the full sample. We showed that whenever *EX* > *d* (i.e. more than 50% of observations are expected above the DT) pooling is always the most efficient sampling strategy, but other factors, such as the underlying distribution, must be considered when *EX* < *d*.

Certainly, a preliminary analysis of biospecimens with incomplete measurements, such as a test to see if *EX* > *d*, is appropriate. Towards this end, the unpooled–pooled strategy proposed in Section 3 is not only helpful for the evaluation of pooling errors but can also be applied to a first-stage data study. In addition, the efficiency of MLEs under each design can be evaluated.

Cost has been the main motivation either for pooling biological specimens or for randomly selecting a subset of individuals to be assayed. However, we have shown that in some cases, even when the full data are available, estimation based on pooled data is more efficient than the use of individual measurements when the assay has a DT. This is because of the greater number of observations above the DT under pooling, which can then be used in the estimation procedure. However, using unpooled data allows, for example, distributional assumptions to be tested, the location of the DT to be estimated and the expected number of observations below the DT to be computed. In addition, one is able to stratify the pooled samples by confounders in order to retain confounding and covariate information in the pooled samples. To take advantage of the strengths of each of these approaches, we proposed a pooled–unpooled resampling design. According to this design, in the first stage the whole patient population (or a random sample of it) is measured individually, and in the second stage, the patient population is pooled in groups of size *p* and these pooled samples are assayed. By employing this approach, we are able to reconstruct data that were unobserved in the first stage because of the DT.

The simple approach we propose exploits the statistical properties of averages, by physically grouping biological specimens, in order to overcome instrument limitations.

Following Gupta’s method,^{15} we obtain the MLE based on a sample with observations subject to a DT:

$$\begin{array}{l}L(\mathrm{\Omega})=\frac{{N}_{\mathrm{\Omega}}!}{({k}_{\mathrm{\Omega}}-1)!({N}_{\mathrm{\Omega}}-{k}_{\mathrm{\Omega}})!}\xb7{(Pr\{{X}^{\mathrm{\Omega}}<d\})}^{{k}_{\mathrm{\Omega}}}\xb7\\ \prod _{i:{X}_{i}^{\mathrm{\Omega}}\ge d}{f}_{\mathrm{\Omega}}({X}_{i}^{\mathrm{\Omega}}),\end{array}$$

(A.1)

where *f*_{Ω} is the density function of
${X}_{1}^{\mathrm{\Omega}}$; *N*_{Ω} is the size of set Ω; *k*_{Ω} is the number of *N*/*A* elements of set Ω; (Ω = *Z*, *X*^{Ω} = *X*_{1}) or (Ω = {*Z _{i}*,

$$\begin{array}{l}Pr\{{X}_{1}<d\}=\underset{-\infty}{\overset{d}{\int}}{f}_{z}(u)du,\phantom{\rule{0.38889em}{0ex}}Pr\{{X}_{1}^{(p)}<d\}=\underset{-\infty}{\overset{d}{\int}}{f}_{{z}^{(p)}}(u)du,\\ {f}_{z}(u)=\frac{1}{\sqrt{2\pi}\sigma}{e}^{\frac{-1}{2}{\left(\frac{u-\mu}{\sigma}\right)}^{2}}\phantom{\rule{0.38889em}{0ex}}\text{and}\phantom{\rule{0.38889em}{0ex}}{f}_{{z}^{(p)}}(u)=\frac{\sqrt{p}}{\sqrt{2\pi}\sigma}{e}^{\frac{-p}{2}{\left(\frac{u-\mu}{\sigma}\right)}^{2}}.\end{array}$$

Thus, *L*(Ω) is a function of unknown parameters μ and σ, say *L*(Ω) = *L*(μ, σ; Ω).

The target estimators $\widehat{\mu}$, $\widehat{\sigma}$ of μ, σ (where $(\widehat{\mu},\widehat{\sigma})=arg\underset{(\mu ,\sigma )}{max}L(\mu ,\sigma ;\mathrm{\Omega})$) are numerical solutions of the system

$$\{\frac{\partial log(L(\widehat{\mu},\widehat{\sigma};\mathrm{\Omega}))}{\partial \widehat{\mu}}=0,\phantom{\rule{0.38889em}{0ex}}\frac{\partial log(L(\widehat{\mu},\widehat{\sigma};\mathrm{\Omega}))}{\partial \widehat{\sigma}}=0.$$

The variances of the estimators ($\widehat{\mu}$, $\widehat{\sigma}$) of (μ, σ) can be found by inverting the Fisher information matrix. Using Gupta,^{15} we obtain, depending on the pooled/unpooled database,

$$\underset{N\to \infty}{lim}N\left[\begin{array}{cc}var(\widehat{\mu})& cov(\widehat{\mu},\widehat{\sigma})\\ cov(\widehat{\mu},\widehat{\sigma})& var(\widehat{\sigma})\end{array}\right]={\sigma}^{2}{w}_{j}{\left[\begin{array}{cc}{p}_{j}+\phi ({\eta}_{j})G({\eta}_{j})& \frac{1}{{g}_{j}^{1/2}}(\phi ({\eta}_{j})+{\eta}_{j}\phi ({\eta}_{j})G({\eta}_{j}))\\ \frac{1}{{g}_{j}^{1/2}}(\phi ({\eta}_{j})+{\eta}_{j}\phi ({\eta}_{j})G({\eta}_{j}))& \frac{1}{{g}_{j}}(2{p}_{j}+{\eta}_{j}\phi ({\eta}_{j})+{\eta}_{j}^{2}\phi ({\eta}_{j})G({\eta}_{j}))\end{array}\right]}^{-1},$$

(A.2)

where
${\eta}_{j}=\frac{{g}_{j}^{1/2}(d-\mu )}{\sigma}$; ${p}_{j}=\underset{{\eta}_{j}}{\overset{\infty}{\int}}\frac{1}{\sqrt{2\pi}}{e}^{-\frac{1}{2}{u}^{2}}du$; $G(\eta )=\frac{{e}^{-\frac{1}{2}{\eta}^{2}}}{\underset{-\infty}{\overset{\eta}{\int}}{e}^{-\frac{1}{2}{u}^{2}}du}+\eta $; $\phi (u)$ is the standard normal density function
$\frac{1}{\sqrt{2\pi}}{e}^{-\frac{1}{2}{u}^{2}}$; and if ($\widehat{\mu}$, $\widehat{\sigma}$) are based on

$$\{\begin{array}{c}\text{full}\phantom{\rule{0.16667em}{0ex}}\text{data}\phantom{\rule{0.16667em}{0ex}}\text{Z}\phantom{\rule{0.16667em}{0ex}}\text{then}\phantom{\rule{0.16667em}{0ex}}j=1,{g}_{1}=1,{w}_{1}=1;\\ \text{pooled}\phantom{\rule{0.16667em}{0ex}}\text{data}\phantom{\rule{0.16667em}{0ex}}{\text{Z}}^{(\text{p})}\phantom{\rule{0.16667em}{0ex}}\text{then}\phantom{\rule{0.16667em}{0ex}}j=2,{g}_{2}=p,{w}_{2}=1;\\ \text{random}\phantom{\rule{0.16667em}{0ex}}\text{sample}\phantom{\rule{0.16667em}{0ex}}\{{Z}_{i},i=1,\dots ,n\}\phantom{\rule{0.16667em}{0ex}}\text{then}\\ j=3,{g}_{3}=1,{w}_{3}=p.\end{array}$$

In accord with Section 3.2, we have

$$\begin{array}{l}L(\mu ,{\sigma}_{X};{S}_{1},{S}_{2},{S}_{3})\propto {(Pr\{{X}_{1}\in {S}_{1}\})}^{{m}_{1}}\\ \left(\prod _{i:{X}_{i}\in {S}_{2}}\frac{d}{du}{F}_{R}(u){\mid}_{u={X}_{i}}\right)\times \frac{1}{{\sigma}_{X}^{N-{m}_{1}-{m}_{2}}}\\ exp\left(-\frac{1}{2{\sigma}_{X}^{2}}\sum _{i:{X}_{i}\in {S}_{3}}{({Z}_{i}-\mu )}^{2}\right)=:{L}_{c}(\mu ,{\sigma}_{X}),\end{array}$$

(A.4)

where *m*_{1} and *m*_{2} are the numbers of elements of samples *S*_{1} and *S*_{2} respectively; by applying (3.1), (3.2) and convolution transforms, we obtain

$$\begin{array}{l}Pr\{{X}_{1}\in {S}_{1}\}=Pr\left\{{X}_{1}<d,\frac{{X}_{1}+{X}_{2}}{2}<d\right\}\\ =\frac{1}{2\pi {\sigma}_{X}^{2}}\underset{-\infty}{\overset{d}{\int}}\underset{-\infty}{\overset{2d-u}{\int}}exp\left(-\frac{{(t-\mu )}^{2}}{2{\sigma}_{X}^{2}}\right)dt\\ exp\left(-\frac{{(u-\mu )}^{2}}{2{\sigma}_{X}^{2}}\right)du,\\ \frac{d}{du}{F}_{R}(u){\mid}_{u={X}_{i}}=\frac{d}{du}Pr\{{X}_{1}<u,{X}_{1}<d,\frac{{X}_{1}+{X}_{2}}{2}>d\}{\mid}_{u={X}_{i}}\\ =\frac{1}{2\pi {\sigma}_{X}^{2}}\frac{d}{du}(\underset{-\infty}{\overset{min(u,d)}{\int}}\underset{2d-t}{\overset{\infty}{\int}}exp\left(-\frac{{(y-\mu )}^{2}}{2{\sigma}_{X}^{2}}\right)dy\\ exp\left(-\frac{{(t-\mu )}^{2}}{2{\sigma}_{X}^{2}}\right)dt){\mid}_{u={X}_{i}<d}\\ =\frac{1}{2\pi {\sigma}_{X}^{2}}\underset{2d-{X}_{i}}{\overset{\infty}{\int}}exp\left(-\frac{{(y-\mu )}^{2}}{2{\sigma}_{X}^{2}}\right)dy\\ exp\left(-\frac{{({X}_{i}-\mu )}^{2}}{2{\sigma}_{X}^{2}}\right).\end{array}$$

Now, the maximum likelihood estimators are $(\widehat{\mu},{\widehat{\sigma}}_{X})=arg\underset{(\mu ,{\sigma}_{X})}{max}log({L}_{c}(\mu ,{\sigma}_{X}))$.

The general maximum likelihood function is

$$\begin{array}{l}L(\mu ,{\sigma}_{X},{\sigma}_{\epsilon};\{{Z}_{1},\dots ,{Z}_{{n}_{1}}\}\cup \{{Z}_{1}^{(p)},\dots ,{Z}_{{n}_{2}}^{(p)}\})\propto \\ {\left(\frac{1}{\sqrt{2\pi {\sigma}_{X}^{2}}}\underset{-\infty}{\overset{d}{\int}}{e}^{-\frac{1}{2{\sigma}_{X}^{2}}{(u-\mu )}^{2}}du\right)}^{{m}_{1}}\times {\left(\frac{1}{\sqrt{2\pi ({\sigma}_{X}^{2}/p+{\sigma}_{\epsilon}^{2})}}\underset{-\infty}{\overset{d}{\int}}{e}^{-\frac{1}{2({\sigma}_{X}^{2}/p+{\sigma}_{\epsilon}^{2})}{(u-\mu )}^{2}}du\right)}^{{m}_{2}}\\ \times \frac{1}{{\sigma}_{X}^{{n}_{1}-{m}_{1}}{({\sigma}_{X}^{2}/p+{\sigma}_{\epsilon}^{2})}^{({n}_{2}-{m}_{2})/2}}\\ \times exp\left(-\frac{1}{2{\sigma}_{X}^{2}}\sum _{i:{Z}_{i}\ne N/A}{({Z}_{i}-\mu )}^{2}-\frac{1}{2({\sigma}_{X}^{2}/p+{\sigma}_{\epsilon}^{2})}\sum _{i:{Z}_{i}^{(p)}\ne N/A}{({Z}_{i}^{(p)}-\mu )}^{2}\right)=:{L}_{c}(\mu ,{\sigma}_{X},{\sigma}_{\epsilon}),\end{array}$$

(A.3)

where *m*_{1} and *m*_{2} are the numbers of N/As of sets {*Z _{i}*,

1. Laden F, Neas LM, Spiegelman D, Hankinson SE, Willett WC, Ireland K, et al. Predictors of plasma concentrations of DDE and pcbs in a group of U.S. women. Environmental Health Perspectives. 1999;107:75–81. [PMC free article] [PubMed]

2. Louis GM, Weiner JM, Whitcomb BW, Sperrazza R, Schisterman EF, Lobdell DT, et al. Environmental PCB exposure and risk of endometriosis. Human Reproduction. 2005;20:279–285. [PubMed]

3. Faraggi D, Reiser B, Schisterman EF. ROC curve analysis for biomarkers based on pooled assessments. Statistics in Medicine. 2003;22:2515–2527. [PubMed]

4. Liu A, Schisterman EF. Sample size and power calculation in comparing diagnostic accuracy of biomarkers with pooled assessments. Journal of Applied Statistics. 2004;31:41–51.

5. Liu A, Schisterman E. Comparison of diagnostic accuracy of biomarkers with pooled assessments. Biometrical Journal. 2003;45:631–644.

6. Schisterman EF, Perkins NJ, Liu A, Bondell H. Optimal cut-point and its corresponding Youden Index to discriminate individuals using pooled blood samples. Epidemiology. 2005;16:73–81. [PubMed]

7. Keeler E, Berwick D. Effects of pooled samples. Health Laboratory Science. 1976;13:121–128. [PubMed]

8. Weinberg CR, Umbach DM. Using pooled exposure assessment to improve efficiency in case-control studies. Biometrics. 1999;55:718–726. [PubMed]

9. Helsel D. Nondetects and Data Analysis: Statistics for Censored Environmental Data. Hoboken, NJ: John Wiley & Sons, Inc; 2005.

10. Finkelstein M, Verma D. Exposure estimation in the presence of nondetectable values: another look. AIHAJ. 2001;62:195–198. [PubMed]

11. Hornung R, Reed L. Estimation of average concentration in the presence of nondetectable values. Applied Occupational and Environmental Hygiene. 1990;5:46–51.

12. Schisterman EF, Vexler A, Whitcomb BW, Liu A. The limitations due to exposure detection limits for regression models. American Journal of Epidemiology. 2006;163:374–383. [PMC free article] [PubMed]

13. Lubin JH, Colt JS, Camann D, Davis S, Cerhan JR, Severson RK, et al. Epidemiologic evaluation of measurement data in the presence of detection limits. Environmental Health Perspectives. 2004;112:1691–1696. [PMC free article] [PubMed]

14. Richardson DB, Ciampi A. Effects of exposure measurement error when an exposure variable is constrained by a lower limit. American Journal of Epidemiology. 2003;157:355–363. [PubMed]

15. Gupta AK. Estimation of the mean and standard deviation of a normal population from a censored sample. Biometrika. 1952;39:260–273.

16. Chapman DG. Estimating the parameters of a truncated gamma distribution. Annals of Mathematical Statistics. 1956;27:498–506.

17. Schisterman EF, Faraggi D, Reiser B, Trevisan M. Statistical inference for the area under the receiver operating characteristic curve in the presence of random measurement error. American Journal of Epidemiology. 2001;154:174–179. [PubMed]

18. Carroll RJ, Ruppert D, Stefanki LA. Measurement Error in Nonlinear Models. Boca Raton, FL: Chapman & Hall; 1995.

19. Gustafson P. Measurement Error and Misclassification in Statistics and Epidemiology. Boca Raton, FL: Chapman & Hall; 2004.
