Results 1-25 (470)
 

1.  Two-stage designs for Phase 2 dose-finding trials 
Statistics in medicine  2012;31(24):2872-2881.
SUMMARY
We propose a Bayesian adaptive two-stage design for the efficient estimation of the maximum dose or the minimum effective dose in a dose-finding trial. The new design allocates subjects in stage two according to the posterior distribution of the target dose location. Simulations show that the proposed two-stage design is superior to equal allocation and to a two-stage strategy where only one dose is left in the second stage.
doi:10.1002/sim.5365
PMCID: PMC4090751  PMID: 22865626
Dose ranging; Minimum effective dose; Maximum dose; Phase 2 trials; two-stage designs
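A minimal sketch of the stage-two allocation idea, assuming independent Beta-binomial models at each dose and a minimum-effective-dose reading of the target (the lowest dose whose response probability reaches a target rate); the dose levels, stage-1 counts, and target below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage-1 data (illustrative): responders / subjects at four candidate doses
doses = [10, 20, 40, 80]
responders = np.array([1, 3, 6, 7])
subjects = np.array([10, 10, 10, 10])
target = 0.5        # assumed response rate defining the minimum effective dose
n_stage2 = 40       # stage-2 subjects to allocate

# Beta(1,1) priors + binomial likelihoods -> independent Beta posteriors
draws = rng.beta(1 + responders[:, None],
                 1 + subjects[:, None] - responders[:, None],
                 size=(len(doses), 10_000))

# Posterior of the target-dose location: for each posterior draw, the lowest
# dose whose response probability reaches the target (given one exists)
exceeds = draws >= target
any_hit = exceeds.any(axis=0)
med_idx = exceeds.argmax(axis=0)          # first True along the dose axis
probs = np.bincount(med_idx[any_hit], minlength=len(doses)) / any_hit.sum()

# Allocate stage-2 subjects in proportion to that posterior
alloc = np.rint(n_stage2 * probs).astype(int)
print(dict(zip(doses, alloc)))
```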
2.  Comparing and Combining Data across Multiple Sources via Integration of Paired-sample Data to Correct for Measurement Error 
Statistics in medicine  2012;31(28):3748-3759.
Summary
In biomedical research such as the development of vaccines for infectious diseases or cancer, measures from the same assay are often collected from multiple sources or laboratories. Measurement error that may vary between laboratories needs to be adjusted for when combining samples across laboratories. We incorporate such adjustment in comparing and combining independent samples from different labs via integration of external data, collected on paired samples from the same two laboratories. We propose: 1) normalization of individual-level data from the two laboratories to the same scale via the expectation of true measurements conditioning on the observed; 2) comparison of mean assay values between two independent samples in the Main study accounting for inter-source measurement error; and 3) sample size calculations for the paired-sample study so that hypothesis-testing error rates are appropriately controlled in the Main study comparison. Because the goal is not to estimate the true underlying measurements but to combine data on the same scale, our proposed methods do not require that the true values for the error-prone measurements be known in the external data. Simulation results under a variety of scenarios demonstrate satisfactory finite-sample performance of our proposed methods when measurement errors vary. We illustrate our methods using real ELISpot assay data generated by two HIV vaccine laboratories.
doi:10.1002/sim.5446
PMCID: PMC4087038  PMID: 22764070
assay comparison; inter-laboratory measurement error; multiple data sources; regression calibration
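The normalization step can be approximated by ordinary regression calibration on the external paired-sample data, as in this hedged sketch; the error model, coefficients, and sample sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# External calibration data: the same 200 specimens assayed in both labs
true_vals = rng.normal(5.0, 1.0, 200)
lab_a = true_vals + rng.normal(0.0, 0.3, 200)               # reference-scale lab
lab_b = 0.8 * true_vals + 1.0 + rng.normal(0.0, 0.5, 200)   # shifted/rescaled lab

# Calibration: estimate E[lab-A-scale value | observed lab-B value]
slope, intercept = np.polyfit(lab_b, lab_a, 1)

# Main-study samples measured only in lab B, mapped onto lab A's scale
main_b = 0.8 * rng.normal(5.0, 1.2, 50) + 1.0 + rng.normal(0.0, 0.5, 50)
main_b_on_a_scale = intercept + slope * main_b
```

The paper's conditional-expectation normalization additionally propagates the calibration uncertainty into the Main-study comparison and the paired-study sample size calculation; that machinery is not reproduced here.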
3.  Combining markers with and without the limit of detection 
Statistics in medicine  2013;33(8):1307-1320.
In this paper, we consider the combination of markers with and without a limit of detection (LOD). An LOD is often encountered when measuring proteomic markers: because of the limited detection capability of the measuring equipment or instrument, markers at relatively low levels are difficult to quantify. Suppose that, after some monotonic transformation, the marker values approximately follow multivariate normal distributions. We propose to estimate the distribution parameters while taking the LOD into account, and then to combine markers using the results from linear discriminant analysis. Our simulation results show that the ROC curve parameter estimates generated by the proposed method are much closer to the truth than those obtained by applying linear discriminant analysis without considering the LOD. In addition, we propose a procedure to select and combine a subset of markers when many candidate markers are available. The procedure, which is based on the correlation among markers, differs from the common understanding that the most accurate markers should be selected for combination. The simulation studies show that the accuracy of a combined marker can be strongly affected by the correlation among marker measurements. Our methods are applied to a protein pathway dataset to combine proteomic biomarkers that distinguish cancer patients from non-cancer patients.
doi:10.1002/sim.6027
PMCID: PMC4084760  PMID: 24132938
ROC curve; diagnostic accuracy; limit of detection; linear discriminant analysis
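A sketch of the univariate building block, assuming marker values below a known LOD are left-censored normal; the paper's multivariate estimation and marker-selection procedure are more involved.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)
lod = 0.5
x = rng.normal(1.0, 1.0, 300)            # latent marker values (illustrative)
below = x < lod                          # these fall below the detection limit
obs = x[~below]

def negloglik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    ll_obs = stats.norm.logpdf(obs, mu, sigma).sum()           # detected values
    ll_cens = below.sum() * stats.norm.logcdf(lod, mu, sigma)  # mass below LOD
    return -(ll_obs + ll_cens)

res = optimize.minimize(negloglik, x0=[obs.mean(), np.log(obs.std())])
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
# With per-class mean vectors and a pooled covariance estimated this way,
# markers are combined along the Fisher/LDA direction w = Sigma^{-1}(mu1 - mu0).
```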
4.  The National Children’s Study (NCS) Establishment and Protection of the Inferential Base 
Statistics in medicine  2010;29(13):1360-1367.
SUMMARY
The National Children’s Study is a unique study of environment and health that will follow a cohort of 100,000 women from prior to or early in pregnancy and then their children until 21 years of age. The NCS cohort will be a national multi-stage probability sample, using a U.S. Census Bureau geographic sampling frame unrelated to factors that might influence selection into the sample (e.g. access to health care). I present the case for the use of a national probability sample as the design base for the NCS, arguing that selection of the original cohort should be as free from selection bias as possible. The dangers of using a selected or non-probability sample are demonstrated by an example of its use in shaping the clinical management of children with febrile seizures, an infrequent disorder, where the resulting guidance was wrong for decades. In addition, I stress the importance of avoiding selection bias that might occur after the initial selection of the cohort, and describe the NCS approach to doing so. Selection and maintenance of an unselected cohort is an important element for the validity of inferences in this major undertaking.
doi:10.1002/sim.3635
PMCID: PMC4084869  PMID: 20527009
National Children’s Study (NCS); Selection bias; National probability sample; Long term follow-up; Inference from observational studies
5.  Homogeneity tests of clustered diagnostic markers with applications to the BioCycle Study 
Statistics in medicine  2012;31(28):3638-3648.
Diagnostic trials often require a test of homogeneity among several markers. Such a test may be needed both for power determination during the design phase and in the initial analysis stage. However, no formal method is available for power and sample size calculation when the number of markers is greater than two and marker measurements are clustered within subjects. This article presents two procedures for testing the accuracy of clustered diagnostic markers. The first procedure is a test of homogeneity among continuous markers based on a global null hypothesis of equal accuracy. The result under the alternative provides the explicit distribution for the power and sample size calculation. The second procedure is a simultaneous pairwise comparison test based on weighted areas under the receiver operating characteristic curves. This test is particularly useful if the homogeneity test finds a global difference among markers. We apply our procedures to the BioCycle Study, designed to assess and compare the accuracy of hormone and oxidative stress markers in distinguishing women with ovulatory menstrual cycles from those without.
doi:10.1002/sim.5391
PMCID: PMC4084872  PMID: 22733707
ROC curve; biomarker; homogeneity test; sample size
6.  Distribution-free Models for Longitudinal Count Responses with Overdispersion and Structural Zeros 
Statistics in medicine  2012;32(14):2390-2405.
Summary
Overdispersion and structural zeros are two major manifestations of departure from the Poisson assumption when modeling count responses using Poisson loglinear regression. As noted in a large body of literature, ignoring such departures could yield bias and lead to wrong conclusions. Different approaches have been developed to tackle these two major problems. In this paper, we review available methods for dealing with overdispersion and structural zeros within a longitudinal data setting and propose a distribution-free modeling approach to address the limitations of these methods by utilizing a new class of functional response models (FRM). We illustrate our approach with both simulated and real study data.
doi:10.1002/sim.5691
PMCID: PMC3806502  PMID: 23239019
functional response models; monotone missing data pattern; negative binomial; zero-inflated Poisson; weighted generalized estimating equations
7.  Design considerations for case series models with exposure onset measurement error 
Statistics in medicine  2012;32(5):772-786.
Summary
The case series model allows for estimation of the relative incidence of events, such as cardiovascular events, within a pre-specified time window after an exposure, such as an infection. The method requires only cases (individuals with events) and controls for all fixed/time-invariant confounders. The measurement error case series model extends the original case series model to handle imperfect data, where the timing of an infection (exposure) is not known precisely. In this work, we propose a method for power/sample size determination for the measurement error case series model. Extensive simulation studies are used to assess the accuracy of the proposed sample size formulas. We also examine the magnitude of the relative loss of power due to exposure onset measurement error, compared to the ideal situation where the time of exposure is measured precisely. To facilitate the design of case series studies, we provide publicly available web-based tools for determining power/sample size for both the measurement error case series model as well as the standard case series model.
doi:10.1002/sim.5552
PMCID: PMC4075338  PMID: 22911898
case series models; exposure timing measurement error; longitudinal observational database; non-homogeneous Poisson process; sample size
8.  Modelling intervention effects after cancer relapses 
Statistics in medicine  2005;24(24):3959-3975.
Summary
This article addresses the problem of incorporating information regarding the effects of treatments or interventions into models for repeated cancer relapses. In contrast to many existing models, our approach permits the impact of interventions to differ after each relapse. We adopt the general model for recurrent events proposed by Peña and Hollander, in which the effect of interventions is represented by an effective age process acting on the baseline hazard rate function. To accommodate the situation of cancer relapse, we propose an effective age function that encodes three possible therapeutic responses: complete remission, partial remission, and null response. The proposed model also incorporates the effect of covariates, the impact of previous relapses, and heterogeneity among individuals. We use our model to analyse the times to relapse for 63 patients with a particular subtype of indolent lymphoma and compare the results to those obtained using existing methods.
doi:10.1002/sim.2394
PMCID: PMC4066387  PMID: 16320269
recurrent events; effective age process; intensity models; cancer recurrence model
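A sketch of an effective age process with the three therapeutic responses described; the encoding of responses and the partial-remission scale factor are assumptions for illustration, not the paper's parameterization.

```python
def effective_age(t, relapse_times, responses, partial_factor=0.5):
    """Effective age fed to the baseline hazard at calendar time t.

    responses[k] in {"complete", "partial", "null"} is the intervention
    effect after the k-th relapse (illustrative encoding).
    """
    last, eff_at_last = 0.0, 0.0
    for s, resp in zip(relapse_times, responses):
        if s > t:
            break
        eff_before = eff_at_last + (s - last)          # age just before relapse
        if resp == "complete":
            eff_at_last = 0.0                          # remission: clock restarts
        elif resp == "partial":
            eff_at_last = partial_factor * eff_before  # clock set back partway
        else:
            eff_at_last = eff_before                   # null response: no change
        last = s
    return eff_at_last + (t - last)

print(effective_age(10.0, [4.0, 7.0], ["complete", "partial"]))
```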
9.  Parametric latent class joint model for a longitudinal biomarker and recurrent events 
Statistics in medicine  2007;26(29):5285-5302.
SUMMARY
A joint model for a longitudinal biomarker and recurrent events is proposed. This general model accommodates the effects of covariates on the biomarker and event processes, the effects of accumulating event occurrences, and effects caused by interventions after each event occurrence. Association between the biomarker and recurrent event processes is captured through a latent class structure, which also serves to handle an underlying heterogeneous population. We use the EM algorithm for maximum likelihood estimation of the model parameters and a penalized likelihood measure to determine the number of latent classes. This joint model is validated by simulation and illustrated with a data set from an epileptic seizure study.
doi:10.1002/sim.2915
PMCID: PMC4066416  PMID: 17542002
latent class model; recurrent events; joint model; longitudinal biomarker; heterogeneous population
10.  A probabilistic algorithm for robust interference suppression in bioelectromagnetic sensor data 
Statistics in medicine  2007;26(21):3886-3910.
SUMMARY
Magnetoencephalography (MEG) and electroencephalography (EEG) sensor measurements are often contaminated by several sources of interference, such as background activity from outside the regions of interest, biological and non-biological artifacts, and sensor noise. Here, we introduce a probabilistic graphical model and an inference algorithm based on variational-Bayes expectation-maximization for estimating the activity of interest through interference suppression. The algorithm exploits the fact that electromagnetic recording data can often be partitioned into baseline periods, when only interferences are present, and active periods, when activity of interest is present in addition to interferences. The algorithm is found to be robust, efficient, and significantly superior to many existing approaches on real and simulated data.
doi:10.1002/sim.2941
PMCID: PMC4060743  PMID: 17546712
magnetoencephalography; electroencephalography; graphical models
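The paper's method is a variational-Bayes factor model; the sketch below substitutes a simpler stand-in that uses the same partitioning idea, learning an interference subspace from baseline-only data and projecting it out of the active period. The dimensions and rank k are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sensors, n_base, n_active, k = 32, 1000, 500, 5

baseline = rng.normal(size=(n_sensors, n_base))    # interference + noise only
active = rng.normal(size=(n_sensors, n_active))    # interference + activity

# Interference subspace: top-k principal directions of the baseline period
centered = baseline - baseline.mean(axis=1, keepdims=True)
u, s, _ = np.linalg.svd(centered, full_matrices=False)
B = u[:, :k]                                       # (n_sensors, k) basis

# Remove the interference subspace from the active-period recordings
cleaned = active - B @ (B.T @ active)
```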
11.  Adaptive Prior Variance Calibration in the Bayesian Continual Reassessment Method 
Statistics in medicine  2012;32(13):2221-2234.
Use of the Continual Reassessment Method (CRM) and other model-based design approaches in Phase I clinical trials has increased, owing to the ability of the CRM to identify the maximum tolerated dose (MTD) better than the 3+3 method. However, the CRM can be sensitive to the variance selected for the prior distribution of the model parameter, especially when a small number of patients are enrolled. While methods have emerged to adaptively select skeletons and to calibrate the prior variance only at the beginning of a trial, no approach has been developed to adaptively calibrate the prior variance throughout a trial. We propose three systematic approaches to adaptively calibrate the prior variance during a trial and compare them via simulation to methods that calibrate the variance at the beginning of a trial.
doi:10.1002/sim.5621
PMCID: PMC3561509  PMID: 22987660
adaptive design; Bayes factor; dose-finding study; dose-escalation study
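For orientation, a minimal one-parameter power-model CRM posterior update, assuming the common skeleton parameterization p_d(a) = skeleton[d] ** exp(a) with a N(0, prior_sd^2) prior; the paper's adaptive rules for recalibrating prior_sd during the trial are not reproduced, and the numbers are illustrative.

```python
import numpy as np

skeleton = np.array([0.05, 0.10, 0.20, 0.35, 0.50])  # prior toxicity guesses
target = 0.25
prior_sd = 1.34           # the prior standard deviation being calibrated

# Accumulated observations: (dose index, toxicity indicator)
data = [(0, 0), (1, 0), (1, 1), (2, 0)]

# Grid posterior for the power model p_d(a) = skeleton[d] ** exp(a)
a = np.linspace(-4.0, 4.0, 2001)
da = a[1] - a[0]
log_post = -0.5 * (a / prior_sd) ** 2     # N(0, prior_sd^2) prior, up to a constant
for d, y in data:
    p = skeleton[d] ** np.exp(a)
    log_post += y * np.log(p) + (1 - y) * np.log(1 - p)
post = np.exp(log_post - log_post.max())
post /= post.sum() * da

# Posterior-mean toxicity at each dose; recommend the dose closest to target
p_hat = np.array([(skeleton[d] ** np.exp(a) * post).sum() * da
                  for d in range(len(skeleton))])
next_dose = int(np.argmin(np.abs(p_hat - target)))
```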
12.  Nonparametric ROC Summary Statistics for Correlated Diagnostic Marker Data 
Statistics in medicine  2012;32(13):2209-2220.
We propose efficient nonparametric statistics to compare medical imaging modalities in multi-reader multi-test data and to compare markers in longitudinal ROC data. The proposed methods are based on the weighted area under the ROC curve, which includes the area under the curve and the partial area under the curve as special cases. The methods maximize the local power for detecting the difference between imaging modalities. The asymptotic results of the proposed methods are developed under a complex correlation structure. Our simulation studies show that the proposed statistics yield substantially better power than existing statistics. We applied the proposed statistics to an endometriosis diagnosis study.
doi:10.1002/sim.5654
PMCID: PMC3578098  PMID: 23055248
ROC curve; Optimal weights; Wilcoxon statistics; Correlated data
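A sketch of the empirical weighted-AUC building block: a uniform weight recovers the usual Mann-Whitney AUC, and an indicator weight on the high-specificity region gives an (unnormalized) partial AUC. The weighting scheme and data are illustrative, not the paper's optimal weights.

```python
import numpy as np

def empirical_auc(cases, controls):
    # Mann-Whitney form: P(case > control) + 0.5 * P(tie)
    diff = cases[:, None] - controls[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def partial_auc(cases, controls, fpr_max=0.2):
    # Unnormalized pAUC over FPR in [0, fpr_max]: correctly ordered pairs
    # whose control value lies in the high-specificity region
    thr = np.quantile(controls, 1 - fpr_max)
    diff = cases[:, None] - controls[None, :]
    region = controls[None, :] >= thr
    return ((diff > 0) & region).mean()

rng = np.random.default_rng(8)
cases, controls = rng.normal(1, 1, 50), rng.normal(0, 1, 80)
print(empirical_auc(cases, controls), partial_auc(cases, controls))
```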
13.  The Trend Odds Model for Ordinal Data 
Statistics in medicine  2012;32(13):2250-2261.
Ordinal data appear in a wide variety of scientific fields. These data are often analyzed using ordinal logistic regression models that assume proportional odds. When this assumption is not met, it may be possible to capture the lack of proportionality using a constrained structural relationship between the odds and the cut-points of the ordinal values (Peterson and Harrell, 1990). We consider a trend odds version of this constrained model, where the odds parameter increases or decreases in a monotonic manner across the cut-points. We demonstrate algebraically and graphically how this model is related to latent logistic, normal, and exponential distributions. In particular, we find that scale changes in these potential latent distributions are consistent with the trend odds assumption, with the logistic and exponential distributions having odds that increase in a linear or nearly linear fashion. We show how to fit this model using SAS Proc Nlmixed, and perform simulations under proportional odds and trend odds processes. We find that the added complexity of the trend odds model gives improved power over the proportional odds model when there are moderate to severe departures from proportionality. A hypothetical dataset is used to illustrate the interpretation of the trend odds model, and we apply this model to a Swine Influenza example where the proportional odds assumption appears to be violated.
doi:10.1002/sim.5689
PMCID: PMC3650098  PMID: 23225520
Non-proportional odds; Constrained cumulative odds; Influenza; Latent distributions; Logistic distribution
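A hedged sketch of a trend odds fit in Python with scipy rather than SAS Proc Nlmixed, assuming one binary covariate and a log-odds effect that grows linearly in the cut-point index; this is one instance of the constrained model, not the authors' exact code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(4)
J = 4                                      # ordinal categories 0..3
n = 400
x = rng.integers(0, 2, n)                  # binary covariate
# Simulate from a trend odds process: the effect grows with the cut-point
alphas_true, beta_true, gamma_true = np.array([-1.0, 0.0, 1.0]), 0.5, 0.4
u = rng.uniform(size=n)
cum = expit(alphas_true[None, :] - (beta_true + gamma_true * np.arange(3)) * x[:, None])
y = (u[:, None] > cum).sum(axis=1)

def negloglik(theta):
    # Ordered cut-points via a first cut plus positive increments
    a = np.cumsum(np.concatenate([theta[:1], np.exp(theta[1:J - 1])]))
    beta, gamma = theta[J - 1], theta[J]
    cum = expit(a[None, :] - (beta + gamma * np.arange(J - 1)) * x[:, None])
    cdf = np.hstack([np.zeros((n, 1)), cum, np.ones((n, 1))])
    probs = np.clip(np.diff(cdf, axis=1), 1e-12, None)
    return -np.log(probs[np.arange(n), y]).sum()

fit = minimize(negloglik, x0=np.zeros(J + 1), method="Nelder-Mead")
print(fit.x[J - 1], fit.x[J])              # estimates of beta and the trend gamma
```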
14.  A Bayesian Model for Misclassified Binary Outcomes and Correlated Survival Data with Applications to Breast Cancer 
Statistics in medicine  2012;32(13):2320-2334.
Breast cancer patients may experience ipsilateral breast tumor relapse (IBTR) after breast conservation therapy. IBTR is classified as either true local recurrence (TR) or new ipsilateral primary tumor (NP). The correct classification of IBTR status has significant implications in therapeutic decision-making and patient management. However, the diagnostic tests to classify IBTR are imperfect and prone to misclassification. In addition, some observed survival data (e.g., time to relapse, time from relapse to death) are strongly correlated with IBTR status. We present a Bayesian approach to model the potentially misclassified IBTR status and the correlated survival information. The inference is conducted using a Bayesian framework via Markov Chain Monte Carlo simulation implemented in WinBUGS. Extensive simulation shows that the proposed method corrects biases and provides more efficient estimates for the covariate effects on the probability of IBTR and the diagnostic test accuracy. Moreover, our method provides useful subject-specific patient prognostic information. Our method is motivated by, and applied to, a dataset of 397 breast cancer patients.
doi:10.1002/sim.5629
PMCID: PMC3897718  PMID: 22996169
Binomial regression; Cox model; Frailty model; Latent class model; Markov chain Monte Carlo; Tumor relapse
15.  Frailty modeling of age-incidence curves of osteosarcoma and Ewing sarcoma among individuals younger than 40 years 
Statistics in medicine  2012;31(28):3731-3747.
The Armitage–Doll model with random frailty can fail to describe incidence rates of rare cancers influenced by an accelerated biological mechanism at some, possibly short, period of life. We propose a new model to account for this influence. Osteosarcoma and Ewing sarcoma are primary bone cancers with characteristic age-incidence patterns that peak in adolescence. We analyze SEER incidence data for whites younger than 40 years diagnosed during the period 1975–2005, with an Armitage–Doll model with compound Poisson frailty. A new model treating the adolescent growth spurt as the accelerated mechanism affecting cancer development is a significant improvement over that model. We also model the incidence rate conditioning on the event of having developed the cancers before the age of 40 years and compare the results with those predicted by the Armitage–Doll model. Our results support existing evidence of an underlying susceptibility for the two cancers among a very small proportion of the population. In addition, the modeling results suggest that susceptible individuals with a rapid growth spurt acquire the cancers sooner than they otherwise would have, if their growth had been slower. The new model is suitable for modeling incidence rates of rare diseases influenced by an accelerated biological mechanism.
doi:10.1002/sim.5441
PMCID: PMC4052707  PMID: 22744906
Frailty; osteosarcoma; Ewing sarcoma; growth spurt; susceptibility; survival analysis
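For orientation, the classical Armitage–Doll multistage model gives an individual hazard that is polynomial in age, and a multiplicative frailty Z yields the population hazard by conditioning on survival (standard notation, not the paper's exact formulation):

```latex
h(t \mid Z) = Z\, c\, t^{k-1}, \qquad
h_{\mathrm{pop}}(t) = c\, t^{k-1}\, \mathbb{E}\left[ Z \mid T > t \right].
```

With a compound Poisson frailty, a point mass of Z at zero leaves a non-susceptible fraction, which is how this family accommodates an underlying susceptible subpopulation.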
16.  Estimating the Efficacy of an Interstitial Cystitis/Painful Bladder Syndrome Medication in a Randomized Trial with Both Non-adherence and Loss to Follow-up 
Statistics in medicine  2012.
We are motivated by a randomized clinical trial evaluating the efficacy of amitriptyline for the treatment of interstitial cystitis and painful bladder syndrome in treatment-naïve patients. In the trial, both the non-adherence rate and the rate of loss to follow-up are fairly high. To estimate the effect of the treatment received on the outcome, we use the generalized structural mean model (GSMM), originally proposed to deal with non-adherence, to adjust for both non-adherence and loss to follow-up. In the model, loss to follow-up is handled by weighting the estimation equations for GSMM with one over the probability of not being lost to follow-up, estimated using a logistic regression model. We re-analyzed the data from the trial and found a possible benefit of amitriptyline when administered at a high-dose level.
doi:10.1002/sim.5702
PMCID: PMC3868645  PMID: 23225539
causal inference; non-adherence; loss to follow-up; inverse probability weighting; structural mean model
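A sketch of the weighting step only, assuming a logistic model for remaining in follow-up; the GSMM estimating equations themselves are not shown, and all variables are simulated placeholders.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 300
X = sm.add_constant(rng.normal(size=(n, 2)))   # baseline covariates
r = rng.binomial(1, 0.8, n)                    # 1 = outcome observed at follow-up

# Model P(not lost to follow-up | X), then weight completers by its inverse
fit = sm.Logit(r, X).fit(disp=0)
p_observed = fit.predict(X)
weights = np.where(r == 1, 1.0 / p_observed, 0.0)
# These weights multiply the GSMM estimating equations for the completers.
```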
17.  Principal Interactions Analysis for Repeated Measures Data: Application to Gene-Gene, Gene-Environment Interactions 
Statistics in medicine  2012;31(22):2531-2551.
Many existing cohorts with longitudinal data on environmental exposures, occupational history, lifestyle/behavioral characteristics, and health outcomes have collected genetic data in recent years. In this paper, we consider the problem of modeling gene-gene and gene-environment interactions with repeated measures data on a quantitative trait. We review possibilities of using the classical models proposed by Tukey (1949) and Mandel (1961) on the cell means of a two-way classification array for such data. Whereas these models are effective for detecting interactions in the presence of main effects, they fail miserably if the interaction structure is misspecified. We explore a more robust class of interaction models that are based on a singular value decomposition of the cell-means residual matrix after fitting the additive main effect terms. This class of additive main effects and multiplicative interaction (AMMI) models (Gollob, 1968) provides useful summaries for subject-specific and time-varying effects, as represented by their contribution to the leading eigenvalues of the interaction matrix. It also makes the interaction structure more amenable to geometric representation. We call this analysis “Principal Interactions Analysis” (PIA). While the paper primarily focuses on a cell-mean based analysis of repeated measures outcomes, we also introduce resampling-based methods that appropriately recognize the unbalanced and longitudinal nature of the data instead of reducing the response to cell means. The proposed methods are illustrated using data from the Normative Aging Study, a longitudinal cohort study of Boston-area veterans since 1963. We carry out simulation studies under an array of classical interaction models and common epistasis models to illustrate the properties of the PIA procedure in comparison to the classical alternatives.
doi:10.1002/sim.5315
PMCID: PMC4046647  PMID: 22415818
biplot; column interaction; eigenvalue; epistasis; intraclass correlation; likelihood-ratio test; non-additivity; permutation tests; pseudo F-test; row interaction; singular vector; Wishart matrix
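A sketch of the AMMI decomposition behind PIA on an illustrative two-way cell-means table (genotype by exposure category); real analyses would use the study's cell means and the paper's resampling-based tests.

```python
import numpy as np

rng = np.random.default_rng(6)
cell_means = rng.normal(size=(3, 4))      # genotype x exposure (illustrative)

# Remove the additive main effects: grand mean plus row and column effects
grand = cell_means.mean()
row = cell_means.mean(axis=1, keepdims=True) - grand
col = cell_means.mean(axis=0, keepdims=True) - grand
residual = cell_means - grand - row - col

# Multiplicative interaction terms: SVD of the interaction residual matrix
u, s, vt = np.linalg.svd(residual)
leading_share = s[0] ** 2 / np.sum(s ** 2)   # interaction carried by term 1
```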
18.  Time-varying Coefficient Proportional Hazards Model with Missing Covariates 
Statistics in medicine  2012;32(12):2013-2030.
SUMMARY
Missing covariates often arise in biomedical studies with survival outcomes. Existing approaches for missing covariates generally assume proportional hazards. The proportionality assumption may not hold in practice, as illustrated by data from a mouse leukemia study with covariate effects changing over time. To tackle this restriction, we study the missing data problem under the varying-coefficient proportional hazards model. Based on the local partial likelihood approach, we develop inverse selection probability weighted estimators. We consider reweighting and augmentation techniques for possible improvement of efficiency and robustness. The proposed estimators are assessed via simulation studies and illustrated by application to the mouse leukemia data.
doi:10.1002/sim.5652
PMCID: PMC3574968  PMID: 23044762
augmentation; inverse probability weighting; local partial likelihood; reweighting
19.  Sample size estimation in educational intervention trials with subgroup heterogeneity in only one arm 
Statistics in medicine  2012;32(12):2140-2154.
We present closed form sample size and power formulas motivated by the study of a psycho-social intervention in which the experimental group has the intervention delivered in teaching subgroups while the control group receives usual care. This situation is different from the usual clustered randomized trial since subgroup heterogeneity only exists in one arm. We take this modification into consideration and present formulas for the situation in which we compare a continuous outcome at both a single point in time and longitudinally over time. In addition, we present the optimal combination of parameters such as the number of subgroups and number of time points for minimizing sample size and maximizing power subject to constraints such as the maximum number of measurements that can be taken (i.e. a proxy for cost).
doi:10.1002/sim.5678
PMCID: PMC3615113  PMID: 23172724
Sample size; heterogeneous subgroups; clinical trials; longitudinal data
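The distinguishing feature is that the usual design effect applies to only one arm. With per-arm size n, teaching-subgroup size m, intraclass correlation rho, and common residual variance sigma^2 (notation assumed here), the variance of the difference in means at a single time point is

```latex
\operatorname{Var}(\bar{Y}_E - \bar{Y}_C)
  = \frac{\sigma^2 \{ 1 + (m-1)\rho \}}{n} + \frac{\sigma^2}{n},
```

so the standard normal-approximation sample size formula inflates only the experimental-arm term by 1 + (m-1) rho.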
20.  A Bayesian decision-theoretic sequential response-adaptive randomization design 
Statistics in medicine  2013;32(12).
We propose a class of phase II clinical trial designs with sequential stopping and adaptive treatment allocation to evaluate treatment efficacy. Our work is based on two-arm (control and experimental treatment) designs with binary endpoints. Our overall goal is to construct more efficient and ethical randomized phase II trials by reducing the average sample sizes and increasing the percentage of patients assigned to the better treatment arms of the trials. The designs combine the Bayesian decision-theoretic sequential approach with adaptive randomization procedures in order to achieve simultaneous goals of improved efficiency and ethics. The design parameters represent the costs of different decisions, e.g., the decisions for stopping or continuing the trials. The parameters enable us to incorporate the actual costs of the decisions in practice. The proposed designs allow the clinical trials to stop early for either efficacy or futility. Furthermore, the designs assign more patients to better treatment arms by applying adaptive randomization procedures. We develop an algorithm based on the constrained backward induction and forward simulation to implement the designs. The algorithm overcomes the computational difficulty of the backward induction method, thereby making our approach practicable. The designs result in trials with desirable operating characteristics under the simulated settings. Moreover, the designs are robust with respect to the response rate of the control group.
doi:10.1002/sim.5735
PMCID: PMC3873748  PMID: 23315678
sequential method; response adaptive randomization; Bayesian decision–theoretic approach; backward induction; forward simulation
21.  Phase II clinical trials with time-to-event endpoints: Optimal two-stage designs with one-sample log-rank test 
Statistics in medicine  2013;33(12):2004-2016.
Summary
Phase II clinical trials are often conducted to determine whether a new treatment is sufficiently promising to warrant a major controlled clinical evaluation against a standard therapy. We consider single-arm phase II clinical trials with right-censored survival time responses, where the ordinary one-sample log-rank test is commonly used for testing the treatment efficacy. For planning such clinical trials, this paper presents two-stage designs that are optimal in the sense that the expected sample size is minimized if the new regimen has low efficacy, subject to constraints on the type I and type II error rates. Two-stage designs that minimize the maximal sample size are also determined. Optimal and minimax designs for a range of design parameters are tabulated, along with examples.
doi:10.1002/sim.6073
PMCID: PMC4013236  PMID: 24338995
logrank test; minimax design; optimal design; single-arm trial; two-stage design; time to event
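A sketch of the ordinary one-sample log-rank statistic on which the designs are built, assuming a known null cumulative hazard: with O observed events and E the null-expected count, Z = (O - E) / sqrt(E) is referred to a normal quantile (sign conventions vary). The data below are illustrative.

```python
import numpy as np

def one_sample_logrank(times, events, cum_haz0):
    """times: follow-up times; events: 1 = event, 0 = censored;
    cum_haz0: callable giving the null cumulative hazard at a time."""
    O = events.sum()
    E = np.sum([cum_haz0(t) for t in times])
    return (O - E) / np.sqrt(E)

# Illustrative data under an exponential null with rate 0.1 per month
rng = np.random.default_rng(7)
t = rng.exponential(1 / 0.08, 40).clip(max=24.0)   # administrative censoring
d = (t < 24.0).astype(int)
print(one_sample_logrank(t, d, lambda s: 0.1 * s))
```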
22.  Network-based Regularization for Matched Case-Control Analysis of High-dimensional DNA Methylation Data 
Statistics in medicine  2012;32(12):2127-2139.
Matched case-control designs are commonly used to control for potential confounding factors in genetic epidemiology studies, especially epigenetic studies with DNA methylation. Compared with unmatched case-control studies with high-dimensional genomic or epigenetic data, there have been few variable selection methods for matched sets. In an earlier article, we proposed a penalized logistic regression model for the analysis of unmatched DNA methylation data using a network-based penalty. However, for the matched designs popularly applied in epigenetic studies, which compare DNA methylation between tumor and adjacent non-tumor tissues or between pre-treatment and post-treatment conditions, applying ordinary logistic regression while ignoring matching is known to introduce serious estimation bias. In this article, for the analysis of matched DNA methylation data, we developed a penalized conditional logistic model using the network-based penalty, which encourages a grouping effect among 1) linked CpG sites within a gene or 2) linked genes within a genetic pathway. In our simulation studies, we demonstrated the superiority of the conditional logistic model over the unconditional logistic model in high-dimensional variable selection problems for matched case-control data. We further investigated the benefits of utilizing biological group or graph information for matched case-control data. The proposed method was applied to a genome-wide DNA methylation study on hepatocellular carcinoma (HCC), in which DNA methylation levels of tumor and adjacent non-tumor tissues from HCC patients were investigated using the Illumina Infinium HumanMethylation27 Beadchip. Several new CpG sites and genes known to be related to HCC were identified that had been missed by the standard method in the original paper.
doi:10.1002/sim.5694
PMCID: PMC4038397  PMID: 23212810
DNA methylation; Genetic pathways; Matched case-control; Network-based regularization; Penalized conditional logistic; Variable selection
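The network-based penalty referred to here combines, in the authors' earlier unmatched-data paper, a lasso term with a quadratic graph-Laplacian term (notation assumed); in this article it is attached to the conditional rather than ordinary logistic log-likelihood:

```latex
P_{\lambda}(\beta) = \lambda_1 \lVert \beta \rVert_1
  + \lambda_2\, \beta^{\top} L\, \beta,
```

where L is the (normalized) Laplacian of the CpG-site or gene network, so that coefficients of linked CpG sites or genes are shrunk toward each other.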
23.  Estimation of Gene-Environment Interaction by Pooling Biospecimens 
Statistics in medicine  2012;31(26):3241-3252.
Summary
Case-control studies are prone to low power for testing gene-environment interactions (GXE), given the need for a sufficient number of individuals in each stratum of disease, gene, and environment. We propose a new study design to increase power by strategically pooling biospecimens. Pooling biospecimens allows us to increase the number of subjects significantly, thereby providing a substantial increase in power. We focus on a special, though realistic, case where disease and environmental statuses are binary and gene status is ordinal, with each individual having 0, 1, or 2 minor alleles. Through pooling, we obtain an allele frequency for each level of disease and environmental status. Using the allele frequencies, we develop new methodology for estimating and testing GXE that is comparable to the situation where we have complete data on gene status for each individual. We also explore the measurement process and its effect on the GXE estimator. We illustrate the effectiveness of pooling with an epidemiologic study that tests an interaction between fiber and PON1 on anovulation. Through simulation, we show that taking 12 pooled measurements from 1000 individuals achieves more power than individually genotyping 500 individuals. Our findings suggest that strategic pooling should be considered when an investigator designs a pilot study to test for a GXE.
doi:10.1002/sim.5357
PMCID: PMC4037867  PMID: 22859290
Allele frequency measurements; case-control study; gene-environment interaction; pooling; power
24.  Null but Not Void: Considerations for Hypothesis Testing 
Statistics in medicine  2012;32(2):196-205.
Standard statistical theory teaches us that once the null and alternative hypotheses have been defined for a parameter, the choice of the statistical test is clear. Standard theory does not teach us how to choose the null or alternative hypothesis appropriate to the scientific question of interest. Neither does it tell us that in some cases, depending on which alternatives are realistic, we may want to define our null hypothesis differently. Problems in statistical practice are frequently not as pristinely summarized as the classic theory in our textbooks. In this article, we present examples in statistical hypothesis testing in which seemingly simple choices are in fact rich with nuance that, when given full consideration, make the choice of the right hypothesis test much less straightforward.
doi:10.1002/sim.5497
PMCID: PMC4034366  PMID: 22807023
Binomial proportion; hypothesis testing; Lachenbruch test; mixed models; repeated measures; strong null hypothesis
25.  Empirical likelihood-based confidence intervals for length-biased data 
Statistics in medicine  2012;32(13):2278-2291.
Logistical or other constraints often preclude the possibility of conducting incident cohort studies. A feasible alternative in such cases is to conduct a cross-sectional prevalent cohort study, for which we recruit prevalent cases, i.e. subjects who have already experienced the initiating event, say the onset of a disease. When the interest lies in estimating the lifespan between the initiating event and a terminating event, say death, such subjects may be followed prospectively until the terminating event or loss to follow-up, whichever happens first. It is well known that prevalent cases have, on average, longer lifespans. As such, they do not constitute a representative random sample from the target population; they comprise a biased sample. If the initiating events are generated from a stationary Poisson process (the so-called stationarity assumption), this bias is called length bias. The current literature on length-biased sampling lacks a simple method for estimating the margin of error of commonly used summary statistics. We fill this gap by adapting empirical likelihood-based confidence intervals to right-censored length-biased survival data. Both large- and small-sample behaviors of these confidence intervals are studied. We illustrate our method using a set of data on survival with dementia, collected as part of the Canadian Study of Health and Aging.
doi:10.1002/sim.5637
PMCID: PMC4034580  PMID: 23027662
confidence interval; length-biased data; empirical likelihood ratio test; mean; median; quantile; survival function
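Under the stationarity assumption, the length-biased relationship between the population density f of lifespans and the density g of lifespans among prevalent cases takes the standard form

```latex
g(t) = \frac{t\, f(t)}{\mu}, \qquad \mu = \int_0^{\infty} u\, f(u)\, du,
```

so longer lifespans are oversampled in proportion to their length; the empirical likelihood confidence intervals target summaries of f from right-censored data drawn from g.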
