Results 1-25 (39)

1.  Incorporating parental information into family-based association tests 
Biostatistics (Oxford, England)  2012;14(3):556-572.
Assumptions regarding the true underlying genetic model, or mode of inheritance, are necessary when quantifying genetic associations with disease phenotypes. Here we propose new methods to ascertain the underlying genetic model from parental data in family-based association studies. Specifically, for parental mating-type data, we propose a novel statistic to test whether the underlying genetic model is additive, dominant, or recessive; for parental genotype–phenotype data, we propose three strategies to determine the true mode of inheritance. We illustrate how to incorporate the information gleaned from these strategies into family-based association tests. Because family-based association tests are conducted conditional on parental genotypes, the type I error rate of these procedures is not inflated by the information learned from parental data. This result holds even if such information is weak or when the assumption of Hardy–Weinberg equilibrium is violated. Our simulations demonstrate that incorporating parental data into family-based association tests can improve power under common inheritance models. The application of our proposed methods to a candidate-gene study of type 1 diabetes successfully detects a recessive effect in MGAT5 that would otherwise be missed by conventional family-based association tests.
doi:10.1093/biostatistics/kxs048
PMCID: PMC3732025  PMID: 23266418
Case–parents; Dominant; Mode of inheritance; Nuclear families; Recessive; Robust
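A note on what "mode of inheritance" means operationally: choosing a genetic model amounts to choosing a coding of the genotype used in the association test. As a minimal reference point (the standard codings, not the authors' proposed statistic), for g copies of the risk allele:

```latex
\[
X_{\mathrm{add}} = g, \qquad
X_{\mathrm{dom}} = \mathbf{1}\{g \ge 1\}, \qquad
X_{\mathrm{rec}} = \mathbf{1}\{g = 2\}, \qquad g \in \{0, 1, 2\}.
\]
```

The parental-data strategies above aim to pick among these codings before running the conditional family-based test.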
2.  Lasso tree for cancer staging with survival data 
Biostatistics (Oxford, England)  2012;14(2):327-339.
The tumor-node-metastasis staging system has been the lynchpin of cancer diagnosis, treatment, and prognosis for many years. For meaningful clinical use, an orderly grouping of the T and N categories into a staging system needs to be defined, usually with respect to a time-to-event outcome. This can be reframed as a model selection problem with respect to features arranged on a partially ordered two-way grid, and a penalized regression method is proposed for selecting the optimal grouping. Instead of penalizing the L1-norm of the coefficients, as in the lasso, we place L1 constraints on the differences between neighboring coefficients in order to enforce the stage grouping. The underlying mechanism is the sparsity-enforcing property of the L1 penalty, which forces some estimated coefficients to be the same and hence leads to stage grouping. Partial ordering constraints are also required, as both the T and N categories are ordinal. A series of optimal groupings with different numbers of stages can be obtained by varying the tuning parameter, which gives a tree-like structure offering a visual aid on how the groupings are progressively made. We hence call the proposed method the lasso tree. We illustrate the utility of our method by applying it to the staging of colorectal cancer using survival outcomes. Simulation studies are carried out to examine the finite sample performance of the selection procedure. We demonstrate that the lasso tree is able to give the right grouping with moderate sample size, is stable with regard to changes in the data, and is not affected by random censoring.
doi:10.1093/biostatistics/kxs044
PMCID: PMC3590926  PMID: 23221681
Cancer staging; Cox model; Lasso; Lasso tree; Model selection
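Schematically, the lasso tree described above solves a fused-penalty problem on the T-by-N grid. A sketch in our own notation (not a transcription of the paper), with ℓ the Cox partial log-likelihood, one coefficient per (T, N) cell, and E the set of neighboring cell pairs:

```latex
\[
\hat{\beta} = \arg\min_{\beta} \Big\{ -\ell(\beta)
  + \lambda \sum_{(j,k) \in E} \lvert \beta_j - \beta_k \rvert \Big\},
\qquad \beta_j \le \beta_k \ \text{whenever cell } j \preceq k .
\]
```

Cells whose coefficients are fused to a common value form one stage, and increasing λ merges more cells, tracing out the tree of groupings.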
3.  Regression models for group testing data with pool dilution effects 
Biostatistics (Oxford, England)  2012;14(2):284-298.
Group testing is widely used to reduce the cost of screening individuals for infectious diseases. There is an extensive literature on group testing, most of which traditionally has focused on estimating the probability of infection in a homogeneous population. More recently, this research area has shifted towards estimating individual-specific probabilities in a regression context. However, existing regression approaches have assumed that the sensitivity and specificity of pooled biospecimens are constant and do not depend on the pool sizes. For applications where this assumption may not be realistic, these existing approaches can lead to inaccurate inference, especially when pool sizes are large. Our new approach, which exploits the information readily available from underlying continuous biomarker distributions, provides reliable inference in settings where pooling would be most beneficial and does so even for larger pool sizes. We illustrate our methodology using hepatitis B data from a study involving Irish prisoners.
doi:10.1093/biostatistics/kxs045
PMCID: PMC3590921  PMID: 23197382
Binary response; Biomarker; Maximum likelihood; Pooled testing; Sensitivity; Specificity
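For context, the constant-accuracy model that the paper above relaxes can be written down directly. With pool size c_j, individual infection probabilities p_ij (say, from a logistic regression), and constant sensitivity S_e and specificity S_p, a pooled test is positive with probability

```latex
\[
P(Z_j = 1) = S_e \Big( 1 - \prod_{i=1}^{c_j} (1 - p_{ij}) \Big)
  + (1 - S_p) \prod_{i=1}^{c_j} (1 - p_{ij}) .
\]
```

The proposed method replaces the constants S_e and S_p with pool-size-dependent error rates derived from the underlying continuous biomarker distributions.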
4.  Incorporating auxiliary information for improved prediction in high-dimensional datasets: an ensemble of shrinkage approaches 
Biostatistics (Oxford, England)  2012;14(2):259-272.
With advancement in genomic technologies, it is common that two high-dimensional datasets are available, both measuring the same underlying biological phenomenon with different techniques. We consider predicting a continuous outcome Y using X, a set of p markers which is the best available measure of the underlying biological process. This same biological process may also be measured by W, coming from a prior technology but correlated with X. On a moderately sized sample, we have (Y,X,W), and on a larger sample we have (Y,W). We utilize the data on W to boost the prediction of Y by X. When p is large and the subsample containing X is small, this is a p>n situation. When p is small, this is akin to the classical measurement error problem; however, ours is not the typical goal of calibrating W for use in future studies. We propose to shrink the regression coefficients β of Y on X toward different targets that use information derived from W in the larger dataset. We compare these proposals with the classical ridge regression of Y on X, which does not use W. We also unify all of these methods as targeted ridge estimators. Finally, we propose a hybrid estimator which is a linear combination of multiple estimators of β. With an optimal choice of weights, the hybrid estimator balances efficiency and robustness in a data-adaptive way to theoretically yield a smaller prediction error than any of its constituents. The methods, including a fully Bayesian alternative, are evaluated via simulation studies. We also apply them to a gene-expression dataset. mRNA expression measured via quantitative real-time polymerase chain reaction is used to predict survival time in lung cancer patients, with auxiliary information from microarray technology available on a larger sample.
doi:10.1093/biostatistics/kxs036
PMCID: PMC3590922  PMID: 23087411
Cross-validation; Generalized ridge; Mean squared prediction error; Measurement error
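A minimal sketch of the targeted-ridge idea above: penalize distance to a nonzero target β₀ (built from the auxiliary W data) rather than to zero. In the linear model this gives

```latex
\[
\hat{\beta}(\lambda, \beta_0)
  = \arg\min_{\beta} \big\{ \lVert Y - X\beta \rVert_2^2
      + \lambda \lVert \beta - \beta_0 \rVert_2^2 \big\}
  = (X^\top X + \lambda I)^{-1} (X^\top Y + \lambda \beta_0),
\]
```

which interpolates between ordinary least squares (λ = 0) and the target β₀ (λ → ∞); classical ridge is the special case β₀ = 0. The specific targets and the data-adaptive hybrid weights are as developed in the paper.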
5.  Structure-constrained sparse canonical correlation analysis with an application to microbiome data analysis 
Biostatistics (Oxford, England)  2012;14(2):244-258.
Motivated by studying the association between nutrient intake and human gut microbiome composition, we developed a method for structure-constrained sparse canonical correlation analysis (ssCCA) in a high-dimensional setting. ssCCA takes into account the phylogenetic relationships among bacteria, which provides important prior knowledge on evolutionary relationships among bacterial taxa. Our ssCCA formulation utilizes a phylogenetic structure-constrained penalty function to impose certain smoothness on the linear coefficients according to the phylogenetic relationships among the taxa. An efficient coordinate descent algorithm is developed for optimization. A human gut microbiome data set is used to illustrate this method. Both simulations and real data applications show that ssCCA performs better than the standard sparse CCA in identifying meaningful variables when there are structures in the data.
doi:10.1093/biostatistics/kxs038
PMCID: PMC3590923  PMID: 23074263
Dimension reduction; Graph; Phylogenetic tree; Regularization; Variable selection
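One plausible way to write the ssCCA criterion from the description above (our reconstruction under stated assumptions, not a transcription of the paper): with L the Laplacian of the graph induced by the phylogenetic tree on the taxa,

```latex
\[
\max_{u, v} \; u^\top X^\top Y v
\quad \text{subject to} \quad
\lVert u \rVert_2 \le 1, \;
\lVert v \rVert_2 \le 1, \;
\lVert u \rVert_1 \le c_1, \;
\lVert v \rVert_1 \le c_2, \;
v^\top L v \le c_3,
\]
```

so the L1 constraints yield a sparse set of variables while the Laplacian term forces phylogenetically close taxa to receive similar coefficients.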
6.  Aggregate-data estimation of an individual patient data linear random effects meta-analysis with a patient covariate-treatment interaction term 
Biostatistics (Oxford, England)  2012;14(2):273-283.
Individual patient-data meta-analysis of randomized controlled trials is the gold standard for investigating how patient factors modify the effectiveness of treatment. Because participant data from primary studies might not be available, reliable alternatives using published data are needed. In this paper, I show that the maximum likelihood estimates of a participant-level linear random effects meta-analysis with a patient covariate-treatment interaction can be determined exactly from aggregate data when the model's variance components are known. I provide an equivalent aggregate-data EM algorithm and supporting software with the R package ipdmeta for the estimation of the “interaction meta-analysis” when the variance components are unknown. The properties of the methodology are assessed with simulation studies. The usefulness of the methods is illustrated with analyses of the effect modification of cholesterol and age on pravastatin in the multicenter placebo-controlled Regression Growth Evaluation Statin Study. When a participant-level meta-analysis cannot be performed, aggregate-data interaction meta-analysis is a useful alternative for exploring individual-level sources of treatment effect heterogeneity.
doi:10.1093/biostatistics/kxs035
PMCID: PMC3590924  PMID: 23001065
Clinical trials; Meta-analysis; Random effects models; Statistical methods in health services research
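One common form of the participant-level model at issue (notation ours, for illustration): for patient j in trial i, with treatment indicator z_ij and patient covariate x_ij,

```latex
\[
Y_{ij} = \beta_0 + \beta_1 x_{ij} + \beta_2 z_{ij} + \beta_3 x_{ij} z_{ij}
  + b_{0i} + b_{1i} z_{ij} + \varepsilon_{ij},
\]
```

where (b_0i, b_1i) are random trial effects and β₃ is the covariate-treatment interaction of interest. The paper's point is that the maximum likelihood estimates of such a model can be recovered exactly from trial-level aggregate data when the variance components are known, and via an aggregate-data EM algorithm (the ipdmeta package) when they are not.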
7.  Expected estimating equations via EM for proportional hazards regression with covariate misclassification 
Biostatistics (Oxford, England)  2012;14(2):351-365.
In epidemiological and medical studies, covariate misclassification may occur when the observed categorical variables are not perfect measurements for an unobserved categorical latent predictor. It is well known that covariate measurement error in Cox regression may lead to biased estimation. Misclassification in covariates will cause bias, and adjustment for misclassification will be challenging when the gold standard variables are not available. In general, statistical modeling for misclassification is very different from that of the measurement error. In this paper, we investigate an approximate induced hazard estimator and propose an expected estimating equation estimator via an expectation–maximization algorithm to accommodate covariate misclassification when multiple surrogate variables are available. Finite sample performance is examined via simulation studies. The proposed method and other methods are applied to a human immunodeficiency virus clinical trial in which a few behavior variables from questionnaires are used as surrogates for a latent behavior variable.
doi:10.1093/biostatistics/kxs046
PMCID: PMC3590925  PMID: 23178735
EM algorithm; Estimating equation; Measurement error; Misclassification; Surrogate covariate
8.  Incorporating group correlations in genome-wide association studies using smoothed group Lasso 
Biostatistics (Oxford, England)  2012;14(2):205-219.
In genome-wide association studies, penalization is an important approach for identifying genetic markers associated with disease. Motivated by the fact that there exists natural grouping structure in single nucleotide polymorphisms and, more importantly, such groups are correlated, we propose a new penalization method for group variable selection which can properly accommodate the correlation between adjacent groups. This method is based on a combination of the group Lasso penalty and a quadratic penalty on the difference of regression coefficients of adjacent groups. The new method is referred to as smoothed group Lasso (SGL). It encourages group sparsity and smoothes regression coefficients for adjacent groups. Canonical correlations are used to define the weights between adjacent groups in the quadratic difference penalty. We first derive a group coordinate descent (GCD) algorithm for computing the solution path under the linear regression model. The SGL method is further extended to logistic regression for binary responses. With the assistance of the majorize–minimization algorithm, the SGL-penalized logistic regression turns out to be an iteratively penalized least-squares problem. We also suggest conducting principal component analysis to reduce the dimensionality within groups. Simulation studies are used to evaluate the finite sample performance. Comparison with the group Lasso shows that SGL is more effective in selecting true positives. Two datasets are analyzed using the SGL method.
doi:10.1093/biostatistics/kxs034
PMCID: PMC3590928  PMID: 22988281
Group selection; Regularization; SNP; Smoothing
9.  Extensions of criteria for evaluating risk prediction models for public health applications 
Biostatistics (Oxford, England)  2012;14(2):366-381.
We recently proposed two novel criteria to assess the usefulness of risk prediction models for public health applications. The proportion of cases followed, PCF(p), is the proportion of individuals who will develop disease who are included in the proportion p of individuals in the population at highest risk. The proportion needed to follow-up, PNF(q), is the proportion of the general population at highest risk that one needs to follow in order that a proportion q of those destined to become cases will be followed (Pfeiffer, R.M. and Gail, M.H., 2011. Two criteria for evaluating risk prediction models. Biometrics 67, 1057–1065). Here, we extend these criteria in two ways. First, we introduce two new criteria by integrating PCF and PNF over a range of values of q or p to obtain iPCF, the integrated PCF, and iPNF, the integrated PNF. A key assumption in the previous work was that the risk model is well calibrated. This assumption also underlies novel estimates of iPCF and iPNF based on observed risks in a population alone. The second extension is to propose and study estimates of PCF, PNF, iPCF, and iPNF that are consistent even if the risk models are not well calibrated. These new estimates are obtained from case–control data when the outcome prevalence in the population is known, and from cohort data, with baseline covariates and observed health outcomes. We study the efficiency of the various estimates and propose and compare tests for comparing two risk models, both of which were evaluated in the same validation data.
doi:10.1093/biostatistics/kxs037
PMCID: PMC3695651  PMID: 23087412
Area under the receiver operating characteristic (ROC) curve; AUC; Discrimination; Discriminatory accuracy; Risk models; Study design
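A minimal empirical sketch of PCF(p) and PNF(q) in Python, assuming a well-calibrated risk model; the function names and toy data are ours, and the paper's estimators for miscalibrated models and the integrated criteria iPCF/iPNF are more involved:

```python
import numpy as np

def pcf(risk, case, p):
    """Empirical PCF(p): proportion of eventual cases captured when the
    fraction p of the population at highest predicted risk is followed."""
    order = np.argsort(-risk)                    # highest risk first
    top = order[: int(np.ceil(p * len(risk)))]   # top-p fraction
    return case[top].sum() / case.sum()

def pnf(risk, case, q):
    """Empirical PNF(q): smallest population fraction, taken from the top
    of the risk ranking, needed to capture a proportion q of all cases."""
    order = np.argsort(-risk)
    cum = np.cumsum(case[order]) / case.sum()    # cases captured so far
    return (np.searchsorted(cum, q) + 1) / len(risk)

# Toy example with risks that are well calibrated by construction.
rng = np.random.default_rng(0)
risk = rng.uniform(size=10_000)
case = rng.binomial(1, risk)
print(f"PCF(0.20) = {pcf(risk, case, 0.20):.3f}")
print(f"PNF(0.80) = {pnf(risk, case, 0.80):.3f}")
```

By construction, PNF(q) is the smallest p with PCF(p) ≥ q, so the two criteria are inverses of each other along the risk ranking.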
10.  Assessing the fit of parametric cure models 
Biostatistics (Oxford, England)  2012;14(2):340-350.
Survival data can contain an unknown fraction of subjects who are “cured” in the sense of not being at risk of failure. We describe such data with cure-mixture models, which separately model cure status and the hazard of failure among non-cured subjects. No diagnostic currently exists for evaluating the fit of such models; the popular Schoenfeld residual (Schoenfeld, 1982. Partial residuals for the proportional hazards regression-model. Biometrika 69, 239–241) is not applicable to data with cures. In this article, we propose a pseudo-residual, modeled on Schoenfeld's, to assess the fit of the survival regression in the non-cured fraction. Unlike Schoenfeld's approach, which tests the validity of the proportional hazards (PH) assumption, our method uses the full hazard and is thus also applicable to non-PH models. We derive the asymptotic distribution of the residuals and evaluate their performance by simulation in a range of parametric models. We apply our approach to data from a smoking cessation drug trial.
doi:10.1093/biostatistics/kxs043
PMCID: PMC3695652  PMID: 23197383
Accelerated failure time; Long-term survivors; Proportional hazards; Residual analysis
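The cure-mixture decomposition underlying the residuals above is standard and worth stating: with π(x) the probability of cure and S_u the survival function of the non-cured fraction,

```latex
\[
S(t \mid x) = \pi(x) + \{1 - \pi(x)\}\, S_u(t \mid x),
\]
```

so the overall survival curve plateaus at the cure fraction π(x) as t grows. The proposed pseudo-residual assesses the fit of the parametric regression inside S_u, where Schoenfeld's PH-based residual does not apply.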
11.  Multicategory reclassification statistics for assessing improvements in diagnostic accuracy 
Biostatistics (Oxford, England)  2012;14(2):382-394.
In this paper, we extend the definitions of the net reclassification improvement (NRI) and the integrated discrimination improvement (IDI) in the context of multicategory classification. Both measures were proposed in Pencina and others (2008. Evaluating the added predictive ability of a new marker: from area under the receiver operating characteristic (ROC) curve to reclassification and beyond. Statistics in Medicine 27, 157–172) as numeric characterizations of accuracy improvement for binary diagnostic tests and were shown to have certain advantages over analyses based on ROC curves or other regression approaches. Estimation and inference procedures for the multiclass NRI and IDI are provided in this paper along with the necessary asymptotic distributional results. Simulations are conducted to study the finite-sample properties of the proposed estimators. Two medical examples are considered to illustrate our methodology.
doi:10.1093/biostatistics/kxs047
PMCID: PMC3695653  PMID: 23197381
Area under the ROC curve; Integrated discrimination improvement; Multicategory classification; Multinomial logistic regression; Net reclassification improvement
12.  Non-parametric estimation of a time-dependent predictive accuracy curve 
A major biomedical goal associated with evaluating a candidate biomarker or developing a predictive model score for event-time outcomes is to accurately distinguish incident cases at each time t from the controls surviving beyond t, throughout the entire study period. Extensions of standard binary classification measures like time-dependent sensitivity, specificity, and receiver operating characteristic (ROC) curves have been developed in this context (Heagerty, P. J., and others, 2000. Time-dependent ROC curves for censored survival data and a diagnostic marker. Biometrics 56, 337–344). We propose a direct, non-parametric method to estimate the time-dependent area under the curve (AUC), which we refer to as the weighted mean rank (WMR) estimator. The proposed estimator performs well relative to the semi-parametric AUC curve estimator of Heagerty and Zheng (2005. Survival model predictive accuracy and ROC curves. Biometrics 61, 92–105). We establish the asymptotic properties of the proposed estimator and show that the accuracy of markers can be compared very simply using the difference in the WMR statistics. Estimators of pointwise standard errors are provided.
doi:10.1093/biostatistics/kxs021
PMCID: PMC3520498  PMID: 22734044
AUC curve; Survival analysis; Time-dependent ROC
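A crude nonparametric version of an incident/dynamic AUC(t), for intuition only; the window-based case definition and the absence of censoring weights here are our simplifications, not the authors' WMR estimator:

```python
import numpy as np

def incident_dynamic_auc(marker, time, event, t, h):
    """Compare markers of subjects failing within t +/- h (incident cases)
    against subjects still event-free beyond t + h (dynamic controls);
    ties count one half, as in a Mann-Whitney statistic."""
    cases = (np.abs(time - t) <= h) & (event == 1)
    controls = time > t + h
    mc, mk = marker[cases], marker[controls]
    if mc.size == 0 or mk.size == 0:
        return np.nan
    diff = mc[:, None] - mk[None, :]          # all case-control pairs
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

# Toy usage: higher marker values produce earlier failures.
rng = np.random.default_rng(1)
n = 2000
marker = rng.normal(size=n)
time = rng.exponential(scale=np.exp(-marker))
event = np.ones(n, dtype=int)                 # no censoring in this toy
print(f"AUC(0.5) ~ {incident_dynamic_auc(marker, time, event, 0.5, 0.1):.3f}")
```

Averaging such pointwise estimates over t, with suitable weights, is what turns an AUC curve into a single summary on which two markers can be compared.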
13.  Mark-specific proportional hazards model with multivariate continuous marks and its application to HIV vaccine efficacy trials 
For time-to-event data with finitely many competing risks, the proportional hazards model has been a popular tool for relating the cause-specific outcomes to covariates (Prentice and others, 1978. The analysis of failure time in the presence of competing risks. Biometrics 34, 541–554). Inspired by previous research in HIV vaccine efficacy trials, the cause of failure is replaced by a continuous mark observed only in subjects who fail. This article studies an extension of this approach to allow a multivariate continuum of competing risks, to better account for the fact that the candidate HIV vaccines tested in efficacy trials have contained multiple HIV sequences, with a purpose to elicit multiple types of immune response that recognize and block different types of HIV viruses. We develop inference for the proportional hazards model in which the regression parameters depend parametrically on the marks, to avoid the curse of dimensionality, and the baseline hazard depends nonparametrically on both time and marks. Goodness-of-fit tests are constructed based on generalized weighted martingale residuals. The finite-sample performance of the proposed methods is examined through extensive simulations. The methods are applied to a vaccine efficacy trial to examine whether and how certain antigens represented inside the vaccine are relevant for protection or anti-protection against the exposing HIVs.
doi:10.1093/biostatistics/kxs022
PMCID: PMC3520499  PMID: 22764174
Competing risks; Failure time data; Goodness-of-fit test; HIV vaccine trial; Hypothesis testing; Mark-specific relative risk; Multivariate data; Partial likelihood estimation; Semiparametric model; STEP trial
14.  Marginal additive hazards model for case-cohort studies with multiple disease outcomes: an application to the Atherosclerosis Risk in Communities (ARIC) study 
In the case-cohort studies conducted within the Atherosclerosis Risk in Communities (ARIC) study, it is of interest to assess and compare the effect of high-sensitivity C-reactive protein (hs-CRP) on the increased risks of incident coronary heart disease and incident ischemic stroke. Empirical cumulative hazard functions for different levels of hs-CRP reveal an additive structure for the risks of each disease outcome. Additionally, we are interested in estimating the difference in risk between the different hs-CRP groups. Motivated by this, we consider fitting marginal additive hazards regression models for case-cohort studies with multiple disease outcomes. We consider a weighted estimating equations approach for the estimation of model parameters. The asymptotic properties of the proposed estimators are derived and their finite-sample properties are assessed via simulation studies. The proposed method is applied to analyze the ARIC Study.
doi:10.1093/biostatistics/kxs025
PMCID: PMC3520500  PMID: 22826550
Additive hazards model; ARIC study; Case-cohort study; Multivariate failure times; Weighted estimating equations
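The marginal additive hazards structure referred to above has, for disease outcome k, the standard Lin-Ying-type form

```latex
\[
\lambda_k(t \mid Z) = \lambda_{0k}(t) + \beta_k^\top Z,
\]
```

so covariate effects enter as additive differences on the hazard scale, which is what makes the risk difference between hs-CRP groups directly estimable; the case-cohort sampling is then accommodated by weighting the estimating equations with inverse selection probabilities.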
15.  Classification of patients from time-course gene expression 
Classifying patients into different risk groups based on their genomic measurements can help clinicians design appropriate clinical treatment plans. To produce such a classification, gene expression data were collected on a cohort of burn patients, who were monitored across multiple time points. This led us to develop a new classification method using time-course gene expression data. Our results showed that making good use of the time-course information of gene expression improved the performance of classification compared with using gene expression from individual time points only. Our method is implemented in an R package: time-course prediction analysis using microarray.
doi:10.1093/biostatistics/kxs027
PMCID: PMC3520502  PMID: 22926914
Classification; Gene expression; Longitudinal; Time-course
16.  ROC curve estimation under test-result-dependent sampling 
Biostatistics (Oxford, England)  2012;14(1):160-172.
The receiver operating characteristic (ROC) curve is often used to evaluate the performance of a biomarker measured on a continuous scale to predict the disease status or a clinical condition. Motivated by the need for novel study designs with better estimation efficiency and reduced study cost, we consider a biased sampling scheme that consists of a simple random component (SRC) and a supplemental test-result-dependent component (TDC). Using this approach, investigators can oversample or undersample subjects falling into certain regions of the biomarker measure, yielding improved precision for the estimation of the ROC curve with a fixed sample size. Test-result-dependent sampling will introduce bias in estimating the predictive accuracy of the biomarker if standard ROC estimation methods are used. In this article, we discuss three approaches for analyzing data of a test-result-dependent structure with a special focus on the empirical likelihood method. We establish asymptotic properties of the empirical likelihood estimators for covariate-specific ROC curves and covariate-independent ROC curves and give their corresponding variance estimators. Simulation studies show that the empirical likelihood method yields good properties and is more efficient than alternative methods. Recommendations on the number of regions, cutoff points, and subject allocation are made based on the simulation results. The proposed methods are illustrated with a data example based on an ongoing lung cancer clinical trial.
doi:10.1093/biostatistics/kxs020
PMCID: PMC3577107  PMID: 22723502
Binormal model; Covariate-independent ROC curve; Covariate-specific ROC curve; Empirical likelihood method; Test-result-dependent sampling
17.  Deriving benefit of early detection from biomarker-based prognostic models 
Many prognostic models for cancer use biomarkers that have utility in early detection. For example, in prostate cancer, models predicting disease-specific survival use serum prostate-specific antigen levels. These models typically show that higher marker levels are associated with poorer prognosis. Consequently, they are often interpreted as indicating that detecting disease at a lower threshold of the biomarker is likely to generate a survival benefit. However, lowering the threshold of the biomarker is tantamount to early detection. For survival benefit to not be simply an artifact of starting the survival clock earlier, we must account for the lead time of early detection. It is not known whether the existing prognostic models imply a survival benefit under early detection once lead time has been accounted for. In this article, we investigate survival benefit implied by prognostic models where the predictor(s) of disease-specific survival are age and/or biomarker level at disease detection. We show that the benefit depends on the rate of biomarker change, the lead time, and the biomarker level at the original date of diagnosis as well as on the parameters of the prognostic model. Even if the prognostic model indicates that lowering the threshold of the biomarker is associated with longer disease-specific survival, this does not necessarily imply that early detection will confer an extension of life expectancy.
doi:10.1093/biostatistics/kxs018
PMCID: PMC3577108  PMID: 22730510
Disease-specific survival; Early detection; Proportional hazards model
18.  Mixed effect regression analysis for a cluster-based two-stage outcome-auxiliary-dependent sampling design with a continuous outcome 
Biostatistics (Oxford, England)  2012;13(4):650-664.
Two-stage design is a well-known cost-effective way of conducting biomedical studies when the exposure variable is expensive or difficult to measure. Recent research developments further allow one or both stages of the two-stage design to be outcome-dependent on a continuous outcome variable. This outcome-dependent sampling feature enables further efficiency gains in parameter estimation and overall cost reduction of the study (e.g. Wang, X. and Zhou, H., 2010. Design and inference for cancer biomarker study with an outcome and auxiliary-dependent subsampling. Biometrics 66, 502–511; Zhou, H., Song, R., Wu, Y. and Qin, J., 2011. Statistical inference for a two-stage outcome-dependent sampling design with a continuous outcome. Biometrics 67, 194–202). In this paper, we develop a semiparametric mixed effect regression model for data from a two-stage design where the second-stage data are sampled with an outcome-auxiliary-dependent sampling (OADS) scheme. Our method allows the cluster- or center-effects of the study subjects to be accounted for. We propose an estimated likelihood function to estimate the regression parameters. Our simulation study indicates that greater study efficiency gains can be achieved under the proposed two-stage OADS design with center-effects when compared with other alternative sampling schemes. We illustrate the proposed method by analyzing a dataset from the Collaborative Perinatal Project.
doi:10.1093/biostatistics/kxs013
PMCID: PMC3440236  PMID: 22723503
Center effect; Mixed model; Outcome-auxiliary-dependent sampling; Validation sample
19.  Optimal tests for rare variant effects in sequencing association studies 
Biostatistics (Oxford, England)  2012;13(4):762-775.
With the development of massively parallel sequencing technologies, there is a substantial need for developing powerful rare variant association tests. Common approaches include burden and non-burden tests. Burden tests assume all rare variants in the target region have effects on the phenotype in the same direction and of similar magnitude. The recently proposed sequence kernel association test (SKAT) (Wu, M. C., and others, 2011. Rare-variant association testing for sequencing data with the SKAT. The American Journal of Human Genetics 89, 82–93), an extension of the C-alpha test (Neale, B. M., and others, 2011. Testing for an unusual distribution of rare variants. PLoS Genetics 7, 161–165), provides a robust test that is particularly powerful in the presence of protective and deleterious variants and null variants, but is less powerful than burden tests when a large number of variants in a region are causal and in the same direction. As the underlying biological mechanisms are unknown in practice and vary from one gene to another across the genome, it is of substantial practical interest to develop a test that is optimal for both scenarios. In this paper, we propose a class of tests that includes burden tests and SKAT as special cases, and derive an optimal test within this class that maximizes power. We show that this optimal test outperforms burden tests and SKAT in a wide range of scenarios. The results are illustrated using simulation studies and triglyceride data from the Dallas Heart Study. In addition, we have derived sample size/power calculation formulas for SKAT with a new family of kernels to facilitate the design of new sequence association studies.
doi:10.1093/biostatistics/kxs014
PMCID: PMC3440237  PMID: 22699862
Burden tests; Correlated effects; Kernel association test; Rare variants; Score test
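The class of tests mentioned above can be indexed by a single parameter ρ that interpolates between SKAT and the burden statistic; schematically,

```latex
\[
Q_\rho = (1 - \rho)\, Q_{\mathrm{SKAT}} + \rho\, Q_{\mathrm{burden}},
\qquad \rho \in [0, 1],
\]
```

with ρ = 0 recovering SKAT and ρ = 1 the burden test, and the optimal test taking the smallest p-value over a grid of ρ values. This is the construction widely known as SKAT-O.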
20.  Identifying genetic marker sets associated with phenotypes via an efficient adaptive score test 
Biostatistics (Oxford, England)  2012;13(4):776-790.
In recent years, genome-wide association studies (GWAS) and gene-expression profiling have generated a large number of valuable datasets for assessing how genetic variations are related to disease outcomes. With such datasets, it is often of interest to assess the overall effect of a set of genetic markers, assembled based on biological knowledge. Genetic marker-set analyses have been advocated as more reliable and powerful approaches compared with the traditional marginal approaches (Curtis and others, 2005. Pathways to the analysis of microarray data. TRENDS in Biotechnology 23, 429–435; Efroni and others, 2007. Identification of key processes underlying cancer phenotypes using biologic pathway analysis. PLoS ONE 2, e425). Procedures for testing the overall effect of a marker-set have been actively studied in recent years. For example, score tests derived under an Empirical Bayes (EB) framework (Liu and others, 2007. Semiparametric regression of multidimensional genetic pathway data: least-squares kernel machines and linear mixed models. Biometrics 63, 1079–1088; Liu and others, 2008. Estimation and testing for the effect of a genetic pathway on a disease outcome using logistic kernel machine regression via logistic mixed models. BMC Bioinformatics 9, 292; Wu and others, 2010. Powerful SNP-set analysis for case-control genome-wide association studies. American Journal of Human Genetics 86, 929) have been proposed as powerful alternatives to the standard Rao score test (Rao, 1948. Large sample tests of statistical hypotheses concerning several parameters with applications to problems of estimation. Mathematical Proceedings of the Cambridge Philosophical Society, 44, 50–57). The advantages of these EB-based tests are most apparent when the markers are correlated, due to the reduction in the degrees of freedom. In this paper, we propose an adaptive score test which up- or down-weights the contributions from each member of the marker-set based on the Z-scores of their effects. Such an adaptive procedure gains power over the existing procedures when the signal is sparse and the correlation among the markers is weak. By combining evidence from both the EB-based score test and the adaptive test, we further construct an omnibus test that attains good power in most settings. The null distributions of the proposed test statistics can be approximated well either via simple perturbation procedures or via distributional approximations. Through extensive simulation studies, we demonstrate that the proposed procedures perform well in finite samples. We apply the tests to a breast cancer genetic study to assess the overall effect of the FGFR2 gene on breast cancer risk.
doi:10.1093/biostatistics/kxs015
PMCID: PMC3440238  PMID: 22734045
Adaptive procedures; Empirical Bayes; GWAS; Pathway analysis; Score test; SNP sets
21.  Transcriptional and metabolic data integration and modeling for identification of active pathways 
Biostatistics (Oxford, England)  2012;13(4):748-761.
With the growing availability of omics data generated to describe different cells and tissues, the modeling and interpretation of such data has become increasingly important. Pathways are sets of reactions involving genes, metabolites, and proteins highlighting functional modules in the cell. Therefore, to discover activated or perturbed pathways when comparing two conditions, for example two different tissues, it is beneficial to use several types of omics data. We present a model that integrates transcriptomic and metabolomic data in order to make an informed pathway-level decision. Since metabolites can be seen as end-points of perturbations happening at the gene level, the gene expression data constitute the explanatory variables in a sparse regression model for the metabolite data. Sophisticated model selection procedures are developed to determine an appropriate model. We demonstrate that the transcript profiles can be used to informatively explain the metabolite data from cancer cell lines. Simulation studies further show that the proposed model offers a better performance in identifying active pathways than, for example, enrichment methods performed separately on the transcript and metabolite data.
doi:10.1093/biostatistics/kxs016
PMCID: PMC3440239  PMID: 22699861
Enrichment; Integrated modeling; Metabolomics; Pathways; Transcriptomics
22.  On cross-odds ratio for multivariate competing risks data 
Biostatistics (Oxford, England)  2012;13(4):680-694.
The cross-odds ratio is defined as the ratio of the conditional odds of the occurrence of one cause-specific event for one subject given the occurrence of the same or a different cause-specific event for another subject in the same cluster over the unconditional odds of occurrence of the cause-specific event. It is a measure of the association between the correlated cause-specific failure times within a cluster. The joint cumulative incidence function can be expressed as a function of the marginal cumulative incidence functions and the cross-odds ratio. Assuming that the marginal cumulative incidence functions follow a generalized semiparametric model, this paper studies the parametric regression modeling of the cross-odds ratio. A set of estimating equations is proposed for the unknown parameters and the asymptotic properties of the estimators are explored. Non-parametric estimation of the cross-odds ratio is also discussed. The proposed procedures are applied to the Danish twin data to model the associations between twins in their times to natural menopause and to investigate whether the association differs between monozygotic and dizygotic twins and how these associations have changed over time.
doi:10.1093/biostatistics/kxs017
PMCID: PMC3440240  PMID: 22696688
Binomial modeling; Correlated cause-specific failure times; Danish twin data; Estimating equation; Generalized semiparametric additive model; Inverse censoring probability weighting; Joint cumulative incidence function; Large sample properties; Marginal cumulative incidence function; Parametric regression model
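Formalizing the verbal definition above (notation ours): for members 1 and 2 of a cluster, with failure times T_1, T_2, causes ε_1, ε_2, and odds(A) = P(A)/{1 − P(A)}, the cross-odds ratio for causes k and l is

```latex
\[
\pi_{kl}(t_1, t_2) =
\frac{\operatorname{odds}\{ T_1 \le t_1,\ \epsilon_1 = k \mid T_2 \le t_2,\ \epsilon_2 = l \}}
     {\operatorname{odds}\{ T_1 \le t_1,\ \epsilon_1 = k \}},
\]
```

with values above 1 indicating positive dependence between the two cause-specific events within a cluster; plugging this into the odds relation expresses the joint cumulative incidence through the marginal ones.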
23.  Modeling RNA degradation for RNA-Seq with applications 
Biostatistics (Oxford, England)  2012;13(4):734-747.
RNA-Seq is widely used in biological and biomedical studies. Methods for estimating transcript abundance from RNA-Seq data have been intensively studied, many of which are based on the assumption that short-reads are uniformly distributed along the transcripts. However, short-reads are in fact nonuniformly distributed along transcripts, which can greatly reduce the accuracy of methods based on the uniformity assumption. Several methods have been developed to adjust for the biases induced by this nonuniformity, utilizing the empirical distribution of short-reads along the transcript. As an alternative, we found that RNA degradation plays a major role in producing the nonuniform distribution of short-reads and thus developed a new approach that quantifies this nonuniformity by precisely modeling RNA degradation. Our model of RNA degradation fits RNA-Seq data quite well, and based on this model, a new statistical method was further developed to estimate the transcript expression level, as well as the RNA degradation rate, for individual genes and their isoforms. We show that our method can improve the accuracy of transcript isoform expression estimation. The RNA degradation rates we estimate for individual transcripts are consistent across samples and experiments/platforms. In addition, the RNA degradation rate from our model is independent of RNA length, consistent with previous studies of RNA decay rates.
doi:10.1093/biostatistics/kxs001
PMCID: PMC3616752  PMID: 22353193
EM algorithm; Gene expression; Next generation sequencing; RNA degradation; RNA-Seq
24.  Pseudo semiparametric maximum likelihood estimation exploiting gene environment independence for population-based case–control studies with complex samples 
Biostatistics (Oxford, England)  2012;13(4):711-723.
Advances in human genetics have led to epidemiological investigations not only of the effects of genes alone but also of gene–environment (G–E) interaction. A widely accepted design strategy in the study of how genes and environment relate to disease risks is the population-based case–control study (PBCCS). For simple random samples, semiparametric methods for testing G–E interaction were developed by Chatterjee and Carroll in 2005. The use of complex sampling in PBCCS, involving differential probabilities of sample selection for cases and controls and possibly cluster sampling, is becoming more common. Two complexities, weighting for selection probabilities and intracluster correlation of observations, are induced by the complex sampling. We develop pseudo-semiparametric maximum likelihood estimators (pseudo-SPMLE) that apply to PBCCS with complex sampling. We study the finite sample performance of the pseudo-SPMLE using simulations and illustrate the pseudo-SPMLE with a US case–control study of kidney cancer.
doi:10.1093/biostatistics/kxs008
PMCID: PMC3616753  PMID: 22522235
Hardy–Weinberg equilibrium; Population weights; Selection probability; Stratified multistage cluster sampling; Taylor linearization
25.  A unified procedure for meta-analytic evaluation of surrogate end points in randomized clinical trials 
Biostatistics (Oxford, England)  2012;13(4):609-624.
The meta-analytic approach to evaluating surrogate end points assesses the predictiveness of treatment effect on the surrogate toward treatment effect on the clinical end point based on multiple clinical trials. Definition and estimation of the correlation of treatment effects were developed in linear mixed models and later extended to binary or failure time outcomes on a case-by-case basis. In a general regression setting that covers nonnormal outcomes, we discuss in this paper several metrics that are useful in the meta-analytic evaluation of surrogacy. We propose a unified 3-step procedure to assess these metrics in settings with binary end points, time-to-event outcomes, or repeated measures. First, the joint distribution of estimated treatment effects is ascertained by an estimating equation approach; second, the restricted maximum likelihood method is used to estimate the means and the variance components of the random treatment effects; finally, confidence intervals are constructed by a parametric bootstrap procedure. The proposed method is evaluated by simulations and applications to 2 clinical trials.
doi:10.1093/biostatistics/kxs003
PMCID: PMC3616754  PMID: 22394448
Causal inference; Meta-analysis; Surrogacy
