Results 1-25 (46)
1.  Assessing Discrimination of Risk Prediction Rules in a Clustered Data Setting
Lifetime data analysis  2012;19(2):242-256.
The AUC (area under the ROC curve) is a commonly used metric to assess discrimination of risk prediction rules; however, standard errors of the AUC are usually based on the Mann-Whitney U test, which assumes independence of sampling units. For ophthalmologic applications, it is desirable to assess risk prediction rules based on eye-specific outcome variables, which are generally highly, but not perfectly, correlated in fellow eyes (e.g., progression of individual eyes to age-related macular degeneration (AMD)). In this article, we use the extended Mann-Whitney U test (Rosner et al., 2009) for the case where subunits within a cluster may have different progression status and assess discrimination of different prediction rules in this setting. Both data analyses based on progression of AMD and simulation studies show reasonable accuracy of this extended Mann-Whitney U test for assessing discrimination of eye-specific risk prediction rules.
doi:10.1007/s10985-012-9240-6
PMCID: PMC3622772  PMID: 23263872
risk prediction; ROC curves; clustered data; GEE
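As a point of reference for the entry above: the standard (independence-assuming) AUC estimate is simply the Mann-Whitney U statistic rescaled by the number of case-control pairs. The sketch below, plain NumPy with hypothetical risk scores, computes that quantity; the clustered-data extension cited above adjusts the inference for fellow-eye correlation rather than replacing this basic statistic.

import numpy as np

def auc_mann_whitney(case_scores, control_scores):
    # Empirical AUC: P(case score > control score) + 0.5 * P(tie),
    # i.e. the Mann-Whitney U statistic divided by (n_cases * n_controls).
    diff = np.asarray(case_scores)[:, None] - np.asarray(control_scores)[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

rng = np.random.default_rng(0)
progressors = rng.normal(1.0, 1.0, size=50)       # hypothetical risk scores, progressing eyes
non_progressors = rng.normal(0.0, 1.0, size=80)   # hypothetical risk scores, non-progressing eyes
print(auc_mann_whitney(progressors, non_progressors))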
2.  ROC Analysis for Multiple Markers with Tree-Based Classification 
Lifetime data analysis  2012;19(2):257-277.
Multiple biomarkers are frequently observed or collected for detecting or understanding a disease. The research interest of this paper is to extend tools of ROC analysis from the univariate marker setting to the multivariate marker setting for evaluating the predictive accuracy of biomarkers using a tree-based classification rule. Using an arbitrarily combined and-or classifier, an ROC function together with a weighted ROC function (WROC) and their conjugate counterparts are introduced for examining the performance of multivariate markers. Specific features of the ROC and WROC functions and other related statistics are discussed in comparison with the familiar properties for a univariate marker. Nonparametric methods are developed for estimating the ROC and WROC functions, the area under the curve (AUC), and the concordance probability. With emphasis on the population-average performance of markers, the proposed procedures and inferential results are useful for evaluating marker predictability based on multivariate marker measurements with different choices of markers, and for evaluating different and-or combinations in classifiers.
doi:10.1007/s10985-012-9233-5
PMCID: PMC3633731  PMID: 23054242
Concordance probability; Multiple markers; Prediction accuracy; U-statistics
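As an informal illustration of the "and-or" combination discussed above (not the paper's estimator), the sketch below dichotomizes two hypothetical markers at given thresholds and reports the empirical true- and false-positive rates of the and-rule and the or-rule; sweeping the thresholds traces out the kind of combined-marker operating characteristics that the ROC and WROC functions summarize.

import numpy as np

def and_or_rates(m1, m2, disease, c1, c2, rule="and"):
    # Classify positive if both markers exceed their thresholds ("and")
    # or if at least one does ("or"); return the empirical (TPR, FPR).
    positive = (m1 > c1) & (m2 > c2) if rule == "and" else (m1 > c1) | (m2 > c2)
    return np.mean(positive[disease == 1]), np.mean(positive[disease == 0])

rng = np.random.default_rng(1)
disease = rng.integers(0, 2, size=500)
m1 = rng.normal(0.8 * disease, 1.0)   # hypothetical marker, shifted upward in diseased
m2 = rng.normal(0.5 * disease, 1.0)   # hypothetical marker, shifted upward in diseased
print(and_or_rates(m1, m2, disease, 0.3, 0.2, "and"))
print(and_or_rates(m1, m2, disease, 0.3, 0.2, "or"))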
3.  Subgroup specific incremental value of new markers for risk prediction 
Lifetime data analysis  2012;19(2):142-169.
In many clinical applications, understanding when measurement of new markers is necessary to provide added accuracy to existing prediction tools could lead to more cost effective disease management. Many statistical tools for evaluating the incremental value (IncV) of the novel markers over the routine clinical risk factors have been developed in recent years. However, most existing literature focuses primarily on global assessment. Since the IncVs of new markers often vary across subgroups, it would be of great interest to identify subgroups for which the new markers are most/least useful in improving risk prediction. In this paper we provide novel statistical procedures for systematically identifying potential traditional-marker based subgroups in whom it might be beneficial to apply a new model with measurements of both the novel and traditional markers. We consider various conditional time-dependent accuracy parameters for censored failure time outcome to assess the subgroup-specific IncVs. We provide non-parametric kernel-based estimation procedures to calculate the proposed parameters. Simultaneous interval estimation procedures are provided to account for sampling variation and adjust for multiple testing. Simulation studies suggest that our proposed procedures work well in finite samples. The proposed procedures are applied to the Framingham Offspring Study to examine the added value of an inflammation marker, C-reactive protein, on top of the traditional Framingham risk score for predicting 10-year risk of cardiovascular disease.
doi:10.1007/s10985-012-9235-3
PMCID: PMC3633735  PMID: 23263882
Incremental value; Partial area under the ROC curve; Prognostic accuracy; Risk prediction; Subgroup analysis; Time dependent ROC analysis
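As a deliberately simplified, binary-outcome analogue of the subgroup-specific incremental value idea above (the paper's setting is censored failure times with time-dependent accuracy measures), the sketch below fits a baseline and an expanded logistic model and reports the AUC gain within a subgroup defined by the traditional marker alone; all variable names and effect sizes are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 2000
x_old = rng.normal(size=n)                     # traditional risk factor (hypothetical)
x_new = rng.normal(size=n)                     # novel marker (hypothetical)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.8 * x_old + 0.6 * x_new))))

X_full = np.column_stack([x_old, x_new])
base = LogisticRegression().fit(x_old.reshape(-1, 1), y)
full = LogisticRegression().fit(X_full, y)

# Subgroup defined only by the traditional marker (e.g. an "intermediate risk" band)
sub = (x_old > -0.5) & (x_old < 0.5)
auc_base = roc_auc_score(y[sub], base.predict_proba(x_old[sub].reshape(-1, 1))[:, 1])
auc_full = roc_auc_score(y[sub], full.predict_proba(X_full[sub])[:, 1])
print("subgroup incremental value (AUC scale):", auc_full - auc_base)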
4.  Understanding Increments in Model Performance Metrics 
Lifetime data analysis  2012;19(2):10.1007/s10985-012-9238-0.
The area under the receiver operating characteristic curve (AUC) is the most commonly reported measure of discrimination for prediction models with binary outcomes. However, recently it has been criticized for its inability to increase when important risk factors are added to a baseline model with good discrimination. This has led to the claim that the reliance on the AUC as a measure of discrimination may miss important improvements in clinical performance of risk prediction rules derived from a baseline model. In this paper we investigate this claim by relating the AUC to measures of clinical performance based on sensitivity and specificity under the assumption of multivariate normality. The behavior of the AUC is contrasted with that of discrimination slope. We show that unless rules with very good specificity are desired, the change in the AUC does an adequate job as a predictor of the change in measures of clinical performance. However, stronger or more numerous predictors are needed to achieve the same increment in the AUC for baseline models with good versus poor discrimination. When excellent specificity is desired, our results suggest that the discrimination slope might be a better measure of model improvement than AUC. The theoretical results are illustrated using a Framingham Heart Study example of a model for predicting the 10-year incidence of atrial fibrillation.
doi:10.1007/s10985-012-9238-0
PMCID: PMC3656609  PMID: 23242535
risk prediction; discrimination; AUC; IDI; Youden index; relative utility
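The contrast drawn above between the AUC and the discrimination slope can be made concrete under the equal-variance binormal model: if the linear predictor is N(delta, 1) in events and N(0, 1) in non-events, the AUC equals Phi(delta / sqrt(2)), while the discrimination slope is the mean predicted risk in events minus that in non-events. The sketch below evaluates both for a hypothetical effect size and prevalence; it is a generic illustration, not the paper's derivation.

import numpy as np
from scipy.stats import norm

delta, prev = 1.0, 0.1                 # hypothetical standardized effect and prevalence
auc = norm.cdf(delta / np.sqrt(2))     # binormal AUC

def risk(lp):
    # Posterior event probability given the linear predictor under the binormal model
    f1, f0 = norm.pdf(lp, delta, 1.0), norm.pdf(lp, 0.0, 1.0)
    return prev * f1 / (prev * f1 + (1 - prev) * f0)

rng = np.random.default_rng(3)
lp_events = rng.normal(delta, 1.0, size=200_000)
lp_nonevents = rng.normal(0.0, 1.0, size=200_000)
slope = risk(lp_events).mean() - risk(lp_nonevents).mean()
print("AUC:", auc, "discrimination slope:", slope)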
5.  Estimating improvement in prediction with matched case–control designs 
Lifetime data analysis  2013;19(2):170-201.
When an existing risk prediction model is not sufficiently predictive, additional variables are sought for inclusion in the model. This paper addresses study designs to evaluate the improvement in prediction performance that is gained by adding a new predictor to a risk prediction model. We consider studies that measure the new predictor in a case–control subset of the study cohort, a practice that is common in biomarker research. We ask whether matching controls to cases with respect to baseline predictors improves efficiency. A variety of measures of prediction performance are studied. We find through simulation studies that matching improves the efficiency with which most measures are estimated, but can reduce efficiency for some. Efficiency gains are smaller when more controls per case are included in the study. A method that models the distribution of the new predictor in controls appears to improve estimation efficiency considerably.
doi:10.1007/s10985-012-9237-1
PMCID: PMC3664641  PMID: 23358916
Classification; Diagnosis; Medical decision making; Receiver operating characteristic curve
6.  Semiparametric additive marginal regression models for multiple type recurrent events 
Lifetime data analysis  2012;18(4):10.1007/s10985-012-9226-4.
Recurrent event data are often encountered in biomedical research, for example, recurrent infections or recurrent hospitalizations for patients after renal transplant. In many studies, there is more than one type of event of interest. Cai and Schaubel (2004) advocated a proportional marginal rate model for multiple type recurrent event data. In this paper, we propose a general additive marginal rate regression model. An estimating equations approach is used to obtain the estimators of the regression coefficients and baseline rate function. We prove the consistency and asymptotic normality of the proposed estimators. The finite sample properties of our estimators are demonstrated by simulations. The proposed methods are applied to the India renal transplant study to examine risk factors for bacterial, fungal and viral infections.
doi:10.1007/s10985-012-9226-4
PMCID: PMC3844629  PMID: 22899088
additive model; empirical process; multiple type recurrent events; recurrent events
7.  Competing risks with missing covariates: effect of haplotype match on hematopoietic cell transplant patients
Lifetime data analysis  2012;19(1):10.1007/s10985-012-9229-1.
In this paper we consider a problem from hematopoietic cell transplant (HCT) studies where there is interest in assessing the effect of haplotype match for donor and patient on the cumulative incidence function for right-censored competing risks data. For the HCT study, the donor's and patient's genotypes are fully observed and matched, but their haplotypes are missing. In this paper we describe how to deal with missing covariates of each individual for competing risks data. We suggest a procedure for estimating the cumulative incidence functions for a flexible class of regression models when there are missing data, and establish the large sample properties. Small sample properties are investigated using simulations in a setting that mimics the motivating haplotype matching problem. The proposed approach is then applied to the HCT study.
doi:10.1007/s10985-012-9229-1
PMCID: PMC3817559  PMID: 22968448
Binomial modeling; Bone marrow transplant; Competing risks; Haplotype effects; Haplotype match; Missing covariates; Inverse-censoring probability weighting; Nonparametric effects; Non-proportionality; Regression effects
8.  Robust inference in discrete hazard models for randomized clinical trials 
Lifetime data analysis  2012;18(4):446-469.
Time-to-event data in which failures are only assessed at discrete time points are common in many clinical trials. Examples include oncology studies where events are observed through periodic screenings such as radiographic scans. When the survival endpoint is acknowledged to be discrete, common methods for the analysis of observed failure times include the discrete hazard models (e.g., the discrete-time proportional hazards and the continuation ratio model) and the proportional odds model. In this manuscript, we consider estimation of a marginal treatment effect in discrete hazard models when the constant treatment effect assumption is violated. We demonstrate that the estimator resulting from these discrete hazard models is consistent for a parameter that depends on the underlying censoring distribution. An estimator that removes the dependence on the censoring mechanism is proposed and its asymptotic distribution is derived. Basing inference on the proposed estimator yields conclusions that are scientifically meaningful and reproducible. Simulation is used to assess the performance of the presented methodology in finite samples.
doi:10.1007/s10985-012-9224-6
PMCID: PMC3440522  PMID: 22810273
Censoring; Estimating equations; Discrete survival endpoints; Model misspecification; Robust inference
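For readers unfamiliar with the discrete hazard models named above, the basic building block is a person-period expansion followed by a binary regression with interval-specific intercepts; with a logit link this is the discrete-time logistic (continuation-ratio type) model. The sketch below fits only that baseline model, with simulated data and a hypothetical treatment indicator; the paper's contribution, robustifying the treatment-effect estimand against violation of the constant-effect assumption and dependence on the censoring distribution, is not reproduced here.

import numpy as np
import pandas as pd
import statsmodels.api as sm

def person_period(time, event, treat):
    # One row per subject per assessment interval, up to event or censoring.
    rows = []
    for t, d, z in zip(time, event, treat):
        for k in range(1, t + 1):
            rows.append({"interval": k, "z": z, "y": int(d and k == t)})
    return pd.DataFrame(rows)

rng = np.random.default_rng(4)
n = 500
z = rng.integers(0, 2, size=n)                         # treatment arm (hypothetical)
true_t = rng.geometric(np.where(z == 1, 0.15, 0.25))   # discrete event times
cens = rng.integers(1, 6, size=n)                      # administrative censoring interval
time = np.minimum(true_t, cens)
event = (true_t <= cens).astype(int)

pp = person_period(time, event, z)
X = pd.get_dummies(pp["interval"], prefix="t").astype(float)   # interval-specific intercepts
X["z"] = pp["z"].astype(float)
fit = sm.Logit(pp["y"], X).fit(disp=0)
print(fit.params["z"])    # discrete-time log odds ratio for treatment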
9.  Bayesian Inference of the Fully Specified Subdistribution Model for Survival Data with Competing Risks 
Lifetime Data Analysis  2012;18(3):339-363.
Competing risks data are routinely encountered in various medical applications because patients may die from different causes. Recently, several models have been proposed for fitting such survival data. In this paper, we develop a fully specified subdistribution model for survival data in the presence of competing risks via a subdistribution model for the primary cause of death and conditional distributions for other causes of death. Various properties of this fully specified subdistribution model have been examined. An efficient Gibbs sampling algorithm via latent variables is developed to carry out posterior computations. The Deviance Information Criterion (DIC) and the Logarithm of the Pseudomarginal Likelihood (LPML) are used for model comparison. An extensive simulation study is carried out to examine the performance of DIC and LPML in comparing the cause-specific hazards model, the mixture model, and the fully specified subdistribution model. The proposed methodology is applied to analyze a real dataset from a prostate cancer study in detail.
doi:10.1007/s10985-012-9221-9
PMCID: PMC3374158  PMID: 22484596
Latent variables; Markov chain Monte Carlo; Partial likelihood; Proportional hazards
10.  Relation between three classes of structural models for the effect of a time-varying exposure on survival 
Lifetime data analysis  2009;16(1):71-84.
Standard methods for estimating the effect of a time-varying exposure on survival may be biased in the presence of time-dependent confounders themselves affected by prior exposure. This problem can be overcome by inverse probability weighted estimation of Marginal Structural Cox Models (Cox MSM), g-estimation of Structural Nested Accelerated Failure Time Models (SNAFTM) and g-estimation of Structural Nested Cumulative Failure Time Models (SNCFTM). In this paper, we describe a data generation mechanism that approximately satisfies a Cox MSM, an SNAFTM and an SNCFTM. Besides providing a procedure for data simulation, our formal description of a data generation mechanism that satisfies all three models allows one to assess the relative advantages and disadvantages of each modeling approach. A simulation study is also presented to compare effect estimates across the three models.
doi:10.1007/s10985-009-9135-3
PMCID: PMC3635680  PMID: 19894116
11.  A copula model for bivariate hybrid censored survival data with application to the MACS study 
Lifetime data analysis  2010;16(2):231-249.
A copula model for bivariate survival data with hybrid censoring is proposed to study the association between survival time of individuals infected with HIV and persistence time of infection with an additional virus. Survival with HIV is right censored and the persistence time of the additional virus is subject to interval censoring case 1. A pseudo-likelihood method is developed to study the association between the two event times under such hybrid censoring. Asymptotic consistency and normality of the pseudo-likelihood estimator are established based on empirical process theory. Simulation studies indicate good performance of the estimator with moderate sample size. The method is applied to a motivating HIV study which investigates the effect of GB virus type C (GBV-C) co-infection on survival time of HIV infected individuals.
doi:10.1007/s10985-009-9139-z
PMCID: PMC3567926  PMID: 19921432
Association measure; Bivariate survival model; Copula; Current status data; Kendall's τ; Right censored data; Empirical process
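The abstract above does not spell out the copula family, so purely for orientation the sketch below uses the Clayton copula, a standard choice for positively associated survival times, for which Kendall's tau has the closed form theta / (theta + 2); it draws a sample by the conditional method and checks the empirical tau against the formula.

import numpy as np
from scipy.stats import kendalltau

def clayton_sample(theta, n, rng):
    # Conditional (inverse-CDF) sampling from the Clayton copula.
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)
    return u, v

theta = 2.0
rng = np.random.default_rng(5)
u, v = clayton_sample(theta, 20_000, rng)
print("closed-form Kendall's tau:", theta / (theta + 2.0))
print("empirical Kendall's tau:  ", kendalltau(u, v)[0])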
12.  Marginal Hazard Regression for Correlated Failure Time Data with Auxiliary Covariates 
Lifetime Data Analysis  2011;18(1):116-138.
In many biomedical studies, it is common that, due to budget constraints, the primary covariate is only collected in a randomly selected subset of the full study cohort. Often, there is an inexpensive auxiliary covariate for the primary exposure variable that is readily available for all the cohort subjects. Valid statistical methods that make use of the auxiliary information to improve study efficiency need to be developed. To this end, we develop an estimated partial likelihood approach for correlated failure time data with auxiliary information. We assume a marginal hazard model with a common baseline hazard function. The asymptotic properties for the proposed estimators are developed. The proof of the asymptotic results for the proposed estimators is nontrivial since the moments used in the estimating equation are not martingale-based and the classical martingale theory is not sufficient. Instead, our proofs rely on modern empirical process theory. The proposed estimator is evaluated through simulation studies and is shown to have increased efficiency compared to existing methods. The proposed methods are illustrated with a data set from the Framingham study.
doi:10.1007/s10985-011-9209-x
PMCID: PMC3259288  PMID: 22094533
Marginal hazard model; Correlated failure time; Validation set; Auxiliary covariate
13.  The versatility of multi-state models for the analysis of longitudinal data with unobservable features 
Lifetime Data Analysis  2012;20:51-75.
Multi-state models provide a convenient statistical framework for a wide variety of medical applications characterized by multiple events and longitudinal data. We illustrate this through four examples. The potential value of the incorporation of unobserved or partially observed states is highlighted. In addition, joint modelling of multiple processes is illustrated with application to potentially informative loss to follow-up, mismeasured or misclassified data, and causal inference.
doi:10.1007/s10985-012-9236-2
PMCID: PMC3884139  PMID: 23225140
Causal inference; Classification uncertainty; Informative missing data; Multi-state models; Time dependent explanatory variables
14.  Analysis of quality-of-life adjusted failure time data in the presence of competing, possibly informative, censoring mechanisms
Lifetime data analysis  2008;15(1):1-23.
We derive estimators of the mean of a function of a quality-of-life adjusted failure time, in the presence of competing right censoring mechanisms. Our approach allows for the possibility that some or all of the competing censoring mechanisms are associated with the endpoint, even after adjustment for recorded prognostic factors, with the degree of residual association possibly different for distinct censoring processes. Our methods generalize from a single to many censoring processes and from ignorable to non-ignorable censoring processes.
doi:10.1007/s10985-008-9088-y
PMCID: PMC3499834  PMID: 18575980
Cause-specific; Dependent censoring; Inverse weighted probability; Sensitivity analysis
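For orientation, the simplest special case of the estimators described above, a single censoring process that is fully independent of the endpoint, reduces to an inverse-probability-of-censoring-weighted (IPCW) mean: each uncensored observation is weighted by the inverse of the estimated probability of remaining uncensored up to its observed time. A minimal sketch, estimating E[g(T)] for g(T) = 1{T <= t0} with simulated data, follows; the paper's generalization to many, possibly non-ignorable censoring processes is not reproduced.

import numpy as np

def censoring_survival_at(time, event):
    # Kaplan-Meier estimate of P(C >= t), evaluated just before each observed time;
    # censorings (event == 0) are treated as the "events" of this survival curve.
    order = np.argsort(time)
    d = event[order]
    at_risk = len(time) - np.arange(len(time))
    factors = np.where(d == 0, 1.0 - 1.0 / at_risk, 1.0)
    K_left = np.r_[1.0, np.cumprod(factors)[:-1]]
    out = np.empty(len(time))
    out[order] = K_left
    return out

rng = np.random.default_rng(6)
n = 5000
T = rng.exponential(2.0, size=n)        # failure times (hypothetical)
C = rng.exponential(3.0, size=n)        # independent censoring times (hypothetical)
X = np.minimum(T, C)
delta = (T <= C).astype(float)

t0 = 2.0
K = censoring_survival_at(X, delta)
ipcw_estimate = np.mean(delta * (X <= t0) / K)
print(ipcw_estimate, 1.0 - np.exp(-t0 / 2.0))   # IPCW estimate vs. true P(T <= 2)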
15.  Semiparametric Estimation of Treatment Effect with Time-Lagged Response in the Presence of Informative Censoring 
Lifetime data analysis  2011;17(4):566-593.
In many randomized clinical trials, the primary response variable, for example, the survival time, is not observed directly after the patients enroll in the study but rather observed after some period of time (lag time). It is often the case that such a response variable is missing for some patients due to censoring that occurs when the study ends before the patient’s response is observed or when the patients drop out of the study. It is often assumed that censoring occurs at random, which is referred to as noninformative censoring; however, in many cases such an assumption may not be reasonable. If the missing data are not analyzed properly, the estimator or test for the treatment effect may be biased. In this paper, we use semiparametric theory to derive a class of consistent and asymptotically normal estimators for the treatment effect parameter which are applicable when the response variable is right censored. The baseline auxiliary covariates and post-treatment auxiliary covariates, which may be time-dependent, are also considered in our semiparametric model. These auxiliary covariates are used to derive estimators that both account for informative censoring and are more efficient than the estimators which do not consider the auxiliary covariates.
doi:10.1007/s10985-011-9199-8
PMCID: PMC3217309  PMID: 21706378
Informative censoring; Influence function; Logrank test; Nuisance tangent space; Proportional hazards model; Regular and asymptotically linear estimators
16.  Imputation for semiparametric transformation models with biased-sampling data 
Lifetime data analysis  2012;18(4):470-503.
Widely recognized in many fields including economics, engineering, epidemiology, health sciences, technology and wildlife management, length-biased sampling generates biased and right-censored data but often provides the best information available for statistical inference. Different from traditional right-censored data, length-biased data have unique aspects resulting from their sampling procedures. We exploit these unique aspects and propose a general imputation-based estimation method for analyzing length-biased data under a class of flexible semiparametric transformation models. We present new computational algorithms that can jointly estimate the regression coefficients and the baseline function semiparametrically. The imputation-based method under the transformation model provides an unbiased estimator regardless of whether the censoring depends on the covariates. We establish large-sample properties using empirical process methods. Simulation studies show that under small to moderate sample sizes, the proposed procedure has smaller mean square errors than two existing estimation procedures. Finally, we demonstrate the estimation procedure with a real data example.
doi:10.1007/s10985-012-9225-5
PMCID: PMC3440536  PMID: 22903245
Biased sampling; Estimating equation; Imputation; Transformation models
17.  Evaluating bias correction in weighted proportional hazards regression 
Lifetime Data Analysis  2008;15(1):120-146.
Often in observational studies of time to an event, the study population is a biased (i.e., unrepresentative) sample of the target population. In the presence of biased samples, it is common to weight subjects by the inverse of their respective selection probabilities. Pan and Schaubel (2008) recently proposed inference procedures for an inverse selection probability weighted (ISPW) Cox model, applicable when selection probabilities are not treated as fixed but estimated empirically. The proposed weighting procedure requires auxiliary data to estimate the weights and is computationally more intense than unweighted estimation. The ignorability of the sample selection process in terms of parameter estimators and predictions is often of interest, from several perspectives: e.g., to determine whether weighting makes a significant difference to the analysis at hand, which would in turn indicate whether the collection of auxiliary data is required in future studies; or to evaluate previous studies that did not correct for selection bias. In this article, we propose methods to quantify the degree of bias corrected by the weighting procedure in the partial likelihood and Breslow-Aalen estimators. Asymptotic properties of the proposed test statistics are derived. The finite-sample significance level and power are evaluated through simulation. The proposed methods are then applied to data from a national organ failure registry to evaluate the bias in a post kidney transplant survival model.
doi:10.1007/s10985-008-9102-4
PMCID: PMC3367517  PMID: 18958616
Confidence bands; Inverse-selection-probability weights; Observational studies; Proportional hazards model; Selection bias; Wald test
18.  Estimating treatment effects on the marginal recurrent event mean in the presence of a terminating event 
Lifetime Data Analysis  2010;16(4):451-477.
In biomedical studies where the event of interest is recurrent (e.g., hospitalization), it is often the case that the recurrent event sequence is subject to being stopped by a terminating event (e.g., death). In comparing treatment options, the marginal recurrent event mean is frequently of interest. One major complication in the recurrent/terminal event setting is that censoring times are not known for subjects observed to die, which renders standard risk set based methods of estimation inapplicable. We propose two semiparametric methods for estimating the difference or ratio of treatment-specific marginal mean numbers of events. The first method involves imputing unobserved censoring times, while the second method uses inverse probability of censoring weighting. In each case, imbalances in the treatment-specific covariate distributions are adjusted out through inverse probability of treatment weighting. After the imputation and/or weighting, the treatment-specific means (then their difference or ratio) are estimated nonparametrically. Large-sample properties are derived for each of the proposed estimators, with finite sample properties assessed through simulation. The proposed methods are applied to kidney transplant data.
doi:10.1007/s10985-009-9149-x
PMCID: PMC3364315  PMID: 20063183
Censoring; Imputation; Inverse weighting; Marginal mean; Multivariate survival analysis; Semiparametric methods
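Of the ingredients mentioned above, the inverse probability of treatment weighting step is the easiest to isolate. The sketch below (hypothetical confounder, treatment, and event counts; no terminal event or censoring, which are the paper's actual complications) estimates propensity scores with a logistic model and compares weighted treatment-specific mean event counts.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 1000
x = rng.normal(size=n)                               # confounder (hypothetical)
a = rng.binomial(1, 1.0 / (1.0 + np.exp(-x)))        # treatment assignment depends on x
counts = rng.poisson(np.exp(0.3 * x - 0.5 * a))      # recurrent-event counts (hypothetical)

ps = LogisticRegression().fit(x.reshape(-1, 1), a).predict_proba(x.reshape(-1, 1))[:, 1]
w = np.where(a == 1, 1.0 / ps, 1.0 / (1.0 - ps))     # inverse probability of treatment weights

mean_treated = np.average(counts[a == 1], weights=w[a == 1])
mean_control = np.average(counts[a == 0], weights=w[a == 0])
print("weighted mean difference:", mean_treated - mean_control)
print("weighted mean ratio:     ", mean_treated / mean_control)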
19.  Bayesian local influence for survival models 
Lifetime Data Analysis  2010;17(1):43-70.
The aim of this paper is to develop a Bayesian local influence method (Zhu et al. 2009, submitted) for assessing minor perturbations to the prior, the sampling distribution, and individual observations in survival analysis. We introduce a perturbation model to characterize simultaneous (or individual) perturbations to the data, the prior distribution, and the sampling distribution. We construct a Bayesian perturbation manifold for the perturbation model and calculate its associated geometric quantities, including the metric tensor, to characterize the intrinsic structure of the perturbation model (or perturbation scheme). We develop local influence measures based on several objective functions to quantify the degree of various perturbations to statistical models. We carry out several simulation studies and analyze two real data sets to illustrate our Bayesian local influence method in detecting influential observations and in characterizing the sensitivity to the prior distribution and hazard function.
doi:10.1007/s10985-010-9170-0
PMCID: PMC3321488  PMID: 20526807
Bayesian local influence; Bayesian perturbation manifold; Perturbed model; Posterior distribution; Prior; Survival model
20.  Linear regression analysis of survival data with missing censoring indicators 
Lifetime data analysis  2010;17(2):256-279.
Linear regression analysis has been studied extensively in a random censorship setting, but typically all of the censoring indicators are assumed to be observed. In this paper, we develop synthetic data methods for estimating regression parameters in a linear model when some censoring indicators are missing. We define estimators based on regression calibration, imputation, and inverse probability weighting techniques, and we prove all three estimators are asymptotically normal. The finite-sample performance of each estimator is evaluated via simulation. We illustrate our methods by assessing the effects of sex and age on the time to non-ambulatory progression for patients in a brain cancer clinical trial.
doi:10.1007/s10985-010-9175-8
PMCID: PMC3020262  PMID: 20559722
Asymptotic normality; Censoring indicator; Imputation; Inverse probability weighting; Least squares; Missing at random; Regression calibration
21.  On estimation of linear transformation models with nested case–control sampling 
Lifetime Data Analysis  2011;18(1):80-93.
Nested case–control (NCC) sampling is widely used in large epidemiological cohort studies for its cost effectiveness, but its data analysis primarily relies on the Cox proportional hazards model. In this paper, we consider a family of linear transformation models for analyzing NCC data and propose an inverse selection probability weighted estimating equation method for inference. Consistency and asymptotic normality of our estimators for regression coefficients are established. We show that the asymptotic variance has a closed analytic form and can be easily estimated. Numerical studies are conducted to support the theory and an application to the Wilms’ Tumor Study is also given to illustrate the methodology.
doi:10.1007/s10985-011-9203-3
PMCID: PMC3259210  PMID: 21912975
Linear transformation models; Nested case–control sampling; Weighted estimating equation
22.  Additive–multiplicative rates model for recurrent events 
Lifetime data analysis  2010;16(3):353-373.
Recurrent events are frequently encountered in biomedical studies. Evaluating the covariate effects on the marginal recurrent event rate is of practical interest. There are mainly two types of rate models for recurrent event data: the multiplicative rates model and the additive rates model. We consider a more flexible additive–multiplicative rates model for the analysis of recurrent event data, wherein some covariate effects are additive while others are multiplicative. We formulate estimating equations for estimating the regression parameters. The estimators for these regression parameters are shown to be consistent and asymptotically normally distributed under appropriate regularity conditions. Moreover, an estimator of the baseline mean function is proposed and its large sample properties are investigated. We also conduct simulation studies to evaluate the finite sample behavior of the proposed estimators. A medical study of patients with cystic fibrosis who suffered from recurrent pulmonary exacerbations is provided to illustrate the proposed method.
doi:10.1007/s10985-010-9160-2
PMCID: PMC3199147  PMID: 20229314
Recurrent events; Rate regression; Additive–multiplicative rates model; Counting process; Empirical process
23.  Semiparametric analysis of recurrent events: artificial censoring, truncation, pairwise estimation and inference 
Lifetime data analysis  2010;16(4):509-524.
The analysis of recurrent failure time data from longitudinal studies can be complicated by the presence of dependent censoring. A substantial literature has developed based on an artificial censoring device. We explore in this article the connection between this class of methods and truncated data structures. In addition, a new procedure is developed for estimation and inference in a joint model for recurrent events and dependent censoring. Estimation proceeds using a mixed U-statistic based estimating function approach. New resampling-based methods for variance estimation and model checking are also described. The methods are illustrated by application to data from an HIV clinical trial as well as by a limited simulation study.
doi:10.1007/s10985-009-9150-4
PMCID: PMC2939236  PMID: 20063182
Accelerated failure time model; Cause-specific hazard; Comparability; Competing risks; Empirical process; Semi-competing risks data
24.  Missing Genetic Information in Case-Control Family Data with General Semi-Parametric Shared Frailty Model 
Lifetime data analysis  2010;17(2):175-194.
Case-control family data are now widely used to examine the role of gene-environment interactions in the etiology of complex diseases. In these types of studies, exposure levels are obtained retrospectively and, frequently, information on most risk factors of interest is available on the probands but not on their relatives. In this work we consider correlated failure time data arising from population-based case-control family studies with missing genotypes of relatives. We present a new method for estimating the age-dependent marginalized hazard function. The proposed technique has two major advantages: (1) it is based on the pseudo full likelihood function rather than a pseudo composite likelihood function, which usually suffers from substantial efficiency loss; (2) the cumulative baseline hazard function is estimated using a two-stage estimator instead of an iterative process. We assess the performance of the proposed methodology with simulation studies, and illustrate its utility on a real data example.
doi:10.1007/s10985-010-9178-5
PMCID: PMC3174530  PMID: 21153764
case-control family study; missing genotypes; multivariate survival analysis; frailty model
25.  A general joint model for longitudinal measurements and competing risks survival data with heterogeneous random effects 
Lifetime data analysis  2010;17(1):80-100.
This article studies a general joint model for longitudinal measurements and competing risks survival data. The model consists of a linear mixed effects sub-model for the longitudinal outcome, a proportional cause-specific hazards frailty sub-model for the competing risks survival data, and a regression sub-model for the variance–covariance matrix of the multivariate latent random effects based on a modified Cholesky decomposition. The model provides a useful approach to adjust for non-ignorable missing data due to dropout for the longitudinal outcome, enables analysis of the survival outcome with informative censoring and intermittently measured time-dependent covariates, as well as joint analysis of the longitudinal and survival outcomes. Unlike previously studied joint models, our model allows for heterogeneous random covariance matrices. It also offers a framework to assess the homogeneous covariance assumption of existing joint models. A Bayesian MCMC procedure is developed for parameter estimation and inference. Its performances and frequentist properties are investigated using simulations. A real data example is used to illustrate the usefulness of the approach.
doi:10.1007/s10985-010-9169-6
PMCID: PMC3162577  PMID: 20549344
Cause-specific hazard; Bayesian analysis; Cholesky decomposition; Mixed effects model; MCMC; Modeling covariance matrices
