The Tshepo study was the first clinical trial to evaluate outcomes of adults receiving nevirapine (NVP)-based versus efavirenz (EFV)-based combination antiretroviral therapy (cART) in Botswana. This was a 3-year study (n=650) comparing the efficacy and tolerability of various first-line cART regimens, stratified by baseline CD4+ cell count: <200 (low) vs. 201-350 (high). Using targeted maximum likelihood estimation (TMLE), we retrospectively evaluated the causal effect of assigned NNRTI on time to virologic failure or death [intent-to-treat (ITT)] and time to the first of virologic failure, death, or treatment-modifying toxicity [time to loss of virological response (TLOVR)] by sex and baseline CD4+. Sex significantly modified the effect of EFV versus NVP for both the ITT and TLOVR outcomes, with risk differences in the probability of survival of males versus females of approximately 6% (p=0.015) and 12% (p=0.001), respectively. Baseline CD4+ also modified the effect of EFV versus NVP for the TLOVR outcome, with a mean difference in survival probability of approximately 12% (p=0.023) in the high versus low CD4+ cell count group. TMLE appears to be an efficient technique that allows for the clinically meaningful delineation and interpretation of the causal effect of NNRTI treatment, and of effect modification by sex and baseline CD4+ cell count strata, in this study. EFV-treated women and NVP-treated men had more favorable cART outcomes. In addition, adults initiating EFV-based cART at higher baseline CD4+ cell count values had more favorable outcomes than those initiating NVP-based cART.
When a large number of candidate variables are present, a dimension reduction procedure is usually conducted to reduce the variable space before the subsequent analysis is carried out. The goal of dimension reduction is to find a list of candidate genes of a more operable length, ideally including all the relevant genes. Leaving many uninformative genes in the analysis can lead to biased estimates and reduced power. Therefore, dimension reduction is often considered a necessary precursor to the analysis because it not only reduces the cost of handling numerous variables, but also has the potential to improve the performance of the downstream analysis algorithms.
We propose a TMLE-VIM dimension reduction procedure based on the variable importance measurement (VIM) in the framework of targeted maximum likelihood estimation (TMLE). TMLE is an extension of maximum likelihood estimation that targets the parameter of interest. TMLE-VIM is a two-stage procedure. The first stage employs a machine learning algorithm, and the second stage improves the first-stage estimate with respect to the parameter of interest.
We demonstrate with simulations and data analyses that our approach not only enjoys the prediction power of machine learning algorithms, but also accounts for the correlation structures among variables and therefore produces better variable rankings. When utilized in dimension reduction, TMLE-VIM can help to obtain the shortest possible list with the most truly associated variables.
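The two-stage idea lends itself to a compact numerical illustration. The sketch below is not the authors' implementation: it substitutes plain least squares for the machine-learning (e.g., super learning) fit of stage 1, uses a linear fluctuation appropriate for a continuous outcome in stage 2, and simulates independent binary candidate variables, so the treatment mechanism reduces to a marginal frequency.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 2000, 6
X = (rng.uniform(size=(n, p)) < 0.5).astype(float)  # binary candidate variables
# only X0 and X1 truly affect Y; X2..X5 are noise
Y = 1.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)

vim = np.zeros(p)
for j in range(p):
    A, W = X[:, j], np.delete(X, j, axis=1)
    # stage 1: initial estimate of E[Y | A, W] (least squares here, standing in
    # for the machine-learning fit used in the actual procedure)
    D = np.column_stack([np.ones(n), A, W])
    beta = np.linalg.lstsq(D, Y, rcond=None)[0]
    Q1 = np.column_stack([np.ones(n), np.ones(n), W]) @ beta
    Q0 = np.column_stack([np.ones(n), np.zeros(n), W]) @ beta
    Q = np.where(A == 1, Q1, Q0)
    # treatment mechanism g(1|W); the variables are independent here, so the
    # marginal frequency suffices
    g1 = np.full(n, A.mean())
    # stage 2: one-dimensional targeting step (linear fluctuation, continuous Y)
    H = A / g1 - (1 - A) / (1 - g1)
    eps = np.sum(H * (Y - Q)) / np.sum(H ** 2)
    vim[j] = np.mean((Q1 + eps / g1) - (Q0 - eps / (1 - g1)))

ranking = np.argsort(-np.abs(vim))  # variables ordered by targeted importance
```

With only the first stage, the ranking would rest entirely on the initial fit; the targeting step adjusts each variable's estimate toward the importance parameter itself before ranking.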
Collaborative double robust targeted maximum likelihood estimators represent a fundamental further advance over standard targeted maximum likelihood estimators of a pathwise differentiable parameter of a data generating distribution in a semiparametric model, introduced in van der Laan and Rubin (2006). The targeted maximum likelihood approach involves fluctuating an initial estimate of a relevant factor (Q) of the density of the observed data in order to make a bias/variance tradeoff targeted toward the parameter of interest. The fluctuation involves estimation of a nuisance parameter portion of the likelihood, g. TMLE has been shown to be consistent and asymptotically normally distributed (CAN) under regularity conditions when either one of these two factors of the likelihood of the data is correctly specified, and it is semiparametric efficient if both are correctly specified.
In this article we provide a template for applying collaborative targeted maximum likelihood estimation (C-TMLE) to the estimation of pathwise differentiable parameters in semiparametric models. The procedure creates a sequence of candidate targeted maximum likelihood estimators based on an initial estimate for Q coupled with a succession of increasingly non-parametric estimates for g. In a departure from current state-of-the-art nuisance parameter estimation, C-TMLE estimates of g are constructed based on a loss function for the targeted maximum likelihood estimator of the relevant factor Q that uses the nuisance parameter to carry out the fluctuation, instead of a loss function for the nuisance parameter itself. Likelihood-based cross-validation is used to select the best among all candidate TMLEs of Q0 in this sequence. A penalized-likelihood loss function for Q is suggested when the parameter of interest is borderline-identifiable.
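The candidate-sequence idea can be sketched numerically. The toy version below simplifies the procedure in several ways: a fixed nested sequence of logistic-regression estimates for g rather than a greedy search, squared-error loss of the targeted fit in place of the (penalized) likelihood-based loss, a least-squares initial estimate for Q, and a linear fluctuation for a continuous outcome. The key point it illustrates is that g is selected by how well the *targeted* Q predicts the outcome, not by how well g itself fits the treatment.

```python
import numpy as np

expit = lambda x: 1 / (1 + np.exp(-x))

def fit_logistic(X, y, iters=25):
    """Plain Newton-Raphson logistic regression."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        pr = expit(X @ beta)
        beta += np.linalg.solve(X.T @ (X * (pr * (1 - pr))[:, None]),
                                X.T @ (y - pr))
    return beta

rng = np.random.default_rng(3)
n = 3000
W = rng.normal(size=(n, 3))            # W0 is a confounder, W2 is pure noise
A = rng.binomial(1, expit(0.8 * W[:, 0]))
Y = A + W[:, 0] + 0.5 * W[:, 1] + rng.normal(size=n)   # true ATE = 1

# initial estimate of E[Y | A, W], deliberately misspecified (omits W0)
D = np.column_stack([np.ones(n), A, W[:, 1]])
b = np.linalg.lstsq(D, Y, rcond=None)[0]
Q1 = b[0] + b[1] + b[2] * W[:, 1]
Q0 = b[0] + b[2] * W[:, 1]
Q = np.where(A == 1, Q1, Q0)

# nested sequence of increasingly non-parametric estimates of g(1|W)
designs = [np.ones((n, 1))] + [np.column_stack([np.ones(n), W[:, :k]])
                               for k in (1, 2, 3)]

# pick the g-estimator whose targeted Q best predicts Y under cross-validation
fold = rng.integers(0, 5, size=n)
cv_loss = []
for Xg in designs:
    loss = 0.0
    for f in range(5):
        tr, te = fold != f, fold == f
        gb = fit_logistic(Xg[tr], A[tr])
        g_tr = np.clip(expit(Xg[tr] @ gb), 0.025, 0.975)
        g_te = np.clip(expit(Xg[te] @ gb), 0.025, 0.975)
        H_tr = A[tr] / g_tr - (1 - A[tr]) / (1 - g_tr)
        H_te = A[te] / g_te - (1 - A[te]) / (1 - g_te)
        eps = np.sum(H_tr * (Y[tr] - Q[tr])) / np.sum(H_tr ** 2)
        loss += np.sum((Y[te] - (Q[te] + eps * H_te)) ** 2)
    cv_loss.append(loss)
best = int(np.argmin(cv_loss))

# final targeted estimate with the selected g
Xg = designs[best]
g1 = np.clip(expit(Xg @ fit_logistic(Xg, A)), 0.025, 0.975)
H = A / g1 - (1 - A) / (1 - g1)
eps = np.sum(H * (Y - Q)) / np.sum(H ** 2)
psi = np.mean((Q1 + eps / g1) - (Q0 - eps / (1 - g1)))
```

Because the initial Q omits the confounder W0, candidates for g that adjust for W0 reduce the cross-validated loss of the targeted fit and are selected, recovering a roughly unbiased effect estimate.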
We present theoretical results for “collaborative double robustness,” demonstrating that the collaborative targeted maximum likelihood estimator is CAN even when Q and g are both mis-specified, provided that g solves a specified score equation implied by the difference between Q and the true Q0. This marks an improvement over the current definition of double robustness in the estimating equation literature.
We also establish an asymptotic linearity theorem for the C-DR-TMLE of the target parameter, showing that the C-DR-TMLE is more adaptive to the truth, and, as a consequence, can even be super efficient if the first stage density estimator does an excellent job itself with respect to the target parameter.
This research provides a template for targeted efficient and robust loss-based learning of a particular target feature of the probability distribution of the data within large (infinite dimensional) semi-parametric models, while still providing statistical inference in terms of confidence intervals and p-values. This research also breaks with a taboo (e.g., in the propensity score literature in the field of causal inference) on using the relevant part of the likelihood to fine-tune the fitting of the nuisance parameter/censoring mechanism/treatment mechanism.
asymptotic linearity; coarsening at random; causal effect; censored data; cross-validation; collaborative double robust; double robust; efficient influence curve; estimating function; estimator selection; influence curve; G-computation; locally efficient; loss function; marginal structural model; maximum likelihood estimation; model selection; pathwise derivative; semiparametric model; sieve; super efficiency; super-learning; targeted maximum likelihood estimation; targeted nuisance parameter estimator selection; variable importance
A concrete example of the collaborative double robust targeted maximum likelihood estimator (C-TMLE), introduced in a companion article in this issue, is presented and applied to the estimation of causal effects and variable importance parameters in genomic data. The focus is on non-parametric estimation in a point treatment data structure. Simulations illustrate the performance of C-TMLE relative to current competitors such as the augmented inverse probability of treatment weighted estimator, which relies on an external non-collaborative estimator of the treatment mechanism, and inefficient estimation procedures including propensity score matching and standard inverse probability of treatment weighting. C-TMLE is also applied to the estimation of the covariate-adjusted marginal effect of individual HIV mutations on resistance to the anti-retroviral drug lopinavir. The influence curve of the C-TMLE is used to establish asymptotically valid statistical inference. The list of mutations found to have a statistically significant association with resistance is in excellent agreement with mutation scores provided by the Stanford HIVdb mutation scores database.
causal effect; cross-validation; collaborative double robust; double robust; efficient influence curve; penalized likelihood; penalization; estimator selection; locally efficient; maximum likelihood estimation; model selection; super efficiency; super learning; targeted maximum likelihood estimation; targeted nuisance parameter estimator selection; variable importance
There is an active debate in the literature on censored data about the relative performance of model-based maximum likelihood estimators, IPCW estimators, and a variety of double robust semiparametric efficient estimators. Kang and Schafer (2007) demonstrate the fragility of double robust and IPCW estimators in a simulation study with positivity violations. They focus on a simple missing data problem with covariates, where one desires to estimate the mean of an outcome that is subject to missingness. Responses by Robins et al. (2007), Tsiatis and Davidian (2007), Tan (2007), and Ridgeway and McCaffrey (2007) further explore the challenges faced by double robust estimators and offer suggestions for improving their stability. In this article, we join the debate by presenting targeted maximum likelihood estimators (TMLEs). We demonstrate that TMLEs whose parametric submodel respects the global bounds on continuous outcomes are especially suitable for dealing with positivity violations because, in addition to being double robust and semiparametric efficient, they are substitution estimators. We demonstrate the practical performance of TMLEs relative to other estimators in the simulations designed by Kang and Schafer (2007), and in modified simulations with even greater estimation challenges.
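The role of the bounds can be made concrete in the Kang-Schafer-style missing-data setting. The sketch below is a simplified illustration, not the paper's full estimator: it rescales the outcome to (0, 1) and carries out the fluctuation on the logistic scale, so the targeted predictions, and hence the substitution estimator, respect the bounds no matter how large the inverse weights delta/g become.

```python
import numpy as np

expit = lambda x: 1 / (1 + np.exp(-x))
logit = lambda q: np.log(q / (1 - q))

rng = np.random.default_rng(0)
n = 4000
W = rng.normal(size=n)
g = expit(0.5 + W)                     # P(outcome observed | W)
delta = rng.binomial(1, g)             # missingness indicator
Y = 2 + W + rng.normal(scale=0.5, size=n)   # outcome; used only where delta == 1

# rescale the outcome to (0, 1) using estimated global bounds, so the logistic
# submodel cannot leave the allowed range
obs = delta == 1
a, b = Y[obs].min() - 0.1, Y[obs].max() + 0.1
Ys = (Y - a) / (b - a)

# initial regression of the scaled outcome on W, fit on the observed subjects
X = np.column_stack([np.ones(n), W])
beta = np.linalg.lstsq(X[obs], Ys[obs], rcond=None)[0]
Q = np.clip(X @ beta, 1e-4, 1 - 1e-4)

# logistic fluctuation with clever covariate H = delta / g: the fluctuated fit
# stays inside (0, 1) even where g is small and the weights are extreme
H = delta / g
eps = 0.0
for _ in range(50):                    # one-dimensional Newton solve
    Qe = expit(logit(Q) + eps * H)
    eps += np.sum(H * (Ys - Qe)) / np.sum(H ** 2 * Qe * (1 - Qe))

Qstar = expit(logit(Q) + eps / g)      # targeted prediction for every subject
psi = a + (b - a) * np.mean(Qstar)     # substitution estimator of E[Y]
```

The final estimate is a plug-in (substitution) estimator: an average of bounded predictions, so it can never leave the range of the outcome, unlike an unbounded inverse-weighted sum.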
censored data; collaborative double robustness; collaborative targeted maximum likelihood estimation; double robust; estimator selection; inverse probability of censoring weighting; locally efficient estimation; maximum likelihood estimation; semiparametric model; targeted maximum likelihood estimation; targeted minimum loss based estimation; targeted nuisance parameter estimator selection
We consider two-stage sampling designs, including so-called nested case-control studies, in which one takes a random sample from a target population and completes measurements on each subject in the first stage. The second stage involves drawing a subsample from the original sample and collecting additional data on the subsample. This data structure can be viewed as a missing-data structure on the full-data structure, in which the second-stage measurements are missing for subjects outside the subsample. Methods for analyzing two-stage designs include parametric maximum likelihood estimation and estimating-equation methodology. We propose an inverse probability of censoring weighted targeted maximum likelihood estimator (IPCW-TMLE) for two-stage sampling designs and present simulation studies featuring this estimator.
two-stage designs; targeted maximum likelihood estimators; nested case control studies; double robust estimation
In longitudinal and repeated measures data analysis, the goal is often to determine the effect of a treatment or exposure on a particular outcome (e.g., disease progression). We consider a semiparametric repeated measures regression model in which the parametric component models the effect of the variable of interest and any modification of that effect by other covariates. The expectation of this parametric component over the other covariates is a measure of variable importance. Here, we present a targeted maximum likelihood estimator of the finite-dimensional regression parameter, which is easily estimated using standard software for generalized estimating equations.
The targeted maximum likelihood method provides double robust and locally efficient estimates of the variable importance parameters, with inference based on the influence curve. We demonstrate these properties through simulation under correct and incorrect model specification, and apply our method in practice to estimating the activity of transcription factors (TFs) over the cell cycle in yeast. We specifically target the importance of SWI4, SWI6, MBP1, MCM1, ACE2, FKH2, NDD1, and SWI5.
The semiparametric model allows us to determine the importance of a TF at specific time points by specifying time indicators as potential effect modifiers of the TF. Our results are promising, showing significant importance trends during the expected time periods. This methodology can also be used as a variable importance analysis tool to assess the effect of a large number of variables such as gene expressions or single nucleotide polymorphisms.
targeted maximum likelihood; semiparametric; repeated measures; longitudinal; transcription factors
Covariate adjustment using linear models for continuous outcomes in randomized trials has been shown to increase efficiency and power over the unadjusted method in estimating the marginal effect of treatment. However, for binary outcomes, investigators generally rely on the unadjusted estimate, as the literature indicates that covariate-adjusted estimates based on logistic regression models are less efficient. The crucial step that has been missing when adjusting for covariates is that one must integrate/average the adjusted estimate over those covariates in order to obtain the marginal effect. We apply the method of targeted maximum likelihood estimation (tMLE) to obtain estimators of the marginal effect using covariate adjustment for binary outcomes. We show that covariate adjustment in randomized trials using logistic regression models can be mapped, by averaging over the covariate(s), to a fully robust and efficient estimator of the marginal effect, which equals a targeted maximum likelihood estimator. This tMLE is obtained by simply adding a clever covariate to a fixed initial regression. We present simulation studies demonstrating that this tMLE increases efficiency and power over the unadjusted method, particularly for smaller sample sizes, even when the regression model is mis-specified.
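The "clever covariate" construction can be written out in a few lines. The sketch below is a minimal illustration on an assumed simulation (known randomization probability 0.5, one covariate), not the paper's full study: fit a fixed initial logistic regression, estimate a single fluctuation coefficient for the clever covariate with the initial fit as offset, then average the fluctuated predictions over the covariates to obtain the marginal effect.

```python
import numpy as np

expit = lambda x: 1 / (1 + np.exp(-x))
logit = lambda q: np.log(q / (1 - q))

def fit_logistic(X, y, offset=0.0, iters=25):
    """Newton-Raphson logistic regression with an optional offset."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        pr = expit(offset + X @ beta)
        beta += np.linalg.solve(X.T @ (X * (pr * (1 - pr))[:, None]),
                                X.T @ (y - pr))
    return beta

rng = np.random.default_rng(1)
n = 5000
W = rng.normal(size=n)
A = rng.binomial(1, 0.5, size=n)       # randomized: g(1|W) = g(0|W) = 0.5
Y = rng.binomial(1, expit(-0.5 + A + W))

# fixed initial logistic regression of Y on treatment and covariate
X = np.column_stack([np.ones(n), A, W])
beta = fit_logistic(X, Y)
Q1 = expit(beta[0] + beta[1] + beta[2] * W)
Q0 = expit(beta[0] + beta[2] * W)
Q = np.where(A == 1, Q1, Q0)

# add the clever covariate H = A/g1 - (1-A)/g0 and fit only its coefficient,
# holding the initial regression fixed as an offset
g1 = g0 = 0.5
H = A / g1 - (1 - A) / g0
eps = fit_logistic(H[:, None], Y, offset=logit(Q))[0]

# average the fluctuated predictions over the covariates -> marginal effect
Q1s = expit(logit(Q1) + eps / g1)
Q0s = expit(logit(Q0) - eps / g0)
psi = np.mean(Q1s - Q0s)               # tMLE of the marginal risk difference
```

Averaging the adjusted predictions over the empirical covariate distribution is the integration step the abstract identifies as missing from naive logistic-regression adjustment.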
clinical trials; efficiency; covariate adjustment; variable selection
The proportional odds model may serve as a useful alternative to the Cox proportional hazards model for studying the association between covariates and survival functions in medical studies. In this article, we study an extended proportional odds model that incorporates so-called “external” time-varying covariates. In the extended model, regression parameters have a direct interpretation as comparisons of survival functions, without specifying the baseline survival odds function. Semiparametric and maximum likelihood estimation procedures are proposed for estimating the extended model. Our methods are demonstrated by Monte Carlo simulations and applied to a landmark randomized clinical trial of short-course nevirapine (NVP) for mother-to-child transmission (MTCT) of human immunodeficiency virus type 1 (HIV-1). An additional application includes analysis of the well-known Veterans Administration (VA) Lung Cancer Trial.
Counting process; Estimating function; HIV/AIDS; Maximum likelihood estimation; Semiparametric model; Time-varying covariate
Nevirapine (NVP) and efavirenz (EFV) have generally comparable clinical and virologic efficacy. However, data comparing NVP durability to EFV are imprecise. We analyzed cohort data to compare the durability of NVP versus EFV among patients initiating ART in Mbabane, Swaziland. The primary outcome was poor regimen durability, defined as any modification of NVP or EFV in the ART regimen. Multivariate Cox proportional hazards models were employed to estimate the risk of poor regimen durability (all-cause) for the two regimens, and separately to estimate the risk of drug-related toxicity. We analyzed records for 769 patients initiating ART in Mbabane, Swaziland from March 2006 to December 2007. Thirty patients (3.9%) changed their NVP- or EFV-based regimen during follow-up. Cumulative incidence of poor regimen durability was 5.3% and 2.7% for NVP and EFV, respectively. Cumulative incidence of drug-related toxicity was 1.9% and 2.7% for NVP and EFV, respectively. The burden of TB was high, and 14 (46.7%) modifications were due to patients substituting NVP after beginning TB treatment. Though the estimates were imprecise, use of NVP-based regimens appeared to be associated with a higher risk of modification compared to EFV-based regimens (HR 2.03, 95% CI 0.58-7.05), while NVP-based regimens had a small advantage over EFV-based regimens with regard to toxicity-related modifications (HR 0.87, 95% CI 0.26-2.90). Given the high burden of TB and the significant proportion of patients changing their ART regimen after starting TB treatment, use of EFV as the preferred NNRTI over NVP in highly TB-endemic settings may result in improved first-line regimen tolerance. Further studies comparing the cost-effectiveness of delivering these two NNRTIs in light of their different limitations are required.
Tolerability; Toxicity; Efavirenz; Nevirapine; Antiretroviral therapy; Resource limited setting; Swaziland
In many semiparametric models that are parameterized by two types of parameters, a Euclidean parameter of interest and an infinite-dimensional nuisance parameter, the two parameters are bundled together; that is, the nuisance parameter is an unknown function that contains the parameter of interest as part of its argument. For example, in a linear regression model for censored survival data, the unspecified error distribution function involves the regression coefficients. Motivated by developing an efficient estimating method for the regression parameters, we propose a general sieve M-theorem for bundled parameters and apply the theorem to derive the asymptotic theory for sieve maximum likelihood estimation in the linear regression model for censored survival data. The numerical implementation of the proposed estimating method can be achieved through conventional gradient-based search algorithms such as the Newton-Raphson algorithm. We show that the proposed estimator is consistent and asymptotically normal and achieves the semiparametric efficiency bound. Simulation studies demonstrate that the proposed method performs well in practical settings and yields more efficient estimates than existing estimating-equation-based methods. Illustration with a real data example is also provided.
Accelerated failure time model; B-spline; bundled parameters; efficient score function; semiparametric efficiency; sieve maximum likelihood estimation
National initiatives offering NNRTI-based combination antiretroviral therapy (cART) have expanded in sub-Saharan Africa (SSA). The Tshepo study is the first clinical trial evaluating the long-term efficacy and tolerability of EFV- vs. NVP-based cART among adults in Botswana.
A 3-year randomized study (n = 650) used a 3×2×2 factorial design to compare efficacy and tolerability among: A: ZDV/3TC vs. ZDV/ddI vs. d4T/3TC; B: EFV vs. NVP; and C: Com-DOT vs. standard adherence strategies. This manuscript focuses on comparison B.
There was no significant difference by assigned NNRTI in time to virologic failure with resistance (log-rank p = 0.14); NVP vs. EFV risk ratio (RR) = 1.54 [0.86-2.70]. Rates of virologic failure with resistance at 3 years were 9.6% [6.8-13.5] in NVP-treated vs. 6.6% [4.2-10.0] in EFV-treated patients. Women receiving NVP-based cART trended towards higher virologic failure rates than EFV-treated women (Holm-corrected log-rank p = 0.072; NVP vs. EFV RR = 2.22 [0.94-5.00]). 139 patients experienced 176 treatment-modifying toxicities, with a shorter time to event in NVP-treated vs. EFV-treated patients (RR = 1.85 [1.20-2.86], log-rank p = 0.0002).
Tshepo-treated patients had excellent overall immunologic and virologic outcomes, and no significant differences were observed by randomized NNRTI comparison. NVP-treated women trended towards higher virologic failure with resistance compared to EFV-treated women. NVP-treated adults had higher treatment-modifying toxicity rates than those receiving EFV. NVP-based cART can continue to be offered to women in SSA, provided routine safety monitoring chemistries are performed and the potential risk of EFV-related teratogenicity is considered.
HIV/AIDS; HAART; non-nucleoside reverse transcriptase inhibitors (NNRTIs); nevirapine versus efavirenz; sub-Saharan Africa; randomized clinical trial
During the stavudine phase-out in developing countries, tenofovir is used to substitute for stavudine. However, it is not known whether the frequency of renal injury differs between tenofovir/lamivudine/efavirenz and tenofovir/lamivudine/nevirapine.
This prospective study was conducted among HIV-infected patients whose NRTIs were switched from stavudine/lamivudine to tenofovir/lamivudine within an efavirenz-based (EFV group) or nevirapine-based (NVP group) regimen after two years in an ongoing randomized trial. All patients were assessed for serum phosphorus, uric acid, creatinine, estimated glomerular filtration rate (eGFR), and urinalysis at the time of switching and at 12 and 24 weeks.
Of 62 patients, 28 were in the EFV group and 34 in the NVP group. Baseline characteristics and eGFR did not differ between the two groups. At 12 weeks, mean ± SD phosphorus was 3.16 ± 0.53 mg/dL in the EFV group vs. 2.81 ± 0.42 mg/dL in the NVP group (P = 0.005), and the proportion of patients with proteinuria was 15% vs. 38% (P = 0.050). At 24 weeks, mean ± SD phosphorus and median (IQR) eGFR in the corresponding groups were 3.26 ± 0.78 vs. 2.84 ± 0.47 mg/dL (P = 0.011) and 110 (99-121) vs. 98 (83-112) mL/min (P = 0.008). In the NVP group, phosphorus (P = 0.007) and eGFR (P = 0.034) declined between the time of switching and week 12. By multivariate analysis, receiving nevirapine, older age, and low baseline serum phosphorus were associated with hypophosphatemia at 24 weeks (P < 0.05). Receiving nevirapine and low baseline eGFR were associated with lower eGFR at 24 weeks (P < 0.05).
The frequency of tenofovir-associated renal impairment was higher in patients receiving tenofovir/lamivudine/nevirapine than in those receiving tenofovir/lamivudine/efavirenz. Further studies regarding the pathophysiology are warranted.
Although tenofovir (TDF) is a common component of antiretroviral therapy (ART), recent evidence suggests inferior outcomes when it is combined with nevirapine (NVP).
We compared outcomes among patients initiating TDF+emtricitabine or lamivudine (XTC)+NVP, TDF+XTC+efavirenz (EFV), zidovudine (ZDV)+lamivudine (3TC)+NVP, and ZDV+3TC+EFV. We categorized drug exposure by initial ART dispensation, by a time-varying analysis that accounted for drug substitutions, and by predominant exposure (>75% of drug dispensations) during an initial window period. Risks of death and program failure were estimated using Cox proportional hazards models. All regimens were compared to ZDV+3TC+NVP.
Between July 2007 and November 2010, 18,866 treatment-naïve adults initiated ART: 18.2% on ZDV+3TC+NVP, 1.8% on ZDV+3TC+EFV, 36.2% on TDF+XTC+NVP, and 43.8% on TDF+XTC+EFV. When exposure was categorized by initial prescription, patients on TDF+XTC+NVP had higher post-90-day mortality (adjusted hazard ratio [AHR]: 1.45; 95%CI:1.03–2.06). TDF+XTC+NVP was also associated with an elevated risk of mortality when exposure was categorized as time-varying (AHR:1.51; 95%CI:1.18–1.95) or by predominant exposure over the first 90 days (AHR:1.91; 95%CI:1.09–3.34). However, these findings were not consistently observed across sensitivity analyses or when program failure was used as a secondary outcome.
TDF+XTC+NVP was associated with higher mortality when compared to ZDV+3TC+NVP, but not consistently across sensitivity analyses. These findings may be explained in part by inherent limitations to our retrospective approach, including residual confounding. Further research is urgently needed to compare the effectiveness of ART regimens in use in resource-constrained settings.
tenofovir; zidovudine; nevirapine; antiretroviral therapy; Africa
We consider a class of semiparametric normal transformation models for right censored bivariate failure times. Nonparametric hazard rate models are transformed to a standard normal model, and a joint normal distribution is assumed for the bivariate vector of transformed variates. A semiparametric maximum likelihood estimation procedure is developed for estimating the marginal survival distribution and the pairwise correlation parameters. This produces an efficient estimator of the correlation parameter of the semiparametric normal transformation model, which characterizes the dependence of bivariate survival outcomes. In addition, a simple positive-mass-redistribution algorithm can be used to implement the estimation procedures. Since the likelihood function involves infinite-dimensional parameters, empirical process theory is utilized to study the asymptotic properties of the proposed estimators, which are shown to be consistent, asymptotically normal, and semiparametric efficient. A simple estimator for the variance of the estimates is also derived. The finite sample performance is evaluated via extensive simulations.
Asymptotic normality; Bivariate failure time; Consistency; Semiparametric efficiency; Semiparametric maximum likelihood estimate; Semiparametric normal transformation
We define a new measure of variable importance of an exposure on a continuous outcome, accounting for potential confounders. The exposure features a reference level x0 with positive mass and a continuum of other levels. For the purpose of estimating it, we fully develop the semi-parametric estimation methodology called targeted minimum loss estimation (TMLE) [23, 22]. We cover the whole spectrum of its theoretical study (convergence of the iterative procedure at the core of the TMLE methodology; consistency and asymptotic normality of the estimator), its practical implementation, a simulation study, and an application to the genomic example that originally motivated this article. In the latter, the exposure X and response Y are, respectively, the DNA copy number and expression level of a given gene in a cancer cell. Here, the reference level is x0 = 2, the expected DNA copy number in a normal cell. The confounder is a measure of the methylation of the gene. The fact that there is no clear biological indication that X and Y can be interpreted as an exposure and a response, respectively, is not problematic.
Variable importance measure; non-parametric estimation; targeted minimum loss estimation; robustness; asymptotics
Meta-analysis typically involves combining the estimates from independent studies in order to estimate a parameter of interest across a population of studies. However, outliers often occur even under the random effects model. The presence of such outliers could substantially alter the conclusions in a meta-analysis. This paper proposes a methodology for identifying and, if desired, downweighting studies that do not appear representative of the population they are thought to represent under the random effects model.
An outlier is taken as an observation (study result) with an inflated random effect variance. We used the likelihood ratio test statistic as an objective measure for determining whether observations have inflated variance and are therefore considered outliers. A parametric bootstrap procedure was used to obtain the sampling distribution of the likelihood ratio test statistics and to account for multiple testing. Our methods were applied to three illustrative and contrasting meta-analytic data sets.
For the three meta-analytic data sets our methods gave robust inferences when the identified outliers were downweighted.
The proposed methodology provides a means to identify and, if desired, downweight outliers in meta-analysis. It does not, however, eliminate them from the analysis, and we consider the proposed approach preferable to simply removing any or all apparently outlying results. We do not propose that our methods replace or diminish the standard random-effects methodology that has proved so useful; rather, they are helpful when used in conjunction with the random-effects model.
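The procedure described above can be sketched numerically. This is a simplified illustration, not the paper's implementation: the null random-effects model is fit by profile maximum likelihood over a grid of between-study variances, each study's variance-inflation LRT is computed holding the null estimates fixed (a shortcut for the full profile likelihood), and a parametric bootstrap of the maximum LRT accounts for multiple testing.

```python
import numpy as np

def fit_null(y, v, grid):
    """Profile ML fit of the random-effects model y_i ~ N(mu, v_i + tau2)."""
    w = 1.0 / (v[:, None] + grid[None, :])          # k x G inverse variances
    mu = (y @ w) / w.sum(axis=0)                    # weighted mean per tau2
    ll = -0.5 * (np.log(1.0 / w).sum(axis=0)
                 + ((y[:, None] - mu[None, :]) ** 2 * w).sum(axis=0))
    j = int(np.argmax(ll))
    return mu[j], grid[j]

def lrt_stats(y, v, mu, tau2):
    """Per-study LRT for an inflated-variance (outlier) shift, holding mu and
    tau2 at their null values -- a simplification of the full profile."""
    z2 = (y - mu) ** 2 / (v + tau2)
    return np.where(z2 > 1, z2 - np.log(np.maximum(z2, 1e-12)) - 1, 0.0)

rng = np.random.default_rng(4)
k = 20
v = rng.uniform(0.05, 0.15, size=k)                 # within-study variances
y = rng.normal(0.0, np.sqrt(v + 0.05))              # true mu = 0, tau2 = 0.05
y[-1] += 6.0                                        # plant one gross outlier

grid = np.linspace(0.0, 3.0 * np.var(y), 400)
mu, tau2 = fit_null(y, v, grid)
lrt = lrt_stats(y, v, mu, tau2)
obs_max = lrt.max()

# parametric bootstrap of the maximum LRT to account for multiple testing
B = 200
boot_max = np.empty(B)
for bi in range(B):
    yb = rng.normal(mu, np.sqrt(v + tau2))          # simulate under fitted null
    mub, tb = fit_null(yb, v, grid)
    boot_max[bi] = lrt_stats(yb, v, mub, tb).max()
pval = np.mean(boot_max >= obs_max)                 # small -> genuine outlier
```

Once a study is flagged, the corresponding variance-inflation estimate can be retained in the model, which downweights the study rather than removing it.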
Purpose of the study
Efavirenz (EFV) is still debated for its high rate of interruption due to adverse events, in particular central nervous system side effects (CNS-SE). The aim of the study was to determine whether better drug formulations, up to a single tablet regimen (STR) including EFV plus an NRTI backbone (tenofovir-emtricitabine), reduced the risk of interruption.
From the databases of two reference centers, patients starting any cART regimen including EFV + 2 NRTIs, or switching to EFV + 2 NRTIs for simplification after virological suppression, were selected. The probability of interruption due to virological failure, side effects, CNS-SE, and any cause was assessed with survival analysis and Cox proportional hazards models.
Summary of results
Overall, 533 patients starting an EFV-containing regimen from May 1998 to March 2012 were included (51.2% naïve, 48.8% switched). Patient characteristics: males 70.7%, median age 39 years, injecting drug use (IDU) 11.2%, median nadir CD4 194 cells/mm³, median CD4 at EFV start 305 cells/mm³; 38.7% started a BID regimen, 43.9% an OD regimen, and 17.4% an STR. At survival analysis, the overall proportion of EFV interruption was 19.1% at 1 year and 33.0% at 3 years; interruptions for virological failure were 2.8% and 7.4%, and for toxicity 10.2% and 15.9%, respectively. CNS-SE accounted for about half of the interruptions for toxicity (5.7% and 8.0% at 1 and 3 years, respectively). Naïve patients had a higher risk of interruption than switched patients: 37.7% vs. 28.0% at 3 years (p=0.06). While no significant difference was observed comparing OD vs. BID regimens, starting with an STR was associated with a significantly lower proportion of overall interruption at 3 years (17.1% vs. 36.6%, p<0.01). No virological failure was observed with STR up to 3 years (0.0% vs. 8.9%, p=0.05); no difference in interruption by overall toxicity, and a higher, though non-significant, frequency of interruption by CNS-SE (12.8% vs. 6.8%), were also observed. STR also accounted for a lower proportion of interruption by patient wish, including low adherence (1.5% vs. 12.3%, p=0.01). In the adjusted Cox model, STR (HR: 0.44; 95% CI: 0.26–0.77) and male gender (HR: 0.71; 95% CI: 0.53–0.97) were associated with lower risk of EFV interruption, and IDU with higher risk (HR: 1.64; 95% CI: 1.11–2.42).
In our experience, EFV co-formulated in an STR was associated with lower virological failure and higher adherence, despite persistent CNS toxicity, thus reducing the risk of treatment interruption.
Efavirenz (EFV) administration remains controversial because of its high rates of interruption, mainly related to central nervous system side effects (CNS-SE). The aim of the study was to determine whether a single tablet regimen (STR), as compared to twice-daily (BID) or once-daily (OD) EFV formulations with ≥2 pills a day, reduced the risk of interruption.
Patients starting any cART regimen including EFV + 2 NRTIs, or switching to EFV + 2 NRTIs for simplification after virological suppression, were retrospectively selected. Incidence, probability, and prognostic factors of interruption by different causes were assessed by survival analysis and Cox regression models.
Overall, 553 patients starting EFV-containing regimens were included: 38.2% started a BID regimen, 44.5% an OD regimen with ≥2 pills, and 17.4% an STR. The overall proportion of EFV interruption was 37.4% at 4 years; at the same time point, interruptions for virological failure and toxicity were 8.8% and 16.5% (8% for CNS-SE), respectively. Starting EFV co-formulated in an STR was associated with a lower proportion of overall interruption at 4 years (17.1% vs. 40.6%, p < 0.01). Only one virological failure was observed with STR up to 4 years (1.1% vs. 10.3% in non-STR, p = 0.051). STR also accounted for a lower proportion of interruption by patient decision (1.5% vs. 11.8%, p = 0.01). No differences in interruption by overall toxicity or CNS-SE were observed. In multivariable analysis, STR and male gender were associated with lower risk of EFV interruption, while higher CD4 nadir and IDU were associated with higher risk.
In our experience, starting EFV co-formulated in an STR was associated with less virological failure and higher adherence, despite a similar proportion of CNS toxicity, thus reducing the risk of treatment interruption.
STR; Discontinuation; Combination antiretroviral therapy; Toxicity; Adherence
There is conflicting evidence and practice regarding the use of the non-nucleoside reverse transcriptase inhibitors (NNRTI) efavirenz (EFV) and nevirapine (NVP) in first-line antiretroviral therapy (ART).
We systematically reviewed virological outcomes in HIV-1 infected, treatment-naive patients on regimens containing EFV versus NVP from randomised trials and observational cohort studies. Data sources included PubMed, Embase, the Cochrane Central Register of Controlled Trials and conference proceedings of the International AIDS Society and the Conference on Retroviruses and Opportunistic Infections, from 1996 to May 2013. Relative risks (RR) and 95% confidence intervals were synthesized using random-effects meta-analysis. Heterogeneity was assessed using the I2 statistic, and subgroup analyses were performed to assess the potential influence of study design, duration of follow-up, location, and tuberculosis treatment. Sensitivity analyses explored the potential influence of different dosages of NVP and different viral load thresholds.
Of 5011 citations retrieved, 38 reports of studies comprising 114 391 patients were included for review. EFV was significantly less likely than NVP to lead to virologic failure in both trials (RR 0.85 [0.73–0.99] I2 = 0%) and observational studies (RR 0.65 [0.59–0.71] I2 = 54%). EFV was also more likely than NVP to achieve virologic success, though the difference was only marginally significant, in both randomised controlled trials (RR 1.04 [1.00–1.08] I2 = 0%) and observational studies (RR 1.06 [1.00–1.12] I2 = 68%).
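The random-effects pooling behind these RRs can be sketched with the DerSimonian-Laird estimator: pool study log-RRs with inverse-variance weights inflated by an estimated between-study variance, and report I² from Cochran's Q. The RRs and standard errors below are invented for illustration, not the review's data:

```python
import math

# DerSimonian-Laird random-effects meta-analysis sketch (toy inputs).
def dersimonian_laird(log_rr, se):
    w = [1 / s**2 for s in se]                            # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_rr)) / sum(w)
    q = sum(wi * (y - fixed)**2 for wi, y in zip(w, log_rr))  # Cochran's Q
    df = len(log_rr) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_star = [1 / (s**2 + tau2) for s in se]              # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, log_rr)) / sum(w_star)
    se_pooled = math.sqrt(1 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0   # I^2 heterogeneity (%)
    return math.exp(pooled), se_pooled, i2

rr = [0.85, 0.70, 0.60, 0.90]      # hypothetical study-level relative risks
se = [0.10, 0.15, 0.12, 0.20]      # hypothetical SEs of the log-RRs
pooled_rr, se_p, i2 = dersimonian_laird([math.log(r) for r in rr], se)
print(f"pooled RR = {pooled_rr:.2f}, I^2 = {i2:.0f}%")
```

A 95% CI follows as exp(log(pooled RR) ± 1.96 · se_p).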
EFV-based first-line ART is significantly less likely to lead to virologic failure than NVP-based ART. This finding supports the use of EFV as the preferred NNRTI in first-line treatment regimens for HIV, particularly in resource-limited settings.
The full likelihood approach in statistical analysis is regarded as the most efficient means of estimation and inference. For complex length-biased failure time data, computational algorithms and theoretical properties are not readily available, especially when a likelihood function involves infinite-dimensional parameters. Relying on the invariance property of length-biased failure time data under the semiparametric density ratio model, we present two likelihood approaches for the estimation and assessment of the difference between two survival distributions. The most efficient maximum likelihood estimators are obtained by the EM algorithm and profile likelihood. We also provide a simple numerical method for estimation and inference based on conditional likelihood, which can be generalized to k-arm settings. Unlike conventional survival data, the mean of the population failure times can be consistently estimated from right-censored length-biased data under mild regularity conditions. To check the semiparametric density ratio model assumption, we use a test statistic based on the area between two survival distributions. Simulation studies confirm that the full likelihood estimators are more efficient than the conditional likelihood estimators. We analyse an epidemiological study to illustrate the proposed methods.
Conditional likelihood; Density ratio model; EM algorithm; Length-biased sampling; Maximum likelihood approach
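A small simulation illustrates the length-biased sampling setting the abstract addresses, in the simpler uncensored special case: if X* has the length-biased density x·f(x)/μ, then E[1/X*] = 1/μ, so the harmonic mean n / Σ(1/xᵢ) consistently estimates the population mean while the naive sample mean is biased upward. The distributions below are chosen for convenience (length-biased sampling from Gamma(3, θ) yields Gamma(4, θ) draws), not taken from the paper:

```python
import random

# Harmonic-mean sketch for uncensored length-biased data (toy simulation).
random.seed(0)
theta = 1.0
mu = 3 * theta                       # true mean of the Gamma(3, theta) population
# Length-biased draws from Gamma(3, theta) follow a Gamma(4, theta) law.
sample = [random.gammavariate(4.0, theta) for _ in range(100_000)]

naive = sum(sample) / len(sample)                     # biased upward (≈ 4)
harmonic = len(sample) / sum(1 / x for x in sample)   # consistent for mu = 3
print(f"naive mean = {naive:.2f}, harmonic estimate = {harmonic:.2f}")
```

The paper's contribution is the harder censored, semiparametric two-sample version of this problem; the sketch only shows why the naive mean fails under length bias.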
Researchers of uncommon diseases are often interested in assessing potential risk factors. Given the low incidence of disease, these studies are frequently case-control in design. Such a design allows a sufficient number of cases to be obtained without extensive sampling and can increase efficiency; however, these case-control samples are then biased, since the proportion of cases in the sample differs from that in the population of interest. Methods for analyzing case-control studies have focused on logistic regression models, which provide conditional, not causal, estimates of the odds ratio. This article demonstrates the use of the prevalence probability and case-control weighted targeted maximum likelihood estimation (MLE), as described by van der Laan (2008), to obtain causal estimates of the parameters of interest (risk difference, relative risk, and odds ratio). It is meant to be used as a guide for researchers, with step-by-step directions for implementing this methodology. We also present simulation studies that show the improved efficiency of the case-control weighted targeted MLE compared to other techniques.
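The case-control weighting idea can be sketched in isolation (without the targeting step of TMLE): give cases weight q0, the known prevalence, and controls weight (1 − q0)/J, where J is the number of controls per case; the weighted sample then approximates the population distribution. Everything below is simulated for illustration:

```python
import random

# Case-control weighting sketch: cases weighted by the prevalence q0,
# controls by (1 - q0)/J. Simulated data, not a real study.
random.seed(1)

def simulate_population(n):
    pop = []
    for _ in range(n):
        a = random.random() < 0.5                 # binary exposure
        p = 0.06 if a else 0.02                   # rare outcome; true RD = 0.04
        pop.append((int(a), int(random.random() < p)))
    return pop

pop = simulate_population(200_000)
q0 = sum(y for _, y in pop) / len(pop)            # prevalence (assumed known)

cases = [(a, y) for a, y in pop if y == 1]
controls = random.sample([(a, y) for a, y in pop if y == 0], len(cases))
J = len(controls) / len(cases)                    # one control per case here
sample = ([(a, y, q0) for a, y in cases]
          + [(a, y, (1 - q0) / J) for a, y in controls])

def weighted_risk(sample, a):
    num = sum(w for aa, y, w in sample if aa == a and y == 1)
    den = sum(w for aa, y, w in sample if aa == a)
    return num / den

rd = weighted_risk(sample, 1) - weighted_risk(sample, 0)
print(f"weighted risk difference = {rd:.3f}")     # true value is 0.04
```

The article's method additionally applies these weights inside the TMLE update step, which this sketch omits.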
It is of interest to estimate the distribution of usual nutrient intake for a population from repeat 24-h dietary recall assessments. A mixed effects model and quantile estimation procedure, developed at the National Cancer Institute (NCI), may be used for this purpose. The model incorporates a Box–Cox parameter and covariates to estimate usual daily intake of nutrients; model parameters are estimated via quasi-Newton optimization of a likelihood approximated by adaptive Gaussian quadrature. The parameter estimates are used in a Monte Carlo approach to generate empirical quantiles; standard errors are estimated by bootstrap. The NCI method is illustrated and compared with current estimation methods, including the individual mean and the semi-parametric method developed at Iowa State University (ISU), using data from a random sample and computer simulations. Both the NCI and ISU methods for nutrients are superior to the distribution of individual means. For simple (no covariate) models, quantile estimates are similar between the NCI and ISU methods. The bootstrap approach used by the NCI method to estimate standard errors of quantiles appears preferable to Taylor linearization. One major advantage of the NCI method is its ability to provide estimates for subpopulations through the incorporation of covariates into the model. The NCI method may be used for estimating the distribution of usual nutrient intake for populations and subpopulations as part of a unified framework of estimation of usual intake of dietary constituents.
statistical distributions; diet surveys; nutrition assessment; mixed-effects model; nutrients; percentiles
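The Monte Carlo quantile step of an NCI-style analysis can be sketched as: simulate usual intake on the Box–Cox scale from a normal between-person distribution, back-transform each draw, and read percentiles off the simulated values. The parameter values below are placeholders, not fitted estimates:

```python
import random

# Monte Carlo quantile sketch for a usual-intake model (illustrative
# transformed-scale parameters; a real analysis would fit mu, sigma, lam).
random.seed(2)

def inv_boxcox(z, lam):
    """Inverse Box-Cox transform; assumes lam > 0 and lam*z + 1 > 0."""
    return (lam * z + 1) ** (1 / lam)

mu, sigma, lam = 3.0, 0.5, 0.4       # assumed mean, SD, Box-Cox parameter
draws = sorted(inv_boxcox(random.gauss(mu, sigma), lam)
               for _ in range(50_000))

def quantile(sorted_x, p):
    return sorted_x[int(p * (len(sorted_x) - 1))]

q05, q50, q95 = (quantile(draws, p) for p in (0.05, 0.50, 0.95))
print(f"P05={q05:.2f}  P50={q50:.2f}  P95={q95:.2f}")
```

In the NCI method these percentiles would be recomputed on bootstrap resamples of the original recalls to obtain standard errors.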
We propose a new cure model for survival data with a surviving or cure fraction. The new model is a mixture cure model in which the covariate effects on the proportion of cure and on the distribution of the failure time of uncured patients are separately modeled. Unlike existing mixture cure models, the new model allows covariate effects on the failure time distribution of uncured patients to be negligible at time zero and to increase as time goes by. Such a model is particularly useful for some cancer treatments whose effect increases gradually from zero, a situation the existing models usually cannot handle properly. We develop a rank-based semiparametric estimation method to obtain the maximum likelihood estimates of the parameters in the model. We compare it with existing models and methods via a simulation study, and apply the model to a breast cancer data set. The numerical studies show that the new model provides a useful addition to the cure model literature.
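The basic mixture cure structure underlying such models is S(t | x) = π(x) + (1 − π(x)) · S_u(t | x): a logistic model for the cure probability π(x) mixed with a survival function S_u for the uncured. A minimal sketch, with illustrative coefficients and a simple proportional-hazards exponential S_u (not the paper's time-varying-effect specification):

```python
import math

# Mixture cure survival sketch: S(t|x) = pi(x) + (1 - pi(x)) * S_u(t|x).
# Coefficients are illustrative, not estimates from the paper.
def cure_prob(x, b0=-0.5, b1=1.0):
    """Logistic model for the probability of being cured."""
    return 1 / (1 + math.exp(-(b0 + b1 * x)))

def uncured_survival(t, x, base_rate=0.3, beta=0.5):
    """Exponential PH survival for the uncured subpopulation."""
    return math.exp(-base_rate * math.exp(beta * x) * t)

def mixture_survival(t, x):
    pi = cure_prob(x)
    return pi + (1 - pi) * uncured_survival(t, x)

for t in (0, 1, 5, 50):
    print(f"S({t} | x=1) = {mixture_survival(t, 1):.3f}")
```

As t grows, S(t | x) levels off at the cure fraction π(x) rather than decaying to zero, which is the defining feature of cure models; the paper's novelty is letting the covariate effect inside S_u start at zero and grow with t.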
In this paper, we propose a new pharmacokinetic model for parameter estimation in dynamic contrast-enhanced (DCE) MRI using Gaussian process inference. Our model is based on the Tofts dual-compartment model for the description of tracer kinetics, and the observed time series from DCE-MRI is treated as a Gaussian stochastic process. Parameter estimation is done through a maximum likelihood approach, and we propose a variant of the coordinate descent method to solve this likelihood maximization problem. The new model was shown to outperform a baseline method on simulated data. Parametric maps generated on prostate DCE data with the new model also provided better enhancement of tumors, lower intensity on false positives, and better boundary delineation when compared with the baseline method. New statistical parameter maps from the process model were also found to be informative, particularly when paired with the PK parameter maps.
DCE-MRI; Gaussian Stochastic Process; Pharmacokinetic Model; Bayesian Inference; Coordinate Descent Optimization
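The coordinate descent idea used for the likelihood maximization can be sketched generically: cycle through the parameters, improving one at a time, and shrink the step when no coordinate move helps. The paper's objective is the GP marginal likelihood over Tofts-model parameters; here it is replaced by a simple quadratic purely to show the optimizer pattern:

```python
# Generic coordinate-descent sketch (pattern-search variant). Illustrative
# only: the paper maximizes a GP likelihood, stubbed here by a quadratic.
def coordinate_descent(f, x0, step=1.0, n_sweeps=100, shrink=0.5, tol=1e-8):
    x = list(x0)
    for _ in range(n_sweeps):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):        # try moving coordinate i
                trial = list(x)
                trial[i] += delta
                if f(trial) < f(x) - tol:      # accept only strict improvement
                    x = trial
                    improved = True
        if not improved:
            step *= shrink                     # refine the 1-D search step
            if step < 1e-6:
                break
    return x

# Stub objective: f(x, y) = (x - 2)^2 + 3 (y + 1)^2, minimized at (2, -1).
f = lambda v: (v[0] - 2) ** 2 + 3 * (v[1] + 1) ** 2
opt = coordinate_descent(f, [0.0, 0.0])
print(opt)
```

Per-coordinate updates are attractive here because each 1-D search is cheap relative to re-deriving a full gradient of the GP likelihood, though convergence is only to a coordinate-wise minimum in general.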