1.  Statistical identifiability and the surrogate endpoint problem, with application to vaccine trials 
Biometrics  2010;66(4):1153-1161.
Summary
Given a randomized treatment Z, a clinical outcome Y, and a biomarker S measured some fixed time after Z is administered, we may be interested in addressing the surrogate endpoint problem by evaluating whether S can be used to reliably predict the effect of Z on Y. Several recent proposals for the statistical evaluation of surrogate value have been based on the framework of principal stratification. In this paper, we consider two principal stratification estimands: joint risks and marginal risks. Joint risks measure causal associations of treatment effects on S and Y, providing insight into the surrogate value of the biomarker, but are not statistically identifiable from vaccine trial data. While marginal risks do not measure causal associations of treatment effects, they nevertheless provide guidance for future research, and we describe a data collection scheme and assumptions under which the marginal risks are statistically identifiable. We show how different sets of assumptions affect the identifiability of these estimands; in particular, we depart from previous work by considering the consequences of relaxing the assumption of no individual treatment effects on Y before S is measured. Based on algebraic relationships between joint and marginal risks, we propose a sensitivity analysis approach for assessment of surrogate value, and show that in many cases the surrogate value of a biomarker may be hard to establish, even when the sample size is large.
doi:10.1111/j.1541-0420.2009.01380.x
PMCID: PMC3597127  PMID: 20105158
Estimated likelihood; Identifiability; Principal stratification; Sensitivity analysis; Surrogate endpoint; Vaccine trials
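For orientation, the two estimand families contrasted in this abstract can be sketched in generic potential-outcomes notation (a hedged paraphrase; the paper's exact symbols may differ). The marginal risks are per-arm risks within a principal stratum defined by the pair of potential biomarkers, while the joint risks involve the never co-observed pair of potential outcomes:

\[
  \mathrm{risk}_z(s_1, s_0) = \Pr\{\, Y(z) = 1 \mid S(1) = s_1,\ S(0) = s_0 \,\}, \qquad z \in \{0, 1\},
\]
\[
  \mathrm{risk}_{y_1 y_0}(s_1, s_0) = \Pr\{\, Y(1) = y_1,\ Y(0) = y_0 \mid S(1) = s_1,\ S(0) = s_0 \,\}.
\]

Because no subject ever reveals both Y(1) and Y(0), the joint risks are not statistically identifiable, whereas the marginal risks can be identified under the data collection scheme and assumptions the paper describes.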
2.  Accommodating Missingness When Assessing Surrogacy Via Principal Stratification 
Clinical trials (London, England)  2013;10(3):363-377.
Background
When an outcome of interest in a clinical trial is late-occurring or difficult to obtain, surrogate markers can extract information about the effect of the treatment on the outcome of interest. Understanding associations between the causal effect of treatment on the outcome and the causal effect of treatment on the surrogate is critical to understanding the value of a surrogate from a clinical perspective.
Purpose
Traditional regression approaches to determine the proportion of the treatment effect explained by surrogate markers suffer from several shortcomings: they can be unstable and can lie outside of the 0–1 range. Further, they do not account for the fact that surrogate measures are obtained post-randomization, and thus the surrogate-outcome relationship may be subject to unmeasured confounding. Methods to avoid these problems are of key importance.
Methods
Frangakis CE, Rubin DB. Principal stratification in causal inference. Biometrics 2002; 58:21–9 suggested assessing the causal effect of treatment within pre-randomization “principal strata” defined by the counterfactual joint distribution of the surrogate marker under the different treatment arms, with the proportion of the overall outcome causal effect attributable to subjects for whom the treatment affects the proposed surrogate as the key measure of interest. Li Y, Taylor JMG, Elliott MR. Bayesian approach to surrogacy assessment using principal stratification in clinical trials. Biometrics 2010; 66:523–31 developed this “principal surrogacy” approach for dichotomous markers and outcomes, utilizing Bayesian methods that accommodated non-identifiability in the model parameters. Because the surrogate marker is typically observed early, outcome data are often missing. Here we extend Li, Taylor, and Elliott to accommodate missing data in the observable final outcome under ignorable and non-ignorable settings. We also allow for the possibility that missingness has a counterfactual component, a feature that previous literature has not addressed.
Results
We apply the proposed methods to a trial of glaucoma control comparing surgery versus medication, where intraocular pressure (IOP) control at 12 months is a surrogate for IOP control at 96 months. We also conduct a series of simulations to consider the impacts of non-ignorability, as well as sensitivity to priors and the ability of the Deviance Information Criterion to choose the correct model when parameters are not fully identified.
Limitations
Because model parameters cannot be fully identified from data, informative priors can introduce non-trivial bias in moderate sample size settings, while more non-informative priors can yield wide credible intervals.
Conclusions
Assessing the linkage between causal effects of treatment on a surrogate marker and causal effects of a treatment on an outcome is important to understanding the value of a marker. These causal effects are not fully identifiable: hence we explore the sensitivity and identifiability aspects of these models and show that relatively weak assumptions can still yield meaningful results.
doi:10.1177/1740774513479522
PMCID: PMC4096330  PMID: 23553326
Causal Inference; Surrogate Marker; Bayesian Analysis; Identifiability; Non-response; Counterfactual
3.  Commentary on “Principal Stratification — a Goal or a Tool?” by Judea Pearl 
This commentary takes up Pearl's welcome challenge to clearly articulate the scientific value of principal stratification estimands that we and colleagues have investigated, in the area of randomized placebo-controlled preventive vaccine efficacy trials, especially trials of HIV vaccines. After briefly arguing that certain principal stratification estimands for studying vaccine effects on post-infection outcomes are of genuine scientific interest, the bulk of our commentary argues that the “causal effect predictiveness” (CEP) principal stratification estimand for evaluating immune biomarkers as surrogate endpoints is not of ultimate scientific interest, because it evaluates surrogacy restricted to the setting of a particular vaccine efficacy trial, but is nevertheless useful for guiding the selection of primary immune biomarker endpoints in Phase I/II vaccine trials and for facilitating assessment of transportability/bridging surrogacy.
doi:10.2202/1557-4679.1341
PMCID: PMC3204668  PMID: 22049267
principal stratification; causal inference; vaccine trial
4.  Causal Vaccine Effects on Binary Postinfection Outcomes 
The effects of vaccine on postinfection outcomes, such as disease, death, and secondary transmission to others, are important scientific and public health aspects of prophylactic vaccination. As a result, evaluations of many vaccine effects condition on being infected. Conditioning on an event that occurs posttreatment (in our case, infection subsequent to assignment to vaccine or control) can result in selection bias. Moreover, because the set of individuals who would become infected if vaccinated is likely not identical to the set of those who would become infected if given control, comparisons that condition on infection do not have a causal interpretation. In this article we consider identifiability and estimation of causal vaccine effects on binary postinfection outcomes. Using the principal stratification framework, we define a postinfection causal vaccine efficacy estimand in individuals who would be infected regardless of treatment assignment. The estimand is shown to be not identifiable under the standard assumptions of the stable unit treatment value, monotonicity, and independence of treatment assignment. Thus, selection models are proposed that identify the causal estimand. Closed-form maximum likelihood estimators (MLEs) are then derived under these models, including those assuming maximum possible levels of positive and negative selection bias. These results show the relations between the MLE of the causal estimand and two commonly used estimators for vaccine effects on postinfection outcomes. For example, the usual intent-to-treat estimator is shown to be an upper bound on the postinfection causal vaccine effect provided that the magnitude of protection against infection is not too large. The methods are used to evaluate postinfection vaccine effects in a clinical trial of a rotavirus vaccine candidate and in a field study of a pertussis vaccine. Our results show that pertussis vaccination has a significant causal effect in reducing disease severity.
doi:10.1198/016214505000000970
PMCID: PMC2603579  PMID: 19096723
Causal inference; Infectious disease; Maximum likelihood; Principal stratification; Sensitivity analysis
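A hedged sketch of the estimand family this abstract defines, in generic notation (the paper's exact definition may differ in detail): with S(z) the potential infection indicator and Y(z) the potential postinfection outcome under assignment z, attention restricts to the principal stratum of individuals who would be infected under either assignment, and a postinfection vaccine efficacy contrast takes the form

\[
  \mathrm{VE}^{I} = 1 - \frac{\Pr\{\, Y(1) = 1 \mid S(1) = S(0) = 1 \,\}}{\Pr\{\, Y(0) = 1 \mid S(1) = S(0) = 1 \,\}} .
\]

Within this always-infected stratum the comparison is between identically defined groups, so it carries a causal interpretation that naive conditioning on observed infection does not.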
5.  AN APPLICATION OF PRINCIPAL STRATIFICATION TO CONTROL FOR INSTITUTIONALIZATION AT FOLLOW-UP IN STUDIES OF SUBSTANCE ABUSE TREATMENT PROGRAMS 
The annals of applied statistics  2008;2(3):1034-1055.
Participants in longitudinal studies on the effects of drug treatment and criminal justice system interventions are at high risk for institutionalization (e.g., spending time in an environment where their freedom to use drugs, commit crimes, or engage in risky behavior may be circumscribed). Methods used for estimating treatment effects in the presence of institutionalization during follow-up can be highly sensitive to assumptions that are unlikely to be met in applications and thus likely to yield misleading inferences. In this paper, we consider the use of principal stratification to control for institutionalization at follow-up. Principal stratification has been suggested for similar problems where outcomes are unobservable for samples of study participants because of dropout, death, or other forms of censoring. The method identifies principal strata within which causal effects are well defined and potentially estimable. We extend the method of principal stratification to model institutionalization at follow-up and estimate the effect of residential substance abuse treatment versus outpatient services in a large scale study of adolescent substance abuse treatment programs. Additionally, we discuss practical issues in applying the principal stratification model to data. We show via simulation studies that the model can recover true effects only when the data meet strenuous demands, and that caution must be taken when implementing principal stratification as a technique to control for post-treatment confounders such as institutionalization.
doi:10.1214/08-AOAS179
PMCID: PMC2749670  PMID: 19779599
Principal Stratification; Post-Treatment Confounder; Institutionalization; Causal Inference
6.  Clarifying the Role of Principal Stratification in the Paired Availability Design 
The paired availability design for historical controls postulated four classes corresponding to the treatment (old or new) a participant would receive if arrival occurred during either of two time periods associated with different availabilities of treatment. These classes were later extended to other settings and called principal strata. Judea Pearl asks if principal stratification is a goal or a tool and lists four interpretations of principal stratification. In the case of the paired availability design, principal stratification is a tool that falls squarely into Pearl's interpretation of principal stratification as “an approximation to research questions concerning population averages.” We describe the paired availability design and the important role played by principal stratification in estimating the effect of receipt of treatment in a population using data on changes in availability of treatment. We discuss the assumptions and their plausibility. We also introduce the extrapolated estimate to make the generalizability assumption more plausible. By showing why the assumptions are plausible we show why the paired availability design, which includes principal stratification as a key component, is useful for estimating the effect of receipt of treatment in a population. Thus, for our application, we answer Pearl's challenge to clearly demonstrate the value of principal stratification.
doi:10.2202/1557-4679.1338
PMCID: PMC3114955  PMID: 21686085
principal stratification; causal inference; paired availability design
7.  Principal Stratification — Uses and Limitations 
Pearl (2011) asked for the causal inference community to clarify the role of the principal stratification framework in the analysis of causal effects. Here, I argue that the notion of principal stratification has shed light on problems of non-compliance, censoring-by-death, and the analysis of post-infection outcomes; that it may be of use in considering problems of surrogacy but further development is needed; that it is of some use in assessing “direct effects”; but that it is not the appropriate tool for assessing “mediation.” There is nothing within the principal stratification framework that corresponds to a measure of an “indirect” or “mediated” effect.
doi:10.2202/1557-4679.1329
PMCID: PMC3154088  PMID: 21841939
causal inference; mediation; non-compliance; potential outcomes; principal stratification; surrogates
8.  Estimating Causal Effects in Trials Involving Multi-Treatment Arms Subject to Non-compliance: A Bayesian framework 
Summary
Data analysis for randomized trials including multi-treatment arms is often complicated by subjects who do not comply with their treatment assignment. We discuss here methods of estimating treatment efficacy for randomized trials involving multi-treatment arms subject to non-compliance. One treatment effect of interest in the presence of non-compliance is the complier average causal effect (CACE) (Angrist et al. 1996), which is defined as the treatment effect for subjects who would comply regardless of the assigned treatment. Following the idea of principal stratification (Frangakis & Rubin 2002), we define principal compliance (Little et al. 2009) in trials with three treatment arms, extend CACE and define causal estimands of interest in this setting. In addition, we discuss structural assumptions needed for estimation of causal effects and the identifiability problem inherent in this setting from both a Bayesian and a classical statistical perspective. We propose a likelihood-based framework that models potential outcomes in this setting and a Bayes procedure for statistical inference. We compare our method with a method of moments approach proposed by Cheng & Small (2006) using a hypothetical data set, and further illustrate our approach with an application to a behavioral intervention study (Janevic et al. 2003).
doi:10.1111/j.1467-9876.2009.00709.x
PMCID: PMC3104736  PMID: 21637737
Causal Inference; Complier Average Causal Effect; Multi-arm Trials; Non-compliance; Principal Compliance; Principal Stratification
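For reference, the two-arm CACE that this paper extends has the standard form (generic notation; the three-arm extension partitions subjects by principal compliance across pairs of arms):

\[
  \mathrm{CACE} = E\big[\, Y(1) - Y(0) \mid D(1) = 1,\ D(0) = 0 \,\big],
\]

where D(z) denotes the treatment actually received under assignment z, so the conditioning event is the principal stratum of compliers.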
9.  Accounting for Population Stratification in DNA Methylation Studies 
Genetic epidemiology  2014;38(3):231-241.
DNA methylation is an important epigenetic mechanism that has been linked to complex disease and is of great interest to researchers as a potential link between genome, environment, and disease. As the scale of DNA methylation association studies approaches that of genome-wide association studies (GWAS), issues such as population stratification will need to be addressed. It is well-documented that failure to adjust for population stratification can lead to false positives in genetic association studies, but population stratification is often unaccounted for in DNA methylation studies. Here, we propose several approaches to correct for population stratification using principal components from different subsets of genome-wide methylation data. We first illustrate the potential for confounding due to population stratification by demonstrating widespread associations between DNA methylation and race in 388 individuals (365 African American and 23 Caucasian). We subsequently evaluate the performance of our principal-components approaches and other methods in adjusting for confounding due to population stratification. Our simulations show that 1) all of the methods considered are effective at removing inflation due to population stratification, and 2) maximum power can be obtained with SNP-based principal components, followed by methylation-based principal components, which out-perform both surrogate variable analysis and genomic control. Among our different approaches to computing methylation-based principal components, we find that principal components based on CpG sites chosen for their potential to proxy nearby SNPs can provide a powerful and computationally efficient approach to adjustment for population stratification in DNA methylation studies when genome-wide SNP data are unavailable.
doi:10.1002/gepi.21789
PMCID: PMC4090102  PMID: 24478250
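A minimal sketch of the adjustment this entry evaluates: compute the top principal components of a genotype (or methylation) matrix and include them as covariates in each per-site association regression. The simulated data and all names below are illustrative, not from the paper; assumes numpy.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: n subjects, p sites, one phenotype.
n, p, k = 200, 500, 5
G = rng.binomial(2, 0.3, size=(n, p)).astype(float)  # genotype matrix
y = rng.normal(size=n)                               # phenotype

# Top-k principal components of the column-centered matrix via SVD.
Gc = G - G.mean(axis=0)
U, S, Vt = np.linalg.svd(Gc, full_matrices=False)
pcs = U[:, :k] * S[:k]                               # n-by-k PC scores

def site_tstat(site, y, pcs):
    """t-statistic for one site's effect on y, adjusting for the PCs."""
    X = np.column_stack([np.ones(len(y)), site, pcs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

tstats = [site_tstat(G[:, j], y, pcs) for j in range(p)]
print("max |t| across sites:", np.abs(tstats).max())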
10.  ASSESSING SURROGATE ENDPOINTS IN VACCINE TRIALS WITH CASE-COHORT SAMPLING AND THE COX MODEL 
The annals of applied statistics  2008;2(1):386-407.
Assessing immune responses to study vaccines as surrogates of protection plays a central role in vaccine clinical trials. Motivated by three ongoing or pending HIV vaccine efficacy trials, we consider such surrogate endpoint assessment in a randomized placebo-controlled trial with case-cohort sampling of immune responses and a time to event endpoint. Based on the principal surrogate definition under the principal stratification framework proposed by Frangakis and Rubin [Biometrics 58 (2002) 21–29] and adapted by Gilbert and Hudgens (2006), we introduce estimands that measure the value of an immune response as a surrogate of protection in the context of the Cox proportional hazards model. The estimands are not identified because the immune response to vaccine is not measured in placebo recipients. We formulate the problem as a Cox model with missing covariates, and employ novel trial designs for predicting the missing immune responses and thereby identifying the estimands. The first design utilizes information from baseline predictors of the immune response, and bridges their relationship in the vaccine recipients to the placebo recipients. The second design provides a validation set for the unmeasured immune responses of uninfected placebo recipients by immunizing them with the study vaccine after trial closeout. A maximum estimated likelihood approach is proposed for estimation of the parameters. Simulated data examples are given to evaluate the proposed designs and study their properties.
doi:10.1214/07-AOAS132
PMCID: PMC2601643  PMID: 19079758
Clinical trial; discrete failure time model; missing data; potential outcomes; principal stratification; surrogate marker
11.  Estimation of dynamical model parameters taking into account undetectable marker values 
Background
Mathematical models are widely used for studying the dynamics of infectious agents such as hepatitis C virus (HCV). Most often, model parameters are estimated using standard least-squares procedures for each individual. Hierarchical models have been proposed in such applications. However, another issue is the left-censoring (undetectable values) of plasma viral load due to the lack of sensitivity of the assays used for quantification. A method is proposed to take left-censored values into account when estimating parameters of nonlinear mixed models, and its impact is demonstrated through a simulation study and an actual clinical trial of anti-HCV drugs.
Methods
The method consists of a full likelihood approach distinguishing the contributions of observed and left-censored measurements, assuming a lognormal distribution of the outcome. Parameters of the analytical solution of the system of differential equations, taking left-censoring into account, are estimated using standard software.
Results
A simulation study with only 14% of measurements being left-censored showed that model parameters were largely biased (from -55% to +133% according to the parameter), with the exception of the estimate of the initial outcome value, when left-censored viral load values are replaced by the value of the threshold. When left-censoring was taken into account, the relative bias on fixed effects was equal to or less than 2%. Then, parameters were estimated using the 100 measurements of HCV RNA available (with 12% of left-censored values) during the first 4 weeks following treatment initiation in the 17 patients included in the trial. Differences between estimates according to the method used were clinically significant, particularly on the death rate of infected cells. With the crude approach the estimate was 0.13 day⁻¹ (95% confidence interval [CI]: 0.11; 0.17) compared to 0.19 day⁻¹ (CI: 0.14; 0.26) when taking into account left-censoring. The relative differences between estimates of individual treatment efficacy according to the method used varied from 0.001% to 37%.
Conclusion
We proposed a method that gives unbiased estimates if the assumed distribution is correct (e.g. lognormal) and that is easy to use with standard software.
doi:10.1186/1471-2288-6-38
PMCID: PMC1559636  PMID: 16879756
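A minimal sketch of the full-likelihood device described in the Methods above: quantified measurements contribute the (log)normal density, left-censored ones the probability of falling below the detection threshold. A single-mean model stands in for the paper's nonlinear mixed model, and all numbers are illustrative; assumes numpy/scipy.

import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

LIMIT = 2.0                                # log10 detection threshold
obs = np.array([4.1, 3.5, 2.8, 2.2, 3.0])  # quantified log10 viral loads
n_cens = 2                                 # measurements below LIMIT

def neg_loglik(theta):
    mu, log_sd = theta
    sd = np.exp(log_sd)
    ll = norm.logpdf(obs, mu, sd).sum()        # observed: density
    ll += n_cens * norm.logcdf(LIMIT, mu, sd)  # censored: P(X < LIMIT)
    return -ll

fit = minimize(neg_loglik, x0=np.array([3.0, 0.0]))
print("mu =", round(fit.x[0], 2), "sd =", round(np.exp(fit.x[1]), 2))

Replacing the censored values by the threshold itself (the crude approach in the Results) instead maximizes a misspecified likelihood and produces the biases the simulation study reports.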
12.  Association analyses of the MAS-QTL data set using grammar, principal components and Bayesian network methodologies 
BMC Proceedings  2011;5(Suppl 3):S8.
Background
It has been shown that if genetic relationships among individuals are not taken into account in genome-wide association studies, this may lead to false positives. To address this problem, we used Genome-wide Rapid Association using Mixed Model and Regression (GRAMMAR) and principal component stratification analyses. To account for linkage disequilibrium among the significant markers, principal component loadings obtained from top markers can be included as covariates. Estimation of Bayesian networks may also be useful to investigate linkage disequilibrium among SNPs and their relation with environmental variables.
Methods
For the quantitative trait we first estimated residuals while taking polygenic effects into account. We then used a single SNP approach to detect the most significant SNPs based on the residuals, and applied principal component regression to take linkage disequilibrium among these SNPs into account. For the categorical trait we used principal component stratification methodology to account for background effects. For correction of linkage disequilibrium we used principal component logit regression. Bayesian networks were estimated to investigate relationships among SNPs.
Results
Using the Genome-wide Rapid Association using Mixed Model and Regression and principal component stratification approach we detected around 100 significant SNPs for the quantitative trait (p<0.05 with 1000 permutations) and 109 significant (p<0.0006 with local FDR correction) SNPs for the categorical trait. With additional principal component regression we reduced the list to 16 and 50 SNPs for the quantitative and categorical trait, respectively.
Conclusions
GRAMMAR could efficiently incorporate the information regarding random genetic effects. Principal component stratification should be used cautiously, with stringent multiple hypothesis testing correction, to correct for ancestral stratification in association analyses for binary traits when there are systematic genetic effects such as half-sib family structures. Bayesian networks are useful to investigate relationships among SNPs and environmental variables.
doi:10.1186/1753-6561-5-S3-S8
PMCID: PMC3103207  PMID: 21624178
13.  Partially hidden Markov model for time-varying principal stratification in HIV prevention trials 
It is frequently of interest to estimate the intervention effect that adjusts for post-randomization variables in clinical trials. In the recently completed HPTN 035 trial, there is differential condom use between the three microbicide gel arms and the No Gel control arm, so that intention to treat (ITT) analyses only assess the net treatment effect that includes the indirect treatment effect mediated through differential condom use. Various statistical methods in causal inference have been developed to adjust for post-randomization variables. We extend the principal stratification framework to time-varying behavioral variables in HIV prevention trials with a time-to-event endpoint, using a partially hidden Markov model (pHMM). We formulate the causal estimand of interest, establish assumptions that enable identifiability of the causal parameters, and develop maximum likelihood methods for estimation. Application of our model on the HPTN 035 trial reveals an interesting pattern of prevention effectiveness among different condom-use principal strata.
doi:10.1080/01621459.2011.643743
PMCID: PMC3649016  PMID: 23667279
microbicide; causal inference; posttreatment variables; direct effect
14.  Cereal Domestication and Evolution of Branching: Evidence for Soft Selection in the Tb1 Orthologue of Pearl Millet (Pennisetum glaucum [L.] R. Br.) 
PLoS ONE  2011;6(7):e22404.
Background
During the Neolithic revolution, early farmers altered plant development to domesticate crops. Similar traits were often selected independently in different wild species; yet the genetic basis of this parallel phenotypic evolution remains elusive. Plant architecture ranks among these target traits composing the domestication syndrome. We focused on the reduction of branching which occurred in several cereals, an adaptation known to rely on the major gene Teosinte-branched1 (Tb1) in maize. We investigate the role of the Tb1 orthologue (Pgtb1) in the domestication of pearl millet (Pennisetum glaucum), an African outcrossing cereal.
Methodology/Principal Findings
Gene cloning, expression profiling, QTL mapping and molecular evolution analysis were combined in a comparative approach between pearl millet and maize. Our results in pearl millet support a role for PgTb1 in domestication despite important differences in the genetic basis of branching adaptation in that species compared to maize (e.g. weaker effects of PgTb1). Genetic maps suggest this pattern to be consistent in other cereals with reduced branching (e.g. sorghum, foxtail millet). Moreover, although the adaptive sites underlying domestication were not formally identified, signatures of selection pointed to putative regulatory regions upstream of both Tb1 orthologues in maize and pearl millet. However, the signature of human selection at Tb1 is much weaker in pearl millet than in maize.
Conclusions/Significance
Our results suggest that some level of parallel evolution involved at least regions directly upstream of Tb1 for the domestication of pearl millet and maize. This was unanticipated given the multigenic basis of domestication traits and the divergence of wild progenitor species for over 30 million years prior to human selection. We also hypothesized that regular introgression of domestic pearl millet phenotypes by genes from the wild gene pool could explain why the selective sweep in pearl millet is softer than in maize.
doi:10.1371/journal.pone.0022404
PMCID: PMC3142148  PMID: 21799845
15.  Mediation Analysis with Principal Stratification 
Statistics in medicine  2009;28(7):1108-1130.
In assessing the mechanism of treatment efficacy in randomized clinical trials, investigators often perform mediation analyses by analyzing if the significant intent-to-treat treatment effect on outcome occurs through or around a third intermediate or mediating variable: indirect and direct effects, respectively. Standard mediation analyses assume sequential ignorability, i.e., conditional on covariates the intermediate or mediating factor is randomly assigned, as is the treatment in a randomized clinical trial. This research focuses on the application of the principal stratification approach for estimating the direct effect of a randomized treatment but without the standard sequential ignorability assumption. This approach is used to estimate the direct effect of treatment as a difference between expectations of potential outcomes within latent sub-groups of participants for whom the intermediate variable behavior would be constant, regardless of the randomized treatment assignment. Using a Bayesian estimation procedure, we also assess the sensitivity of results based on the principal stratification approach to heterogeneity of the variances among these principal strata. We assess this approach with simulations and apply it to two psychiatric examples. Both examples and the simulations indicated robustness of our findings to the homogeneous variance assumption. However, simulations showed that the magnitude of treatment effects derived under the principal stratification approach were sensitive to model mis-specification.
doi:10.1002/sim.3533
PMCID: PMC2669107  PMID: 19184975
Principal stratification; mediating variables; direct effects; principal strata probabilities; heterogeneous variances
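In generic notation (a sketch, not the paper's exact symbols), the direct effect targeted above conditions on principal strata in which the intermediate variable M would be unaffected by treatment:

\[
  \mathrm{DE}(m) = E\big[\, Y(1) - Y(0) \mid M(1) = M(0) = m \,\big].
\]

Within such a stratum the treatment effect cannot operate through M, which is what licenses the direct-effect interpretation without assuming sequential ignorability.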
16.  Evaluating Candidate Principal Surrogate Endpoints 
Biometrics  2008;64(4):1146-1154.
Summary
Frangakis and Rubin (2002, Biometrics 58, 21–29) proposed a new definition of a surrogate endpoint (a “principal” surrogate) based on causal effects. We introduce an estimand for evaluating a principal surrogate, the causal effect predictiveness (CEP) surface, which quantifies how well causal treatment effects on the biomarker predict causal treatment effects on the clinical endpoint. Although the CEP surface is not identifiable due to missing potential outcomes, it can be identified by incorporating a baseline covariate(s) that predicts the biomarker. Given case–cohort sampling of such a baseline predictor and the biomarker in a large blinded randomized clinical trial, we develop an estimated likelihood method for estimating the CEP surface. This estimation assesses the “surrogate value” of the biomarker for reliably predicting clinical treatment effects for the same or similar setting as the trial. A CEP surface plot provides a way to compare the surrogate value of multiple biomarkers. The approach is illustrated by the problem of assessing an immune response to a vaccine as a surrogate endpoint for infection.
doi:10.1111/j.1541-0420.2008.01014.x
PMCID: PMC2726718  PMID: 18363776
Case cohort; Causal inference; Clinical trial; HIV vaccine; Postrandomization selection bias; Structural model; Prentice criteria; Principal stratification
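A hedged sketch of the estimand's shape in generic notation (the paper may fix a particular contrast): the CEP surface evaluates a contrast h of the per-arm clinical risks within principal strata defined by the pair of potential biomarkers,

\[
  \mathrm{CEP}(s_1, s_0) = h\big( \Pr\{Y(1)=1 \mid S(1)=s_1, S(0)=s_0\},\ \Pr\{Y(0)=1 \mid S(1)=s_1, S(0)=s_0\} \big),
\]

with h, for example, a risk difference or one minus a risk ratio. A biomarker has surrogate value to the extent that large causal effects on the biomarker correspond to large values of CEP.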
17.  Estimation of colorectal adenoma recurrence with dependent censoring 
Background
Due to early colonoscopy for some participants, interval-censored observations can be introduced into the data of a colorectal polyp prevention trial. The censoring could be dependent on the risk of recurrence if the reasons for having early colonoscopy are associated with recurrence. This can complicate estimation of the recurrence rate.
Methods
We propose to use midpoint imputation to convert interval-censored data problems to right-censored data problems. To adjust for potential dependent censoring, we use information from auxiliary variables to define risk groups and apply weighted Kaplan-Meier estimation to the midpoint-imputed data. The risk groups are defined using two risk scores derived from two working proportional hazards models with the auxiliary variables as the covariates. One is for the recurrence time and the other is for the censoring time. The method described here is explored by simulation and illustrated with an example from a colorectal polyp prevention trial.
Results
We first show that midpoint imputation under an assumption of independent censoring will produce an unbiased estimate of the recurrence rate at the end of the trial, which is often the main interest of a colorectal polyp prevention trial. We then show in simulations that, compared to the conventional methods, the weighted Kaplan-Meier method using the information from auxiliary variables with the midpoint-imputed data can improve efficiency under independent censoring and reduce bias under dependent censoring when estimating the recurrence rate at the end of the trial.
Conclusion
The research in this paper uses midpoint imputation to handle interval-censored observations and then uses the information from auxiliary variables to adjust for dependent censoring by incorporating them into the weighted Kaplan-Meier estimation. This approach can handle a situation with multiple auxiliary variables by deriving two risk scores from two working PH models. Although the idea of this approach might appear simple, the results do show that the weighted Kaplan-Meier approach can gain efficiency and reduce bias due to dependent censoring.
doi:10.1186/1471-2288-9-66
PMCID: PMC2760573  PMID: 19788750
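A minimal sketch of the two ingredients above: midpoint imputation of the interval-censored recurrences, followed by a Kaplan-Meier estimate on the resulting right-censored data (shown unweighted for brevity; the paper's risk-group weights would multiply the same tallies). Data and names are illustrative; assumes numpy.

import numpy as np

# (left, right) brackets a recurrence; right=None means no recurrence was
# seen by the last colonoscopy (right-censored at `left`).
intervals = [(1.0, 3.0), (2.0, 2.5), (4.0, None), (0.5, 1.5), (3.0, None)]

times, events = [], []
for left, right in intervals:
    if right is None:                       # right-censored
        times.append(left); events.append(0)
    else:                                   # impute event at the midpoint
        times.append((left + right) / 2.0); events.append(1)

# Kaplan-Meier product over the imputed data, one subject at a time.
order = np.argsort(times)
surv, at_risk = 1.0, len(times)
for i in order:
    if events[i]:
        surv *= 1.0 - 1.0 / at_risk
    at_risk -= 1                            # subject leaves the risk set
print("estimated recurrence-free proportion at end of trial:", round(surv, 3))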
18.  Probabilistic PCA of censored data: accounting for uncertainties in the visualization of high-throughput single-cell qPCR data 
Bioinformatics  2014;30(13):1867-1875.
Motivation: High-throughput single-cell quantitative real-time polymerase chain reaction (qPCR) is a promising technique allowing for new insights in complex cellular processes. However, the PCR reaction can be detected only up to a certain detection limit, whereas failed reactions could be due to low or absent expression, and the true expression level is unknown. Because this censoring can occur for high proportions of the data, it is one of the main challenges when dealing with single-cell qPCR data. Principal component analysis (PCA) is an important tool for visualizing the structure of high-dimensional data as well as for identifying subpopulations of cells. However, to date it is not clear how to perform a PCA of censored data. We present a probabilistic approach that accounts for the censoring and evaluate it for two typical datasets containing single-cell qPCR data.
Results: We use the Gaussian process latent variable model framework to account for censoring by introducing an appropriate noise model and allowing a different kernel for each dimension. We evaluate this new approach for two typical qPCR datasets (of mouse embryonic stem cells and blood stem/progenitor cells, respectively) by performing linear and non-linear probabilistic PCA. Taking the censoring into account results in a 2D representation of the data, which better reflects its known structure: in both datasets, our new approach results in a better separation of known cell types and is able to reveal subpopulations in one dataset that could not be resolved using standard PCA.
Availability and implementation: The implementation was based on the existing Gaussian process latent variable model toolbox (https://github.com/SheffieldML/GPmat); extensions for noise models and kernels accounting for censoring are available at http://icb.helmholtz-muenchen.de/censgplvm.
Contact: fbuettner.phys@gmail.com
Supplementary information: Supplementary data are available at Bioinformatics online.
doi:10.1093/bioinformatics/btu134
PMCID: PMC4071202  PMID: 24618470
19.  Multiple approaches to assessing the effects of delays for hip fracture patients in the United States and Canada. 
Health Services Research  2000;34(7):1499-1518.
OBJECTIVE
To examine the determinants of postsurgery length of stay (LOS) and inpatient mortality in the United States (California and Massachusetts) and Canada (Manitoba and Quebec).
DATA SOURCES/STUDY SETTING
Patient discharge abstracts from the Agency for Health Care Policy and Research Nationwide Inpatient Sample and from provincial health ministries.
STUDY DESIGN
Descriptive statistics by state or province, pooled competing risks hazards models (which control for censoring of LOS and inpatient mortality data), and instrumental variables (which control for confounding in observational data) were used to analyze the effect of wait time for hip fracture surgery on postsurgery outcomes.
DATA EXTRACTIONS
Data were extracted for patients admitted to an acute care hospital with a primary diagnosis of hip fracture who received hip fracture surgery, were admitted from home or the emergency room, were age 45 or older, stayed in the hospital 365 days or less, and were not trauma patients.
PRINCIPAL FINDINGS
The descriptive data indicate that wait times for surgery are longer in the two Canadian provinces than in the two U.S. states. Canadians also have longer postsurgery LOS and higher inpatient mortality. Yet the competing risks hazards model indicates that the effect of wait time on postsurgery LOS is small in magnitude. Instrumental variables analysis reveals that wait time for surgery is not a significant predictor of postsurgery length of stay. The hazards model reveals significant differences in mortality across regions. However, both the regressions and the instrumental variables indicate that these differences are not attributable to wait time for surgery.
CONCLUSIONS
Statistical models that account for censoring and confounding yield conclusions that differ from those implied by descriptive statistics in administrative data. Longer wait time for hip fracture surgery does not explain the difference in postsurgery outcomes across countries.
PMCID: PMC1975661  PMID: 10737450
20.  Predicting treatment effect from surrogate endpoints and historical trials: an extrapolation involving probabilities of a binary outcome or survival to a specific time 
Biometrics  2011;68(1):248-257.
Summary
Using multiple historical trials with surrogate and true endpoints, we consider various models to predict the effect of treatment on a true endpoint in a target trial in which only a surrogate endpoint is observed. This predicted result is computed using (1) a prediction model (mixture, linear, or principal stratification) estimated from historical trials and the surrogate endpoint of the target trial and (2) a random extrapolation error estimated from successively leaving out each trial among the historical trials. The method applies to either binary outcomes or survival to a particular time that is computed from censored survival data. We compute a 95% confidence interval for the predicted result and validate its coverage using simulation. To summarize the additional uncertainty from using a predicted instead of true result for the estimated treatment effect, we compute its multiplier of standard error. Software is available for download.
doi:10.1111/j.1541-0420.2011.01646.x
PMCID: PMC3218246  PMID: 21838732
Randomized trials; Reproducibility; Principal stratification
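A minimal sketch of the leave-one-trial-out extrapolation error described above, using the linear variant of the prediction model; the historical-trial effect estimates are invented for illustration, and assumes numpy.

import numpy as np

# Invented per-trial effect estimates: on the surrogate (x), on the true
# endpoint (y), one row per historical trial.
x = np.array([0.10, 0.25, 0.18, 0.30, 0.05, 0.22])
y = np.array([0.06, 0.20, 0.12, 0.24, 0.02, 0.18])

def predict(x_tr, y_tr, x_new):
    slope, intercept = np.polyfit(x_tr, y_tr, 1)   # linear prediction model
    return intercept + slope * x_new

# Leaving each historical trial out in turn, its prediction residual
# estimates the random extrapolation error of bridging to a new trial.
resid = np.array([y[i] - predict(np.delete(x, i), np.delete(y, i), x[i])
                  for i in range(len(x))])
extrap_sd = resid.std(ddof=1)

# Target trial observed only a surrogate effect of 0.20 (illustrative).
pred = predict(x, y, 0.20)
print(f"predicted true-endpoint effect = {pred:.3f} +/- 1.96 x {extrap_sd:.3f}")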
21.  Using Cure Models and Multiple Imputation to Utilize Recurrence as an Auxiliary Variable for Overall Survival 
Background
Intermediate outcome variables can often be used as auxiliary variables for the true outcome of interest in randomized clinical trials. For many cancers, time to recurrence is an informative marker in predicting a patient’s overall survival outcome, and could provide auxiliary information for the analysis of survival times.
Purpose
To investigate whether models linking recurrence and death combined with a multiple imputation procedure for censored observations can result in efficiency gains in the estimation of treatment effects, and be used to shorten trial lengths.
Methods
Recurrence and death times are modeled using data from 12 trials in colorectal cancer. Multiple imputation is used as a strategy for handling missing values arising from censoring. The imputation procedure uses a cure model for time to recurrence and a time-dependent Weibull proportional hazards model for time to death. Recurrence times are imputed, and then death times are imputed conditionally on recurrence times. To illustrate these methods, trials are artificially censored 2-years after the last accrual, the imputation procedure is implemented, and a log-rank test and Cox model are used to analyze and compare these new data with the original data.
Results
The results show modest but consistent gains in efficiency in the analysis by using the auxiliary information in recurrence times. Comparison of the analyses shows the treatment effect estimates and log-rank test results from the 2-year censored imputed data to be in between the estimates from the original data and the artificially censored data, indicating that the procedure was able to recover some of the information lost due to censoring.
Limitations
The models used are all fully parametric, requiring distributional assumptions of the data.
Conclusions
The proposed models may be useful for improving the efficiency of treatment effect estimation in cancer trials and shortening trial length.
doi:10.1177/1740774511414741
PMCID: PMC3197975  PMID: 21921063
Auxiliary Variables; Colon Cancer; Cure Models; Multiple Imputation; Surrogate Endpoints
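A minimal sketch of one piece of the imputation step above: drawing a recurrence time from a Weibull model conditional on being event-free at the censoring time, by inverting the conditional survival function. The cure fraction, covariates, and the subsequent conditional imputation of death times are omitted; parameters are illustrative and assume numpy.

import numpy as np

rng = np.random.default_rng(1)

def weibull_beyond(c, shape, scale, rng):
    """Draw T from a Weibull(shape, scale) given T > c, by inversion of
    S(t | T > c) = exp(-((t/scale)**shape - (c/scale)**shape))."""
    u = rng.uniform()
    return scale * ((c / scale) ** shape - np.log(u)) ** (1.0 / shape)

# Subjects administratively censored 2 years after the last accrual:
censor_times = [2.0, 2.0, 2.0]
imputed = [weibull_beyond(c, shape=1.3, scale=4.0, rng=rng)
           for c in censor_times]
print([round(t, 2) for t in imputed])

Repeating the draw M times yields the multiple imputations over which the log-rank test and Cox model results are then combined.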
22.  Limitation of Inverse Probability-of-Censoring Weights in Estimating Survival in the Presence of Strong Selection Bias 
American Journal of Epidemiology  2011;173(5):569-577.
In time-to-event analyses, artificial censoring with correction for induced selection bias using inverse probability-of-censoring weights can be used to 1) examine the natural history of a disease after effective interventions are widely available, 2) correct bias due to noncompliance with fixed or dynamic treatment regimens, and 3) estimate survival in the presence of competing risks. Artificial censoring entails censoring participants when they meet a predefined study criterion, such as exposure to an intervention, failure to comply, or the occurrence of a competing outcome. Inverse probability-of-censoring weights use measured common predictors of the artificial censoring mechanism and the outcome of interest to determine what the survival experience of the artificially censored participants would be had they never been exposed to the intervention, complied with their treatment regimen, or not developed the competing outcome. Even if all common predictors are appropriately measured and taken into account, in the context of small sample size and strong selection bias, inverse probability-of-censoring weights could fail because of violations in assumptions necessary to correct selection bias. The authors used an example from the Multicenter AIDS Cohort Study, 1984–2008, regarding estimation of long-term acquired immunodeficiency syndrome-free survival to demonstrate the impact of violations in necessary assumptions. Approaches to improve correction methods are discussed.
doi:10.1093/aje/kwq385
PMCID: PMC3105434  PMID: 21289029
epidemiologic methods; selection bias; survival analysis
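A minimal sketch of the weighting idea in its simplest one-time-slice form: model the probability of remaining uncensored from a measured common predictor, then weight each uncensored subject by the inverse of that probability. Real analyses use time-varying pooled models, and all data-generating numbers here are invented; assumes numpy and scikit-learn.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# One baseline predictor drives both artificial censoring and the
# outcome -- the selection mechanism the weights must undo.
n = 2000
x = rng.normal(size=n)
uncensored = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + x)))).astype(bool)
outcome = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.8 * x)))).astype(bool)

# Estimated P(uncensored | x) from a logistic model fit to everyone.
p_unc = (LogisticRegression()
         .fit(x.reshape(-1, 1), uncensored)
         .predict_proba(x.reshape(-1, 1))[:, 1])

w = 1.0 / p_unc[uncensored]                 # inverse-probability weights
naive = outcome[uncensored].mean()
ipcw = np.average(outcome[uncensored], weights=w)
print(f"naive = {naive:.3f}  IPCW = {ipcw:.3f}  full data = {outcome.mean():.3f}")

With small samples or near-zero censoring probabilities the weights explode, which is exactly the failure mode under strong selection bias that the article documents.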
23.  The Brazil SimSmoke Policy Simulation Model: The Effect of Strong Tobacco Control Policies on Smoking Prevalence and Smoking-Attributable Deaths in a Middle Income Nation 
PLoS Medicine  2012;9(11):e1001336.
David Levy and colleagues use the SimSmoke model to estimate the effect of Brazil's recent stronger tobacco control policies on smoking prevalence and associated premature mortality, and the effect that additional policies may have.
Background
Brazil has reduced its smoking rate by about 50% in the last 20 y. During that time period, strong tobacco control policies were implemented. This paper estimates the effect of these stricter policies on smoking prevalence and associated premature mortality, and the effect that additional policies may have.
Methods and Findings
The model was developed using the SimSmoke tobacco control policy model. Using policy, population, and smoking data for Brazil, the model assesses the effect on premature deaths of cigarette taxes, smoke-free air laws, mass media campaigns, marketing restrictions, packaging requirements, cessation treatment programs, and youth access restrictions. We estimate the effect of past policies relative to a counterfactual of policies kept to 1989 levels, and the effect of stricter future policies. Male and female smoking prevalence in Brazil have fallen by about half since 1989, which represents a 46% (lower and upper bounds: 28%–66%) relative reduction compared to the 2010 prevalence under the counterfactual scenario of policies held to 1989 levels. Almost half of that 46% reduction is explained by price increases, 14% by smoke-free air laws, 14% by marketing restrictions, 8% by health warnings, 6% by mass media campaigns, and 10% by cessation treatment programs. As a result of the past policies, a total of almost 420,000 (260,000–715,000) deaths had been averted by 2010, increasing to almost 7 million (4.5 million–10.3 million) deaths projected by 2050. Comparing future implementation of a set of stricter policies to a scenario with 2010 policies held constant, smoking prevalence by 2050 could be reduced by another 39% (29%–54%), and 1.3 million (0.9 million–2.0 million) out of 9 million future premature deaths could be averted.
Conclusions
Brazil provides one of the outstanding public health success stories in reducing deaths due to smoking, and serves as a model for other low and middle income nations. However, a set of stricter policies could further reduce smoking and save many additional lives.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Tobacco kills up to half its users—more than 5 million smokers die every year from tobacco-related causes. It also kills more than half a million non-smokers annually who have been exposed to second-hand smoke. If current trends continue, annual tobacco-related deaths could increase to more than 8 million by 2030. In response to this global tobacco epidemic, the World Health Organization has developed an international instrument for tobacco control called the Framework Convention on Tobacco Control (FCTC). Since it came into force in February 2005, 176 countries have become parties to the FCTC. As such, they agree to implement comprehensive bans on tobacco advertising, promotion, and sponsorship; to ban misleading and deceptive terms on tobacco packaging; to protect people from exposure to cigarette smoke in public spaces and indoor workplaces; to implement tax policies aimed at reducing tobacco consumption; and to combat illicit trade in tobacco products.
Why Was This Study Done?
Brazil has played a pioneering role in providing support for tobacco control measures in low and middle income countries. It introduced its first cigarette-specific tax in 1990 and, in 1996, it placed the first warnings on cigarette packages and introduced smoke-free air laws. Many of these measures have subsequently been strengthened. Over the same period, the prevalence of smoking among adults (the proportion of the population that smokes) has halved in Brazil, falling from 34.8% in 1989 to 18.5% in 2008. But did the introduction of tobacco control policies contribute to this decline, and if so, which were the most effective policies? In this study, the researchers use a computational model called the SimSmoke tobacco control policy model to investigate this question and to examine the possible effect of introducing additional control policies consistent with the FCTC, which Brazil has been a party to since 2006.
What Did the Researchers Do and Find?
The researchers developed Brazil SimSmoke by incorporating policy, population, and smoking data for Brazil into the SimSmoke simulation model; Brazil SimSmoke estimates smoking prevalence and smoking-attributable deaths from 1989 forwards. They then compared smoking prevalences and smoking-attributable deaths estimated by Brazil SimSmoke for 2010 with and without the inclusion of the tobacco control policies that were introduced between 1989 and 2010. The model estimated that the smoking prevalence in Brazil in 2010 was reduced by 46% by the introduction of tobacco control measures. Almost half of this reduction was explained by price increases, 14% by smoke-free laws, 14% by marketing restrictions, 8% by health warnings, 6% by anti-smoking media campaigns, and 10% by cessation treatment programs. Moreover, as a result of past policies, the model estimated that almost 420,000 tobacco-related deaths had been averted by 2010 and that almost 7 million deaths will have been averted by 2050. Finally, using the model to compare a scenario that includes stricter policies (for example, an increase in tobacco tax) with a scenario that includes the 2010 policies only indicated that stricter control policies would reduce the estimated smoking prevalence by an extra 39% between 2010 and 2050 and avert about 1.3 million additional premature deaths.
What Do These Findings Mean?
These findings indicate that the introduction of tobacco control policies has been a critical factor in the rapid decline in smoking prevalence in Brazil over the past 20 years. They also suggest that the introduction of stricter policies that are fully consistent with the FCTC has the potential to reduce the prevalence of smoking further and save many additional lives. Although the reduction in smoking prevalence in Brazil between 1989 and 2010 predicted by the Brazil SimSmoke model is close to the recorded reduction over that period, these findings need to be interpreted with caution because of the many assumptions incorporated in the model. Moreover, the accuracy of the model's predictions depends on the accuracy of the data fed into it, some of which was obtained from other countries and may not accurately reflect the situation in Brazil. Importantly, however, these findings show that, even for a middle income nation, reducing tobacco use is a “winnable battle” that carries huge dividends in terms of reducing illness and death without requiring unlimited resources.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001336.
The World Health Organization provides information about the dangers of tobacco (in several languages), about the Framework Convention on Tobacco Control, and about tobacco control in Brazil
The Framework Convention Alliance provides more information about the FCTC
The Brazilian National Cancer Institute (INCA) provides information on tobacco control policies in Brazil; additional information about tobacco control laws in Brazil is available on the Tobacco Control Laws interactive website, which provides information about tobacco control legislation worldwide
More information on the SimSmoke model of tobacco control policies is available in document or slideshow form
SmokeFree, a website provided by the UK National Health Service, offers advice on quitting smoking and includes personal stories from people who have stopped smoking
doi:10.1371/journal.pmed.1001336
PMCID: PMC3491001  PMID: 23139643
24.  Schizophrenia 
Clinical Evidence  2012;2012:1007.
Introduction
The lifetime prevalence of schizophrenia is approximately 0.7% and incidence rates vary between 7.7 and 43.0 per 100,000; about 75% of people have relapses and continued disability, and one third fail to respond to standard treatment. Positive symptoms include auditory hallucinations, delusions, and thought disorder. Negative symptoms (demotivation, self-neglect, and reduced emotion) have not been consistently improved by any treatment.
Methods and outcomes
We conducted a systematic review and aimed to answer the following clinical questions: What are the effects of drug treatments for positive, negative, or cognitive symptoms of schizophrenia? What are the effects of drug treatments in people with schizophrenia who are resistant to standard antipsychotic drugs? What are the effects of interventions to improve adherence to antipsychotic medication in people with schizophrenia? We searched: Medline, Embase, The Cochrane Library, and other important databases up to May 2010 (Clinical Evidence reviews are updated periodically; please check our website for the most up-to-date version of this review). We included harms alerts from relevant organisations such as the US Food and Drug Administration (FDA) and the UK Medicines and Healthcare products Regulatory Agency (MHRA).
Results
We found 51 systematic reviews, RCTs, or observational studies that met our inclusion criteria. We performed a GRADE evaluation of the quality of evidence for interventions.
Conclusions
In this systematic review, we present information relating to the effectiveness and safety of the following interventions: amisulpride, chlorpromazine, clozapine, depot haloperidol decanoate, haloperidol, olanzapine, pimozide, quetiapine, risperidone, sulpiride, ziprasidone, zotepine, aripiprazole, sertindole, paliperidone, flupentixol, depot flupentixol decanoate, zuclopenthixol, depot zuclopenthixol decanoate, behavioural therapy, clozapine, compliance therapy, first-generation antipsychotic drugs in treatment-resistant people, multiple-session family interventions, psychoeducational interventions, and second-generation antipsychotic drugs in treatment-resistant people.
Key Points
The lifetime prevalence of schizophrenia is approximately 0.7% and incidence rates vary between 7.7 and 43.0 per 100,000; about 75% of people have relapses and continued disability, and one third fail to respond to standard treatment. Positive symptoms include auditory hallucinations, delusions, and thought disorder. Negative symptoms (anhedonia, asociality, flattening of affect, and demotivation) and cognitive dysfunction have not been consistently improved by any treatment.
Standard treatment of schizophrenia has been antipsychotic drugs, the first of which included chlorpromazine and haloperidol, but these so-called first-generation antipsychotics can all cause adverse effects such as extrapyramidal adverse effects, hyperprolactinaemia, and sedation. Attempts to address these adverse effects led to the development of second-generation antipsychotics.
The second-generation antipsychotics amisulpride, clozapine, olanzapine, and risperidone may be more effective at reducing positive symptoms compared with first-generation antipsychotic drugs, but may cause similar adverse effects, plus additional metabolic effects such as weight gain.
CAUTION: Clozapine has been associated with potentially fatal blood dyscrasias. Blood monitoring is essential, and it is recommended that its use be limited to people with treatment-resistant schizophrenia.
Pimozide, quetiapine, aripiprazole, sulpiride, ziprasidone, and zotepine seem to be as effective as standard antipsychotic drugs at improving positive symptoms. Again, these drugs cause similar adverse effects to first-generation antipsychotics and other second-generation antipsychotics.
CAUTION: Pimozide has been associated with sudden cardiac death at doses above 20 mg daily.
We found very little evidence regarding depot injections of haloperidol decanoate, flupentixol decanoate, or zuclopenthixol decanoate; thus, we don’t know if they are more effective than oral treatments at improving symptoms.
In people who are resistant to standard antipsychotic drugs, clozapine may improve symptoms compared with first-generation antipsychotic agents, but this benefit must be balanced against the likelihood of adverse effects. We found limited evidence on individual first- or second-generation antipsychotic drugs other than clozapine in people with treatment-resistant schizophrenia.
In people with treatment-resistant schizophrenia, we don't know how second-generation agents other than clozapine compare with each other or with first-generation antipsychotic agents, or how clozapine compares with other second-generation antipsychotic agents, because of a lack of evidence.
We don't know whether behavioural interventions, compliance therapy, psychoeducational interventions, or family interventions improve adherence to antipsychotic medication compared with usual care because of a paucity of good-quality evidence.
It is clear that some included studies in this review have serious failings and that the evidence base for the efficacy of antipsychotic medication and other interventions is surprisingly weak. For example, although haloperidol has been used as the standard comparator in many trials, the clinical trial evidence for haloperidol is less impressive than might be expected.
By their very nature, systematic reviews and RCTs provide average indices of probable efficacy in groups of selected individuals. Although some RCTs limit inclusion criteria to a single category of diagnosis, many studies include individuals with different diagnoses such as schizoaffective disorder. In all RCTs, even in those recruiting people with a single DSM or ICD-10 diagnosis, there is considerable clinical heterogeneity.
Genome-wide association studies of large samples with schizophrenia demonstrate that this clinical heterogeneity reflects, in turn, complex biological heterogeneity. For example, such studies suggest that around 1000 genetic variants of low penetrance, together with other individually rare genetic variants of higher penetrance, epistasis, epigenetic mechanisms, and the biological and psychological effects of environmental factors, are responsible for the resultant complex clinical phenotype. A more stratified approach to clinical trials would help to identify those subgroups that seem to be the best responders to a particular intervention.
To date, however, there is little to suggest that stratification on the basis of clinical characteristics successfully helps to predict which drugs work best for which people. There is a pressing need for the development of biomarkers with clinical utility for mental health problems. Such measures could help to stratify clinical populations or provide better markers of efficacy in clinical trials, and would complement the current use of clinical outcome scales. Clinicians are also well aware that many people treated with antipsychotic medication develop significant adverse effects such as extrapyramidal symptoms or weight gain. Again, our ability to identify which people will develop which adverse effects is poorly developed, and might be assisted by using biomarkers to stratify populations.
The results of this review indicate that current antipsychotic drugs are of limited efficacy in some people, and that most drugs cause adverse effects in most people. Although this is a rather downbeat conclusion, it should not be too surprising, given clinical experience and our knowledge of the pharmacology of the available antipsychotic medication. All currently available antipsychotic medications share the same putative mechanism of action: dopaminergic antagonism, with varying degrees of antagonism at other receptor sites. More efficacious antipsychotic medication awaits a better understanding of the biological pathogenesis of these conditions so that rational treatments can be developed.
PMCID: PMC3385413  PMID: 23870705
25.  Improving Melanoma Classification by Integrating Genetic and Morphologic Features 
PLoS Medicine  2008;5(6):e120.
Background
In melanoma, morphology-based classification systems have not been able to provide relevant information for selecting treatments for patients whose tumors have metastasized. The recent identification of causative genetic alterations has revealed mutations in signaling pathways that offer targets for therapy. Identifying morphologic surrogates that can identify patients whose tumors express such alterations (or functionally equivalent alterations) would be clinically useful for therapy stratification and for retrospective analysis of clinical trial data.
Methodology/Principal Findings
We defined and assessed a panel of histomorphologic measures and correlated them with the mutation status of the oncogenes BRAF and NRAS in a cohort of 302 archival tissues of primary cutaneous melanomas from an academic comprehensive cancer center. Melanomas with BRAF mutations showed distinct morphological features such as increased upward migration and nest formation of intraepidermal melanocytes, thickening of the involved epidermis, and sharper demarcation to the surrounding skin; and they had larger, rounder, and more pigmented tumor cells (all p-values below 0.0001). By contrast, melanomas with NRAS mutations could not be distinguished based on these morphological features. Using simple combinations of features, BRAF mutation status could be predicted with up to 90.8% accuracy in the entire cohort as well as within the categories of the current World Health Organization (WHO) classification. Among the variables routinely recorded in cancer registries, we identified age < 55 y as the single most predictive factor of BRAF mutation in our cohort. Using age < 55 y as a surrogate for BRAF mutation in an independent cohort of 4,785 patients of the Southern German Tumor Registry, we found a significant survival benefit (p < 0.0001) for patients who, based on their age, were predicted to have BRAF mutant melanomas in 69% of the cases. This group also showed a different pattern of metastasis, more frequently involving regional lymph nodes, compared to the patients predicted to have no BRAF mutation and who more frequently displayed satellite, in-transit metastasis, and visceral metastasis (p < 0.0001).
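To make the "simple combinations of features" idea concrete, the following is a minimal sketch, not the authors' actual analysis, of predicting BRAF mutation status from scored histomorphologic features with a cross-validated logistic-regression classifier. The feature names, scores, and data are hypothetical stand-ins for the study's variables.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical scores per tumor for five histomorphologic features
# (e.g., upward migration of intraepidermal melanocytes, nest formation,
# epidermal thickening, demarcation sharpness, cell pigmentation).
X = np.array([
    [3, 2, 3, 3, 2],
    [0, 1, 1, 0, 1],
    [2, 3, 2, 3, 3],
    [1, 0, 0, 1, 0],
    [3, 3, 3, 2, 3],
    [0, 0, 1, 1, 1],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = BRAF mutant, 0 = wild type

model = LogisticRegression()
# Cross-validated accuracy estimates how well a simple linear combination
# of features predicts mutation status (the paper reports up to 90.8%
# accuracy in its cohort of 302 melanomas).
accuracy = cross_val_score(model, X, y, cv=3, scoring="accuracy").mean()
print(f"Cross-validated accuracy: {accuracy:.2f}")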
Conclusions
Refined morphological classification of primary melanomas can be used to improve existing melanoma classifications by forming subgroups that are genetically more homogeneous and likely to differ in important clinical variables such as outcome and pattern of metastasis. We expect this information to improve classification and facilitate stratification for therapy as well as retrospective analysis of existing trial data.
Boris Bastian and colleagues present a refined morphological classification of primary melanomas that can be used to improve existing melanoma classifications by defining genetically homogeneous subgroups.
Editors' Summary
Background.
Skin cancers—the most commonly diagnosed cancers worldwide—are usually caused by exposure to ultraviolet (UV) radiation in sunlight. UV radiation damages the DNA in skin cells and can introduce permanent genetic changes (mutations) into the skin cells that allow them to divide uncontrollably to form a tumor, a disorganized mass of cells. Because there are many different cell types in the skin, there are many types of skin cancer. The most dangerous type—melanoma—develops when genetic changes occur in melanocytes, the cells that produce the skin pigment melanin. Although only 4% of skin cancers are melanomas, 80% of skin cancer deaths are caused by melanomas. The first signs of a melanoma are often a change in the appearance or size of a mole (a pigmented skin blemish that is also called a nevus) or a newly arising pigmented lesion that looks different from the other moles (an “ugly duckling”). If this early sign is noticed and the melanoma is diagnosed before it has spread from the skin into other parts of the body, surgery can sometimes provide a cure. But, for more advanced melanomas, the outlook is generally poor. Although radiation therapy, chemotherapy, or immunotherapy (drugs that stimulate the immune system to kill the cancer cells) can prolong the life expectancy of some patients, these treatments often fail to remove all of the cancer cells.
Why Was This Study Done?
Now, however, scientists have identified some of the genetic alterations that cause melanoma. For example, they know that many melanomas carry mutations in either the BRAF gene or the NRAS gene, and that the proteins made from these mutated genes (“oncogenes”) help cancer cells to grow uncontrollably. The hope is that targeted drugs designed to block the activity of oncogenic BRAF or NRAS might stop the growth of those melanomas that make these altered proteins. But how can the patients with these specific tumors be identified in the clinic? The expression of altered proteins is likely to affect the microscopic growth patterns (“histomorphology”) of melanomas. However, the current histomorphology-based classification system for melanomas, which distinguishes four main types of melanoma, does not help clinicians choose the best treatment for their patients. In this study, the researchers have tried to improve melanoma classification by looking for correlations between histomorphological features and genetic alterations in a large collection of melanomas.
What Did the Researchers Do and Find?
The researchers examined several histomorphological features in more than 300 melanoma samples and used statistical methods to correlate these features with the mutation status of BRAF and NRAS in the tumors. They found that some individual histomorphological features were strongly associated with the BRAF (but not the NRAS) mutation status of the tumors. For example, melanomas with BRAF mutations had more melanocytes in the upper layers of the epidermis (the outermost layer of the skin) than did those without BRAF mutations (melanocytes usually live at the bottom of the epidermis). Then, by combining several individual histomorphological features, the researchers built a model that correctly predicted the BRAF mutation status of more than 90% of the melanomas. They also found that, among the variables routinely recorded in cancer registries, being younger than 55 years old was the single most predictive factor for BRAF mutations. Finally, in another large group of patients with melanoma, the researchers found that those patients predicted to have a BRAF mutation on the basis of their age survived longer than those patients predicted not to have a BRAF mutation using the same criterion.
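The survival comparison described above, between patients predicted BRAF mutant (age < 55 y) and those predicted wild type, is the kind of analysis typically done with Kaplan-Meier curves and a log-rank test. Below is a hedged sketch under stated assumptions: the lifelines package, the group sizes, and the toy follow-up data are illustrative choices, not the study's actual code or data.

import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# Hypothetical follow-up times (months) and event indicators (1 = death)
# for the two age-defined groups.
t_young = rng.exponential(scale=120, size=200)  # predicted BRAF mutant (< 55 y)
t_old = rng.exponential(scale=80, size=200)     # predicted wild type (>= 55 y)
e_young = rng.integers(0, 2, size=200)
e_old = rng.integers(0, 2, size=200)

# Log-rank test compares the survival distributions of the two groups.
result = logrank_test(t_young, t_old,
                      event_observed_A=e_young,
                      event_observed_B=e_old)
print(f"log-rank p-value: {result.p_value:.4g}")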
What Do These Findings Mean?
These findings suggest that an improved classification of melanomas that combines an analysis of known genetic factors with histomorphological features might divide melanomas into subgroups that are likely to differ in terms of their clinical outcome and responses to targeted therapies when they become available. Additional studies are needed to investigate whether the histomorphological features identified here can be readily assessed in clinical settings and whether different observers will agree on the scoring of these features. The classification model defined by the researchers also needs to be validated and refined in independent groups of patients. Nevertheless, these findings represent an important first step toward helping clinicians improve outcomes for patients with melanoma.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050120.
A related PLoS Medicine Research in Translation article is available
The MedlinePlus encyclopedia provides information for patients about melanoma
The US National Cancer Institute provides information for patients and health professionals about melanoma (in English and Spanish)
Cancer Research UK also provides detailed information about the causes, diagnosis, and treatment of melanoma
doi:10.1371/journal.pmed.0050120
PMCID: PMC2408611  PMID: 18532874
