1.  A tutorial on principal stratification-based sensitivity analysis: Application to smoking cessation studies 
Background
One problem with assessing effects of smoking cessation interventions on withdrawal symptoms is that symptoms are affected by whether participants abstain from smoking during trials. Those who enter a randomized trial but do not change smoking behavior might not experience withdrawal-related symptoms.
Purpose
We present a tutorial of how one can use a principal stratification sensitivity analysis to account for abstinence in the estimation of smoking cessation intervention effects. The paper is intended to introduce researchers to principal stratification and describe how they might implement the methods.
Methods
We provide a hypothetical example that demonstrates why estimating effects within observed abstention groups is problematic. We demonstrate how estimation of effects within groups defined by potential abstention that an individual would have in either arm of a study can provide meaningful inferences. We describe a sensitivity analysis method to estimate such effects, and use it to investigate effects of a combined behavioral and nicotine replacement therapy intervention on withdrawal symptoms in a female prisoner population.
Results
Overall, the intervention was found to reduce withdrawal symptoms but the effect was not statistically significant in the group that was observed to abstain. More importantly, the intervention was found to be highly effective in the group that would abstain regardless of intervention assignment. The effectiveness of the intervention in other potential abstinence strata depends on the sensitivity analysis assumptions.
Limitations
We make assumptions to narrow the range of our sensitivity parameter estimates. While appropriate in this situation, such assumptions might not be plausible in all situations.
Conclusions
A principal stratification sensitivity analysis provides a meaningful method of accounting for abstinence effects in the evaluation of smoking cessation interventions on withdrawal symptoms. Smoking researchers have previously recommended analyses in subgroups defined by observed abstention status in the evaluation of smoking cessation interventions. We believe that principal stratification analyses should replace such analyses as the preferred means of accounting for post-randomization abstinence effects in the evaluation of smoking cessation programs.
doi:10.1177/1740774510367811
PMCID: PMC2874094  PMID: 20423924
2.  An Application of Principal Stratification to Control for Institutionalization at Follow-up in Studies of Substance Abuse Treatment Programs 
The Annals of Applied Statistics  2008;2(3):1034-1055.
Participants in longitudinal studies on the effects of drug treatment and criminal justice system interventions are at high risk for institutionalization (e.g., spending time in an environment where their freedom to use drugs, commit crimes, or engage in risky behavior may be circumscribed). Methods used for estimating treatment effects in the presence of institutionalization during follow-up can be highly sensitive to assumptions that are unlikely to be met in applications and thus likely to yield misleading inferences. In this paper, we consider the use of principal stratification to control for institutionalization at follow-up. Principal stratification has been suggested for similar problems where outcomes are unobservable for samples of study participants because of dropout, death, or other forms of censoring. The method identifies principal strata within which causal effects are well defined and potentially estimable. We extend the method of principal stratification to model institutionalization at follow-up and estimate the effect of residential substance abuse treatment versus outpatient services in a large-scale study of adolescent substance abuse treatment programs. Additionally, we discuss practical issues in applying the principal stratification model to data. We show via simulation studies that the model can recover true effects only if the data meet strenuous demands, and that caution must be taken when implementing principal stratification as a technique to control for post-treatment confounders such as institutionalization.
doi:10.1214/08-AOAS179
PMCID: PMC2749670  PMID: 19779599
Principal Stratification; Post-Treatment Confounder; Institutionalization; Causal Inference
3.  Mediation Analysis with Principal Stratification 
Statistics in medicine  2009;28(7):1108-1130.
In assessing the mechanism of treatment efficacy in randomized clinical trials, investigators often perform mediation analyses by assessing whether the significant intent-to-treat treatment effect on outcome occurs through or around a third intermediate or mediating variable: indirect and direct effects, respectively. Standard mediation analyses assume sequential ignorability, i.e., that conditional on covariates the intermediate or mediating factor is randomly assigned, as is the treatment in a randomized clinical trial. This research focuses on the application of the principal stratification approach for estimating the direct effect of a randomized treatment, but without the standard sequential ignorability assumption. This approach is used to estimate the direct effect of treatment as a difference between expectations of potential outcomes within latent subgroups of participants for whom the intermediate variable behavior would be constant, regardless of the randomized treatment assignment. Using a Bayesian estimation procedure, we also assess the sensitivity of results based on the principal stratification approach to heterogeneity of the variances among these principal strata. We assess this approach with simulations and apply it to two psychiatric examples. Both the examples and the simulations indicated robustness of our findings to the homogeneous variance assumption. However, the simulations showed that the magnitude of treatment effects derived under the principal stratification approach was sensitive to model mis-specification.
doi:10.1002/sim.3533
PMCID: PMC2669107  PMID: 19184975
Principal stratification; mediating variables; direct effects; principal strata probabilities; heterogeneous variances
4.  Comparing breast cancer mortality rates before-and-after a change in availability of screening in different regions: Extension of the paired availability design 
Background
In recent years there has been increased interest in evaluating breast cancer screening using data from before-and-after studies in multiple geographic regions. One approach that has not previously been considered in this context is the paired availability design. The paired availability design was developed to evaluate the effect of medical interventions by comparing changes in outcomes before and after a change in the availability of an intervention in various locations. A simple potential outcomes model yields estimates of efficacy, the effect of receiving the intervention, as opposed to effectiveness, the effect of changing the availability of the intervention. By combining estimates of efficacy rather than effectiveness, the paired availability design avoids confounding due to different fractions of subjects receiving the interventions at different locations. The original formulation involved short-term outcomes; the challenge here is accommodating long-term outcomes.
Methods
The outcome is incident breast cancer deaths in a time period, that is, deaths from breast cancers diagnosed in that same time period. We consider the plausibility of the five basic assumptions of the paired availability design and propose a novel analysis to accommodate likely violations of the assumption of stable screening effects.
Results
We applied the paired availability design to data on breast cancer screening from six counties in Sweden. The estimated yearly change in incident breast cancer deaths per 100,000 persons ages 40–69 (in most counties) due to receipt of screening (among the relevant type of subject in the potential outcomes model) was -9 with 95% confidence interval (-14, -4) or (-14, -5), depending on the sensitivity analysis.
Conclusion
In a realistic application, the extended paired availability design yielded reasonably precise confidence intervals for the effect of receiving screening on the rate of incident breast cancer death. Although the assumption of stable preferences may be questionable, its impact will be small if there is little screening in the first time period. However, estimates may be substantially confounded by improvements in systemic therapy over time. Therefore the results should be interpreted with care.
doi:10.1186/1471-2288-4-12
PMCID: PMC434501  PMID: 15149551
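The core arithmetic of this design lends itself to a short illustration. Below is a minimal sketch, assuming a Wald-type ratio estimator (change in outcome rate divided by change in the fraction receiving screening, combined across locations); the paper's exact estimator, weighting, and confidence-interval construction are more involved, and all numbers here are invented.

```python
# Hypothetical sketch of a Wald-type efficacy estimate in the spirit of the
# paired availability design: within each location, divide the before/after
# change in the outcome rate by the change in the fraction receiving
# screening, then combine across locations. Invented numbers; the weighting
# scheme (size of the availability change) is also an assumption.

locations = [
    # deaths per 100,000 before/after, fraction screened before/after
    {"rate_before": 60.0, "rate_after": 51.0, "f_before": 0.05, "f_after": 0.55},
    {"rate_before": 55.0, "rate_after": 49.0, "f_before": 0.10, "f_after": 0.60},
]

def efficacy(loc):
    """Change in outcome rate per unit change in receipt of screening."""
    return (loc["rate_after"] - loc["rate_before"]) / (loc["f_after"] - loc["f_before"])

weights = [loc["f_after"] - loc["f_before"] for loc in locations]
combined = sum(w * efficacy(l) for w, l in zip(weights, locations)) / sum(weights)
print(f"combined efficacy estimate: {combined:.1f} deaths per 100,000")
```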
5.  Measuring the Population Burden of Injuries—Implications for Global and National Estimates: A Multi-centre Prospective UK Longitudinal Study 
PLoS Medicine  2011;8(12):e1001140.
Ronan Lyons and colleagues compare the population burden of injuries using different approaches from the UK Burden of Injury and Global Burden of Disease studies and find that the absolute UK burden of injury is higher than previously estimated.
Background
Current methods of measuring the population burden of injuries rely on many assumptions and limited data available to the global burden of diseases (GBD) studies. The aim of this study was to compare the population burden of injuries using different approaches from the UK Burden of Injury (UKBOI) and GBD studies.
Methods and Findings
The UKBOI was a prospective cohort of 1,517 injured individuals that collected patient-reported outcomes. Extrapolated outcome data were combined with multiple sources of morbidity and mortality data to derive population metrics of the burden of injury in the UK. Participants were injured patients recruited from hospitals in four UK cities and towns: Swansea, Nottingham, Bristol, and Guildford, between September 2005 and April 2007. Patient-reported changes in quality of life using the EQ-5D at baseline, 1, 4, and 12 months after injury provided disability weights used to calculate the years lived with disability (YLDs) component of disability adjusted life years (DALYs). DALYs were calculated for the UK and extrapolated to global estimates using both UKBOI and GBD disability weights. Estimated numbers (and rates per 100,000) for UK population extrapolations were 750,999 (1,240) for hospital admissions, 7,982,947 (13,339) for emergency department (ED) attendances, and 22,185 (36.8) for injury-related deaths in 2005. Nonadmitted ED-treated injuries accounted for 67% of YLDs. Estimates for UK DALYs amounted to 1,771,486 (82% due to YLDs), compared with 669,822 (52% due to YLDs) using the GBD approach. Extrapolating patient-derived disability weights to GBD estimates would increase injury-related DALYs 2.6-fold.
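The DALY bookkeeping described here follows the standard identity DALYs = YLL + YLD, where YLD is cases multiplied by a disability weight and a duration. A minimal sketch, reusing the UK counts quoted above but with invented disability weights, durations, and life expectancy purely for illustration:

```python
# DALY arithmetic: DALYs = YLL (years of life lost) + YLD (years lived with
# disability = cases x disability weight x duration). The counts are those
# quoted in the abstract; the weights, durations, and life expectancy at
# death are invented placeholders.

def yll(deaths, life_expectancy_at_death):
    return deaths * life_expectancy_at_death

def yld(cases, disability_weight, duration_years):
    return cases * disability_weight * duration_years

total_yll = yll(deaths=22_185, life_expectancy_at_death=35.0)            # assumed
total_yld = (
    yld(cases=750_999, disability_weight=0.20, duration_years=1.00)      # admissions
    + yld(cases=7_982_947, disability_weight=0.05, duration_years=0.25)  # ED-treated
)
total = total_yll + total_yld
print(f"DALYs = {total:,.0f} ({total_yld / total:.0%} due to YLDs)")
```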
Conclusions
The use of disability weights derived from patient experiences combined with additional morbidity data on ED-treated patients and inpatients suggests that the absolute burden of injury is higher than previously estimated. These findings have substantial implications for improving measurement of the national and global burden of injury.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Injuries—resulting from traffic collisions, drowning, poisoning, falls or burns, and violence from assault, self-inflicted violence, or acts of war—kill more than 5 million people worldwide every year and cause harm to millions more. Injuries account for at least 9% of global mortality and are a threat to health in every country of the world. Furthermore, for every injury-related death, dozens of injured people are admitted to hospitals, hundreds visit emergency rooms, and thousands go to see their doctors by appointment. A large proportion of people surviving their injuries will be left with temporary or permanent disabilities.
The Global Burden of Diseases, Injuries and Risk Factors (GBD) Studies are instrumental in quantifying the burden of injuries placed on society and are essential for the public health response, priority setting, and policy development. Central to the GBD methodology is the concept of the disability adjusted life year (DALY), a combination of premature mortality (years of life lost) and years lived with disability. However, rather than evidence and measurements, the GBD Study used panel studies and expert opinion to estimate the weights and durations of disability. Therefore, although the GBD has been a major development, it may have underestimated the population burden.
Why Was This Study Done?
Accurate measurement of the burden of injuries is essential to ensure adequate policy responses to prevention and treatment. In this study, the researchers aimed to overcome the limitations of previous studies and for the first time, measured the population burden of injuries in the UK using a combination of disability and morbidity metrics, including years of life lost, and years lived with disabilities.
What Did the Researchers Do and Find?
The researchers recruited patients aged over 5 years with a wide range of injuries (including fractures and dislocations, lacerations, bruises and abrasions, sprains, burns and scalds, and head, eye, thorax, and abdominal injuries) from hospitals in four UK cities and towns—Swansea, Nottingham, Bristol, and Guildford—between September 2005 and April 2007. The researchers collected data on injury-related mortality, hospital admissions, and attendances to emergency rooms. They also invited patients (or their proxy, if participants were young children) to complete a self-administered questionnaire at recruitment and at 1, 4, and 12 months postinjury to allow data collection on injury characteristics, use of health and social services, time off work, and recovery from injury, in addition to sociodemographic and economic and occupational characteristics. The researchers also used standardized tools to measure health-related quality of life and work problems. The researchers then used these patient-reported changes to calculate DALYs for the UK and extrapolated these results to produce global estimates.
In the four study sites, a total of 1,517 injured people (median age of 37.4 years and 53.9% male) participated in the study. The researchers found that the vast majority of injuries were unintentional and that the home was the most frequent location of injury. Using the data and information collected from the questionnaires, the researchers extrapolated their results and found that in 2005, there were an estimated 750,999 injury-related hospital admissions, 7,982,947 emergency room attendances, and 22,185 injury-related deaths, translating to a rate per 100,000 of 1,240, 13,339, and 36.8, respectively. The researchers estimated UK DALYs related to injury to be 1,771,486 compared with 669,822 using the GBD approach. Furthermore, the researchers found that extrapolating patient-derived disability weights to GBD estimates would increase injury-related DALYs 2.6-fold.
What Do These Findings Mean?
The findings of this study suggest that, when using data and information derived from patient experiences, combined with additional morbidity data on patients treated in emergency rooms and those admitted to hospital, the absolute burden of injury is higher than previously estimated. While this study was carried out in the UK, the principal findings are relevant to other countries. However, measurement of the population burden of injuries requires access to high quality data, which may be difficult in less affluent countries, and these data rely on access to health facilities, which is often restricted in resource-limited settings. Despite these concerns, these findings have substantial implications for improving measurements of the national and global burden of injury.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001140.
The World Health Organization website provides detailed information about injuries and also details the work of the Global Burden of Disease Study
The Global Burden of Injury's website is a portal to websites run by groups conducting ongoing research into the measurement of global injury metrics
doi:10.1371/journal.pmed.1001140
PMCID: PMC3232198  PMID: 22162954
6.  Comparing treatments via the propensity score: stratification or modeling? 
In observational studies of treatments or interventions, propensity score (PS) adjustment is often useful for controlling bias in estimation of treatment effects. Regression on PS is used most often and can be highly efficient, but it can lead to biased results when model assumptions are violated. The validity of stratification on PS depends on fewer model assumptions, but this approach is less efficient than regression adjustment when the regression assumptions hold. To investigate these issues, we compare stratification and regression adjustments in a Monte Carlo simulation study. We consider two stratification approaches: equal frequency strata and an approach that attempts to choose strata that minimize the mean squared error (MSE) of the treatment effect estimate. The regression approach that we consider is a Generalized Additive Model (GAM) that estimates treatment effect controlling for a potentially nonlinear association between PS and outcome. We find that under a wide range of plausible data generating distributions the GAM approach outperforms stratification in treatment effect estimation with respect to bias, variance, and thereby MSE. We illustrate each approach in an analysis of insurance plan choice and its relation to satisfaction with asthma care.
doi:10.1007/s10742-012-0080-3
PMCID: PMC4238307  PMID: 25419169
Propensity score; Generalized Additive Model; Observational study; Optimal stratification; Causal inference; Nonlinear modeling
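The two adjustment strategies compared above are easy to see side by side in code. A minimal sketch on simulated data, assuming scikit-learn and statsmodels are available and using a regression spline on the propensity score in place of the paper's GAM smooth:

```python
# Sketch contrasting two propensity-score adjustments on simulated data:
# (a) stratification into five equal-frequency PS strata, and (b) regression
# of the outcome on treatment plus a flexible function of the PS (a spline
# here, standing in for a GAM smooth). True treatment effect is 2.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 3))
p_treat = 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1])))
t = rng.binomial(1, p_treat)
y = 2.0 * t + x[:, 0] ** 2 + x[:, 1] + rng.normal(size=n)

ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
df = pd.DataFrame({"y": y, "t": t, "ps": ps})

# (a) Five equal-frequency strata: average the within-stratum differences.
df["stratum"] = pd.qcut(df["ps"], 5, labels=False)
strat_est = df.groupby("stratum").apply(
    lambda g: g.loc[g.t == 1, "y"].mean() - g.loc[g.t == 0, "y"].mean()
).mean()

# (b) Spline regression on the PS (GAM-style adjustment).
gam_est = smf.ols("y ~ t + bs(ps, df=5)", data=df).fit().params["t"]
print(f"stratification: {strat_est:.2f}, spline regression: {gam_est:.2f}")
```

In this toy setup both estimates land near the true effect of 2, but the coarse five-stratum average carries more residual confounding within strata, which is the kind of bias/variance trade-off the paper quantifies.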
7.  Using latent outcome trajectory classes in causal inference* 
Statistics and its interface  2009;2(4):403-412.
In longitudinal studies, outcome trajectories can provide important information about substantively and clinically meaningful underlying subpopulations who may also respond differently to treatments or interventions. Growth mixture analysis is an efficient way of identifying heterogeneous trajectory classes. However, given its exploratory nature, it is unclear how involvement of latent classes should be handled in the analysis when estimating causal treatment effects. In this paper, we propose a 2-step approach, where formulation of trajectory strata and identification of causal effects are separated. In Step 1, we stratify individuals in one of the assignment conditions (reference condition) into trajectory strata on the basis of growth mixture analysis. In Step 2, we estimate treatment effects for different trajectory strata, treating the stratum membership as partly known (known for individuals assigned to the reference condition and missing for the rest). The results can be interpreted as how subpopulations that differ in terms of outcome prognosis under one treatment condition would change their prognosis differently when exposed to another treatment condition. Causal effect estimation in Step 2 is consistent with that in the principal stratification approach (Frangakis and Rubin, 2002) in the sense that clarified identifying assumptions can be employed and therefore systematic sensitivity analyses are possible. Longitudinal development of attention deficit among children from the Johns Hopkins School Intervention Trial (Ialongo et al., 1999) will be presented as an example.
PMCID: PMC2863041  PMID: 20445809
Causal inference; Latent trajectory class; Longitudinal outcome prognosis; Growth mixture modeling; Principal stratification; Reference stratification
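To make the 2-step approach concrete, here is a deliberately simplified sketch: scikit-learn's GaussianMixture stands in for a growth mixture model, strata are learned from the reference (control) arm only, and, unlike the paper (which treats stratum membership as missing for the other arm), individuals are simply assigned to their most likely stratum. Data and class structure are invented.

```python
# Rough sketch of the 2-step idea: (1) fit a mixture model to outcome
# trajectories in the reference arm to form trajectory strata; (2) contrast
# arms within strata. GaussianMixture approximates a growth mixture model;
# the hard assignment in step 2 is a simplification of the paper's
# missing-data treatment of stratum membership.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
n, timepoints = 400, 4
arm = rng.binomial(1, 0.5, size=n)
# Two latent classes: flat vs. worsening; treatment helps the worsening class.
latent = rng.binomial(1, 0.4, size=n)
slopes = np.where(latent == 1, 0.8, 0.0) - 0.5 * arm * latent
traj = slopes[:, None] * np.arange(timepoints) + rng.normal(scale=0.5, size=(n, timepoints))

# Step 1: learn trajectory strata from the reference (control) arm only.
gm = GaussianMixture(n_components=2, random_state=0).fit(traj[arm == 0])

# Step 2: assign strata and contrast arms within each stratum.
stratum = gm.predict(traj)
for s in range(2):
    d = traj[(stratum == s) & (arm == 1), -1].mean() - traj[(stratum == s) & (arm == 0), -1].mean()
    print(f"stratum {s}: treated-minus-control endpoint difference = {d:.2f}")
```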
8.  Causal Inference for Vaccine Effects on Infectiousness 
The International Journal of Biostatistics  2012;8(2).
If a vaccine does not protect individuals completely against infection, it could still reduce infectiousness of infected vaccinated individuals to others. Typically, vaccine efficacy for infectiousness is estimated based on contrasts between the transmission risk to susceptible individuals from infected vaccinated individuals compared with that from infected unvaccinated individuals. Such estimates are problematic, however, because they are subject to selection bias and do not have a causal interpretation. Here, we develop causal estimands for vaccine efficacy for infectiousness for four different scenarios of populations of transmission units of size two. These causal estimands incorporate both principal stratification, based on the joint potential infection outcomes under vaccine and control, and interference between individuals within transmission units. In the most general scenario, both individuals can be exposed to infection outside the transmission unit and both can be assigned either vaccine or control. The three other scenarios are special cases of the general scenario where only one individual is exposed outside the transmission unit or can be assigned vaccine. The causal estimands for vaccine efficacy for infectiousness are well defined only within certain principal strata and, in general, are identifiable only with strong unverifiable assumptions. Nonetheless, the observed data do provide some information, and we derive large sample bounds on the causal vaccine efficacy for infectiousness estimands. An example of the type of data observed in a study to estimate vaccine efficacy for infectiousness is analyzed in the causal inference framework we developed.
doi:10.2202/1557-4679.1354
PMCID: PMC3348179  PMID: 22499732
causal inference; principal stratification; interference; infectious disease; vaccine
9.  The International Community-Acquired Pneumonia (CAP) Collaboration Cohort (ICCC) study: rationale, design and description of study cohorts and patients 
BMJ Open  2012;2(3):e001030.
Objective
To improve the understanding of the determinants of prognosis and accurate risk stratification in community-acquired pneumonia (CAP).
Design
Multicentre collaboration of prospective cohorts.
Setting
6 cohorts from the USA, Canada, Hong Kong and Spain.
Participants
From a published meta-analysis of risk stratification studies in CAP, the authors identified and pooled individual patient-level data from six prospective cohort studies of CAP (three from the USA, one each from Canada, Hong Kong and Spain) to create the International CAP Collaboration Cohort. The initial essential inclusion criteria of the meta-analysis were (1) prospective design, (2) publication in English, (3) reporting of 30-day mortality and transfer to intensive or high-dependency care and (4) a minimum of 1000 participants. Common baseline patient characteristics included demographics, history and physical examination findings, comorbidities and laboratory and radiographic findings.
Primary and secondary outcome measures
This paper reports the rationale, hypotheses and analytical framework and also describes study cohorts and patients. The authors aim to (1) compare the prognostic accuracy of existing CAP risk stratification tools, (2) assess patient-level determinants of prognosis, (3) improve risk stratification by combined use of scoring systems and (4) understand prognostic factors for specific patient groups.
Results
The six cohorts assembled from 1991 to 2007 included 13 784 patients (median age 71 years, 54% men). Aside from one randomised controlled study, the remaining five were cohort studies, but all had similar inclusion criteria. Overall, there was 0%–6% missing data. A total of 6159 (44%) had severe pneumonia by Pneumonia Severity Index class IV/V. Mortality at 30 days was 8% (1036). Admission to intensive care or high dependency unit was also 8% (1059).
Conclusions
The International CAP Collaboration Cohort provides a pooled multicentre data set of patients with CAP, which will help us to better understand the prognosis of CAP.
Article summary
Article focus
This paper reports the rationale, hypotheses and analytical framework and also describes study cohorts and patients. We aim to
compare the prognostic accuracy of existing CAP risk stratification tools;
assess patient-level determinants of prognosis;
improve risk stratification by combined use of scoring systems;
understand prognostic factors for specific patient groups.
Key messages
The International CAP Collaboration Cohort (ICCC) as described in this report will provide a better understanding of the determinants of outcomes in CAP. Examples include comparison of commonly and less commonly known CAP severity scoring systems and identification of the characteristics of CAP patients with poor outcomes (30-day mortality) despite being classified as non-severe by a severity score.
In view of its large sample size, the ICCC cohort will be able to identify determinants of outcomes in patient groups with specific conditions, such as cardiovascular and respiratory diseases, taking into account case mix and individual prognostic indicators.
The ICCC cohort will be of benefit to the CAP research community and help define a future agenda for research, as well as helping clinicians make better clinical decisions for patients with CAP.
Strengths and limitations of this study
The ICCC is a multicentre/multiethnic cohort in which all collaborating groups defined pneumonia based on clinical features and the presence of CXR evidence of pneumonia. The major strengths of the ICCC are its prospective study design, the inclusion of CAP patients spanning a wide range of ages, ethnicities and healthcare settings, and its large sample size. A potential area of improvement in the assessment of CAP is the identification of at-risk patients with pneumonia who were initially assessed as non-severe CAP; with its large sample size, the ICCC may provide an opportunity to identify the characteristics of such individuals. Based on this work, risk assessment may be applied at more than one point in time in order to observe temporal trends in recovery or deterioration in future CAP research and clinical practice.
There were multiple observers and data collections across several centres. However, all cohorts followed the strict criteria in data collection described in table 1. Furthermore, the data collected were objective measures such as age and urea level, limiting potential observer bias. The process of care between hospitals may be variable: there may be variation in clinical management between different hospitals and between the healthcare settings of the various countries, such as important differences in antibiotic use, patterns of infective micro-organisms, care protocols and treatment guidelines. Other limitations to consider are biomarkers, healthcare provider and site characteristics. The patients were enrolled into the study at different time periods; however, this presents an opportunity to compare and contrast different healthcare systems to better understand the variation in healthcare settings and outcomes. Since all six studies used the Pneumonia Severity Index (PSI) for risk stratification, this has implications: for example, patients who scored non-severe at initial assessment (low PSI) but went on to have worse outcomes are under-represented if such patients were sent home. This could contribute to attenuation of estimates in the low PSI group. Nevertheless, it is possible that these patients would have presented again to the medical centre if/when deterioration occurred. Cohorts that only had data on CURB-related variables and cohorts with smaller sample sizes were not included in the ICCC, which may introduce some degree of selection bias. Nevertheless, this should not affect the internal relationship between the predictors and outcomes of interest.
doi:10.1136/bmjopen-2012-001030
PMCID: PMC3358618  PMID: 22614174
10.  Principal Stratification in Causal Inference 
Biometrics  2002;58(1):21-29.
Summary
Many scientific problems require that treatment comparisons be adjusted for posttreatment variables, but the estimands underlying standard methods are not causal effects. To address this deficiency, we propose a general framework for comparing treatments adjusting for posttreatment variables that yields principal effects based on principal stratification. Principal stratification with respect to a posttreatment variable is a cross-classification of subjects defined by the joint potential values of that posttreatment variable under each of the treatments being compared. Principal effects are causal effects within a principal stratum. The key property of principal strata is that they are not affected by treatment assignment and therefore can be used just as any pretreatment covariate, such as age category. As a result, the central property of our principal effects is that they are always causal effects and do not suffer from the complications of standard posttreatment-adjusted estimands. We discuss briefly that such principal causal effects are the link between three recent applications with adjustment for posttreatment variables: (i) treatment noncompliance, (ii) missing outcomes (dropout) following treatment noncompliance, and (iii) censoring by death. We then attack the problem of surrogate or biomarker endpoints, where we show, using principal causal effects, that all current definitions of surrogacy, even when perfectly true, do not generally have the desired interpretation as causal effects of treatment on outcome. We go on to formulate estimands based on principal stratification and principal causal effects and show their superiority.
PMCID: PMC4137767  PMID: 11890317
Biomarker; Causal inference; Censoring by death; Missing data; Posttreatment variable; Principal stratification; Quality of life; Rubin causal model; Surrogate
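The definition above is easy to make concrete with a binary posttreatment variable S: the principal strata are the joint potential values (S(0), S(1)). A minimal simulation sketch, assuming monotonicity so the (1, 0) stratum is empty, in which potential values are fully known purely for exposition:

```python
# With a binary posttreatment variable S, principal strata are the joint
# potential values (S(0), S(1)); membership does not depend on the treatment
# actually assigned, so strata behave like pretreatment covariates. Here we
# simulate fully known potential values to display the stratum-specific
# (principal) causal effects. Stratum names and numbers are invented.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
s0 = rng.binomial(1, 0.3, size=n)                       # S under control
s1 = np.maximum(s0, rng.binomial(1, 0.4, size=n))       # S under treatment, S(1) >= S(0)
strata = {(0, 0): "never", (0, 1): "responsive", (1, 1): "always"}

# Potential outcomes with a stratum-specific treatment effect.
effect = np.where((s0 == 0) & (s1 == 1), 2.0, 0.5)
y0 = rng.normal(size=n)
y1 = y0 + effect

for (a, b), name in strata.items():
    mask = (s0 == a) & (s1 == b)
    print(f"{name:10s} stratum: principal effect = {(y1 - y0)[mask].mean():.2f}")
```

In real data only one of S(0), S(1) is observed per subject, which is why identification requires the assumptions and sensitivity analyses discussed in the other entries above.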
11.  Development and mapping of Simple Sequence Repeat markers for pearl millet from data mining of Expressed Sequence Tags 
BMC Plant Biology  2008;8:119.
Background
Pearl millet [Pennisetum glaucum (L.) R. Br.] is a staple food and fodder crop of marginal agricultural lands of sub-Saharan Africa and the Indian subcontinent. It is also a summer forage crop in the southern USA, Australia and Latin America, and is the preferred mulch in Brazilian no-till soybean production systems. Use of molecular marker technology for pearl millet genetic improvement has been limited. Progress is hampered by insufficient numbers of PCR-compatible co-dominant markers that can be used readily in applied breeding programmes. Therefore, we sought to develop additional SSR markers for the pearl millet research community.
Results
A set of new pearl millet SSR markers was developed using available sequence information from 3520 expressed sequence tags (ESTs). After clustering, unigene sequences (2175 singlets and 317 contigs) were searched for the presence of SSRs. We detected 164 sequences containing SSRs (at least 14 bases in length), with a density of one per 1.75 kb of EST sequence. Di-nucleotide repeats were the most abundant, followed by tri-nucleotide repeats. Ninety primer pairs were designed and tested for their ability to detect polymorphism across a panel of 11 pairs of pearl millet mapping population parental lines. Clear amplification products were obtained for 58 primer pairs. Of these, 15 were monomorphic across the panel. A subset of 21 polymorphic EST-SSRs and 6 recently developed genomic SSR markers were mapped using existing mapping populations. Linkage map positions of these EST-SSRs were compared by homology search with mapped rice genomic sequences on the basis of pearl millet-rice synteny. Most new EST-SSR markers mapped to distal regions of linkage groups, often to previous gaps in these linkage maps. These new EST-SSRs are now used by ICRISAT in pearl millet diversity assessment and marker-aided breeding programs.
Conclusion
This study has demonstrated the potential of EST-derived SSR primer pairs in pearl millet. As reported for other crops, EST-derived SSRs provide a cost-saving marker development option in pearl millet. Resources developed in this study have added a sizeable number of useful SSRs to the existing repertoire of circa 100 genomic SSRs that were previously available to pearl millet researchers.
doi:10.1186/1471-2229-8-119
PMCID: PMC2632669  PMID: 19038016
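The SSR-mining step described above (scanning unigene sequences for di- and tri-nucleotide repeats of at least 14 bases) can be sketched with a simple regular-expression scan. This is a toy version: real pipelines also handle compound and imperfect repeats, which this does not.

```python
# Toy SSR finder: perfect di-/tri-nucleotide repeats spanning >= 14 bases.
import re

MIN_LEN = 14  # minimum SSR length in bases, as in the study

def find_ssrs(seq, motif_sizes=(2, 3)):
    """Return (position, repeat) pairs for perfect repeats of each motif size."""
    hits = []
    for k in motif_sizes:
        reps = -(-MIN_LEN // k)  # repeats needed to reach MIN_LEN (ceiling)
        # Lookahead so overlapping candidate repeats are all reported.
        pattern = rf"(?=(([ACGT]{{{k}}})\2{{{reps - 1},}}))"
        hits += [(m.start(), m.group(1)) for m in re.finditer(pattern, seq)]
    return hits

print(find_ssrs("TTAGAGAGAGAGAGAGCCTTCACACACACACACACAGG"))
```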
12.  Performance of stroke risk scores in older people with atrial fibrillation not taking warfarin: comparative cohort study from BAFTA trial 
Objective To compare the predictive power of the main existing and recently proposed schemes for stratification of risk of stroke in older patients with atrial fibrillation.
Design Comparative cohort study of eight risk stratification scores.
Setting Trial of thromboprophylaxis in stroke, the Birmingham Atrial Fibrillation in the Aged (BAFTA) trial.
Participants 665 patients aged 75 or over with atrial fibrillation based in the community who were randomised to the BAFTA trial and were not taking warfarin throughout or for part of the study period.
Main outcome measures Event rates of stroke and thromboembolism.
Results 54 (8%) patients had an ischaemic stroke, four (0.6%) had a systemic embolism, and 13 (2%) had a transient ischaemic attack. The distribution of patients classified into the three risk categories (low, moderate, high) was similar across three of the risk stratification scores (revised CHADS2, NICE, ACC/AHA/ESC), with most patients categorised as high risk (65-69%, n=460-457) and the remaining classified as moderate risk. The original CHADS2 (Congestive heart failure, Hypertension, Age ≥75 years, Diabetes, previous Stroke) score identified the lowest number as high risk (27%, n=180). The incremental risk scores of CHADS2, Rietbrock modified CHADS2, and CHA2DS2-VASc (CHA2DS2-Vascular disease, Age 65-74 years, Sex) failed to show an increase in risk at the upper range of scores. The predictive accuracy was similar across the tested schemes with C statistic ranging from 0.55 (original CHADS2) to 0.62 (Rietbrock modified CHADS2), with all except the original CHADS2 predicting better than chance. Bootstrapped paired comparisons provided no evidence of significant differences between the discriminatory ability of the schemes.
Conclusions Based on this single trial population, current risk stratification schemes in older people with atrial fibrillation have only limited ability to predict the risk of stroke. Given the systematic undertreatment of older people with anticoagulation, and the relative safety of warfarin versus aspirin in those aged over 70, there could be a pragmatic rationale for classifying all patients over 75 as “high risk” until better tools are available.
doi:10.1136/bmj.d3653
PMCID: PMC3121229  PMID: 21700651
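The scores compared above are additive checklists. As a point of reference, here is a sketch of the original CHADS2 calculation (1 point each for congestive heart failure, hypertension, age ≥75, and diabetes; 2 points for prior stroke or TIA); the low/moderate/high banding shown is the conventional trichotomy, and the schemes compared in the paper draw these lines differently.

```python
# Original CHADS2: C, H, A>=75, D score 1 point each; prior Stroke/TIA
# scores 2 (maximum 6). Banding (0 low, 1-2 moderate, >=3 high) is the
# conventional trichotomy, not specific to this paper.

def chads2(chf, hypertension, age, diabetes, prior_stroke_or_tia):
    score = int(chf) + int(hypertension) + int(age >= 75) + int(diabetes)
    return score + 2 * int(prior_stroke_or_tia)

def risk_band(score):
    return "low" if score == 0 else "moderate" if score <= 2 else "high"

# An 80-year-old with hypertension and diabetes, no CHF or prior stroke/TIA.
s = chads2(chf=False, hypertension=True, age=80, diabetes=True,
           prior_stroke_or_tia=False)
print(s, risk_band(s))  # 3 high
```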
13.  Stratified Sampling Design Based on Data Mining 
Healthcare Informatics Research  2013;19(3):186-195.
Objectives
To explore classification rules, derived with data mining methodologies, for defining strata in stratified sampling of healthcare providers with improved sampling efficiency.
Methods
We performed k-means clustering to group providers with similar characteristics and then constructed decision trees on the cluster labels to generate stratification rules. We assessed the variance explained by the stratification proposed in this study and by conventional stratification to evaluate the performance of the sampling design. We constructed a study database from health insurance claims data and providers' profile data made available to this study by the Health Insurance Review and Assessment Service of South Korea, and population data from Statistics Korea. From our database, we used the data for single specialty clinics or hospitals in two specialties, general surgery and ophthalmology, for the year 2011.
Results
Data mining resulted in five strata in general surgery with two stratification variables, the number of inpatients per specialist and population density of provider location, and five strata in ophthalmology with two stratification variables, the number of inpatients per specialist and number of beds. The percentages of variance in annual changes in the productivity of specialists explained by the stratification in general surgery and ophthalmology were 22% and 8%, respectively, whereas conventional stratification by the type of provider location and number of beds explained 2% and 0.2% of variance, respectively.
Conclusions
This study demonstrated that data mining methods can be used in designing efficient stratified sampling with variables readily available to the insurer and government; it offers an alternative to the existing stratification method that is widely used in healthcare provider surveys in South Korea.
doi:10.4258/hir.2013.19.3.186
PMCID: PMC3810526  PMID: 24175117
Sampling Studies; Decision Trees; Data Mining
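The two-stage design (cluster providers, then express the clusters as explicit rules) is straightforward to sketch with scikit-learn. The variable names below echo the paper's general surgery stratifiers, but the data are simulated.

```python
# Stage 1: k-means clustering on provider characteristics.
# Stage 2: a shallow decision tree fitted to the cluster labels, so the
# strata become auditable threshold rules. Simulated data; the two
# features mirror the stratification variables reported for general surgery.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
X = np.column_stack([
    rng.lognormal(3, 1, 500),   # inpatients per specialist (simulated)
    rng.lognormal(7, 1, 500),   # population density of provider location (simulated)
])

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)
print(export_text(tree, feature_names=["inpatients_per_specialist", "pop_density"]))
```

The printed tree is the stratification rule set: each leaf is a stratum defined by cut-offs on the two variables, which is what makes the design usable by survey planners who cannot rerun the clustering.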
14.  Toxoplasma gondii Infection in Kyrgyzstan: Seroprevalence, Risk Factor Analysis, and Estimate of Congenital and AIDS-Related Toxoplasmosis 
Background
HIV prevalence, as well as the incidence of zoonotic parasitic diseases such as cystic echinococcosis, has increased in the Kyrgyz Republic due to fundamental socio-economic changes after the breakdown of the Soviet Union. The possible impact on morbidity and mortality caused by Toxoplasma gondii infection in congenital toxoplasmosis or as an opportunistic infection in the emerging AIDS pandemic has not been reported from Kyrgyzstan.
Methodology/Principal Findings
We screened 1,061 rural and 899 urban people to determine the seroprevalence of T. gondii infection in 2 representative but epidemiologically distinct populations in Kyrgyzstan. The rural population was from a typical agricultural district where sheep husbandry is a major occupation. The urban population was selected in collaboration with several diagnostic laboratories in Bishkek, the largest city in Kyrgyzstan. We designed a questionnaire that was used on all rural subjects so that a risk-factor analysis could be undertaken. The samples from the urban population were anonymous, and only data with regard to age and gender were available. Estimates of putative cases of congenital and AIDS-related toxoplasmosis in the whole country were made from the results of the serology. Specific antibodies (IgG) against Triton X-100 extracted antigens of T. gondii tachyzoites from in vitro cultures were determined by ELISA. Overall seroprevalence of infection with T. gondii in people living in rural vs. urban areas was 6.2% (95%CI: 4.8–7.8) (adjusted seroprevalence based on census figures 5.1%, 95% CI 3.9–6.5) and 19.0% (95%CI: 16.5–21.7) (adjusted 16.4%, 95% CI 14.1–19.3), respectively, without significant gender-specific differences. The seroprevalence increased with age. Independently, low social status increased the risk of Toxoplasma seropositivity, while increasing numbers of sheep owned decreased the risk of seropositivity. Water supply, consumption of unpasteurized milk products or undercooked meat, as well as cat ownership, had no significant influence on the risk for seropositivity.
Conclusions
We present a first seroprevalence analysis for human T. gondii infection in the Kyrgyz Republic. Based on these data we estimate that 173 (95% CI 136–216) Kyrgyz children will be born annually to mothers who seroconverted to toxoplasmosis during pregnancy. In addition, between 350 and 1,000 HIV-infected persons are currently estimated to be seropositive for toxoplasmosis. Taken together, this suggests a substantial impact of congenital and AIDS-related symptomatic toxoplasmosis on morbidity and mortality in Kyrgyzstan.
Author Summary
A serological study on toxoplasmosis was undertaken in rural and urban populations in Kyrgyzstan. The observed seroprevalence was adjusted because of differences between the age and gender stratifications of the study group and population census figures. This gave an estimated seroprevalence in the rural and urban populations of 5.1% and 16.4%, respectively. In our analysis we determined the risk factors for infection in the rural population to be age, low social status and a low number of sheep owned. While the seroprevalence in this rural population was relatively low, the seroprevalence found in the urban population of Bishkek correlated better with international data. Extrapolating from our data, about 173 seroconversions during pregnancy may be expected annually in Kyrgyzstan. In addition, considering a prevalence of HIV-Toxoplasma co-infection between 7/100,000 (official HIV-prevalence data) and 19.4/100,000 (UNAIDS estimates), 350–1,000 people are at risk for AIDS-related toxoplasmosis. Therefore, in the face of the rising prevalence of HIV infection, education of medical personnel on the treatment and prevention of toxoplasmosis is recommended.
doi:10.1371/journal.pntd.0002043
PMCID: PMC3566989  PMID: 23409201
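The census adjustment mentioned in both summaries is a direct standardization: stratum-specific sample prevalences re-weighted by the population's stratum shares. A toy sketch with invented strata and counts:

```python
# Direct standardization of a survey prevalence to census stratum shares.
# Strata, counts, and shares below are invented for illustration.
sample = {            # stratum: (positives, tested) in the survey
    "15-29": (10, 400),
    "30-49": (25, 400),
    "50+":   (30, 261),
}
census_share = {"15-29": 0.45, "30-49": 0.35, "50+": 0.20}  # invented shares

adjusted = sum(census_share[s] * pos / n for s, (pos, n) in sample.items())
crude = sum(p for p, _ in sample.values()) / sum(n for _, n in sample.values())
print(f"crude: {crude:.1%}, census-adjusted: {adjusted:.1%}")
```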
15.  Principal Stratification and Attribution Prohibition: Good Ideas Taken Too Far 
Pearl’s article provides a useful springboard for discussing further the benefits and drawbacks of principal stratification and the associated discomfort with attributing effects to post-treatment variables. The basic insights of the approach are important: pay close attention to modification of treatment effects by variables not observable before treatment decisions are made, and be careful in attributing effects to variables when counterfactuals are ill-defined. These insights have often been taken too far in many areas of application of the approach, including instrumental variables, censoring by death, and surrogate outcomes. A novel finding is that the usual principal stratification estimand in the setting of censoring by death is by itself of little practical value in estimating intervention effects.
doi:10.2202/1557-4679.1367
PMCID: PMC3204670  PMID: 22049269
principal stratification; causal inference
16.  Quantification of Population Structure Using Correlated SNPs by Shrinkage Principal Components 
Human Heredity  2010;70(1):9-22.
Background/Aims
Association studies using unrelated individuals have become the most popular design for mapping complex traits. One of the major challenges of association mapping is avoiding spurious association due to population stratification. Principal component analysis (PCA) on genome-wide marker genotypes is one of the most popular population stratification control methods. It implicitly assumes that the markers are in linkage equilibrium, a condition that is rarely satisfied and that we plan to relax.
Methods
We carefully examined the impact of linkage disequilibrium (LD) on PCA, and proposed a simple modification of the standard PCA to automatically adjust for the correlations among markers.
Results
We demonstrated that LD patterns in genome-wide association datasets can distort the techniques for stratification control, showing ‘subpopulations’ reflecting localized LD phenomena rather than plausible population structure. We showed that the proposed method effectively removes the artifactual effect of LD patterns, and successfully recovers underlying population structure that is not apparent from standard PCA.
Conclusion
PCA is highly influenced by sets of SNPs with high LD, obscuring the true population substructure. Our shrinkage PCA applies to all available markers, regardless of the LD patterns. The proposed method is easier to implement than most existing LD adjusted PCA methods.
doi:10.1159/000288706
PMCID: PMC2912642  PMID: 20413978
PCA; Loadings; GWAS
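For context, the baseline the authors modify is standard PCA on a standardized genotype matrix, whose sensitivity to LD motivates their shrinkage adjustment (not reproduced here). A minimal sketch of that baseline, assuming 0/1/2 dosage coding and binomial variance scaling:

```python
# Standard PCA for stratification control: standardize each SNP column of
# the dosage matrix (mean 2p, variance 2p(1-p) under binomial sampling),
# then take the top components via SVD. The paper's LD shrinkage step is
# its contribution and is omitted here. Simulated genotypes.
import numpy as np

rng = np.random.default_rng(4)
G = rng.binomial(2, 0.3, size=(200, 1000)).astype(float)  # individuals x SNPs

freq = G.mean(axis=0) / 2
Z = (G - 2 * freq) / np.sqrt(2 * freq * (1 - freq))

U, S, _ = np.linalg.svd(Z, full_matrices=False)
top_pcs = U[:, :10] * S[:10]  # include as covariates in association tests
print(top_pcs.shape)
```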
17.  Improving Melanoma Classification by Integrating Genetic and Morphologic Features 
PLoS Medicine  2008;5(6):e120.
Background
In melanoma, morphology-based classification systems have not been able to provide relevant information for selecting treatments for patients whose tumors have metastasized. The recent identification of causative genetic alterations has revealed mutations in signaling pathways that offer targets for therapy. Identifying morphologic surrogates that can identify patients whose tumors express such alterations (or functionally equivalent alterations) would be clinically useful for therapy stratification and for retrospective analysis of clinical trial data.
Methodology/Principal Findings
We defined and assessed a panel of histomorphologic measures and correlated them with the mutation status of the oncogenes BRAF and NRAS in a cohort of 302 archival tissues of primary cutaneous melanomas from an academic comprehensive cancer center. Melanomas with BRAF mutations showed distinct morphological features such as increased upward migration and nest formation of intraepidermal melanocytes, thickening of the involved epidermis, and sharper demarcation to the surrounding skin; and they had larger, rounder, and more pigmented tumor cells (all p-values below 0.0001). By contrast, melanomas with NRAS mutations could not be distinguished based on these morphological features. Using simple combinations of features, BRAF mutation status could be predicted with up to 90.8% accuracy in the entire cohort as well as within the categories of the current World Health Organization (WHO) classification. Among the variables routinely recorded in cancer registries, we identified age < 55 y as the single most predictive factor of BRAF mutation in our cohort. Using age < 55 y as a surrogate for BRAF mutation in an independent cohort of 4,785 patients of the Southern German Tumor Registry, we found a significant survival benefit (p < 0.0001) for patients who, based on their age, were predicted to have BRAF mutant melanomas in 69% of the cases. This group also showed a different pattern of metastasis, more frequently involving regional lymph nodes, compared to the patients predicted to have no BRAF mutation and who more frequently displayed satellite, in-transit metastasis, and visceral metastasis (p < 0.0001).
Conclusions
Refined morphological classification of primary melanomas can be used to improve existing melanoma classifications by forming subgroups that are genetically more homogeneous and likely to differ in important clinical variables such as outcome and pattern of metastasis. We expect this information to improve classification and facilitate stratification for therapy as well as retrospective analysis of existing trial data.
Boris Bastian and colleagues present a refined morphological classification of primary melanomas that can be used to improve existing melanoma classifications by defining genetically homogeneous subgroups.
Editors' Summary
Background.
Skin cancers—the most commonly diagnosed cancers worldwide—are usually caused by exposure to ultraviolet (UV) radiation in sunlight. UV radiation damages the DNA in skin cells and can introduce permanent genetic changes (mutations) into the skin cells that allow them to divide uncontrollably to form a tumor, a disorganized mass of cells. Because there are many different cell types in the skin, there are many types of skin cancer. The most dangerous type—melanoma—develops when genetic changes occur in melanocytes, the cells that produce the skin pigment melanin. Although only 4% of skin cancers are melanomas, 80% of skin cancer deaths are caused by melanomas. The first signs of a melanoma are often a change in the appearance or size of a mole (a pigmented skin blemish that is also called a nevus) or a newly arising pigmented lesion that looks different from the other moles (an “ugly duckling”). If this early sign is noticed and the melanoma is diagnosed before it has spread from the skin into other parts of the body, surgery can sometimes provide a cure. But, for more advanced melanomas, the outlook is generally poor. Although radiation therapy, chemotherapy, or immunotherapy (drugs that stimulate the immune system to kill the cancer cells) can prolong the life expectancy of some patients, these treatments often fail to remove all of the cancer cells.
Why Was This Study Done?
Now, however, scientists have identified some of the genetic alterations that cause melanoma. For example, they know that many melanomas carry mutations in either the BRAF gene or the NRAS gene, and that the proteins made from these mutated genes (“oncogenes”) help cancer cells to grow uncontrollably. The hope is that targeted drugs designed to block the activity of oncogenic BRAF or NRAS might stop the growth of those melanomas that make these altered proteins. But how can the patients with these specific tumors be identified in the clinic? The expression of altered proteins is likely to affect the microscopic growth patterns (“histomorphology”) of melanomas. However, the current histomorphology-based classification system for melanomas, which distinguishes four main types of melanoma, does not help clinicians choose the best treatment for their patients. In this study, the researchers have tried to improve melanoma classification by looking for correlations between histomorphological features and genetic alterations in a large collection of melanomas.
What Did the Researchers Do and Find?
The researchers examined several histomorphological features in more than 300 melanoma samples and used statistical methods to correlate these features with the mutation status of BRAF and NRAS in the tumors. They found that some individual histomorphological features were strongly associated with the BRAF (but not the NRAS) mutation status of the tumors. For example, melanomas with BRAF mutations had more melanocytes in the upper layers of the epidermis (the outermost layer of the skin) than did those without BRAF mutations (melanocytes usually live at the bottom of the epidermis). Then, by combining several individual histomorphological features, the researchers built a model that correctly predicted the BRAF mutation status of more than 90% of the melanomas. They also found that, among the variables routinely recorded in cancer registries, being younger than 55 years old was the single most predictive factor for BRAF mutations. Finally, in another large group of patients with melanoma, the researchers found that those patients predicted to have a BRAF mutation on the basis of their age survived longer than those patients predicted not to have a BRAF mutation using the same criterion.
What Do These Findings Mean?
These findings suggest that an improved classification of melanomas that combines an analysis of known genetic factors with histomorphological features might divide melanomas into subgroups that are likely to differ in terms of their clinical outcome and responses to targeted therapies when they become available. Additional studies are needed to investigate whether the histomorphological features identified here can be readily assessed in clinical settings and whether different observers will agree on the scoring of these features. The classification model defined by the researchers also needs to be validated and refined in independent groups of patients. Nevertheless, these findings represent an important first step toward helping clinicians improve outcomes for patients with melanoma.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050120.
A related PLoS Medicine Research in Translation article is available
The MedlinePlus encyclopedia provides information for patients about melanoma
The US National Cancer Institute provides information for patients and health professionals about melanoma (in English and Spanish)
Cancer Research UK also provides detailed information about the causes, diagnosis, and treatment of melanoma
doi:10.1371/journal.pmed.0050120
PMCID: PMC2408611  PMID: 18532874
18.  The Cost and Impact of Scaling Up Pre-exposure Prophylaxis for HIV Prevention: A Systematic Review of Cost-Effectiveness Modelling Studies 
PLoS Medicine  2013;10(3):e1001401.
Gabriela Gomez and colleagues systematically review cost-effectiveness modeling studies of pre-exposure prophylaxis (PrEP) for preventing HIV transmission and identify the main considerations to address when considering the introduction of PrEP to HIV prevention programs.
Background
Cost-effectiveness studies inform resource allocation, strategy, and policy development. However, due to their complexity, dependence on the assumptions made, and inherent uncertainty, synthesising and generalising the results can be difficult. We assess cost-effectiveness models evaluating the expected health gains and costs of HIV pre-exposure prophylaxis (PrEP) interventions.
Methods and Findings
We conducted a systematic review comparing epidemiological and economic assumptions of cost-effectiveness studies using various modelling approaches. The following databases were searched (until January 2013): PubMed/Medline, ISI Web of Knowledge, Centre for Reviews and Dissemination databases, EconLIT, and region-specific databases. We included modelling studies reporting both cost and expected impact of a PrEP roll-out. We explored five issues: prioritisation strategies, adherence, behaviour change, toxicity, and resistance. Of 961 studies retrieved, 13 were included. Studies modelled populations (heterosexual couples, men who have sex with men, people who inject drugs) in generalised and concentrated epidemics from Southern Africa (including South Africa), Ukraine, USA, and Peru. PrEP was found to have the potential to be a cost-effective addition to HIV prevention programmes in specific settings. The extent of the impact of PrEP depended upon assumptions made concerning cost, epidemic context, programme coverage, prioritisation strategies, and individual-level adherence. Delivery of PrEP to key populations at highest risk of HIV exposure appears the most cost-effective strategy. Limitations of this review include the partial geographical coverage, our inability to perform a meta-analysis, and the paucity of information available exploring trade-offs between early treatment and PrEP.
Conclusions
Our review identifies the main considerations to address in assessing cost-effectiveness analyses of a PrEP intervention—cost, epidemic context, individual adherence level, PrEP programme coverage, and prioritisation strategy. Cost-effectiveness studies indicating where resources can be applied for greatest impact are essential to guide resource allocation decisions; however, the results of such analyses must be considered within the context of the underlying assumptions made.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Every year approximately 2.5 million people are infected with HIV, the virus that causes AIDS. Behavioral strategies like condom use and reduction of sexual partners have been the hallmarks of HIV prevention efforts. However, biological prevention measures have also recently been shown to be effective. These include male circumcision, treatment as prevention (treating HIV-infected people with antiretroviral drugs to reduce transmission), and pre-exposure prophylaxis (PrEP), where people not infected with HIV take antiretroviral drugs to reduce the probability of transmission. Strategies such as PrEP may be a viable prevention measure for couples in long-term relationships where one partner is HIV-positive and the other is HIV-negative (HIV-serodiscordant couples) or for groups at higher risk of HIV infection, such as men who have sex with men and injection drug users.
Why Was This Study Done?
The findings from recent clinical trials that demonstrate PrEP can reduce HIV transmission have led to important policy discussions, and in the US, Southern Africa, and the UK, new clinical guidelines have been developed on the use of PrEP for the prevention of HIV infection. For those countries that are considering whether to introduce PrEP into HIV prevention programs, national policy and decision makers need to determine potential costs and health outcomes. Cost-effectiveness models—mathematical models that simulate the costs and health effects of different interventions—can help inform such decisions. However, the cost-effectiveness estimates that could provide guidance for PrEP programs are dependent on, and limited by, the assumptions included in the models, which can make their findings difficult to generalize. A systematic comparison of published cost-effectiveness models of HIV PrEP interventions would therefore be useful for policy makers who are considering introducing PrEP intervention programs.
What Did the Researchers Do and Find?
The researchers performed a systematic review to identify published cost-effectiveness models that evaluated the health gains and costs of HIV PrEP interventions. Systematic reviews attempt to identify, appraise, and synthesize all the empirical evidence that meets pre-specified eligibility criteria to answer a given research question by using explicit methods aimed at minimizing bias. By searching databases the authors identified 13 published studies that evaluated the impact of PrEP in different populations (heterosexual couples, men who have sex with men, and injection drug users) in different geographic settings, which included Southern Africa, Ukraine, US, and Peru.
The authors identified seven studies that assessed the introduction of PrEP into generalized HIV epidemics in Southern Africa. These studies suggest that PrEP may be a cost-effective intervention to prevent heterosexual transmission. However, the authors note that funding PrEP while other cost-effective HIV prevention methods remain underfunded in this setting may carry high opportunity costs. The authors identified five studies in which PrEP was introduced for concentrated epidemics among men who have sex with men (four studies in the US and one in Peru). These studies suggest that PrEP may have a substantial impact on the HIV epidemic but may not be affordable at current drug prices. The authors also identified a single study that modeled the introduction of PrEP for people who inject drugs in Ukraine, which found PrEP not to be cost-effective.
In all settings, the price of antiretroviral drugs was found to be a limiting factor in the affordability of PrEP programs. Behavioral changes and adherence to PrEP were estimated to have potentially significant impacts on program effectiveness, whereas the emergence of drug resistance or PrEP-related toxicity did not significantly affect cost-effectiveness estimates. Several PrEP prioritization strategies were explored in the included studies, and delivering PrEP to populations at highest risk of HIV exposure was shown to improve cost-effectiveness estimates. However, the extra costs of identifying and engaging with high-risk populations were not taken into consideration. The authors note that the geographic coverage of the identified studies was limited and that the findings are highly dependent on setting, which limits their generalizability.
What Do these Findings Mean?
These findings suggest that PrEP could be a cost-effective tool to reduce new HIV infections in some settings. However, the cost-effectiveness of PrEP depends upon cost, the epidemic context, program coverage and prioritization strategies, participants' adherence to the drug regimen, and PrEP efficacy estimates. These findings will help decision makers quantify and compare the reductions in HIV incidence that could be achieved by implementing a PrEP program.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001401.
The US National Institute of Allergy and Infectious Diseases has information on HIV/AIDS
aidsmap provides basic information about HIV/AIDS, summaries of recent research findings on HIV care and treatment, and has a section on PrEP
Information is available from Avert, an international AIDS charity, on many aspects of HIV/AIDS, including HIV prevention
AVAC Global Advocacy for HIV Prevention provides information on HIV prevention, including PrEP
The US Centers for Disease Control and Prevention also has information on PrEP
The World Health Organization has a page on its WHO-CHOICE criteria for cost-effectiveness
doi:10.1371/journal.pmed.1001401
PMCID: PMC3595225  PMID: 23554579
19.  Clustering by genetic ancestry using genome-wide SNP data 
BMC Genetics  2010;11:108.
Background
Population stratification can cause spurious associations in a genome-wide association study (GWAS); it occurs when differences in the allele frequencies of single nucleotide polymorphisms (SNPs) are due to ancestral differences between cases and controls rather than to the trait of interest. Principal components analysis (PCA) is the established approach to detecting population substructure in genome-wide data and adjusting the genetic association for stratification by including the top principal components in the analysis. An alternative solution is genetic matching of cases and controls, which, however, requires well-defined population strata for the appropriate selection of cases and controls.
Results
We developed a novel algorithm to cluster individuals into groups with similar ancestral backgrounds based on the principal components computed by PCA. We demonstrate the effectiveness of our algorithm in real and simulated data, and show that matching cases and controls using the clusters assigned by the algorithm substantially reduces population stratification bias. Through simulation we show that, in certain situations, the power of our method is higher than that of adjustment for principal components.
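The abstract does not specify the clustering algorithm's internals, so the sketch below uses a generic stand-in (k-means on the top principal components) to illustrate the overall cluster-then-match workflow on simulated genotypes; all sizes and names are illustrative, not the authors' method.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    genotypes = rng.integers(0, 3, size=(500, 1000)).astype(float)  # 0/1/2 allele counts (toy)
    is_case = rng.integers(0, 2, size=500).astype(bool)

    pcs = PCA(n_components=10).fit_transform(genotypes)   # top principal components
    clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pcs)

    # Match each case to an unused control from the same ancestry cluster.
    pairs = []
    for k in np.unique(clusters):
        cases = np.flatnonzero(is_case & (clusters == k))
        controls = list(np.flatnonzero(~is_case & (clusters == k)))
        for c in cases:
            if controls:
                pairs.append((c, controls.pop()))
    print(f"{len(pairs)} matched case-control pairs")

Association tests run on the matched pairs then need no further ancestry covariates, which is the property the Conclusions section highlights.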
Conclusions
In addition to reducing population stratification bias and improving power, matching creates a clean dataset free of population stratification which can then be used to build prediction models without including variables to adjust for ancestry. The cluster assignments also allow for the estimation of genetic heterogeneity by examining cluster specific effects.
doi:10.1186/1471-2156-11-108
PMCID: PMC3018397  PMID: 21143920
20.  Risk Stratification by Self-Measured Home Blood Pressure across Categories of Conventional Blood Pressure: A Participant-Level Meta-Analysis 
PLoS Medicine  2014;11(1):e1001591.
Jan Staessen and colleagues compare the risk of cardiovascular, cardiac, or cerebrovascular events in patients with elevated office blood pressure vs. self-measured home blood pressure.
Background
The Global Burden of Disease Study 2010 reported that hypertension is the leading risk factor worldwide for cardiovascular disease, causing 9.4 million deaths annually. We examined to what extent self-measurement of home blood pressure (HBP) refines risk stratification across increasing categories of conventional blood pressure (CBP).
Methods and Findings
This meta-analysis included 5,008 individuals randomly recruited from five populations (56.6% women; mean age, 57.1 y), none of whom were treated with antihypertensive drugs. In multivariable analyses, hazard ratios (HRs) associated with 10-mm Hg increases in systolic HBP were computed across CBP categories, using the following systolic/diastolic CBP thresholds (in mm Hg): optimal, <120/<80; normal, 120–129/80–84; high-normal, 130–139/85–89; mild hypertension, 140–159/90–99; and severe hypertension, ≥160/≥100.
Over 8.3 y, 522 participants died, and 414, 225, and 194 had cardiovascular, cardiac, and cerebrovascular events, respectively. In participants with optimal or normal CBP, HRs for a composite cardiovascular end point associated with a 10-mm Hg higher systolic HBP were 1.28 (1.01–1.62) and 1.22 (1.00–1.49), respectively. At high-normal CBP and in mild hypertension, the HRs were 1.24 (1.03–1.49) and 1.20 (1.06–1.37), respectively, for all cardiovascular events and 1.33 (1.07–1.65) and 1.30 (1.09–1.56), respectively, for stroke. In severe hypertension, the HRs were not significant (p≥0.20). Among people with optimal, normal, and high-normal CBP, 67 (5.0%), 187 (18.4%), and 315 (30.3%), respectively, had masked hypertension (HBP≥130 mm Hg systolic or ≥85 mm Hg diastolic). Compared to true optimal CBP, masked hypertension was associated with a 2.3-fold (1.5–3.5) higher cardiovascular risk. A limitation was few data from low- and middle-income countries.
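As an illustration of the hazard-ratio computation reported above, here is a hedged sketch using the lifelines package on simulated data; the column names, the covariate set, and all numbers are assumptions for illustration and are not drawn from the IDHOCO database.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(1)
    n = 1000
    df = pd.DataFrame({
        "hbp_sys10": rng.normal(13.0, 1.5, n),  # systolic HBP in units of 10 mm Hg
        "age": rng.normal(57.1, 10.0, n),
        "time": rng.exponential(8.3, n),        # follow-up time in years (toy)
        "event": rng.integers(0, 2, n),         # composite cardiovascular end point
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")
    # exp(coef) for hbp_sys10 is the HR per 10-mm Hg higher systolic HBP,
    # adjusted for the other covariates in the model.
    print(cph.hazard_ratios_["hbp_sys10"])

Fitting such a model separately within each CBP category, as the authors describe, yields the category-specific HRs quoted above.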
Conclusions
HBP substantially refines risk stratification at CBP levels assumed to carry no or only mildly increased risk, in particular in the presence of masked hypertension. Randomized trials could help determine the best use of CBP vs. HBP in guiding BP management. Our study identified a novel indication for HBP, which, in view of its low cost and the increased availability of electronic communication, might be globally applicable, even in remote areas or in low-resource settings.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Globally, hypertension (high blood pressure) is the leading risk factor for cardiovascular disease and is responsible for 9.4 million deaths annually from heart attacks, stroke, and other cardiovascular diseases. Hypertension, which rarely has any symptoms, is diagnosed by measuring blood pressure, the force that blood circulating in the body exerts on the inside of large blood vessels. Blood pressure is highest when the heart is pumping out blood (systolic blood pressure) and lowest when the heart is refilling (diastolic blood pressure). European guidelines define optimal blood pressure as a systolic blood pressure of less than 120 millimeters of mercury (mm Hg) and a diastolic blood pressure of less than 80 mm Hg (a blood pressure of less than 120/80 mm Hg). Normal blood pressure, high-normal blood pressure, and mild hypertension are defined as blood pressures in the ranges 120–129/80–84 mm Hg, 130–139/85–89 mm Hg, and 140–159/90–99 mm Hg, respectively. A blood pressure of more than 160 mm Hg systolic or 100 mm Hg diastolic indicates severe hypertension. Many factors affect blood pressure; overweight people and individuals who eat salty or fatty food are at high risk of developing hypertension. Lifestyle changes and/or antihypertensive drugs can be used to control hypertension.
Why Was This Study Done?
The current guidelines for the diagnosis and management of hypertension recommend risk stratification based on conventionally measured blood pressure (CBP, the average of two consecutive measurements made at a clinic). However, self-measured home blood pressure (HBP) more accurately predicts outcomes because multiple HBP readings are taken and because HBP measurement avoids the “white-coat effect”—some individuals have a raised blood pressure in a clinical setting but not at home. Could risk stratification across increasing categories of CBP be refined through the use of self-measured HBP, particularly at CBP levels assumed to be associated with no or only mildly increased risk? Here, the researchers undertake a participant-level meta-analysis (a study that uses statistical approaches to pool results from individual participants in several independent studies) to answer this question.
What Did the Researchers Do and Find?
The researchers included 5,008 individuals recruited from five populations and enrolled in the International Database of Home Blood Pressure in Relation to Cardiovascular Outcome (IDHOCO) in their meta-analysis. CBP readings were available for all the participants, who measured their HBP using an oscillometric device (an electronic device for measuring blood pressure). The researchers used information on fatal and nonfatal cardiovascular, cardiac, and cerebrovascular (stroke) events to calculate the hazard ratios (HRs, indicators of increased risk) associated with a 10-mm Hg increase in systolic HBP across the standard CBP categories. In participants with optimal CBP, an increase in systolic HBP of 10 mm Hg increased the risk of any cardiovascular event by nearly 30% (an HR of 1.28). Similar HRs were associated with a 10-mm Hg increase in systolic HBP for all cardiovascular events among people with normal and high-normal CBP and with mild hypertension, but for people with severe hypertension, systolic HBP did not significantly add to the prediction of any end point. Among people with optimal, normal, and high-normal CBP, 5.0%, 18.4%, and 30.3%, respectively, had an HBP of 130/85 mm Hg or higher (“masked hypertension,” a higher blood pressure in daily life than in a clinical setting). Finally, compared to individuals with optimal CBP without masked hypertension, individuals with masked hypertension had more than double the risk of cardiovascular disease.
What Do These Findings Mean?
These findings indicate that HBP measurements, particularly in individuals with masked hypertension, refine risk stratification at CBP levels assumed to be associated with no or mildly elevated risk of cardiovascular disease. That is, HBP measurements can improve the prediction of cardiovascular complications or death among individuals with optimal, normal, and high-normal CBP but not among individuals with severe hypertension. Clinical trials are needed to test whether the identification and treatment of masked hypertension leads to a reduction of cardiovascular complications and is cost-effective compared to the current standard of care, which does not include HBP measurements and does not treat people with normal or high-normal CBP. Until then, these findings provide support for including HBP monitoring in primary prevention strategies for cardiovascular disease among individuals at risk for masked hypertension (for example, people with diabetes), and for carrying out HBP monitoring in people with a normal CBP but unexplained signs of hypertensive target organ damage.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001591.
This study is further discussed in a PLOS Medicine Perspective by Mark Caulfield
The US National Heart, Lung, and Blood Institute has patient information about high blood pressure (in English and Spanish) and a guide to lowering high blood pressure that includes personal stories
The American Heart Association provides information on high blood pressure and on cardiovascular diseases (in several languages); it also provides personal stories about dealing with high blood pressure
The UK National Health Service Choices website provides detailed information for patients about hypertension (including a personal story) and about cardiovascular disease
The World Health Organization provides information on cardiovascular disease and controlling blood pressure; its A Global Brief on Hypertension was published on World Health Day 2013
The UK charity Blood Pressure UK provides information about white-coat hypertension and about home blood pressure monitoring
MedlinePlus provides links to further information about high blood pressure, heart disease, and stroke (in English and Spanish)
doi:10.1371/journal.pmed.1001591
PMCID: PMC3897370  PMID: 24465187
21.  Effect of population stratification on the identification of significant single-nucleotide polymorphisms in genome-wide association studies 
BMC Proceedings  2009;3(Suppl 7):S13.
The North American Rheumatoid Arthritis Consortium case-control study collected case participants across the United States and control participants from New York. More than 500,000 single-nucleotide polymorphisms (SNPs) were genotyped in the sample of 2000 cases and controls. Careful adjustment for the confounding effect of population stratification must be conducted when analyzing these data; the variance inflation factor (VIF) without adjustment is 1.44. In the primary analyses of these data, a clustering algorithm in the program PLINK was used to reduce the VIF to 1.14, after which genomic control was used to control residual confounding. Here we use stratification scores to achieve a unified and coherent control for confounding. We used the first 10 principal components, calculated genome-wide using a set of 81,500 loci that had been selected to have low pair-wise linkage disequilibrium, as risk factors in a logistic model to calculate the stratification score. We then divided the data into five strata based on quantiles of the stratification score. The VIF of these stratified data is 1.04, indicating substantial control of stratification. However, after control for stratification, we find that there are no significant loci associated with rheumatoid arthritis outside of the HLA region. In particular, we find no evidence for association of TRAF1-C5 with rheumatoid arthritis.
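A minimal sketch of this stratification-score workflow, on simulated placeholder data: case/control status is regressed on the top principal components with logistic regression, the fitted probability serves as the stratification score, the five analysis strata are its quintiles, and residual confounding is summarized by the genomic-control inflation factor (the VIF quoted above). All sizes and values are illustrative.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n = 2000
    pcs = rng.normal(size=(n, 10))        # first 10 genome-wide principal components
    case = rng.integers(0, 2, n)          # rheumatoid arthritis case/control status (toy)

    # Stratification score: fitted probability of being a case given ancestry.
    score = LogisticRegression(max_iter=1000).fit(pcs, case).predict_proba(pcs)[:, 1]

    # Five analysis strata from the quintiles of the score.
    strata = np.digitize(score, np.quantile(score, [0.2, 0.4, 0.6, 0.8]))

    # Genomic-control inflation factor lambda = median(chi2) / 0.4549, the
    # VIF of the abstract (1.44 unadjusted, 1.04 after stratification here).
    chi2 = rng.chisquare(1, size=81_500)  # placeholder per-SNP test statistics
    print("lambda =", np.median(chi2) / 0.4549)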
PMCID: PMC2795903  PMID: 20017996
22.  A high-density SNP genetic linkage map for the silver-lipped pearl oyster, Pinctada maxima: a valuable resource for gene localisation and marker-assisted selection 
BMC Genomics  2013;14(1):810.
Background
The silver-lipped pearl oyster, Pinctada maxima, is an important tropical aquaculture species extensively farmed for the highly sought "South Sea" pearls. Traditional breeding programs have been initiated for this species in order to select for improved pearl quality, but many economic traits under selection are complex, polygenic and confounded with environmental factors, limiting the accuracy of selection. The incorporation of a marker-assisted selection (MAS) breeding approach would greatly benefit pearl breeding programs by allowing the direct selection of genes responsible for pearl quality. However, before MAS can be incorporated, substantial genomic resources such as genetic linkage maps need to be generated. The construction of a high-density genetic linkage map for P. maxima is not only essential for unravelling the genomic architecture of complex pearl quality traits, but also provides indispensable information on the genome structure of pearl oysters.
Results
A total of 1,189 informative genome-wide single nucleotide polymorphisms (SNPs) were incorporated into linkage map construction. The final linkage map consists of 887 SNPs in 14 linkage groups, spans a total genetic distance of 831.7 centimorgans (cM), and covers an estimated 96% of the P. maxima genome. Assessment of sex-specific recombination across all linkage groups revealed limited overall heterochiasmy between the sexes (i.e., a 1.15:1 F/M map length ratio). However, there were pronounced localised differences throughout the linkage groups, whereby male recombination was suppressed near the centromeres relative to female recombination but inflated towards telomeric regions. Mean values of linkage disequilibrium (LD) for adjacent SNP pairs suggest that a higher density of markers will be required for powerful genome-wide association studies. Finally, numerous nacre biomineralization genes were localised, providing novel positional information for these genes.
Conclusions
This high-density SNP genetic map is the first comprehensive linkage map for any pearl oyster species. It provides an essential genomic tool facilitating studies investigating the genomic architecture of complex trait variation and identifying quantitative trait loci for economically important traits useful in genetic selection programs within the P. maxima pearling industry. Furthermore, this map provides a foundation for further research aiming to improve our understanding of the dynamic process of biomineralization, and pearl oyster evolution and synteny.
Electronic supplementary material
The online version of this article (doi:10.1186/1471-2164-14-810) contains supplementary material, which is available to authorized users.
doi:10.1186/1471-2164-14-810
PMCID: PMC4046678  PMID: 24252414
23.  CoAIMs: A Cost-Effective Panel of Ancestry Informative Markers for Determining Continental Origins 
PLoS ONE  2010;5(10):e13443.
Background
Genetic ancestry is known to impact outcomes of genotype-phenotype studies that are designed to identify risk for common diseases in human populations. Failure to control for population stratification due to genetic ancestry can significantly confound results of disease association studies. Moreover, ancestry is a critical factor in assessing lifetime risk of disease, and can play an important role in optimizing treatment. As modern medicine moves towards using personal genetic information for clinical applications, it is important to determine genetic ancestry in an accurate, cost-effective and efficient manner. Self-identified race is a common method used to track and control for population stratification; however, social constructs of race are not necessarily informative for genetic applications. The use of ancestry informative markers (AIMs) is a more accurate method for determining genetic ancestry for the purposes of population stratification.
Methodology/Principal Findings
Here we introduce a novel panel of 36 microsatellite (MSAT) AIMs that determines continental admixture proportions. This panel, which we have named Continental Ancestry Informative Markers (CoAIMs), consists of MSAT AIMs chosen on the basis of their measure of genetic variance (Fst), their allele frequencies, and their suitability for efficient genotyping. Genotype analysis using CoAIMs along with a Bayesian clustering method (STRUCTURE) is able to discern continental origins, including Europe/Middle East (Caucasians), East Asia, Africa, Native America, and Oceania. In addition to determining continental ancestry for individuals without significant admixture, we applied CoAIMs to ascertain the admixture proportions of individuals of self-declared race.
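To illustrate the Fst-based ranking that underlies AIM selection, here is a deliberately simplified sketch: the CoAIMs panel itself uses multi-allelic microsatellites, whereas this toy example computes Wright's Fst = (HT − HS)/HT for biallelic markers from assumed per-population allele frequencies.

    import numpy as np

    # Rows: candidate markers; columns: allele frequency in each population (toy).
    p = np.array([[0.10, 0.80, 0.50],
                  [0.45, 0.55, 0.50],
                  [0.05, 0.95, 0.40]])

    p_bar = p.mean(axis=1)                 # mean frequency across populations
    h_t = 2 * p_bar * (1 - p_bar)          # total expected heterozygosity
    h_s = (2 * p * (1 - p)).mean(axis=1)   # mean within-population heterozygosity
    fst = (h_t - h_s) / h_t

    # Markers with the largest Fst are the most ancestry-informative.
    print(np.argsort(fst)[::-1], fst.round(3))

In this toy example the third marker, whose frequencies differ most sharply between populations, ranks first, which is exactly the behaviour an AIM panel exploits.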
Conclusion/Significance
CoAIMs can be used to efficiently and effectively determine continental admixture proportions in a sample set. The CoAIMs panel is a valuable resource for genetic researchers performing case-control genetic association studies, as it can control for the confounding effects of population stratification. The MSAT-based approach used here has potential for broad applicability as a cost effective tool toward determining admixture proportions.
doi:10.1371/journal.pone.0013443
PMCID: PMC2955551  PMID: 20976178
24.  Stratification and Selection Probabilities for a Sample of General Medical Hospitals 
Health Services Research  1966;1(2):170-192.
With the data for the 1961 universe of nonfederal short-term general medical hospitals in the United States, stratum boundaries are constructed using bed capacity as the stratification variable. The method of construction is that developed by Dalenius and Hodges, which equalizes intervals in cumulative √f, where f denotes the frequency distribution of hospitals ordered by size. When hospitals have equal selection probabilities within strata and the total sample size is held fixed, equal allocation of the sample to strata and allocation to strata in proportion to bed capacity are found to result in about the same precision for the estimates considered. Furthermore, sampling with probability proportional to size (pps) without stratification is seen to result in higher precision than stratification with equal selection probabilities, unless the sample size is large enough to make the finite population correction important. The effect on the precision of estimates of moderate departures from the rule-constructed boundaries, made so that boundaries can be expressed in convenient multiples of the size measure, is also examined.
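A short sketch of the cumulative-√f rule just described, on hypothetical bed-capacity classes (the counts below are illustrative, not the 1961 universe): the square roots of the class frequencies are accumulated, and stratum boundaries are placed at equal intervals of the cumulative total.

    import numpy as np

    # Upper bed-capacity bound of each size class, and hospital counts per
    # class (counts are made up for illustration).
    upper = np.arange(50, 601, 50)
    counts = np.array([900, 1400, 1100, 800, 500, 300, 180, 110, 60, 40, 20, 10])

    L = 4                                  # desired number of strata
    cum = np.cumsum(np.sqrt(counts))       # cumulative sqrt(f)
    targets = cum[-1] * np.arange(1, L) / L
    boundaries = upper[np.searchsorted(cum, targets)]
    print("stratum boundaries (beds):", boundaries)   # here: [100 200 300]

Rounding such boundaries to convenient multiples of the size measure is precisely the "moderate departure" whose effect on precision the paper examines.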
Because data for the United States general medical hospitals are not readily available by diagnostic class, data for six small diagnostic categories were obtained for the hospitals participating in the Professional Activities Study (PAS) during the full year of 1961. The effects of stratification and selection probabilities on precision of estimates for these small groups are explored.
PMCID: PMC1067325  PMID: 5971546
25.  Effect of sample stratification on dairy GWAS results 
BMC Genomics  2012;13:536.
Background
Artificial insemination and genetic selection are major factors contributing to population stratification in dairy cattle. In this study, we analyzed the effect of sample stratification and the effect of stratification correction on results of a dairy genome-wide association study (GWAS). Three methods for stratification correction were used: the efficient mixed-model association expedited (EMMAX) method accounting for correlation among all individuals, a generalized least squares (GLS) method based on half-sib intraclass correlation, and a principal component analysis (PCA) approach.
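Of the three corrections, the PCA approach is the simplest to sketch; EMMAX and GLS additionally require kinship or half-sib intraclass-correlation matrices and are not reproduced here. The following toy example regresses a simulated phenotype on each SNP together with 20 principal components of the genotype matrix; all dimensions and names are assumptions.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(3)
    n_cows, n_snps = 1654, 500                                    # toy SNP count
    G = rng.integers(0, 3, size=(n_cows, n_snps)).astype(float)   # 0/1/2 genotypes
    y = rng.normal(size=n_cows)                                   # e.g. protein percentage

    pcs = PCA(n_components=20).fit_transform(G)   # 20 PCs of the genotype matrix

    def snp_effect(j):
        """OLS effect of SNP j, adjusted for an intercept and the 20 PCs."""
        X = np.column_stack([np.ones(n_cows), G[:, j], pcs])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta[1]

    print(snp_effect(0))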
Results
Historical pedigree data revealed that the 1,654 contemporary cows in the GWAS were all related when traced through approximately 10–15 generations of ancestors. Genome and phenotype stratifications had a striking overlap with the half-sib structure. A large elite half-sib family of cows contributed to the detection of favorable alleles that had low frequencies in the general population and high frequencies in the elite cows, and also contributed to the detection of X chromosome effects. All three methods for stratification correction reduced the number of significant effects. The EMMAX method produced the most severe reduction in the number of significant effects, while the PCA method using 20 principal components and the GLS method yielded similar significance levels. Removing the elite cows from the analysis without applying stratification correction eliminated many effects that were also removed by the three correction methods, indicating that stratification correction could have removed some true effects due to the elite cows. SNP effects showing good consensus between the different methods and with the effect size distributions from USDA's Holstein genomic evaluation included the DGAT1-NIBP region of BTA14 for production traits, a SNP 45 kb upstream of PIGY on BTA6, and two SNPs in NIBP on BTA14 for protein percentage. However, most of these consensus effects had similar frequencies in the elite and average cows.
Conclusions
Genetic selection and the extensive use of artificial insemination contributed to overlapping genome, pedigree, and phenotype stratifications. The presence of an elite cluster of cows was related to the detection of rare favorable alleles that had high frequencies in the elite cluster and low frequencies in the remaining cows. Methods for stratification correction could have removed some true effects associated with genetic selection.
doi:10.1186/1471-2164-13-536
PMCID: PMC3496570  PMID: 23039970
