1.  A hybrid method in combining treatment effects from matched and unmatched studies 
Statistics in Medicine 2013;32(28):4924-4937.
The most common data structures in biomedical studies are matched and unmatched designs. Data structures resulting from a hybrid of the two may create challenges for statistical inference. The question may arise whether to use parametric or nonparametric methods on the hybrid data structure. The Early Treatment for Retinopathy of Prematurity (ETROP) study was a multicenter clinical trial sponsored by the National Eye Institute (NEI) whose design produced data requiring a statistical method of a hybrid nature. Each infant in this multicenter randomized clinical trial had high-risk prethreshold ROP eligible for treatment in one or both eyes at entry into the trial. During follow-up, recognition visual acuity was assessed for both eyes. Data from both eyes (matched) and from only one eye (unmatched) were eligible to be used in the trial. The new hybrid nonparametric method is a meta-analysis based on combining the Hodges-Lehmann estimates of treatment effects from the Wilcoxon signed rank and rank sum tests. For comparison, we used a classic meta-analysis with the t-test method to combine estimates of treatment effects from the paired and two-sample t-tests. Simulations were used to calculate the empirical size and power of the test statistics, as well as the bias, mean square error, and confidence interval width of the corresponding estimators. The proposed method provides an effective tool for evaluating data from clinical trials and similar comparative studies.
doi:10.1002/sim.5887
PMCID: PMC3887129  PMID: 23839782
Hybrid Design; Matched and Unmatched Studies; Meta-Analysis; Nonparametric Methods; Wilcoxon Rank Sum; Wilcoxon Signed Rank
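To make the combination concrete, the following Python sketch computes the two Hodges-Lehmann estimates the abstract names (the median of Walsh averages from the signed rank setting for matched eyes, and the median of pairwise differences from the rank sum setting for unmatched eyes) and pools them with a simple inverse-variance fixed-effect weight. The pooling rule and the placeholder variances are assumptions for illustration; the paper's actual weighting and variance estimation are not described in the abstract.

```python
import numpy as np

def hl_paired(treated, fellow):
    """Hodges-Lehmann location estimate for matched data: median of the Walsh
    averages of the paired differences (estimator tied to the signed rank test)."""
    d = np.asarray(treated, float) - np.asarray(fellow, float)
    walsh = [(d[i] + d[j]) / 2.0 for i in range(len(d)) for j in range(i, len(d))]
    return np.median(walsh)

def hl_unpaired(x, y):
    """Hodges-Lehmann shift estimate for unmatched data: median of all pairwise
    differences (estimator tied to the rank sum test)."""
    return np.median(np.subtract.outer(np.asarray(x, float), np.asarray(y, float)))

def combine_fixed_effect(estimates, variances):
    """Inverse-variance (fixed-effect) combination of the matched and unmatched
    treatment-effect estimates; returns the pooled estimate and its variance."""
    w = 1.0 / np.asarray(variances, float)
    return np.sum(w * np.asarray(estimates, float)) / np.sum(w), 1.0 / np.sum(w)

# Hypothetical visual-acuity data: both-eye (matched) and one-eye (unmatched) infants.
effects = [hl_paired([55, 60, 48, 70], [50, 58, 45, 66]),
           hl_unpaired([62, 57, 49], [54, 51])]
# Placeholder variances; in practice these might come from a bootstrap or the
# asymptotic variance of each Hodges-Lehmann estimator.
print(combine_fixed_effect(effects, variances=[4.0, 9.0]))
```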
2.  Optimal Allocation of Sample Sizes to Multicenter Clinical Trials 
In this article, we discussed an approach for optimal sample size allocation in designing multicenter clinical trials. The method we studied was adapted from stratified sampling survey design. The sample size allocated to each center is a function of that center's treatment cost, the standard deviation of the endpoint, and the availability of patients. We illustrated our approach using two hypothetical scenarios derived from our experience in designing and conducting multicenter clinical trials. Simulation results were also presented.
doi:10.1080/10543406.2013.789884
PMCID: PMC3694743  PMID: 23786227
Cost Constraints; Multicenter Clinical Trials; Optimal Allocation; Sample Size; Weighting
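The abstract does not give the paper's allocation formula, but a minimal sketch of the textbook cost-constrained optimum allocation from stratified sampling, of the kind described, looks like the following; the proportionality n_i ∝ N_i·S_i/√c_i and all numeric inputs are assumptions for illustration.

```python
import numpy as np

def allocate(total_budget, cost_per_patient, endpoint_sd, patients_available):
    """Cost-constrained optimum allocation from stratified sampling:
    n_i proportional to N_i * S_i / sqrt(c_i), scaled so the total cost
    sum(n_i * c_i) matches the budget, then capped at each center's
    patient availability."""
    c = np.asarray(cost_per_patient, float)
    s = np.asarray(endpoint_sd, float)
    N = np.asarray(patients_available, float)
    weight = N * s / np.sqrt(c)                # N_i * S_i / sqrt(c_i)
    k = total_budget / np.sum(weight * c)      # scale factor for the cost constraint
    return np.minimum(np.floor(k * weight), N).astype(int)

# Hypothetical three-center trial (all numbers invented for illustration).
print(allocate(total_budget=150_000,
               cost_per_patient=[500, 800, 1200],
               endpoint_sd=[10, 14, 9],
               patients_available=[120, 80, 40]))
```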
3.  Progression of myopia and high myopia in the Early Treatment for Retinopathy of Prematurity Study: Findings at 4 to 6 years of age 
Purpose
To report the prevalence of myopia and high myopia in children <6 years of age born preterm with birth weights <1251 g who developed high-risk prethreshold retinopathy of prematurity (ROP) and who participated in the Early Treatment for ROP (ETROP) trial.
Methods
Surviving children from the cohort of 401 participants who had developed high-risk prethreshold ROP in one or both eyes underwent cycloplegic retinoscopy at 6 and 9 months corrected age and yearly between 2 and 6 years postnatal age. Eyes were randomized to receive treatment at high-risk prethreshold ROP or conventional management, with treatment only if threshold ROP developed. Myopia (spherical equivalent ≥0.25 D) or high myopia (≥5.00 D) in eyes at 4, 5, and 6 year examinations was reported.
Results
At ages 4, 5, and 6 years, there was no difference in the percentage of eyes with myopia (range, 64.8%–69.9%) or eyes with high myopia (range, 35.3%–39.4%) between earlier-treated (ET) and conventionally managed (CM) eyes.
Conclusions
Approximately two-thirds of eyes with high-risk prethreshold ROP during the neonatal period are likely to be myopic into the preschool and early school years. In addition, the increase in the proportion of eyes with high myopia that had been observed in both ET and CM eyes between ages 6 months and 3 years does not continue between ages 3 and 6 years.
doi:10.1016/j.jaapos.2012.10.025
PMCID: PMC3725578  PMID: 23622444
4.  Randomized double-blind controlled trial of bovine lactoferrin for prevention of diarrhea in children 
The Journal of Pediatrics 2012;162(2):349-356.
Objective
To determine the effect of bovine lactoferrin on prevention of diarrhea in children.
Study design
We conducted a community-based randomized double-blind placebo controlled trial comparing supplementation with bovine lactoferrin versus placebo. Previously weaned children were enrolled at 12–18 months and followed for 6 months with daily home visits for data collection and supplement administration. Anthropometric measures were done monthly.
Results
555 children were randomized: 277 to lactoferrin and 278 to placebo; 65 dropped out; 147,894 doses were administered (92% compliance). Overall, there were 91,446 child-days of observation and 1,235 diarrhea episodes lasting 6,219 days. The main pathogens isolated during diarrheal episodes were norovirus (35.0%), enteropathogenic E. coli (11.4%), Campylobacter (10.6%), enteroaggregative E. coli (8.4%), enterotoxigenic E. coli (6.9%) and Shigella (6.6%). The diarrhea incidence was not different between groups: 5.4 vs. 5.2 episodes/child/year for lactoferrin and placebo, respectively (p=0.375). However, the diarrhea longitudinal prevalence was lower in the lactoferrin group (6.6% vs. 7.0%, p=0.017), as were the median duration of episodes (4.8 vs. 5.3 days, p=0.046), the proportion of episodes with moderate or severe dehydration (1.0% vs. 2.6%, p=0.045), and the liquid stool load (95.0 vs. 98.6 liquid stools/child/year, p<0.001). There were no adverse events related to the intervention.
Conclusions
Although there was no decrease in diarrhea incidence, longitudinal prevalence and severity were decreased with lactoferrin.
doi:10.1016/j.jpeds.2012.07.043
PMCID: PMC3547155  PMID: 22939927
lactoferrin; diarrhea; children; prevention; clinical trial
5.  Was Mandatory Quarantine Necessary in China for Controlling the 2009 H1N1 Pandemic? 
The Chinese government enforced mandatory quarantine for 60 days (from 10 May to 8 July 2009) as a preventative strategy to control the spread of the 2009 H1N1 pandemic. This strategy was stricter than the non-pharmaceutical interventions carried out in many other countries. We evaluated the effectiveness of the mandatory quarantine and provide suggestions for interventions against possible future influenza pandemics. We selected one city, Beijing, as the analysis target. We reviewed the epidemiologic dynamics of the 2009 H1N1 pandemic and the implementation of quarantine measures in Beijing. The infectious population was simulated under two scenarios (quarantined and not quarantined) using a deterministic Susceptible-Exposed-Infectious-Recovered (SEIR) model. The basic reproduction number R0 was adjusted to match the epidemic wave in Beijing. We found that mandatory quarantine postponed the spread of the 2009 H1N1 pandemic in Beijing by one and a half months. Had mandatory quarantine not been enforced in Beijing, the infectious population could have reached 1,553 by 21 October, i.e., 5.6 times the observed number. When the cost of quarantine is taken into account, mandatory quarantine was not an economically effective intervention against the 2009 H1N1 pandemic. We suggest adopting mitigation methods for an influenza pandemic with low mortality and morbidity.
doi:10.3390/ijerph10104690
PMCID: PMC3823329  PMID: 24084677
China; 2009 H1N1 pandemic; prevention policy; quarantine
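A minimal Python sketch of the kind of deterministic SEIR simulation described above is shown below; the population figure, latent and infectious periods, and the two R0 values are placeholders, not the values fitted in the paper.

```python
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma, N):
    """Deterministic SEIR equations for the infectious-population simulation."""
    S, E, I, R = y
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return dS, dE, dI, dR

N = 19_600_000                       # rough population of Beijing (illustrative)
y0 = (N - 1.0, 0.0, 1.0, 0.0)        # one initial infectious case
t = np.linspace(0, 180, 181)         # days
sigma, gamma = 1 / 2.0, 1 / 3.0      # hypothetical latent and infectious periods

for label, R0 in [("quarantined", 1.2), ("not quarantined", 1.6)]:
    beta = R0 * gamma                # R0 = beta / gamma in this formulation
    infectious = odeint(seir, y0, t, args=(beta, sigma, gamma, N))[:, 2]
    print(label, "peak infectious:", int(infectious.max()))
```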
6.  A New Method for Quantitative Real-Time Polymerase Chain Reaction Data Analysis 
Journal of Computational Biology  2013;20(9):703-711.
Quantitative real-time polymerase chain reaction (qPCR) is a sensitive gene quantification method that has been extensively used in biological and biomedical fields. The currently used methods for PCR data analysis, including the threshold cycle method and linear and nonlinear model-fitting methods, all require subtracting background fluorescence. However, the removal of background fluorescence can hardly be accurate and therefore can distort results. We propose a new method, the taking-difference linear regression method, to overcome this limitation. Briefly, for each two consecutive PCR cycles, we subtract the fluorescence in the former cycle from that in the latter cycle, transforming the n cycle raw data into n−1 cycle data. Then, linear regression is applied to the natural logarithm of the transformed data. Finally, PCR amplification efficiencies and the initial DNA molecular numbers are calculated for each reaction. This taking-difference method avoids the error in subtracting an unknown background, and thus it is more accurate and reliable. This method is easy to perform, and this strategy can be extended to all current methods for PCR data analysis.
doi:10.1089/cmb.2012.0279
PMCID: PMC3762066  PMID: 23841653
background subtraction; initial DNA amount; linear regression; polymerase chain reaction efficiency; quantitative real-time polymerase chain reaction
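The taking-difference procedure is concrete enough to sketch directly. In the exponential phase F_n ≈ F_0·E^n + b, so the consecutive differences satisfy ln(F_{n+1} − F_n) = n·ln E + ln(F_0(E − 1)) and the background b cancels. A minimal Python illustration follows; the crude positive-difference filter standing in for a proper exponential-phase window is an assumption.

```python
import numpy as np
from scipy.stats import linregress

def taking_difference_fit(fluorescence, cycles=None):
    """Taking-difference linear regression for qPCR: difference consecutive
    cycles (background cancels), then regress the natural log of the
    differences on cycle number. The slope gives ln(E) and the intercept
    gives ln(F0 * (E - 1))."""
    f = np.asarray(fluorescence, dtype=float)
    n = np.arange(1, len(f) + 1) if cycles is None else np.asarray(cycles)
    d = np.diff(f)                        # n-cycle raw data -> n-1 cycle differences
    keep = d > 0                          # crude stand-in for the growth-phase window
    fit = linregress(n[:-1][keep], np.log(d[keep]))
    efficiency = np.exp(fit.slope)                    # amplification efficiency E
    f0 = np.exp(fit.intercept) / (efficiency - 1.0)   # proportional to initial DNA
    return efficiency, f0

# Hypothetical fluorescence readings with an unknown constant background of 100.
cycles = np.arange(1, 16)
signal = 100 + 0.01 * 1.9 ** cycles
print(taking_difference_fit(signal, cycles))
```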
7.  Influence of safety warnings on ESA prescribing among dialysis patients using an interrupted time series 
BMC Nephrology  2013;14:172.
Background
In March 2007, the Food and Drug Administration (FDA) issued a black box warning advising use of the lowest possible erythropoiesis-stimulating agent (ESA) doses for treatment of anemia associated with renal disease. The goal of this study was to determine whether a change in ESA use was observed among US dialysis patients following the warning.
Methods
ESA therapy was examined from September 2004 through August 2009 (thirty months before and after the FDA black box warning) among adult Medicare hemodialysis patients. An interrupted time series model assessed the impact of the warnings.
Results
The FDA black box warning did not appear to influence ESA prescribing among the overall dialysis population. However, significant declines in ESA therapy after the FDA warnings were observed for selected populations. Patients with a hematocrit ≥36% had a declining month-to-month trend before (−164 units/week, p < 0.0001) and after the warnings (−80 units/week, p = 0.001), and a large drop in ESA level immediately after the black box warning (−4,744 units/week, p < 0.0001). Not-for-profit facilities had a declining month-to-month trend before the warnings (−90 units/week, p = 0.009) and a large drop in ESA dose immediately afterwards (−2,487 units/week, p = 0.015). In contrast, for-profit facilities did not have a significant change in ESA prescribing.
Conclusions
ESA therapy had been both profitable for providers and controversial regarding benefits for nearly two decades. The extent to which an FDA black box warning highlighting important safety concerns influenced use of ESA therapy among nephrologists and dialysis providers was unknown. Our study found no evidence of changes in ESA prescribing for the overall dialysis population resulting from an FDA black box warning.
doi:10.1186/1471-2369-14-172
PMCID: PMC3751481  PMID: 23927675
Epoetin; ESA therapy; Black box warnings; Interrupted time series; Anemia management; ESRD
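An interrupted time series of this kind is typically fitted as a segmented regression with a pre-warning trend, an immediate level change, and a post-warning trend change. A minimal Python sketch on invented monthly data follows; the real analysis used weekly ESA units among Medicare hemodialysis patients and would also need to address autocorrelation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly mean ESA dose series: 30 months before and 30 after
# the March 2007 black box warning (all values invented for illustration).
months = np.arange(60)
df = pd.DataFrame({
    "dose": 20000 - 100 * months + np.random.normal(0, 300, 60),
    "time": months,
    "post": (months >= 30).astype(int),                # 1 after the warning
})
df["time_after"] = np.maximum(0, df["time"] - 30)      # months since the warning

# Segmented (interrupted time series) regression:
#   time       -> pre-warning month-to-month trend
#   post       -> immediate level change at the warning
#   time_after -> change in trend after the warning
its = smf.ols("dose ~ time + post + time_after", data=df).fit()
print(its.params)
```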
8.  Effect of the Use and Timing of Bone Marrow Mononuclear Cell Delivery on Left Ventricular Function After Acute Myocardial Infarction: The TIME Randomized Trial 
Context
While the delivery of cell therapy following ST-segment elevation myocardial infarction (STEMI) has been evaluated in previous clinical trials, the influence of the timing of cell delivery on the effect on left ventricular (LV) function has not been analyzed in a trial that randomly designated the time of delivery.
Objective
To determine 1) the effect of intracoronary autologous bone marrow mononuclear cell (BMC) delivery following STEMI on recovery of global and regional LV function and 2) if timing of BMC delivery (3 versus 7 days following reperfusion) influences this effect.
Design, Setting, and Patients
Between July 17, 2008 and November 15, 2011, 120 patients were enrolled in a randomized, 2×2 factorial, double-blind, placebo-controlled trial conducted by the National Heart, Lung, and Blood Institute (NHLBI)-sponsored Cardiovascular Cell Therapy Research Network (CCTRN) in patients with LV dysfunction (LV ejection fraction (LVEF) ≤45%) following successful primary percutaneous coronary intervention (PCI) of anterior STEMI.
Interventions
Intracoronary infusion of 150 × 10⁶ BMCs or placebo (randomized 2:1 BMC:placebo) within 12 hours of aspiration and processing, administered at Day 3 or Day 7 (randomized 1:1) post-PCI.
Main Outcome Measures
Co-primary endpoints were: 1) Change in global (LVEF) and regional (wall motion) LV function in infarct and border zones at 6 months measured by cardiac magnetic resonance imaging and 2) Change in LV function as affected by timing of treatment on Day 3 versus Day 7. Secondary endpoints included major adverse cardiovascular events as well as changes in LV volumes and infarct size.
Results
Mean patient age was 56.9±10.9 years, and 87.5% of patients were male. At 6 months, LVEF increased similarly in both the BMC (45.2±10.6% to 48.3±13.3%) and placebo groups (44.5±10.8% to 47.8±13.6%). No detectable treatment effect on regional LV function was observed in either infarct or border zones. Differences between therapy groups in the change in global LV function over time when treated at Day 3 (−0.9±2.9%, 95% CI −6.6 to 4.9%, p=0.763) or Day 7 (1.1±2.9%, 95% CI −4.7 to 6.9%, p=0.702) were not significant, nor were they different from each other. Timing of treatment also had no detectable effect on recovery of regional LV function. Major adverse events were rare, with no difference between groups.
Conclusions
Patients with STEMI who underwent successful primary PCI and received intracoronary BMCs at either 3 or 7 days following the event had recovery of global and regional LV function similar to placebo.
Trial Registration
ClinicalTrials.gov Number, NCT00684021
doi:10.1001/jama.2012.28726
PMCID: PMC3652242  PMID: 23129008
9.  Effect of Transendocardial Delivery of Autologous Bone Marrow Mononuclear Cells on Functional Capacity, Left Ventricular Function, and Perfusion in Chronic Ischemic Heart Failure: The FOCUS-CCTRN Trial 
Context
Previous studies utilizing autologous bone marrow mononuclear cells (BMCs) in patients with ischemic cardiomyopathy have demonstrated safety and suggested efficacy. The FOCUS protocol was designed to assess efficacy of a larger cell dose in an adequately well-powered phase II study.
Objective
To determine if administration of BMCs through transendocardial injections improves myocardial perfusion, reduces left ventricular (LV) end systolic volume, or enhances maximal oxygen consumption in patients with coronary artery disease (CAD), LV dysfunction, and limiting heart failure and/or angina.
Design, Setting, and Patients
This first-in-man, 100-million-cell, randomized, double-blind, placebo-controlled trial was performed by the National Heart, Lung, and Blood Institute-sponsored Cardiovascular Cell Therapy Research Network (CCTRN) in symptomatic patients (NYHA II-III and/or CCS II-IV) receiving maximal medical therapy, with a left ventricular ejection fraction (LVEF) ≤45%, a perfusion defect by single-photon emission computed tomography (SPECT), and CAD not amenable to revascularization.
Intervention
All patients underwent bone marrow aspiration, isolation of BMCs using a standardized automated system performed locally, and transendocardial injection of 100 million BMCs or placebo (2:1 BMC:placebo).
Main Outcome Measures
Three co-primary endpoints assessed at 6 months were changes in (a) LV end systolic volume (LVESV) by echocardiography, (b) maximal oxygen consumption (MVO2), and (c) reversibility on SPECT. Secondary measures included other SPECT measures, magnetic resonance imaging (MRI), echocardiography, clinical improvement, and major adverse cardiac events (MACE). Phenotypic and functional analyses of the cell product were performed by the CCTRN Biorepository lab.
Results
Of 153 consented patients, a total of 92 (82 men; average age, 63 years) were randomized (n = 61 BMC, n = 31 placebo) at 5 sites between April 29, 2009 and April 18, 2011. Changes in LVESV index (−0.9 ± 11.3 mL/m²; P = 0.733; 95% CI, −6.1 to 4.3), MVO2 (1.0 ± 2.9; P = 0.169; 95% CI, −0.42 to 2.34), percent reversible defect (−1.2 ± 23.3; P = 0.835; 95% CI, −12.50 to 10.12), and incidence of MACE were not statistically significant. However, in an exploratory analysis, the change in LVEF across the entire cohort by therapy group was significant (2.7 ± 5.2%; P = 0.030; 95% CI, 0.27 to 5.07).
Conclusions
This is the largest cell therapy trial of autologous BMCs in patients with ischemic LV dysfunction. In patients with chronic ischemic heart disease, transendocardial injection of BMCs compared to placebo did not improve LVESV, MVO2, or reversibility on SPECT.
doi:10.1001/jama.2012.418
PMCID: PMC3600947  PMID: 22447880
Chronic CAD; Ischemic Heart Failure; Chronic Angina; bone marrow mononuclear cells; cardiac performance
10.  Effect of Age and Frequency of Injections on Immune Response to Hepatitis B Vaccination in Drug Users 
Vaccine  2011;30(2):342-349.
Despite the high immunogenicity of the hepatitis B vaccine, evidence suggests that immunological response in drug users is impaired compared to the general population.
A sample of not-in-treatment adult drug users susceptible to hepatitis B virus (HBV) was recruited from two communities in Houston, Texas, USA, via outreach workers and a referral methodology. Participants were randomized to either the standard multi-dose hepatitis B vaccine schedule (0, 1, 6 months) or an accelerated (0, 1, 2 months) schedule. The participants were followed for one year. Antibody levels were measured at 2, 6, and 12 months after enrollment in order to determine the immune responses.
At 12 months, a cumulative adequate protective response was achieved in 65% of the HBV-susceptible subgroup using both the standard and accelerated schedules. The standard group had a higher mean antibody titer (184.6 vs. 57.6 mIU/mL). However, at six months, seroconversion at the adequate protective response level was reached by a higher proportion of participants, and the mean antibody titer was also higher, in the accelerated schedule group (104.8 vs. 64.3 mIU/mL). Multivariate analyses indicated a 63% increased risk of non-response for participants 40 years or older (p=0.046). Injecting drugs more than once a day was also highly associated with the risk of non-response (p=0.016).
Conclusions from this research will guide the development of future vaccination programs that anticipate other prevalent chronic conditions, susceptibilities, and risk-taking behaviors of hard-to-reach populations.
doi:10.1016/j.vaccine.2011.10.084
PMCID: PMC3246115  PMID: 22075088
hepatitis B vaccine; immunogenicity; drug users; hepatitis B virus
11.  Astigmatism Progression in the Early Treatment for Retinopathy of Prematurity Study to 6 years of age 
Ophthalmology  2011;118(12):2326-2329.
Purpose
To examine the prevalence of astigmatism (≥1.00 diopter (D)) and high astigmatism (≥2.00 D) from 6 months post term due date to 6 years postnatal, in preterm children with birth weight ≤ 1251g who developed high-risk prethreshold retinopathy of prematurity (ROP) and participated in the Early Treatment for ROP (ETROP) Study.
Design
Observational Cohort Study
Participants
401 infants who developed high-risk prethreshold ROP in one or both eyes and were randomized to early treatment (ET) versus conventional management (CM). Refractive error was measured by cycloplegic retinoscopy. Eyes were excluded if they received additional retinal, glaucoma, or cataract surgery.
Intervention
Eyes were randomized to receive laser photocoagulation at high-risk prethreshold ROP or to receive treatment only if threshold ROP developed.
Main Outcome Measures
Astigmatism and high astigmatism at each study visit.
Results
For both ET and CM eyes, there was a consistent increase in the prevalence of astigmatism over time, from 42% at 4 years to 52% by 6 years in the ET eyes and from 47% to 54% in the CM eyes. There was no statistically significant difference between the slopes (rate of change per month) of the ET and CM eyes for either astigmatism or high astigmatism (P=0.75).
Conclusions
By 6 years of age, over 50% of eyes with high-risk prethreshold ROP developed astigmatism ≥1.00 D, and nearly 25% of such eyes had high astigmatism (≥2.00 D). Presence of astigmatism was not influenced by timing of treatment, zone of acute-phase ROP, or presence of plus disease. However, there was a trend toward higher prevalence of astigmatism and high astigmatism in eyes with ROP residua. Most astigmatism was with-the-rule (75°–105°). More eyes with Type 2 than Type 1 ROP had astigmatism by 6 years. These findings reinforce the need for follow-up eye examinations through the early grade school years in infants with high-risk prethreshold ROP.
doi:10.1016/j.ophtha.2011.06.006
PMCID: PMC3227788  PMID: 21872933
astigmatism; refractive error; prematurity
12.  Combining Censored and Uncensored Data in a U-Statistic: Design and Sample Size Implications for Cell Therapy Research 
The assumptions that anchor large clinical trials are rooted in smaller, Phase II studies. In addition to specifying the target population, intervention delivery, and patient follow-up duration, the physician-scientists who design these Phase II studies must select appropriate response variables (endpoints). However, endpoint measures can be problematic. If the endpoint assesses the change in a continuous measure over time, the occurrence of an intervening significant clinical event (SCE), such as death, can preclude the follow-up measurement. In addition, the ideal continuous endpoint measurement may be contraindicated in a fraction of the study patients, requiring a less precise substitution in this subset of participants.
A score function based on the U-statistic can address these issues of 1) intercurrent SCEs and 2) response variable ascertainments that use measurements of different precision. The scoring statistic is easy to apply, is clinically relevant, and provides flexibility for the investigators' prospective design decisions. Sample size and power formulations for this statistic are provided as functions of clinical event rates and effect size estimates that are easy for investigators to identify and discuss. Examples are provided from current cardiovascular cell therapy research.
doi:10.2202/1557-4679.1286
PMCID: PMC3154087  PMID: 21841940
U-statistic; clinical trials; score function; stem cells
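As a rough illustration of how such a score function can fold a significant clinical event and a continuous endpoint into a single U-statistic, here is a Python sketch with an invented pairwise scoring rule; the paper's actual scoring function, and its variance and power formulas, are not given in the abstract.

```python
import itertools
import numpy as np

def pair_score(a, b):
    """Score one treated/control pair: a significant clinical event (SCE)
    such as death trumps the continuous endpoint; otherwise compare the
    change in the continuous measure. Illustrative rule only."""
    if a["sce"] != b["sce"]:
        return 1 if not a["sce"] else -1     # the patient without the SCE wins
    if a["sce"]:                             # both had events: score as a tie
        return 0
    return np.sign(a["change"] - b["change"])

def u_statistic(treated, control):
    """Average pairwise score over all treated-by-control comparisons."""
    return np.mean([pair_score(a, b) for a, b in itertools.product(treated, control)])

# Hypothetical patients: an SCE indicator and a change in the continuous endpoint.
treated = [{"sce": False, "change": 4.0}, {"sce": True, "change": np.nan}]
control = [{"sce": False, "change": 1.5}, {"sce": False, "change": 2.0}]
print(u_statistic(treated, control))
```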
13.  Accelerated Hepatitis B Vaccine Schedule among Drug Users – A Randomized Controlled Trial 
The Journal of Infectious Diseases 2010;202(10):1500-1509.
Background
Hepatitis B vaccine provides a model for improving uptake and completion of multi-dose vaccinations in the drug-using community.
Methods
The DASH project conducted a randomized controlled trial among not-in-treatment current drug users in two urban neighborhoods. Neighborhoods were cluster-randomized to receive a standard (HIV information) or enhanced (HBV vaccine acceptance/adherence) behavioral intervention; participants within clusters were randomized to a standard (0, 1, 6 months) or accelerated (0, 1, 2 months) vaccination schedule. Outcomes were completion of the three-dose vaccine series and HBV seroprotection.
Results
Of those screening negative for HIV/HBV, 77% accepted HB vaccination, and 75% of those received all 3 doses. Injecting drug users (IDUs) on the accelerated schedule were significantly more likely to receive 3 doses (76%) than those on the standard schedule (66%, p=.04), although for drug users as a whole adherence was 77% versus 73%. No difference in adherence was observed between behavioral intervention groups. Predictors of adherence were older age, African American race, stable housing, and alcohol use. Cumulative HBV seroprotection (≥10 mIU/mL) was attained by 12 months in 65% of those completing the series. Seroprotection at 6 months was greater in the accelerated schedule group.
Conclusions
The accelerated vaccine schedule improves hepatitis B vaccination adherence among IDUs.
doi:10.1086/656776
PMCID: PMC2957504  PMID: 20936979
HIV infections; Hepatitis B; prevention & control; drug users; Hepatitis B vaccines; AIDS vaccines; behavioral intervention; vaccine schedules
14.  Rationale and Design for the Intramyocardial Injection of Autologous Bone Marrow Mononuclear Cells for Patients with Chronic Ischemic Heart Disease and Left Ventricular Dysfunction Trial (FOCUS) 
American Heart Journal 2010;160(2):215-223.
Background
The increasing worldwide prevalence of coronary artery disease (CAD) continues to challenge the medical community. Management options include medical and revascularization therapy. Despite advances in these methods, CAD is a leading cause of recurrent ischemia and heart failure, posing significant morbidity and mortality risks along with increasing health costs in a large patient population worldwide.
Trial Design
The Cardiovascular Cell Therapy Research Network (CCTRN) was established by the National Institutes of Health to investigate the role of cell therapy in the treatment of chronic cardiovascular disease. FOCUS is a CCTRN-designed randomized, Phase II, placebo-controlled clinical trial that will assess the effect of autologous bone marrow mononuclear cells delivered transendocardially to patients with left ventricular (LV) dysfunction and symptomatic heart failure or angina. All patients must have limiting ischemia, demonstrated by reversible ischemia on SPECT assessment.
Results
After thoughtful consideration of both statistical and clinical principles, we will recruit 87 patients (58 cell treated and 29 placebo) to receive either bone marrow–derived stem cells or placebo. Myocardial perfusion, LV contractile performance, and maximal oxygen consumption are the primary outcome measures.
Conclusions
The designed clinical trial will provide a sound assessment of the effect of autologous bone marrow mononuclear cells in improving blood flow and contractile function of the heart. The target population is patients with CAD and LV dysfunction with limiting angina or symptomatic heart failure. Patient safety is a central concern of the CCTRN, and patients will be followed for at least 5 years.
doi:10.1016/j.ahj.2010.03.029
PMCID: PMC2921924  PMID: 20691824
15.  Validation of the Gravity Model in Predicting the Global Spread of Influenza 
The gravity model is often used in predicting the spread of influenza. We used data on influenza A (H1N1) to check the model's performance and validity, in order to determine the scope of its application. In this article, we proposed to model the pattern of global spread of the virus via a few important socio-economic indicators. We applied the epidemic gravity model to the global spread of the virus through the estimation of parameters of a generalized linear model. We compiled the daily confirmed cases of influenza A (H1N1) in each country as reported to the WHO and in each state of the USA, and established the model to describe the relationship between the confirmed cases and socio-economic factors such as population size, per capita gross domestic product (GDP), and the distance between the countries/states and the country where the first confirmed case was reported (i.e., Mexico). The covariates we selected for the model were all statistically significantly associated with the global spread of influenza A (H1N1). However, within the USA, distance and GDP were not significantly associated with the number of confirmed cases. The combination of the gravity model and the generalized linear model provided a quick assessment of pandemic spread globally. The gravity model is valid if the spread period is long enough for estimating the model parameters and the distance between donor and recipient communities has a good gradient. In addition, the spread should be at an early stage if a single source is taken into account.
doi:10.3390/ijerph8083134
PMCID: PMC3166731  PMID: 21909295
gravity model; influenza A (H1N1); generalized linear model; infectious disease; viral spread
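A minimal sketch of the modelling strategy described above, pairing gravity-model covariates with a generalized linear model, might look like the following; the Poisson family with a log link and all of the data values are assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical country-level records: confirmed H1N1 cases, population,
# per capita GDP (USD), and distance from Mexico in km (all values invented).
df = pd.DataFrame({
    "cases":      [3000, 1200, 450, 100, 2500],
    "population": [3.1e8, 6.6e7, 3.8e7, 5.4e6, 2.0e8],
    "gdp_pc":     [46000, 43000, 10000, 52000, 8000],
    "dist_km":    [1500, 8800, 4300, 9600, 7400],
})

# Gravity-style model: case counts regressed on log-transformed socio-economic
# covariates through a Poisson generalized linear model (log link).
gravity = smf.glm("cases ~ np.log(population) + np.log(gdp_pc) + np.log(dist_km)",
                  data=df, family=sm.families.Poisson()).fit()
print(gravity.params)
```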
16.  Kriged and modeled ambient air levels of benzene in an urban environment: an exposure assessment study 
Environmental Health  2011;10:21.
Background
There is increasing concern regarding the potential adverse health effects of air pollution, particularly hazardous air pollutants (HAPs). However, quantifying exposure to these pollutants is problematic.
Objective
Our goal was to explore the utility of kriging, a spatial interpolation method, for exposure assessment in epidemiologic studies of HAPs. We used benzene as an example and compared census tract-level kriged predictions to estimates obtained from the 1999 U.S. EPA National Air Toxics Assessment (NATA), Assessment System for Population Exposure Nationwide (ASPEN) model.
Methods
Kriged predictions were generated for 649 census tracts in Harris County, Texas using estimates of annual benzene air concentrations from 17 monitoring sites operating in Harris and surrounding counties from 1998 to 2000. Year 1999 ASPEN modeled estimates were also obtained for each census tract. Spearman rank correlation analyses were performed on the modeled and kriged benzene levels. Weighted kappa statistics were computed to assess agreement between discretized kriged and modeled estimates of ambient air levels of benzene.
Results
There was modest correlation between the predicted and modeled values across census tracts. Overall, 56.2%, 40.7%, 31.5% and 28.2% of census tracts were classified as having 'low', 'medium-low', 'medium-high' and 'high' ambient air levels of benzene, respectively, comparing predicted and modeled benzene levels. The weighted kappa statistic was 0.26 (95% confidence interval (CI) = 0.20, 0.31), indicating poor agreement between the two methods.
Conclusions
There was a lack of concordance between predicted and modeled ambient air levels of benzene. Applying methods of spatial interpolation for assessing exposure to ambient air pollutants in health effect studies is hindered by the placement and number of existing stationary monitors collecting HAP data. Routine monitoring needs to be expanded if we are to use these data to better assess environmental health risks in the future.
doi:10.1186/1476-069X-10-21
PMCID: PMC3070619  PMID: 21418645
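The kriging step itself requires a geostatistics library, but the agreement analysis described above is straightforward to sketch: a Spearman rank correlation on the continuous estimates and a linearly weighted kappa on discretized categories. All values below are invented, and the quartile-based discretization is an assumption; the paper's category cut points are not given in the abstract.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

# Hypothetical census-tract benzene estimates from the two methods.
rng = np.random.default_rng(0)
kriged = rng.gamma(2.0, 0.8, size=649)
modeled = 0.6 * kriged + rng.gamma(2.0, 0.5, size=649)

# Rank correlation between the continuous estimates.
rho, p = spearmanr(kriged, modeled)

def quartile_category(x):
    """Discretize estimates into four quartile-based categories ('low'...'high')."""
    return np.digitize(x, np.quantile(x, [0.25, 0.5, 0.75]))

# Linearly weighted kappa between the two methods' discretized classifications.
kappa = cohen_kappa_score(quartile_category(kriged),
                          quartile_category(modeled),
                          weights="linear")
print(f"Spearman rho = {rho:.2f}, weighted kappa = {kappa:.2f}")
```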
17.  Competing Causes of Death for Women With Breast Cancer and Change Over Time From 1975 to 2003 
This study aimed to determine whether the proportion of deaths due to breast cancer changed over time in different cohorts of women diagnosed with breast cancer. We identified 316,149 women diagnosed with breast cancer at age 20 or older during 1975–2003 from the Surveillance, Epidemiology, and End Results 9 tumor registries in the United States. Logistic regression models were used to assess the effects of time period on the likelihood of dying with breast cancer as the underlying cause of death, adjusting for other factors. Overall, the underlying cause of death was breast cancer in 52.8%, heart disease in 17.8%, and stroke in 4.9%. The percentage of deaths due to breast cancer did not change significantly from 1975 to 2003 in those who died <12 months after diagnosis, but decreased significantly in women who died between 1 and 15 years after diagnosis. The risk of death due to breast cancer in women diagnosed during 1995–1998 was significantly lower than in those diagnosed during 1975–1979 (odds ratio = 0.79, 95% confidence interval = 0.70–0.89), after adjusting for age, race, ethnicity, and tumor stage. The percentage of deaths due to breast cancer decreased significantly with age, from 87.5% in women younger than 40 years to 30.7% in those 80 or older, and was not significantly affected by year of diagnosis. The proportion of deaths due to breast cancer increased with advanced tumor stage and was similar across racial/ethnic groups. The findings demonstrate that the impact of breast cancer on overall death was reduced after the first year following diagnosis, but suggest the need for continued cancer surveillance.
doi:10.1097/COC.0b013e318142c865
PMCID: PMC2570158  PMID: 18391593
breast cancer; cause of death; women; SEER; tumor registry; temporal trend
18.  Polymorphisms of phase II xenobiotic-metabolizing and DNA repair genes and in vitro N-ethyl-N-nitrosourea–induced O6-ethylguanine levels in human lymphocytes 
Mutation Research 2006;627(2):146-157.
This study tested the hypothesis that genetic variants of phase II detoxification enzymes and DNA repair proteins affect individual response to DNA damage from alkylating agents. In 171 healthy individuals, an immunoslot blot assay was used to measure O6-ethylguanosine (O6-EtGua) adduct levels in peripheral blood lymphocytes treated with N-ethyl-N-nitrosourea (ENU) in vitro. The genotypes of GSTM1, GSTT1, GSTP1 I105V and A114V, MGMT L84F and I143V, XPD D312N and K751Q, and XRCC3 T241M were determined. Demographic and exposure information was collected by in-person interview. Student's t test, analysis of (co)variance, and multiple linear regression models were used in the statistical analyses. The mean and median (range) O6-EtGua levels were 94.6 and 84.8 (3.2–508.1) fmol/g DNA, respectively. The adduct level was significantly lower in people who had smoked ≥25 years than in never-smokers (square-root transformed mean values 8.20 versus 9.37, P = 0.03). Multiple linear regression models revealed that the GSTT1 polymorphism (β = −2.36, P = 0.009) was a significant predictor of the level of adducts in 82 never-smokers, whereas the number of years smoked (β = −0.08, P = 0.005) and XRCC3 T241M (β = 2.22, P = 0.007) were significant predictors in 89 ever-smokers. The associations of GSTP1 I105V, MGMT I143V, and XPD D312N with the level of adducts were not conclusive. Each polymorphism could explain 2% to 10% of the variation in adduct level. These observations suggest that the GSTT1 null and XRCC3 T241M polymorphisms may have some functional significance in modulating the level of ENU-induced DNA damage, and that these effects are smoking-dependent. Results from this exploratory study need to be confirmed in other experimental systems.
doi:10.1016/j.mrgentox.2006.11.001
PMCID: PMC1828113  PMID: 17158087
single nucleotide polymorphism (SNP); phase II xenobiotic-metabolizing enzyme; DNA Repair protein; N-ethyl-N-nitrosourea (ENU); O6-ethylguanosine (O6-EtGua); human lymphocytes
