Results 1-25 (32)
1.  Spatiotemporal and Spatial Threshold Models for Relating UV Exposures and Skin Cancer in the Central United States 
The exact mechanisms relating exposure to ultraviolet (UV) radiation and elevated risk of skin cancer remain the subject of debate. For example, there is disagreement on whether the main risk factor is duration of the exposure, its intensity, or some combination of both. There is also uncertainty regarding the form of the dose-response curve, with many authors believing only exposures exceeding a given (but unknown) threshold are important. In this paper we explore methods to estimate such thresholds using hierarchical spatial logistic models based on a sample of a cohort of X-ray technologists for whom we have self-reports of time spent in the sun and numbers of blistering sunburns in childhood. A preliminary goal is to explore the temporal pattern of UV exposure and its gradient. Changes in this pattern would imply that identical exposure self-reports from different calendar years may correspond to differing cancer risks.
PMCID: PMC2705173  PMID: 20161236
Conditionally autoregressive (CAR) model; Erythemal exposure; Hierarchical model; Non-melanoma skin cancer
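The threshold idea in this abstract can be sketched outside the spatial setting as a logistic dose-response in which only exposure above an unknown threshold tau raises risk. This is a minimal illustration, not the authors' hierarchical spatial model; the function name and parameter values are hypothetical.

```python
import numpy as np

def threshold_logit_prob(exposure, alpha, beta, tau):
    """P(skin cancer) under a logistic dose-response in which only
    exposure exceeding the threshold tau contributes to risk."""
    excess = np.maximum(0.0, np.asarray(exposure, dtype=float) - tau)
    return 1.0 / (1.0 + np.exp(-(alpha + beta * excess)))
```

In the full model, alpha would carry spatially correlated (e.g., CAR) random effects, and tau itself would receive a prior and be estimated within the MCMC.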
2.  A two-stage Bayesian design with sample size reestimation and subgroup analysis for phase II binary response trials 
Contemporary clinical trials  2013;36(2):10.1016/j.cct.2013.03.011.
Frequentist sample size determination for binary outcome data in a two-arm clinical trial requires initial guesses of the event probabilities for the two treatments. Misspecification of these event rates may lead to a poor estimate of the necessary sample size. In contrast, the Bayesian approach that treats the treatment effect as a random variable having some distribution may offer a better, more flexible approach. The Bayesian sample size proposed by Whitehead et al. (2008) for exploratory studies on efficacy justifies the acceptable minimum sample size by a “conclusiveness” condition. In this work, we introduce a new two-stage Bayesian design with sample size reestimation at the interim stage. Our design inherits the properties of good interpretation and easy implementation from Whitehead et al. (2008), generalizes their method to a two-sample setting, and uses a fully Bayesian predictive approach to reduce an overly large initial sample size when necessary. Moreover, our design can be extended to allow patient level covariates via logistic regression, now adjusting sample size within each subgroup based on interim analyses. We illustrate the benefits of our approach with a design in non-Hodgkin lymphoma with a simple binary covariate (patient gender), offering an initial step toward within-trial personalized medicine.
PMCID: PMC3757106  PMID: 23583925
Bayesian design; clinical trial; personalized medicine; predictive approach; sample size reestimation; subgroup analysis
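The predictive step for trimming an overly large initial sample size can be illustrated with a one-arm beta-binomial sketch: at the interim, simulate the remaining patients from the posterior predictive and ask how often the final analysis would be conclusive. This is only a schematic of that general idea, under assumed prior and cutoff values; it is not the design of Whitehead et al. or of this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def predictive_success_prob(x, n, n_max, p_target=0.3, post_cut=0.9,
                            a=1.0, b=1.0, n_sims=2000, n_post=2000):
    """Interim predictive probability that the final analysis will find
    P(p > p_target | data) > post_cut, under a Beta(a, b) prior."""
    n_rem = n_max - n
    p_draws = rng.beta(a + x, b + n - x, size=n_sims)   # interim posterior
    x_fut = rng.binomial(n_rem, p_draws)                # predictive future data
    # For each imagined final dataset, the posterior probability that p > p_target
    post_p = rng.beta(a + x + x_fut[:, None],
                      b + n_max - x - x_fut[:, None],
                      size=(n_sims, n_post))
    final_prob = (post_p > p_target).mean(axis=1)
    return float((final_prob > post_cut).mean())
```

A design could reduce its planned maximum when this probability is already high at the interim, or stop for futility when it is very low.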
3.  Hierarchical Multiresolution Approaches for Dense Point-Level Breast Cancer Treatment Data 
The analysis of point-level (geostatistical) data has historically been plagued by computational difficulties, owing to the high dimension of the nondiagonal spatial covariance matrices that need to be inverted. This problem is greatly compounded in hierarchical Bayesian settings, since these inversions need to take place at every iteration of the associated Markov chain Monte Carlo (MCMC) algorithm. This paper offers an approach for modeling the spatial correlation at two separate scales. This reduces the computational problem to a collection of lower-dimensional inversions that remain feasible within the MCMC framework. The approach yields full posterior inference for the model parameters of interest, as well as the fitted spatial response surface itself. We illustrate the importance and applicability of our methods using a collection of dense point-referenced breast cancer data collected over the mostly rural northern part of the state of Minnesota. Substantively, we wish to discover whether women who live more than a 60-mile drive from the nearest radiation treatment facility tend to opt for mastectomy over breast conserving surgery (BCS, or “lumpectomy”), which is less disfiguring but requires 6 weeks of follow-up radiation therapy. Our hierarchical multiresolution approach resolves this question while still properly accounting for all sources of spatial association in the data.
PMCID: PMC2344142  PMID: 19158942
Aggregated geographic data; Big N problem; Breast cancer; Conditionally autoregressive (CAR) model; Hierarchical modeling; Kriging
5.  Do neighborhood attributes moderate the relationship between alcohol establishment density and crime? 
Although numerous studies have found a positive association between density of alcohol establishments and various types of crime, few have examined how neighborhood attributes (e.g., schools, parks) could moderate this association. We used data from Minneapolis, Minnesota with neighborhood as the unit of analysis (n = 83). We examined eight types of crime (assault, rape, robbery, vandalism, nuisance crime, public alcohol consumption, driving while intoxicated, underage alcohol possession/consumption) and measured density as total number of establishments per roadway mile. Neighborhood attributes assessed as potential moderators included non-alcohol businesses, schools, parks, religious institutions, neighborhood activism, neighborhood quality, and number of condemned houses. Using Bayesian techniques, we created a model for each crime outcome (accounting for spatial auto-correlation and controlling for relevant demographics) with an interaction term (moderator × density) to test each potential moderating effect. Few interaction terms were statistically significant. Presence of at least one college was the only neighborhood attribute that consistently moderated the density-crime association, with presence of a college attenuating the association between density and three types of crime (assaults, nuisance crime, and public consumption). However, caution should be used when interpreting the moderating effect of college presence because of the small number of colleges in our sample. The lack of moderating effects of neighborhood attributes except for presence of a college suggests that the addition of alcohol establishments to any neighborhood regardless of its other attributes could result in an increase in a wide range of crime.
PMCID: PMC4058421  PMID: 24337980
Alcohol outlets; violent crime; neighborhood
6.  A Trivariate Continual Reassessment Method for Phase I/II Trials of Toxicity, Efficacy, and Surrogate Efficacy 
Statistics in medicine  2012;31(29):3885-3895.
Recently, many Bayesian methods have been developed for dose-finding when simultaneously modeling both toxicity and efficacy outcomes in a blended phase I/II fashion. A further challenge arises when all the true efficacy data cannot be obtained quickly after the treatment, so that surrogate markers are instead used (e.g., in cancer trials). We propose a framework to jointly model the probabilities of toxicity, efficacy and surrogate efficacy given a particular dose. Our trivariate binary model is specified as a composition of two bivariate binary submodels. In particular, we extend the bCRM approach [1], as well as utilize the Gumbel copula of Thall and Cook [2]. The resulting trivariate algorithm utilizes all the available data at any given time point, and can flexibly stop the trial early for either toxicity or efficacy. Our simulation studies demonstrate that our proposed method can successfully improve dosage targeting efficiency and guard against excess toxicity over a variety of true model settings and degrees of surrogacy.
PMCID: PMC3532950  PMID: 22807126
Bayesian adaptive methods; Continual reassessment method (CRM); Maximum tolerated dose (MTD); Phase I/II clinical trial; Surrogate efficacy; Toxicity
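The bivariate building block referenced here ([2]) links two binary outcomes through their marginal probabilities and a single association parameter. A minimal sketch of one common form of that Gumbel-type model follows (the composition of two such submodels into the trivariate algorithm is omitted, and the parameter values are illustrative):

```python
import math

def gumbel_joint(pi_e, pi_t, psi):
    """Joint probabilities of (efficacy a, toxicity b), both binary, given
    marginals pi_e and pi_t and association parameter psi."""
    assoc = (pi_e * (1 - pi_e) * pi_t * (1 - pi_t)
             * (math.exp(psi) - 1) / (math.exp(psi) + 1))
    joint = {}
    for a in (0, 1):
        for b in (0, 1):
            # Independence term times a signed association correction
            marg = (pi_e ** a) * ((1 - pi_e) ** (1 - a)) \
                 * (pi_t ** b) * ((1 - pi_t) ** (1 - b))
            joint[(a, b)] = marg + ((-1) ** (a + b)) * assoc
    return joint
```

Setting psi = 0 recovers independence; the four cell probabilities always sum to one and reproduce the stated marginals.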
7.  Bayesian Modeling and Analysis for Gradients in Spatiotemporal Processes 
Biometrics  2015;71(3):575-584.
Stochastic process models are widely employed for analyzing spatiotemporal datasets in various scientific disciplines including, but not limited to, environmental monitoring, ecological systems, forestry, hydrology, meteorology and public health. After inferring on a spatiotemporal process for a given dataset, inferential interest may turn to estimating rates of change, or gradients, over space and time. This manuscript develops fully model-based inference on spatiotemporal gradients under continuous space, continuous time settings. Our contribution is to offer, within a flexible spatiotemporal process model setting, a framework to estimate arbitrary directional gradients over space at any given timepoint, temporal derivatives at any given spatial location and, finally, mixed spatiotemporal gradients that reflect rapid change in spatial gradients over time and vice-versa. We achieve such inference without compromising on rich and flexible spatiotemporal process models and use nonseparable covariance structures. We illustrate our methodology using a simulated data example and subsequently apply it to a dataset of daily PM2.5 concentrations in California, where the spatiotemporal gradient process reveals the effects of California’s unique topography on pollution and detects the aftermath of a devastating series of wildfires.
PMCID: PMC4575262  PMID: 25898989
Gaussian process; Gradients; Markov chain Monte Carlo
8.  Order-free co-regionalized areal data models with application to multiple-disease mapping 
With the ready availability of spatial databases and geographical information system software, statisticians are increasingly encountering multivariate modelling settings featuring associations of more than one type: spatial associations between data locations and associations between the variables within the locations. Although flexible modelling of multivariate point-referenced data has recently been addressed by using a linear model of co-regionalization, existing methods for multivariate areal data typically suffer from unnecessary restrictions on the covariance structure or undesirable dependence on the conditioning order of the variables. We propose a class of Bayesian hierarchical models for multivariate areal data that avoids these restrictions, permitting flexible and order-free modelling of correlations both between variables and across areal units. Our framework encompasses a rich class of multivariate conditionally autoregressive models that are computationally feasible via modern Markov chain Monte Carlo methods. We illustrate the strengths of our approach over existing models by using simulation studies and also offer a real data application involving annual lung, larynx and oesophageal cancer death-rates in Minnesota counties between 1990 and 2000.
PMCID: PMC2963450  PMID: 20981244
Lattice data; Linear model of co-regionalization; Markov chain Monte Carlo methods; Multivariate conditionally autoregressive model; Spatial statistics
9.  Mixtures of Polya trees for flexible spatial frailty survival modelling 
Biometrika  2009;96(2):263-276.
Mixtures of Polya trees offer a very flexible nonparametric approach for modelling time-to-event data. Many such settings also feature spatial association that requires further sophistication, either at the point level or at the lattice level. In this paper, we combine these two aspects within three competing survival models, obtaining a data analytic approach that remains computationally feasible in a fully hierarchical Bayesian framework using Markov chain Monte Carlo methods. We illustrate our proposed methods with an analysis of spatially oriented breast cancer survival data from the Surveillance, Epidemiology and End Results program of the National Cancer Institute. Our results indicate appreciable advantages for our approach over competing methods that impose unrealistic parametric assumptions, ignore spatial association or both.
PMCID: PMC2749263  PMID: 19779579
Areal data; Bayesian modelling; Breast cancer; Conditionally autoregressive model; Log pseudo marginal likelihood; Nonparametric modelling
10.  Network Meta-analysis of Randomized Clinical Trials: Reporting the Proper Summaries 
Clinical trials (London, England)  2013;11(2):246-262.
In the absence of sufficient data directly comparing two or more treatments, indirect comparisons using network meta-analyses (NMA) across trials can potentially provide useful information to guide the use of treatments. Current contrast-based methods for NMA of binary outcomes do not model the “baseline” risks, focusing instead on the relative treatment effects, so patient-centered measures such as overall treatment-specific event rates and risk differences are not provided; this may create unnecessary obstacles for patients seeking to understand and trade off efficacy and safety measures. Many NMAs only report odds ratios, which are commonly misinterpreted as risk ratios by many physicians, patients and their caregivers.
We aim to develop network meta-analysis to accurately estimate the overall treatment-specific event rates.
A novel Bayesian hierarchical model, developed from a missing data perspective, that borrows information across multiple treatment arms, is used to illustrate how treatment-specific event proportions, risk differences (RD) and relative risks (RR) can be computed in NMAs. We first compare our approach to alternative methods using two hypothetical NMAs assuming either a fixed RR or a fixed RD, and then use two published NMAs on new-generation anti-depressants and antimanic drugs to illustrate the improved reporting of NMAs possible with this new approach.
In the hypothetical NMAs, our approach outperforms current contrast-based NMA methods in terms of bias. In the NMAs on new-generation anti-depressants and on antimanic drugs, the outcomes were common, with proportions ranging from 0.21 to 0.62. As expected, the RR estimates differ from ORs. In addition, differences in the magnitude of relative treatment effects and the statistical significance of several pairwise comparisons from previous reports could lead to different treatment recommendations.
First, to facilitate the estimation of overall treatment-specific event proportions, we assume that each study hypothetically compares all treatments, with unstudied arms being missing at random conditional on the observed arms. However, it is plausible that investigators may have selected treatment arms on purpose based on the results of previous trials, which may lead to “nonignorable missingness” and potentially bias our event rate estimation. Second, we have not considered methods to identify and account for potential inconsistency in our missing data network meta-analysis framework. Both methods await further development.
The proposed NMA method can accurately estimate treatment-specific event rates or proportions, RDs, and RRs, and is recommended in practice. Application of this approach can lead to different conclusions, as illustrated here, from current NMA models that only estimate ORs.
PMCID: PMC3972291  PMID: 24096635
network meta-analysis; multiple treatment comparisons; population averaged event rates; Bayesian hierarchical model
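The abstract's warning that odds ratios are misread as risk ratios is easy to quantify: given a control-arm event proportion, an OR maps deterministically to an RR and an RD. The sketch below is standard algebra for that conversion, not the paper's hierarchical model.

```python
def or_to_rr(odds_ratio, p0):
    """Risk ratio implied by an odds ratio at control-arm risk p0.
    Derived from OR = [p1/(1-p1)] / [p0/(1-p0)] solved for p1."""
    return odds_ratio / (1.0 - p0 + p0 * odds_ratio)

def or_to_rd(odds_ratio, p0):
    """Risk difference p1 - p0 implied by the same odds ratio and baseline risk."""
    return or_to_rr(odds_ratio, p0) * p0 - p0
```

At a common outcome rate such as p0 = 0.4 (inside the 0.21-0.62 range reported above), an OR of 2.0 corresponds to an RR of only about 1.43, which is exactly why reporting ORs alone can mislead; only for rare outcomes do OR and RR nearly coincide.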
11.  A Randomized Controlled Trial Comparing Health and Quality of Life of Lung Transplant Recipients Following Nurse and Computer-Based Triage Utilizing Home Spirometry Monitoring 
Telemedicine Journal and e-Health  2013;19(12):897-903.
Background: Lung transplantation is now a standard intervention for patients with advanced lung disease. Home monitoring of pulmonary function and symptoms has been used to follow the progress of lung transplant recipients in an effort to improve care and clinical status. The study objective was to determine the relative performance of a computer-based Bayesian algorithm compared with a manual nurse decision process for triaging clinical intervention in lung transplant recipients participating in a home monitoring program. Materials and Methods: This randomized controlled trial assigned 65 lung transplant recipients to either the Bayesian or nurse triage study arm. Subjects monitored and transmitted spirometry and respiratory symptoms daily to the data center using an electronic spirometer/diary device. Subjects completed the Short Form-36 (SF-36) survey at baseline and after 1 year. End points were change from baseline after 1 year in forced expiratory volume at 1 s (FEV1) and quality of life (SF-36 scales) within and between each study arm. Results: There were no statistically significant differences between groups in FEV1 or SF-36 scales at baseline or after 1 year. Results were comparable between the nurse and Bayesian systems for detecting changes in spirometry and symptoms, providing support for using computer-based triage support systems as remote monitoring triage programs become more widely available. Conclusions: The feasibility of monitoring critical patient data with a computer-based decision system is especially important given the likely economic constraints on the growth of the nurse workforce capable of providing these early detection triage services.
PMCID: PMC3850431  PMID: 24083367
home health monitoring; telehealth; telemedicine; m-health; transplantation
12.  Disease mapping 
PMCID: PMC4180601  PMID: 25285319
13.  Bayesian Adaptive Design for Device Surveillance 
Post-market device surveillance studies often have important primary objectives tied to estimating a survival function at some future time T with a certain amount of precision.
This paper presents the details and various operating characteristics of a Bayesian adaptive design for device surveillance, as well as a method for estimating a sample size vector (determined by the maximum sample size and a pre-set number of interim looks) that will deliver the desired power.
We adopt a Bayesian adaptive framework which recognizes the fact that persons enrolled in a study report their results over time, not all at once. At each interim look we assess whether we expect to achieve our goals with only the current group, or whether the achievement of such goals is extremely unlikely even for the maximum sample size.
Our Bayesian adaptive design can outperform two non-adaptive frequentist methods currently recommended by FDA guidance documents in many settings.
Our method's performance can be sensitive to model misspecification and changes in the trial's enrollment rate.
The proposed design provides a more efficient framework for conducting postmarket surveillance of medical devices.
PMCID: PMC4103032  PMID: 23188891
Adaptive trial; Bayesian statistics; futility analysis; interim analysis; Monte Carlo sampling
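The core interim quantity in such a surveillance design, the posterior probability that the survival function at time T meets a performance goal, can be sketched under an exponential event-time model with a conjugate Gamma prior on the hazard. This is an illustrative simplification with assumed hyperparameters, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_survival_meets_goal(events, exposure, T, s0,
                             a=0.1, b=0.1, n_draws=100_000):
    """Posterior P(S(T) > s0) when event times are exponential with rate lam
    and lam ~ Gamma(a, b): the posterior is Gamma(a + events, b + exposure)."""
    lam = rng.gamma(a + events, 1.0 / (b + exposure), size=n_draws)
    return float(np.mean(np.exp(-lam * T) > s0))
```

At each interim look, a rule of this type could declare success once the probability clears a high bar, or stop for futility if success is extremely unlikely even at the maximum sample size.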
14.  Ecological boundary detection using Bayesian areal wombling 
Ecology  2010;91(12):3448-3514.
The study of ecological boundaries and their dynamics is of fundamental importance to much of ecology, biogeography, and evolution. Over the past two decades, boundary analysis (of which wombling is a subfield) has received considerable research attention, resulting in multiple approaches for the quantification of ecological boundaries. Nonetheless, few methods have been developed that can simultaneously (1) analyze spatially homogenized data sets (i.e., areal data in the form of polygons rather than point-referenced data); (2) account for spatial structure in these data and uncertainty associated with them; and (3) objectively assign probabilities to boundaries once detected. Here we describe the application of a Bayesian hierarchical framework for boundary detection developed in public health, which addresses these issues but which has seen limited application in ecology. As examples, we analyze simulated spread data and the historic pattern of spread of an invasive species, the hemlock woolly adelgid (Adelges tsugae), using county-level summaries of the year of first reported infestation and several covariates potentially important to influencing the observed spread dynamics. Bayesian areal wombling is a promising approach for analyzing ecological boundaries and dynamics related to changes in the distributions of native and invasive species.
PMCID: PMC4024662  PMID: 21302814
Adelges tsugae; boundary analysis; ecotones; edge detection; hemlock woolly adelgid; invasive species; spatial statistics
15.  Commensurate Priors for Incorporating Historical Information in Clinical Trials Using General and Generalized Linear Models 
Bayesian analysis (Online)  2012;7(3):639-674.
Assessing between-study variability in the context of conventional random-effects meta-analysis is notoriously difficult when incorporating data from only a small number of historical studies. In order to borrow strength, historical and current data are often assumed to be fully homogeneous, but this can have drastic consequences for power and Type I error if the historical information is biased. In this paper, we propose empirical and fully Bayesian modifications of the commensurate prior model (Hobbs et al., 2011) extending Pocock (1976), and evaluate their frequentist and Bayesian properties for incorporating patient-level historical data using general and generalized linear mixed regression models. Our proposed commensurate prior models lead to preposterior admissible estimators that facilitate alternative bias-variance trade-offs than those offered by pre-existing methodologies for incorporating historical data from a small number of historical studies. We also provide a sample analysis of a colon cancer trial comparing time-to-disease progression using a Weibull regression model.
PMCID: PMC4007051  PMID: 24795786
clinical trials; historical controls; meta-analysis; Bayesian analysis; survival analysis; correlated data
16.  Semiparametric Bayesian commensurate survival model for post-market medical device surveillance with non-exchangeable historical data 
Biometrics  2013;70(1):185-191.
Trial investigators often have a primary interest in the estimation of the survival curve in a population for which there exists acceptable historical information from which to borrow strength. However, borrowing strength from a historical trial that is non-exchangeable with the current trial can result in biased conclusions. In this paper we propose a fully Bayesian semiparametric method for the purpose of attenuating bias and increasing efficiency when jointly modeling time-to-event data from two possibly non-exchangeable sources of information. We illustrate the mechanics of our methods by applying them to a pair of post-market surveillance datasets regarding adverse events in persons on dialysis that had either a bare metal or drug-eluting stent implanted during a cardiac revascularization surgery. We finish with a discussion of the advantages and limitations of this approach to evidence synthesis, as well as directions for future work in this area. The paper’s Supplementary Materials offer simulations to show our procedure’s bias, mean squared error, and coverage probability properties in a variety of settings.
PMCID: PMC3954409  PMID: 24308779
Bayesian hierarchical modeling; Commensurate prior; Evidence synthesis; Flexible proportional hazards model; Hazard smoothing; Non-exchangeable sources of data
17.  Adaptive adjustment of the randomization ratio using historical control data 
Clinical trials (London, England)  2013;10(3):10.1177/1740774513483934.
Prospective trial design often occurs in the presence of “acceptable” [1] historical control data. Typically, these data are utilized for treatment comparison only a posteriori, in retrospective random-effects meta-analyses estimating population-averaged effects.
We propose and investigate an adaptive trial design in the context of an actual randomized controlled colorectal cancer trial. This trial, originally reported by Goldberg et al. [2], succeeded a similar trial reported by Saltz et al. [3], and used a control therapy identical to that tested (and found beneficial) in the Saltz trial.
The proposed trial implements an adaptive randomization procedure for allocating patients aimed at balancing total information (concurrent and historical) among the study arms. This is accomplished by assigning more patients to receive the novel therapy in the absence of strong evidence for heterogeneity among the concurrent and historical controls. Allocation probabilities adapt as a function of the effective historical sample size (EHSS) characterizing relative informativeness defined in the context of a piecewise exponential model for evaluating time to disease progression. Commensurate priors [4] are utilized to assess historical and concurrent heterogeneity at interim analyses and to borrow strength from the historical data in the final analysis. The adaptive trial’s frequentist properties are simulated using the actual patient-level historical control data from the Saltz trial and the actual enrollment dates for patients enrolled into the Goldberg trial.
Assessing concurrent and historical heterogeneity at interim analyses and balancing total information with the adaptive randomization procedure leads to trials that on average assign more new patients to the novel treatment when the historical controls are unbiased or slightly biased compared to the concurrent controls. Large magnitudes of bias lead to approximately equal allocation of patients among the treatment arms. Using the proposed commensurate prior model to borrow strength from the historical data, after balancing total information with the adaptive randomization procedure, provides admissible estimators of the novel treatment effect with desirable bias-variance trade-offs.
Adaptive randomization methods in general are sensitive to population drift and more suitable for trials that initiate with gradual enrollment. Balancing information among study arms in time-to-event analyses is difficult in the presence of informative right-censoring.
The proposed design could prove important in trials that follow recent evaluations of a control therapy. Efficient use of the historical controls is especially important in contexts where reliance on pre-existing information is unavoidable because the control therapy is exceptionally hazardous, expensive, or the disease is rare.
PMCID: PMC3856641  PMID: 23690095
adaptive designs; Bayesian analysis; historical controls
18.  The Association between Density of Alcohol Establishments and Violent Crime within Urban Neighborhoods 
Numerous studies have found that areas with higher alcohol establishment density are more likely to have higher violent crime rates but many of these studies did not assess the differential effects of type of establishments or the effects on multiple categories of crime. In this study, we assess whether alcohol establishment density is associated with four categories of violent crime, and whether the strength of the associations varies by type of violent crime and by on-premise establishments (e.g., bars, restaurants) versus off-premise establishments (e.g., liquor and convenience stores).
Data come from the city of Minneapolis, Minnesota in 2009 and were aggregated and analyzed at the neighborhood level. Across the 83 neighborhoods in Minneapolis, we examined four categories of violent crime: assault, rape, robbery, and total violent crime. We used a Bayesian hierarchical inference approach to model the data, accounting for spatial auto-correlation and controlling for relevant neighborhood demographics. Models were estimated for total alcohol establishment density as well as separately for on-premise establishments and off-premise establishments.
Positive, statistically significant associations were observed for total alcohol establishment density and each of the violent crime outcomes. We estimate that a 3.9% to 4.3% increase across crime categories would result from a 20% increase in neighborhood establishment density. The associations between on-premise density and each of the individual violent crime outcomes were also all positive and significant and similar in strength as for total establishment density. The relationships between off-premise density and the crime outcomes were all positive but not significant for rape or total violent crime, and the strength of the associations was weaker than those for total and on-premise density.
Results of this study, combined with earlier findings, provide more evidence that community leaders should be cautious about increasing the density of alcohol establishments within their neighborhoods.
PMCID: PMC3412911  PMID: 22587231
Alcohol outlets; violent crime; neighborhood
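The reported figures, a 3.9% to 4.3% rise in crime from a 20% increase in establishment density, are consistent with a multiplicative model in which crime scales as density raised to a power beta. A back-of-the-envelope check, with the beta values assumed purely for illustration:

```python
def implied_crime_change(density_pct_increase, beta):
    """Percent change in crime if crime scales as density ** beta."""
    return ((1.0 + density_pct_increase / 100.0) ** beta - 1.0) * 100.0
```

Elasticities of roughly 0.21 to 0.23 reproduce the reported 3.9-4.3% range for a 20% density increase.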
19.  Impact of Small Group Size on Neighborhood Influences in Multilevel Models 
Given the growing availability of multilevel data from national surveys, researchers interested in contextual effects may find themselves with a small number of individuals per group. Although there is a growing body of literature on sample size in multilevel modeling, few have explored the impact of group size < 5.
In a simulated analysis of real data, we examined the impact of group size < 5 on both a continuous and dichotomous outcome in a simple two-level multilevel model. Models with group sizes 1 to 5 were compared to models with complete data. Four different linear and logistic models were examined: empty models, models with a group-level covariate, models with an individual-level covariate, and models with an aggregated group-level covariate. We further evaluated whether the impact of small group size differed depending on the total number of groups.
When the number of groups was large (N=459), neither fixed nor random components were affected by small group size, even when 90% of tracts had only 1 individual per tract and even when an aggregated group-level covariate was examined. As the number of groups decreased, the standard error estimates of both fixed and random effects were inflated. Furthermore, group-level variance estimates were more affected than were fixed components.
Analyses of datasets with a small to moderate number of groups, most of which have very small group sizes (n < 5), may fail to detect a group-level effect when one exists, or fail to consider one at all, and may also be under-powered to detect fixed effects.
PMCID: PMC3706628  PMID: 20508007
Multilevel; Neighborhood; Body Weight; Obesity; Sample Size
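The instability of group-level variance estimates when there are few groups can be reproduced in a small simulation. This numpy sketch uses a simple method-of-moments estimator rather than the multilevel-model fits in the paper, and all settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def tau2_estimates(n_groups, group_size, tau2=1.0, sigma2=1.0, reps=500):
    """Method-of-moments estimates of the group-level variance tau^2 from a
    random-intercept model y_ij = u_i + e_ij (requires group_size >= 2)."""
    out = []
    for _ in range(reps):
        u = rng.normal(0.0, np.sqrt(tau2), size=(n_groups, 1))
        y = u + rng.normal(0.0, np.sqrt(sigma2), size=(n_groups, group_size))
        s2_within = y.var(axis=1, ddof=1).mean()
        # Var(group means) = tau^2 + sigma^2 / m, so subtract the within part
        out.append(y.mean(axis=1).var(ddof=1) - s2_within / group_size)
    return np.asarray(out)
```

With 459 groups the estimator stays tight even at group size 2; with 30 groups its spread grows markedly, mirroring the inflated variance-component uncertainty the abstract reports.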
20.  Bayesian Adaptive Trial Design for a Newly Validated Surrogate Endpoint 
Biometrics  2011;68(1):258-267.
The evaluation of surrogate endpoints for primary use in future clinical trials is an increasingly important research area, due to demands for more efficient trials coupled with recent regulatory acceptance of some surrogates as ‘valid.’ However, little consideration has been given to how a trial which utilizes a newly-validated surrogate endpoint as its primary endpoint might be appropriately designed. We propose a novel Bayesian adaptive trial design that allows the new surrogate endpoint to play a dominant role in assessing the effect of an intervention, while remaining realistically cautious about its use. By incorporating multi-trial historical information on the validated relationship between the surrogate and clinical endpoints, then subsequently evaluating accumulating data against this relationship as the new trial progresses, we adaptively guard against an erroneous assessment of treatment based upon a truly invalid surrogate. When the joint outcomes in the new trial seem plausible given similar historical trials, we proceed with the surrogate endpoint as the primary endpoint, and do so adaptively, perhaps stopping the trial for early success or inferiority of the experimental treatment, or for futility. Otherwise, we discard the surrogate and switch adaptive determinations to the original primary endpoint. We use simulation to test the operating characteristics of this new design compared to a standard O’Brien-Fleming approach, as well as the ability of our design to discriminate trustworthy from untrustworthy surrogates in hypothetical future trials. Furthermore, we investigate possible benefits using patient-level data from 18 adjuvant therapy trials in colon cancer, where disease-free survival is considered a newly-validated surrogate endpoint for overall survival.
PMCID: PMC3218207  PMID: 21838811
Bayesian adaptive design; Clinical trials; Surrogate endpoints; Survival analysis
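The abstract's core adaptive step, checking whether the accumulating joint outcomes remain plausible under the historical surrogate-clinical relationship, can be sketched very roughly as follows. This is not the paper's actual Bayesian procedure; it is a minimal sketch assuming a hypothetical linear historical relationship with slope b, intercept a, and residual standard deviation resid_sd, all of which are invented for illustration:

```python
def surrogate_check(surr_effect, clin_effect, a, b, resid_sd, z_max=2.0):
    """Flag whether interim joint outcomes look plausible under a
    (hypothetical) historical surrogate-clinical relationship."""
    predicted = a + b * surr_effect           # clinical effect implied by history
    z = (clin_effect - predicted) / resid_sd  # standardized discrepancy
    return abs(z) <= z_max                    # True: keep surrogate as primary

# Hypothetical interim estimates in line with the historical relationship
print(surrogate_check(surr_effect=-0.3, clin_effect=-0.25,
                      a=0.0, b=0.9, resid_sd=0.1))  # → True
```

When the check fails, the design described above would instead revert adaptive decision-making to the original clinical endpoint.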
21.  Effect of Dissemination of Evidence in Reducing Injuries from Falls 
The New England journal of medicine  2008;359(3):252-261.
Falling is a common and morbid condition among elderly persons. Effective strategies to prevent falls have been identified but are underutilized.
Using a nonrandomized design, we compared rates of injuries from falls in a region of Connecticut where clinicians had been exposed to interventions to change clinical practice (intervention region) and in a region where clinicians had not been exposed to such interventions (usual-care region). The interventions encouraged primary care clinicians and staff members involved in home care, outpatient rehabilitation, and senior centers to adopt effective risk assessments and strategies for the prevention of falls (e.g., medication reduction and balance and gait training). The outcomes were rates of serious fall-related injuries (hip and other fractures, head injuries, and joint dislocations) and fall-related use of medical services per 1000 person-years among persons who were 70 years of age or older. The interventions occurred from 2001 to 2004, and the evaluations took place from 2004 to 2006.
Before the interventions, the adjusted rates of serious fall-related injuries (per 1000 person-years) were 31.2 in the usual-care region and 31.9 in the intervention region. During the evaluation period, the adjusted rates were 31.4 and 28.6, respectively (adjusted rate ratio, 0.91; 95% Bayesian credibility interval, 0.88 to 0.94). Between the preintervention period and the evaluation period, the rate of fall-related use of medical services increased from 68.1 to 83.3 per 1000 person-years in the usual-care region and from 70.7 to 74.2 in the intervention region (adjusted rate ratio, 0.89; 95% credibility interval, 0.86 to 0.92). The percentages of clinicians who received intervention visits ranged from 62% (131 of 212 primary care offices) to 100% (26 of 26 home care agencies).
Dissemination of evidence about fall prevention, coupled with interventions to change clinical practice, may reduce fall-related injuries in elderly persons.
PMCID: PMC3472807  PMID: 18635430
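The arithmetic behind the fall-related service-use comparison above can be reproduced crudely from the reported rates. Note the unadjusted ratio-of-ratios below (about 0.86) differs from the paper's model-adjusted estimate of 0.89:

```python
# Rates of fall-related use of medical services per 1000 person-years,
# as reported in the abstract above
usual_pre, usual_post = 68.1, 83.3
interv_pre, interv_post = 70.7, 74.2

usual_change = usual_post / usual_pre     # ~1.22: 22% increase, usual care
interv_change = interv_post / interv_pre  # ~1.05: 5% increase, intervention
crude_ratio = interv_change / usual_change  # unadjusted ratio of ratios
print(round(crude_ratio, 2))  # prints 0.86
```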
22.  Multilevel empirical Bayes modeling for improved estimation of toxicant formulations to suppress parasitic sea lamprey in the upper Great Lakes 
Biometrics  2011;67(3):1153-1162.
Estimation of extreme quantal-response statistics, such as the concentration required to kill 99.9% of test subjects (LC99.9), remains a challenge in the presence of multiple covariates and complex study designs. Accurate and precise estimates of the LC99.9 for mixtures of toxicants are critical to ongoing control of a parasitic invasive species, the sea lamprey, in the Laurentian Great Lakes of North America. The toxicity of those chemicals is affected by local and temporal variations in water chemistry, which must be incorporated into the modeling. We develop multilevel empirical Bayes models for data from multiple laboratory studies. Our approach yields more accurate and precise estimation of the LC99.9 compared to alternative models considered. This study demonstrates that properly incorporating hierarchical structure in laboratory data yields better estimates of LC99.9 stream treatment values that are critical to larval control in the field. In addition, out-of-sample prediction of the results of in situ tests reveals the presence of a latent seasonal effect not manifest in the laboratory studies, suggesting avenues for future study and illustrating the importance of dual consideration of both experimental and observational data.
PMCID: PMC3111860  PMID: 21361894
Lethal concentration/dose; Markov chain Monte Carlo (MCMC); Non-linear model; Quantal-response bioassay
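The basic inversion behind an LC99.9 estimate can be sketched for a single fitted logit dose-response curve; this omits the paper's covariates and multilevel structure entirely, and the coefficients below are hypothetical:

```python
import math

def lc_quantile(b0, b1, p=0.999):
    """Invert a fitted logit dose-response, logit(p) = b0 + b1*log10(conc),
    to recover the concentration affecting a fraction p of subjects."""
    logit_p = math.log(p / (1.0 - p))
    return 10 ** ((logit_p - b0) / b1)

# Hypothetical intercept and slope on the log10 concentration scale
lc999 = lc_quantile(b0=-4.0, b1=8.0)
print(round(lc999, 1))  # → 23.1
```

Because p = 0.999 sits far in the tail of the curve, small errors in the slope translate into large errors in the recovered concentration, which is why the hierarchical borrowing of strength described above matters.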
23.  Hierarchical Commensurate and Power Prior Models for Adaptive Incorporation of Historical Information in Clinical Trials 
Biometrics  2011;67(3):1047-1056.
Bayesian clinical trial designs offer the possibility of a substantially reduced sample size, increased statistical power, and reductions in cost and ethical hazard. However, when prior and current information conflict, Bayesian methods can lead to higher than expected Type I error, as well as the possibility of a costlier and lengthier trial. This motivates an investigation of the feasibility of hierarchical Bayesian methods for incorporating historical data that are adaptively robust to prior information that reveals itself to be inconsistent with the accumulating experimental data. In this paper, we present several models that allow the commensurability of the information in the historical and current data to determine how much historical information is used. A primary tool is an elaboration of the traditional power prior approach, based upon a measure of commensurability for Gaussian data. We compare the frequentist performance of several methods using simulations, and close with an example of a colon cancer trial that illustrates a linear models extension of our adaptive borrowing approach. Our proposed methods produce more precise estimates of the model parameters, in particular conferring statistical significance to the observed reduction in tumor size for the experimental regimen as compared to the control regimen.
PMCID: PMC3134568  PMID: 21361892
Adaptive Designs; Bayesian; Colorectal Cancer; Clinical Trials; Power Priors
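The traditional (static) power prior that the abstract builds on can be illustrated in the simplest conjugate case: a Gaussian mean with known unit variance and a flat initial prior, where the historical likelihood is raised to a fixed power a0 in [0, 1]. The commensurate priors proposed above instead let the data determine the degree of borrowing; this sketch shows only the static baseline:

```python
def power_prior_mean(ybar, n, ybar0, n0, a0):
    """Posterior mean for a Gaussian mean (known unit variance, flat initial
    prior) when the historical likelihood is downweighted by power a0."""
    return (n * ybar + a0 * n0 * ybar0) / (n + a0 * n0)

# a0=1 fully pools current and historical studies; a0=0 ignores history
print(power_prior_mean(ybar=1.0, n=50, ybar0=2.0, n0=50, a0=1.0))  # → 1.5
print(power_prior_mean(ybar=1.0, n=50, ybar0=2.0, n0=50, a0=0.0))  # → 1.0
```

Intermediate a0 interpolates between the two extremes; the hazard the abstract notes arises when a0 is fixed large but the historical mean is far from the current one.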
24.  Joint modeling of multiple longitudinal patient-reported outcomes and survival 
Researchers often include patient-reported outcomes (PROs) in Phase III clinical trials to demonstrate the value of treatment from the patient’s perspective. These data are collected as longitudinal repeated measures and are often censored by occurrence of a clinical event that defines a survival time. Hierarchical Bayesian models having latent individual-level trajectories provide a flexible approach to modeling such multiple outcome types simultaneously. We consider the case of many zeros in the longitudinal data motivating a mixture model, and demonstrate several approaches to modeling multiple longitudinal PROs with survival in a cancer clinical trial. These joint models may enhance Phase III analyses and better inform health care decision makers.
PMCID: PMC3212950  PMID: 21830926
cancer; failure time; multivariate analysis; random effects model; repeated measures
25.  Hierarchical and Joint Site-Edge Methods for Medicare Hospice Service Region Boundary Analysis 
Biometrics  2010;66(2):355-364.
Hospice service offers a convenient and ethically preferable health care option for terminally ill patients. However, this option is unavailable to patients in remote areas not served by any hospice system. In this paper we seek to determine the service areas of two particular cancer hospice systems in northeastern Minnesota based only on death counts abstracted from Medicare billing records. The problem is one of spatial boundary analysis, a field that appears statistically underdeveloped for irregular areal (lattice) data, even though most publicly available human health data are of this type. In this paper, we suggest a variety of hierarchical models for areal boundary analysis that hierarchically or jointly parameterize both the areas and the edge segments. This leads to conceptually appealing solutions for our data that remain computationally feasible. While our approaches parallel similar developments in statistical image restoration using Markov random fields, important differences arise due to the irregular nature of our lattices, the sparseness and high variability of our data, the existence of important covariate information, and most importantly, our desire for full posterior inference on the boundary. Our results successfully delineate service areas for our two Minnesota hospice systems that sometimes conflict with the hospices' self-reported service areas. We also obtain boundaries for the spatial residuals from our fits, separating regions that differ for reasons yet unaccounted for by our model.
PMCID: PMC3061258  PMID: 19645704
Areal data; Conditionally autoregressive (CAR) model; Health services research; Ising model; Wombling
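The deterministic idea underlying areal boundary analysis ("wombling") can be sketched crudely: flag an edge between two areas as a boundary when their fitted values differ sharply. The paper instead parameterizes the edges and delivers full posterior boundary inference; the values and threshold below are hypothetical:

```python
def wombling_boundaries(values, edges, threshold):
    """Crude areal wombling: flag an edge (i, j) as a boundary when the
    absolute difference of the two areas' fitted values exceeds a threshold."""
    return [(i, j) for (i, j) in edges if abs(values[i] - values[j]) > threshold]

# Hypothetical smoothed rate estimates for four areas and their adjacencies
rates = {0: 0.10, 1: 0.12, 2: 0.45, 3: 0.50}
adjacency = [(0, 1), (1, 2), (2, 3)]
print(wombling_boundaries(rates, adjacency, threshold=0.2))  # → [(1, 2)]
```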