A dynamic regime provides a sequence of treatments that are tailored to patient-specific characteristics and outcomes. In 2004, James Robins proposed g-estimation using structural nested mean models for making inference about the optimal dynamic regime in a multi-interval trial. The method has a clear advantage over traditional parametric approaches: it does not require specification of the full longitudinal distribution of the covariates and outcomes. Robins’ g-estimation method always yields consistent estimators, but these can be asymptotically biased under a given structural nested mean model for certain longitudinal distributions of the treatments and covariates, termed exceptional laws. In fact, under the null hypothesis of no treatment effect, every distribution constitutes an exceptional law under structural nested mean models which allow for interaction of current treatment with past treatments or covariates. This paper provides an explanation of exceptional laws and describes a new approach to g-estimation, which we call Zeroing Instead of Plugging In (ZIPI). ZIPI yields estimators that are nearly identical to recursive g-estimators at non-exceptional laws while substantially reducing the bias at an exceptional law when decision rule parameters are not shared across intervals.
adaptive treatment strategies; asymptotic bias; dynamic treatment regimes; g-estimation; optimal structural nested mean models; pre-test estimators
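As a concrete illustration of why exceptional laws cause trouble, the following toy Monte Carlo (a minimal sketch with hypothetical settings, not the authors' estimator) shows that a plugged-in max(·, 0) term, of the kind that enters a recursive g-estimation pseudo-outcome, is biased upward under the null of no second-interval effect, while zeroing a nonsignificant estimate, in the spirit of ZIPI, removes most of that bias.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_replication(n=200, beta2=0.0):
    # Stage-2 data: outcome depends on treatment a2 with true effect beta2 = 0.
    a2 = rng.integers(0, 2, n)
    y = beta2 * a2 + rng.normal(size=n)
    b2_hat = y[a2 == 1].mean() - y[a2 == 0].mean()
    se = np.sqrt(y[a2 == 1].var(ddof=1) / (a2 == 1).sum()
                 + y[a2 == 0].var(ddof=1) / (a2 == 0).sum())
    # Recursive (plug-in) term in the stage-1 pseudo-outcome: max(b2_hat, 0).
    plug_in = max(b2_hat, 0.0)
    # ZIPI-style pretest: set the term to zero unless the effect is significant.
    zipi = plug_in if abs(b2_hat / se) > 1.96 else 0.0
    return plug_in, zipi

reps = np.array([one_replication() for _ in range(5000)])
print("mean plug-in term:", reps[:, 0].mean())  # noticeably above 0 under the null
print("mean ZIPI term:   ", reps[:, 1].mean())  # close to 0
```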
Individualized treatment rules, or rules for altering treatments over time in response to changes in individual covariates, are of primary importance in the practice of clinical medicine. Several statistical methods aim to estimate the rule, termed an optimal dynamic treatment regime, which will result in the best expected outcome in a population. In this article, we discuss estimation of an alternative type of dynamic regime—the statically optimal treatment rule. History-adjusted marginal structural models (HA-MSM) estimate individualized treatment rules that assign, at each time point, the first action of the future static treatment plan that optimizes expected outcome given a patient’s covariates. However, as we discuss here, HA-MSM-derived rules can depend on the way in which treatment was assigned in the data from which the rules were derived. We discuss the conditions sufficient for treatment rules identified by HA-MSM to be statically optimal, or in other words, to select the optimal future static treatment plan at each time point, regardless of the way in which past treatment was assigned. The resulting treatment rules form appropriate candidates for evaluation using randomized controlled trials. We demonstrate that a history-adjusted individualized treatment rule is statically optimal if it depends on a set of covariates that are sufficient to control for confounding of the effect of past treatment history on outcome. Methods and results are illustrated using an example drawn from the antiretroviral treatment of patients infected with HIV. Specifically, we focus on rules for deciding when to modify the treatment of patients infected with resistant virus.
causal inference; longitudinal data; dynamic treatment regime; adaptive treatment strategy; history-adjusted marginal structural model; human immunodeficiency virus
A treatment regime is a rule that assigns a treatment, among a set of possible treatments, to a patient as a function of his/her observed characteristics, hence “personalizing” treatment to the patient. The goal is to identify the optimal treatment regime that, if followed by the entire population of patients, would lead to the best outcome on average. Given data from a clinical trial or observational study, for a single treatment decision, the optimal regime can be found by assuming a regression model for the expected outcome conditional on treatment and covariates, where, for a given set of covariates, the optimal treatment is the one that yields the most favorable expected outcome. However, treatment assignment via such a regime is suspect if the regression model is incorrectly specified. Recognizing that, even if misspecified, such a regression model defines a class of regimes, we instead consider finding the optimal regime within such a class by finding the regime that optimizes an estimator of the overall population mean outcome. To take into account possible confounding in an observational study and to increase precision, we use a doubly robust augmented inverse probability weighted estimator for this purpose. Simulations and application to data from a breast cancer clinical trial demonstrate the performance of the method.
Doubly robust estimator; Inverse probability weighting; Outcome regression; Personalized medicine; Potential outcomes; Propensity score
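A minimal single-decision sketch of this idea, assuming simulated data and generic scikit-learn working models (the threshold-indexed class of rules and all parameter values are hypothetical stand-ins for the paper's setup): the value of each candidate regime is estimated with the doubly robust augmented inverse probability weighted (AIPW) estimator, and the regime maximizing it is selected.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=(n, 1))
a = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))          # confounded treatment
y = x[:, 0] + a * (1.0 - x[:, 0]) + rng.normal(size=n)   # treatment helps iff x < 1

def aipw_value(rule):
    """AIPW estimate of the mean outcome if everyone followed `rule`."""
    d = rule(x)
    ps = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]   # P(A=1|X)
    pi_d = np.where(d == 1, ps, 1 - ps)                          # P(A=d(X)|X)
    om = LinearRegression().fit(np.column_stack([x, a, x[:, 0] * a]), y)
    m_d = om.predict(np.column_stack([x, d, x[:, 0] * d]))
    c = (a == d).astype(float)
    return np.mean(c * y / pi_d - (c - pi_d) / pi_d * m_d)

# Class of regimes indexed by a threshold: "treat iff x < eta".
etas = np.linspace(-2, 2, 81)
values = [aipw_value(lambda z, t=t: (z[:, 0] < t).astype(int)) for t in etas]
print("estimated optimal threshold:", etas[int(np.argmax(values))])  # truth: 1
```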
A dynamic treatment regime is a set of decision rules, one per stage, each taking a patient’s treatment and covariate history as input and outputting a recommended treatment. When estimating the optimal dynamic treatment regime from longitudinal data, the estimators of the treatment effect parameters at any stage prior to the last can be nonregular under certain distributions of the data. This results in biased estimates and invalid confidence intervals for the treatment effect parameters. In this paper, we discuss both the problem of nonregularity and available estimation methods. We provide an extensive simulation study to compare the estimators in terms of their ability to lead to valid confidence intervals under a variety of nonregular scenarios. Analysis of a data set from a smoking cessation trial is provided as an illustration.
dynamic treatment regime; nonregularity; bias; hard-threshold; soft-threshold; empirical Bayes; bootstrap
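The hard- and soft-threshold operators referred to above can be sketched as follows (illustrative only; the tuning constant λ and the plug-in term max(b, 0) are generic stand-ins for the quantities appearing in the stage-1 pseudo-outcome).

```python
import numpy as np

def hard_threshold(b, lam):
    # Keep the estimate only when it clears the threshold (discontinuous in b).
    return b * (np.abs(b) > lam)

def soft_threshold(b, lam):
    # Shrink toward zero and keep the sign (continuous in b).
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)

# The stage-1 pseudo-outcome involves the nonsmooth term max(b, 0); applying a
# threshold to the estimated b before maximizing damps the nonregularity at b = 0.
b_hat = np.linspace(-1.0, 1.0, 9)
for op in (hard_threshold, soft_threshold):
    print(op.__name__, np.round(np.maximum(op(b_hat, 0.3), 0.0), 2))
```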
The pretest–posttest study is commonplace in numerous applications. Typically, subjects are randomized to two treatments, and response is measured at baseline, prior to intervention with the randomized treatment (pretest), and at a prespecified follow-up time (posttest). Interest focuses on the effect of treatments on the change between mean baseline and follow-up response. Missing posttest response for some subjects is routine, and disregarding missing cases can lead to invalid inference. Despite the popularity of this design, a consensus on an appropriate analysis when no data are missing, let alone one taking into account missing follow-up, does not exist. Under a semiparametric perspective on the pretest–posttest model, in which limited distributional assumptions on pretest or posttest response are made, we show how the theory of Robins, Rotnitzky and Zhao may be used to characterize a class of consistent treatment effect estimators and to identify the efficient estimator in the class. We then describe how the theoretical results translate into practice. The development not only shows how a unified framework for inference in this setting emerges from the Robins, Rotnitzky and Zhao theory, but also provides a review and demonstration of the key aspects of this theory in a familiar context. The results are also relevant to the problem of comparing two treatment means with adjustment for baseline covariates.
Analysis of covariance; covariate adjustment; influence function; inverse probability weighting; missing at random
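A minimal sketch of one member of this class, assuming simulated data with posttest response missing at random given baseline and generic scikit-learn working models: the doubly robust augmented inverse probability weighted estimator of each arm mean, in the spirit of the Robins, Rotnitzky and Zhao theory.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(2)
n = 4000
y1 = rng.normal(size=n)                            # pretest (baseline) response
trt = rng.binomial(1, 0.5, n)                      # randomized treatment
y2 = y1 + 0.5 * trt + rng.normal(size=n)           # posttest response
r = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + y1)))) # posttest observed (MAR given y1)

def arm_mean(arm):
    idx = trt == arm
    x = y1[idx].reshape(-1, 1)
    # Missingness model and outcome regression within the arm.
    pi = LogisticRegression().fit(x, r[idx]).predict_proba(x)[:, 1]
    m = LinearRegression().fit(x[r[idx] == 1], y2[idx][r[idx] == 1]).predict(x)
    ri = r[idx]
    yi = np.where(ri == 1, y2[idx], 0.0)           # unobserved values never used
    # Doubly robust (augmented IPW) estimate of the arm mean.
    return np.mean(ri * yi / pi - (ri - pi) / pi * m)

print("estimated treatment effect:", arm_mean(1) - arm_mean(0))  # truth: 0.5
```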
Using validation sets for outcomes can greatly improve the estimation of vaccine efficacy (VE) in the field (Halloran and Longini, 2001; Halloran and others, 2003). Most statistical methods for using validation sets rely on the assumption that outcomes on those with no cultures are missing at random (MAR). However, often the validation sets will not be chosen at random. For example, confirmational cultures are often done on people with influenza-like illness as part of routine influenza surveillance. VE estimates based on such non-MAR validation sets could be biased. Here we propose frequentist and Bayesian approaches for estimating VE in the presence of validation bias. Our work builds on the ideas of Rotnitzky and others (1998, 2001), Scharfstein and others (1999, 2003), and Robins and others (2000). Our methods require expert opinion about the nature of the validation selection bias. In a re-analysis of an influenza vaccine study, we found, using the beliefs of a flu expert, that within any plausible range of selection bias the VE estimate based on the validation sets is much higher than the point estimate using just the non-specific case definition. Our approach is generally applicable to studies with missing binary outcomes and categorical covariates.
Bayesian; Expert opinion; Identifiability; Influenza; Missing data; Selection model; Vaccine efficacy
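A toy deterministic sensitivity calculation in the spirit of this approach (not the authors' frequentist or Bayesian estimator; the counts and the parametrization of the selection-bias parameter ρ are hypothetical): flu cases are assumed ρ times more likely, on the odds scale, to be cultured than non-flu influenza-like illness, and the VE estimate is recomputed over a plausible range of ρ.

```python
import numpy as np

def ve_under_selection_bias(ili, cultured_pos, cultured_neg, n_at_risk, rho):
    """VE from validation (culture) data under selection-bias parameter rho.

    rho = odds multiplier by which true flu cases are more likely to be
    cultured than non-flu ILI cases; rho = 1 recovers the MAR analysis.
    All inputs are per arm: (vaccinated, unvaccinated)."""
    ili, pos, neg, n = map(np.asarray, (ili, cultured_pos, cultured_neg, n_at_risk))
    odds_obs = pos / neg                              # odds of flu among cultured
    theta = (odds_obs / rho) / (1 + odds_obs / rho)   # corrected flu prevalence in ILI
    attack = ili * theta / n
    return 1 - attack[0] / attack[1]

# Hypothetical counts: (vaccinated, unvaccinated).
for rho in (1.0, 2.0, 4.0):
    ve = ve_under_selection_bias(ili=(80, 130), cultured_pos=(10, 45),
                                 cultured_neg=(30, 35), n_at_risk=(1000, 1000),
                                 rho=rho)
    print(f"rho = {rho}: VE = {ve:.2f}")
```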
A dynamic treatment regime is a list of rules for how the level of treatment will be tailored through time to an individual’s changing severity. In general, individuals who receive the highest level of treatment are the individuals with the greatest severity and need for treatment. Thus there is planned selection of the treatment dose. In addition to the planned selection mandated by the treatment rules, the use of staff judgment results in unplanned selection of the treatment level. Given observational longitudinal data, or data in which there is unplanned selection of the treatment level, the methodology proposed here allows the estimation of the mean response to a dynamic treatment regime under the assumption of sequential randomization.
dynamic treatment regimes; nondynamic treatment regimes; causal inference; confounding
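A minimal two-stage sketch of the resulting inverse-probability-weighted estimator, assuming the treatment-assignment probabilities are known (as they would be under planned selection) and a hypothetical regime "treat at each stage iff current severity exceeds 0".

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
s1 = rng.normal(size=n)                       # stage-1 severity
p1 = 1 / (1 + np.exp(-s1))                    # known assignment probabilities
a1 = rng.binomial(1, p1)
s2 = 0.5 * s1 + a1 + rng.normal(size=n)       # stage-2 severity
p2 = 1 / (1 + np.exp(-s2))
a2 = rng.binomial(1, p2)
y = s2 + a2 * (s2 > 0) + rng.normal(size=n)   # response

# Regime d: "treat at each stage iff current severity exceeds 0".
d1, d2 = (s1 > 0).astype(int), (s2 > 0).astype(int)
follow = (a1 == d1) & (a2 == d2)              # followed the regime at both stages
w = 1.0 / (np.where(d1 == 1, p1, 1 - p1) * np.where(d2 == 1, p2, 1 - p2))
est = np.sum(follow * w * y) / np.sum(follow * w)
print("IPW estimate of the mean response under regime d:", est)
```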
The effect of spatial structure has proved highly relevant in repeated games. In this work we propose an agent-based model in which a fixed finite population of tagged agents iteratively play the Nash demand game on a regular lattice. The model extends the multiagent bargaining model of Axtell, Epstein and Young by relaxing the assumption of global interaction. Each agent is endowed with a memory and plays the best reply against the opponent's most frequent demand. We focus our analysis on the transient dynamics of the system, studying by computer simulation the set of states in which the system spends a considerable fraction of the time. The results show that all the possible persistent regimes in the global-interaction model can also be observed in this spatial version. We also find that the mesoscopic properties of the interaction networks that the spatial distribution induces in the model have a significant impact on the diffusion of strategies, and can lead to new persistent regimes different from those found in previous research. In particular, community structure in the intratype interaction networks may cause communities to reach different persistent regimes as a consequence of the hindering diffusion effect of fluctuating agents at their borders.
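A minimal sketch of the lattice dynamics just described (ignoring agent tags and payoff bookkeeping; the demand levels 30/50/70 and all sizes are hypothetical): agents on a periodic lattice best-reply to the most frequent demand in their memory of past opponents.

```python
import numpy as np
from collections import Counter, deque

rng = np.random.default_rng(4)
DEMANDS = [30, 50, 70]                  # low, medium, high shares of a pie of 100
BEST_REPLY = {30: 70, 50: 50, 70: 30}   # reply that exactly exhausts the pie

SIZE, MEM, STEPS = 20, 5, 50_000
# Each agent remembers the last MEM demands its opponents made against it.
memory = [[deque(rng.choice(DEMANDS, MEM), maxlen=MEM)
           for _ in range(SIZE)] for _ in range(SIZE)]

def best_reply(i, j):
    most_frequent = Counter(memory[i][j]).most_common(1)[0][0]
    return BEST_REPLY[most_frequent]

for _ in range(STEPS):
    # A random agent plays a random von Neumann neighbour (periodic lattice).
    i, j = rng.integers(SIZE, size=2)
    di, dj = ((0, 1), (0, -1), (1, 0), (-1, 0))[rng.integers(4)]
    k, l = (i + di) % SIZE, (j + dj) % SIZE
    d1, d2 = best_reply(i, j), best_reply(k, l)
    memory[i][j].append(d2)             # each records the opponent's demand
    memory[k][l].append(d1)

print(Counter(best_reply(i, j) for i in range(SIZE) for j in range(SIZE)))
```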
A dynamic treatment regime is a decision rule in which the choice of treatment for an individual at any given time can depend on the known past history of that individual, including baseline covariates, earlier treatments, and their measured responses. In this paper we argue that finding an optimal regime can, at least in moderately simple cases, be accomplished by a straightforward application of nonparametric Bayesian modeling and predictive inference. As an illustration we consider an inference problem in a subset of the Multicenter AIDS Cohort Study (MACS) data set, studying the effect of AZT initiation on future CD4-cell counts during a 12-month follow-up.
Bayesian nonparametric regression; causal inference; dynamic programming; monotonicity; optimal dynamic regimes
Antiviral resistance in influenza is rampant and has the possibility of causing major morbidity and mortality. Previous models have identified treatment regimes that minimize total infections and keep resistance low. However, the bulk of these studies have ignored stochasticity and heterogeneous contact structures. Here we develop a network model of influenza transmission with treatment and resistance, and present both standard mean-field approximations and simulated dynamics. We find differences in the final epidemic sizes for identical transmission parameters (bistability), leading to different optimal treatment timing depending on the number initially infected. We also find, contrary to previous results, that treatment targeted by number of contacts per individual (node degree) gives rise to more resistance at lower levels of treatment than non-targeted treatment. Finally, we highlight important differences between the two methods of analysis (mean-field versus stochastic simulations) and show where traditional mean-field approximations fail. Our results have important implications not only for the timing and distribution of influenza chemotherapy, but also for mathematical epidemiological modeling in general. Antiviral resistance in influenza may carry large consequences for pandemic mitigation efforts, and models ignoring contact heterogeneity and stochasticity may provide misleading policy recommendations.
Resistance of influenza to common antiviral agents carries the possibility of causing large morbidity and mortality through failure of treatment and should be taken into account when planning public health interventions focused on stopping transmission. Here we present a mathematical model of influenza transmission which incorporates heterogeneous contact structure and stochastic transmission events. We find scenarios in which, at identical transmission rates, treatment induces either large levels of resistance or none, depending on the number initially infected. We also find, contrary to previous results, that targeted treatment causes more resistance at lower treatment levels than non-targeted treatment. Our results have important implications for the timing and distribution of antivirals in epidemics, and highlight important differences in how transmission is modeled and where assumptions made in previous models lead them to erroneous conclusions.
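A toy stochastic simulation in this spirit (not the authors' model; all rates are made up) compares random versus degree-targeted antiviral allocation on a heterogeneous contact network, tracking how many infections are caused by the wild-type versus the resistant strain. Whether targeting produces more resistance depends on the chosen parameters.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(5)
G = nx.barabasi_albert_graph(2000, 3, seed=5)   # heterogeneous contact structure
N = G.number_of_nodes()

def epidemic(treat_frac, targeted, beta=0.08, beta_r=0.06, eff=0.4,
             emerge=0.02, gamma=0.3, i0=10):
    """Discrete-time stochastic SIR with wild-type ('Iw') and resistant ('Ir')
    strains. Treatment multiplies wild-type transmissibility by `eff`, but each
    treated transmission can spawn resistance with probability `emerge`."""
    order = sorted(G, key=G.degree, reverse=True) if targeted else rng.permutation(N)
    treated = {int(v) for v in list(order)[:int(treat_frac * N)]}
    state = dict.fromkeys(G, 'S')
    for v in rng.choice(N, size=i0, replace=False):
        state[int(v)] = 'Iw'
    while any(s in ('Iw', 'Ir') for s in state.values()):
        new = dict(state)
        for v, s in state.items():
            if s == 'S':
                for u in G[v]:
                    if state[u] == 'Ir' and rng.random() < beta_r:
                        new[v] = 'Ir'
                        break
                    if state[u] == 'Iw':
                        b = beta * (eff if u in treated else 1.0)
                        if rng.random() < b:
                            resist = u in treated and rng.random() < emerge
                            new[v] = 'Ir' if resist else 'Iw'
                            break
            elif s == 'Iw' and rng.random() < gamma:
                new[v] = 'Rw'
            elif s == 'Ir' and rng.random() < gamma:
                new[v] = 'Rr'
        state = new
    counts = list(state.values())
    return counts.count('Rw'), counts.count('Rr')

for targeted in (False, True):
    wild, resistant = epidemic(treat_frac=0.2, targeted=targeted)
    print(f"targeted={targeted}: wild-type={wild}, resistant={resistant}")
```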
Oncolytic viruses are viruses that specifically infect cancer cells and kill them, while leaving healthy cells largely intact. Their ability to spread through the tumor makes them an attractive therapy approach. While promising results have been observed in clinical trials, solid success remains elusive because we lack understanding of the basic principles that govern the dynamical interactions between the virus and the cancer. In this respect, computational models can help experimental research at optimizing treatment regimes. Although preliminary mathematical work has been performed, it suffers from the fact that individual models are largely arbitrary and based on biologically uncertain assumptions. Here, we present a general framework to study the dynamics of oncolytic viruses that is independent of uncertain and arbitrary mathematical formulations. We find two categories of dynamics, depending on the assumptions about the spatial constraints that govern the spread of the virus from cell to cell. If infected cells are mixed among uninfected cells, there exists a viral replication rate threshold beyond which tumor control is the only outcome. On the other hand, if infected cells are clustered together (e.g. in a solid tumor), then we observe more complicated dynamics in which the outcome of therapy might go either way, depending on the initial number of cells and viruses. We fit our models to previously published experimental data and discuss aspects of model validation, selection, and experimental design. This framework can be used as a basis for model selection and validation in the context of future, more detailed experimental studies. It can further serve as the basis for future, more complex models that take into account other clinically relevant factors such as immune responses.
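A minimal sketch of the well-mixed ("infected cells mixed among uninfected cells") case, using a generic mass-action formulation with made-up parameters rather than the paper's model-independent framework; it illustrates the viral replication rate threshold separating tumor growth from tumor control.

```python
import numpy as np
from scipy.integrate import solve_ivp

def oncolytic(t, z, r, K, b, a):
    """Generic mass-action oncolytic-virus model.
    x: uninfected tumour cells, y: infected cells."""
    x, y = z
    dx = r * x * (1 - (x + y) / K) - b * x * y   # logistic growth minus infection
    dy = b * x * y - a * y                        # infection minus infected-cell death
    return [dx, dy]

for b in (0.0005, 0.05):   # below vs. above the replication-rate threshold
    sol = solve_ivp(oncolytic, (0, 400), [100.0, 1.0],
                    args=(0.5, 1000.0, b, 1.0), rtol=1e-8)
    x_end, y_end = sol.y[:, -1]
    print(f"b={b}: tumour load at t=400 is {x_end + y_end:8.2f}")
```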
This article considers the problem of assessing causal effect moderation in longitudinal settings in which treatment (or exposure) is time-varying and so are the covariates said to moderate its effect. Intermediate Causal Effects that describe time-varying causal effects of treatment conditional on past covariate history are introduced and considered as part of Robins’ Structural Nested Mean Model. Two estimators of the intermediate causal effects, and their standard errors, are presented and discussed: the first is a proposed 2-Stage Regression Estimator; the second is Robins’ G-Estimator. We present the results of a small simulation study that begins to shed light on the small- versus large-sample performance of the estimators and on the bias-variance trade-off between the two estimators. The methodology is illustrated using longitudinal data from a depression study.
Causal inference; Effect modification; Estimating equations; G-Estimation; 2-stage estimation; Time-varying treatment; Time-varying covariates; Bias-variance trade-off
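For intuition, here is a single-interval version of the G-estimator under a linear blip (a hypothetical simulation with a known treatment model; the paper's setting is time-varying): ψ is chosen so that the blipped-down outcome is uncorrelated with the centered treatment.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5000
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-x))                          # treatment model, assumed known
a = rng.binomial(1, p)
y = x + a * (1.0 + 0.5 * x) + rng.normal(size=n)  # blip: a*(psi0 + psi1*x)

# G-estimation, single interval: choose psi so the blipped-down outcome
# Y - a*(psi0 + psi1*x) is uncorrelated with (a - p) and (a - p)*x.
# The estimating equations are linear in psi, so solve the 2x2 system directly.
res = a - p
S = np.column_stack([res, res * x])
psi = np.linalg.solve(S.T @ np.column_stack([a, a * x]), S.T @ y)
print("g-estimate of (psi0, psi1):", np.round(psi, 3))   # truth: (1.0, 0.5)
```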
Chaotic dynamics in a recurrent neural network model and in two-dimensional cellular automata, both having finite but large degrees of freedom, are investigated from the viewpoint of harnessing chaos and are applied to motion control to show that both have potential capabilities for complex function control by simple rule(s). An important point is that the chaotic dynamics generated in these two systems give rise to autonomous complex pattern dynamics itinerating through intermediate state points between embedded patterns (attractors) in high-dimensional state space. An application of these chaotic dynamics to complex control is proposed, based on the idea that simple adaptive switching between a weakly chaotic regime and a strongly chaotic regime can solve complex problems. As a concrete example, a two-dimensional maze, whose spatial structure is a typical ill-posed problem, is solved using chaos in both systems. Our computer simulations show that the success rate over 300 trials is substantially better than that of a random number generator. Our functional simulations indicate that, from the functional viewpoint of harnessing chaos, the two systems are almost equivalent.
Chaotic dynamics; Recurrent neural network; Cellular automata; Information processing; Complex control; Adaptive function
Multistability of oscillatory and silent regimes is a ubiquitous phenomenon exhibited by excitable systems such as neurons and cardiac cells. Multistability can play functional roles in short-term memory and the maintenance of posture. It would therefore seem to be an evolutionary advantage for neurons that are part of multifunctional central pattern generators to possess multistability. The mechanisms supporting multistability of bursting regimes are not well understood or classified.
Our study is focused on determining the biophysical mechanisms underlying different types of co-existence of the oscillatory and silent regimes observed in a neuronal model. We develop a low-dimensional model typifying the dynamics of a single leech heart interneuron. We carry out a bifurcation analysis of the model and show that it possesses six different types of multistability of dynamical regimes. These are the co-existence of 1) bursting and silence, 2) tonic spiking and silence, 3) tonic spiking and subthreshold oscillations, 4) bursting and subthreshold oscillations, 5) bursting, subthreshold oscillations and silence, and 6) bursting and tonic spiking. The first five types of multistability occur due to the presence of a separating regime that is either a saddle periodic orbit or a saddle equilibrium. We found that the parameter range wherein multistability is observed is limited by the parameter values at which the separating regimes emerge and terminate.
We developed a neuronal model which exhibits a rich variety of different types of multistability. We described a novel mechanism supporting the bistability of bursting and silence. This neuronal model provides a unique opportunity to study the dynamics of networks with neurons possessing different types of multistability.
Dynamic treatment regimes are time-varying treatments that individualize sequences of treatments to the patient. The construction of dynamic treatment regimes is challenging because a patient will be eligible for some treatment components only if he has not responded (or has responded) to other treatment components. In addition, there are usually a number of potentially useful treatment components and combinations thereof. In this article, we propose new methodology for identifying promising components and screening out negligible ones. First, we define causal factorial effects for treatment components that may be applied sequentially to a patient. Second, we propose experimental designs that can be used to study the treatment components. Surprisingly, modifications can be made to (fractional) factorial designs, more commonly found in the engineering statistics literature, for screening in this setting. Furthermore, we provide an analysis model that can be used to screen the factorial effects. We demonstrate the proposed methodology using examples motivated in the literature and also via a simulation study.
Multi-stage Decisions; Experimental Design; Causal Inference
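A sketch of the standard fractional-factorial machinery being adapted (a plain 2^(3-1) design with simulated outcomes, not the paper's sequential modifications): the half fraction halves the number of experimental cells at the cost of aliasing main effects with interactions.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(7)

# Full 2^3 factorial for three treatment components (A, B, C), coded -1/+1.
full = np.array(list(product([-1, 1], repeat=3)))

# Half fraction with defining relation I = ABC: keep runs where A*B*C = +1.
# Aliasing: the main effect of A is confounded with the B*C interaction, etc.
design = full[full.prod(axis=1) == 1]

# Simulated mean outcome per run: A is an active component, B and C negligible.
y = 1.0 * design[:, 0] + 0.1 * design[:, 1] + rng.normal(0, 0.1, len(design))

for name, col in zip("ABC", design.T):
    print(f"main effect of {name}: {y[col == 1].mean() - y[col == -1].mean():+.2f}")
```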
Size-selective mortality caused by fishing can impose strong selection on harvested fish populations, causing evolution in important life-history traits. Understanding and predicting harvest-induced evolutionary change can help maintain sustainable fisheries. We investigate the evolutionary sustainability of alternative management regimes for lacustrine brook charr (Salvelinus fontinalis) fisheries in southern Canada and aim to optimize these regimes with respect to the competing objectives of maximizing mean annual yield and minimizing evolutionary change in maturation schedules. Using a stochastic simulation model of brook charr populations consuming a dynamic resource, we investigate how harvesting affects brook charr maturation schedules. We show that when approximately 5% to 15% of the brook charr biomass is harvested, yields are high, and harvest-induced evolutionary changes remain small. Intensive harvesting (above approximately 15% of brook charr biomass) results in high average yields and little evolutionary change only when harvesting is restricted to brook charr larger than the size at 50% maturation probability at the age of 2 years. Otherwise, intensive harvesting lowers average yield and causes evolutionary change in the maturation schedule of brook charr. Our results indicate that intermediate harvesting efforts offer an acceptable compromise between avoiding harvest-induced evolutionary change and securing high average yields.
Fisheries-induced adaptive change; management regimes; models; probabilistic maturation reaction norm; Salvelinus fontinalis
Milestoning is a procedure to compute the time evolution of complicated processes such as barrier crossing events or long diffusive transitions between predefined states. Milestoning reduces the dynamics to transition events between intermediates (the milestones) and computes the local kinetic information to describe these transitions via short molecular dynamics (MD) runs between the milestones. The procedure relies on the ability to reinitialize MD trajectories on the milestones to get the right kinetic information about the transitions. It also rests on the assumptions that the transition events between successive milestones and the time-lags between these transitions are statistically independent. In this paper, we analyze the validity of these assumptions. We show that sets of optimal milestones exist, i.e. sets such that successive transitions are indeed statistically independent. The proof of this claim relies on the results of transition path theory and uses the isocommittor surfaces of the reaction as milestones. For systems in the overdamped limit, we also obtain the probability distribution to reinitialize the MD trajectories on the milestones, and we discuss why this distribution is not available in closed form for systems with inertia. We explain why the time-lags between transitions are not statistically independent even for optimal milestones, but we show that working with such milestones allows one to compute mean first passage times between milestones exactly. Finally, we discuss some practical implications of our results and we compare milestoning with Markov state models in view of our findings.
transition path theory; committor function; Markov chain model; transition rate; reduced dynamics
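For intuition, a toy version of the mean first passage time computation that the paper shows is exact for optimal milestones (the transition probabilities and lag times below are made up; in practice they are estimated from short MD runs between milestones).

```python
import numpy as np

# Toy milestone network: transition probabilities K between milestones and
# mean lag times t before the next milestone is hit.
K = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
t = np.array([1.0, 2.0, 1.5])

# Mean first passage time to milestone 2: solve tau_i = t_i + sum_j K_ij tau_j
# over the non-absorbing milestones {0, 1}.
idx = [0, 1]
A = np.eye(len(idx)) - K[np.ix_(idx, idx)]
tau = np.linalg.solve(A, t[idx])
print("MFPT to milestone 2 from milestones 0 and 1:", tau)  # [6.0, 5.0]
```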
Acute lung injury (ALI) is a condition characterized by acute onset of severe hypoxemia and bilateral pulmonary infiltrates. ALI patients typically require mechanical ventilation in an intensive care unit. Low tidal volume ventilation (LTVV), a time-varying dynamic treatment regime, has been recommended as an effective ventilation strategy. This recommendation was based on the results of the ARMA study, a randomized clinical trial designed to compare low vs. high tidal volume strategies (The Acute Respiratory Distress Syndrome Network, 2000). After publication of the trial, some critics focused on the high non-adherence rates in the LTVV arm suggesting that non-adherence occurred because treating physicians felt that deviating from the prescribed regime would improve patient outcomes. In this paper, we seek to address this controversy by estimating the survival distribution in the counterfactual setting where all patients assigned to LTVV followed the regime. Inference is based on a fully Bayesian implementation of Robins’ (1986) G-computation formula. In addition to re-analyzing data from the ARMA trial, we also apply our methodology to data from a subsequent trial (ALVEOLI), which implemented the LTVV regime in both of its study arms and also suffered from non-adherence.
Bayesian inference; Causal inference; Dynamic treatment regime; G-computation formula
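A frequentist, static-plan Monte Carlo sketch of the G-computation formula (simulated data, linear working models, and an "always treat" plan as hypothetical stand-ins; the paper's implementation is fully Bayesian and targets survival under a dynamic regime): fit the conditionals, then simulate forward under the intervention.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(8)
n = 4000
# Observed data: baseline l1, treatment a1, intermediate l2, treatment a2, outcome y.
l1 = rng.normal(size=n)
a1 = rng.binomial(1, 1 / (1 + np.exp(-l1)))
l2 = 0.5 * l1 + a1 + rng.normal(size=n)
a2 = rng.binomial(1, 1 / (1 + np.exp(-l2)))
y = l2 + 0.5 * a2 + rng.normal(size=n)

# Fit the two conditionals that the G-computation formula integrates over.
f_l2 = LinearRegression().fit(np.column_stack([l1, a1]), l2)
f_y = LinearRegression().fit(np.column_stack([l2, a2]), y)
sd_l2 = np.std(l2 - f_l2.predict(np.column_stack([l1, a1])))

# Monte Carlo g-computation: simulate forward under "always treat" (a1 = a2 = 1).
l1_s = rng.normal(size=100_000)
ones = np.ones_like(l1_s)
l2_s = f_l2.predict(np.column_stack([l1_s, ones])) + rng.normal(0, sd_l2, l1_s.size)
y_s = f_y.predict(np.column_stack([l2_s, ones]))
print("g-computation E[Y | always treat]:", y_s.mean())   # truth: 1.5
```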
Background
Finite element (FE) modeling and multibody dynamics have traditionally been applied separately to the domains of tissue mechanics and musculoskeletal movements, respectively. Simultaneous simulation of both domains is needed when interactions between tissue and movement are of interest, but this has remained largely impractical due to high computational cost.
Method of Approach
Here we present a method for concurrent simulation of tissue and movement, in which state-of-the-art methods are used in each domain, and communication occurs via a surrogate modeling system based on locally weighted regression. The surrogate model only performs FE simulations when regression from previous results is not within a user-specified tolerance. For proof of concept and to illustrate feasibility, the methods were demonstrated on an optimization of jumping movement using a planar musculoskeletal model coupled to an FE model of the foot. To test the relative accuracy of the surrogate model outputs against those of the FE model, a single forward dynamics simulation was performed with FE calls at every integration step and compared with a corresponding simulation with the surrogate model included. Neural excitations obtained from the jump height optimization were used for this purpose, and root mean square (RMS) differences between surrogate and FE model outputs (ankle force and moment, peak contact pressure and peak von Mises stress) were calculated.
Results
Optimization of jump height required 1800 iterations of the movement simulation, each requiring thousands of time steps. The surrogate modeling system used the FE model in only 5% of time steps, i.e., a 95% reduction in computation time. Errors introduced by the surrogate model were less than 1 mm in jump height, with RMS errors of less than 2 N in ground reaction force, 0.25 Nm in ankle moment, and 10 kPa in peak tissue stress.
Conclusions
Adaptive surrogate modeling based on local regression allows efficient concurrent simulations of tissue mechanics and musculoskeletal movement.
Finite element modeling; Multibody dynamics; Surrogate modeling; Movement optimization
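A minimal sketch of the adaptive surrogate idea (generic code, not the authors' implementation): a locally weighted regression surrogate answers queries near previously evaluated points and falls back to the expensive model otherwise. For simplicity the gate here is a distance criterion, whereas the paper gates FE calls on a user-specified output tolerance.

```python
import numpy as np

class LazySurrogate:
    """Locally weighted regression surrogate for an expensive model; falls back
    to the expensive call when no stored sample is close enough (toy sketch)."""

    def __init__(self, expensive_fn, tol, bandwidth=0.5):
        self.f, self.tol, self.h = expensive_fn, tol, bandwidth
        self.X, self.Y = [], []

    def __call__(self, x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        if len(self.X) >= 3:
            X = np.array(self.X)
            d = np.linalg.norm(X - x, axis=1)
            if d.min() < self.tol:
                # Gaussian-kernel weighted linear regression around x.
                sw = np.sqrt(np.exp(-(d / self.h) ** 2))
                A = np.column_stack([np.ones(len(X)), X])
                beta, *_ = np.linalg.lstsq(A * sw[:, None],
                                           np.array(self.Y) * sw, rcond=None)
                return beta[0] + beta[1:] @ x
        y = self.f(x)                       # expensive call (stands in for FE)
        self.X.append(x)
        self.Y.append(y)
        return y

# The surrogate is queried 400 times but calls the expensive model far less often.
surrogate = LazySurrogate(lambda x: float(np.sin(3 * x[0])), tol=0.05)
values = [surrogate(np.array([t])) for t in np.linspace(0.0, 2.0, 400)]
print("expensive calls:", len(surrogate.X), "of 400 queries")
```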
For many diseases with several treatment options, there is often no consensus on the best treatment to give individual patients. In such cases it may be necessary to define a strategy for treatment assignment; that is, an algorithm that dictates the treatment an individual should receive based on their measured characteristics. Such a strategy or algorithm is also referred to as a treatment regime. The optimal treatment regime is the strategy that would provide the most public health benefit by preventing as many poor outcomes as possible. Using a measure that is a generalization of attributable risk, together with notions of potential outcomes, we derive an estimator for the proportion of events that could have been prevented had the optimal treatment regime been implemented. Whereas traditional attributable-risk studies examine the added risk that can be attributed to exposure to some contaminant, here we instead study the benefit that can be attributed to using the optimal treatment strategy.
We show how regression models can be used to estimate the optimal treatment strategy and the attributable benefit of that strategy, and we derive the large-sample properties of the estimator. As a motivating example, we apply our methods to an observational study of 3856 patients treated at the Duke University Medical Center with prior coronary artery bypass graft surgery and further heart-related problems requiring a catheterization. The patients may be treated with either medical therapy alone or a combination of medical therapy and percutaneous coronary intervention, without general consensus on which is the best treatment for individual patients.
Attributable Risk; Causal Inference; Influence Function; Optimal Treatment Regime
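A minimal sketch of the regression-based estimate of attributable benefit (simulated data and a generic logistic working model; not the paper's estimator or its large-sample theory): the optimal regime minimizes each patient's modeled event risk, and the attributable benefit is the proportion of events it would have prevented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 5000
x = rng.normal(size=n)
a = rng.binomial(1, 1 / (1 + np.exp(-0.5 * x)))          # confounded treatment
p_event = 1 / (1 + np.exp(-(-1 + x - a * (1 - x))))      # treatment helps iff x < 1
y = rng.binomial(1, p_event)

# Regression model for P(Y=1 | A, X): logistic with a treatment interaction.
fit = LogisticRegression().fit(np.column_stack([x, a, a * x]), y)

def risk(a_val):
    av = np.full(n, a_val)
    return fit.predict_proba(np.column_stack([x, av, av * x]))[:, 1]

# The optimal regime gives each patient the treatment minimizing event risk;
# attributable benefit = proportion of events that following it would prevent.
risk_opt = np.minimum(risk(0), risk(1))
ab = 1 - risk_opt.mean() / y.mean()
print(f"estimated attributable benefit: {ab:.2f}")
```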
Computational modeling of genomic regulation has become an important focus of systems biology and genomic signal processing over the past several years. It holds the promise of uncovering both the structure and the dynamical properties of the complex gene, protein, or metabolic networks responsible for cell functioning in various contexts and regimes. This, in turn, will lead to the development of optimal intervention strategies for the prevention and control of disease. At the same time, constructing such computational models faces several challenges. High complexity is one of the major impediments to the practical application of the models. Thus, reducing the size/complexity of a model becomes a critical issue in problems such as model selection, construction of tractable subnetwork models, and control of its dynamical behavior. We focus on the reduction problem in the context of two specific models of genomic regulation: Boolean networks with perturbation (BNP) and probabilistic Boolean networks (PBN). We also compare and draw a parallel between the reduction problem and two other important problems of computational modeling of genomic networks: the problem of network inference and the problem of designing external control policies for intervention, i.e., altering the dynamics of the model.
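To make the complexity issue concrete, the toy sketch below builds the full Markov chain of a hypothetical 3-gene Boolean network with perturbation and computes its steady-state distribution; the exponential growth of this 2^n state space with the number of genes n is precisely what motivates reduction.

```python
import numpy as np
from itertools import product

# Toy 3-gene Boolean network with perturbation probability p: each step, either
# every gene is updated by its rule, or a single random gene is flipped.
rules = [lambda s: s[1] & s[2],       # gene 0
         lambda s: s[0] | s[2],       # gene 1
         lambda s: 1 - s[0]]          # gene 2
p = 0.01
states = list(product([0, 1], repeat=3))

P = np.zeros((8, 8))
for i, s in enumerate(states):
    nxt = tuple(r(s) for r in rules)
    P[i, states.index(nxt)] += 1 - p
    for g in range(3):                # single-gene perturbations
        flip = list(s)
        flip[g] ^= 1
        P[i, states.index(tuple(flip))] += p / 3

# Steady-state distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()
print(dict(zip(states, np.round(pi, 3))))
```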
The occurrence of qualitative shifts in population dynamical regimes has long been the focus of population biologists. Nonlinear ecological models predict that these shifts in dynamical regimes may occur as a result of parameter shifts, but unambiguous empirical evidence is largely restricted to laboratory populations. We used an individual-based modelling approach to predict dynamical shifts in field fish populations where the capacity to cannibalize differed between species. Model-generated individual growth trajectories that reflect different population dynamics were confronted with empirically observed growth trajectories, showing that our ordering and quantitative estimates of the different cannibalistic species in terms of life-history characteristics led to correct qualitative predictions of their dynamics.
Microbial diversity and distribution are topics of intensive research. In two companion papers in this issue, we describe the results of the Cariaco Microbial Observatory (Caribbean Sea, Venezuela). The Basin contains the largest body of marine anoxic water, and presents an opportunity to study protistan communities across biogeochemical gradients. In the first paper, we survey 18S ribosomal RNA (rRNA) gene sequence diversity using both Sanger- and pyrosequencing-based approaches, employing multiple PCR primers, and state-of-the-art statistical analyses to estimate microbial richness missed by the survey. Sampling the Basin at three stations, in two seasons, and at four depths with distinct biogeochemical regimes, we obtained the largest, and arguably the least biased, collection of over 6000 nearly full-length protistan rRNA gene sequences from a given oceanographic regime to date, and over 80 000 pyrosequencing tags. These represent all major and many minor protistan taxa, at frequencies globally similar between the two sequence collections. This large data set provided, via the recently developed parametric modeling, the first statistically sound prediction of the total size of protistan richness in a large and varied environment, such as the Cariaco Basin: over 36 000 species, defined as almost full-length 18S rRNA gene sequence clusters sharing over 99% sequence homology. This richness is a small fraction of the grand total of known protists (100 000–500 000 species), suggesting a degree of protistan endemism.
protists; diversity; species richness; anoxic; pyrosequencing; 18S rRNA approach
Ecological systems with threshold behaviour show drastic shifts in population abundance or species diversity in response to small variation in critical parameters. Examples of threshold behaviour arise in resource competition theory, epidemiological theory and environmentally driven population dynamics, to name a few. Although expected from theory, thresholds may be difficult to detect in real datasets due to stochasticity, finite population size and confounding effects that soften the observed shifts and introduce variability in the data. Here, we propose a modelling framework for threshold responses to environmental drivers that allows for a flexible treatment of the transition between regimes, including variation in the sharpness of the transition and the variance of the response. The model assumes two underlying stochastic processes whose mixture determines the system's response. For environmentally driven systems, the mixture is a function of an environmental covariate and the response may exhibit strong nonlinearity. When applied to two datasets for water-borne diseases, the model was able to capture the effect of rainfall on the mean number of cases as well as the variance. A quantitative description of this kind of threshold behaviour is of more general application to predict the response of ecosystems and human health to climate change.
nonlinear; cholera; leptospirosis; seasonality; disease dynamics; extrinsic forcing
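A simulation sketch of the two-process mixture just described (hypothetical Poisson regimes with a logistic mixing weight in the covariate; not the fitted model from the paper): both the mean and the variance of the response increase through the rainfall-driven transition, whose sharpness is set by the slope parameter k.

```python
import numpy as np

rng = np.random.default_rng(10)

def simulate_cases(rain, mu_low=2.0, mu_high=30.0, x0=100.0, k=0.1):
    """Mixture of a low- and a high-incidence Poisson process; the probability
    of the high regime is a logistic function of rainfall with steepness k."""
    p_high = 1 / (1 + np.exp(-k * (rain - x0)))   # covariate-dependent mixing weight
    regime = rng.random(rain.size) < p_high
    return np.where(regime, rng.poisson(mu_high, rain.size),
                    rng.poisson(mu_low, rain.size))

rain = rng.uniform(0, 200, 1000)
cases = simulate_cases(rain)
# Both the mean and the variance of cases rise through the transition.
bins = np.digitize(rain, np.linspace(0, 200, 9))
for b in range(1, 9):
    sel = bins == b
    print(f"rain bin {b}: mean={cases[sel].mean():6.1f}  var={cases[sel].var():8.1f}")
```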
Recent advances in the use of paramagnetic relaxation enhancement (PRE) in structure refinement and in the analysis of transient dynamic processes involved in macromolecular complex formation are presented. In the slow exchange regime, we show, using the SRY/DNA complex as an example, that the PRE provides a powerful tool that can lead to significant increases in the reliability and accuracy of NMR structure determinations. Refinement necessitates the use of an ensemble representation of the paramagnetic center and a model-free extension of the Solomon-Bloembergen equations. In the fast exchange regime, the PRE provides insight into dynamic processes and the existence of transient, low-population intermediate species. The PRE allows one to characterize dynamic non-specific binding of a protein to DNA; to directly demonstrate that the search process whereby a transcription factor locates its specific DNA target site involves both intramolecular (sliding) and intermolecular (hopping and intersegment transfer) translocation; and to detect and visualize the distribution of an ensemble of transient encounter complexes in protein-protein association.