We surveyed radiation oncologists to assess how they estimate a palliative cancer patient’s life expectancy (LE) and how they incorporate it into their treatment recommendations.
Methods and Materials
A 41-item survey was e-mailed to 113 radiation oncology attending physicians and residents at radiation oncology centers within the Boston area. Physicians estimated how frequently they assessed the LE of their palliative cancer patients and rated the importance of 18 factors in formulating LE estimates. For 3 common palliative case scenarios, physicians estimated LE and reported whether they had an LE threshold below which they would modify their treatment recommendation. LE estimates were considered accurate when within the 95% confidence interval of median survival estimates from an established prognostic model.
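The accuracy criterion described above can be sketched as a simple classification rule (an illustrative sketch with hypothetical numbers, not the study’s actual data or prognostic model):

```python
# Classify a physician's life-expectancy (LE) estimate against the 95% CI
# of a prognostic model's median-survival prediction (hypothetical values).

def classify_estimate(estimate_months, ci_low_months, ci_high_months):
    """Return 'accurate' if the estimate falls within the model's 95% CI,
    otherwise 'over' or 'under'."""
    if estimate_months > ci_high_months:
        return "over"
    if estimate_months < ci_low_months:
        return "under"
    return "accurate"

# Hypothetical case: the model predicts a median survival of 6 months
# with a 95% CI of 4-9 months.
print(classify_estimate(12, 4, 9))  # a 12-month estimate exceeds the CI
print(classify_estimate(6, 4, 9))   # a 6-month estimate falls within it
```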
Among 92 respondents (81%), the majority were male (62%), practiced in an academic setting (75%), and were attending physicians (70%). Physicians reported assessing LE in 91% of their evaluations and most frequently rated performance status (92%), overall metastatic burden (90%), presence of central nervous system metastases (75%), and primary cancer site (73%) as “very important” in assessing LE. Across the 3 cases, most (88%–97%) had LE thresholds that would alter treatment recommendations. Overall, only 22% of physicians’ LE estimates were accurate; 67% exceeded the range predicted by the prognostic model.
Physicians often incorporate LE estimates into palliative cancer care and identify important prognostic factors. Most have LE thresholds that guide their treatment recommendations. However, physicians overestimated patient survival times in most cases. Future studies focused on improving LE assessment are needed.
Monoclonal antibody therapy has an important role to play as a post-exposure prophylactic and therapeutic in the treatment of viral infections, including emerging infections. For example, several patients in the present Ebola virus outbreak in West Africa were treated with ZMapp, a cocktail of three monoclonal antibodies expressed in Nicotiana benthamiana.
The majority of monoclonal antibodies in clinical use are expressed in mammalian cell lines, which offer native folding and glycosylation of the expressed antibody. Monoclonal antibody expression in plant systems offers advantages over expression in mammalian cell lines, including improved potential for scale-up and reduced costs. In this paper, I highlight the advantages of an emerging protozoan system for the expression of recombinant antibody formats. Leishmania tarentolae offers robust, economical expression of proteins with mammalian glycosylation patterns, expressed in stable cell lines and grown in suspension culture. Several advantages of this system make it particularly suited for use in developing-world settings.
Given the potential importance of monoclonal antibody therapy in the containment of emerging viral infections, novel and alternative strategies to improve production must be explored.
Electronic supplementary material
The online version of this article (doi:10.1186/2049-9957-4-8) contains supplementary material, which is available to authorized users.
Antibody; Ebola; Emerging; Expression; Infection; Leishmania; Monoclonal; Therapy
Impairment in oxygen (O2) delivery to the central nervous system (“brain”) and skeletal locomotor muscle during exercise has been associated with central and peripheral neuromuscular fatigue in healthy humans. From a clinical perspective, impaired tissue O2 transport is a key pathophysiological mechanism shared by cardiopulmonary diseases such as chronic obstructive pulmonary disease (COPD) and chronic heart failure (CHF). In addition to arterial hypoxemia in COPD, there is growing evidence that cerebral and muscle blood flow and oxygenation can be reduced during exercise in both isolated COPD and CHF. Compromised cardiac output due to impaired cardiopulmonary function/interactions and blood flow redistribution to the overloaded respiratory muscles (i.e., ↑work of breathing) may underpin these abnormalities. Unfortunately, COPD and CHF coexist in almost a third of elderly patients, making these mechanisms potentially even more relevant to exercise intolerance. In this context, it remains unknown whether decreased O2 delivery accentuates the neuromuscular manifestations of central and peripheral fatigue in coexistent COPD-CHF. If this holds true, it is conceivable that delivering a low-density gas mixture (heliox) through non-invasive positive pressure ventilation could ameliorate cardiopulmonary function/interactions and reduce the work of breathing during exercise in these patients. The major consequence would be increased O2 delivery to the brain and active muscles, with potential benefits to exercise capacity (i.e., ↓central and peripheral neuromuscular fatigue, respectively). We therefore hypothesize that patients with coexistent COPD-CHF stop exercising prematurely due to impaired central motor drive and muscle contractility, as the cardiorespiratory system fails to deliver sufficient O2 to meet the simultaneous metabolic demands of the brain and the active limb muscles.
chronic heart failure; chronic obstructive pulmonary disease; oxygenation; respiratory muscle; skeletal muscle
The aim of this study was to validate a molecular expression signature [cell cycle progression (CCP) score] that identifies patients with a higher risk of cancer-related death after surgical resection of early-stage (I–II) lung adenocarcinoma in a large patient cohort, and to evaluate the effectiveness of combining the CCP score and pathological stage for predicting lung cancer mortality.
Formalin-fixed paraffin-embedded surgical tumor samples from 650 patients diagnosed with stage I and II adenocarcinoma who underwent definitive surgical treatment without adjuvant chemotherapy were analyzed for 31 proliferation genes by quantitative real-time polymerase chain reaction. The prognostic discrimination of the expression score was assessed by Cox proportional hazards analysis using 5-year lung cancer-specific death as primary outcome.
The CCP score was a significant predictor of lung cancer-specific mortality above clinical covariates [hazard ratio (HR) = 1.46 per interquartile range (95% confidence interval = 1.12–1.90; p = 0.0050)]. The prognostic score, a combination of CCP score and pathological stage, was a more significant indicator of lung cancer mortality risk than pathological stage in the full cohort (HR = 2.01; p = 2.8 × 10⁻¹¹) and in stage I patients (HR = 1.67; p = 0.00027). Using the 85th percentile of the prognostic score as a threshold, there was a significant difference in lung cancer survival between low-risk and high-risk patient groups (p = 3.8 × 10⁻⁷).
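Dichotomizing a continuous score at its 85th percentile, as described above, can be sketched as follows (simulated scores for illustration only, not the study’s data or model):

```python
import numpy as np

# Simulated prognostic scores for a cohort of 650 patients (illustrative only)
rng = np.random.default_rng(0)
scores = rng.normal(loc=0.0, scale=1.0, size=650)

# 85th-percentile cutoff: patients above it form the high-risk group (~15%)
threshold = np.percentile(scores, 85)
high_risk = scores > threshold
low_risk = ~high_risk

print(f"high-risk: {high_risk.sum()}, low-risk: {low_risk.sum()}")
```

In the study itself the groups would then be compared by a log-rank test or Cox model on lung cancer-specific survival; the sketch shows only the thresholding step.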
This study validates the CCP score and the prognostic score as independent predictors of lung cancer death in patients with early stage lung adenocarcinoma treated with surgery alone. Patients with resected stage I lung adenocarcinoma and a high prognostic score may be candidates for adjuvant therapy to reduce cancer-related mortality.
Carcinoma; Nonsmall cell lung cancer; Real-time polymerase chain reaction; Risk stratification
Throughout the Amazon region, the age of forests regenerating on previously deforested land is determined, in part, by the periods of active land use prior to abandonment and the frequency of reclearance of regrowth, both of which can be quantified by comparing time-series of Landsat sensor data. Using these time-series of near-annual data from 1973–2011 for an area north of Manaus (Amazonas state), from 1984–2010 for an area south of Santarém (Pará state), and from 1984–2011 near Machadinho d’Oeste (Rondônia state), the changes in the area of primary forest, non-forest and secondary forest were documented, from which the age of regenerating forests, the periods of active land use and the frequency of forest reclearance were derived. At Manaus, by the end of the time-series, over 50% of regenerating forests were older than 16 years, whilst at Santarém and Machadinho d’Oeste, 57% and 41% of forests respectively were aged 6–15 years, with the remainder being mostly younger forests. These differences were attributed to the time since deforestation commenced, but also to the greater frequencies of reclearance of forests at the latter two sites, with short periods of use in the intervening periods. The majority of clearance for agriculture was also found outside of protected areas. The study suggested that (a) the history of clearance and land use should be taken into account when protecting deforested land to restore both tree species diversity and biomass through natural regeneration, and (b) a greater proportion of the forested landscape should be placed under protection, including areas of regrowth.
The orbitofrontal cortex (OFC) has been described as signaling outcome expectancies or value. Evidence for the latter comes from studies showing that neural signals in the OFC correlate with value across features. Yet features can co-vary with value, and individual units may participate in multiple ensembles coding different features. Here we used unblocking to test whether OFC neurons would respond to a predictive cue signaling a ‘valueless’ change in outcome flavor. Neurons were recorded as rats learned about cues that signaled either an increase in reward number or a valueless change in flavor. We found that OFC neurons acquired responses to both predictive cues. This activity exceeded that exhibited to a ‘blocked’ cue and was correlated with activity to the actual outcome. These results show that OFC neurons fire to cues that predict no change in value, indicating that their signaling reflects features of the predicted outcome rather than value alone.
Imagine you are at a restaurant and the waiter offers you a choice of cheesecake or fruit salad for dessert. When making your choice it is likely that you will consider the features of these desserts, such as their taste, their sweetness or how healthy they are. However, when you decide which dessert to have, you will pick the one that you judge to have the highest value for you at that moment in time. In this sense, ‘value’ is a subjective concept that varies from person to person, while ‘features’ remain relatively static.
It is generally agreed that the orbitofrontal cortex (OFC) is involved in making these sorts of decisions, but its role is still a topic of debate. According to one theory, the neurons in the OFC signal the subjective value of an outcome, whereas a rival theory suggests that they signal the features of the expected outcome. However, it has proved challenging to test these theories experimentally because it is difficult to say for certain whether a given decision was due to value or to a feature.
Now, McDannald et al. have devised an approach that can tell the difference between neurons signaling value and neurons signaling features. They trained thirsty rats to associate different odors with either an increase in the amount of milk they were given (a change in both value and a feature) or a change in the flavor of the milk (a change in a feature without a change in value). Extensive testing showed that the rats did not value one flavor over the other.
McDannald et al. then examined how the neurons in the OFC responded. If these neurons signal only value, they should only fire when the value of the outcome changes. On the other hand, if they signal features, they should fire when a feature changes, even if the value does not. It turned out that the neurons in the OFC responded whenever the features changed, irrespective of whether or not the value changed. These findings present a challenge to popular conceptions of the role of the neurons in the OFC.
orbitofrontal; single unit; blocking; rat
Excess weight gain in American Indian/Alaskan native (AI/AN) children is a public health concern. This study tested 1) the feasibility of delivering community-wide interventions, alone or in combination with family-based interventions, to promote breastfeeding and reduce the consumption of sugar-sweetened beverages; and 2) whether these interventions decrease Body Mass Index (BMI)-Z scores in children 18–24 months of age.
Three AI/AN tribes were randomly assigned to two active interventions: a community-wide intervention alone (tribe A; n=63 families) or a community-wide intervention with a family component (tribes B and C; n=142 families). Tribal staff and the research team designed community-tailored interventions and trained community health workers to deliver the family intervention through home visits. Feasibility and acceptability of the intervention and BMI-Z scores at 18–24 months were compared between tribe A and tribes B & C combined using a separate-sample pretest–posttest design.
Eighty-six percent of enrolled families completed the study. Breastfeeding initiation and 6-month duration increased by 14% and 15%, respectively, in all tribes compared to national rates for American Indians. Breastfeeding at 12 months was comparable to national data. Parents expressed confidence in their ability to curtail family consumption of sugar-sweetened beverages. Compared to a pretest sample of children of a similar age two years before the study began, BMI-Z scores increased in all tribes. However, the increase was smaller in tribes B & C than in tribe A (−0.75, p=0.016).
Family-based plus community-wide interventions to increase breastfeeding and curtail sugar-sweetened beverage consumption attenuate the rise in BMI in AI/AN toddlers more than community-wide interventions alone.
Obesity prevention; infants; toddlers; breastfeeding; sugar-sweetened beverages
Addiction is characterized by maladaptive decision-making, in which individuals seem unable to use adverse outcomes to modify their behavior. Adverse outcomes are often infrequent and delayed, especially when compared to the reliably rewarding drug-associated outcomes. As a result, recognizing and using information about their occurrence puts a premium on the operation of so-called model-based systems of behavioral control, which allow one to mentally simulate outcomes of different courses of action based on knowledge of the underlying associative structure of the environment. This suggests that addiction may reflect, in part, drug-induced dysfunction in these systems. Here, we tested this hypothesis.
This study aimed to test whether cocaine causes deficits in model-based behavior and learning independent of requirements for response inhibition or perception of costs or punishment.
We trained rats to self-administer sucrose or cocaine for 2 weeks. Four weeks later, the rats began training on a sensory preconditioning and inferred value blocking task. Like devaluation, normal performance on this task requires representations of the underlying task structure; however, unlike devaluation, it does not require either response inhibition or adapting behavior to reflect aversive outcomes.
Rats trained to self-administer cocaine failed to show conditioned responding or blocking to the preconditioned cue. These deficits were not observed in sucrose-trained rats nor did they reflect any changes in responding to cues paired directly with reward.
These results imply that cocaine disrupts the operation of neural circuits that mediate model-based behavioral control.
Addiction; Cocaine; Orbitofrontal; Sensory preconditioning; Blocking
Computational and learning theory models propose that behavioral control reflects value that is both cached (computed and stored during previous experience) and inferred (estimated on-the-fly based on knowledge of the causal structure of the environment). The latter is thought to depend on the orbitofrontal cortex. Yet, some accounts propose that the orbitofrontal cortex contributes to behavior by signaling “economic” value, regardless of the associative basis of the information. We found that the orbitofrontal cortex is critical for both value-based behavior and learning when value must be inferred but not when a cached value is sufficient. The orbitofrontal cortex is thus fundamental for accessing model-based representations of the environment to compute value rather than for signaling value per se.
To determine whether a rule-based algorithm applied to an outpatient electronic medical record (EMR) can identify patients who are pregnant and prescribed medications proven to cause birth defects.
A descriptive study using the University of Pennsylvania Health System outpatient EMR to simulate a prospective algorithm to identify exposures during pregnancy to category X medications, soon enough to intervene and potentially prevent the exposure. A subsequent post-hoc algorithm was also tested, working backwards from pregnancy endpoints, to search for possible exposures that should have been detected.
Category X medications prescribed to pregnant patients.
The alert simulation identified 2201 pregnancies with 16 969 pregnancy months (excluding abortions and ectopic pregnancies). Of these, 30 appeared to have an order for a non-hormone category X medication during pregnancy. However, none of the 30 ‘exposed pregnancies’ were confirmed as true exposures in medical records review. The post-hoc algorithm identified 5841 pregnancies with 64 exposed pregnancies in 52 569 risk months, only one of which was a confirmed case.
Category X medications may indeed be used in pregnancy, although rarely. However, most patients identified by the algorithm as exposed in pregnancy were not truly exposed. Therefore, implementing an electronic warning without evaluation would have inconvenienced prescribers, possibly hurting some patients (leading to non-use of needed drugs), with no benefit. These data demonstrate that computerized physician order entry interventions should be selected and evaluated carefully even before their use, using alert simulations such as that performed here, rather than just taken off the shelf and accepted as credible without formal evaluation.
Category X medications; computerized physician order entry (CPOE); electronic medical records (EMR); electronic warning; improving healthcare workflow and process efficiency; informatics; measuring/improving outcomes in specific conditions and patient subgroups; measuring/improving patient safety and reducing medical errors; medications during pregnancy; monitoring the health of populations; pharmacoepidemiology; statistical computing
While β-dicarbonyl compounds are regularly employed as Michael donors, intermediates arising from the Michael addition of unsaturated β-ketoesters to α,β-unsaturated aldehydes are susceptible to multiple subsequent reaction pathways. We designed cyclic unsaturated β-ketoester substrates that enabled the development of the first diphenyl prolinol silyl ether-catalyzed Michael-Michael cascade reaction initiated by a β-dicarbonyl Michael donor to form cyclohexene products. The reaction conditions we developed for this Michael-Michael cascade reaction were also amenable to a variety of linear unsaturated β-ketoester substrates, including some of the same linear unsaturated β-ketoester substrates that were previously ineffective in Michael-Michael cascade reactions. These studies thus revealed that a simple change in reaction conditions, such as solvent and additives, enables the same substrate to undergo different cascade reactions, thereby accessing different molecular scaffolds. These studies also culminated in the development of a general organocatalyzed Michael-Michael cascade reaction that generates highly functionalized cyclohexenes with up to four stereocenters, in up to 97% yield, 32:1 dr, and 99% ee, in a single step from a variety of unsaturated β-ketoesters.
Prominent neurobiological theories of addiction posit a central role for aberrant mesolimbic dopamine release, but disagree as to whether repeated drug experience blunts or enhances this system. While drug withdrawal diminishes dopamine release, drug sensitization augments mesolimbic function, and both processes have been linked to drug-seeking. One possibility is that the dopamine system can rapidly switch from dampened to enhanced release depending upon the specific drug-predictive environment. To test this, we examined dopamine release when cues signaled delayed cocaine delivery versus imminent cocaine self-administration.
Fast-scan cyclic voltammetry was used to examine real-time dopamine release while simultaneously monitoring behavioral indices of aversion as rats experienced a sweet taste cue that predicted delayed cocaine availability and during self-administration. Further, the impact of cues signaling delayed drug availability on intracranial self-stimulation (ICSS), a broad measure of reward function, was assessed.
We observed decreased mesolimbic dopamine concentrations, decreased reward sensitivity, and negative affect in response to the cocaine-predictive taste cue that signaled delayed cocaine availability. Importantly, dopamine concentration rapidly switched to elevated levels to cues signaling imminent cocaine delivery in the subsequent self-administration session.
These findings reveal rapid, bivalent contextual control over brain reward processing, affect, and motivated behavior and have implications for mechanisms mediating substance abuse.
dopamine; aversion; reward; affect; cocaine; addiction
Efficient decision making requires that animals consider both the benefits and costs of potential actions, such as the amount of effort or temporal delay involved in reward seeking. The nucleus accumbens (NAc) has been implicated in the ability to choose between options with different costs and overcome high costs when necessary, but it is not clear how NAc processing contributes to this role. Here, neuronal activity in the NAc was monitored using multi-neuron electrophysiology during two cost-based decision tasks in which either reward effort or reward delay was manipulated. In each task, distinct visual cues predicted high value (low effort/immediate) and low value (high effort/delayed) rewards. After training, animals exhibited a behavioral preference for high value rewards, yet overcame high costs when necessary to obtain rewards. Electrophysiological analysis indicated that a subgroup of NAc neurons exhibited phasic increases in firing rate during cue presentations. In the effort-based decision task (but not the delay-based task), this population reflected the cost-discounted value of the future response. In contrast, other subgroups of cells were activated during response initiation or reward delivery, but activity did not differ on the basis of reward cost. Finally, another population of cells exhibited sustained changes in firing rate while animals completed high effort requirements or waited for delayed rewards. These findings are consistent with previous reports that implicate NAc function in reward prediction and behavioral allocation during reward-seeking behavior, and suggest a mechanism by which NAc activity contributes to both cost-based decisions and actual cost expenditure.
Nucleus accumbens; decision making; reward; motivation; cost; dopamine
Autism spectrum disorders (ASD) represent a class of neurodevelopmental disorders characterized by impairments in social interaction, verbal and non-verbal communication, as well as restricted interests and repetitive behavior. This latter class of symptoms often includes features such as compulsive behaviors and resistance to change. The BTBR T+tf/J mouse strain has been used as an animal model to investigate the social communication and restricted interest features in ASD. Less is known about whether this mouse strain models the cognitive flexibility deficits also observed in ASD. The present experiment investigated the performance of BTBR T+tf/J and C57BL/6J mice on two different spatial reversal learning tests (100% accurate feedback and 80/20 probabilistic feedback), as well as marble burying and grooming behavior. BTBR T+tf/J and C57BL/6J mice exhibited similar performance on acquisition and reversal learning with 100% accurate feedback. BTBR T+tf/J mice were impaired in probabilistic reversal learning compared with C57BL/6J mice. BTBR T+tf/J mice also displayed increased stereotyped repetitive behaviors compared with C57BL/6J mice, as shown by increased marble burying and grooming behavior. The present findings indicate that BTBR T+tf/J mice exhibit features related to “insistence on sameness” in ASD that include not only stereotyped repetitive behaviors but also alterations in behavioral flexibility. Thus, BTBR T+tf/J mice can serve as a model for understanding the neural mechanisms underlying alterations in behavioral flexibility, as well as for testing potential treatments to alleviate these symptoms.
BTBR T+ tf/J; Autism; Stereotypy; Reversal Learning; Mice; Memory
The ability to process information regarding reward-predictive cues involves a diverse network of neural substrates. Given the importance of the nucleus accumbens (NAc) and the basolateral amygdala (BLA) in associative reward processes, recent research has examined the functional importance of BLA-NAc interactions. Here, multi-neuron extracellular recordings of NAc neurons coupled to microinfusion of GABAA and GABAB agonists into the BLA were employed to determine the functional contribution of the BLA to phasic neural activity across the NAc core and shell during a cued-instrumental task. NAc neural response profiles prior to BLA inactivation exhibited largely indistinguishable activity across the core and shell. However, for NAc neurons that displayed cue-related increases in firing rates during the task, BLA inactivation significantly reduced this activity selectively in the core (not shell). Additionally, phasic increases in firing rate in the core (not shell) immediately following the lever press response were also significantly reduced following BLA manipulation. Concurrent with these neural changes, BLA inactivation caused a significant increase in latency to respond for rewards and a decrease in the percentage of trials in which animals made a conditioned approach to the cue. Together, these results suggest that an excitatory projection from the BLA provides a selective contribution to conditioned neural excitations of NAc core neurons during a cued-instrumental task, providing insight into the underlying neural circuitry that mediates responding to reward-predictive cues.
ventral striatum; electrophysiology; reward; cue; learning; reinforcement
Ca2+ signaling in nonexcitable cells is typically initiated by receptor-triggered production of inositol-1,4,5-trisphosphate and the release of Ca2+ from intracellular stores. An elusive signaling process senses the Ca2+ store depletion and triggers the opening of plasma membrane Ca2+ channels [2–5]. The resulting sustained Ca2+ signals are required for many physiological responses, such as T cell activation and differentiation. Here, we monitored receptor-triggered Ca2+ signals in cells transfected with siRNAs against 2,304 human signaling proteins, and we identified two proteins required for Ca2+-store-depletion-mediated Ca2+ influx, STIM1 and STIM2 [7–9]. These proteins have a single transmembrane region with a putative Ca2+ binding domain in the lumen of the endoplasmic reticulum. Ca2+ store depletion led to a rapid translocation of STIM1 into puncta that accumulated near the plasma membrane. Introducing a point mutation in the STIM1 Ca2+ binding domain resulted in prelocalization of the protein in puncta, and this mutant failed to respond to store depletion. Our study suggests that STIM proteins function as Ca2+ store sensors in the signaling pathway connecting Ca2+ store depletion to Ca2+ influx.
Optimal decision making requires that organisms correctly evaluate both the costs and benefits of potential choices. Dopamine transmission within the nucleus accumbens (NAc) has been heavily implicated in reward learning and decision making, but it is unclear how dopamine release may contribute to decisions that involve costs.
Cost-based decision making was examined in rats trained to associate visual cues with either immediate or delayed rewards (delay manipulation) or low effort or high effort rewards (effort manipulation). After training, dopamine concentration within the NAc was monitored on a rapid timescale using fast-scan cyclic voltammetry.
Animals exhibited a preference for immediate or low effort rewards over delayed or high effort rewards of equal magnitude. Reward-predictive cues, but not response execution or reward delivery, evoked increases in NAc dopamine concentration. When only one response option was available, cue-evoked dopamine release reflected the value of the future reward, with larger increases in dopamine signaling higher value rewards. In contrast, when both options were presented simultaneously, dopamine signaled the better of two options, regardless of the future choice.
Phasic dopamine signals in the NAc reflect two different types of reward cost and encode potential rather than chosen value under choice situations.
Dopamine; nucleus accumbens; decision making; reward; motivation; cost
Dopamine signaling in the nucleus accumbens (NAc) is essential for goal-directed behaviors and primarily arises from burst firing of ventral tegmental area (VTA) neurons. However, the role of associative neural substrates such as the basolateral amygdala (BLA) in regulating phasic dopamine release in the NAc, particularly during reward-seeking, remains unknown.
Male Sprague-Dawley rats learned to discriminate two cues: a discriminative stimulus (DS) that predicted sucrose reinforcement contingent upon a lever press, and a non-associated stimulus (NS) that predicted a second lever that was never reinforced with sucrose. Following training, a test session was completed in which NAc dopamine was measured using fast-scan cyclic voltammetry in conjunction with inactivation of the ipsilateral BLA (GABA agonists baclofen/muscimol) to determine the contribution of BLA activity to dopamine release in the NAc core during the task.
Under vehicle conditions, DS and NS presentation elicited dopamine release within the NAc core. The DS evoked significantly more dopamine than the NS. Inactivation of the BLA selectively attenuated the magnitude of DS-evoked dopamine release, concurrent with an attenuation of DS-evoked conditioned approaches. Other behavioral responses (e.g., lever pressing) and dopamine release concomitant with those events were unaltered by BLA inactivation. Furthermore, neither VTA electrically-stimulated dopamine release nor the probability of high concentration dopamine release events was altered following BLA inactivation.
These results demonstrate that the BLA terminally modulates dopamine signals within the NAc core under specific, behaviorally-relevant conditions, illustrating a functional mechanism by which the BLA selectively facilitates responding to motivationally salient environmental stimuli.
behavior; ventral striatum; basolateral amygdala; reward; cue; learning
Normal aging is associated with deficits in cognitive flexibility thought to depend on prefrontal regions such as the orbitofrontal cortex (OFC). Here, we used Pavlovian reinforcer devaluation to test whether normal aging might also affect the ability to use outcome expectancies to guide appropriate behavioral responding, which is also known to depend on the OFC. Both young and aged rats were trained to associate a 10-s conditioned stimulus (CS+) with delivery of a sucrose pellet. After training, half of the rats in each age group received the sucrose pellets paired with illness induced by LiCl injections; the remaining rats received sucrose and illness explicitly unpaired. Subsequently, responding to the CS+ was assessed in an extinction probe test. Although aged rats displayed lower responding levels overall, both young and aged rats conditioned to the CS+ and developed a conditioned taste aversion following reinforcer devaluation. Furthermore, during the extinction probe test, both young and aged rats spontaneously attenuated conditioned responding to the cue as a result of reinforcer devaluation. These data show that normal aging does not affect the ability to use expected outcome value to appropriately guide Pavlovian responding. This result indicates that deficits in cognitive flexibility are dissociable from other known functions of prefrontal – and particularly orbitofrontal – cortex.
aging; associative learning; orbitofrontal; devaluation; rat
Cell cycle analysis typically relies on fixed time-point measurements of cells in particular phases of the cell cycle. The cell cycle, however, is a dynamic process whose subtle shifts are lost by fixed time-point methods. Live-cell fluorescent biosensors and time-lapse microscopy allow the collection of temporal information about real-time cell cycle progression and arrest. Using two genetically encoded biosensors, we measured the precision of the G1, S, G2, and M cell cycle phase durations in different cell types and identified a bimodal G1 phase duration in a fibroblast cell line that is not present in the other cell types. Using a cell line model for neuronal differentiation, we demonstrated that NGF-induced neurite extension occurs independently of NGF-induced cell cycle G1 phase arrest. Thus, we have begun to use cell cycle fluorescent biosensors to examine the proliferation of cell populations at the resolution of individual cells, and neuronal differentiation as a dynamic process of parallel cell cycle arrest and neurite outgrowth.
phase duration; biosensor; fluorescence; live-cell imaging; PC12; neurite; G1 arrest
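The bimodal G1 phase duration described above is the kind of feature that only single-cell duration measurements can reveal. As an illustration of how such bimodality might be detected in per-cell duration data, here is a minimal sketch that fits a two-component Gaussian mixture by expectation-maximization; the durations are synthetic and all numbers are hypothetical, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical single-cell G1 durations (hours): a fast and a slow subpopulation.
durations = np.concatenate([rng.normal(4.0, 0.8, 300),
                            rng.normal(10.0, 1.5, 200)])

def fit_two_gaussians(x, n_iter=200):
    """Fit a two-component 1D Gaussian mixture by expectation-maximization."""
    mu = np.array([x.min(), x.max()], dtype=float)  # crude initialization
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each cell
        dens = pi / (sigma * np.sqrt(2 * np.pi)) * \
               np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update component weights, means, and standard deviations
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

pi, mu, sigma = fit_two_gaussians(durations)
# Two well-separated means, each carrying substantial weight, are consistent
# with a bimodal G1 duration like that seen in the fibroblast line.
print(sorted(mu))
```

In practice, model-selection criteria (e.g., comparing one- versus two-component fits by likelihood) would be used to decide whether two modes are genuinely supported.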
The events that mark the entry of a cell into mitosis (chromatin condensation, centrosome separation, and nuclear envelope breakdown [NEB]) are thought to be triggered by the activation of Cdk-cyclin complexes [1, 2]. However, it is not yet clear which complexes are important for which events, or how the various complexes are coordinated. Here we used RNA interference (RNAi) to assess the roles of three mitotic cyclins, cyclins A2, B1, and B2, in HeLa cells. We found that the timing of NEB was affected very little by knocking down cyclins B1 and B2, alone or in combination. However, knocking down cyclin A2 markedly delayed NEB, and knocking down both cyclins A2 and B1 delayed NEB further. The timing of cyclin B1-Cdk1 activation was normal in cyclin A2 knockdown cells, and there was no delay in centrosome separation, an event apparently controlled by the activation of cytoplasmic cyclin B1-Cdk1. However, nuclear accumulation of cyclin B1-Cdk1 was markedly delayed in cyclin A2 knockdown cells. Finally, a constitutively nuclear cyclin B1, but not wild-type cyclin B1, restored normal NEB timing in cyclin A2 knockdown cells. These findings show that cyclin A2 is required for timely NEB, whereas cyclins B1 and B2 are not. Nevertheless, cyclin B1 translocates to the nucleus just prior to NEB in a cyclin A2-dependent fashion and is capable of supporting NEB if rendered constitutively nuclear.
A survey of 1,804 Dicer-generated siRNAs targeting the human signaling proteome, using automated quantitative imaging, identified the phosphatidylinositol-3,4,5-trisphosphate-mTOR signaling pathway as a primary regulator of iron-transferrin uptake.
Iron uptake via endocytosis of iron-transferrin-transferrin receptor complexes is a rate-limiting step for cell growth, viability and proliferation in tumor cells as well as non-transformed cells such as activated lymphocytes. Signaling pathways that regulate transferrin uptake have not yet been identified.
We surveyed the human signaling proteome for regulators that increase or decrease transferrin uptake by screening 1,804 Dicer-generated small interfering RNAs (siRNAs) against signaling genes, using automated quantitative imaging. In addition to known transport proteins, we identified 11 signaling proteins, including a striking signature set for the phosphatidylinositol-3,4,5-trisphosphate (PtdIns(3,4,5)P3)-target of rapamycin (mTOR) signaling pathway. We show that the PI3K-mTOR signaling pathway is a positive regulator of transferrin uptake, increasing the number of transferrin receptors per endocytic vesicle without affecting endocytosis or recycling rates.
Our study identifies the PtdIns(3,4,5)P3-mTOR signaling pathway as a new regulator of iron-transferrin uptake and serves as a proof-of-concept that targeted RNA interference screens of the signaling proteome provide a powerful and unbiased approach to discover or rank signaling pathways that regulate a particular cell function.
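Screens like the one described above must distinguish genuine regulators from assay noise across thousands of siRNAs. A common approach in image-based screens is hit selection by robust z-score (median and median absolute deviation, which resist distortion by the hits themselves); the following is a hypothetical sketch of that generic technique on simulated readouts, not the authors' actual analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical per-siRNA transferrin-uptake readouts (e.g., mean vesicle
# intensity normalized to plate controls); most siRNAs have no effect.
n = 1804
uptake = rng.normal(1.0, 0.1, n)
uptake[:6] -= 0.6   # pretend a few knockdowns strongly reduce uptake
uptake[6:9] += 0.6  # and a few increase it

# Robust z-score: center on the median, scale by the median absolute
# deviation (MAD); 1.4826 makes the MAD consistent with a Gaussian sigma.
med = np.median(uptake)
mad = np.median(np.abs(uptake - med))
z = (uptake - med) / (1.4826 * mad)

hits_down = np.flatnonzero(z < -3)  # candidate positive regulators of uptake
hits_up = np.flatnonzero(z > 3)     # candidate negative regulators of uptake
print(len(hits_down), len(hits_up))
```

At a 3-sigma threshold a handful of false positives is expected among ~1,800 null siRNAs, which is why candidate hits from such screens are typically confirmed with independent siRNAs, as pathway-level signatures like the PtdIns(3,4,5)P3-mTOR set provide.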
We collected data during postexposure antimicrobial prophylaxis campaigns and from a prophylaxis program evaluation 60 days after the start of antimicrobial prophylaxis, involving persons from six U.S. sites where Bacillus anthracis exposures occurred. Adverse events associated with antimicrobial prophylaxis to prevent anthrax were commonly reported, but hospitalizations and serious adverse events, as defined by Food and Drug Administration criteria, were rare. Overall adherence during 60 days of antimicrobial prophylaxis was poor (44%), ranging from 21% of persons exposed at the Morgan postal facility in New York City to 64% of persons exposed at the Brentwood postal facility in Washington, D.C. Adherence was highest among participants enrolled in an investigational new drug protocol to receive additional antibiotics with or without anthrax vaccine, a likely surrogate for perceived anthrax risk. Adherence for fewer than 60 days was not consistently associated with adverse events.
Anthrax; Bacillus anthracis; antimicrobial prophylaxis; adverse events; adherence
On October 4, 2001, we confirmed the first bioterrorism-related anthrax case identified in the United States, in a resident of Palm Beach County, Florida. Epidemiologic investigation indicated that exposure occurred at the workplace through intentionally contaminated mail. One additional case of inhalational anthrax was identified at the index patient's workplace. Among 1,076 nasal cultures performed to assess exposure, Bacillus anthracis was isolated from a co-worker later confirmed as being infected, as well as from an asymptomatic mail handler in the same workplace. Environmental cultures for B. anthracis showed contamination at the workplace and at six county postal facilities. Environmental and nasal swab cultures were useful epidemiologic tools that helped direct the investigation toward the infection source and transmission vehicle. We identified 1,114 persons at risk and offered antimicrobial prophylaxis.
Anthrax; Bacillus anthracis; bioterrorism; nasal swab cultures; environmental cultures