Natural killer (NK) cells are lymphocytes with the capacity to produce cytokines and kill target cells upon activation. NK cells have long been categorized as members of the innate immune system and as such have been thought to follow the ‘rules’ of innate immunity, including the principle that they have no immunologic memory, a property thought to be strictly limited to adaptive immunity. However, recent studies have suggested that NK cells have the capacity to alter their behavior based on prior activation. This property is analogous to adaptive immune memory; however, some NK cell memory-like functions are not strictly antigen-dependent and can be demonstrated following cytokine stimulation. Here we discuss the recent evidence that NK cells can exhibit properties of immunologic memory, focusing on the ability of cytokines to non-specifically induce memory-like NK cells with enhanced responses to restimulation.
natural killer cell; innate immunity; memory; cytokines; interferon γ
Nonlinear vocal phenomena are a ubiquitous feature of human and non-human animal vocalizations. Although we understand how these complex acoustic intrusions are generated, it is not clear whether they function adaptively for the animals producing them. One explanation is that nonlinearities make calls more unpredictable, increasing behavioural responses and ultimately reducing the chances of habituation to these call types. Meerkats (Suricata suricatta) exhibit nonlinear subharmonics in their predator alarm calls. We specifically tested the ‘unpredictability hypothesis’ by playing back naturally occurring nonlinear and linear medium-urgency alarm call bouts. Results indicate that subjects responded more strongly and foraged less after hearing nonlinear alarm calls. We argue that these findings support the unpredictability hypothesis and suggest this is the first study in animals or humans to show that nonlinear vocal phenomena function adaptively.
meerkats; nonlinearities; adaptive function
High-precision positioning technology for a type of high-speed maglev train with an electromagnetic suspension (EMS) system is studied. First, the basic structure and functions of the position sensor are introduced and some key techniques for enhancing the positioning precision are designed. Then, to further improve the positioning-signal quality and the fault-tolerant ability of the sensor, a new kind of discrete-time tracking differentiator (TD) is proposed based on nonlinear optimal control theory. The new TD has good filtering and differentiating performance with a small computational load, making it suitable for real-time signal processing. The stability, convergence and frequency characteristics of the TD are analyzed thoroughly. The delay constant of the TD is derived and an effective time-delay compensation algorithm is proposed. Based on the TD technology, a filtering process is introduced to improve the positioning-signal waveform when the sensor operates under poor working conditions, and a two-sensor switching algorithm is designed to eliminate the positioning errors caused by the joint gaps of the long stator. The effectiveness and stability of the sensor and its signal-processing algorithms are demonstrated by experiments on a test train during a long-term test run.
maglev train; high precision position sensor; tracking differentiator; signal processing; time delay compensation
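The nonlinear optimal-control TD proposed in the abstract is not reproduced here, but its simplest relative, a second-order linear tracking differentiator, conveys the core idea: one state tracks the input signal while a second state estimates its derivative. The gains `r` and step size `h` below are arbitrary illustrative choices.

```python
import numpy as np

def linear_td(v, h=0.001, r=100.0):
    """Second-order linear tracking differentiator (illustrative sketch).
    x1 tracks the input samples v; x2 estimates the input's derivative."""
    x1, x2 = float(v[0]), 0.0
    x1_out, x2_out = [], []
    for vk in v:
        # critically damped linear feedback law: poles at s = -r
        u = -r * r * (x1 - vk) - 2.0 * r * x2
        x1 += h * x2   # forward-Euler integration
        x2 += h * u
        x1_out.append(x1)
        x2_out.append(x2)
    return np.array(x1_out), np.array(x2_out)
```

Fed a noiseless ramp of slope 2, the differentiator state `x2` settles near 2 after a transient of a few time constants (1/r seconds), while `x1` tracks the ramp with a small steady-state lag.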
In this paper we present a fractional-order generalization of the basic hepatitis C virus (HCV) model of Perelson et al., extended with an immune response term. We argue that fractional-order equations are more suitable than integer-order ones for modeling complex systems, including biological systems. The model is presented and discussed. We also argue that the added immune response term captures some basic properties of the immune system and should be included when studying the longer-term behavior of the disease.
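For readers unfamiliar with fractional-order dynamics, the Grünwald-Letnikov discretization makes concrete how the entire past of a state enters each update, which is what distinguishes these models from integer-order ones. The sketch below solves the fractional relaxation equation D^α x = −λx (a minimal stand-in, not the HCV model itself); all parameter values are illustrative.

```python
def gl_fractional_decay(alpha=0.9, lam=1.0, h=0.01, steps=500):
    """Gruenwald-Letnikov scheme for the fractional relaxation
    equation D^alpha x = -lam * x with x(0) = 1 (illustrative only)."""
    # GL binomial weights: w_0 = 1, w_j = w_{j-1} * (1 - (alpha + 1)/j)
    w = [1.0]
    for j in range(1, steps + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    x = [1.0]
    ha = h ** alpha
    for k in range(1, steps + 1):
        # the convolution over all past states is the "memory" term
        hist = sum(w[j] * x[k - j] for j in range(1, k + 1))
        # from h**(-alpha) * (x_k + hist) = -lam * x_k
        x.append(-hist / (1.0 + lam * ha))
    return x
```

For 0 < α < 1 the trajectory decays monotonically but with a heavier tail than the exponential solution of the integer-order case, which is why such terms are argued to suit long-term disease behavior.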
Journal editorials are an important medium for communicating information about medical innovations. Evaluative statements contained in editorials pertain to the innovation's technical merits, as well as its probable economic, social and political, and ethical consequences. This information will either promote or impede the subsequent diffusion of innovations. This paper analyzes the evaluative information contained in thirty editorials that pertain to the topic of computer-assisted decision making (CDM). Most editorials agree that CDM technology is effective and economical in performing routine clinical tasks; controversy surrounds the use of more sophisticated CDM systems for complex problem solving. A few editorials argue that the innovation should play an integral role in transforming the established health care system. Most, however, maintain that it can or should be accommodated within the existing health care framework. Finally, while few editorials discuss the ethical ramifications of CDM technology, those that do suggest that it will contribute to more humane health care. The editorial analysis suggests that CDM technology aimed at routine clinical tasks will experience rapid diffusion. In contrast, the diffusion of more sophisticated CDM systems will, in the foreseeable future, likely be sporadic at best.
Although the mammalian immune system is generally thought to develop in a linear fashion, findings in avian and murine species argue instead for the developmentally ordered appearance (or “layering”) of unique hematopoietic stem cells (HSC) that give rise to distinct lymphocyte lineages at different stages of development. Here, we provide evidence of an analogous “layered” immune system in humans. Our results suggest that fetal and adult T cells are distinct populations that arise from different populations of HSC present at different stages of development. We also provide evidence that the fetal T cell lineage is biased towards immune tolerance. These observations offer a mechanistic explanation for the tolerogenic properties of the developing fetus and for variable degrees of immune responsiveness at birth.
Assessment, as an inextricable component of the curriculum, is an important factor influencing student approaches to learning. If assessment is to drive learning, then it must assess the desired outcomes. In an effort to alleviate some of the anxiety associated with a traditional discipline-based second year of medical studies, a bonus system was introduced into the Histology assessment. Students obtaining a year mark of 70% were rewarded with full marks for some tests, resulting in many requiring only a few percentage points in the final examination to pass Histology.
In order to ascertain whether this bonus system might be impacting positively on student learning, thirty-two second year medical students (non-randomly selected, representing four academic groups based on their mid-year results) were interviewed in 1997 and, in 1999, the entire second year class completed a questionnaire (n = 189). Both groups were asked their opinions of the bonus system.
Both groups voted overwhelmingly in favour of the bonus system, despite fewer than 45% of students achieving it. Students commented that it relieved some of the stress of the year-end examinations and was generally motivating with regard to their work commitment.
Being satisfied with how and what we assess in Histology, we are of the opinion that this reward system may contribute to engendering appropriate learning approaches (i.e. for understanding) in students. As a result of its apparent positive influence on learning and attitudes towards learning, this bonus system will continue to operate until the traditional programme is phased out. It is hoped that other educators, believing that their assessment is a reflection of the intended outcomes, might recognise merit in rewarding students for consistent achievement.
Graphene has received significant attention due to its excellent mechanical properties, which have led to the emergence of graphene-based nano-electro-mechanical systems (NEMS) such as nanoresonators. The nonlinear vibration of a graphene resonator and its application to mass sensing (based on nonlinear oscillation) have been little studied, even though a graphene resonator can easily reach the nonlinear vibration regime. In this work, we have studied the nonlinear vibration of a graphene resonator driven by a geometric nonlinear effect arising from an edge-clamped boundary condition, using a continuum elastic model such as a plate model. We have shown that in-plane tension can play a role in modulating the nonlinearity of the resonance of a graphene sheet. It has been found that the detection sensitivity of a graphene resonator can be improved by using nonlinear vibration induced by an actuation-force-driven geometric nonlinear effect. It is also shown that in-plane tension can control the detection sensitivity of a graphene resonator operating in both the harmonic and nonlinear oscillation regimes. Our study suggests design principles for a graphene resonator as a mass sensor, towards a novel detection scheme using graphene-based nonlinear oscillators.
Graphene resonator; Mass sensing; Nonlinear oscillation; NEMS
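As a baseline for the sensitivity claims above, mass sensing in the harmonic regime rests on the first-order relation Δf ≈ −f₀·Δm/(2·m_eff), which can be inverted directly; the nonlinear scheme in the abstract aims to improve on this. The function and the numbers in the test are purely illustrative.

```python
def mass_from_shift(f0, m_eff, df):
    """Invert the first-order harmonic-regime relation
    df = -f0 * dm / (2 * m_eff) to estimate the adsorbed mass dm.

    f0    : unloaded resonance frequency (Hz)
    m_eff : effective mass of the resonator (kg)
    df    : measured frequency shift (Hz), negative for added mass
    """
    return -2.0 * m_eff * df / f0
```

For a hypothetical 100 MHz resonator with an effective mass of 1e-18 kg, a −50 Hz shift corresponds to an adsorbed mass of 1e-24 kg.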
We introduce the Basic Immune Simulator (BIS), an agent-based model created to study the interactions between the cells of the innate and adaptive immune system. Innate immunity, the initial host response to a pathogen, generally precedes adaptive immunity, which generates immune memory for an antigen. The BIS simulates basic cell types, mediators and antibodies, and consists of three virtual spaces representing parenchymal tissue, secondary lymphoid tissue and the lymphatic/humoral circulation. The BIS includes a Graphical User Interface (GUI) to facilitate its use as an educational and research tool.
The BIS was used to qualitatively examine the innate and adaptive interactions of the immune response to a viral infection. Calibration was accomplished via a parameter sweep of initial agent population size, and comparison of simulation patterns to those reported in the basic science literature. The BIS demonstrated that the degree of the initial innate response was a crucial determinant for an appropriate adaptive response. Deficiency or excess in innate immunity resulted in excessive proliferation of adaptive immune cells. Deficiency in any of the immune system components increased the probability of failure to clear the simulated viral infection.
The behavior of the BIS matches both normal and pathological behavior patterns in a generic viral infection scenario. Thus, the BIS effectively translates mechanistic cellular and molecular knowledge regarding the innate and adaptive immune response and reproduces the immune system's complex behavioral patterns. The BIS can be used both as an educational tool to demonstrate the emergence of these patterns and as a research tool to systematically identify potential targets for more effective treatment strategies for disease processes including hypersensitivity reactions (allergies, asthma), autoimmunity and cancer. We believe that the BIS can be a useful addition to the growing suite of in silico platforms used as an adjunct to traditional research efforts.
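The qualitative finding that a deficient innate response drives excessive adaptive proliferation can be reproduced with even a minimal difference-equation caricature. This is emphatically not the BIS (which is agent-based and far richer); every rate constant below is an arbitrary illustrative choice.

```python
def simulate(innate_strength, steps=200):
    """Toy caricature of innate/adaptive interplay: virus replicates,
    the innate arm clears a fixed fraction per step, and adaptive
    cells clear virus by mass action while proliferating in
    proportion to the remaining viral load."""
    virus, adaptive = 1.0, 1.0
    for _ in range(steps):
        virus += 0.30 * virus                # viral replication
        virus -= innate_strength * virus     # innate clearance
        virus -= 0.05 * adaptive * virus     # adaptive clearance
        virus = max(virus, 0.0)
        adaptive += 0.02 * adaptive * virus  # antigen-driven proliferation
    return virus, adaptive
```

With a weak innate arm the virus grows unchecked at first, so the adaptive population must expand far more before clearance; with a strong innate arm the infection is extinguished early and adaptive proliferation stays minimal, mirroring the pattern reported for the BIS.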
Artificial immune systems are among the widely used methods for classification, which is a decision-making process. Based on the natural immune system, artificial immune systems can be successfully applied to classification, optimization, recognition, and learning in real-world problems. In this study, a reinforcement-learning-based artificial immune classifier is proposed as a new approach. The approach uses reinforcement learning together with immune operators to find better antibodies. Compared with other methods in the literature, the proposed approach offers several advantages, including effectiveness, fewer memory cells, high accuracy, speed, and adaptability to data. The performance of the proposed approach is demonstrated by simulation and experimental results using real data in MATLAB and on an FPGA. Benchmark data and remote image data are used in the experiments. Comparative results against supervised/unsupervised artificial immune systems, a negative selection classifier, and a resource-limited artificial immune classifier demonstrate the effectiveness of the proposed method.
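The reinforcement-learning classifier itself is not reproduced here, but the clonal-selection machinery common to immune classifiers of this family, namely cloning training antigens with mutation and labelling a new sample by its highest-affinity memory cell, can be sketched in a few lines. All class and parameter names are invented for illustration.

```python
import numpy as np

class MinimalAIS:
    """Toy immune-inspired classifier: antibodies are mutated clones
    of the training antigens; the closest (highest-affinity) memory
    cell supplies the predicted label."""

    def __init__(self, n_clones=5, mutation=0.1, seed=0):
        self.n_clones = n_clones
        self.mutation = mutation
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        cells, labels = [], []
        for x, lab in zip(X, y):
            # hypermutation: Gaussian perturbations of each antigen
            clones = x + self.mutation * self.rng.standard_normal(
                (self.n_clones, len(x)))
            cells.append(clones)
            labels.extend([lab] * self.n_clones)
        self.cells = np.vstack(cells)
        self.labels = np.array(labels)
        return self

    def predict(self, X):
        out = []
        for x in X:
            # affinity = negative Euclidean distance
            d = np.linalg.norm(self.cells - x, axis=1)
            out.append(self.labels[np.argmin(d)])
        return np.array(out)
```

On well-separated data this reduces to a mutation-smoothed nearest-neighbour rule; the methods compared in the abstract add resource limitation, negative selection, or (here) reinforcement learning on top of this skeleton.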
A fundamental problem in immunology is that of understanding how the immune system selects promptly which cells to kill without harming the body. This problem poses an apparent paradox. Strong reactivity against pathogens seems incompatible with perfect tolerance towards self. We propose a different view on cellular reactivity to overcome this paradox: effector functions should be seen as the outcome of cellular decisions which can be in conflict with other cells' decisions. We argue that if cellular systems are frustrated, then extensive cross-reactivity among the elements in the system can decrease the reactivity of the system as a whole and induce perfect tolerance. Using numerical and mathematical analyses, we discuss two simple models that perform optimal pathogenic detection with no autoimmunity if cells are maximally frustrated. This study strongly suggests that a principle of maximal frustration could be used to build artificial immune systems. It would be interesting to test this principle in the real adaptive immune system.
artificial immune systems; self/non-self discrimination; homeostatic responses; immunology; cellular frustration
In this study we use the principles of distributed cognition and the methodology of human-centered distributed information design to analyze a complex distributed human-computer system, identify its problems, and generate design requirements and implementation specifications of a replacement prototype for effective organizational memory and knowledge management. We argue that a distributed human-computer information system has unique properties, structures and processes that are best described in the language of distributed cognition. Distributed cognition provides researchers a richer theoretical understanding of human-computer interactions and enables researchers to capture the phenomenon that emerges in social interactions as well as the interactions between people and structures in their environment.
The neural systems that support motor adaptation in humans are thought to be distinct from those that support the declarative system. Yet, during motor adaptation changes in motor commands are supported by a fast adaptive process that has important properties (rapid learning, fast decay) that are usually associated with the declarative system. The fast process can be contrasted to a slow adaptive process that also supports motor memory, but learns gradually and shows resistance to forgetting. Here we show that after people stop performing a motor task, the fast motor memory can be disrupted by a task that engages declarative memory, but the slow motor memory is immune from this interference. Furthermore, we find that the fast/declarative component plays a major role in the consolidation of the slow motor memory. Because of the competitive nature of declarative and non-declarative memory during consolidation, impairment of the fast/declarative component leads to improvements in the slow/non-declarative component. Therefore, the fast process that supports formation of motor memory is not only neurally distinct from the slow process, but it shares critical resources with the declarative memory system.
adaptation; learning; memory; motor control; motor learning; movement
Problem solving methods (PSMs) are software components that represent and encode reusable algorithms. They can be combined with representations of domain knowledge to produce intelligent application systems. A goal of research on PSMs is to provide principled methods and tools for composing and reusing algorithms in knowledge-based systems. The ultimate objective is to produce libraries of methods that can be easily adapted for use in these systems. Despite the intuitive appeal of PSMs as conceptual building blocks, in practice, these goals are largely unmet. There are no widely available tools for building applications using PSMs and no public libraries of PSMs available for reuse. This paper analyzes some of the reasons for the lack of widespread adoption of PSM techniques and illustrates our analysis by describing our experiences developing a complex, high-throughput software system based on PSM principles. We conclude that many fundamental principles in PSM research are useful for building knowledge-based systems. In particular, the task–method decomposition process, which provides a means for structuring knowledge-based tasks, is a powerful abstraction for building systems of analytic methods. However, despite the power of PSMs in the conceptual modeling of knowledge-based systems, software engineering challenges have been seriously underestimated. The complexity of integrating control knowledge modeled by developers using PSMs with the domain knowledge that they model using ontologies creates a barrier to widespread use of PSM-based systems. Nevertheless, the surge of recent interest in ontologies has led to the production of comprehensive domain ontologies and of robust ontology-authoring tools. These developments present new opportunities to leverage the PSM approach.
Knowledge-Based Systems; Problem Solving Methods; Reusable Problem Solvers; Software-Engineering Challenges; Task–Method Decomposition Process
An issue of increasing interest in Pavlovian conditioning is to identify ways to facilitate the development and persistence of extinction. Both behavioral and molecular lines of evidence demonstrate that learning during extinction can be enhanced. Similar evidence has been offered to support the idea that extinction causes the original association to be unlearned, or erased. Differentiating between extinction and erasure accounts is extremely difficult and requires many assumptions about the fundamental nature of how memory storage maps onto memory expression. In this issue of Behavioral Neuroscience, Norrholm et al. (2008) describe a study of extinction with humans that has the potential to serve as a translational bridge between rodent work and clinical applications. They find less recovery of a conditioned fear response when extinction occurs 10 min rather than 72 hr after conditioning; however, the recovery of subjects' expectancies of the fearful stimulus is independent of when extinction occurred. These findings and others discussed here demonstrate some of the challenges in making inferences about memory erasure during extinction.
Extinction; consolidation; reconsolidation; memory storage; memory erasure
Immunity results from a complex interplay between the antigen-nonspecific innate immune system and the antigen-specific adaptive immune system. The cells and molecules of the innate system employ non-clonal recognition receptors including lectins, Toll-like receptors, NOD-like receptors and helicases. B and T lymphocytes of the adaptive immune system employ clonal receptors recognizing antigens or their derived peptides in a highly specific manner. An essential link between innate and adaptive immunity is provided by dendritic cells (DCs). DCs can induce such contrasting states as immunity and tolerance. The recent years have brought a wealth of information on the biology of DCs revealing the complexity of this cell system. Indeed, DC plasticity and subsets are prominent determinants of the type and quality of elicited immune responses. Here we summarize our recent studies aimed at a better understanding of the DC system to unravel the pathophysiology of human diseases and design novel human vaccines.
Some recent explanations of depression have suggested that it may be "evolutionary" in that there are advantages to the depressed individual which arise from some aspects of depressive symptomatology. While the depressive behaviour of withdrawal from the adverse environment may provide some immediate benefits to the depressed individual, thus making it potentially "adaptive" in the short term, this does not fit the biological definition of "evolutionary". In fact, depression does not meet two of the three criteria required by natural selection in order to be evolutionary. Therefore, while some depressive behaviour may be advantageous for the depressed individual, and is therefore "adaptive" in an immediate sense, it cannot be accurately described as "evolutionary". Implications for research and clinical practice are discussed.
We approach the field of stress immunology from an ecological point of view and ask: why should a heavy physical workload, for example as a result of a high reproductive effort, compromise immune function? We argue that immunosuppression by neuroendocrine mechanisms, such as stress hormones, during heavy physical workload is adaptive, and consider two different ultimate explanations of such immunosuppression. First, several authors have suggested that the immune system is suppressed to reallocate resources to other metabolic demands. In our view, this hypothesis assumes that considerable amounts of energy or nutrients can be saved by suppressing the immune system; however, this assumption requires further investigation. Second, we suggest an alternative explanation based on the idea that the immune system is tightly regulated by neuroendocrine mechanisms to avoid hyperactivation and ensuing autoimmune responses. We hypothesize that the risk of autoimmune responses increases during heavy physical workload and that the immune system is suppressed to counteract this.
This review describes the structure-based reverse vaccinology approach aimed at developing vaccine immunogens capable of inducing antibodies that broadly neutralize HIV-1. Some basic principles of protein immunochemistry are reviewed and the implications of the extensive polyspecificity of antibodies for vaccine development are underlined. Although it is natural for investigators to want to know the cause of an effective immunological intervention, the classic notion of causality is shown to have little explanatory value for a system as complex as the immune system, where any observed effect always results from many interactions between a large number of components. Causal explanations are reductive because a single factor is singled out for attention and given undue explanatory weight on its own. Other examples of the negative impact of reductionist thinking on HIV vaccine development are discussed. These include (1) the failure to distinguish between the chemical nature of antigenicity and the biological nature of immunogenicity, (2) the belief that when an HIV-1 epitope is reconstructed by rational design to better fit a neutralizing monoclonal antibody (nMab), this will produce an immunogen able to elicit Abs with the same neutralizing capacity as the Ab used as template for designing the antigen, and (3) the belief that protection against infection can be analyzed at the level of individual molecular interactions although it has meaning only at the level of an entire organism. The numerous unsuccessful strategies that have been used to design HIV-1 vaccine immunogens are described and it is suggested that the convergence of so many negative experimental results justifies the conclusion that reverse vaccinology is unlikely to lead to the development of a preventive HIV-1 vaccine. Immune correlates of protection in vaccinees have not yet been identified because this will become feasible only retrospectively once an effective vaccine exists.
The finding that extensive antibody affinity maturation is needed to obtain mature anti-HIV-1 Abs endowed with a broad neutralizing capacity explains why antigens designed to fit matured Mabs are not effective vaccine immunogens since these are administered to naive recipients who possess only B-cell receptors corresponding to the germline version of the matured Abs.
antibody affinity maturation; antibody polyspecificity; discontinuous protein epitopes; HIV vaccines; rational vaccine design; reductionism; reverse vaccinology; systems biology
Bradford Hill's considerations published in 1965 had an enormous influence on attempts to separate causal from non-causal explanations of observed associations. These considerations were often applied as a checklist of criteria, although they were by no means intended to be used in this way by Hill himself. Hill, however, avoided defining explicitly what he meant by "causal effect".
This paper provides a fresh point of view on Hill's considerations from the perspective of counterfactual causality. I argue that counterfactual arguments strongly contribute to the question of when to apply the Hill considerations. Some of the considerations, however, involve many counterfactuals in a broader causal system, and their heuristic value decreases as the complexity of a system increases; the danger of misapplying them can be high. The implications of these insights for study design and data analysis are discussed. The key analysis tool for assessing the applicability of Hill's considerations is multiple bias modelling (Bayesian methods and Monte Carlo sensitivity analysis); these methods should be used much more frequently.
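A minimal instance of the Monte Carlo sensitivity analysis advocated above is probabilistic bias analysis for a single unmeasured confounder: bias parameters are drawn from assumed prior distributions and the observed risk ratio is divided by the implied bias factor. The priors and the observed risk ratio below are invented purely for illustration.

```python
import numpy as np

def mc_confounding_adjustment(rr_obs, n=10000, seed=1):
    """Monte Carlo sensitivity analysis for an unmeasured binary
    confounder. Each draw samples the confounder's prevalence in the
    exposed (p1) and unexposed (p0) groups and its risk ratio with
    the disease (rr_cd), then computes the classical bias factor."""
    rng = np.random.default_rng(seed)
    p1 = rng.uniform(0.4, 0.6, n)      # prevalence in exposed (assumed prior)
    p0 = rng.uniform(0.1, 0.3, n)      # prevalence in unexposed (assumed prior)
    rr_cd = rng.uniform(1.5, 3.0, n)   # confounder-disease risk ratio (assumed)
    bias = (rr_cd * p1 + (1 - p1)) / (rr_cd * p0 + (1 - p0))
    return rr_obs / bias
```

The output is a distribution of bias-adjusted risk ratios rather than a single corrected estimate, which is precisely what makes such analyses useful for judging how fragile a Hill-style causal argument is to plausible bias.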
In basic terms, the immune system has two lines of defense: innate immunity and adaptive immunity. Innate immunity is the first immunological, non-specific (antigen-independent) mechanism for fighting against an intruding pathogen. It is a rapid immune response, occurring within minutes or hours after aggression, that has no immunologic memory. Adaptive immunity, on the other hand, is antigen-dependent and antigen-specific; it has the capacity for memory, which enables the host to mount a more rapid and efficient immune response upon subsequent exposure to the antigen. There is a great deal of synergy between the adaptive immune system and its innate counterpart, and defects in either system can provoke illness or disease, such as autoimmune diseases, immunodeficiency disorders and hypersensitivity reactions. This article provides a practical overview of innate and adaptive immunity, and describes how these host defense mechanisms are involved in both health and illness.
STUDY QUESTION: Continuous quality improvement (CQI) has been implemented at least to some degree in many health care settings, yet randomized controlled trials (RCTs) of CQI are rare. We ask whether, when, and how RCTs of CQI might be designed. STUDY DESIGN: We consider two applications of CQI: as a general philosophy of management and (by analogy with the use of conceptual models from the behavioral sciences) as a conceptual model for developing specific interventions. The example of warfarin therapy for stroke prevention among patients with atrial fibrillation is used throughout. PRINCIPAL FINDINGS: While it is impractical to use RCTs to study CQI as a general management philosophy, RCT methodology is appropriate for studying CQI as a conceptual model for generating interventions. RCTs of CQI might be considered when the process change under consideration is very large, its implications (e.g., in terms of cost, outcomes of care, etc.) are very great, and the best approach is uncertain. When designing RCTs of CQI, critical decisions include (1) the unit of randomization; (2) whether the focus is on CQI as a method for generating interventions or, instead, is on specific interventions in and of themselves; and (3) the flexibility available to local personnel to modify the intervention's operational details. CONCLUSIONS: RCTs of CQI as a conceptual model for generating interventions are feasible.
Motivation: Our recent work introduced a generic method to construct the design space of biochemical systems: a representation of the relationships between system parameters, environmental variables and phenotypic behavior. In design space, the qualitatively distinct phenotypes of a biochemical system can be identified, counted, analyzed and compared. Boundaries in design space indicate a transition between phenotypic behaviors and can be used to measure a system's tolerance to large changes in parameters. Moreover, the relative size and arrangement of such phenotypic regions can suggest or confirm global properties of the system.
Results: Our work here demonstrates that the construction and analysis of design space can be automated. We present a formal description of design space and a detailed explanation of its construction. We also extend the notion to include variable kinetic orders. We describe algorithms that automate common steps of design space construction and analysis, introduce new analyses that are made possible by such automation and discuss challenges of implementation and scaling. In the end, we demonstrate the techniques using software we have created.
Availability: The Design Space Toolbox for MATLAB is freely available at http://www.bme.ucdavis.edu/savageaulab/
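To give a schematic sense of what a design-space boundary is, consider the toy system dv/dt = a − b·v − c·v², whose steady state can be classified by which loss term dominates; the boundary between the two qualitative phenotypes lies where the terms balance. This stand-in illustration is not part of the Design Space Toolbox, and the function name is invented.

```python
import math

def dominant_phenotype(a, b, c):
    """Classify which loss term dominates at the steady state of the
    toy system dv/dt = a - b*v - c*v**2 (a caricature of assigning a
    point in parameter space to a region of design space)."""
    # positive root of a - b*v - c*v**2 = 0
    v = (-b + math.sqrt(b * b + 4.0 * a * c)) / (2.0 * c)
    return "linear" if b * v >= c * v * v else "nonlinear"
```

Sweeping (a, b, c) over a grid and recording the returned label traces out the phenotypic regions and their boundary, which is the kind of construction the toolbox automates for full biochemical systems.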
By way of comment, I suggest: 1) That the definitions of 'competence' and 'rationality' require some modification. 2) That Professor Sherlock is right to argue that a competent but irrational decision to refuse beneficial treatment ought to be overruled; but in practice it is extremely difficult to be sufficiently sure that the decision is really irrational and the treatment really will be beneficial, except when the patient's life is in danger or he is refusing basic necessities. 3) That in practice the issue is further complicated by such questions as whether there are alternative treatments, whether persuasion is possible, what the doctor's or institution's legal obligations are, and what resources are available. 4) That the presumption should be against coercion, and the patient--however irritating this may be to some doctors--should be considered 'rational until proved irrational'.
A picture archiving and communication system (PACS) is an electronic and ideally filmless information system for acquiring, sorting, transporting, storing, and electronically displaying medical images. PACS have developed rapidly and are in operation in a number of hospitals. Before widespread adoption of PACSs can occur, however, their cost-effectiveness must be proven. This article introduces the basic components of a PACS. The current PACS cost-analysis literature is reviewed. Some authors conclude that the PACS would pay for itself, while others find the PACS much more expensive. Explanations for these differences are explored. Almost all of these studies focus on direct costs and ignore indirect costs and benefits. The literature characterizing the indirect costs of PACS is reviewed. The authors conclude that there is a need for uniform, well-defined criteria for the calculation of the costs and savings of PACSs.