1.  Research on Implementation of Interventions in Tuberculosis Control in Low- and Middle-Income Countries: A Systematic Review 
PLoS Medicine  2012;9(12):e1001358.
Cobelens and colleagues systematically reviewed research on implementation and cost-effectiveness of the WHO-recommended interventions for tuberculosis.
Background
Several interventions for tuberculosis (TB) control have been recommended by the World Health Organization (WHO) over the past decade. These include isoniazid preventive therapy (IPT) for HIV-infected individuals and household contacts of infectious TB patients, diagnostic algorithms for rule-in or rule-out of smear-negative pulmonary TB, and programmatic treatment for multidrug-resistant TB. There is no systematically collected data on the type of evidence that is publicly available to guide the scale-up of these interventions in low- and middle-income countries. We investigated the availability of published evidence on their effectiveness, delivery, and cost-effectiveness that policy makers need for scaling-up these interventions at country level.
Methods and Findings
PubMed, Web of Science, EMBASE, and several regional databases were searched for studies published from 1 January 1990 through 31 March 2012 that assessed health outcomes, delivery aspects, or cost-effectiveness for any of these interventions in low- or middle-income countries. Selected studies were evaluated for their objective(s), design, geographical and institutional setting, and generalizability. Studies reporting health outcomes were categorized as primarily addressing efficacy or effectiveness of the intervention. These criteria were used to draw landscapes of published research. We identified 59 studies on IPT in HIV infection, 14 on IPT in household contacts, 44 on rule-in diagnosis, 19 on rule-out diagnosis, and 72 on second-line treatment. Comparative effectiveness studies were relatively few (n = 9) and limited to South America and sub-Saharan Africa for IPT in HIV-infection, absent for IPT in household contacts, and rare for second-line treatment (n = 3). Evaluations of diagnostic and screening algorithms were more frequent (n = 19) but geographically clustered and mainly of non-comparative design. Fifty-four studies evaluated ways of delivering these interventions, and nine addressed their cost-effectiveness.
Conclusions
There are substantial gaps in the published evidence needed to scale up these five WHO-recommended TB interventions at country level, which for many countries may preclude program-wide implementation. There is a strong need for rigorous operational research studies carried out in programmatic settings to inform the best use of existing and new interventions in TB control.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Tuberculosis (TB), caused by Mycobacterium tuberculosis, is curable and preventable, but according to the World Health Organization (WHO), in 2011, 8.7 million people had symptoms of TB (usually a productive cough and fever) and 1.4 million people—95% from low- and middle-income countries—died from TB. TB is also the leading cause of death in people with HIV worldwide, and in 2010 about 10 million children were orphaned as a result of their parents dying from TB. To help reduce the considerable global burden of TB, a global initiative called the Stop TB Partnership, led by WHO, has implemented a strategy to reduce deaths from TB by 50% by 2015—even greater than the target of Millennium Development Goal 6 (to reverse the increase in TB incidence by 2015).
Why Was This Study Done?
Over the past few years, WHO has recommended that countries implement several interventions to help control the spread of tuberculosis through measures to improve prevention, diagnosis, and treatment. Five such interventions currently recommended by WHO are: treatment with isoniazid to prevent TB among people who are HIV positive, and also among household contacts of people infected with TB; the use of clinical pathways (algorithms) for diagnosing TB in people accessing health care who have a negative smear test—the most commonly used diagnostic test, which relies on sputum samples—(“rule-in algorithms”); screening algorithms for excluding TB in people who have HIV (“rule-out algorithms”); and finally, provision of second-line treatment for multidrug-resistant tuberculosis (a form of TB that does not respond to the most commonly used drugs) under programmatic conditions. The effectiveness of these interventions, their costs, and the practicalities of implementation are all important information for countries seeking to control TB following the WHO guidelines, but little is known about the availability of this information. Therefore, in this study the researchers systematically reviewed published studies to find evidence of the effectiveness of each of these interventions when implemented in routine practice, and also for additional information on the setting and conditions of implemented interventions, which might be useful to other countries.
What Did the Researchers Do and Find?
Using a specific search strategy, the researchers comprehensively searched through several key databases of publications, including regional databases, to identify 208 (out of 11,489 found initially) suitable research papers published between January 1990 and March 2012. For included studies, the researchers also noted the geographical location and setting and the type and design of study.
Of the 208 included studies, 59 focused on isoniazid prevention therapy in HIV infection, and only 14 on isoniazid prevention therapy for household contacts. There were 44 studies on “rule-in” clinical diagnosis, 19 on “rule-out” clinical diagnosis, and 72 studies on second-line treatment for TB. Studies on each intervention had some weaknesses, and overall, researchers found that there were very few real-world studies reporting on the effectiveness of interventions in program settings (rather than under optimal conditions in research settings). Few studies evaluated the methods used to implement the intervention or addressed delivery and operational issues (such as adherence to treatment), and there were limited economic evaluations of the recommended interventions. Furthermore, the researchers found that in general, the South Asian region was poorly represented.
What Do These Findings Mean?
These findings suggest that there is limited evidence on effectiveness, delivery, and cost-effectiveness to guide the scale-up of the five WHO-recommended interventions for tuberculosis control in the countries and settings where they are needed, despite the urgent need for these interventions to be implemented. The poor evidence base identified in this review highlights the tension between the decision to adopt a recommendation and its implementation adapted to local circumstances, and may be an important reason why these interventions are not implemented in many countries. This study also suggests that creative thinking is necessary to address the gaps between WHO recommendations and global health policy on new interventions and their real-world implementation in country-wide TB control programs. Future research should focus more on operational studies, the results of which should be made publicly available, and researchers, donors, and medical journals could perhaps reconsider their priorities to help bridge the knowledge gap identified in this study.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001358.
WHO has a wide range of information about TB and research on TB, including more about the Stop TB strategy and the Stop TB Partnership
The UN website has more information about MDG 6
The Global Fund to Fight AIDS, Tuberculosis and Malaria has specific information about progress on TB control
doi:10.1371/journal.pmed.1001358
PMCID: PMC3525528  PMID: 23271959
2.  Executive attention impairment in first-episode schizophrenia 
BMC Psychiatry  2012;12:154.
Background
We compared the attention abilities of a group of first-episode schizophrenia (FES) patients and a group of healthy participants using the Attention Network Test (ANT), a standard procedure that estimates the functional state of three neural networks controlling the efficiency of three different attentional behaviors, i.e., alerting (achieving and maintaining a state of high sensitivity to incoming stimuli), orienting (ability to select information from sensory input), and executive attention (mechanisms for resolving conflict among thoughts, feelings, and actions).
Methods
We evaluated 22 FES patients from 17 to 29 years of age with a recent history of a single psychotic episode treated only with atypical neuroleptics, and 20 healthy persons matched with FES patients by sex, age, and educational level as the control group. Attention was estimated using the ANT in which participants indicate whether a central horizontal arrow is pointing to the left or the right. The central arrow may be preceded by spatial or temporal cues denoting where and when the arrow will appear, and may be flanked by other arrows (hereafter, flankers) pointing in the same or the opposite direction.
Results
The efficiency of the alerting, orienting, and executive networks was estimated by measuring how reaction time was influenced by congruency between temporal, spatial, and flanker cues. We found that the control group demonstrated significantly greater attention efficiency than FES patients only in the executive attention network.
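For readers unfamiliar with the ANT, the three network efficiencies are conventionally computed as reaction-time difference scores across cue and flanker conditions. The sketch below illustrates this standard scoring convention; it is not code from the study, and the column names are hypothetical:

```python
import pandas as pd

def ant_network_scores(df: pd.DataFrame) -> dict:
    """Compute the three standard ANT difference scores from trial-level data.

    Expects columns (hypothetical names): 'rt' (reaction time, ms),
    'cue' in {'none', 'double', 'center', 'spatial'}, and
    'flanker' in {'congruent', 'incongruent'}.
    """
    def mean_rt(mask):
        return df.loc[mask, "rt"].mean()

    return {
        # Alerting: benefit of a temporal warning signal.
        "alerting": mean_rt(df["cue"] == "none") - mean_rt(df["cue"] == "double"),
        # Orienting: benefit of knowing where the target will appear.
        "orienting": mean_rt(df["cue"] == "center") - mean_rt(df["cue"] == "spatial"),
        # Executive: cost of resolving conflict from incongruent flankers.
        "executive": mean_rt(df["flanker"] == "incongruent")
                     - mean_rt(df["flanker"] == "congruent"),
    }
```

A larger executive score means a larger conflict cost; the group difference reported above corresponds to this contrast only.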
Conclusions
FES patients are impaired in executive attention but not in alerting or orienting attention, suggesting that executive attention deficit may be a primary impairment during the progression of the disease.
doi:10.1186/1471-244X-12-154
PMCID: PMC3493330  PMID: 22998680
Schizophrenia; First-episode; Attention; ANT; Executive; Cognitive
3.  Left Ventricular Assist Devices 
Executive Summary
Objective
The objective of this health technology policy assessment was to determine the effectiveness and cost-effectiveness of using implantable ventricular assist devices in the treatment of end-stage heart failure.
Heart Failure
Heart failure is a complex syndrome that impairs the ability of the heart to maintain adequate blood circulation, resulting in multiorgan abnormalities and, eventually, death. In the period of 1994 to 1997, 38,702 individuals in Ontario had a first hospital admission for heart failure. Despite reported improvement in survival, the five-year mortality rate for heart failure is about 50%.
For patients with end-stage heart failure that does not respond to medical therapy, surgical treatment or traditional circulatory assist devices, heart transplantation (in appropriate patients) is the only treatment that provides significant patient benefit.
Heart Transplant in Ontario
With a shortage in the supply of donor hearts, patients are waiting longer for a heart transplant and may die before a donor heart is available. From 1999 to 2003, 55 to 74 people received a heart transplant in Ontario each year. Another 12 to 21 people died while waiting for a suitable donor heart. Of these, 1 to 5 deaths occurred in people under 18 years old. The rate-limiting factor in heart transplant is the supply of donor hearts. Without an increase in available donor hearts, attempts at prolonging the life of some patients on the transplant wait list could have a harmful effect on other patients who are pushed down the waiting list (a knock-on effect).
LVAD Technology
Ventricular assist devices (VADs) have been developed to provide circulatory assistance to patients with end-stage heart failure. These are small pumps that usually assist the damaged left ventricle (LVADs) and may be situated within the body (intracorporeal) or outside the body (extracorporeal). Some of these devices were designed for use in the right ventricle (RVADs) or both ventricles (biventricular).
LVADs have been mainly used as a “bridge-to-transplant” for patients on a transplant waiting list. As well, they have been used as a “bridge-to-recovery” in acute heart failure, but this experience is limited. There has been an increasing interest in using LVAD as a permanent (destination) therapy.
Review of LVAD by the Medical Advisory Secretariat
The Medical Advisory Secretariat’s review included a descriptive synthesis of findings from five systematic reviews and 60 reports published between January 2000 and December 2003. Additional information was obtained through consultation and by searching the websites of Health Canada, the United Network of Organ Sharing, Organ Donation Ontario, and LVAD manufacturers.
Summary of Findings
Safety and Effectiveness
Previous HTAs and current Level 3 evidence from prospective non-randomized controlled studies showed that when compared to optimal medical therapy, LVAD support significantly improved the pre-transplant survival rates of heart transplant candidates waiting for a suitable donor heart (71% for LVAD and 36% for medical therapy). Pre-transplant survival rates reported ranged from 58% to 90% (median 74%). Improved transplant rates were also reported for people who received pre-transplant LVAD support (e.g. 67% for LVAD vs 33% for medical therapy). Reported transplant rates for LVAD patients ranged from 39% to 90% (median 71%).
Patient age greater than 60 years and pre-existing respiratory failure associated with septicemia, ventilation, and right heart failure were independent risk factors for mortality after LVAD implantation.
LVAD support was shown to improve the New York Heart Association (NYHA) functional classification and quality of life of patients waiting for heart transplant. LVAD also enabled approximately 41%–49% of patients to be discharged from hospital and wait for a heart transplant at home. However, over 50% of the discharged patients required re-hospitalization due to adverse events.
Post-transplant survival rates for LVAD-bridged patients were similar to or better than the survival rates of patients bridged by medical therapy.
LVAD support has been associated with serious adverse events, including infection (median 53%, range 6%–72%), bleeding (median 35%, range 8.6%–48%), thromboembolism (5%–37%), neurologic disorders (7%–28%), right ventricular failure (11%–26%), organ dysfunction (5%–50%), and hemolysis (6%–20%). Bleeding tends to occur in the first few post-implant days and is rare thereafter; it is fatal in 2%–7% of patients. Infection and thromboembolism occurred throughout the duration of the implant, though their frequency tended to diminish with time. Device malfunction has been identified as one of the major complications. Fatalities directly attributable to the devices were about 1% in short-term LVAD use. However, mechanical failure was the second most frequent cause of death in patients on prolonged LVAD support. Malfunctions mainly involved the external components, which could often be replaced with backup components.
LVAD has been used as a bridge-to-recovery in patients suffering from acute cardiogenic shock due to cardiomyopathy, myocarditis or cardiotomy. The survival rates were reported to be lower than in bridge-to-transplant (median 26%). Some of the bridge-to-recovery patients (14%–75%) required a heart transplant or remained on prolonged LVAD support. According to an expert in the field, experience with LVAD as a bridge-to-recovery technology has been more favourable in Germany than in North America, where it is not regarded as a major indication since evidence for its effectiveness in this setting is limited.
LVAD has also been explored as a destination therapy. A small, randomized, controlled trial (level 2 evidence) showed that LVAD significantly increased the 1-year survival rate of patients who had end-stage heart failure but were not eligible for a heart transplant (51% for LVAD vs 25% for medical therapy). However, the improved survival was accompanied by an adverse event rate 2.35 times that of medically treated patients and a higher hospital re-admission rate. The 2-year survival rate on LVAD decreased to 23%, although it was still significantly better than that of patients on medical therapy (8%). The leading causes of death were sepsis (41%) and device failure (17%).
The FDA has given conditional approval for the permanent use of HeartMate SNAP VE LVAS in patients with end-stage heart failure who are not eligible for heart transplantation, although the long-term effect of this application is not known.
In Canada, four LVAD systems have been licensed for bridge-to-transplant only. The use of LVAD support raises ethical issues because of the implications of potential explantation that could be perceived as a withdrawal of life support.
Potential Impact on the Transplant Waiting List
With the shortage of donor hearts for adults, LVAD support probably would not increase the number of patients who receive a heart transplant. If LVAD-supported candidates are prioritized for urgent heart transplant, there will be a knock-on effect: other transplant candidates without LVAD support would be pushed down the list, facing longer waits, deterioration in health status, and possibly death before a suitable donor heart becomes available.
Under the current policy for allocating donor hearts in Ontario, patients on LVAD support would be downgraded to Status 3 with a lower priority to receive a transplant. This would likely result in an expansion of the transplant waiting list with an increasing number of patients on prolonged LVAD support, which is not consistent with the indication of LVAD use approved by Health Canada.
There are indications from the United Kingdom that LVAD support in conjunction with an urgent transplant listing in the pediatric population may decrease the number of deaths on the waiting list without a harmful knock-on effect on other transplant candidates.
Conclusion
LVAD support as a bridge-to-transplant has been shown to improve the survival rate, functional status and quality of life of patients on the heart transplant waiting list. However, due to the shortage of donor hearts and the current heart transplant algorithm, LVAD support for transplant candidates of all age groups would likely result in an expansion of the waiting list and prolonged use of LVAD with significant budget implications but without increasing the number of heart transplants. Limited level 4 evidence showed that LVAD support in children yielded survival rates comparable to those in the adult population. The introduction of LVAD in the pediatric population would be more cost-effective and might not have a negative effect on the transplant waiting list.
PMCID: PMC3387736  PMID: 23074453
4.  How Awareness Changes the Relative Weights of Evidence During Human Decision-Making 
PLoS Biology  2011;9(11):e1001203.
A combined behavioral and brain imaging study shows how sensory awareness and stimulus visibility can influence the dynamics of decision-making in humans.
Human decisions are based on accumulating evidence over time for different options. Here we ask a simple question: How is the accumulation of evidence affected by the level of awareness of the information? We examined the influence of awareness on decision-making using combined behavioral methods and magneto-encephalography (MEG). Participants were required to make decisions by accumulating evidence over a series of visually presented arrow stimuli whose visibility was modulated by masking. Behavioral results showed that participants could accumulate evidence under both high and low visibility. However, a top-down strategic modulation of the flow of incoming evidence was only present for stimuli with high visibility: once enough evidence had been accrued, participants strategically reduced the impact of new incoming stimuli. Decision-making speed and confidence were also strongly modulated by the strength of the evidence for highly visible but not for barely visible stimuli, even though direct priming effects were identical for both types of stimuli. Neural recordings revealed that, while initial perceptual processing was independent of visibility, there was stronger top-down amplification for stimuli with high visibility than low visibility. Furthermore, neural markers of evidence accumulation over occipito-parietal cortex showed a strategic bias only for highly visible sensory information, speeding up processing and reducing neural computations related to the decision process. Our results indicate that the level of awareness of information changes decision-making: while accumulation of evidence already exists under low visibility conditions, high visibility allows evidence to be accumulated up to a higher level, leading to important strategic top-down changes in decision-making. Our results therefore suggest a potential role of awareness in deploying flexible strategies for biasing information acquisition in line with one's expectations and goals.
Author Summary
When making a decision, we gather evidence for the different options and ultimately choose on the basis of the accumulated evidence. A fundamental question is whether and how conscious awareness of the evidence changes this decision-making process. Here, we examined the influence of sensory awareness on decision-making using behavioral studies and magneto-encephalographic recordings in human participants. In our task, participants had to indicate the prevailing direction of five arrows presented on a screen that each pointed either left or right, and in different trials these arrows were either easy to see (high visibility) or difficult to see (low visibility). Behavioral and neural recordings show that evidence accumulation changed from a linear to a non-linear integration strategy with increasing stimulus visibility. In particular, the impact of later evidence was reduced when more evidence had been accrued, but only for highly visible information. By contrast, barely perceptible arrows contributed equally to a decision because participants needed to continue to accumulate evidence in order to make an accurate decision. These results suggest that consciousness may play a role in decision-making by biasing the accumulation of new evidence.
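The shift from linear to non-linear integration described above can be illustrated with a toy accumulator in which, under high visibility, each new sample is down-weighted as the running total approaches a bound. This is purely an illustrative sketch of the idea, not the authors' model; the gain function and all parameters are invented:

```python
def accumulate(samples, high_visibility, bound=3.0):
    """Toy evidence accumulator: +1 = rightward arrow, -1 = leftward arrow.

    Under high visibility, incoming evidence is down-weighted once the
    running total approaches the bound (non-linear integration); under
    low visibility, all samples are weighted equally (linear integration).
    The gain function is purely illustrative.
    """
    total = 0.0
    for s in samples:
        if high_visibility:
            # Later evidence counts less as the total nears the bound.
            gain = max(0.0, 1.0 - abs(total) / bound)
        else:
            gain = 1.0
        total += gain * s
    return "right" if total > 0 else "left"

# Five arrows, three pointing right: both observers choose "right", but the
# high-visibility observer effectively discounts the final samples.
print(accumulate([+1, +1, -1, +1, -1], high_visibility=True))
print(accumulate([+1, +1, -1, +1, -1], high_visibility=False))
```

The point of the sketch is the qualitative behavior: with high visibility, late samples contribute little once the total is near the bound, mirroring the reduced impact of late evidence reported above.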
doi:10.1371/journal.pbio.1001203
PMCID: PMC3222633  PMID: 22131904
5.  Searching for Complex Patterns Using Disjunctive Anomaly Detection 
Objective
The disjunctive anomaly detection (DAD) algorithm [1] can efficiently search across multidimensional biosurveillance data to find multiple simultaneously occurring (in time) and overlapping (across different data dimensions) anomalous clusters. We introduce extensions of DAD to handle rich cluster interactions and diverse data distributions.
Introduction
Modern biosurveillance data contains thousands of unique time series defined across various categorical dimensions (zipcode, age group, hospital). Many algorithms are overly specific (tracking each time series independently would often miss early signs of outbreaks) or too general (detections at state level may lack specificity reflective of the actual process at hand). Disease outbreaks often impact multiple values (disjunctive sets of zipcodes, hospitals, multiple age groups) along subsets of multiple dimensions of data. It is not uncommon to see outbreaks of different diseases occurring simultaneously (e.g., food poisoning and flu), making it hard to detect and characterize the individual events.
We proposed the Disjunctive Anomaly Detection (DAD) algorithm [1] to efficiently search across millions of potential clusters defined as conjunctions over dimensions and disjunctions over values along each dimension. An example anomalous cluster detectable by DAD may identify zipcode = {z1 or z2 or z3 or z5} and age_group = {child or senior} to show unusual activity in the aggregate. Such a conjunctive-disjunctive language of cluster definitions enables finding real-world outbreaks that are often missed by other state-of-the-art algorithms like What’s Strange About Recent Events (WSARE) [3] or Large Average Submatrix (LAS) [2]. DAD is able to identify multiple interesting clusters simultaneously and to explain complex anomalies in data better than those alternatives.
Methods
We define the observed counts of patients reporting on a given day as a random variable for each unique combination of values along all dimensions. DAD iteratively identifies K subsets of these variables along with corresponding ranges of their values and time intervals that show increased activity that cannot be explained by random fluctuations (K is generally unknown and could be 0). The resulting set of clusters maximizes data likelihood while controlling for overall complexity. We have successfully derived a versatile set of scoring functions that allow Normal, Poisson, Exponential or Non-parametric assumptions about the underlying data distributions, and accommodate additive-scaled, additive-unscaled or multiplicative-scaled models for the clusters.
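To make the cluster language concrete, the sketch below scores a single candidate cluster under one of the simpler assumptions mentioned above (Poisson counts), using a standard one-sided log-likelihood ratio. It illustrates the conjunction-over-dimensions, disjunction-over-values definition; it is not the published implementation, which searches the cluster space efficiently rather than scoring clusters one by one:

```python
from math import log

def poisson_llr(observed, expected):
    """One-sided Poisson log-likelihood ratio for elevated counts."""
    if observed <= expected or expected <= 0:
        return 0.0
    return observed * log(observed / expected) - (observed - expected)

def score_cluster(counts, baseline, cluster):
    """Score a candidate cluster for one day.

    counts / baseline: dicts mapping (zipcode, age_group) to observed and
    expected counts. cluster: dict mapping each dimension to its disjunctive
    set of included values, e.g.
        {"zipcode": {"z1", "z2", "z3", "z5"}, "age_group": {"child", "senior"}}.
    A cell belongs to the cluster iff its value along every dimension falls in
    that dimension's set (conjunction over dimensions, disjunction over values).
    """
    obs = exp = 0.0
    for (zipcode, age_group), c in counts.items():
        if zipcode in cluster["zipcode"] and age_group in cluster["age_group"]:
            obs += c
            exp += baseline[(zipcode, age_group)]
    return poisson_llr(obs, exp)
```

Higher scores indicate aggregate activity in the cluster that is harder to explain by random fluctuations around the baseline.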
Results
We present results of testing DAD on two real-world datasets. One of them contains daily outpatient visit counts from 26 regions in Sri Lanka involving 9 common diseases. The other data contains semi-synthetically generated terrorist activities throughout regions of Afghanistan (Sigacts). Both span multiple years and are representative of data seen in biosurveillance applications.
Figure 1 shows DAD systematically outperforming WSARE and LAS. Each algorithm’s parameters were tuned to generate one false positive per month in baseline data. The graphs represent average days-to-detect performance of 100 sets with synthetically injected clusters using additive-scaled (AS), additive-unscaled (AU), and multiplicative-scaled (MS) models of cluster interactions.
Conclusions
We extend the applicability of the DAD algorithm to handle a wide variety of input data distributions and outbreak models. DAD efficiently scans millions of potential outbreak patterns and reports complex outbreak interactions accurately and in a timely manner, with speed that meets the requirements of practical applications.
PMCID: PMC3692787
outbreak detection; anomalous clusters; disjunctive anomaly detection; prospective surveillance
6.  The Impact of Search Engine Selection and Sorting Criteria on Vaccination Beliefs and Attitudes: Two Experiments Manipulating Google Output 
Background
During the past 2 decades, the Internet has evolved to become a necessity in our daily lives. The selection and sorting algorithms of search engines exert tremendous influence over the global spread of information and other communication processes.
Objective
This study demonstrates the influence of the selection and sorting/ranking criteria operating in search engines on users’ knowledge, beliefs, and attitudes regarding vaccination websites. In particular, it compares the effects of search engines that deliver websites emphasizing the pro side of vaccination with those focusing on the con side, and with normal Google as a control group.
Method
We conducted 2 online experiments using manipulated search engines. A pilot study was designed to verify the existence of dangerous gaps in health literacy in connection with searching for and using health information on the Internet by exploring the effect of 2 manipulated search engines that yielded either pro- or anti-vaccination sites only, with a group receiving normal Google as control. A pre-post test design was used; participants were American marketing students enrolled in a study-abroad program in Lugano, Switzerland. The second experiment manipulated the search engine by applying different ratios of con versus pro vaccination webpages displayed in the search results. Participants were recruited from Amazon’s Mechanical Turk platform, where the study was published as a human intelligence task (HIT).
Results
Both experiments showed that knowledge was highest in the group offered only pro-vaccination sites (Z=–2.088, P=.03; Kruskal-Wallis H test [H5]=11.30, P=.04). This group also acknowledged the importance/benefits (Z=–2.326, P=.02; H5=11.34, P=.04) and effectiveness (Z=–2.230, P=.03) of vaccination more, whereas groups offered anti-vaccination sites only showed increased concern about the effects (Z=–2.582, P=.01; H5=16.88, P=.005) and harmful health outcomes (Z=–2.200, P=.02) of vaccination. Normal Google users perceived information quality to be positive despite a small effect on knowledge and a negative effect on their beliefs and attitudes toward vaccination and willingness to recommend the information (χ25=14.1, P=.01). More exposure to anti-vaccination websites lowered participants’ knowledge (J=4783.5, z=−2.142, P=.03), increased their fear of side effects (J=6496, z=2.724, P=.006), and lowered their acknowledgment of benefits (J=4805, z=–2.067, P=.03).
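For orientation, results of this kind are typically produced with standard nonparametric tests; a minimal SciPy sketch with hypothetical per-group knowledge scores follows (the Jonckheere-Terpstra J statistic reported above has no direct SciPy equivalent):

```python
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical post-test knowledge scores per experimental group.
pro_only = [8, 9, 7, 9, 8, 10]
con_only = [5, 6, 4, 7, 5, 6]
normal_google = [7, 6, 8, 7, 6, 7]

# Two-group comparison (reported above via Z statistics).
u, p = mannwhitneyu(pro_only, con_only, alternative="two-sided")
print(f"Mann-Whitney U={u}, P={p:.3f}")

# Omnibus comparison across groups (reported above as Kruskal-Wallis H).
h, p = kruskal(pro_only, con_only, normal_google)
print(f"Kruskal-Wallis H={h:.2f}, P={p:.3f}")
```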
Conclusion
The selection and sorting/ranking criteria of search engines play a vital role in online health information seeking. Search engines delivering websites containing credible and evidence-based medical information positively impact Internet users seeking health information, whereas sites retrieved by biased search engines produce some opinion change in users. These effects appear to be independent of users’ judgments of site credibility. Users are affected beneficially or detrimentally but are unaware of it, suggesting that they do not consciously perceive the indicators that steer them toward credible sources or away from dangerous ones. In this sense, the online health information seeker is flying blind.
doi:10.2196/jmir.2642
PMCID: PMC4004139  PMID: 24694866
consumer health information; search engine; searching behavior; Internet; information storage and retrieval; online systems; public health informatics; vaccination; health communication
7.  The JCSG MR pipeline: optimized alignments, multiple models and parallel searches 
The practical limits of molecular replacement can be extended by using several specifically designed protein models based on fold-recognition methods and by exhaustive searches performed in a parallelized pipeline. Updated results from the JCSG MR pipeline, which to date has solved 33 molecular-replacement structures with less than 35% sequence identity to the closest homologue of known structure, are presented.
The success rate of molecular replacement (MR) falls considerably when search models share less than 35% sequence identity with their templates, but can be improved significantly by using fold-recognition methods combined with exhaustive MR searches. Models based on alignments calculated with fold-recognition algorithms are more accurate than models based on conventional alignment methods such as FASTA or BLAST, which are still widely used for MR. In addition, by designing MR pipelines that integrate phasing and automated refinement and allow parallel processing of such calculations, one can effectively increase the success rate of MR. Here, updated results from the JCSG MR pipeline are presented, which to date has solved 33 MR structures with less than 35% sequence identity to the closest homologue of known structure. By using difficult MR problems as examples, it is demonstrated that successful MR phasing is possible even in cases where the similarity between the model and the template can only be detected with fold-recognition algorithms. In the first step, several search models are built based on all homologues found in the PDB by fold-recognition algorithms. The models resulting from this process are used in parallel MR searches with different combinations of input parameters of the MR phasing algorithm. The putative solutions are subjected to rigid-body and restrained crystallographic refinement and ranked based on the final values of free R factor, figure of merit and deviations from ideal geometry. Finally, crystal packing and electron-density maps are checked to identify the correct solution. If this procedure does not yield a solution with interpretable electron-density maps, then even more alternative models are prepared. The structurally variable regions of a protein family are identified based on alignments of sequences and known structures from that family and appropriate trimmings of the models are proposed. All combinations of these trimmings are applied to the search models and the resulting set of models is used in the MR pipeline. It is estimated that with the improvements in model building and exhaustive parallel searches with existing phasing algorithms, MR can be successful for more than 50% of recognizable homologues of known structures below the threshold of 35% sequence identity. This implies that about one-third of the proteins in a typical bacterial proteome are potential MR targets.
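Structurally, the pipeline described above amounts to an embarrassingly parallel sweep over model and parameter combinations followed by ranking on refinement statistics. A minimal orchestration sketch follows; the helper functions and parameter names are hypothetical placeholders, not the actual JCSG tools:

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product

# Hypothetical stand-ins for the actual crystallographic programs; a real
# pipeline would shell out to the MR phasing and refinement software here.
def run_mr_phasing(model, **params):
    raise NotImplementedError

def refine_and_score(solution):
    """Rigid-body + restrained refinement; returns the final free R factor."""
    raise NotImplementedError

def evaluate(combo):
    model, params = combo
    solution = run_mr_phasing(model, **params)
    return None if solution is None else (refine_and_score(solution), model, params)

def mr_sweep(models, param_grid):
    """Try every model x parameter combination in parallel and rank putative
    solutions by free R factor (lowest first = most promising)."""
    combos = list(product(models, param_grid))
    with ProcessPoolExecutor() as pool:
        hits = [h for h in pool.map(evaluate, combos) if h is not None]
    return sorted(hits, key=lambda h: h[0])
```

Top-ranked candidates would then be inspected for crystal packing and interpretable electron density, as the abstract describes.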
doi:10.1107/S0907444907050111
PMCID: PMC2394805  PMID: 18094477
molecular replacement; sequence-alignment accuracy; homology modeling; parameter-space screening; structural genomics
8.  Cueing effects on semantic and perceptual categorization: ERPs reveal differential effects of validity as a function of processing stage 
Neuropsychologia  2007;45(9):2038-2050.
Valid cueing has been shown to accelerate target identification and improve decision accuracy; however, the precise nature and extent of the influence that biasing exerts on the successive stages of target processing remain unclear. The present event-related potential (ERP) study used a “hybrid” task that combined features of standard cued-attention and task-switching paradigms in order to explore the effects of expectation on both identification and categorization of centrally-presented stimuli. Subjects made semantic judgments (living/nonliving) on word targets (“bunny”), and perceptual judgments (right/left) on arrow targets (“<<<<<”). Target expectancy was manipulated using cues that were valid (60% of trials), invalid (10%), or neutral (30%). Invalidly-cued targets required task-set switching before categorization could commence, and resulted in RT costs relative to validly- or neutrally-cued targets. Additionally, benefits from valid cueing were observed for word targets. Invalid cueing of both arrow and word targets modulated early posterior visual potentials (P1/N1) and elicited a subsequent anterior P3a (270 ms). The temporal relationship of these effects suggests that the P3a indexed domain-general task-set switching processes recruited in response to the detection of unexpected perceptual information. Subsequent to the P3a and immediately preceding the behavioral response, validly-cued targets elicited enhanced stimulus-specific waveforms (arrows: parietal positivity [P290], words: inferior temporal negativity [late ITN: 400–600 ms]). The degree of neural enhancement relative to the invalid and neutral conditions mirrored the magnitude of corresponding RT benefits, suggesting that these waveforms indexed categorization and/or decision processes. Together, these results suggest that valid cueing increases the neural efficiency of initial stimulus identification, facilitating transmission of information to subsequent categorization stages, where increased neural activity leads to behavioral benefits.
doi:10.1016/j.neuropsychologia.2007.02.013
PMCID: PMC2099310  PMID: 17382975
categorization; attention; P3a; N400; N1; P1
9.  Social Stimuli Interfere with Cognitive Control in Autism 
NeuroImage  2007;35(3):1219-1230.
Autism spectrum disorders are characterized by cognitive control deficits as well as impairments in social interactions. However, the brain mechanisms mediating the interactive effects of these deficits have not been addressed. We employed event-related functional magnetic resonance imaging (fMRI) to examine the effects of processing directional information from faces on activity within brain regions mediating cognitive control. High-functioning individuals with autism and age-, gender-, and IQ-matched neurotypical individuals attended to the direction of a centrally-presented arrow or gaze stimulus with similar flanker stimuli oriented in the same (“congruent”) or opposite (“incongruent”) direction. The incongruent arrow condition was examined to assess functioning of brain regions mediating cognitive control in a context without social-cognitive demands, whereas the incongruent gaze condition assessed functioning of the same brain regions in a social-cognitive context. Consistent with prior studies, the incongruent arrow condition recruited activity in bilateral midfrontal gyrus, right inferior frontal gyrus, bilateral intraparietal sulcus, and the anterior cingulate relative to the congruent arrow condition in neurotypical participants. Notably, there were no diagnostic group differences in patterns of regional fMRI activation in response to the arrow condition. However, while viewing the incongruent gaze stimuli, although neurotypical participants recruited the same brain regions, participants with autism showed marked hypoactivation in these areas. These findings suggest that processing social-cognitive stimuli interferes with functioning of brain regions recruited during cognitive control tasks in autism. Implications for research into cognitive control deficits in autism are discussed.
doi:10.1016/j.neuroimage.2006.12.038
PMCID: PMC1885863  PMID: 17321151
Autism; Functional Magnetic Resonance Imaging (fMRI); Cognitive Control; Executive Function; Attention; Social Cognition; Gaze
10.  Trends in Compulsory Licensing of Pharmaceuticals Since the Doha Declaration: A Database Analysis 
PLoS Medicine  2012;9(1):e1001154.
Reed Beall and Randall Kuhn describe their findings from an analysis of use of compulsory licenses for pharmaceutical products by World Trade Organization members since 1995.
Background
It is now a decade since the World Trade Organization (WTO) adopted the “Declaration on the TRIPS Agreement and Public Health” at its 4th Ministerial Conference in Doha. Many anticipated that these actions would lead nations to claim compulsory licenses (CLs) for pharmaceutical products with greater regularity. A CL is a license, issued by a state without the permission of the patent title holder, for the use of a patented innovation. Skeptics doubted that many CLs would occur, given political pressure against CL activity and continued health system weakness in poor countries. The subsequent decade has seen little systematic assessment of the Doha Declaration's impact.
Methods and Findings
We assembled a database of all episodes in which a CL was publicly entertained or announced by a WTO member state since 1995. Broad searches of CL activity were conducted using media, academic, and legal databases, yielding 34 potential CL episodes in 26 countries. Country- and product-specific searches were used to verify government participation, resulting in a final database of 24 verified CLs in 17 nations. We coded CL episodes in terms of outcome, national income, and disease group over three distinct periods of CL activity. Most CL episodes occurred between 2003 and 2005, involved drugs for HIV/AIDS, and occurred in upper-middle-income countries (UMICs). Aside from HIV/AIDS, few CL episodes involved communicable disease, and none occurred in least-developed or low-income countries.
Conclusions
Given skepticism about the Doha Declaration's likely impact, we note the relatively high occurrence of CLs, yet CL activity has diminished markedly since 2006. While UMICs have high CL activity and strong incentives to use CLs compared to other countries, we note considerable countervailing pressures against CL use even in UMICs. We conclude that there is a low probability of continued CL activity. We highlight the need for further systematic evaluation of global health governance actions.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
The development of a new drug is a time-consuming and expensive process. To stimulate investment in drug development, the creators of new drugs (including the pharmaceutical companies that undertake the development and testing that is needed before any drug can be used in patients) can apply for “intellectual property rights” (a patent). Intellectual property rights protect the investments made by companies during drug development by preventing other companies from making the new drug for a fixed period of time and by providing a means by which creators of new drugs can negotiate payment from other companies for the use of their creation. Until recently, the extent and enforcement of intellectual property rights varied widely around the world. Then, in 1995, the World Trade Organization (WTO) was established. By providing a set of ground rules for trade among nations, the WTO aims to ensure that trade flows as smoothly, predictably, and freely as possible around the world. One of the founding documents of the WTO is the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS Agreement), which attempts to bring the protection of intellectual property rights (including patents) under common international rules.
Why Was This Study Done?
Unfortunately, patent protection for drugs (pharmaceuticals) means that many medicines are too expensive for use in developing countries. While maintaining incentives for drug development, the TRIPS Agreement allows governments to license the use of patented inventions to someone else without the consent of the patent owner. Such “compulsory licensing” normally occurs only after negotiations for a voluntary license have failed, and the patent owner still receives an appropriate payment. It soon became clear that some governments were unsure of their right to use compulsory licensing and other flexibilities in the TRIPS Agreement, a situation likely to affect public health in poor countries by hindering universal access to medicines. Consequently, the WTO issued the “Declaration on the TRIPS Agreement and Public Health” at its 4th Ministerial Conference in Doha in November 2001. Reaction to the Doha Declaration, which reaffirms that the “TRIPS Agreement does not and should not prevent members from taking measures to protect public health,” has been mixed. Some experts predicted that it would increase compulsory licensing of pharmaceuticals, but others suggested that political pressure against compulsory licensing and health system weaknesses in poor countries would limit claims for compulsory licenses. In this database analysis, the researchers systematically assess the impact of the Doha Declaration on the compulsory licensing of pharmaceuticals.
What Did the Researchers Do and Find?
By systematically searching media archives for reports of WTO member states considering or announcing compulsory licensing of pharmaceuticals, the researchers identified 24 verified compulsory licensing episodes in 17 nations that occurred between January 1995 and June 2011. Half of these episodes ended with an announcement of a compulsory license, and the majority ended in a price reduction for a specific pharmaceutical product for the potential issuing nation through a compulsory license, a voluntary license, or a negotiated discount. Sixteen of the compulsory licensing episodes involved drugs for HIV/AIDS, four involved drugs for other communicable diseases, and four involved drugs for non-communicable diseases such as cancer. More than half the compulsory licensing episodes occurred in upper-middle-income countries (including Brazil and Thailand). Finally, most compulsory licensing episodes occurred between 2003 and 2005. There was a smaller peak of activity in the months leading up to the Doha conference, but after 2006 activity declined substantially.
What Do These Findings Mean?
Given these findings, the researchers suggest that the Doha Declaration is unlikely to have an important long-term impact on the use of compulsory licensing or on access to pharmaceuticals for communicable diseases other than HIV/AIDS in developing and low-income countries. Most notably, the researchers found no evidence of a spike in compulsory licensing episodes immediately after the Doha Declaration, and they note that the lagged spike that occurred between 2003 and 2005 could have resulted in large part from the global antiretroviral advocacy campaign. Moreover, compulsory licensing activity has diminished greatly since 2006. Thus, the researchers conclude, health advocates who pushed for the Doha Declaration reforms have had little success in engaging trade as a positive, proactive force for addressing health gaps.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001154.
The World Trade Organization provides information on intellectual property rights, on the TRIPS Agreement, on TRIPS and pharmaceutical patents, and on compulsory licensing of pharmaceuticals and TRIPS (in English, French, and Spanish); the Doha Declaration on the TRIPS Agreement and Public Health is also available
The World Health Organization provides information on the Doha Declaration on the TRIPS Agreement and Public Health and an analysis of the implications of the Doha Declaration
Wikipedia has pages on intellectual property rights, on the TRIPS Agreement, and on the Doha Declaration (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
doi:10.1371/journal.pmed.1001154
PMCID: PMC3254665  PMID: 22253577
11.  Long-term response-stimulus associations can influence distractor-response bindings 
Strong associations between target stimuli and responses usually facilitate fast and effortless reactions. The present study investigated whether long-term associations between distractor stimuli and responses also modulate behavior. In particular, distractor stimuli can affect behavior due to distractor-based stimulus-response retrieval, a phenomenon called distractor-response binding: an ignored stimulus becomes temporarily associated with a response and retrieves it at stimulus repetition. In a flanker task, participants ignored left- and right-pointing arrows and responded to a target letter either with left and right (strongly associated) responses or with upper and lower (weakly associated) responses. Binding effects were modulated by the strength of the long-term association between distractors and responses. If the association was strong (arrows pointing left and right with left and right responses), binding effects emerged, but only for compatible responses. If the long-term association between distractors and responses was weak (arrows pointing left and right with upper and lower responses), binding was weaker and not modulated by compatibility. In contrast, sequential compatibility effects were not modulated by the association strength between distractor and response. The results indicate that existing long-term associations between stimuli and responses may modulate the impact of an ignored stimulus on action control.
doi:10.5709/acp-0158-1
PMCID: PMC4116758  PMID: 25157302
long-term associations; short-term stimulus-response bindings; distractor-response binding; action control; learning
12.  An automated proteomic data analysis workflow for mass spectrometry 
BMC Bioinformatics  2009;10(Suppl 11):S17.
Background
Mass spectrometry-based protein identification methods are fundamental to proteomics. Biological experiments are usually performed in replicates, and proteomic analyses generate huge datasets which need to be integrated and quantitatively analyzed. The Sequest™ search algorithm is commonly used for identifying peptides and proteins from two-dimensional liquid chromatography electrospray ionization tandem mass spectrometry (2-D LC ESI MS2) data. A number of proteomic pipelines that facilitate high-throughput 'post data acquisition analysis' are described in the literature. However, these pipelines need to be updated to accommodate the rapidly evolving data analysis methods. Here, we describe a proteomic data analysis pipeline that specifically addresses two main issues pertinent to protein identification and differential expression analysis: 1) estimation of the probability of peptide and protein identifications and 2) non-parametric statistics for protein differential expression analysis. Our proteomic analysis workflow analyzes replicate datasets from a single experimental paradigm to generate a list of identified proteins with their probabilities and significant changes in protein expression using parametric and non-parametric statistics.
Results
The input for our workflow is Bioworks™ 3.2 Sequest (or a later version, including cluster) output in XML format. We use a decoy database approach to assign probability to peptide identifications. The user has the option to select "quality thresholds" on peptide identifications based on the P value. We also estimate the probability of protein identification. Proteins identified with peptides at a user-specified threshold value from biological experiments are grouped as either control or treatment for further analysis in ProtQuant. ProtQuant utilizes a parametric method (ANOVA) for calculating differences in protein expression based on the quantitative measure ΣXcorr. Alternatively, ProtQuant output can be further processed using non-parametric Monte-Carlo resampling statistics to calculate P values for differential expression. Correction for multiple testing of ANOVA and resampling P values is done using Benjamini and Hochberg's method. The results of these statistical analyses are then combined into a single output file containing a comprehensive protein list with probabilities, differential expression analysis, associated P values, and resampling statistics.
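Of the statistical steps above, the Benjamini-Hochberg correction is compact enough to show inline. The following is a sketch of the standard step-up procedure, not the workflow's own code:

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg step-up FDR adjustment.

    Sort P values ascending, scale the i-th smallest by m/i, enforce
    monotonicity from the largest value downward, and return the adjusted
    P values in the original order.
    """
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)
    # Running minimum from the right keeps adjusted values monotone.
    scaled = np.minimum.accumulate(scaled[::-1])[::-1]
    adjusted = np.empty(m)
    adjusted[order] = np.clip(scaled, 0, 1)
    return adjusted

# Example: adjusted values are [0.005, 0.02, 0.05125, 0.05125, 0.2].
print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20]))
```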
Conclusion
For biologists carrying out proteomics by mass spectrometry, our workflow facilitates automated, easy-to-use analyses of Bioworks (3.2 or later versions) data. All the methods used in the workflow are peer-reviewed, and as such the results of our workflow are compliant with proteomic data submission guidelines for public proteomic data repositories, including PRIDE. Our workflow is a necessary intermediate step that is required to link proteomics data to biological knowledge for generating testable hypotheses.
doi:10.1186/1471-2105-10-S11-S17
PMCID: PMC3226188  PMID: 19811682
13.  Prioritizing CD4 Count Monitoring in Response to ART in Resource-Constrained Settings: A Retrospective Application of Prediction-Based Classification 
PLoS Medicine  2012;9(4):e1001207.
Luis Montaner and colleagues retrospectively apply a potential capacity-saving CD4 count prediction tool to a cohort of HIV patients on antiretroviral therapy.
Background
Global programs of anti-HIV treatment depend on sustained laboratory capacity to assess treatment initiation thresholds and treatment response over time. Currently, there is no valid alternative to CD4 count testing for monitoring immunologic responses to treatment, but laboratory cost and capacity limit access to CD4 testing in resource-constrained settings. Thus, methods to prioritize patients for CD4 count testing could improve treatment monitoring by optimizing resource allocation.
Methods and Findings
Using a prospective cohort of HIV-infected patients (n = 1,956) monitored upon antiretroviral therapy initiation in seven clinical sites with distinct geographical and socio-economic settings, we retrospectively apply a novel prediction-based classification (PBC) modeling method. The model uses repeatedly measured biomarkers (white blood cell count and lymphocyte percent) to predict CD4+ T cell outcome through first-stage modeling and subsequent classification based on clinically relevant thresholds (CD4+ T cell count of 200 or 350 cells/µl). The algorithm correctly classified 90% (cross-validation estimate = 91.5%, standard deviation [SD] = 4.5%) of CD4 count measurements <200 cells/µl in the first year of follow-up; if laboratory testing is applied only to patients predicted to be below the 200-cells/µl threshold, we estimate a potential savings of 54.3% (SD = 4.2%) in CD4 testing capacity. A capacity savings of 34% (SD = 3.9%) is predicted using a CD4 threshold of 350 cells/µl. Similar results were obtained over the 3 y of follow-up available (n = 619). Limitations include a need for future economic healthcare outcome analysis, a need for assessment of extensibility beyond the 3-y observation time, and the need to assign a false positive threshold.
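The two-stage logic, predicting the CD4 trajectory from cheaper repeated biomarkers and then classifying against a clinical threshold, can be sketched as follows. This is a deliberately simplified stand-in: it uses ordinary linear regression where the paper fits a mixed-effects model, and all data and variable names are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

THRESHOLD = 200  # cells/ul; clinically relevant classification cut-off

def fit_first_stage(X_train, cd4_train):
    """Stage 1: predict CD4 count from repeated cheap biomarkers
    (e.g., white blood cell count, lymphocyte percent, months on ART).
    The paper uses a mixed-effects model; plain regression is a stand-in."""
    return LinearRegression().fit(X_train, cd4_train)

def triage(model, X_new):
    """Stage 2: flag only patients predicted below the threshold for
    laboratory CD4 testing; the rest are deferred, saving capacity."""
    return model.predict(X_new) < THRESHOLD  # True = prioritize lab test

# Hypothetical toy data: [WBC (10^3/ul), lymphocyte %, months on ART]
X = np.array([[4.1, 18, 1], [5.6, 30, 6], [3.2, 12, 2], [6.0, 35, 12]])
cd4 = np.array([150, 420, 110, 510])
model = fit_first_stage(X, cd4)
print(triage(model, np.array([[3.5, 15, 3], [5.9, 33, 9]])))
```

The capacity savings reported above come from the deferred group: laboratory tests are spent only where the model predicts a dangerously low count.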
Conclusions
Our results support the use of PBC modeling as a triage point at the laboratory, lessening the need for laboratory-based CD4+ T cell count testing; implementation of this tool could help optimize the use of laboratory resources, directing CD4 testing towards higher-risk patients. However, further prospective studies and economic analyses are needed to demonstrate that the PBC model can be effectively applied in clinical settings.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
AIDS has killed nearly 30 million people since 1981, and about 34 million people (most of them living in low- and middle-income countries) are now infected with HIV, the virus that causes AIDS. HIV destroys immune system cells (including CD4 cells, a type of lymphocyte and one of the body's white blood cell types), leaving infected individuals susceptible to other infections. Early in the AIDS epidemic, most HIV-infected people died within ten years of infection. Then, in 1996, antiretroviral therapy (ART) became available, and for people living in affluent countries, HIV/AIDS became a chronic condition. However, ART was expensive, and for people living in developing countries, HIV/AIDS remained a fatal illness. In 2003, HIV was declared a global health emergency, and in 2006, the international community set itself the target of achieving universal access to ART by 2010. By the end of 2010, only 6.6 million of the estimated 15 million people in need of ART in developing countries were receiving ART.
Why Was This Study Done?
One factor that has impeded progress towards universal ART coverage has been the limited availability of trained personnel and laboratory facilities in many developing countries. These resources are needed to determine when individuals should start ART—the World Health Organization currently recommends that people start ART when their CD4 count drops below 350 cells/µl—and to monitor treatment responses over time so that viral resistance to ART is quickly detected. Although a total lymphocyte count can be used as a surrogate measure to decide when to start treatment, repeated CD4 cell counts are the only way to monitor immunologic responses to treatment, a level of monitoring that is rarely sustainable in resource-constrained settings. A method that optimizes resource allocation by prioritizing who gets tested might be one way to improve treatment monitoring. In this study, the researchers applied a new tool for prioritizing laboratory-based CD4 cell count testing in resource-constrained settings to patient data that had been previously collected.
What Did the Researchers Do and Find?
The researchers fitted a mixed-effects statistical model to repeated CD4 count measurements from HIV-infected individuals from seven sites around the world (including some resource-limited sites). They then used model-derived estimates to apply a mathematical tool for predicting—from a CD4 count taken at the start of treatment, and white blood cell counts and lymphocyte percentage measurements taken later—whether CD4 counts would be above 200 cells/µl (the original threshold recommended for ART initiation) and 350 cells/µl (the current recommended threshold) for up to three years after ART initiation. The tool correctly classified 91.5% of the CD4 cell counts that were below 200 cells/µl in the first year of ART. With this threshold, the potential savings in CD4 testing capacity was 54.3%. With a CD4 count threshold of 350 cells/µl, the potential savings in testing capacity was 34%. The results over a three-year follow-up were similar. When applied to six representative HIV-positive individuals, the tool correctly predicted all the CD4 counts above 200 cells/µl, although some individuals who had a predicted CD4 count of less than 200 cells/µl actually had a CD4 count above this threshold. Thus, none of these individuals would have been exposed to an undetected dangerous CD4 count, but the application of the tool would have saved 57% of the CD4 laboratory tests done during the first year of ART.
What Do These Findings Mean?
These findings support the use of this new tool—the prediction-based classification (PBC) algorithm—for predicting a drop in CD4 count below a clinically meaningful threshold in HIV-infected individuals receiving ART. Further studies are now needed to demonstrate the feasibility, clinical effectiveness, and cost-effectiveness of this approach, to find out whether the tool can be used over extended periods of time, and to investigate whether the accuracy of its predictions can be improved by, for example, adding in periodic CD4 testing. Provided these studies confirm its early promise, the researchers suggest that the PBC algorithm could be used as a “triage” tool to direct available laboratory testing capacity to high-priority individuals (those likely to have a dangerously low CD4 count). By optimizing the use of limited laboratory resources in this and other ways, the PBC algorithm could therefore help to maintain and expand ART programs in low- and middle-income countries.
Additional Information
Please access these web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001207.
Information is available from the US National Institute of Allergy and Infectious Diseases on HIV infection and AIDS
NAM/aidsmap provides basic information about HIV/AIDS and summaries of recent research findings on HIV care and treatment
Information is available from Avert, an international AIDS charity, on many aspects of HIV/AIDS, including information on HIV/AIDS treatment and care and on universal access to AIDS treatment (in English and Spanish)
The World Health Organization provides information about universal access to AIDS treatment (in several languages)
More information about universal access to HIV treatment, prevention, care, and support is available from UNAIDS
Patient stories about living with HIV/AIDS are available through Avert and through the charity website Healthtalkonline
doi:10.1371/journal.pmed.1001207
PMCID: PMC3328436  PMID: 22529752
14.  A Novel Validation Algorithm Allows for Automated Cell Tracking and the Extraction of Biologically Meaningful Parameters 
PLoS ONE  2011;6(11):e27315.
Automated microscopy is currently the only method for observing complex multi-cellular processes, such as cell migration, cell cycle, and cell differentiation, non-invasively and without labels. Extracting biological information from a time series of micrographs requires each cell to be recognized and followed through sequential microscopic snapshots. Although recent attempts to automate this process have yielded ever-improving cell detection rates, manual identification of identical cells is still the most reliable technique; however, its tedious and subjective nature has prevented tracking from becoming a standardized tool for the investigation of cell cultures. Here, we present a novel method that accomplishes automated cell tracking with a reliability comparable to manual tracking. Previously, automated cell tracking could not rival the reliability of manual tracking because, in contrast to the human way of solving this task, none of the algorithms had an independent quality-control mechanism; they lacked validation. Thus, instead of trying to improve cell detection or tracking rates, we proceeded from the idea of automatically inspecting the tracking results and accepting only those of high trustworthiness, while rejecting all other results. This validation algorithm works independently of the quality of cell detection and tracking, through a systematic search for tracking errors, and is based only on very general assumptions about the spatiotemporal contiguity of cell paths. While traditional tracking often aims to yield genealogic information about single cells, the natural outcome of a validated cell tracking algorithm turns out to be a set of complete, but often unconnected, cell paths, i.e., records of cells from mitosis to mitosis. This is a consequence of the fact that the validation algorithm takes complete paths as the unit of rejection/acceptance. The resulting set of complete paths can be used to automatically extract important biological parameters with high reliability and statistical significance, including the distributions of life/cycle times and cell areas, the symmetry of cell divisions, and motion analyses. The new algorithm thus allows for the quantification and parameterization of cell cultures with unprecedented accuracy. To evaluate our validation algorithm, two large reference data sets were created manually. These data sets comprise more than 320,000 unstained adult pancreatic stem cells from rat, including 2,592 mitotic events. The reference data sets specify every cell position and shape, and assign each cell to the correct branch of its genealogic tree. We provide these reference data sets for free use by others as a benchmark for the future improvement of automated tracking methods.
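The path-level acceptance idea can be illustrated with a minimal sketch: a whole track is kept only if every consecutive pair of detections is spatiotemporally contiguous, mirroring the abstract's point that complete paths are the unit of rejection/acceptance. The frame-gap and displacement thresholds below are invented for illustration, not the authors' published criteria.

import math

# Sketch of path-level validation by spatiotemporal contiguity: a whole
# tracked path is accepted only if every step is plausible; otherwise the
# entire path is rejected. The 25 um step limit is an assumption.
def is_contiguous(path, max_step_um=25.0):
    """path: list of (frame_index, x, y) detections for one cell."""
    for (f0, x0, y0), (f1, x1, y1) in zip(path, path[1:]):
        if f1 != f0 + 1:                       # a missed frame breaks contiguity
            return False
        if math.hypot(x1 - x0, y1 - y0) > max_step_um:
            return False                       # implausibly large jump
    return True

def validate(paths):
    """Accept only trustworthy paths; reject all others wholesale."""
    return [p for p in paths if is_contiguous(p)]

tracks = [
    [(0, 10.0, 10.0), (1, 14.0, 12.0), (2, 18.0, 15.0)],   # smooth path: kept
    [(0, 50.0, 50.0), (1, 120.0, 90.0)],                   # large jump: rejected
]
print(len(validate(tracks)), "of", len(tracks), "paths accepted")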
doi:10.1371/journal.pone.0027315
PMCID: PMC3210784  PMID: 22087288
15.  Inhibition of subliminally primed responses is mediated by the caudate and thalamus: evidence from functional MRI and Huntington’s disease 
Brain : a journal of neurology  2003;126(Pt 3):713-723.
Summary
Masked prime tasks have shown that sensory information that has not been consciously perceived can nevertheless trigger the preactivation of a motor response. Automatic inhibitory control processes prevent such response tendencies from interfering with behaviour. The present study investigated the possibility that these inhibitory control processes are mediated by a cortico-striatal-pallidal-thalamic pathway by using a masked prime task with Huntington’s disease patients (Experiment 1) and with healthy volunteers in a functional MRI (fMRI) study (Experiment 2). In the masked prime task, clearly visible left- or right-pointing target arrows are preceded by briefly presented and subsequently masked prime arrows. Participants respond quickly with a left or right key-press to each target. Trials are either compatible (prime and target pointing in the same direction) or incompatible (prime and target pointing in different directions). Prior behavioural and electrophysiological results show that automatic inhibition of the initially primed response tendency is reflected in a ‘negative compatibility effect’ (faster reaction times for incompatible trials than for compatible trials), which consists of three distinct processes (prime activation, response inhibition and response conflict) occurring within 300 ms. Experiment 1 tested the hypothesis that lesions of the striatum would interrupt automatic inhibitory control by studying early-stage Huntington’s disease patients. Findings supported the hypothesis: there was a bimodal distribution for patients, with one-third (choreic) showing disinhibition, manifested as an absent negative compatibility effect, and two-thirds (non-choreic) showing excessive inhibition, manifested as a significantly greater negative compatibility effect than that in controls. Experiment 2 used fMRI and a region of interest (ROI) template-based method to further test the hypothesis that structures of the striatal-pallidal-thalamic pathway mediate one or more of the processes of automatic inhibitory control. Neither prime activation nor response conflict significantly engaged any ROIs, but the response inhibition process led to significant modulation of both the caudate and thalamus. Taken together, these experiments indicate a causal role for the caudate nucleus and thalamus in automatic inhibitory motor control, and the results are consistent with performance of the task requiring both the direct and indirect striatal-pallidal-thalamic pathways. The finding that Huntington’s disease patients with greater chorea were disinhibited is consistent with the theory that chorea arises from selective degeneration of striatal projections to the lateral globus pallidus, while the exaggerated inhibitory effect for patients with little or no chorea may be due to additional degeneration of projections to the medial globus pallidus.
PMCID: PMC3838934  PMID: 12566291
priming; striatum; motor control; subliminal; compatibility
16.  Prevalence, Distribution, and Impact of Mild Cognitive Impairment in Latin America, China, and India: A 10/66 Population-Based Study 
PLoS Medicine  2012;9(2):e1001170.
A set of cross-sectional surveys carried out in Cuba, Dominican Republic, Peru, Mexico, Venezuela, Puerto Rico, China, and India reveal the prevalence and between-country variation in mild cognitive impairment at a population level.
Background
Rapid demographic ageing is a growing public health issue in many low- and middle-income countries (LAMICs). Mild cognitive impairment (MCI) is a construct frequently used to define groups of people who may be at risk of developing dementia, crucial for targeting preventative interventions. However, little is known about the prevalence or impact of MCI in LAMIC settings.
Methods and Findings
Data were analysed from cross-sectional surveys established by the 10/66 Dementia Research Group and carried out in Cuba, Dominican Republic, Peru, Mexico, Venezuela, Puerto Rico, China, and India on 15,376 individuals aged 65+ without dementia. Standardised assessments of mental health, physical health, and cognitive function were carried out, including informant interviews. An algorithm was developed to define Mayo Clinic amnestic MCI (aMCI). Disability (12-item World Health Organization disability assessment schedule [WHODAS]) and informant-reported neuropsychiatric symptoms (neuropsychiatric inventory [NPI-Q]) were measured. After adjustment, aMCI was associated with disability, anxiety, apathy, and irritability (but not depression); between-country heterogeneity in these associations was only significant for disability. The crude prevalence of aMCI ranged from 0.8% in China to 4.3% in India. Country differences changed little (range 0.6%–4.6%) after standardization for age, gender, and education level. In pooled estimates, aMCI was modestly associated with male gender and fewer assets but was not associated with age or education. There was no significant between-country variation in these demographic associations.
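The standardization step mentioned above (recomputing each country's prevalence on a common reference population so that differences in age, gender, or education mix do not drive the comparison) can be sketched as a direct standardization. All strata, weights, and prevalences below are fabricated for illustration and are not the study's data.

# Direct standardization sketch: country prevalences are reweighted to a
# shared reference population. Numbers are made up for illustration only.
reference_weights = {"65-74": 0.6, "75+": 0.4}        # standard population shares

country_strata = {
    "Country A": {"65-74": 0.010, "75+": 0.050},      # stratum-specific prevalence
    "Country B": {"65-74": 0.015, "75+": 0.040},
}

for country, prev in country_strata.items():
    standardized = sum(prev[s] * w for s, w in reference_weights.items())
    print(country, f"standardized prevalence: {100 * standardized:.2f}%")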
Conclusions
An algorithm-derived diagnosis of aMCI showed few sociodemographic associations but was consistently associated with higher disability and neuropsychiatric symptoms in addition to showing substantial variation in prevalence across LAMIC populations. Longitudinal data are needed to confirm findings—in particular, to investigate the predictive validity of aMCI in these settings and risk/protective factors for progression to dementia; however, the large number affected has important implications in these rapidly ageing settings.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Currently, more than 35 million people worldwide have dementia, a group of brain disorders characterized by an irreversible decline in memory, problem solving, communication, and other “cognitive” functions. Dementia, the commonest form of which is Alzheimer's disease, mainly affects older people and, because more people than ever are living to a ripe old age, experts estimate that, by 2050, more than 115 million people will have dementia. At present, there is no cure for dementia although drugs can be used to manage some of the symptoms. Risk factors for dementia include physical inactivity, infrequent participation in mentally or socially stimulating activities, and common vascular risk factors such as high blood pressure, diabetes, and smoking. In addition, some studies have reported that mild cognitive impairment (MCI) is associated with an increased risk of dementia. MCI can be seen as an intermediate state between normal cognitive aging (becoming increasingly forgetful) and dementia although many people with MCI never develop dementia, and some types of MCI can be static or self-limiting. Individuals with MCI have cognitive problems that are more severe than those normally seen in people of a similar age but they have no other symptoms of dementia and are able to look after themselves. The best studied form of MCI—amnestic MCI (aMCI)—is characterized by memory problems such as misplacing things and forgetting appointments.
Why Was This Study Done?
Much of the expected increase in dementia will occur in low and middle income countries (LAMICs) because these countries have rapidly aging populations. Given that aMCI is frequently used to define groups of people who may be at risk of developing dementia, it would be useful to know what proportion of community-dwelling older adults in LAMICs have aMCI (the prevalence of aMCI). Such information might help governments plan their future health care and social support needs. In this cross-sectional, population-based study, the researchers estimate the prevalence of aMCI in eight LAMICs using data collected by the 10/66 Dementia Research Group. They also investigate the association of aMCI with sociodemographic factors (for example, age, gender, and education), disability, and neuropsychiatric symptoms such as anxiety, apathy, irritability, and depression. A cross-sectional study collects data on a population at a single time point; the 10/66 Dementia Research Group is building an evidence base to inform the development and implementation of policies for improving the health and social welfare of older people in LAMICs, particularly people with dementia.
What Did the Researchers Do and Find?
In cross-sectional surveys carried out in six Latin American LAMICs, China, and India, more than 15,000 elderly individuals without dementia completed standardized assessments of their mental and physical health and their cognitive function. Interviews with relatives and carers provided further details about each participant's cognitive decline and neuropsychiatric symptoms. The researchers developed an algorithm (set of formulae) that used the data collected in these surveys to diagnose aMCI in the study participants. Finally, they used statistical methods to analyze the prevalence, distribution, and impact of aMCI in the eight LAMICs. The researchers report that aMCI was associated with disability, anxiety, apathy, and irritability but not with depression, and that the prevalence of aMCI ranged from 0.8% in China to 4.3% in India. Other analyses show that, considered across all eight countries, aMCI was modestly associated with being male (men had a slightly higher prevalence of aMCI than women) and with having fewer assets but was not associated with age or education.
What Do These Findings Mean?
These findings suggest that aMCI, as diagnosed using the algorithm developed by the researchers, is consistently associated with higher disability and with neuropsychiatric symptoms in the LAMICs studied but not with most sociodemographic factors. Because prevalidated and standardized measurements were applied consistently in all the countries and a common algorithm was used to define aMCI, these findings also suggest that the prevalence of aMCI varies markedly among LAMIC populations and is similar to or slightly lower than the prevalence most often reported for European and North American populations. Although longitudinal studies are now needed to investigate the extent to which aMCI can be used as a risk marker for further cognitive decline and dementia in these settings, the large absolute numbers of older people with aMCI in LAMICs revealed here have potentially important implications for health care and social service planning in these rapidly aging and populous regions of the world.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001170.
Alzheimer's Disease International is the international federation of Alzheimer associations around the world; it provides links to individual associations, information about dementia, and links to three World Alzheimer Reports; information about the 10/66 Dementia Research Group is also available on this web site
The Alzheimer's Society provides information for patients and carers about dementia, including information on MCI and personal stories about living with dementia
The Alzheimer's Association also provides information for patients and carers about dementia and about MCI, and personal stories about dementia
A BBC radio program that includes an interview with a man with MCI is available
MedlinePlus provides links to further resources about MCI and dementia (in English and Spanish)
doi:10.1371/journal.pmed.1001170
PMCID: PMC3274506  PMID: 22346736
17.  Search Algorithms as a Framework for the Optimization of Drug Combinations 
PLoS Computational Biology  2008;4(12):e1000249.
Combination therapies are often needed for effective clinical outcomes in the management of complex diseases, but at present they are generally based on empirical clinical experience. Here we suggest a novel application of search algorithms, originally developed for digital communication, modified to optimize combinations of therapeutic interventions. In biological experiments measuring the reversal of the age-related decline in heart function and exercise capacity in Drosophila melanogaster, we found that search algorithms correctly identified optimal combinations of four drugs using only one-third of the tests performed in a fully factorial search. In experiments identifying combinations of three doses of up to six drugs for selective killing of human cancer cells, search algorithms resulted in a highly significant enrichment of selective combinations compared with random searches. In simulations using a network model of cell death, we found that the search algorithms identified the optimal combinations of 6–9 interventions in 80–90% of tests, compared with 15–30% for an equivalent random search. These findings suggest that modified search algorithms from information theory have the potential to enhance the discovery of novel therapeutic drug combinations. This report also helps to frame a biomedical problem that will benefit from an interdisciplinary effort and suggests a general strategy for its solution.
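As a rough illustration of why guided search needs far fewer assays than a fully factorial design, the sketch below runs a greedy coordinate search over a toy dose landscape. It is a generic stand-in, not the specific communication-theory algorithms used in the study; the assay function and its optimum are fabricated.

import random

# Greedy coordinate search over drug-dose combinations (illustrative only).
# `measure` stands in for a biological assay; it is a made-up noisy score.
random.seed(0)
DOSES = [0, 1, 2]          # three dose levels per drug
N_DRUGS = 4
OPTIMUM = (1, 2, 0, 1)     # hidden best combination of the toy assay

def measure(combo):
    """Toy assay: noisy score peaked at OPTIMUM (higher is better)."""
    return sum(2 - abs(a - b) for a, b in zip(combo, OPTIMUM)) + random.gauss(0, 0.3)

def greedy_search(n_rounds=2):
    combo = [random.choice(DOSES) for _ in range(N_DRUGS)]
    tests = 0
    for _ in range(n_rounds):
        for i in range(N_DRUGS):           # improve one drug's dose at a time
            combo[i] = max(DOSES, key=lambda d: measure(combo[:i] + [d] + combo[i+1:]))
            tests += len(DOSES)
    return tuple(combo), tests

found, n_tests = greedy_search()
print(found, f"{n_tests} assays vs {len(DOSES) ** N_DRUGS} fully factorial")
# e.g. 24 assays instead of 81, roughly the one-third reported above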
Author Summary
This work describes methods that identify drug combinations that might alleviate the suffering caused by complex diseases. Our biological model systems are the physiological decline associated with aging and the selective killing of cancer cells. The novelty of this approach lies in a new application of methods from digital communications theory, which become useful when the number of possible combinations is large and a complete set of measurements cannot be obtained; this limit is reached easily, given the many drugs and doses available for complex diseases. We are not simply using computer models: we use search algorithms implemented with biological measurements, built to integrate information from different sources, including simulations. This might be considered parallel biological computation, and it differs from the classic systems biology approach by having search algorithms rather than explicit quantitative models as the central element. Because variation is an essential component of biology, this approach might be more appropriate for combined drug interventions, which can be considered a form of biological control. Search algorithms are used in many fields of physics and engineering, and we hope that this paper will generate interest in a new application of importance to human health among practitioners of diverse computational disciplines.
doi:10.1371/journal.pcbi.1000249
PMCID: PMC2590660  PMID: 19112483
18.  Reverse Engineering a Signaling Network Using Alternative Inputs 
PLoS ONE  2009;4(10):e7622.
One of the goals of systems biology is to reverse engineer, in a comprehensive fashion, the arrow diagrams of signal transduction systems. An important tool for ordering pathway components is genetic epistasis analysis, and here we present a strategy termed Alternative Inputs (AIs) to perform systematic epistasis analysis. An alternative input is defined as any genetic manipulation that can activate the signaling pathway instead of the natural input. We introduced the concept of an “AIs-Deletions matrix” that summarizes the outputs of all combinations of alternative inputs and deletions. We developed the theory and algorithms to construct a pairwise relationship graph from the AIs-Deletions matrix, capturing both functional ordering (upstream, downstream) and logical relationships (AND, OR), and then to translate these relationships into a standard arrow diagram. As a proof of principle, we applied this methodology to a subset of genes involved in yeast mating signaling. This experimental pilot study highlights the robustness of the approach as well as important technical challenges. In summary, this research formalizes and extends classical epistasis analysis from linear pathways to more complex networks, facilitating computational analysis and reconstruction of signaling arrow diagrams.
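The core ordering logic of such an AIs-Deletions matrix can be sketched for a simple linear pathway. The matrix entries and gene names below are hypothetical, and the sketch ignores the AND/OR logic the full method also recovers: it only applies the classical epistasis rule that output surviving a deletion places the alternative input downstream of (or independent of) the deleted gene.

# Sketch of pairwise ordering from an AIs-Deletions matrix (linear-pathway
# logic only). Rows: alternative inputs; columns: deletions; True = output.
# Gene/AI names are hypothetical examples, not the study's data.
ais_deletions = {
    ("AI_STE4",  "ste4"):  True,   # AI bypasses its own gene's deletion
    ("AI_STE4",  "ste11"): False,  # ste11 needed downstream of this entry point
    ("AI_STE11", "ste4"):  True,   # ste4 not needed once STE11 is activated
    ("AI_STE11", "ste11"): True,
}

def order(ai, gene):
    if ais_deletions[(ai, gene)]:
        return f"{ai} enters downstream of (or independent of) {gene}"
    return f"{gene} acts downstream of {ai}'s entry point"

for ai, gene in ais_deletions:
    print(order(ai, gene))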
doi:10.1371/journal.pone.0007622
PMCID: PMC2764141  PMID: 19898612
19.  A Network Inference Method for Large-Scale Unsupervised Identification of Novel Drug-Drug Interactions 
PLoS Computational Biology  2013;9(12):e1003374.
Characterizing interactions between drugs is important to avoid potentially harmful combinations, to reduce off-target effects of treatments and to fight antibiotic resistant pathogens, among others. Here we present a network inference algorithm to predict uncharacterized drug-drug interactions. Our algorithm takes, as its only input, sets of previously reported interactions, and does not require any pharmacological or biochemical information about the drugs, their targets or their mechanisms of action. Because the models we use are abstract, our approach can deal with adverse interactions, synergistic/antagonistic/suppressing interactions, or any other type of drug interaction. We show that our method is able to accurately predict interactions, both in exhaustive pairwise interaction data between small sets of drugs, and in large-scale databases. We also demonstrate that our algorithm can be used efficiently to discover interactions of new drugs as part of the drug discovery process.
Author Summary
Over one in four adults older than 57 in the US take five or more prescriptions at the same time, and as many as 4% are at risk of a major adverse drug-drug interaction. Potentially beneficial effects of drug combinations, on the other hand, are also important. For example, combinations of drugs with synergistic effects increase the efficacy of treatments and reduce side effects, and suppressing interactions between drugs, in which one drug inhibits the action of the other, have been found to be effective in the fight against antibiotic-resistant pathogens. With thousands of drugs on the market, and hundreds or thousands more being tested and developed, it is clear that we cannot rely only on experimental assays, or even mechanistic pharmacological models, to uncover new interactions. Here we present an algorithm that is able to predict such interactions. Our algorithm is parameter-free, unsupervised, and takes, as its only input, sets of previously reported interactions. We show that our method is able to accurately predict interactions, even in large-scale databases containing thousands of drugs, and that it can be used efficiently to discover interactions of new drugs as part of the drug discovery process.
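As a toy illustration of inferring unreported interactions purely from reported ones, the sketch below scores candidate drug pairs by how many known interaction partners they share. This common-neighbours heuristic is a generic stand-in for the more sophisticated, model-based algorithm presented in the paper, and the drug names are arbitrary examples.

from itertools import combinations

# Common-neighbours link prediction on a network of known interactions.
# Stand-in heuristic only; drug pairs below are illustrative, not data.
known = {("warfarin", "aspirin"), ("warfarin", "fluconazole"),
         ("aspirin", "ibuprofen"), ("fluconazole", "ibuprofen")}

drugs = sorted({d for pair in known for d in pair})
neigh = {d: {b if a == d else a for a, b in known if d in (a, b)} for d in drugs}

scores = {}
for a, b in combinations(drugs, 2):
    if (a, b) not in known and (b, a) not in known:
        scores[(a, b)] = len(neigh[a] & neigh[b])   # shared partners

for pair, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(pair, "score:", s)       # higher score = stronger candidate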
doi:10.1371/journal.pcbi.1003374
PMCID: PMC3854677  PMID: 24339767
20.  Efficient algorithms for biological stems search 
BMC Bioinformatics  2013;14:161.
Background
Motifs are significant patterns in DNA, RNA, and protein sequences that play an important role in biological processes and functions, such as the identification of open reading frames, RNA transcription, and protein binding. Several versions of the motif search problem have been studied in the literature. One such version is called the Planted Motif Search (PMS) or (l, d)-Motif Search. PMS is known to be NP-complete, and the time complexities of most planted motif search algorithms depend exponentially on the alphabet size. Recently a new version of the motif search problem was introduced by Kuksa and Pavlovic; we call this version the Motif Stems Search (MSS) problem. A motif stem is an l-mer (for some relevant value of l) with some wildcard characters and hence corresponds to a set of l-mers (without wildcards), some of which are (l, d)-motifs. Kuksa and Pavlovic have presented an efficient algorithm to find motif stems for inputs from large alphabets. Ideally, the number of stems output should be as small as possible, since the stems form a superset of the motifs.
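A minimal sketch of the underlying (l, d)-motif test may help: an l-mer is a planted motif if every input sequence contains some l-mer within Hamming distance d of it, and a stem with wildcards simply denotes the set of l-mers obtained by expanding those wildcards. The sequences below are made up.

# (l, d)-motif membership test underlying PMS/MSS (toy data).
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def occurs_within(seq, lmer, d):
    """True if seq contains an l-mer within Hamming distance d of lmer."""
    l = len(lmer)
    return any(hamming(seq[i:i+l], lmer) <= d for i in range(len(seq) - l + 1))

def is_ld_motif(lmer, sequences, d):
    return all(occurs_within(s, lmer, d) for s in sequences)

seqs = ["GATTACAGT", "CATTACAAC", "TTGATTGCA"]
print(is_ld_motif("ATTACA", seqs, d=1))   # True: each sequence has a close match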
Results
In this paper we propose an efficient algorithm for MSS and evaluate it on both synthetic and real data. This evaluation reveals that our algorithm is much faster than Kuksa and Pavlovic’s algorithm.
Conclusions
Our MSS algorithm outperforms the algorithm of Kuksa and Pavlovic in terms of both run time and the number of stems output. Specifically, the stems output by our algorithm form a proper (and much smaller) subset of the stems output by Kuksa and Pavlovic's algorithm.
doi:10.1186/1471-2105-14-161
PMCID: PMC3679804  PMID: 23679045
21.  Reactive Searching and Infotaxis in Odor Source Localization 
PLoS Computational Biology  2014;10(10):e1003861.
Male moths aiming to locate pheromone-releasing females rely on stimulus-adapted search maneuvers complicated by a discontinuous distribution of pheromone patches. They alternate sequences of upwind surge when perceiving the pheromone and cross- or downwind casting when the odor is lost. We compare four search strategies: three reactive and one cognitive. The former consist of pre-programmed movement sequences triggered by pheromone detections, while the latter uses Bayesian inference to build spatial probability maps. Based on the analysis of triphasic responses of antennal lobe neurons (On, inhibition, Off), we propose three reactive strategies. One combines upwind surge (representing the On response to a pheromone detection) and spiral casting only. The other two additionally include crosswind (zigzag) casting, representing the Off phase. As the cognitive strategy, we use the infotaxis algorithm, which was developed for searching in a turbulent medium. Detection events in the electroantennogram of a moth attached to a robot indirectly control this cyborg, depending on the strategy in use. The recorded trajectories are analyzed with regard to success rates, efficiency, and other features; in addition, we qualitatively compare our robotic trajectories to behavioral search paths. Reactive searching is more efficient (yielding shorter trajectories) for higher pheromone doses, whereas cognitive searching works better for lower doses. With respect to our experimental conditions (2 m from starting position to pheromone source), reactive searching with crosswind zigzag yields the shortest trajectories (for comparable success rates). Assuming that the neuronal Off response represents a short-term memory, zigzagging is an efficient movement to relocate a recently lost pheromone plume. Accordingly, such reactive strategies offer an interesting alternative to complex cognitive searching.
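The reactive strategies can be caricatured as a tiny state machine: each detection triggers a fixed number of upwind-surge steps, after which the controller falls back to alternating crosswind casting legs. The step count and detection stream below are illustrative assumptions, not the parameters used on the robot.

# Toy reactive surge/cast controller in the spirit of the strategies above.
SURGE_STEPS = 3   # upwind-surge steps triggered by one detection (assumed)

def controller(detections):
    """detections: iterable of booleans, one per time step."""
    surge_left, direction = 0, +1
    for hit in detections:
        if hit:
            surge_left = SURGE_STEPS          # "On" response: surge upwind
        if surge_left > 0:
            surge_left -= 1
            yield "surge upwind"
        else:
            direction *= -1                   # "Off" phase: zigzag crosswind
            yield "cast crosswind (" + ("left" if direction < 0 else "right") + ")"

stream = [True, False, False, False, False, True, False]
for t, action in enumerate(controller(stream)):
    print(t, action)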
Author Summary
The moth mating race is a suitable model case for studying the efficiency of various search strategies and for comparing them to real-world behavior. All that guides olfactory navigation is a set of simple, sporadic cues, i.e., single pheromone detections. Thus, a pheromone-seeking male relies on a specifically adapted behavior in which action selection is triggered by simple perceptual events: it switches between stereotypical movement sequences such as upwind surge and crosswind casting. This behavior can be either a consequence of cognitive processing or a reactive reflex of fixed action patterns. Suggesting a direct relationship between central neuronal activity and such action patterns, we combine and implement them as reactive strategies. We also employ infotaxis, an artificial-intelligence algorithm specifically developed for searching in turbulent odor plumes. Using these strategies in cyborg experiments, we obtain and compare the resulting search trajectories. Our results indicate that complex, computationally expensive search strategies like infotaxis are not necessarily better than simple reactive ones. With respect to our set-up, reactive searching yields the shortest trajectories if and only if it includes a crosswind zigzagging phase that represents a short-term memory. Thus, even a minimal amount of simple memory can produce very efficient goal-directed behavior.
doi:10.1371/journal.pcbi.1003861
PMCID: PMC4211930  PMID: 25330317
22.  Searching for phenotypic causal networks involving complex traits: an application to European quail 
Background
Structural equation models (SEM) are used to model multiple traits and the causal links among them. The number of different causal structures that can be used to fit a SEM is typically very large, even when only a few traits are studied. In recent applications of SEM in quantitative genetics mixed model settings, causal structures were pre-selected based on prior beliefs alone. Alternatively, there are algorithms that search for structures that are compatible with the joint distribution of the data. However, such a search cannot be performed directly on the joint distribution of the phenotypes, since causal relationships may be masked by genetic covariances. In this context, the application of the Inductive Causation (IC) algorithm to the joint distribution of phenotypes conditional on unobservable genetic effects has been proposed.
Methods
Here, we applied this approach to five traits in European quail: birth weight (BW), weight at 35 days of age (W35), age at first egg (AFE), average egg weight from 77 to 110 days of age (AEW), and number of eggs laid in the same period (NE). We focus the discussion on the challenges and difficulties that result from applying this method to field data. Statistical decisions regarding partial correlations were based on different Highest Posterior Density (HPD) interval contents, and models based on the selected causal structures were compared using the Deviance Information Criterion (DIC). In addition, we used temporal information to perform additional edge orienting, overriding the algorithm output when necessary.
Results
The final causal structure consisted of two separate substructures: BW→AEW and W35→AFE→NE, where an arrow represents a direct effect. A DIC comparison between a SEM with the selected structure and a multiple-trait animal model indicated that the SEM is more plausible.
Conclusions
Coupling prior knowledge with the output provided by the IC algorithm allowed further learning regarding phenotypic causal structures when compared to standard mixed effects SEM applications.
doi:10.1186/1297-9686-43-37
PMCID: PMC3354366  PMID: 22047591
23.  The Role of Health Systems Factors in Facilitating Access to Psychotropic Medicines: A Cross-Sectional Analysis of the WHO-AIMS in 63 Low- and Middle-Income Countries 
PLoS Medicine  2012;9(1):e1001166.
In a cross-sectional analysis of WHO-AIMS data, Ryan McBain and colleagues investigate the associations between health system components and access to psychotropic drugs in 63 low- and middle-income countries.
Background
Neuropsychiatric conditions comprise 14% of the global burden of disease and 30% of all noncommunicable disease. Despite the existence of cost-effective interventions, including administration of psychotropic medicines, the proportion of persons who remain untreated is as high as 85% in low- and middle-income countries (LAMICs). While access to psychotropic medicines varies substantially across countries, no studies to date have empirically investigated potential health systems factors underlying this issue.
Methods and Findings
This study uses a cross-sectional sample of 63 LAMICs and country regions to identify key health systems components associated with access to psychotropic medicines. Data from countries that completed the World Health Organization Assessment Instrument for Mental Health Systems (WHO-AIMS) were included in multiple regression analyses to investigate the role of five major mental health systems domains in shaping medicine availability and affordability. These domains are: mental health legislation, human rights implementation, mental health care financing, human resources, and the role of advocacy groups. Availability of psychotropic medicines was associated with features of all five mental health systems domains. Most notably, within the domain of mental health legislation, a comprehensive national mental health plan was associated with 15% greater availability, and in terms of advocacy groups, the participation of family-based organizations in the development of mental health legislation was associated with 17% greater availability. Only three measures were related to the affordability of medicines for consumers: the level of human resources, the percentage of countries' health budgets dedicated to mental health, and the availability of mental health care in prisons. After controlling for country development, as measured by the Human Development Index, health systems features were associated with medicine availability but not affordability.
Conclusions
Results suggest that strengthening particular facets of mental health systems might improve availability of psychotropic medicines and that overall country development is associated with affordability.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Mental disorders—conditions that involve impairment of thinking, emotions, and behavior—are extremely common. Worldwide, mental illness affects about 450 million people and accounts for 13.5% of the global burden of disease. About one in four people will have a mental health problem at some time in their life. For some people, this will be a short period of mild depression, anxiety, or stress. For others, it will be a serious, long-lasting condition such as schizophrenia, bipolar disorder, or major depression. People with mental health problems need help and support from professionals and from their friends and families to help them cope with their illness but are often discriminated against, which can make their illness worse. Treatments include counseling and psychotherapy (talking therapies), and psychotropic medicines—drugs that act mainly on the brain. Left untreated, many people with serious mental illnesses commit suicide.
Why Was This Study Done?
About 80% of people with mental illnesses live in low- and middle-income countries (LAMICs) where up to 85% of patients remain untreated. Access to psychotropic medicines, which constitute an essential and cost-effective component in the treatment of mental illnesses, is particularly poor in many LAMICs. To improve this situation, it is necessary to understand what health systems factors limit the availability and affordability of psychotropic drugs; a health system is the sum of all the organizations, institutions, and resources that act together to improve health. In this cross-sectional study, the researchers look for associations between specific health system components and access to psychotropic medicines by analyzing data collected from LAMICs using the World Health Organization's Assessment Instrument for Mental Health Systems (WHO-AIMS). A cross-sectional study analyzes data collected at a single time. WHO-AIMS, which was created to evaluate mental health systems primarily in LAMICs, is a 155-item survey that Ministries of Health and other country-based agencies can use to collect information on mental health indicators.
What Did the Researchers Do and Find?
The researchers used WHO-AIMS data from 63 countries/country regions and multiple regression analysis to evaluate the role of mental health legislation, human rights implementation, mental health care financing, human resources, and advocacy in shaping medicine availability and affordability. For each of these health systems domains, the researchers developed one or more summary measurements. For example, they measured financing as the percentage of government health expenditure directed toward mental health. Availability of psychotropic medicines was defined as the percentage of mental health facilities in which at least one psychotropic medication for each therapeutic category was always available. Affordability was measured by calculating the percentage of the daily minimum wage needed by the average consumer to purchase medicine. The availability of psychotropic medicines was related to features of all five mental health systems domains, report the researchers. Notably, having a national mental health plan (part of the legislation domain) and the participation (advocacy) of family-based organizations in formulating mental health legislation were associated with 15% and 17% greater availability of medicines, respectively. By contrast, only the levels of human resources and financing, and the availability of mental health care in prisons (part of the human rights domain), were associated with the affordability of psychotropic medicines. Once overall country development was taken into account, most of the associations between health systems factors and medicine availability remained significant, while the associations between health systems factors and medicine affordability did not. This was partly because country development was more strongly associated with affordability and explained most of these relationships: for example, countries with greater overall development have higher expenditures on mental health, and their development tracks medicine affordability more closely than availability.
What Do These Findings Mean?
These findings indicate that access to psychotropic medicines in LAMICs is related to key components within the mental health systems of these countries but that availability and affordability are affected to different extents by these components. They also show that country development plays a strong role in determining affordability but has less effect on determining availability. Because cross-sectional data were used in this study, these findings only indicate associations; they do not imply causality. They are also limited by the relatively small number of observations included in this study, by the methods used to collect mental health systems data in many LAMICs, and by the possibility that some countries may have reported biased results. Despite these limitations, these findings suggest that strengthening specific mental health system features may be an important way to facilitate access to psychotropic medicines but also highlight the role that country wealth and development play in promoting the treatment of mental disorders.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001166.
The US National Institute of Mental Health provides information on all aspects of mental health (in English and Spanish)
The UK National Health Service Choices website provides information on mental health; its Live Well feature provides practical advice on dealing with mental health problems and personal stories
The UK charity Mind provides further information about mental illness, including personal stories
MedlinePlus provides links to many other sources of information on mental health (in English and Spanish)
Information on WHO-AIMS, including versions of the instrument in several languages, and WHO-AIMS country reports are available
doi:10.1371/journal.pmed.1001166
PMCID: PMC3269418  PMID: 22303288
24.  Point-of-Care International Normalized Ratio (INR) Monitoring Devices for Patients on Long-term Oral Anticoagulation Therapy 
Executive Summary
Subject of the Evidence-Based Analysis
The purpose of this evidence based analysis report is to examine the safety and effectiveness of point-of-care (POC) international normalized ratio (INR) monitoring devices for patients on long-term oral anticoagulation therapy (OAT).
Clinical Need: Target Population and Condition
Long-term OAT is typically required by patients with mechanical heart valves, chronic atrial fibrillation, venous thromboembolism, myocardial infarction, stroke, and/or peripheral arterial occlusion. It is estimated that approximately 1% of the population receives anticoagulation treatment; applying this value to Ontario yields an estimated 132,000 patients on OAT in the province, a figure expected to increase with the aging population.
Patients on OAT are regularly monitored and their medications adjusted to ensure that their INR scores remain in the therapeutic range. This can be challenging due to the narrow therapeutic window of warfarin and variation in individual responses. Optimal INR scores depend on the underlying indication for treatment and patient level characteristics, but for most patients the therapeutic range is an INR score of between 2.0 and 3.0.
The current standard of care in Ontario for patients on long-term OAT is laboratory-based INR determination, with management carried out by primary care physicians or anticoagulation clinics (ACCs). Patients regularly visit a hospital or community-based facility to provide a venous blood sample (venipuncture) that is then sent to a laboratory for INR analysis.
Experts, however, have commented that there may be under-utilization of OAT due to patient factors, physician factors, or regional practice variations, and that sub-optimal patient management may also occur. There are currently no population-based Ontario data to permit assessment of patient care, but recent systematic reviews have estimated that less than 50% of patients receive OAT on a routine basis and that patients are in the therapeutic range only 64% of the time.
Overview of POC INR Devices
POC INR devices offer an alternative to laboratory-based testing and venipuncture, enabling INR determination from a fingerstick sample of whole blood. Independent evaluations have shown POC devices to have an acceptable level of precision. They permit INR results to be determined immediately, allowing for more rapid medication adjustments.
POC devices can be used in a variety of settings including physician offices, ACCs, long-term care facilities, pharmacies, or by the patients themselves through self-testing (PST) or self-management (PSM) techniques. With PST, patients measure their INR values and then contact their physician for instructions on dose adjustment, whereas with PSM, patients adjust the medication themselves based on pre-set algorithms. These models are not suitable for all patients and require the identification and education of suitable candidates.
Potential advantages of POC devices include improved convenience to patients, better treatment compliance and satisfaction, more frequent monitoring and fewer thromboembolic and hemorrhagic complications. Potential disadvantages of the device include the tendency to underestimate high INR values and overestimate low INR values, low thromboplastin sensitivity, inability to calculate a mean normal PT, and errors in INR determination in patients with antiphospholipid antibodies with certain instruments. Although treatment satisfaction and quality of life (QoL) may improve with POC INR monitoring, some patients may experience increased anxiety or preoccupation with their disease with these strategies.
Evidence-Based Analysis Methods
Research Questions
1. Effectiveness
Does POC INR monitoring improve clinical outcomes in various settings compared to standard laboratory-based testing?
Does POC INR monitoring impact patient satisfaction, QoL, compliance, acceptability, convenience compared to standard laboratory-based INR determination?
Settings include primary care (with POC INR devices used by general practitioners or nurses), ACCs, pharmacies, long-term care homes, and use by the patient for either PST or PSM.
2. Cost-effectiveness
What is the cost-effectiveness of POC INR monitoring devices in various settings compared to standard laboratory-based INR determination?
Inclusion Criteria
English-language RCTs, systematic reviews, and meta-analyses
Publication dates: 1996 to November 25, 2008
Population: patients on OAT
Intervention: anticoagulation monitoring by POC INR device in any setting including anticoagulation clinic, primary care (general practitioner or nurse), pharmacy, long-term care facility, PST, PSM or any other POC INR strategy
Minimum sample size: 50 patients
Minimum follow-up period: 3 months
Comparator: usual care defined as venipuncture blood draw for an INR laboratory test and management provided by an ACC or individual practitioner
Outcomes: Hemorrhagic events, thromboembolic events, all-cause mortality, anticoagulation control as assessed by proportion of time or values in the therapeutic range, patient reported outcomes including satisfaction, QoL, compliance, acceptability, convenience
Exclusion criteria
Non-RCTs, before-after studies, quasi-experimental studies, observational studies, case reports, case series, editorials, letters, non-systematic reviews, conference proceedings, abstracts, non-English articles, duplicate publications
Studies where POC INR devices were compared to laboratory testing to assess test accuracy
Studies where the POC INR results were not used to guide patient management
Method of Review
A search of electronic databases (OVID MEDLINE, MEDLINE In-Process & Other Non-Indexed Citations, EMBASE, The Cochrane Library, and the International Agency for Health Technology Assessment [INAHTA] database) was undertaken to identify evidence published from January 1, 1998 to November 25, 2008. Studies meeting the inclusion criteria were selected from the search results. Reference lists of selected articles were also checked for relevant studies.
Summary of Findings
Five existing reviews and 22 articles describing 17 unique RCTs met the inclusion criteria. Three RCTs examined POC INR monitoring devices with PST strategies, 11 RCTs examined PSM strategies, one RCT included both PST and PSM strategies and two RCTs examined the use of POC INR monitoring devices by health care professionals.
Anticoagulation Control
Anticoagulation control is measured by the percentage of time INR is within the therapeutic range or by the percentage of INR values in the therapeutic range. Due to the differing methodologies and reporting structures used, it was deemed inappropriate to combine the data and estimate whether the difference between groups would be significant. Instead, the results of individual studies were weighted by the number of person-years of observation and then pooled to calculate a summary measure.
Across most studies, patients in the intervention groups tended to have a higher percentage of time and values in the therapeutic target range than control patients. When the percentage of time in the therapeutic range was pooled across studies and weighted by the number of person-years of observation, the difference between the intervention and control groups was 4.2% for PSM, 7.2% for PST, and 6.1% for POC use by health care practitioners. Overall, intervention patients were in the target range 69% of the time and control patients 64% of the time, an overall difference of roughly 5%.
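The pooling just described amounts to a person-year-weighted average of per-study differences, as in the sketch below; the three study rows are fabricated numbers used only to show the arithmetic.

# Person-year-weighted pooling of per-study time-in-therapeutic-range (TTR)
# differences. Study values below are made up for illustration.
studies = [
    # (TTR_intervention_%, TTR_control_%, person_years)
    (71.0, 63.0, 120.0),
    (68.0, 65.0,  80.0),
    (70.0, 64.0, 200.0),
]

total_py = sum(py for _, _, py in studies)
pooled_diff = sum((i - c) * py for i, c, py in studies) / total_py
print(f"pooled TTR difference: {pooled_diff:.1f} percentage points")   # 6.0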
Major Complications and Deaths
There was no statistically significant difference in the number of major hemorrhagic events between patients managed with POC INR monitoring devices and patients managed with standard laboratory testing (OR = 0.74; 95% CI: 0.52-1.04). This difference was non-significant for all POC strategies (PSM, PST, health care practitioner).
Patients managed with POC INR monitoring devices had significantly fewer thromboembolic events than usual care patients (OR = 0.52; 95% CI: 0.37-0.74). When divided by POC strategy, PSM resulted in significantly fewer thromboembolic events than usual care (OR = 0.46; 95% CI: 0.29-0.72). The observed difference in thromboembolic events for PSM remained significant when the analysis was limited to major thromboembolic events (OR = 0.40; 95% CI: 0.17-0.93), but was non-significant when the analysis was limited to minor thromboembolic events (OR = 0.73; 95% CI: 0.08-7.01). PST and GP/Nurse strategies did not result in significant differences in thromboembolic events; however, only a limited number of studies examined these interventions.
No statistically significant difference was observed in the number of deaths between POC intervention and usual care control groups (OR = 0.67; 95% CI: 0.41-1.10). This difference was non-significant for all POC strategies. Only one study reported on survival, with a 10-year survival rate of 76.1% in the usual care control group compared to 84.5% in the PSM group (P = 0.05).
Summary Results of Meta-Analyses of Major Complications and Deaths in POC INR Monitoring Studies
Patient Satisfaction and Quality of Life
Quality of life measures were reported in eight studies comparing POC INR monitoring to standard laboratory testing using a variety of measurement tools. It was thus not possible to calculate a quantitative summary measure. The majority of studies reported favourable impacts of POC INR monitoring on QoL and found better treatment satisfaction with POC monitoring. Results from a pre-analysis patient and caregiver focus group conducted in Ontario also indicated improved patient QoL with POC monitoring.
Quality of the Evidence
Studies varied with regard to patient eligibility, baseline patient characteristics, follow-up duration, and withdrawal rates. Differential drop-out rates were observed such that the POC intervention groups tended to have a larger number of patients who withdrew. There was a lack of consistency in the definitions and reporting for OAT control and definitions of adverse events. In most studies, the intervention group received more education on the use of warfarin and performed more frequent INR testing, which may have overestimated the effect of the POC intervention. Patient selection and eligibility criteria were not always fully described and it is likely that the majority of the PST/PSM trials included a highly motivated patient population. Lastly, a large number of trials were also sponsored by industry.
Despite the observed heterogeneity among studies, there was a general consensus in findings that POC INR monitoring devices have beneficial impacts on the risk of thromboembolic events, anticoagulation control and patient satisfaction and QoL (ES Table 2).
GRADE Quality of the Evidence on POC INR Monitoring Studies
CI refers to confidence interval; Interv, intervention; OR, odds ratio; RCT, randomized controlled trial.
Economic Analysis
A 5-year Markov model was used to evaluate the health and economic outcomes associated with four different anticoagulation management approaches:
Standard care: consisting of a laboratory test with a venipuncture blood draw for an INR;
Healthcare staff testing: consisting of a test with a POC INR device in a medical clinic comprised of healthcare staff such as pharmacists, nurses, and physicians following protocol to manage OAT;
PST: patient self-testing using a POC INR device and phoning in results to an ACC or family physician; and
PSM: patient self-managing using a POC INR device and self-adjustment of OAT according to a standardized protocol. Patients may also phone in to a medical office for guidance.
The primary analytic perspective was that of the Ontario Ministry of Health and Long-Term Care (MOHLTC). Only direct medical costs were considered, and the time horizon of the model was five years, the serviceable life of a POC device.
The economic analysis found POC strategies to be cost-effective compared with traditional laboratory-based INR testing. In particular, the healthcare staff testing strategy can derive potential cost savings from the use of one device for multiple patients. The PSM strategy, however, appears to be the most cost-effective, since patients are more inclined to adjust their INRs promptly rather than allowing them to fall out of range.
Considerations for Ontario Health System
Although the use of POC devices continues to diffuse throughout Ontario, not all OAT patients are suitable for, or able to practice, PST/PSM. The use of POC devices is currently concentrated in institutional settings, including hospitals, ACCs, long-term care facilities, physician offices, and pharmacies, and is much less common at the patient level. It is, however, estimated that 24% of OAT patients (representing approximately 32,000 patients in Ontario) would be suitable candidates for PST/PSM strategies and willing to use a POC device.
There are several barriers to the use and implementation of POC INR monitoring devices, including lack of physician familiarity with the devices, resistance to changing established laboratory-based methods, lack of an approach for identifying suitable patients, and inadequate resources for effective patient education and training. Issues of cost and insufficient reimbursement strategies may also hinder implementation, and effective quality assurance programs would need to be developed to ensure that INR measurements are accurate and precise.
Conclusions
For a select group of patients who are highly motivated and trained, PSM resulted in significantly fewer thromboembolic events compared to conventional laboratory-based INR testing. No significant differences were observed for major hemorrhages or all-cause mortality. PST and GP/Nurse use of POC strategies are just as effective as conventional laboratory-based INR testing for thromboembolic events, major hemorrhages, and all-cause mortality. POC strategies may also result in better OAT control as measured by the proportion of time INR is in the therapeutic range and there appears to be beneficial impacts on patient satisfaction and QoL. The use of POC devices should factor in patient suitability, patient education and training, health system constraints, and affordability.
Keywords
anticoagulants, International Normalized Ratio, point-of-care, self-monitoring, warfarin.
PMCID: PMC3377545  PMID: 23074516
25.  3D Protein structure prediction with genetic tabu search algorithm 
BMC Systems Biology  2010;4(Suppl 1):S6.
Background
Protein structure prediction (PSP) has important applications in fields such as drug design and disease prediction. There are two important issues in protein structure prediction: the design of the structure model and the design of the optimization technology. Because of the complexity of realistic protein structures, the structure model adopted in this paper is a simplified one called the off-lattice AB model. Once the structure model is assumed, optimization technology is needed to search for the best conformation of a protein sequence under that model. However, PSP is an NP-hard problem even for the simplest models, and many algorithms have therefore been developed to solve the resulting global optimization problem. In this paper, a hybrid algorithm that combines a genetic algorithm (GA) and tabu search (TS) is developed to complete this task.
Results
In order to develop an efficient optimization algorithm, several improved strategies are introduced into the proposed genetic tabu search (GATS) algorithm, and their combined use improves its efficiency: tabu search introduced into the crossover and mutation operators improves the local search capability, a variable population size maintains the diversity of the population, and ranking selection increases the probability that an individual with a low energy value enters the next generation (see the sketch below). Experiments were performed with Fibonacci sequences and real protein sequences. Experimental results show that the lowest energy obtained by the proposed GATS algorithm is lower than that obtained by previous methods.
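A skeleton of such a tabu-aware genetic algorithm is sketched below. The quadratic "energy" is a placeholder for the off-lattice AB model energy, and the population, generation, and tabu-list sizes are arbitrary; this illustrates the combined strategy, not the authors' implementation.

import random

# GA skeleton with a tabu-aware mutation operator (illustrative only).
random.seed(1)
L, POP, GENS, TABU_LEN = 12, 20, 50, 30

def energy(angles):
    # placeholder: the real AB-model energy sums pairwise monomer terms
    return sum(a * a for a in angles)

def mutate_tabu(ind, tabu):
    """Perturb one torsion angle, skipping moves remembered as tabu."""
    for _ in range(5):                      # limited retries
        move = (random.randrange(L), round(random.uniform(-0.5, 0.5), 2))
        if move not in tabu:
            tabu.append(move)
            del tabu[:-TABU_LEN]            # fixed-length tabu memory
            child = ind[:]
            child[move[0]] += move[1]
            return child
    return ind[:]                           # all sampled moves were tabu

pop = [[random.uniform(-3.14, 3.14) for _ in range(L)] for _ in range(POP)]
tabu = []
for _ in range(GENS):
    pop.sort(key=energy)                    # ranking favours low energy
    parents = pop[:POP // 2]
    pop = parents + [mutate_tabu(random.choice(parents), tabu)
                     for _ in range(POP - len(parents))]
print("best energy:", round(energy(min(pop, key=energy)), 3))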
Conclusions
The hybrid algorithm has advantages from both the genetic algorithm and tabu search: it makes use of the multiple search points of a genetic algorithm and overcomes the poor hill-climbing capability of conventional genetic algorithms by using the flexible memory functions of TS. Compared with some previous algorithms, the GATS algorithm has better performance in global optimization and can predict 3D protein structure more effectively.
doi:10.1186/1752-0509-4-S1-S6
PMCID: PMC2880412  PMID: 20522256
