1.  Types and frequency of preanalytical mistakes in the first Thai ISO 9002:1994 certified clinical laboratory, a 6-month monitoring
Background
Reliability cannot be achieved in a clinical laboratory through control of accuracy in the analytical phase of the testing process alone. Indeed, a "mistake" can be defined as any defect occurring during the testing process. In the analysis of clinical specimens, there are many possible preanalytical sources of error. Therefore, the application of a quality system to laboratory testing requires total quality management throughout the laboratory process, including the preanalytical and postanalytical phases. ISO 9002:1994 is a model for quality assurance in production, installation, and servicing, and includes a number of clauses providing guidance for implementation in clinical laboratories. Our laboratory at King Chulalongkorn Memorial Hospital, the largest Thai Red Cross Society hospital, is the first clinical laboratory in Thailand to have its whole unit certified to ISO 9002:1994.
Method
In this study, we evaluated the frequency and types of preanalytical mistakes found in our laboratory, by monitoring specimens requested for laboratory analyses from both in-patient and out-patient divisions for 6 months.
Result
Among a total of 935,896 specimens submitted for 941,902 analyses, 1,048 findings were confirmed as preanalytical mistakes, a relative frequency of 0.11% (1,048/935,896). A total of 1,240 mistakes across the whole laboratory process were identified in the same setting and period, distributed as follows: preanalytical 84.52% (1,048 mistakes), analytical 4.35% (54 mistakes), and postanalytical 11.13% (138 mistakes). Of the 1,048 preanalytical mistakes, 998 (95.2%) originated in the care units. All preanalytical mistakes were due to human error, except for 12 (1.15%) relating to the laboratory barcode-reading machine.
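The reported rates follow directly from the raw counts given above; a quick arithmetic check, for verification only:

```python
# Arithmetic check of the figures reported in the abstract; all counts
# are taken directly from the text above.
specimens = 935_896
preanalytical, analytical, postanalytical = 1_048, 54, 138
total_mistakes = preanalytical + analytical + postanalytical  # 1,240

relative_frequency = preanalytical / specimens * 100
shares = {phase: count / total_mistakes * 100
          for phase, count in [("preanalytical", preanalytical),
                               ("analytical", analytical),
                               ("postanalytical", postanalytical)]}

print(f"relative frequency: {relative_frequency:.2f}%")  # 0.11%
print({phase: round(pct, 2) for phase, pct in shares.items()})
# {'preanalytical': 84.52, 'analytical': 4.35, 'postanalytical': 11.13}
```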
Conclusion
Most mistakes occurred before samples were analysed, either during sampling or preparation for analysis. This suggests that co-operation with clinicians and personnel outside the laboratory is still the key to improvement of laboratory quality.
doi:10.1186/1472-6890-1-5
PMCID: PMC59663  PMID: 11696253
2.  Learning about non-predators and safe places: the forgotten elements of risk assessment 
Animal Cognition  2011;14(3):309-316.
A fundamental prerequisite for prey to avoid being captured is the ability to distinguish dangerous stimuli such as predators and risky habitats from non-dangerous stimuli such as non-predators and safe locations. Most research to date has focused on mechanisms allowing prey to learn to recognize risky stimuli. The paradox of learned predator recognition is that its remarkable efficiency leaves room for potentially costly mistakes if prey inadvertently learn to recognize non-predatory species as dangerous. Here, we pre-exposed embryonic woodfrogs, Rana sylvatica, to the odour of a tiger salamander, Ambystoma tigrinum, without risk reinforcement, and later attempted to teach the tadpoles to recognize the salamander, a red-bellied newt, Cynops pyrrhogaster (a closely related amphibian), or a goldfish, Carassius auratus, as a predator. Tadpoles were then tested for their responses to salamander, newt or fish odour. Pre-exposure to salamander did not affect the ability of tadpoles to learn to recognize goldfish as a predator. However, the embryonic pre-exposure to salamanders inhibited the subsequent learning of salamanders as a potential predator, through a mechanism known as latent inhibition. The embryonic pre-exposure also prevented the learned recognition of novel newts, indicating complete generalization of non-predator recognition. This pattern does not match that of generalization of predator recognition, whereby prey that learn to recognize a novel predator also respond, though less strongly, to novel species closely related to the known predator. The current paper discusses the costs of making recognition mistakes within the context of generalization of predators and dangerous habitats versus generalization of non-predators and safe habitats and highlights the asymmetry in the way amphibians incorporate information related to safe versus risky cues in their decision-making.
Mechanisms such as latent inhibition allow a variety of prey species to collect information about non-threatening stimuli, as early as during their embryonic development, and to use this information later in life to infer the danger level associated with the stimuli.
doi:10.1007/s10071-010-0363-4
PMCID: PMC3078302  PMID: 21203793
Predator recognition; Non-predator recognition; Habitat learning; Latent inhibition; Embryonic learning; Decision-making; Information use
3.  How house officers cope with their mistakes. 
Western Journal of Medicine  1993;159(5):565-569.
We examined how house officers coped with serious medical mistakes to gain insight into how medical educators should handle these situations. An anonymous questionnaire was mailed to 254 house officers in internal medicine asking them to describe their most important mistake and their response to it; 45% (N = 114) reported a mistake and completed the questionnaire. House officers experienced considerable emotional distress in response to their mistakes and used a variety of strategies to cope. In multivariate analysis, those who coped by accepting responsibility were more likely to make constructive changes in practice, but to experience more emotional distress. House officers who coped by escape-avoidance were more likely to report defensive changes in practice. For house officers who have made a mistake, we suggest that medical educators provide specific advice about preventing a recurrence of the mistake, provide emotional support, and help them understand that distress is an expected concomitant of learning from the experience.
PMCID: PMC1022346  PMID: 8279153
4.  Learning from the value of your mistakes: evidence for a risk-sensitive process in movement adaptation 
Risk frames nearly every decision we make. Yet, remarkably little is known about whether risk influences how we learn new movements. Risk-sensitivity can emerge when there is a distortion between the absolute magnitude (actual value) and how much an individual values (subjective value) a given outcome. In movement, this translates to the difference between a given movement error and its consequences. Surprisingly, how movement learning can be influenced by the consequences associated with an error is not well-understood. It is traditionally assumed that all errors are created equal, i.e., that adaptation is proportional to an error experienced. However, not all movement errors of a given magnitude have the same subjective value. Here we examined whether the subjective value of error influenced how participants adapted their control from movement to movement. Seated human participants grasped the handle of a force-generating robotic arm and made horizontal reaching movements in two novel dynamic environments that penalized errors of the same magnitude differently, changing the subjective value of the errors. We expected that adaptation in response to errors of the same magnitude would differ between these environments. In the first environment, Stable, errors were not penalized. In the second environment, Unstable, rightward errors were penalized with the threat of unstable, cliff-like forces. We found that adaptation indeed differed. Specifically, in the Unstable environment, we observed reduced adaptation to leftward errors, an appropriate strategy that reduced the chance of a penalizing rightward error. These results demonstrate that adaptation is influenced by the subjective value of error, rather than solely the magnitude of error, and therefore is risk-sensitive. In other words, we may not simply learn from our mistakes, we may also learn from the value of our mistakes.
doi:10.3389/fncom.2013.00118
PMCID: PMC3750521  PMID: 23986693
risk-sensitivity; adaptation; motor learning; decision-making; internal model; subjective value; sensorimotor control
5.  Simulation-based medical teaching and learning 
One of the most important steps in curriculum development is the introduction of simulation-based medical teaching and learning. Simulation is a generic term that refers to an artificial representation of a real-world process used to achieve educational goals through experiential learning. Simulation-based medical education is defined as any educational activity that utilizes simulation aids to replicate clinical scenarios. Although medical simulation is relatively new, simulation has long been used in other high-risk professions such as aviation. Medical simulation allows the acquisition of clinical skills through deliberate practice rather than an apprentice style of learning. Simulation tools serve as an alternative to real patients. A trainee can make mistakes and learn from them without fear of harming the patient. There are different types and classifications of simulators, and their costs vary according to the degree of their resemblance to reality, or ‘fidelity’. Simulation-based learning is expensive. However, it is cost-effective if utilized properly. Medical simulation has been found to enhance clinical competence at the undergraduate and postgraduate levels. It has also been found to have many advantages that can improve patient safety and reduce health care costs through the improvement of the medical provider's competencies. The objective of this narrative review article is to highlight the importance of simulation as a new teaching method in undergraduate and postgraduate education.
doi:10.4103/1319-1683.68787
PMCID: PMC3195067  PMID: 22022669
Clinical skills; medical education; medical simulation; simulators
6.  Patient Perceptions of Mistakes in Ambulatory Care 
Archives of internal medicine  2010;170(16):1480-1487.
CONTEXT
Little information exists about current patient perceptions of medical mistakes in ambulatory care within a diverse population.
OBJECTIVES
To learn about adults’ perceptions of mistakes in ambulatory care, what factors were associated with perceived mistakes, and whether or not the respondents changed physicians because of these perceived mistakes
DESIGN
Cross-sectional survey conducted in 2008
SETTING
Seven primary care medical practices in North Carolina
PARTICIPANTS
One thousand six hundred ninety-seven English- or Spanish-speaking adults, aged 18 years and older, who presented to a medical provider during the data collection period.
MAIN OUTCOME MEASURES
1) Has a doctor in a doctor’s office ever made a mistake in your care? 2) In the past 10 years, has a doctor in a doctor’s office made a wrong diagnosis or misdiagnosed you? (If yes, how much harm did this cause you?) 3) In the last 10 years, has a doctor in a doctor’s office given you the wrong medical treatment or delayed treatment? (If yes, how much harm did this cause you?) 4) Have you ever changed doctors because of either a wrong diagnosis or a wrong treatment of a medical condition?
RESULTS
Two hundred sixty-five participants (15.6%) responded that a doctor had ever made a mistake, 13.4% reported a wrong diagnosis, 12.4% reported a wrong treatment, and 14.1% reported having changed doctors because of a mistake. Participants perceived mistakes and harm in both diagnostic care and medical treatment. Patients with chronic low back pain, higher levels of education, and poor physical health were at increased odds of perceiving harm, whereas African-Americans were less likely to perceive mistakes.
CONCLUSIONS
Patients perceived mistakes in their diagnostic and treatment care in the ambulatory setting. These perceptions had a concrete impact on the patient-physician relationship, often leading patients to seek another health care provider.
doi:10.1001/archinternmed.2010.288
PMCID: PMC3070906  PMID: 20837835
7.  Unpredicted spontaneous extrusion of a renal calculus in an adult male with spina bifida and paraplegia: report of a misdiagnosis. Measures to be taken to reduce urological errors in spinal cord injury patients 
BMC Urology  2001;1:3.
Background
A delay in diagnosis or a misdiagnosis may occur in patients with spinal cord injury (SCI) or spina bifida, as typical symptoms of a clinical condition may be absent because of their neurological impairment.
Case presentation
A 29-year-old male, who was born with spina bifida and hydrocephalus, became unwell and developed a swelling and a large red mark in his left loin eighteen months ago. Pyonephrosis or a perinephric abscess was suspected. X-ray of the abdomen showed a left-sided staghorn calculus. Since an ultrasound scan showed no features of pyonephrosis or perinephric abscess, he was prescribed a prolonged course of antibiotics for infection presumed to arise from the site of the metal implant in his spine. He developed a discharging sinus, following which the loin swelling and red mark subsided. About three months ago, he again developed a red mark and minimal swelling in the left loin. An ultrasound scan detected no abnormality in the renal or perinephric region. Therefore, the red mark and swelling were attributed to pressure from the backrest of his chair. Five weeks later, the swelling in the left loin burst open and a large stone was extruded spontaneously. An X-ray of the abdomen showed that he had extruded the central portion of the staghorn calculus from the left kidney. With hindsight, the extruded renal calculus could be seen lying in the subcutaneous tissue of the left loin, lateral to the 10th rib, in the X-ray of the abdomen taken when he presented with the red mark and minimal swelling.
Conclusion
This case illustrates how mistakes in diagnosis could occur in spinal cord injury patients, and highlights the need for corrective measures to reduce urological errors in these patients. Voluntary reporting of urological errors is recommended to facilitate learning from our mistakes. In the patients who have marked spinal curvature, ultrasonography of kidneys and perinephric region may not be entirely reliable. As clinical symptoms and signs may be non-specific in SCI patients, they require prompt, detailed and occasionally, repeated investigations. A joint team approach by health professionals belonging to various medical disciplines, which is strengthened by frequent, informal and honest discussions of a patient's clinical condition, is likely to reduce urological errors in SCI patients.
doi:10.1186/1471-2490-1-3
PMCID: PMC64578  PMID: 11801198
8.  Do house officers learn from their mistakes? 
Quality & Safety in Health Care  2003;12(3):221-226.
 Mistakes are inevitable in medicine. To learn how medical mistakes relate to subsequent changes in practice, we surveyed 254 internal medicine house officers. One hundred and fourteen house officers (45%) completed an anonymous questionnaire describing their most significant mistake and their response to it. Mistakes included errors in diagnosis (33%), prescribing (29%), evaluation (21%), and communication (5%) and procedural complications (11%). Patients had serious adverse outcomes in 90% of the cases, including death in 31% of cases. Only 54% of house officers discussed the mistake with their attending physicians, and only 24% told the patients or families. House officers who accepted responsibility for the mistake and discussed it were more likely to report constructive changes in practice. Residents were less likely to make constructive changes if they attributed the mistake to job overload. They were more likely to report defensive changes if they felt the institution was judgmental. Decreasing the work load and closer supervision may help prevent mistakes. To promote learning, faculty should encourage house officers to accept responsibility and to discuss their mistakes.
doi:10.1136/qhc.12.3.221
PMCID: PMC1743709  PMID: 12792014
9.  Oxidation of cellular amino acid pools leads to cytotoxic mistranslation of the genetic code 
eLife  2014;3:e02501.
Aminoacyl-tRNA synthetases use a variety of mechanisms to ensure fidelity of the genetic code and ultimately select the correct amino acids to be used in protein synthesis. The physiological necessity of these quality control mechanisms in different environments remains unclear, as the cost vs benefit of accurate protein synthesis is difficult to predict. We show that in Escherichia coli, a non-coded amino acid produced through oxidative damage is a significant threat to the accuracy of protein synthesis and must be cleared by phenylalanine-tRNA synthetase in order to prevent cellular toxicity caused by mis-synthesized proteins. These findings demonstrate how stress can lead to the accumulation of non-canonical amino acids that must be excluded from the proteome in order to maintain cellular viability.
DOI: http://dx.doi.org/10.7554/eLife.02501.001
eLife digest
Proteins are built from molecules called amino acids. The amino acids that make up a particular protein, and the order they appear in, are determined by the gene that encodes that protein. First, the gene is transcribed to produce a molecule of messenger RNA, which is then translated by a molecular machine called a ribosome. This involves other RNA molecules, called transfer RNAs (tRNAs), bringing the correct amino acids to the ribosome, which then joins the amino acids together to build the protein.
Amino acids are loaded onto their corresponding tRNA molecules by enzymes called tRNA synthetases. Occasionally, however, the wrong amino acid can be loaded onto a tRNA. If this amino acid ends up in a protein, the protein might not be able to function properly, or it might even be toxic to the cell, so cells need to be able to fix this problem. Some tRNA synthetases can check if a wrong amino acid has been loaded onto a tRNA, and remove it before it can cause harm. However, the importance of these ‘editing’ activities to living cells is unclear.
Here, Bullwinkle, Reynolds et al. show that, in the bacterium E. coli, a tRNA synthetase works to stop an incorrect amino acid—which accumulates in cells that are exposed to harmful chemicals—from being built into proteins. Without the enzyme’s editing activity, the build-up of this amino acid slows the growth of the bacteria. However, E. coli can thrive without this editing activity when it is grown under normal conditions in a laboratory. Yeast benefit slightly from this editing activity when exposed to the stress-produced amino acid. But, unlike E. coli, yeast strongly rely on this activity when grown in an excess of another amino acid, which is used to build proteins but is the wrong amino acid for this tRNA synthetase.
The findings of Bullwinkle, Reynolds et al. will help to improve our understanding of which activities in a cell are most affected by mistakes in protein synthesis, and how these mistakes may relate to disease.
DOI: http://dx.doi.org/10.7554/eLife.02501.002
doi:10.7554/eLife.02501
PMCID: PMC4066437  PMID: 24891238
translation; protein synthesis; stress; E. coli; S. cerevisiae
10.  Double-blind control of the data manager doesn't have any impact on data entry reliability and should be considered as an avoidable cost 
Background
Database systems have been developed to store data from large medical trials and survey studies. However, a reliable data-storage system does not guarantee reliable data entry.
We aimed to evaluate whether double-blind control of the data manager has any effect on data reliability. Our secondary aim was to assess the influence of an item's position in the insertion sheet on data-entry accuracy, and the effectiveness of electronic controls in identifying data-entry mistakes.
Methods
A cross-sectional survey and single data-manager data entry.
Data from the PACMeR_02 survey, which had been conducted within the framework of the SESy-Europe project (PACMeR_01.4), were used as the substrate for this study. We analyzed the electronic storage of 6446 medical charts. We structured data insertion in four sequential phases. After each phase, the data stored in the database were tested in order to detect unreliable entries through both computerized and manual random control. Control was provided in a double-blind fashion.
Results
Double-blind control of the data manager did not improve data-entry reliability. Entries near the end of the insertion sheet were associated with a larger number of mistakes. Electronic-control monitoring of data entry was statistically more effective than hand-searching of randomly selected medical records.
Conclusion
Double-blind control of the data manager should be considered an avoidable cost. Electronic control for monitoring data-entry reliability is recommended.
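A minimal sketch of what such an electronic control might look like in practice: a rule-based audit over entered records, in place of manual random checking. The field names, ranges, and records below are invented for illustration:

```python
# Invented per-field validation rules: each rule returns True when
# the entered value is plausible.
rules = {
    "age":    lambda v: isinstance(v, int) and 0 <= v <= 120,
    "sex":    lambda v: v in {"M", "F"},
    "weight": lambda v: isinstance(v, (int, float)) and 1 <= v <= 300,
}

records = [
    {"age": 34, "sex": "M", "weight": 72.5},
    {"age": 340, "sex": "F", "weight": 60.0},   # keying slip: extra digit
    {"age": 28, "sex": "X", "weight": 55.0},    # invalid code
]

def audit(records, rules):
    """Return (record index, field) pairs that fail a validation rule."""
    return [(i, field)
            for i, rec in enumerate(records)
            for field, ok in rules.items()
            if not ok(rec.get(field))]

print(audit(records, rules))  # [(1, 'age'), (2, 'sex')]
```

Unlike sampling a random subset of charts by hand, a check of this kind covers every entry at negligible marginal cost.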
doi:10.1186/1471-2288-8-66
PMCID: PMC2596166  PMID: 19239725
11.  Negative Example Selection for Protein Function Prediction: The NoGO Database 
PLoS Computational Biology  2014;10(6):e1003644.
Negative examples – genes that are known not to carry out a given protein function – are rarely recorded in genome and proteome annotation databases, such as the Gene Ontology database. Negative examples are required, however, for several of the most powerful machine learning methods for integrative protein function prediction. Most protein function prediction efforts have relied on a variety of heuristics for the choice of negative examples. Determining the accuracy of methods for negative example prediction is itself a non-trivial task, given that the Open World Assumption as applied to gene annotations rules out many traditional validation metrics. We present a rigorous comparison of these heuristics, utilizing a temporal holdout, and a novel evaluation strategy for negative examples. We add to this comparison several algorithms adapted from Positive-Unlabeled learning scenarios in text-classification, which are the current state of the art methods for generating negative examples in low-density annotation contexts. Lastly, we present two novel algorithms of our own construction, one based on empirical conditional probability, and the other using topic modeling applied to genes and annotations. We demonstrate that our algorithms achieve significantly fewer incorrect negative example predictions than the current state of the art, using multiple benchmarks covering multiple organisms. Our methods may be applied to generate negative examples for any type of method that deals with protein function, and to this end we provide a database of negative examples in several well-studied organisms, for general use (The NoGO database, available at: bonneaulab.bio.nyu.edu/nogo.html).
Author Summary
Many machine learning methods have been applied to the task of predicting the biological function of proteins based on a variety of available data. The majority of these methods require negative examples: proteins that are known not to perform a function, in order to achieve meaningful predictions, but negative examples are often not available. In addition, past heuristic methods for negative example selection suffer from a high error rate. Here, we rigorously compare two novel algorithms against past heuristics, as well as some algorithms adapted from a similar task in text-classification. Through this comparison, performed on several different benchmarks, we demonstrate that our algorithms make significantly fewer mistakes when predicting negative examples. We also provide a database of negative examples for general use in machine learning for protein function prediction (The NoGO database, available at: bonneaulab.bio.nyu.edu/nogo.html).
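As a rough illustration of one family of heuristics mentioned above, negative-example selection by empirical conditional probability can be sketched as follows. The toy annotations, the scoring rule, and the threshold are all invented for illustration; the actual NoGO algorithms differ in detail:

```python
from collections import defaultdict

# gene -> set of GO-like annotation terms (invented toy data)
annotations = {
    "geneA": {"kinase", "membrane"},
    "geneB": {"kinase", "nucleus"},
    "geneC": {"ribosome", "translation"},
    "geneD": {"ribosome", "membrane"},
}

def negative_candidates(target, annotations, threshold=0.0):
    """Propose genes NOT annotated with `target` whose terms show no
    empirical co-occurrence with `target` (low P(target | term))."""
    cooc = defaultdict(int)        # term co-occurrence with the target
    term_count = defaultdict(int)  # overall term frequency
    for terms in annotations.values():
        for t in terms:
            term_count[t] += 1
            if target in terms:
                cooc[t] += 1
    scores = {}
    for gene, terms in annotations.items():
        if target in terms:
            continue  # already a positive example
        # max over the gene's terms of P(target | term)
        scores[gene] = max(cooc[t] / term_count[t] for t in terms)
    # low score = no evidence for the function = negative candidate
    return sorted(g for g, s in scores.items() if s <= threshold)

print(negative_candidates("translation", annotations))  # ['geneA', 'geneB']
```

Note that geneD is not proposed as a negative example: it shares the term "ribosome" with a translation-annotated gene, so under the Open World Assumption it may simply be missing the annotation.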
doi:10.1371/journal.pcbi.1003644
PMCID: PMC4055410  PMID: 24922051
12.  True Zero-Training Brain-Computer Interfacing – An Online Study 
PLoS ONE  2014;9(7):e102504.
Despite several approaches to realize subject-to-subject transfer of pre-trained classifiers, the full performance of a Brain-Computer Interface (BCI) for a novel user can only be reached by presenting the BCI system with data from the novel user. In typical state-of-the-art BCI systems with a supervised classifier, the labeled data is collected during a calibration recording, in which the user is asked to perform a specific task. Based on the known labels of this recording, the BCI's classifier can learn to decode the individual's brain signals. Unfortunately, this calibration recording consumes valuable time. Furthermore, it is unproductive with respect to the final BCI application, e.g. text entry. Therefore, the calibration period must be reduced to a minimum, which is especially important for patients with a limited concentration ability. The main contribution of this manuscript is an online study on unsupervised learning in an auditory event-related potential (ERP) paradigm. Our results demonstrate that the calibration recording can be bypassed by utilizing an unsupervised classifier that is initialized randomly and updated during usage. Initially, the unsupervised classifier tends to make decoding mistakes, as it may not have seen enough data to build a reliable model. Using a constant re-analysis of the previously spelled symbols, these initially misspelled symbols can be rectified post hoc once the classifier has learned to decode the signals. We compare the spelling performance of our unsupervised approach, and of the unsupervised post-hoc approach, to the standard supervised calibration-based dogma for n = 10 healthy users. To assess the learning behavior of our approach, it was trained from scratch, without supervision, three times per user. Even with the relatively low SNR of an auditory ERP paradigm, the results show that after a limited number of trials (30 trials), the unsupervised approach performs comparably to a classic supervised model.
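The "updated during usage, corrected post hoc" idea can be caricatured with a one-dimensional nearest-mean classifier. Everything below (feature values, initial means, learning rate) is an invented toy, not the study's actual ERP pipeline:

```python
# (feature, true label): target trials cluster near 1.0, non-targets
# near 0.0; labels are shown only to score the sketch afterwards.
trials = [(0.28, 0), (1.1, 1), (0.0, 0), (0.9, 1),
          (-0.1, 0), (1.2, 1), (0.05, 0), (0.95, 1)]

mean_t, mean_n = 0.3, 0.2   # poorly initialized class means
lr = 0.5
online_preds = []
for x, _ in trials:
    pred = 1 if abs(x - mean_t) < abs(x - mean_n) else 0
    online_preds.append(pred)
    if pred == 1:                      # unsupervised update: move the
        mean_t += lr * (x - mean_t)    # self-chosen class mean toward x
    else:
        mean_n += lr * (x - mean_n)

# post hoc pass: re-decode every stored trial with the matured means
posthoc_preds = [1 if abs(x - mean_t) < abs(x - mean_n) else 0
                 for x, _ in trials]

accuracy = lambda preds: sum(p == y for p, (_, y) in zip(preds, trials)) / len(trials)
print(f"online: {accuracy(online_preds):.2f}, post hoc: {accuracy(posthoc_preds):.2f}")
# online: 0.88, post hoc: 1.00  (the ambiguous first trial is fixed post hoc)
```

The early, ambiguous trial is misclassified while the model is immature but is corrected when re-decoded post hoc, mirroring the posthoc rectification of initially misspelled symbols described above.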
doi:10.1371/journal.pone.0102504
PMCID: PMC4113217  PMID: 25068464
13.  Clinical Research After Catastrophic Disasters: Lessons Learned From Hurricane Katrina 
When catastrophic disasters such as Hurricane Katrina strike, psychologists and other mental health professionals often wonder how to use resources and fill needed roles. We argue that conducting clinical research in response to disasters is 1 important way that these professionals can contribute. However, we recognize that designing and implementing a clinical research study can be a daunting task, particularly in the context of the personal and system-wide chaos that follows most disasters. Thus, we offer a detailed description of our own experiences with conducting clinical research as part of our response to Hurricane Katrina. We describe our study design, recruitment and data collection efforts, and summarize and synthesize the lessons we have learned from this endeavor. Our hope is that others who may wish to conduct disaster-related research will learn from our mistakes and successes.
doi:10.1037/0735-7028.39.1.107
PMCID: PMC2631178  PMID: 19177173
clinical research; disasters; Hurricane Katrina; roles
14.  THE RELATIONSHIP QUESTIONNAIRE-CLINICAL VERSION (RQ-CV): INTRODUCING A PROFOUNDLY-DISTRUSTFUL ATTACHMENT STYLE 
Infant mental health journal  2006;27(3):310-325.
Cost-efficient prenatal assessments are needed that have the potential to identify those at risk for parent/infant relational problems. With this goal in mind, an additional attachment style description was added to the Relationship Questionnaire (Bartholomew & Horowitz, 1991), an established self-report attachment measure, to create the Relationship Questionnaire: Clinical Version (RQ-CV). The additional description represents a profoundly-distrustful attachment style: “I think it's a mistake to trust other people. Everyone's looking out for themselves, so the sooner you learn not to expect anything from anybody else the better.” The RQ-CV was applied to a sample of 44 low-income mothers who had participated in a previous study of the impact of family risk factors on infant development. After first controlling for demographic risk factors and for other insecure adult attachment styles, mother's profound-distrust was associated with three independent assessments of the quality of maternal interactions with the infant assessed 20 years earlier. In particular, profound-distrust was related to more hostile, intrusive, and negative behaviors toward the infant. The results are discussed within the framework of attachment theory.
doi:10.1002/imhj.20094
PMCID: PMC1945178  PMID: 17710115
15.  Multi-dimensional Problems in Health Settings: A Review of Approaches to Decision Making 
Introduction
There appears to be a growing number of prioritization exercises, for example of diseases, in health-related settings (1). The decision process around these exercises involves comparing competing alternatives, i.e. diseases, against irreducible objectives. In addition to the multi-dimensional nature of the problem, the lack of reliable data, the group dynamics associated with the involvement of experts, and the multiplicity of stakeholders, among other contextual factors, add complexity to the decision process. Here we review trends in such prioritization exercises and their applications in different settings and for different events of interest, for example the management of emerging risks. Based on our findings, we discuss a conceptual framework based on multi-attribute utility theory, presented to the World Organization for Animal Health (OIE), for the modification of its qualitative assessment of veterinary services performance into a quantifiable decision support system.
Methods
We searched PubMed for articles containing the key words ‘multi-criteria’, ‘multi-attribute’, ‘multi-objective’, ‘prioritization’, ‘decision making’ and their variations (e.g. without hyphenation) for the period 1990 to 2011 for human and veterinary medicine. We focused on prioritization methodologies and their sound application.
Results
A large number of prioritization efforts in health settings aim to produce a rank order of diseases to help allocate scarce surveillance and disease-control budgets. A number of applications target the prioritization of competing health interventions against specific diseases. Fewer target different events, for example emerging threats. Common mistakes found in multi-attribute prioritization approaches reported in the social sciences (2) also appear in public and animal health settings. In particular, the application of linear additive models to non-preferentially independent evaluation criteria, the poor design of attributes to assess the decision alternatives, the failure to define suitable criteria scales, and mistakes in defining trade-off weights were prevalent. In addition, most decision support tools tend to be overly complex. This not only compromises their acceptability and long-term sustainability but also increases the likelihood of methodological mistakes in their design and regular application. One example is the failure to properly identify and separate ‘ends’ objectives, such as the improvement of a country’s health, from ‘means’ objectives, i.e. required resources, when defining the fundamental drivers of a decision process.
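When its assumptions do hold (in particular, preferentially independent criteria on well-defined utility scales), the linear additive model reduces to a weighted sum of single-criterion utilities. A minimal sketch with invented diseases, criteria, scores, and weights:

```python
# Trade-off weights across criteria (invented; must sum to 1)
criteria_weights = {"burden": 0.5, "transmissibility": 0.3, "cost": 0.2}

# Single-criterion utilities per disease, already mapped onto a common
# 0-1 scale; defining these scales well is one of the pitfalls the
# review highlights.
scores = {
    "disease_A": {"burden": 0.9, "transmissibility": 0.4, "cost": 0.7},
    "disease_B": {"burden": 0.5, "transmissibility": 0.8, "cost": 0.6},
}

def additive_utility(score, weights):
    """Linear additive multi-attribute utility: weighted sum of
    single-criterion utilities."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[c] * score[c] for c in weights)

ranking = sorted(scores,
                 key=lambda d: additive_utility(scores[d], criteria_weights),
                 reverse=True)
print(ranking)  # ['disease_A', 'disease_B']
```

Applying this weighted sum to criteria that are not preferentially independent, or whose scales were never made commensurable, is exactly the class of mistake the review reports as prevalent.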
Conclusions
Our findings, and experience in the practical application of formal prioritization methodologies (3), informed our advice to the OIE for the quantification of its tools for the assessment of veterinary services performance. The current framework used by the OIE produces a purely qualitative output with ordinal scales. The suggested quantitative extension allows additional outputs not available in their current form, for example, the aggregation of assessment scores at any level within the framework to produce a country’s overall score. It also permits the assessment of marginal performance improvements for every criterion and the consideration of trade-offs among the different criteria. The final output of our extension is the identification of the best portfolio of actions that will maximize the overall capability of national veterinary services given available resources. Quantification of the existing tool will deliver obvious benefits such as enhanced accountability and transparency in the decision making process, and will allow the historical analysis of a country’s veterinary services performance. The approach suggested to the OIE is adaptable to similar decision problems, such as monitoring the implementation of the International Health Regulations in a given country.
PMCID: PMC3692762
Prioritisation; Multi-attribute utility theory; Decision support
16.  The 2009 Lindau Nobel Laureate Meeting: Martin Chalfie, Chemistry 2008 
American Biologist Martin Chalfie shared the 2008 Nobel Prize in Chemistry with Roger Tsien and Osamu Shimomura for their discovery and development of the Green Fluorescent Protein (GFP).
Martin Chalfie was born in Chicago in 1947 and grew up in Skokie, Illinois. Although he had an interest in science from a young age, learning the names of the planets and reading books about dinosaurs, his journey to a career in biological science was circuitous. In high school, Chalfie enjoyed his AP Chemistry course, but his other science courses did not make much of an impression on him, and he began his undergraduate studies at Harvard uncertain of what he wanted to study. Eventually he chose to major in Biochemistry, and during the summer between his sophomore and junior years, he joined Klaus Weber's lab and began his first real research project, studying the active site of the enzyme aspartate transcarbamylase. Unfortunately, none of the experiments he performed in Weber's lab worked, and Chalfie came to the conclusion that research was not for him.
Following graduation in 1969, he was hired as a teacher at Hamden Hall Country Day School in Connecticut, where he taught high school chemistry, algebra, and social sciences for 2 years. After his first year of teaching, he decided to give research another try. He took a summer job in Jose Zadunaisky's lab at Yale, studying chloride transport in the frog retina. Chalfie enjoyed this experience a great deal, and having gained confidence in his own scientific abilities, he applied to graduate school at Harvard, where he joined the Physiology department in 1972 and studied norepinephrine synthesis and secretion under Bob Pearlman. His interest in working on C. elegans led him to pursue postdoctoral work with Sydney Brenner at the Medical Research Council Laboratory of Molecular Biology in Cambridge, England. In 1982 he was offered a position at Columbia University.
When Chalfie first heard about GFP at a research seminar given by Paul Brehm in 1989, his lab was studying genes involved in the development and function of touch-sensitive cells in C. elegans. He immediately became very excited about the idea of expressing the fluorescent protein in the nematode, hoping to figure out where the genes were expressed in the live organism. At the time, all methods of examining localization, such as antibody staining or in situ hybridization, required fixation of the tissue or cells, revealing the location of proteins only at fixed points in time.
In September 1992, after obtaining GFP DNA from Douglas Prasher, Chalfie asked his rotation student, Ghia Euskirchen, to express GFP in E. coli, unaware that several other labs were also trying to express the protein, without success. Chalfie and Euskirchen used PCR to amplify only the coding sequence of GFP, which they placed in an expression vector and expressed in E. coli. Because of her engineering background, Euskirchen knew that the microscope in the Chalfie lab was not good enough to use for this type of experiment, so she captured images of green bacteria using the microscope from her former engineering lab. This work demonstrated that GFP fluorescence requires no component other than GFP itself. In fact, the difficulty that other labs had encountered stemmed from their use of restriction enzyme digestions for subcloning, which brought along an extra sequence that prevented GFP's fluorescent expression. Following Euskirchen's successful expression in E. coli, Chalfie's technician Yuan Tu went on to express GFP in C. elegans, and Chalfie published the findings in Science in 1994.
Through the study of C. elegans and GFP, Chalfie feels there is an important lesson to be learned about the importance of basic research. Though there has been a recent push for clinically relevant or patent-producing (translational) research, Chalfie warns that taking this approach alone is a mistake, given how "woefully little" we know about biology. He points out the vast expanse of the unknowns in biology, noting that important discoveries such as GFP are very frequently made through basic research using a diverse set of model organisms. Indeed, the study of GFP bioluminescence did not originally have a direct application to human health. Our understanding of it, however, has led to a wide array of clinically relevant discoveries and developments. Chalfie believes we should not limit ourselves: "We should be a little freer and investigate things in different directions, and be a little bit awed by what we're going to find."
doi:10.3791/1570
PMCID: PMC3152221  PMID: 20147885
17.  A qualitative study of the cultural changes in primary care organisations needed to implement clinical governance. 
BACKGROUND: It is commonly claimed that changing the culture of health organisations is a fundamental prerequisite for improving the National Health Service (NHS). Little is currently known about the nature or importance of culture and cultural change in primary care groups and trusts (PCG/Ts) or their constituent general practices. AIMS: To investigate the importance of culture and cultural change for the implementation of clinical governance in general practice by PCG/Ts, to identify perceived desirable and undesirable cultural attributes of general practice, and to describe potential facilitators and barriers to changing culture. DESIGN: Qualitative: case studies using data derived from semi-structured interviews and review of documentary evidence. SETTING: Fifty senior non-clinical and clinical managers from 12 purposely sampled PCGs or trusts in England. RESULTS: Senior primary care managers regard culture and cultural change as fundamental aspects of clinical governance. The most important desirable cultural traits were the value placed on a commitment to public accountability by the practices, their willingness to work together and learn from each other, and the ability to be self-critical and learn from mistakes. The main barriers to cultural change were the high level of autonomy of practices and the perceived pressure to deliver rapid measurable changes in general practice. CONCLUSIONS: The culture of general practice is perceived to be an important component of health system reform and quality improvement. This study develops our understanding of a changing organisational culture in primary care; however, further work is required to determine whether culture is a useful practical lever for initiating or managing improvement.
PMCID: PMC1314382  PMID: 12171222
18.  A decision support system to determine optimal ventilator settings 
Background
Choosing the correct ventilator settings for the treatment of patients with respiratory tract disease is critically important. Since the task of specifying the parameters of ventilation equipment is carried out entirely by a physician, the physician's knowledge and experience in the selection of these settings have a direct effect on the accuracy of his or her decisions. Decision support systems are now used for these kinds of operations to eliminate errors. Our goal is to minimize errors in ventilation therapy and prevent deaths caused by incorrect configuration of ventilation devices. The proposed system is designed to assist less experienced physicians working in facilities, such as cottage hospitals, that lack lung-mechanics monitoring.
Methods
This article describes a decision support system that proposes the ventilator settings to be applied during treatment, based on the patient's physiological information. The proposed model is designed to minimize the possibility of making a mistake and to encourage more efficient use of time while the physicians make critical decisions about the patient. An Artificial Neural Network (ANN) regression model is implemented to calculate the frequency, tidal volume, and FiO2 outputs, and a classification model is used to estimate the pressure support / volume support outputs. To obtain the highest performance from both models, different configurations were tried, with various tests of the training methods and of the number of hidden layers, the factors that most affect ANN performance.
Results
The physiological information of 158 respiratory patients over the age of 60, treated in three different hospitals between 2010 and 2012, was used in the training and testing of the system. From the diagnosed disease, core body temperature, pulse, arterial systolic pressure, diastolic blood pressure, PEEP, PSO2, pH, pCO2, and bicarbonate data, the frequency, tidal volume, FiO2, and pressure support / volume support values suitable for use in the ventilator device were recommended to the physicians with an accuracy of 98.44%. The experiments performed show that sequential order weight/bias training was the most suitable ANN learning algorithm for the regression model, and Bayesian regulation backpropagation the most suitable for the classification models.
Conclusions
This article aims to make the determination of ventilator settings for respiratory tract patients less dependent on the individual physician by means of the proposed decision support system. The system's prediction accuracy increases as data from more patients are used in training. Therefore, non-physician operators can use the system to determine ventilator settings in emergencies.
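The abstract does not publish the trained network, but the architecture it describes (physiological measurements in, suggested ventilator settings out) can be sketched as a small feed-forward pass. Everything below is illustrative: the input choice, layer sizes, and weights are arbitrary placeholders, not the study's model.

```python
import math

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One hidden layer with tanh activation, linear output layer."""
    h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(w_hidden, b_hidden)]
    return [sum(wi * hi for wi, hi in zip(row, h)) + b
            for row, b in zip(w_out, b_out)]

# 3 normalized inputs (e.g. pH, pCO2, pulse) -> 2 hidden units ->
# 2 outputs (e.g. respiratory frequency, tidal volume). All values
# are hypothetical; a real system would learn them from patient data.
w_hidden = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
b_hidden = [0.0, 0.1]
w_out = [[1.2, -0.7], [0.4, 0.9]]
b_out = [14.0, 450.0]  # offsets near typical setting ranges (illustrative)

settings = forward([0.2, -0.1, 0.4], w_hidden, b_hidden, w_out, b_out)
```

In the study itself the regression weights were fitted with sequential order weight/bias training, and a separate classification network handled the pressure support / volume support decision.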
doi:10.1186/1472-6947-14-3
PMCID: PMC3996182  PMID: 24410995
Ventilator settings; Decision support systems; Artificial neural networks; Bayesian model
19.  Is High Fidelity Simulation the Most Effective Method for the Development of Non-Technical Skills in Nursing? A Review of the Current Evidence 
The Open Nursing Journal  2012;6:82-89.
Aim:
To review the literature on the use of simulation in the development of non-technical skills in nursing
Background:
The potential risks to patients associated with learning 'at the bedside' are becoming increasingly unacceptable, and the search for innovative education and training methods that do not expose the patient to preventable errors continues. All the evidence shows that a significant proportion of adverse events in health care is caused by problems relating to the application of the 'non-technical' skills of communication, teamwork, leadership and decision-making.
Results:
Simulation is positively associated with significantly improved interpersonal communication skills at patient handover, and it has also been clearly shown to improve team behaviours in a wide variety of clinical contexts and clinical personnel, associated with improved team performance in the management of crisis situations. It also enables the effective development of transferable, transformational leadership skills, and has also been demonstrated to improve students' critical thinking and clinical reasoning in complex care situations, and to aid in the development of students' self-efficacy and confidence in their own clinical abilities.
Conclusion:
High fidelity simulation provides participants with a safe and controlled learning environment in which to develop non-technical skills, one in which they can make mistakes, correct those mistakes in real time, and learn from them without fear of compromising patient safety. Participants in simulation are also able to rehearse the clinical management of rare, complex or crisis situations in a valid representation of clinical practice before practising on patients.
doi:10.2174/1874434601206010082
PMCID: PMC3415625  PMID: 22893783
Communication; decision-making; non-technical skills; nurse education; simulation; situation awareness; teamwork; team training; interprofessional.
20.  Community pharmacy-based intervention to improve self-monitoring of blood glucose in type 2 diabetic patients 
Pharmacy Practice  2006;4(4):195-203.
Self-monitoring of blood glucose (SMBG) is clearly correlated with increased life expectancy and quality of life in type 2 diabetic patients.
Objective
The objective of our study was to record and assess the errors patients make in preparing, performing, and processing self-monitoring of blood glucose (SMBG). Furthermore, the study aimed to determine to what extent a single standardized SMBG instruction session in a community pharmacy might reduce the number of patients making errors or the number of errors per patient.
Methods
Between May and October 2005, SMBG of 462 randomly selected patients with type 2 diabetes was monitored in 32 pharmacies specialized in diabetes care. The patients performed blood glucose self-tests using their own blood glucose meters. Self-testing was monitored using a standardized documentation sheet on which any error made during the performance of the test was recorded. If necessary, patients were instructed in the accurate operation of their meter and the use of the necessary equipment. Additionally, patients obtained written instructions. Six weeks later, assessment of the quality of patient’s SMBG was repeated.
Results
During the first observation, 383 patients (83%) made at least one mistake performing SMBG. By the time of the second observation, this frequency had fallen to 189 (41%) (p<0.001). The average number of mistakes fell from 3.1 to 0.8 per patient. Mistakes that may potentially have led to inaccurate readings were initially recorded for 283 (61%) and at study end for 110 (24%) patients (p<0.001).
Conclusion
It is important to periodically instruct type 2 diabetic patients in the proper SMBG technique in order to ensure accurate measurements. In this study it was shown that community pharmacies specialized in diabetes care can provide this service effectively.
PMCID: PMC4155622  PMID: 25214909
Diabetes mellitus; Blood glucose self- monitoring; Patient education; Community pharmacy services; Germany
21.  Comparison of two data collection processes in clinical studies: electronic and paper case report forms 
Background
Electronic Case Report Forms (eCRFs) are increasingly chosen by investigators and sponsors of clinical research instead of the traditional pen-and-paper data collection (pCRFs). Previous studies suggested that eCRFs avoided mistakes, shortened the duration of clinical studies and reduced data collection costs.
Methods
Our objectives were to describe and contrast both objective and subjective efficiency of pCRF and eCRF use in clinical studies. A total of 27 studies (11 eCRF, 16 pCRF) sponsored by the Paris hospital consortium, conducted and completed between 2001 and 2011, were included. Questionnaires were emailed to investigators of those studies, as well as to clinical research associates and data managers working in Paris hospitals, soliciting their level of satisfaction and preferences for eCRFs and pCRFs. Mean costs and timeframes were compared using bootstrap methods, linear and logistic regression.
Results
The total cost per patient was 374€ ±351 with eCRFs vs. 1,135€ ±1,234 with pCRFs. Time between the opening of the first center and the database lock was 31.7 months (Q1 = 24.6; Q3 = 42.8) using eCRFs, vs. 39.8 months (Q1 = 31.7; Q3 = 52.2) with pCRFs (p = 0.11). Electronic CRFs were preferred overall (31/72 vs. 15/72 for paper) for easier monitoring and improved data quality.
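A percentile bootstrap comparison of mean costs, as the Methods describe, can be sketched as follows. The two samples are synthetic stand-ins drawn to match the reported means and standard deviations (374€ ±351 vs. 1,135€ ±1,234), not the study data, and the sample sizes are arbitrary.

```python
import random

random.seed(42)
# Synthetic per-patient costs with roughly the reported means/SDs.
ecrf_costs = [random.gauss(374, 351) for _ in range(100)]
pcrf_costs = [random.gauss(1135, 1234) for _ in range(100)]

def bootstrap_mean_diff_ci(a, b, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for mean(a) - mean(b):
    resample each group with replacement, record the difference in means,
    and take the empirical alpha/2 and 1-alpha/2 quantiles."""
    diffs = []
    for _ in range(n_boot):
        ra = [random.choice(a) for _ in a]
        rb = [random.choice(b) for _ in b]
        diffs.append(sum(ra) / len(ra) - sum(rb) / len(rb))
    diffs.sort()
    lo = diffs[int((alpha / 2) * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

lo, hi = bootstrap_mean_diff_ci(ecrf_costs, pcrf_costs)
# An interval excluding 0 suggests the cost difference is not due to chance.
```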
Conclusions
This study found that eCRFs and pCRFs are used in studies with different patient numbers, center numbers and risk. Electronic CRFs are more advantageous in large, low-risk studies and gain support from a majority of stakeholders.
doi:10.1186/1471-2288-14-7
PMCID: PMC3909932  PMID: 24438227
Electronic data collection; Costs; Time management; Work satisfaction
22.  Pitfalls in developmental diagnosis. 
Archives of Disease in Childhood  1987;62(8):860-865.
I have never seen a paper or chapter of a book devoted to pitfalls and mistakes in developmental diagnosis. This paper is designed to try to fill the gap. It concerns the avoidance of mistakes in developmental diagnosis and is based entirely on mistakes that I have made myself and have now learned to avoid, and on mistakes that I have seen, most of them repeatedly. I have made no mention of mistakes that could theoretically be made but that I have not personally seen. I believe that most assessment errors are due to overconfidence and to the view that developmental diagnosis is easy. Many other mistakes are due to reliance on purely objective tests, with consequent omission of a detailed history and physical examination, so that factors that profoundly affect development but are not directly related to the child's mental endowment are not weighed up before an opinion is reached.
PMCID: PMC1778504  PMID: 2444167
23.  Make no mistake—errors can be controlled* 
Quality & safety in health care  2003;12(5):359-365.


 Traditional quality control methods identify "variation" as the enemy. However, the control of variation by itself can never achieve the remarkably low non-conformance rates of world class quality leaders. Because the control of variation does not achieve the highest levels of quality, an inordinate focus on these techniques obscures key quality improvement opportunities and results in unnecessary pain and suffering for patients, and embarrassment, litigation, and loss of revenue for healthcare providers. Recent experience has shown that mistakes are the most common cause of problems in health care as well as in other industrial environments. Excessive product and process complexity contributes to both excessive variation and unnecessary mistakes. The best methods for controlling variation, mistakes, and complexity are each a form of mistake proofing. Using these mistake proofing techniques, virtually every mistake and non-conformance can be controlled at a fraction of the cost of traditional quality control methods.
doi:10.1136/qhc.12.5.359
PMCID: PMC1743770  PMID: 14532368
24.  Barriers to incident reporting in a healthcare system 
Background: Learning from mistakes is key to maintaining and improving the quality of care in the NHS. This study investigates the willingness of healthcare professionals to report the mistakes of others.
Methods: The questionnaire used in this research included nine short scenarios describing either a violation of a protocol, compliance with a protocol, or improvisation (where no protocol exists). By developing different versions of the questionnaire, each scenario was presented with a good, poor, or bad outcome for the patient. The participants (n=315) were doctors, nurses, and midwives from three English NHS trusts who volunteered to take part in the study and represented 53% of those originally contacted. Participants were asked to indicate how likely they were to report the incident described in each scenario to a senior member of staff.
Results: The findings of this study suggest that healthcare professionals, particularly doctors, are reluctant to report adverse events to a superior. The results show that healthcare professionals, as might be expected, are most likely to report an incident to a colleague when things go wrong (F(2,520) = 82.01, p<0.001). The reporting of incidents to a senior member of staff is also more likely, irrespective of outcome for the patient, when the incident involves the violation of a protocol (F(2,520) = 198.77, p<0.001). It appears that, although the reporting of an incident to a senior member of staff is generally not very likely, particularly among doctors, it is most likely when the incident represents the violation of a protocol with a bad outcome.
Conclusions: An alternative means of organisational learning that relies on the identification of system (latent) failures before, rather than after, an adverse event is proposed.
doi:10.1136/qhc.11.1.15
PMCID: PMC1743585  PMID: 12078362
25.  Executive function abnormalities in pathological gamblers 
Background
Pathological gambling (PG) is an impulse control disorder characterized by persistent and maladaptive gambling behaviors with disruptive consequences for familial, occupational and social functions. The pathophysiology of PG is still unclear, but it is hypothesized that it might include environmental factors coupled with a genetic vulnerability and dysfunctions of different neurotransmitters and selected brain areas. Our study aimed to evaluate a group of patients suffering from PG by means of some neuropsychological tests in order to explore the brain areas related to the disorder.
Methods
Twenty outpatients (15 men, 5 women), with a diagnosis of PG according to DSM-IV criteria, were included in the study and evaluated with a battery of neuropsychological tests: the Wisconsin Card Sorting Test (WCST), the Wechsler Memory Scale revised (WMS-R) and the Verbal Associative Fluency Test (FAS). The results obtained in the patients were compared with normative values of matched healthy control subjects.
Results
The PG patients showed alterations only at the WCST: in particular, they had great difficulty in finding alternative methods of problem-solving and showed a decrease, rather than an increase, in efficiency as they progressed through the consecutive phases of the test. The mean scores of the other tests were within the normal range.
Conclusion
Our findings showed that patients affected by PG, in spite of normal intellectual, linguistic and visual-spatial abilities, had abnormalities on the WCST; in particular, they could not learn from their mistakes or look for alternative solutions. Our results would seem to confirm an altered functioning of the prefrontal areas, which might provoke a sort of cognitive "rigidity" that could predispose to the development of impulsive and/or compulsive behaviors, such as those typical of PG.
doi:10.1186/1745-0179-4-7
PMCID: PMC2359744  PMID: 18371193
