One of the most important steps in curriculum development is the introduction of simulation-based medical teaching and learning. Simulation is a generic term that refers to an artificial representation of a real-world process to achieve educational goals through experiential learning. Simulation-based medical education is defined as any educational activity that uses simulation aids to replicate clinical scenarios. Although medical simulation is relatively new, simulation has long been used in other high-risk professions such as aviation. Medical simulation allows the acquisition of clinical skills through deliberate practice rather than an apprentice style of learning. Simulation tools serve as an alternative to real patients: a trainee can make mistakes and learn from them without fear of harming the patient. There are different types and classifications of simulators, and their cost varies with the degree of their resemblance to reality, or ‘fidelity’. Simulation-based learning is expensive, but it is cost-effective if used properly. Medical simulation has been found to enhance clinical competence at the undergraduate and postgraduate levels. It also offers many advantages that can improve patient safety and reduce healthcare costs by improving medical providers' competencies. The objective of this narrative review article is to highlight the importance of simulation as a new teaching method in undergraduate and postgraduate education.
Clinical skills; medical education; medical simulation; simulators
Mistakes are inevitable in medicine. To learn how medical mistakes relate to subsequent changes in practice, we surveyed 254 internal medicine house officers. One hundred and fourteen house officers (45%) completed an anonymous questionnaire describing their most significant mistake and their response to it. Mistakes included errors in diagnosis (33%), prescribing (29%), evaluation (21%), and communication (5%), as well as procedural complications (11%). Patients had serious adverse outcomes in 90% of the cases, including death in 31%. Only 54% of house officers discussed the mistake with their attending physicians, and only 24% told the patients or families. House officers who accepted responsibility for the mistake and discussed it were more likely to report constructive changes in practice. Residents were less likely to make constructive changes if they attributed the mistake to job overload, and more likely to report defensive changes if they felt the institution was judgmental. Decreasing the workload and closer supervision may help prevent mistakes. To promote learning, faculty should encourage house officers to accept responsibility and to discuss their mistakes.
When catastrophic disasters such as Hurricane Katrina strike, psychologists and other mental health professionals often wonder how to use resources and fill needed roles. We argue that conducting clinical research in response to disasters is one important way that these professionals can contribute. However, we recognize that designing and implementing a clinical research study can be a daunting task, particularly in the context of the personal and system-wide chaos that follows most disasters. Thus, we offer a detailed description of our own experiences with conducting clinical research as part of our response to Hurricane Katrina. We describe our study design and our recruitment and data collection efforts, and we summarize and synthesize the lessons we have learned from this endeavor. Our hope is that others who may wish to conduct disaster-related research will learn from our mistakes and successes.
clinical research; disasters; Hurricane Katrina; roles
Cost-efficient prenatal assessments are needed that have the potential to identify those at risk for parent/infant relational problems. With this goal in mind, an additional attachment style description was added to the Relationship Questionnaire (Bartholomew & Horowitz, 1991), an established self-report attachment measure, to create the Relationship Questionnaire: Clinical Version (RQ-CV). The additional description represents a profoundly distrustful attachment style: “I think it's a mistake to trust other people. Everyone's looking out for themselves, so the sooner you learn not to expect anything from anybody else the better.” The RQ-CV was applied to a sample of 44 low-income mothers who had participated in a previous study of the impact of family risk factors on infant development. After first controlling for demographic risk factors and for other insecure adult attachment styles, mothers' profound distrust was associated with three independent assessments of the quality of maternal interactions with the infant assessed 20 years earlier. In particular, profound distrust was related to more hostile, intrusive, and negative behaviors toward the infant. The results are discussed within the framework of attachment theory.
I have never seen a paper or chapter of a book devoted to pitfalls and mistakes in developmental diagnosis. This paper is designed to fill that gap. It concerns the avoidance of mistakes in developmental diagnosis and is based entirely on mistakes that I have made myself and have now learned to try to avoid, and on mistakes that I have seen, most of them repeatedly. I make no mention of mistakes that could theoretically be made but that I have not personally seen. I believe that most assessment errors are due to overconfidence and to the view that developmental diagnosis is easy. Many other mistakes are due to reliance on purely objective tests, with the consequent omission of a detailed history and physical examination, so that factors that profoundly affect development but are not directly related to the child's mental endowment are not weighed up before an opinion is reached.
Traditional quality control methods identify "variation" as the enemy. However, the control of variation by itself can never achieve the remarkably low non-conformance rates of world-class quality leaders. Because the control of variation does not achieve the highest levels of quality, an inordinate focus on these techniques obscures key quality improvement opportunities and results in unnecessary pain and suffering for patients, and in embarrassment, litigation, and loss of revenue for healthcare providers. Recent experience has shown that mistakes are the most common cause of problems in health care, as in other industrial environments. Excessive product and process complexity contributes to both excessive variation and unnecessary mistakes. The best methods for controlling variation, mistakes, and complexity are each a form of mistake proofing. Using these mistake-proofing techniques, virtually every mistake and non-conformance can be controlled at a fraction of the cost of traditional quality control methods.
The medial prefrontal cortex (MFC) is critical for our ability to learn from previous mistakes. Here we provide evidence that neurophysiological oscillatory long-range synchrony is a mechanism of post-error adaptation that occurs even without conscious awareness of the error. During a visually signaled Go/No-Go task in which half of the No-Go cues were masked and thus not consciously perceived, response errors enhanced tonic (i.e., over 1–2 s) oscillatory synchrony between MFC and occipital cortex (OCC) leading up to and during the subsequent trial. Spectral Granger causality analyses demonstrated that MFC → OCC directional synchrony was enhanced during trials following both conscious and unconscious errors, whereas transient stimulus-induced occipital → MFC directional synchrony was independent of errors in the previous trial. Further, the strength of pre-trial MFC-occipital synchrony predicted individual differences in task performance. Together, these findings suggest that synchronous neurophysiological oscillations are a plausible mechanism of MFC-driven cognitive control that is independent of conscious awareness.
cognitive control; top-down regulation; oscillation; synchrony; EEG
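The directional-synchrony claim above rests on Granger causality: past MFC activity improves prediction of later occipital activity beyond what the occipital signal's own past provides. As a hedged illustration only (synthetic data and the time-domain test, not the spectral variant used in the study; channel names are placeholders), a minimal Python sketch:

```python
# Minimal sketch of time-domain Granger causality on synthetic "MFC" and
# "OCC" signals. The study used spectral Granger causality on real EEG;
# this only illustrates the directional-prediction idea.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 2000
mfc = rng.standard_normal(n)          # stand-in for a medial frontal signal
occ = np.zeros(n)                     # stand-in for an occipital signal
for t in range(1, n):
    # OCC depends on past MFC plus its own past, so MFC Granger-causes OCC.
    occ[t] = 0.5 * mfc[t - 1] + 0.2 * occ[t - 1] + rng.standard_normal()

# grangercausalitytests asks whether the SECOND column helps predict the FIRST.
res = grangercausalitytests(np.column_stack([occ, mfc]), maxlag=1, verbose=False)
f_stat, p_val, _, _ = res[1][0]['ssr_ftest']
print(f"MFC -> OCC at lag 1: F = {f_stat:.1f}, p = {p_val:.3g}")
```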
While we have a good understanding of the potential benefits of the Web for consumers in terms of patient empowerment and education (and we know what outcomes we should look at), we do not know much about potential mechanisms of harm through Internet applications. Collection of critical incidents is needed to ensure that we learn from our mistakes. Since 2001 we have systematically collected published and unpublished cases in the “Database of Adverse Events Related to the Internet” (http://www.medcertain.org/daeri). We solicit and collect cases submitted by practitioners and patients, but also include cases reported in lay publications such as newspapers. The cases stored in the database will be published as an aggregate, annual report. Cases include, for example, misdiagnosis or wrong treatment due to online prescription of drugs or medical consulting via the Internet, discontinuation of life-saving treatments due to misinterpretation of Internet information by patients, the addictive potential of the Internet, and the potential of the Internet to encourage suicide. As an incentive to submit cases, we pay a small honorarium to the submitting physician.
Background: Learning from mistakes is key to maintaining and improving the quality of care in the NHS. This study investigates the willingness of healthcare professionals to report the mistakes of others.
Methods: The questionnaire used in this research included nine short scenarios describing either a violation of a protocol, compliance with a protocol, or improvisation (where no protocol exists). By developing different versions of the questionnaire, each scenario was presented with a good, poor, or bad outcome for the patient. The participants (n=315) were doctors, nurses, and midwives from three English NHS trusts who volunteered to take part in the study and represented 53% of those originally contacted. Participants were asked to indicate how likely they were to report the incident described in each scenario to a senior member of staff.
Results: The findings of this study suggest that healthcare professionals, particularly doctors, are reluctant to report adverse events to a superior. The results show that healthcare professionals, as might be expected, are most likely to report an incident to a colleague when things go wrong (F(2,520) = 82.01, p<0.001). The reporting of incidents to a senior member of staff is also more likely, irrespective of the outcome for the patient, when the incident involves the violation of a protocol (F(2,520) = 198.77, p<0.001). It appears that, although the reporting of an incident to a senior member of staff is generally not very likely, particularly among doctors, it is most likely when the incident represents the violation of a protocol with a bad outcome.
Conclusions: An alternative means of organisational learning that relies on the identification of system (latent) failures before, rather than after, an adverse event is proposed.
Little information exists about current patient perceptions of medical mistakes in ambulatory care within a diverse population.
To learn about adults’ perceptions of mistakes in ambulatory care, what factors were associated with perceived mistakes, and whether or not respondents changed physicians because of these perceived mistakes, a cross-sectional survey was conducted in 2008 across seven primary care medical practices in North Carolina. Participants were 1697 English- or Spanish-speaking adults, aged 18 and older, who presented to a medical provider during the data collection period.
The main outcome measures were: 1) Has a doctor in a doctor’s office ever made a mistake in your care? 2) In the past 10 years, has a doctor in a doctor’s office made a wrong diagnosis or misdiagnosed you? (If yes, how much harm did this cause you?) 3) In the last 10 years, has a doctor in a doctor’s office given you the wrong medical treatment or delayed treatment? (If yes, how much harm did this cause you?) 4) Have you ever changed doctors because of either a wrong diagnosis or a wrong treatment of a medical condition?
Two hundred sixty-five participants (15.6%) responded that a doctor had ever made a mistake, 13.4% reported a wrong diagnosis, 12.4% reported a wrong treatment, and 14.1% reported having changed doctors because of a mistake. Participants perceived mistakes and harm in both diagnostic care and medical treatment. Patients with chronic low back pain, higher levels of education, and poor physical health were at increased odds of perceiving harm, whereas African-Americans were less likely to perceive mistakes.
Patients perceived mistakes in their diagnostic and treatment care in the ambulatory setting. These perceptions had a concrete impact on the patient-physician relationship, often leading patients to seek another health care provider.
The notion that hospitals and medical practices should learn from failures, both their own and others', has obvious appeal. Yet, healthcare organisations that systematically and effectively learn from the failures that occur in the care delivery process, especially from small mistakes and problems rather than from consequential adverse events, are rare. This article explores pervasive barriers embedded in healthcare's organisational systems that make shared or organisational learning from failure difficult and then recommends strategies for overcoming these barriers to learning from failure, emphasising the critical role of leadership. Firstly, leaders must create a compelling vision that motivates and communicates urgency for change; secondly, leaders must work to create an environment of psychological safety that fosters open reporting, active questioning, and frequent sharing of insights and concerns; and thirdly, case study research on one hospital's organisational learning initiative suggests that leaders can empower and support team learning throughout their organisations as a way of identifying, analysing, and removing hazards that threaten patient safety.
The error likelihood effect in anterior cingulate cortex (ACC) has recently been shown to be a special case of an even more general risk prediction effect, which signals both the likelihood of an error and the potential severity of its consequences. Surprisingly, these error likelihood and anticipated consequence effects are strikingly absent in risk-taking individuals. Conversely, conflict effects in ACC were found to be stronger in these same individuals. Here we show that the error likelihood computational model can account for individual differences in error likelihood, predicted error consequence, and conflict effects in ACC with no changes from the published version of the model. In particular, the model accounts for the counter-intuitive inverse relationship between conflict and error likelihood effects as a function of the ACC learning rate in response to errors. As the learning rate increases, ACC learns more effectively from mistakes, which increases risk prediction effects at the expense of conflict effects. Thus, the model predicts that individuals with faster error-based learning in ACC will be more risk averse and show greater ACC error likelihood effects but smaller ACC conflict effects. Furthermore, the model suggests that apparent response conflict effects in ACC may actually consist of two related effects: increased error likelihood and a greater number of simultaneously cued responses, whether or not the responses are mutually incompatible. The results clarify the basic computational mechanisms of learned risk aversion and may have broad implications for predicting and managing risky behavior in healthy and clinical populations.
anterior cingulate; conflict; individual differences; computational model; dopamine
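The learning-rate mechanism described in the preceding abstract can be caricatured with a simple delta rule; this is an illustrative reduction, not the published computational model, and the variable names are invented. A larger error-based learning rate makes the risk estimate track observed errors more aggressively:

```python
# Delta-rule caricature of error-likelihood learning: v tracks the estimated
# probability of an error following a cue; alpha is the error-based learning
# rate discussed above. Not the published ACC model.
import random

def learned_risk(outcomes, alpha):
    """Running risk estimate after a sequence of outcomes (1 = error)."""
    v = 0.0
    for err in outcomes:
        v += alpha * (err - v)        # move the estimate toward the outcome
    return v

random.seed(1)
trials = [1 if random.random() < 0.3 else 0 for _ in range(200)]  # ~30% errors
for alpha in (0.05, 0.5):
    print(f"alpha = {alpha}: learned error likelihood ~ {learned_risk(trials, alpha):.2f}")
```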
A fundamental prerequisite for prey to avoid being captured is the ability to distinguish dangerous stimuli such as predators and risky habitats from non-dangerous stimuli such as non-predators and safe locations. Most research to date has focused on mechanisms allowing prey to learn to recognize risky stimuli. The paradox of learned predator recognition is that its remarkable efficiency leaves room for potentially costly mistakes if prey inadvertently learn to recognize non-predatory species as dangerous. Here, we pre-exposed embryonic woodfrogs, Rana sylvatica, to the odour of a tiger salamander, Ambystoma tigrinum, without risk reinforcement, and later attempted to teach the tadpoles to recognize the salamander, a red-bellied newt, Cynops pyrrhogaster (a closely related amphibian), or a goldfish, Carassius auratus, as a predator. Tadpoles were then tested for their responses to salamander, newt, or fish odour. Pre-exposure to salamander did not affect the ability of tadpoles to learn to recognize goldfish as a predator. However, the embryonic pre-exposure to salamanders inhibited the subsequent learning of salamanders as a potential predator, through a mechanism known as latent inhibition. The embryonic pre-exposure also prevented the learned recognition of novel newts, indicating complete generalization of non-predator recognition. This pattern does not match that of generalization of predator recognition, whereby prey that learn to recognize a novel predator also respond, though not as strongly, to novel species closely related to the known predator. The current paper discusses the costs of making recognition mistakes within the context of generalization of predators and dangerous habitats versus generalization of non-predators and safe habitats, and highlights the asymmetry in the way amphibians incorporate information about safe versus risky cues into their decision-making. Mechanisms such as latent inhibition allow a variety of prey species to collect information about non-threatening stimuli, as early as during their embryonic development, and to use this information later in life to infer the danger level associated with the stimuli.
Predator recognition; Non-predator recognition; Habitat learning; Latent inhibition; Embryonic learning; Decision-making; Information use
We conduct an experiment to evaluate why individuals invest in high-fee index funds. In our experiments, subjects allocate $10,000 across four S&P 500 index funds and are rewarded for their portfolio’s subsequent return. Subjects overwhelmingly fail to minimize fees. We can reject the hypothesis that subjects buy high-fee index funds because of bundled non-portfolio services. Search costs for fees matter, but even when we eliminate these costs, fees are not minimized. Instead, subjects place high weight on annualized returns since inception. Fees paid decrease with financial literacy. Interestingly, subjects who choose high-fee funds sense they are making a mistake.
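To see why failing to minimize fees is costly even when the funds track the same index, consider a back-of-envelope compounding calculation; the fee levels, return, and horizon below are hypothetical and not taken from the experiment:

```python
# Fee drag on otherwise identical S&P 500 index funds: with the same gross
# return, the expense ratio is the only difference. All numbers hypothetical.
def final_value(principal, gross_return, expense_ratio, years):
    return principal * (1 + gross_return - expense_ratio) ** years

low  = final_value(10_000, 0.07, 0.0010, 30)   # 10 bp expense ratio
high = final_value(10_000, 0.07, 0.0075, 30)   # 75 bp expense ratio
print(f"low-fee: ${low:,.0f}   high-fee: ${high:,.0f}   gap: ${low - high:,.0f}")
```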
Many cryo-EM datasets are heterogeneous, stemming from molecules undergoing conformational changes. The need to characterize each of the substates with sufficient resolution entails a large increase in the data flow and motivates the development of more effective automated particle selection algorithms. Concepts and procedures from the machine-learning field are increasingly employed toward this end. However, a review of recent literature has revealed a discrepancy in the terminology of the performance scores used to compare particle selection algorithms, and this has subsequently led to ambiguities in the meaning of claimed performance. In an attempt to curtail the perpetuation of this confusion and to disentangle past mistakes, we review the performance of published particle selection efforts with a set of explicitly defined performance scores using the terminology established and accepted within the field of machine learning.
particle selection; cryo-EM; machine learning; false positive rate
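For readers outside machine learning, the scores at issue are simple functions of the pick/non-pick confusion matrix. A hedged sketch with invented counts (the review itself supplies the exact definitions it standardizes on):

```python
# Standard machine-learning scores for particle selection, computed from a
# confusion matrix over candidate windows. The counts below are invented.
def selection_scores(tp, fp, fn, tn):
    precision = tp / (tp + fp)              # picked windows that are particles
    recall    = tp / (tp + fn)              # particles that were picked (TPR)
    fpr       = fp / (fp + tn)              # non-particles wrongly picked
    f1        = 2 * precision * recall / (precision + recall)
    return precision, recall, fpr, f1

p, r, fpr, f1 = selection_scores(tp=420, fp=60, fn=80, tn=9440)
print(f"precision = {p:.2f}, recall = {r:.2f}, FPR = {fpr:.3f}, F1 = {f1:.2f}")
```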
The purpose of this article is to share lessons learned from implementing community-based participatory research (CBPR) in Indian Country that may be generalizable to other medically underserved communities. CBPR is currently included in multiple grant announcements by the National Institutes of Health and the Centers for Disease Control and Prevention, but information about this methodology versus traditional research methodology is often misleading. This article addresses some common mistakes made by academic research institutions by sharing what we have learned about how CBPR can be implemented in a respectful manner. The majority of tribal Nations today prefer, if not mandate, that CBPR be used in most proposed studies involving their communities.
Technical devices are becoming more prevalent in society and also in medical care. Older adults need more support than younger people to learn new technologies. So far, no research has been done on the usability of patient controllers for deep brain stimulation (DBS) in an elderly population. The aim of this study was to investigate the factors influencing the performance of elderly DBS patients with respect to usability aspects of Medtronic Access therapy controllers.
Time, mistakes, and frequency of use of the controller were compared in 41 elderly DBS patients who, prior to the study, had already owned a therapy controller for more than six years. One group (n = 20, mean age = 66.4 years) watched an instructional video and then completed practical assignments on a model implantable pulse generator (IPG). The other group (n = 21, mean age = 65.9 years) completed the tasks without having seen the video beforehand. Any errors that patients made were documented and corrected, so that all of them received hands-on training. After six months, all patients were re-evaluated on the dummy IPG in order to compare the effects of hands-on training alone versus video-based training combined with hands-on training.
The group that had seen the video significantly outperformed the control group at both assessments with respect to the number of errors. Both groups performed faster after six months compared to baseline and tended to use the controller more often than at baseline.
Our results indicate that elderly DBS patients who have been using the controller for several years still have various difficulties in operating the device. However, we also showed that age-specific training may improve the performance in older adults. In general, the design of DBS patient controllers should focus on the specific needs of the end-users. But as changes to medical devices take a long time to be implemented, video instructions with age-specific content plus hands-on training may improve learning for older adults.
In this paper the coherence of the precautionary principle as a guide to public health policy is considered. Two conditions that any account of the principle must meet are outlined, a condition of practicality and a condition of publicity. The principle is interpreted in terms of a tripartite division of the outcomes of action (good outcomes, normal bad outcomes and special bad outcomes). Such a division of outcomes can be justified on either “consequentialist” or “deontological” grounds. In the second half of the paper, it is argued that the precautionary principle is not necessarily opposed to risk–cost–benefit analysis, but, rather, should be interpreted as suggesting a lowering of our epistemic standards for assessing evidence that there is a link between some policy and “special bad” outcomes. This suggestion is defended against the claim that it mistakes the nature of statistical testing and against the charge that it is unscientific or antiscientific, and therefore irrational.
Incident reporting systems (IRS) are used to identify medical errors in order to learn from mistakes and improve patient safety in hospitals. However, IRS contain only a small fraction of occurring incidents. A more comprehensive overview of medical error in hospitals may be obtained by combining information from multiple sources. The WHO has developed the International Classification for Patient Safety (ICPS) in order to enable comparison of incident reports from different sources and institutions.
The aim of this paper was to provide a more comprehensive overview of medical error in hospitals using a combination of different information sources. Incident reports collected from IRS, patient complaints and retrospective chart review in an academic acute care hospital were classified using the ICPS. The main outcome measures were distribution of incidents over the thirteen categories of the ICPS classifier “Incident type”, described as odds ratios (OR) and proportional similarity indices (PSI).
A total of 1012 incidents resulted in 1282 classified items. Large differences between data from IRS and patient complaints (PSI = 0.32) and from IRS and retrospective chart review (PSI = 0.31) were mainly attributable to behaviour (OR = 6.08), clinical administration (OR = 5.14), clinical process (OR = 6.73) and resources (OR = 2.06).
IRS do not capture all incidents in hospitals and should be combined with complementary information about diagnostic error and delayed treatment from patient complaints and retrospective chart review. Since incidents that are not recorded in IRS cannot prompt remedial and preventive action, healthcare centres that have access to different incident detection methods should harness information from all sources to improve patient safety.
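The two summary measures used above are easy to state: the proportional similarity index is the overlap of two category distributions (1 means identical profiles), and an odds ratio compares the odds of a category between two sources. A minimal sketch with invented counts, not the study's data:

```python
# Proportional similarity index (PSI) and odds ratio (OR) for comparing how
# two incident sources distribute over ICPS categories. Counts are invented.
def proportional_similarity(counts_a, counts_b):
    """PSI = sum over categories of min(p_i, q_i); 1.0 = identical profiles."""
    pa = [c / sum(counts_a) for c in counts_a]
    pb = [c / sum(counts_b) for c in counts_b]
    return sum(min(p, q) for p, q in zip(pa, pb))

def odds_ratio(a_in, a_total, b_in, b_total):
    """Odds of one category in source A relative to source B."""
    return (a_in / (a_total - a_in)) / (b_in / (b_total - b_in))

irs        = [10, 30, 60]   # e.g. behaviour, clinical process, other
complaints = [40, 35, 25]
print(f"PSI = {proportional_similarity(irs, complaints):.2f}")
print(f"OR(behaviour, complaints vs IRS) = {odds_ratio(40, 100, 10, 100):.2f}")
```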
Learning from mistakes is a key feature of human behavior. However, the mechanisms underlying short-term adaptation to erroneous action are still poorly understood. One possibility relies on the modulation of attentional systems after an error. To explore this possibility, we have designed a Stroop-like visuo-motor task in monkeys that favors incorrect action. Using this task, we previously found that single neurons recorded from the anterior cingulate cortex (ACC) were closely tuned to behavioral performance and, more particularly, that the activity of most neurons was biased towards the evaluation of erroneous action. Here we describe single neurons engaged in both error detection and response alertness processing, whose activation is closely associated with the improvement of subsequent behavioral performance. Specifically, we show that the effect of a warning stimulus on neuronal firing is enhanced after an erroneous response rather than a successful one and that this outcome is correlated with an error rate decrease. Our results suggest that the anterior cingulate cortex, which exhibits this activity, serves as a powerful computational locus for rapid behavioral adaptation.
In October UK patients who had cardiovascular events while taking rofecoxib lost the right to fight Merck in the US for compensation. But researchers and journals can still benefit from this case if they learn from the mistakes, write Harlan Krumholz and colleagues
Bats face a great risk of dehydration, so sensory mechanisms for water recognition are crucial for their survival. In the laboratory, bats recognize any smooth horizontal surface as water because such surfaces produce similar reflections of echolocation calls. We tested whether bats in nature also approach smooth horizontal surfaces other than water to drink, by partly covering watering troughs used by hundreds of bats with a Perspex layer mimicking water. We aimed 1) to confirm that under natural conditions, too, bats mistake any horizontal smooth surface for water, by testing this on large numbers of individuals from a range of species, and 2) to assess the occurrence of learning effects. Eleven bat species mistook Perspex for water, relying chiefly on echoacoustic information. Using black instead of transparent Perspex did not deter bats from attempting to drink. In Barbastella barbastellus, no echolocation differences occurred between bats approaching the water and the Perspex surfaces, confirming that bats perceive water and Perspex as acoustically similar. The rates of drinking attempts at the fake surface were often lower than those recorded in the laboratory: bats then either left the site or moved to the control water surface. This suggests that bats modified their behaviour as soon as the lack of a drinking reward had overridden the influence of echoacoustic information. Regardless of which of two adjoining surfaces was covered, bats preferentially approached and attempted to drink from the first surface encountered, probably because they followed a common route involving spatial memory and perhaps social coordination. Overall, although acoustic recognition itself is stereotyped and its importance in the drinking process overwhelming, our findings point to the role of experience in increasing behavioural flexibility under natural conditions.
The analysis of possible mechanisms of repair failure is a necessary instrument and the best way to decrease the recurrence rate and improve overall results. Avoiding historical errors and learning from the reported pitfalls and mistakes helps to standardize the relatively new laparoscopic techniques of trans-abdominal preperitoneal and total extraperitoneal repair.
The videotapes of all primary laparoscopic repairs performed by the author that led to recurrence were retrospectively analyzed and compared with the findings at the second laparoscopic repair. A review of the available cases of recurrence occurring between 1994 and 2003 forms the basis of this report.
Adequate mesh size, porosity of the mesh material, slitting of the mesh, correct and generous dissection of the preperitoneal space, and wrinkle-free placement of the mesh appear to be more important in avoiding recurrence than the strength of the material or strong penetrating fixation. Special attention should be paid to preperitoneal lipoma as a possibly overlooked herniation or a potential future pseudorecurrence despite correctly positioned, nondislocated mesh.
Laparoscopic hernia repair is a complex but very efficient method in experienced hands. To achieve the best possible results, it requires acceptance of a longer learning curve, structured and well-mentored training, and a high level of standardization of the operative procedure.
Endoscopic hernia repair; inguinal hernia; recurrence; trans-abdominal preperitoneal and total extraperitoneal
During the 2009 H1N1 immunization campaign, electronic and hybrid (comprising both electronic and paper components) systems were employed to collect client-level vaccination data in clinics across Canada. Because different systems were used across the country, the 2009 immunization campaign offered an opportunity to study the usability of the various data collection methods.
A convenience sample of clinic staff working in public health agencies and hospitals in 9 provinces/territories across Canada completed a questionnaire in which they indicated their level of agreement with seven statements regarding the usability of the data collection system employed at their vaccination clinic. Questions included overall ease of use, effectiveness of the method utilized, efficiency at completing tasks, comfort using the method, ability to recover from mistakes, ease of learning the method and overall satisfaction with the method. A 5-point Likert-type scale was used to measure responses.
Most respondents (96%) were employed in sites run by public health. Respondents included 186 nurses and 114 administrative staff, among whom 90% and 47%, respectively, used a paper-based method for data collection. Approximately half the respondents had a year or less of experience with immunization-related tasks during seasonal influenza campaigns. Over 90% of all frontline staff found their data collection method easy to use, perceived it to be effective in helping them complete their tasks, felt quick and comfortable using the method, and found the method easy to learn, regardless of whether a hybrid or electronic system was used.
This study demonstrates that there may be a greater willingness of frontline immunization staff to adapt to new technologies than previously perceived by decision-makers. The public health community should recognize that usability may not be a barrier to implementing electronic methods for collecting individual-level immunization data.
Changes in leadership and culture are needed to improve learning from mistakes