Child welfare workers must process complex information when deciding whether to refer clients to appropriate mental health services. Decision support systems (DSSs) have been demonstrated in other fields to be an important tool, yet little research has been done in child welfare. This study focused on the adoption of a specific DSS into child welfare practice. Quantitative analysis was used to demonstrate the diffusion of innovation process among a sample of state child welfare workers, while qualitative analysis was used to explain the facilitators of and barriers to DSS adoption. Results indicate that for DSSs to be widely adopted in child welfare practice, they should be integrated into the referral system and incorporate workers' knowledge of and experiences with referral resources. For successful adoption, DSSs need to respect the natural logic and flow of worker interaction, as well as organizational constraints.
decision support technology; child welfare; diffusion of innovations
BACKGROUND: Increasing indications for oral anticoagulation have led to pressure on general practices to undertake therapeutic monitoring. Computerized decision support (DSS) has been shown to be effective in hospitals for improving clinical management. Its usefulness in primary care has not previously been investigated. AIM: To test the effectiveness of using DSS for oral anticoagulation monitoring in primary care by measuring the proportions of patients adequately controlled, defined as within the appropriate therapeutic range of International Normalised Ratio (INR). METHOD: All patients receiving warfarin from two Birmingham inner city general practices were invited to attend a practice-based anticoagulation clinic. In practice A all patients were managed using DSS. In practice B patients were randomized to receive dosing advice either through DSS or through the local hospital laboratory. Clinical outcomes, adverse events and patient acceptability were recorded. RESULTS: Forty-nine patients were seen in total. There were significant improvements in INR control, from 23% to 86% (P < 0.001), in the practice where all patients received dosing through DSS. In the practice where patients were randomized to either DSS or hospital dosing, logistic regression showed a significant trend for improvement in intervention patients which was not apparent in the hospital-dosed patients (P < 0.001). Mean recall times were significantly extended in patients who were dosed by the practice DSS through the full 12 months (24 days to 36 days) (P = 0.033). Adverse events were comparable between hospital- and practice-dosed patients, although a number of esoteric events occurred. Patient satisfaction with the practice clinics was high. CONCLUSION: Computerized DSS enables the safe and effective transfer of anticoagulation management from hospital to primary care and may result in improved patient outcomes in terms of level of control, frequency of review and general acceptability.
Blood transfusion is a medical domain where decision support systems (DSSs) could be very helpful to the physicians but must easily and continuously be maintained. We have developed a knowledge acquisition tool that allows the construction and the maintenance of such a system by the domain expert. The methodology used could be applied to another highly evolutive medical domain. In this paper, we detail our knowledge acquisition tool, its use and the final DSS obtained, which is fully integrated into our hospital information network.
Systematic, thorough testing of decision support systems (DSSs) prior to release to general users is a critical aspect of high-quality software design. Omission of this step may lead to the dangerous, and potentially fatal, condition of relying on a system with outputs of uncertain quality. Thorough testing requires a great deal of effort and is difficult because the tools necessary to facilitate it are not well developed. Testing is a job ill-suited to humans because it requires tireless attention to a large number of details. For these reasons, the majority of DSSs available are probably not well tested prior to release. We have successfully implemented a software design and testing plan which has helped us meet our goal of continuously improving the quality of our DSS software prior to release. While requiring large amounts of effort, we feel that documenting and standardizing our testing methods are important steps toward meeting recognized national and international quality standards. Our testing methodology includes both functional and structural testing and requires input from all levels of development. Our system does not focus solely on meeting design requirements but also addresses the robustness of the system and the completeness of testing.
Opioid prescribing for chronic pain is common and controversial, but recommended clinical practices are followed inconsistently in many clinical settings. Strategies for increasing adherence to clinical practice guideline recommendations are needed to increase effectiveness and reduce negative consequences of opioid prescribing in chronic pain patients.
Here we describe the process and outcomes of a project to operationalize the 2003 VA/DOD Clinical Practice Guideline for Opioid Therapy for Chronic Non-Cancer Pain into a computerized decision support system (DSS) to encourage good opioid prescribing practices during primary care visits. We based the DSS on the existing ATHENA-DSS. We used an iterative process of design, testing, and revision of the DSS by a diverse team including guideline authors, medical informatics experts, clinical content experts, and end-users to convert the written clinical practice guideline into a computable algorithm to generate patient-specific recommendations for care based upon existing information in the electronic medical record (EMR), and a set of clinical tools.
The iterative revision process identified numerous and varied problems with the initially designed system despite diverse expert participation in the design process. The process of operationalizing the guideline identified areas in which the guideline was vague, left decisions to clinical judgment, or required clarification of detail to ensure safe clinical implementation. The revisions led to workable solutions to problems, defined the limits of the DSS and its utility in clinical practice, improved integration into clinical workflow, and improved the clarity and accuracy of system recommendations and tools.
Use of this iterative process led to development of a multifunctional DSS that met the approval of the clinical practice guideline authors, content experts, and clinicians involved in testing. The process and experiences described provide a model for development of other DSSs that translate written guidelines into actionable, real-time clinical recommendations.
Objective: The aim of this study was to compare the clinical impact of computerized decision support with and without electronic access to clinical guidelines and laboratory data on antibiotic prescribing decisions.
Design: A crossover trial was conducted of four levels of computerized decision support—no support, antibiotic guidelines, laboratory reports, and laboratory reports plus a decision support system (DSS), randomly allocated to eight simulated clinical cases accessed by the Web.
Measurements: Rate of intervention adoption was measured by the frequency of accessing information support; cost of use was measured by the time taken to complete each case; and effectiveness of decisions was measured by the correctness of, and self-reported confidence in, individual prescribing decisions. A clinical impact score was derived from the adoption rate and decision effectiveness.
Results: Thirty-one intensive care and infectious disease specialist physicians (ICPs and IDPs) participated in the study. Ventilator-associated pneumonia treatment guidelines were used in 24 (39%) of the 62 case scenarios for which they were available, microbiology reports in 36 (58%), and the DSS in 37 (60%). The use of all forms of information support did not affect clinicians' confidence in their decisions. Use of the DSS plus the microbiology report improved the agreement of decisions with those of an expert panel from 65% to 97% (p = 0.0002), or to 67% (p = 0.002) when only antibiotic guidelines were accessed. Significantly fewer IDPs than ICPs accessed information support in making treatment decisions. On average, it took 245 seconds to make a decision using the DSS compared with 113 seconds for unaided prescribing (p < 0.001). The DSS plus microbiology reports had the highest clinical impact score (0.58), greater than that of electronic guidelines (0.26) and electronic laboratory reports (0.45).
Conclusion: When used, computer-based decision support significantly improved decision quality. In measuring the impact of decision support systems, both their effectiveness in improving decisions and their likely rate of adoption in the clinical environment need to be considered. Clinicians chose to use antibiotic guidelines for one third and microbiology reports or the DSS for about two thirds of cases when they were available to assist their prescribing decisions.
Although various studies have found a positive association between neighborhood social capital and individual health, the mechanism explaining this direct effect is still unclear. Neighborhood social capital is the access to resources that are generated by relationships between people in a friendly, well-connected and tightly knit neighborhood community. We expect that the resources generated by cohesive neighborhoods support and influence health-improving behaviors in daily life. We identify five different health-related behaviors that are likely to be affected by neighborhood social capital and test these behaviors separately as mediators.
The data set pertaining to individual health was taken from the 'health interview' in the 'Second Dutch national survey of general practice' (DNSGP-2, 2002). We combined these individual-level data with data from the 'Dutch housing demand survey' (WBO, 1998 and WoON, 2002) and statistical register information (1995-1999). On average, 29 WBO respondents per neighborhood answered questions regarding social capital in their neighborhood. These variables were aggregated to the neighborhood level using an ecometric methodology. In the main analysis, in which we tested the mediation, multilevel (ordered) logistic regressions were used to analyze 9,253 adults (from the DNSGP-2 data set) from 672 Dutch neighborhoods. In the Netherlands, neighborhoods (4-digit postcodes) comprise, on average, 4,000 inhabitants at highly variable population densities. Individual- and neighborhood-level controls were taken into account in the analyses.
In neighborhoods with a high level of social capital, people are more physically active and more likely to be non-smokers. These behaviors have positive effects on their health. The direct effect of neighborhood social capital on health is significantly and strongly reduced by physical activity. This study does not support nutrition and sleep habits or moderate alcohol intake as possible explanations of the effects of neighborhoods on health.
This study is one of the first to test a mechanism explaining much of the direct effect of small-area social capital on individual health. Neighborhood interventions might be most successful at improving health if they stimulate both social interaction and physical activity.
While there is an increased emphasis on shared decision making between patients and clinicians, there has been little research on decision support systems (DSSs) to assist clinicians in eliciting and integrating patient preferences in clinical care and evaluating their effect on patient outcomes. This paper presents the results of nurses' use of CHOICE, a palm-top based DSS for preference-based care planning that assists nurses in eliciting patient preferences for functional performance at the bedside and to select care priorities consistent with patient preferences. Nurses' use of CHOICE changed nursing care to be more consistent with patient preferences and improved patients' preference achievement. The study demonstrated that the use of computer-based decision support for preference-based care planning can improve patient-centered care and patient outcomes.
To influence physician practice behavior after implementation of a computerized clinical decision support system (CDSS) based upon the recommendations from the 2007 ACEP Clinical Policy on Syncope.
This was a pre-post intervention study with a prospective cohort and retrospective controls. We conducted a medical chart review of consecutive adult patients with syncope. A CDSS prompting physicians to explain their decision-making regarding imaging and admission in syncope patients, based upon ACEP Clinical Policy recommendations, was embedded into the emergency department information system (EDIS). The medical records of 410 consecutive adult patients presenting with syncope were reviewed prior to implementation, and 301 records were reviewed after implementation. Primary outcomes were physician practice behavior demonstrated by the admission rate and the rate of head computed tomography (CT) imaging before and after implementation.
There was a significant difference in admission rate pre- and post-intervention (68.1% vs. 60.5%, respectively; p = 0.036). There was no significant difference in the head CT imaging rate pre- and post-intervention (39.8% vs. 43.2%, p = 0.358). Seven physicians saw ten or more patients during both the pre- and post-intervention periods. Subset analysis of these seven physicians' practice behavior revealed a marginally significant difference in the admission rate pre- and post-intervention (74.3% vs. 63.9%, p = 0.0495) and no significant difference in the head CT scan rate pre- and post-intervention (42.9% vs. 45.4%, p = 0.660).
The introduction of an evidence-based CDSS based upon ACEP Clinical Policy recommendations on syncope correlated with a change in physician practice behavior in an urban academic emergency department. This change suggests emergency medicine clinical practice guideline recommendations can be incorporated into the physician workflow of an EDIS to enhance the quality of practice.
Informatics; Clinical decision support systems; Practice guidelines; Knowledge translation; Syncope
In pre-school children a diagnosis of asthma is not easily made, and only a minority of wheezing children will develop persistent atopic asthma. According to the general consensus, a diagnosis of asthma becomes more certain with increasing age. Therefore the congruence between asthma medication use and doctor-diagnosed asthma is expected to increase with age. The aim of this study is to evaluate the relationship between the prescribing of asthma medication and doctor-diagnosed asthma in children aged 0–17.
We studied all 74,580 children below 18 years of age belonging to 95 GP practices within the second Dutch national survey of general practice (DNSGP-2), in which GPs registered all physician-patient contacts during the year 2001. Status on prescribing of asthma medication (at least one prescription for beta2-agonists, inhaled corticosteroids, cromones or montelukast) and on doctor-diagnosed asthma (coded according to the International Classification of Primary Care) was determined.
In total, 7.5% of children received asthma medication and 4.1% had a diagnosis of asthma. Only 49% of all children receiving asthma medication were diagnosed as asthmatic. Subgroup analyses on age, gender and therapy groups showed that the Positive Predictive Value (PPV) differed significantly between therapy groups only. The likelihood of having doctor-diagnosed asthma increased when a child received combination therapy of short-acting beta2-agonists and inhaled corticosteroids (PPV = 0.64) and with the number of prescriptions (3 prescriptions or more, PPV = 0.66). Both prescribing of asthma medication and doctor-diagnosed asthma declined with age, but the congruence between the two measures did not increase with age.
In this study, less than half of all children receiving asthma medication had a registered diagnosis of asthma. Detailed subgroup analyses show that a diagnosis of asthma was present in at most 66%, even in groups of children treated intensively with asthma medication. Although age strongly influences the chance of being treated, remarkably, the congruence between prescribing of asthma medication and doctor-diagnosed asthma does not increase with age.
Sophisticated decision support systems (DSSs) can reduce preventable medical errors. A standalone DSS prototype was built to identify drug-disease mismatches in the electronic medical record (EMR). When drugs fail to match a known problem on the problem list (drug orphans), either the problem list is deficient or the drug was ordered in error. We tested the performance of an integrated DSS prototype by improving the data exchange with the standalone DSS prototype. By implementing a screen capture tool, we were able to accelerate data entry into the DSS prototype through semi-automated operation. Preliminary results revealed a marked increase in the rate of data entry during testing of the DSS prototype. The accelerated data entry streamlines workflow and promotes physicians' acceptance of the DSS.
Scientifically based clinical guidelines have become increasingly used to educate physicians and improve quality of care. While individual guidelines are potentially useful, repeated studies have shown that guidelines are ineffective in changing physician behavior. The Internet has evolved as a potentially useful tool for guideline education, dissemination, and implementation because of its open standards and its ability to provide concise, relevant clinical information at the location and time of need.
Our objective was to develop and test decision support systems (DSS) based on clinical guidelines which could be delivered over the Internet for two disease models: asthma and tuberculosis (TB) preventive therapy.
Using the open standards of HTML and CGI, we developed an acute asthma severity assessment DSS and a preventive tuberculosis treatment DSS based on content from national guidelines that are recognized as standards of care. Both DSSs are published on the Internet and operate through a decision algorithm developed from the parent guidelines, with clinical information provided by the user at the point of clinical care. We tested the effectiveness of each DSS in influencing physician decisions using clinical scenario testing.
We first validated the asthma algorithm by comparing asthma experts' decisions with the decisions reached by nonpulmonary nurses using the computerized DSS. Using the DSS, nurses scored the same as the experts (89% vs. 88%; p = NS). Using the same scenario test instrument, we next compared internal medicine residents using the DSS with residents using a printed version of the National Asthma Education Program-2 guidelines. Residents using the computerized DSS scored significantly better than residents using the paper-based guidelines (92% vs. 84%; p < 0.002). We similarly compared residents using the computerized TB DSS to residents using a printed reference card; the residents using the computerized DSS scored significantly better (95.8% vs. 56.6% correct; p < 0.001).
Previous work has shown that guidelines disseminated through traditional educational interventions have minimal impact on physician behavior. Although computerized DSSs have been effective in altering physician behavior, many of these systems are not widely available. We have developed two clinical DSSs based on national guidelines and published them on the Internet. Both systems improved physician compliance with national guidelines when tested in clinical scenarios. By providing information that is coupled to relevant activity, we expect that these widely available DSSs will serve as effective educational tools to positively impact physician behavior.
Asthma; Tuberculosis; Decision Support System; Clinical Guidelines
Physicians' heavy workload is often thought to jeopardise the quality of care and to be a barrier to improving quality. The relationship between workload and quality has, however, rarely been investigated. In this study, quality of care is defined as care 'in accordance with professional guidelines'. We investigated whether GPs with a higher workload adhere less to guidelines than those with a lower workload, and whether guideline recommendations that require a greater time investment are adhered to less than those that can save time.
Data were used from the Second Dutch National survey of General Practice (DNSGP-2). This nationwide study was carried out between April 2000 and January 2002.
A multilevel logistic regression analysis was conducted of 170,677 decisions made by GPs, referring to 41 Guideline Adherence Indicators (GAIs) derived from 32 different guidelines. Data were used from 130 GPs, working in 83 practices with 98,577 patients. GP characteristics as well as guideline characteristics were used as independent variables. Measures included workload (number of contacts), hours spent on continuing medical education, satisfaction with available time, practice characteristics and patient characteristics. The outcome measure was an indicator score, which is 1 when a decision is in accordance with professional guidelines and 0 when it deviates from guidelines.
On average, 66% of the decisions GPs made were in accordance with guidelines. No relationship was found between the objective workload of GPs and their adherence to guidelines. Subjective workload (measured on a five-point scale) was negatively related to guideline adherence (OR = 0.95). After controlling for all other variables, the variation between GPs in adherence to guideline recommendations showed a range of less than 10%.
84% of the variation in guideline adherence was located at the GAI level, which means that the differences in adherence levels between guidelines are much larger than the differences between GPs. Guideline recommendations that require an extra time investment during the same consultation are adhered to significantly less often (OR = 0.46), while those that can save time have much higher adherence levels (OR = 1.55). Recommendations that reduce the likelihood of a follow-up consultation for the same problem are also more often adhered to than those that have no influence on this (OR = 3.13).
No significant relationship was found between the objective workload of GPs and adherence to guidelines. However, guideline recommendations that require an extra time investment are significantly less well adhered to while those that can save time are significantly more often adhered to.
National quality indicators show little change in the overuse of antibiotics for uncomplicated acute bronchitis. We compared the impact of two decision support strategies on antibiotic treatment of uncomplicated acute bronchitis.
We conducted a three-arm, cluster-randomized trial among 33 primary care practices belonging to an integrated health care system in central Pennsylvania. The printed intervention arm (n=11 practices) received decision support for acute cough illness through a print-based strategy, the computerized intervention arm (n=11) received decision support through an electronic medical record-based strategy, and a third group of practices (n=11) served as the control arm. Both intervention groups also received provider education and feedback on prescribing practices, and patient education brochures at check-in. Antibiotic prescription rates for uncomplicated acute bronchitis in the winter period (October 2009 – March 2010) following introduction of the intervention were compared with the previous three winter periods in an intent-to-treat analysis.
Compared with the baseline period, the percentage of adolescents and adults prescribed antibiotics during the intervention period decreased at the printed intervention sites (from 80.0% to 68.3%) and the computerized intervention sites (from 74.0% to 60.7%), but increased slightly at the control sites (from 72.5% to 74.3%). After controlling for patient and provider characteristics, and clustering of observations by provider and practice site, the intervention groups differed significantly from control (control vs. printed, P=0.003; control vs. computerized, P=0.014) but not from each other (printed vs. computerized, P=0.67). Changes in total visits, the proportion of visits diagnosed as uncomplicated acute bronchitis, and thirty-day return visit rates were similar between study groups.
Implementation of a decision support strategy for acute bronchitis can help reduce the overuse of antibiotics in primary care settings. The impact of the printed and computerized strategies for providing decision support was equivalent. The study was registered with ClinicalTrials.gov prior to enrolling patients (NCT00981994).
We examined the degree to which attending physicians', residents', and medical students' stated desire for a consultation on difficult-to-diagnose patient cases is related to changes in their diagnostic judgments after a computer consultation, and whether, in fact, their perceptions of the usefulness of these consultations are related to these changes. The decision support system (DSS) used in this study was ILIAD (v4.2). Preliminary findings based on 16 subjects' (6 general internists, 4 second-year residents in internal medicine, and 6 fourth-year medical students) workup of 136 patient cases indicated no significant main effects for 1) level of experience, 2) whether or not subjects indicated they would seek a diagnostic consultation before using the DSS, or 3) whether or not they found the DSS consultation in fact to be helpful in arriving at a diagnosis (p > .49 in all instances). Nor were there any significant interactions. Findings were similar using subjects or cases as the unit of analysis. It is possible that what may appear to be counter-intuitive, and perhaps irrational, may not necessarily be so. We are currently examining potential explanatory hypotheses in our ongoing, larger study.
The potential role of DSS in CVD prevention remains unclear as only a few studies report on patient outcomes for cardiovascular disease.
Methods and Results
A systematic review and meta-analysis of randomised controlled trials and observational studies was conducted using the Medline, Embase, Cochrane Library, PubMed, AMED, CINAHL, Web of Science and Scopus databases; reference lists of relevant studies to 30 July 2011; and email contact with experts. The primary outcome was prevention of cardiovascular disorders (myocardial infarction, stroke, coronary heart disease, peripheral vascular disorders and heart failure) and management of hypertension owing to decision support systems, clinical decision support systems, computerized decision support systems, clinical decision-making tools and medical decision making (interventions). From 4,116 references, ten studies met our inclusion criteria (including 16,312 participants). Five papers reported outcomes on blood pressure management, one paper on heart failure, and two papers each on stroke and coronary heart disease. The pooled estimate for CDSS versus control group differences in SBP (mm Hg) was −0.99 (95% CI −3.02 to 1.04 mm Hg; I² = 0; p = 0.851).
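For readers unfamiliar with how such pooled estimates and I² values are produced, the standard fixed-effect (inverse-variance) computation can be sketched as follows; the per-study values below are hypothetical placeholders, not data from this review:

```python
import math

def pool_fixed_effect(estimates, variances):
    """Inverse-variance fixed-effect pooling with Cochran's Q and I^2."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    # Cochran's Q: weighted squared deviations of study estimates from the pool
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, estimates))
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # heterogeneity (%)
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # 95% confidence interval
    return pooled, ci, i2

# Hypothetical per-study SBP differences (mm Hg) and their variances
est = [-1.5, 0.2, -0.8]
var = [4.0, 6.2, 5.1]
pooled, ci, i2 = pool_fixed_effect(est, var)
```

A confidence interval straddling zero, as in the review's result, corresponds to a non-significant pooled effect.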
DSSs showed a non-significant benefit in the management and control of hypertension (a non-significant reduction in SBP). The paucity of well-designed studies on patient-related outcomes is a major hindrance that restricts interpretation when evaluating the role of DSSs in secondary prevention. Future studies on DSSs should (1) evaluate both physician performance and patient outcome measures, and (2) integrate into the routine clinical workflow with a provision for decision support at the point of care.
Besides other GP characteristics, diagnostic labelling (the proportion of acute respiratory tract (RT) episodes labelled as infections) probably contributes to a higher volume of antibiotic prescriptions for acute RT episodes. However, it is unknown whether there is an independent association between diagnostic labelling and the volume of prescribed antibiotics, or whether diagnostic labelling is associated with the number of presented acute RT episodes and consequently with the number of antibiotics prescribed per patient per year.
Data were used from the Second Dutch National Survey of General Practice (DNSGP-2) with 163 GPs from 85 Dutch practices, serving a population of 359,625 patients. Data over a 12 month period were analysed by means of multiple linear regression analysis. Main outcome measure was the volume of antibiotic prescriptions for acute RT episodes per 1,000 patients.
The incidence was 236.9 acute RT episodes/1,000 patients. GPs labelled about 70% of acute RT episodes as infections, and antibiotics were prescribed in 41% of all acute RT episodes. A higher incidence of acute RT episodes (beta 0.67), a stronger inclination to label episodes as infections (beta 0.24), a stronger endorsement of the need for antibiotics in case of white spots in the throat (beta 0.11) and being male (beta 0.11) were independent determinants of the prescribed volume of antibiotics for acute RT episodes, whereas diagnostic labelling was not correlated with the incidence of acute RT episodes.
Diagnostic labelling is a relevant factor in GPs' antibiotic prescribing, independent of the incidence of acute RT episodes. Therefore, quality assurance programs and postgraduate courses should emphasise the use of evidence-based prognostic criteria (e.g. chronic respiratory co-morbidity and old age) as an indication to prescribe antibiotics, instead of single inflammation signs or diagnostic labels.
The authors developed and evaluated a rating scale, the Attitudes toward Handheld Decision Support Software Scale (H-DSS), to assess physician attitudes about handheld decision support systems.
The authors conducted a prospective assessment of psychometric characteristics of the H-DSS including reliability, validity, and responsiveness. Participants were 82 Internal Medicine residents. A higher score on each of the 14 five-point Likert scale items reflected a more positive attitude about handheld DSS. The H-DSS score is the mean across the fourteen items. Attitudes toward the use of the handheld DSS were assessed prior to and six months after receiving the handheld device.
Cronbach's Alpha was used to assess internal consistency reliability. Pearson correlations were used to estimate and detect significant associations between scale scores and other measures (validity). Paired sample t-tests were used to test for changes in the mean attitude scale score (responsiveness) and for differences between groups.
Internal consistency reliability for the scale was α = 0.73. In testing validity, moderate correlations were noted between the attitude scale scores and self-reported Personal Digital Assistant (PDA) usage in the hospital (correlation coefficient = 0.55) and clinic (0.48), p < 0.05 for both. The scale was responsive, in that it detected the expected increase in scores between the two administrations (3.99 (s.d. = 0.35) vs. 4.08 (s.d. = 0.34), p < 0.005).
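The internal consistency statistic reported above (Cronbach's α) is computed from the individual item variances and the variance of the total score. A minimal sketch, using made-up scores rather than the H-DSS data, might look like:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha. item_scores is a list of k lists, one per scale item,
    each holding the same n respondents' scores for that item."""
    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(item_scores)
    n = len(item_scores[0])
    sum_item_vars = sum(sample_var(item) for item in item_scores)
    # Total score per respondent = sum of that respondent's item scores
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / sample_var(totals))

# Two perfectly correlated items yield the maximum alpha of 1.0
alpha = cronbach_alpha([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5]])
```

Values around 0.7, as reported for the H-DSS, are conventionally taken to indicate acceptable internal consistency.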
The authors' evaluation showed that the H-DSS scale was reliable, valid, and responsive. The scale can be used to guide future handheld DSS development and implementation.
Developing computer-interpretable clinical practice guidelines (CPGs) to provide decision support for guideline-based care is an extremely labor-intensive task. In the EON/ATHENA and SAGE projects, we formulated substantial portions of CPGs as computable statements that express declarative relationships between patient conditions and possible interventions. We developed query and expression languages that allow a decision-support system (DSS) to evaluate these statements in specific patient situations. A DSS can use these guideline statements in multiple ways, including: (1) as inputs for determining preferred alternatives in decision-making, and (2) as a way to provide targeted commentaries in the clinical information system. The use of these declarative statements significantly reduces the modeling expertise and effort required to create and maintain computer-interpretable knowledge bases for decision-support purpose. We discuss possible implications for sharing of such knowledge bases.
Most Decision Support Systems (DSSs) in medicine have been developed in hospital environments, for use in hospitals. Only a few are designed for use by General Practitioners (GPs) in primary care. The work reported in this paper has a twofold aim:
[List: see text]
Primary Health Care; Decision Support; Design; Knowledge Based Systems; Doctor-patient communication
Despite wide promotion, clinical practice guidelines have had limited effect in changing physician behavior. Effective implementation strategies to date have included: multifaceted interventions involving audit and feedback, local consensus processes, marketing; reminder systems, either manual or computerized; and interactive educational meetings. In addition, there is now growing evidence that contextual factors affecting implementation must be addressed such as organizational support (leadership procedures and resources) for the change and strategies to implement and maintain new systems.
To examine the feasibility and effectiveness of implementation of a computerized decision support system for depression (CDSS-D) in routine public mental health care in Texas, fifteen study clinicians (thirteen physicians and two advanced nurse practitioners) participated across five sites, accruing over 300 outpatient visits on 168 patients.
Issues regarding computer literacy and hardware/software requirements were identified as initial barriers. Clinicians also reported concerns about negative impact on workflow and the potential need for duplication during the transition from paper to electronic systems of medical record keeping.
The following narrative report, based on observations obtained during initial testing and use of the CDSS-D in clinical settings, further emphasizes the importance of taking organizational factors into account when planning implementation of evidence-based guidelines or decision support within a system.
Female patients, abused by their partner, are heavy users of medical services. To date, valid indicators of partner abuse of women are lacking.
To outline the healthcare utilisation in family practice of women who have suffered abuse, and compare this to the average female population in family practice.
Design of study
As part of a primary study on the role of family doctors in recognising and managing partner abuse a retrospective study was performed. Anonymised data from the electronic medical records of women who have suffered abuse were collected over the period January 2001–July 2004. These data were compared to those from the average female population of the Second Dutch National Survey in General Practice 2001 (DNSGP-2).
Family practices in Rotterdam and surrounding areas in 2004.
The numbers of consultations and prescriptions for pain medication, tranquillisers and antidepressants of women who have suffered abuse (n = 92) were compared to those of the female population of the DNSGP-2 (n = 210 071). The presented health problems and referrals of the studied group were examined.
Pain, in all its manifestations, appeared to be the most frequently presented health problem. Compared to the female population of the DNSGP-2, in all age categories, women who have suffered abuse consult their family doctor almost twice as often and receive three to seven times more pain medication.
A doubled consultation frequency, chronic pain and an excessively high number of prescriptions for pain medication are characteristics of healthcare utilisation of women who have been abused in this study. These findings contribute to the development of the concept of the ‘symptomatic’ female patient.
electronic medical record; family medicine; healthcare utilisation; partner abuse
OBJECTIVES: To assess the effects of incomplete data upon the output of a computerized diagnostic decision support system (DSS), to assess the effects of using the system upon the diagnostic opinions of users, and to explore if these effects vary as a function of clinical experience. DESIGN: Experimental pilot study. Four clusters of nine cases each were constructed and equated for case difficulty. Definitive findings were omitted from the case abstracts. Subjects were randomly assigned to one of four clusters and were trained on the DSS prior to use. SUBJECTS: The study involved 16 physicians at three levels of clinical experience (six general internists, four residents in internal medicine, and six fourth-year medical students), from three academic medical centers. PROCEDURE: Each subject worked up nine cases, first without and then with ILIAD consultation. They were asked to offer up to six potential diagnoses and to list up to three steps that should be the next items in the diagnostic workup. Effects of DSS consultation were measured by changes in the position of the correct diagnosis in the lists of differential diagnoses, pre- and post-consultation. RESULTS: The DSS lists of diagnostic possibilities contained the correct diagnosis in 38% of cases, about midway between the levels of accuracy of residents and attending general internists. In over 70% of cases, the DSS output had no effect on the position of the correct diagnosis in the subjects' lists. The system's diagnostic accuracy was unaffected by the clinical experience of the users.
As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity.
This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality.
Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data.
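The error-injection programmes described above can be sketched in a few lines. The records below are synthetic and the field names merely mirror the study's key parameters; this is an illustration of the method, not the BRHP code:

```python
import random

# Synthetic records; values are illustrative, not Butajira data.
records = [
    {"sex": random.choice("MF"),
     "age": random.randint(0, 90),
     "literate": random.random() < 0.5}
    for _ in range(1000)
]

def inject_errors(records, error_rate, rng):
    """Return a copy with each field independently corrupted
    at probability `error_rate`."""
    corrupted = []
    for rec in records:
        new = dict(rec)
        if rng.random() < error_rate:
            new["sex"] = "F" if new["sex"] == "M" else "M"  # flip sex
        if rng.random() < error_rate:
            new["age"] = rng.randint(0, 90)                 # random age
        if rng.random() < error_rate:
            new["literate"] = not new["literate"]           # flip literacy
        corrupted.append(new)
    return corrupted

rng = random.Random(42)
noisy = inject_errors(records, error_rate=0.20, rng=rng)

# Aggregate composition survives a 20% random error rate largely intact.
male_share = lambda rs: sum(r["sex"] == "M" for r in rs) / len(rs)
print(round(male_share(records), 3), round(male_share(noisy), 3))
```

Because the corruptions are random rather than systematic, they mostly cancel in aggregate estimates, which is the intuition behind the robustness result reported below.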
The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data.
The low sensitivity of parameter estimates and regression analyses to significant amounts of randomly introduced errors indicates a high level of robustness of the dataset. This apparent inertia of population parameter estimates to simulated errors is largely due to the size of the dataset. Tolerable margins of random error in DSS data may exceed 20%. While this is not an argument in favour of poor quality data, reducing the time and valuable resources spent on detecting and correcting random errors in routine DSS operations may be justifiable as the returns from such procedures diminish with increasing overall accuracy. The money and effort currently spent on endlessly correcting DSS datasets would perhaps be better spent on increasing the surveillance population size and geographic spread of DSSs and analysing and disseminating research findings.
There are often disparities between current evidence and current practice. Decreasing the gap between desired practice outcomes and observed practice outcomes in the healthcare system is not always easy. Stopping previously recommended or variably recommended interventions may be even harder to achieve than increasing the use of a desired but under-performed activity. For over a decade, aspirin has been prescribed for primary prevention of cardiovascular disease and for patients with coronary artery disease risk equivalents; yet, there is no substantial evidence of an appropriate risk-benefit ratio to support this practice. This paper describes the protocol of a randomized trial being conducted in six primary care practices in the Denver metropolitan area to examine the effectiveness of three interventional strategies to change physician behavior regarding prescription of low-dose aspirin.
All practices received academic detailing; one arm received clinician reminders to reconsider aspirin, and a second arm received both clinician and patient messages to reconsider aspirin. The intervention will run for 15 to 18 months. Data collected at baseline and for outcomes from an electronic health record will be used to assess pre- and post-interventional prescribing, as well as to explore any inappropriate decrease in aspirin use by patients with known cardiovascular disease.
This study was designed to investigate effective methods of changing physician behavior to decrease the use of aspirin for primary cardiovascular disease prevention. The results of this study will contribute to the small pool of knowledge currently available on the topic of ceasing previously supported practices.