Rule-based and information-integration category learning were compared under minimal and full feedback conditions. Rule-based category structures are those for which the optimal rule is verbalizable. Information-integration category structures are those for which the optimal rule is not verbalizable. With minimal feedback, subjects are told whether their response was correct or incorrect, but are not informed of the correct category assignment. With full feedback, subjects are informed of the correctness of their response and of the correct category assignment. An examination of the distinct neural circuits that subserve rule-based and information-integration category learning leads to the counterintuitive prediction that full feedback should facilitate rule-based learning but hinder information-integration learning. This prediction was supported in the experiment reported below. The implications of these results for theories of learning are discussed.
Perceptual categorization; Cognitive neuroscience of categorization; Reinforcement learning; Dopamine; Striatum; Bayesian hypothesis testing; Feedback
Due to the increasing prevalence and severity of invasive candidiasis, investigators have developed clinical prediction rules to identify patients who may benefit from antifungal prophylaxis or early empiric therapy. The aims of this study were to validate and compare the Paphitou and Ostrosky-Zeichner clinical prediction rules in ICU patients in a 689-bed academic medical center.
We conducted a retrospective matched case-control study from May 2003 to June 2008 to evaluate the sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of each rule. Cases included adults with ICU stays of at least four days and invasive candidiasis matched to three controls by age, gender and ICU admission date. The clinical prediction rules were applied to cases and controls via retrospective chart review to evaluate the success of the rules in predicting invasive candidiasis. Paphitou's rule included diabetes, total parenteral nutrition (TPN) and dialysis with or without antibiotics. Ostrosky-Zeichner's rule included antibiotics or central venous catheter plus at least two of the following: surgery, immunosuppression, TPN, dialysis, corticosteroids and pancreatitis. Conditional logistic regression was performed to evaluate the rules. Discriminative power was evaluated by area under the receiver operating characteristic curve (AUC ROC).
A total of 352 patients were included (88 cases and 264 controls). The incidence of invasive candidiasis among adults with an ICU stay of at least four days was 2.3%. The prediction rules performed similarly, exhibiting low PPVs (0.041 to 0.054), high NPVs (0.983 to 0.990) and AUC ROCs (0.649 to 0.705). A new prediction rule (Nebraska Medical Center rule) was developed with PPVs, NPVs and AUC ROCs of 0.047, 0.994 and 0.770, respectively.
Based on low PPVs and high NPVs, the rules are most useful for identifying patients who are not likely to develop invasive candidiasis, potentially preventing unnecessary antifungal use, optimizing patient ICU care and facilitating the design of forthcoming antifungal clinical trials.
candidiasis; clinical prediction rules; prophylaxis
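The low PPVs and high NPVs reported above follow directly from the low (2.3%) incidence of invasive candidiasis. A minimal sketch of the Bayesian calculation, using assumed sensitivity and specificity values in a plausible range (the abstract does not report the per-rule values):

```python
# Sketch: why prediction rules yield low PPV and high NPV when the
# outcome (invasive candidiasis) is rare. The sensitivity and
# specificity below are illustrative assumptions, not the study's values.

def ppv_npv(sensitivity: float, specificity: float, prevalence: float):
    """Return (PPV, NPV) via Bayes' rule."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    fn = (1 - sensitivity) * prevalence        # false negatives
    return tp / (tp + fp), tn / (tn + fn)

if __name__ == "__main__":
    ppv, npv = ppv_npv(sensitivity=0.85, specificity=0.60, prevalence=0.023)
    print(f"PPV={ppv:.3f}, NPV={npv:.3f}")  # ~0.048 and ~0.994
```

Even a reasonably sensitive and specific rule therefore yields a PPV below 0.06 at a 2.3% incidence, which is why such rules are most useful for ruling out, rather than ruling in, invasive candidiasis.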
Background and Purpose
Successful outcomes from bacterial meningitis require rapid antibiotic treatment; however, unnecessary treatment of viral meningitis may lead to increased toxicities and expense. Thus, improved diagnostics are required to optimize treatment and minimize side effects and cost. Thirteen clinical decision rules have been reported to distinguish bacterial from viral meningitis. However, few rules have been tested and compared in a single study, and several rules have yet to be tested by independent researchers or in pediatric populations. Thus, simultaneous testing and comparison of these rules are required to enable clinicians to select an optimal diagnostic rule for bacterial meningitis in settings and populations similar to ours.
A retrospective cross-sectional study was conducted at the Infectious Department of Pediatric Hospital Number 1, Ho Chi Minh City, Vietnam. The performance of the clinical rules was evaluated by area under a receiver operating characteristic curve (ROC-AUC) using the method of DeLong and McNemar test for specificity comparison.
Our study included 129 patients, of whom 80 had bacterial meningitis and 49 had presumed viral meningitis. Spanos's rule had the highest AUC at 0.938, but this was not significantly greater than that of the other rules. No rule provided 100% sensitivity with a specificity higher than 50%. Based on our calculation of theoretical sensitivity and specificity, we suggest that a perfect rule requires at least four independent variables that possess both sensitivity and specificity higher than 85–90%.
No clinical decision rule provided acceptable specificity (>50%) with 100% sensitivity when applied to our pediatric data set. More studies in Vietnam and developing countries are required to develop and/or validate clinical rules, and better biomarkers are required to develop such a perfect rule.
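One way to see the arithmetic behind the "at least four independent variables" suggestion is to consider a rule that classifies a patient as having bacterial meningitis when any of k conditionally independent predictors is positive (an assumed combination scheme used here for illustration, not the study's derivation):

```latex
% Sensitivity and specificity of an "any-positive" rule built from k
% conditionally independent predictors, each with sensitivity Se_i and
% specificity Sp_i (illustrative model only).
\[
  Se_{\text{rule}} = 1 - \prod_{i=1}^{k} (1 - Se_i),
  \qquad
  Sp_{\text{rule}} = \prod_{i=1}^{k} Sp_i .
\]
% Example: four predictors with Se_i = Sp_i = 0.90 give
% Se_rule = 1 - 0.1^4 = 0.9999 and Sp_rule = 0.9^4 = 0.656,
% i.e. near-perfect sensitivity with specificity still above 50%.
```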
Clinical practice guidelines must comprehensively address all logically possible situations, but this completeness may result in sizable and cumbersome rule sets. We applied rule set reduction techniques to a 576-rule set regarding recommendations for medication treatment of hypercholesterolemia. Using decision tables augmented with information regarding test costs and rule application frequencies, we sorted the rule sets prior to identifying irrelevant tests and eliminating unnecessary rules. Alternatively, we examined the semantic relationships among risk factors in hypercholesterolemia and applied a subsumption technique to reduce the rule set. Both methodologies resulted in substantial rule set compression (mean, 48-70%). Subsumption techniques proved superior for compacting a large rule set based on risk factors.
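A minimal sketch of the kind of subsumption reduction described above, assuming rules are condition sets mapped to a recommendation (the hypercholesterolemia rule content and the paper's exact algorithm are not reproduced here):

```python
# Sketch: drop rules whose conditions are a superset of a more general
# rule issuing the same recommendation (subsumption). The rules below
# are illustrative stand-ins, not the 576-rule hypercholesterolemia set.

from typing import FrozenSet, List, Tuple

Rule = Tuple[FrozenSet[str], str]  # (conditions, recommendation)

def reduce_by_subsumption(rules: List[Rule]) -> List[Rule]:
    kept: List[Rule] = []
    for conds, rec in sorted(rules, key=lambda r: len(r[0])):
        # A rule is redundant if an already-kept rule with the same
        # recommendation uses a subset of its conditions.
        if not any(kc <= conds and krec == rec for kc, krec in kept):
            kept.append((conds, rec))
    return kept

rules = [
    (frozenset({"LDL>190"}), "statin"),
    (frozenset({"LDL>190", "diabetes"}), "statin"),       # subsumed
    (frozenset({"LDL>160", "smoker", "age>45"}), "statin"),
]
print(reduce_by_subsumption(rules))  # drops the subsumed second rule
```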
Objective: To develop a knowledge representation model for clinical practice guidelines that is linguistically adequate, comprehensible, reusable, and maintainable.
Design: Decision tables provide the basic framework for the proposed knowledge representation model. Guideline logic is represented as rules in conventional decision tables. These tables are augmented by layers where collateral information is recorded in slots beneath the logic.
Results: Decision tables organize rules into cohesive rule sets wherein complex logic is clarified. Decision table rule sets may be verified to assure completeness and consistency. Optimization and display of rule sets as sequential decision trees may enhance the comprehensibility of the logic. The modularity of the rule formats may facilitate maintenance. The augmentation layers provide links to descriptive language, information sources, decision variable characteristics, costs and expected values of policies, and evidence sources and quality.
Conclusion: Augmented decision tables can serve as a unifying knowledge representation for developers and implementers of clinical practice guidelines.
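A minimal sketch of the completeness and consistency checks mentioned above, assuming a decision table over binary decision variables (the representation is illustrative, not the authors' implementation):

```python
# Sketch: verify a decision table over binary decision variables for
# completeness (every value combination covered) and consistency (no
# combination mapped to conflicting actions). Variables and rules are
# illustrative, not the authors' guideline content.

from itertools import product
from collections import defaultdict

variables = ["diabetes", "smoker"]
rules = [  # (condition tuple aligned with `variables`, action)
    ((True, True), "treat"),
    ((True, False), "treat"),
    ((False, True), "monitor"),
    ((False, False), "monitor"),
]

def verify(rules, variables):
    actions = defaultdict(set)
    for condition, action in rules:
        actions[condition].add(action)
    all_conditions = set(product([True, False], repeat=len(variables)))
    missing = all_conditions - set(actions)              # completeness gaps
    conflicts = {c: a for c, a in actions.items() if len(a) > 1}
    return missing, conflicts

missing, conflicts = verify(rules, variables)
print("missing:", missing, "conflicts:", conflicts)  # both empty here
```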
Many patients undergo non‐invasive testing for the detection of coronary artery disease before non‐cardiac surgery. This is despite the low predictive value of positive tests in this population and the lack of any evidence of benefit of coronary revascularisation before non‐cardiac surgical procedures. Further, this strategy often triggers a clinical cascade exposing the patient to progressively riskier testing and intervention and results in increased costs and unnecessary delays. On the other hand, administration of β blockers, and more recently statins, has been shown to reduce the occurrence of perioperative ischaemic events. Therefore, there is a need for a shift in emphasis from risk stratification by non‐invasive testing to risk modification by the application of interventions, which prevent perioperative ischaemia—principally, perioperative β adrenergic blockade and perhaps treatment with statins. Clinical risk stratification tools reliably identify patients at high risk of perioperative ischaemic events and can guide in the appropriate use of perioperative medical treatment.
perioperative risk, non‐cardiac surgery, preoperative evaluation, β adrenergic blockers
Tools for early identification of workers with back pain who are at high risk of adverse occupational outcome would help concentrate clinical attention on the patients who need it most, while helping reduce unnecessary interventions (and costs) among the others. This study was conducted to develop and validate clinical rules to predict the 2-year work disability status of people consulting for nonspecific back pain in primary care settings.
This was a 2-year prospective cohort study conducted in 7 primary care settings in the Quebec City area. The study enrolled 1007 workers (participation, 68.4% of potential participants expected to be eligible) aged 18–64 years who consulted for nonspecific back pain associated with at least 1 day's absence from work. The majority (86%) completed 5 telephone interviews documenting a large array of variables. Clinical information was abstracted from the medical files. The outcome measure was “return to work in good health” at 2 years, a variable that combined patients' occupational status, functional limitations and recurrences of work absence. Predictive models of 2-year outcome were developed with a recursive partitioning approach on a 40% random sample of our study subjects, then validated on the rest.
The best predictive model included 7 baseline variables (patient's recovery expectations, radiating pain, previous back surgery, pain intensity, frequent change of position because of back pain, irritability and bad temper, and difficulty sleeping) and was particularly efficient at identifying patients with no adverse occupational outcome (negative predictive value 78%–94%).
A clinical prediction rule accurately identified a large proportion of workers with back pain consulting in a primary care setting who were at a low risk of an adverse occupational outcome.
This study sought to develop a functional taxonomy of rule-based clinical decision support.
The rule-based clinical decision support content of a large integrated delivery network with a long history of computer-based point-of-care decision support was reviewed and analyzed along four functional dimensions: trigger, input data elements, interventions, and offered choices.
A total of 181 rule types were reviewed, comprising 7,120 different instances of rule usage. A total of 42 taxa were identified across the four categories. Many rules fell into multiple taxa in a given category. Entered order and stored laboratory result were the most common triggers; laboratory result, drug list, and hospital unit were the most frequent data elements used. Notify and log were the most common interventions, and write order, defer warning, and override rule were the most common offered choices.
A relatively small number of taxa successfully described a large body of clinical knowledge. These taxa can be directly mapped to functions of clinical systems and decision support systems, providing feature guidance for developers, implementers, and certifiers of clinical information systems.
Health plans are profiling physicians on their relative costs and using these profiles to assign physicians to cost categories. Physician groups have questioned whether the costs category assigned to a physician is driven by the manner in which costs are attributed to physicians.
To evaluate the impact on physician cost profiles of 12 different attribution rules.
1.1 million adults continuously enrolled in 4 commercial health plans in 2004 and 2005
Using an aggregated database of claims from the 4 health plans, we created different cost profiles for each physician using 12 different attribution rules. The attribution rules differ on the unit of analysis (patient versus episode of care); signal for responsibility (costs versus visits); number of physicians that can be assigned responsibility; and threshold for assigning responsibility.
Under each rule, we calculated the percentage of episodes assigned to any physician, calculated the percentage of costs billed by a physician included in his or her own profiles, and placed each physician into high cost, average cost, low cost, or low sample size categories. Compared to a commonly used default rule, we calculated what fraction of physicians are assigned to a different cost category using one of the other 11 attribution rules.
Across the 12 different rules there was substantial variation in the percentage of episodes that could be assigned to a physician (range 20%–69%) and the mean percentage of costs billed by the physician that were included in the physician’s own cost profile (range 13%–60%). Compared to their cost category under the default rule, 17 to 61% of physicians would be assigned a different category across the 11 alternate attribution rules.
Results might differ if data from another state or Medicare were used.
The choice of attribution rule affects how costs are assigned to a physician and can have a substantial impact on the cost category to which a physician is assigned.
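A minimal sketch of one plausible episode-based attribution rule of the kind compared above, assigning each episode to the physician who billed the largest share of its costs, subject to a responsibility threshold (the 12 rules' exact definitions are not reproduced; the data layout and the 30% threshold are assumptions):

```python
# Sketch: episode-based, cost-signal attribution with a responsibility
# threshold. Illustrative only; not the study's actual rule definitions.

from collections import defaultdict

def attribute_episode(claims, threshold=0.30):
    """claims: list of (physician_id, cost) for one episode of care.
    Returns the responsible physician, or None if no physician clears
    the threshold share of total episode costs."""
    costs = defaultdict(float)
    for physician, cost in claims:
        costs[physician] += cost
    total = sum(costs.values())
    if total == 0:
        return None
    physician, top_cost = max(costs.items(), key=lambda kv: kv[1])
    return physician if top_cost / total >= threshold else None

episode = [("dr_a", 500.0), ("dr_b", 200.0), ("dr_b", 100.0)]
print(attribute_episode(episode))  # "dr_a" (62.5% of episode costs)
```

Varying the unit of analysis (patient rather than episode), the signal (visits rather than costs), the number of physicians assignable, and the threshold produces the family of rules whose disagreement is quantified above.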
A methodological framework for the cost-effectiveness evaluation of diagnostic tests for mass screening is presented. The decision rule is based on disease incidence, probabilities of test error, the cost of the test and of treatment for found cases, and the economic value (expected lifetime earnings or equivalent) of additional length or quality of life for those cured of the disease. The decision rule is applied to the Pap test for cervical cancer, with results showing that as a one-time screening device the test is cost-effective from society's standpoint. Extensions of the method would permit estimation of the disease incidence at which a given test or treatment would be cost-effective; would permit estimation of the breakeven price of test and treatment with given disease incidence; and would allow determination of optimal testing frequency.
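One simple form such a decision rule can take is an expected-net-benefit criterion per person screened; the expression below is an illustrative reconstruction from the quantities named above, not the paper's exact formula:

```latex
% Illustrative expected net benefit of screening one person, relative to
% no screening, using the quantities named in the abstract: disease
% incidence p, test sensitivity Se and specificity Sp, test cost C_t,
% treatment cost C_r, and economic value V of the added length/quality
% of life for a cured case.
\[
  \mathrm{ENB} \;=\; p\,Se\,(V - C_r) \;-\; (1-p)(1-Sp)\,C_r \;-\; C_t .
\]
% Screening is cost-effective under this criterion when ENB > 0; setting
% ENB = 0 and solving for p gives a breakeven incidence, and solving for
% C_t gives a breakeven price of the test, mirroring the extensions
% mentioned in the abstract.
```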
Several studies documented substantial variation in medical practice patterns, but physicians often do not have adequate information on the cumulative clinical and financial effects of their decisions. The purpose of developing an expert system for the analysis of clinical practice patterns was to assist providers in analyzing and improving the process and outcome of patient care. The developed QFES (Quality Feedback Expert System) helps users in the definition and evaluation of measurable quality improvement objectives. Based on objectives and actual clinical data, several measures can be calculated (utilization of procedures, annualized cost effect of using a particular procedure, and expected utilization based on peer-comparison and case-mix adjustment). The quality management rules help to detect important discrepancies among members of the selected provider group and compare performance with objectives. The system incorporates a variety of data and knowledge bases: (i) clinical data on actual practice patterns, (ii) frames of quality parameters derived from clinical practice guidelines, and (iii) rules of quality management for data analysis. An analysis of practice patterns of 12 family physicians in the management of urinary tract infections illustrates the use of the system.
Invasive candidiasis is a frequent life-threatening complication in critically ill patients. Early diagnosis followed by prompt treatment aimed at improving outcome by minimizing unnecessary antifungal use remains a major challenge in the ICU setting. Timely patient selection thus plays a key role for clinically efficient and cost-effective management. Approaches combining clinical risk factors and Candida colonization data have improved our ability to identify such patients early. While the negative predictive value of scores and predicting rules is up to 95 to 99%, the positive predictive value is much lower, ranging between 10 and 60%. Accordingly, if a positive score or rule is used to guide the start of antifungal therapy, many patients may be treated unnecessarily. Candida biomarkers display higher positive predictive values; however, they lack sensitivity and are thus not able to identify all cases of invasive candidiasis. The (1→3)-β-D-glucan (BG) assay, a panfungal antigen test, is recommended as a complementary tool for the diagnosis of invasive mycoses in high-risk hemato-oncological patients. Its role in the more heterogeneous ICU population remains to be defined. More efficient clinical selection strategies combined with performant laboratory tools are needed in order to treat the right patients at the right time by keeping costs of screening and therapy as low as possible. The new approach proposed by Posteraro and colleagues in the previous issue of Critical Care meets these requirements. A single positive BG value in medical patients admitted to the ICU with sepsis and expected to stay for more than 5 days preceded the documentation of candidemia by 1 to 3 days with an unprecedented diagnostic accuracy. Applying this one-point fungal screening on a selected subset of ICU patients with an estimated 15 to 20% risk of developing candidemia is an appealing and potentially cost-effective approach. If confirmed by multicenter investigations, and extended to surgical patients at high risk of invasive candidiasis after abdominal surgery, this Bayesian-based risk stratification approach aimed at maximizing clinical efficiency by minimizing health care resource utilization may substantially simplify the management of critically ill patients at risk of invasive candidiasis.
Atrial fibrillation (AF) has been recognised as an important independent risk factor for thromboembolic disease, particularly stroke, for which it confers a five-fold increase in risk. This study aimed to determine the baseline prevalence and the incidence of AF based on a variety of screening strategies and, in doing so, to evaluate the incremental cost-effectiveness of different screening strategies, including targeted or whole population screening, compared with routine clinical practice, for detection of AF in people aged 65 and over. The value of clinical assessment and echocardiography as additional methods of risk stratification for thromboembolic disease in patients with AF was also evaluated.
The study design was a multi-centre randomised controlled trial with a study population of patients aged 65 and over from 50 general practices in the West Midlands. These purposefully selected general practices were randomly allocated to 25 intervention practices and 25 control practices. GPs and practice nurses within the intervention practices received education on the importance of AF detection and ECG interpretation. Patients in the intervention practices were randomly allocated to systematic (n = 5000) or opportunistic screening (n = 5000). Prospective identification of pre-existing risk factors for AF within the screened population enabled comparison between high-risk targeted screening and total population screening. AF detection rates in the systematically and opportunistically screened populations in the intervention practices were compared to the AF detection rate in 5,000 patients in the control practices.
Objective: To measure the accuracy of automated tuberculosis case detection.
Setting: An inner-city medical center.
Intervention: An electronic medical record and a clinical event monitor with a natural language processor were used to detect tuberculosis cases according to Centers for Disease Control criteria.
Measurement: Cases identified by the automated system were compared to the local health department's tuberculosis registry, and positive predictive value and sensitivity were calculated.
Results: The best automated rule was based on tuberculosis cultures; it had a sensitivity of .89 (95% CI .75-.96) and a positive predictive value of .96 (.89-.99). All other rules had a positive predictive value less than .20. A rule based on chest radiographs had a sensitivity of .41 (.26-.57) and a positive predictive value of .03 (.02-.05), and a rule that represented the overall Centers for Disease Control criteria had a sensitivity of .91 (.78-.97) and a positive predictive value of .15 (.12-.18). The culture-based rule was the most useful rule for automated case reporting to the health department, and the chest radiograph-based rule was the most useful rule for improving tuberculosis respiratory isolation compliance.
Conclusions: Automated tuberculosis case detection is feasible and useful, although the predictive value of most of the clinical rules was low. The usefulness of an individual rule depends on the context in which it is used. The major challenge facing automated detection is the availability and accuracy of electronic clinical data.
To examine the potential utility of echocardiography and NT-proBNP for heart failure (HF) risk stratification in concert with a validated clinical HF risk score in older adults.
Without clinical guidance, echocardiography and natriuretic peptides have suboptimal test characteristics for population-wide HF risk stratification. However, the value of these tests has not been examined in concert with a clinical HF risk score.
We evaluated the improvement in 5-year HF risk prediction offered by adding an echocardiographic score and/or NT-proBNP levels to the clinical Health ABC HF Risk Score (base model) in 3752 participants of the Cardiovascular Health Study (age, 72.6±5.4 years; 40.8% men; 86.5% white). The echocardiographic score was derived as the weighted sum of independent echocardiographic predictors of HF. We assessed changes in Bayesian information criterion (BIC), C index, integrated discrimination improvement (IDI), and net reclassification improvement (NRI). We also examined the weighted NRI across baseline HF risk categories under multiple scenarios of event versus nonevent weighting.
Reduced left ventricular ejection fraction, abnormal E/A ratio, enlarged left atrium, and increased left ventricular mass were independent echocardiographic predictors of HF. Adding the echocardiographic score and NT-proBNP levels to the clinical model improved BIC (echocardiography: −43, NT-proBNP: −64.1, combined: −68.9; all p<0.001) and C index (baseline 0.746; echocardiography: +0.031, NT-proBNP: +0.027, combined: +0.043; all p<0.01) and yielded robust IDI (echocardiography: 43.3%, NT-proBNP: 42.2%, combined: 61.7%; all p<0.001) and NRI (based on Health ABC HF risk groups; echocardiography: 11.3%, NT-proBNP: 10.6%, combined: 16.3%; all p<0.01). Participants at intermediate risk by the clinical model (5% to 20% 5-year HF risk; 35.7% of the cohort) derived the most reclassification benefit. Echocardiography yielded modest reclassification when used sequentially after NT-proBNP.
In older adults, echocardiography and NT-proBNP offer significant HF risk reclassification over a clinical prediction model, especially for intermediate risk individuals.
epidemiology; heart failure; risk score; risk prediction; risk stratification
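For reference, the categorical net reclassification improvement used above is commonly defined as follows; this is the standard definition, and the abstract does not spell out the exact variant applied:

```latex
% Standard categorical NRI: upward/downward movements across risk
% categories are credited for events and debited for nonevents.
\[
  \mathrm{NRI} =
  \bigl[\Pr(\text{up} \mid \text{event}) - \Pr(\text{down} \mid \text{event})\bigr]
  +
  \bigl[\Pr(\text{down} \mid \text{nonevent}) - \Pr(\text{up} \mid \text{nonevent})\bigr] .
\]
```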
Risk stratification of atrial fibrillation (AF) and adequate thromboembolism prophylaxis is the cornerstone of treatment in patients with AF. Current risk stratification schemes such as the CHADS2 and CHA2DS2-VASc scores are based on clinical risk factors and suboptimally weight the risk/benefit of anticoagulation. Recently, the potential of biomarkers (troponin and NT-proBNP) in the RE-LY biomarker sub-analysis has been demonstrated. Echocardiography is also being evaluated as a possible approach to improve risk score performance. The authors present an overview on AF risk stratification and discuss future potential developments that may be introduced into our current risk stratification schemes.
Anticoagulation; Atrial fibrillation; Risk stratification; Stroke; Thromboembolism
In association rule mining, evaluating an association rule typically requires repeated scans of the database to compare the whole database with the antecedent of a rule, its consequent, and the whole rule. In order to reduce the number of comparisons and the time consumed, we present an attribute index strategy. It needs only a single scan of the database to create an index for each attribute; all metric values needed to evaluate an association rule can then be obtained from the attribute indices without further database scans. The paper formulates association rule mining as a multiobjective problem rather than a single-objective one. In order to make the acquired solutions scatter uniformly toward the Pareto frontier in the objective space, an elitism policy and uniform design are introduced. The paper presents an algorithm for attribute-index and uniform-design based multiobjective association rule mining with an evolutionary algorithm, abbreviated as IUARMMEA. It no longer requires user-specified minimum support and minimum confidence, but uses a simple attribute index. It uses a well-designed real encoding so as to extend its application scope. Experiments performed on several databases demonstrate that the proposed algorithm has excellent performance and can significantly reduce the number of comparisons and time consumption.
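A minimal sketch of the attribute index idea described above: one scan builds, for each attribute value, the set of row IDs containing it, after which the support and confidence of any rule come from set intersections rather than rescans (a simplified illustration, not the IUARMMEA implementation):

```python
# Sketch: attribute index for association-rule evaluation. One pass over
# the database records, per (attribute, value), the set of row IDs; rule
# metrics are then computed from set intersections without rescanning.
# Data and the example rule are illustrative.

from collections import defaultdict

def build_index(rows):
    index = defaultdict(set)
    for rid, row in enumerate(rows):
        for attr, value in row.items():
            index[(attr, value)].add(rid)
    return index

def rows_matching(index, items):
    """Row IDs containing every (attribute, value) pair in `items`."""
    sets = [index[item] for item in items]
    return set.intersection(*sets) if sets else set()

def support_confidence(index, n_rows, antecedent, consequent):
    a = rows_matching(index, antecedent)
    ab = a & rows_matching(index, consequent)
    support = len(ab) / n_rows
    confidence = len(ab) / len(a) if a else 0.0
    return support, confidence

rows = [
    {"fever": "yes", "cough": "yes", "flu": "yes"},
    {"fever": "yes", "cough": "no",  "flu": "no"},
    {"fever": "no",  "cough": "yes", "flu": "no"},
    {"fever": "yes", "cough": "yes", "flu": "yes"},
]
idx = build_index(rows)
print(support_confidence(idx, len(rows),
                         [("fever", "yes"), ("cough", "yes")],
                         [("flu", "yes")]))  # (0.5, 1.0)
```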
Although paediatric patients have an increased risk for adverse drug events, few detection methodologies target this population. To utilise computerised adverse event surveillance, specialised trigger rules are required to accommodate the unique needs of children. The aim was to develop new, tailored rules sustainable for review and robust enough to support aggregate event rate monitoring.
The authors utilised a voluntary staff incident-reporting system, lab values and physician insight to design trigger rules. During Phase 1, problem areas were identified by reviewing 5 years of paediatric voluntary incident reports. Based on these findings, historical lab electrolyte values were analysed to devise critical value thresholds. This evidence informed Phase 2 rule development. For 3 months, surveillance alerts were evaluated for occurrence of adverse drug events.
In Phase 1, replacement preparations and total parenteral nutrition accounted for the largest share (36.6%) of adverse drug events in 353 paediatric patients. During Phase 2, nine new trigger rules produced 225 alerts in 103 paediatric inpatients. Of these, 14 adverse drug events were found by the paediatric hypoglycaemia rule, but all other electrolyte trigger rules were ineffective. Compared with the adult-focused hypoglycaemia rule, the new, tailored version increased the paediatric event detection rate from 0.43 to 1.51 events per 1000 patient days.
Relying solely on absolute lab values to detect electrolyte-related adverse drug events did not meet our goals. Use of compound rule logic improved detection of hypoglycaemia. More success may be found in designing real-time rules that leverage lab trends and additional clinical information.
Paediatrics; adverse drug event; computerised surveillance; trigger tool; information technology; medication error
Febrile neutropenia is a common and potentially life-threatening complication of treatment for childhood cancer, which has increasingly been subject to targeted treatment based on clinical risk stratification. Our previous meta-analysis demonstrated that 16 rules had been described, only 2 of which had been validated in more than one study. We aimed to advance the evidence on the discriminatory ability and predictive accuracy of such risk stratification clinical decision rules (CDR) for children and young people with cancer by updating our systematic review.
The review was conducted in accordance with Centre for Reviews and Dissemination methods, searching multiple electronic databases, using two independent reviewers, formal critical appraisal with QUADAS and meta-analysis with random effects models where appropriate. It was registered with PROSPERO: CRD42011001685.
We found 9 new publications describing a further 7 new CDR, and validations of 7 rules. Six CDR have now been subject to testing across more than two data sets. Most validations demonstrated the rule to be less efficient than when initially proposed; geographical differences appeared to be one explanation for this.
The use of clinical decision rules will require local validation before widespread use. Considerable uncertainty remains over the most effective rule to use in each population, and an ongoing individual-patient-data meta-analysis should develop and test a more reliable CDR to improve stratification and optimise therapy. Despite current challenges, we believe it will be possible to define an internationally effective CDR to harmonise the treatment of children with febrile neutropenia.
With the growing national dissemination of the electronic health record (EHR), there are expectations that algorithms to identify disease-based cohorts for health services research will be deployable across health care organizations. Toward that goal, a novel associative classification framework was designed to generate prediction rules to identify cases similar to the exemplar cases on which it was trained. It processes exemplars for any medical condition without modification. The framework is distinguished by core candidate data attributes based on common EHR observation categories, application of associative classification methods to cull disease-specific attributes and predictive rules from the core attributes, and support for attribute concept hierarchies to manage the various layers of granularity in native EHR data. The framework processes and an evaluation of prediction rules generated to identify diabetes mellitus are presented.
Because many illnesses show heterogeneous response to treatment, there is increasing interest in individualizing treatment to patients. An individualized treatment rule is a decision rule that recommends treatment according to patient characteristics. We consider the use of clinical trial data in the construction of an individualized treatment rule leading to the highest mean response. This is a difficult computational problem because the objective function is the expectation of a weighted indicator function that is non-concave in the parameters. Furthermore, there are frequently many pretreatment variables that may or may not be useful in constructing an optimal individualized treatment rule, yet cost and interpretability considerations imply that only a few variables should be used by the individualized treatment rule. To address these challenges we consider estimation based on l1 penalized least squares. This approach is justified via a finite sample upper bound on the difference between the mean response due to the estimated individualized treatment rule and the mean response due to the optimal individualized treatment rule.
decision making; l1 penalized least squares; Value
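A sketch of the estimator described above, under a linear working model for the response (the notation, basis functions, and any observation weighting are assumptions; the paper's exact formulation is not reproduced):

```latex
% Working model: response R regressed on features of pretreatment
% variables X and treatment A via a basis Phi(X, A); beta is estimated
% by l1-penalized least squares, and the individualized treatment rule
% recommends the treatment with the highest predicted response.
\[
  \hat\beta \;=\; \arg\min_{\beta}\;
  \frac{1}{n}\sum_{i=1}^{n}\bigl(R_i - \Phi(X_i, A_i)^{\top}\beta\bigr)^{2}
  \;+\; \lambda_n \lVert \beta \rVert_{1},
  \qquad
  \hat d(x) \;=\; \arg\max_{a}\; \Phi(x, a)^{\top}\hat\beta .
\]
% The l1 penalty shrinks most coefficients to zero, so the estimated
% rule depends on only a few pretreatment variables, addressing the cost
% and interpretability constraints mentioned above.
```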
Accurate clinical problem lists are critical for patient care, clinical decision support, population reporting, quality improvement, and research. However, problem lists are often incomplete or out of date.
To determine whether a clinical alerting system, which uses inference rules to notify providers of undocumented problems, improves problem list documentation.
Study Design and Methods
Inference rules for 17 conditions were constructed and an electronic health record-based intervention was evaluated to improve problem documentation. A cluster randomized trial was conducted in 11 participating clinics affiliated with a large academic medical center, totaling 28 primary care clinical areas, with 14 receiving the intervention and 14 serving as controls. The intervention was a clinical alert directed to the provider that suggested adding a problem to the electronic problem list based on inference rules. The primary outcome measure was acceptance of the alert. The number of study problems added in each arm was also assessed as a pre-specified secondary outcome. Data were collected during 6-month pre-intervention (11/2009–5/2010) and intervention (5/2010–11/2010) periods.
17 043 alerts were presented, of which 41.1% were accepted. In the intervention arm, providers documented significantly more study problems (adjusted OR=3.4, p<0.001), with an absolute difference of 6277 additional problems. In the intervention group, 70.4% of all study problems were added via the problem list alerts. Significant increases in problem notation were observed for 13 of 17 conditions.
Problem inference alerts significantly increase notation of important patient problems in primary care, which in turn has the potential to facilitate quality improvement.
Problem list; clinical decision support; data mining; automated inference; meaningful use; quality of care; quality measurement; electronic health records; knowledge representations; classical experimental and quasi-experimental study methods (lab and field); designing usable (responsive) resources and systems; statistical analysis of large datasets
The San Francisco Syncope Rule has been proposed as a clinical decision rule for risk stratification of patients presenting to the emergency department with syncope. It has been validated across various populations and settings. We undertook a systematic review of its accuracy in predicting short-term serious outcomes.
We identified studies by means of systematic searches in seven electronic databases from inception to January 2011. We extracted study data in duplicate and used a bivariate random-effects model to assess the predictive accuracy and test characteristics.
We included 12 studies with a total of 5316 patients, of whom 596 (11%) experienced a serious outcome. The prevalence of serious outcomes across the studies varied between 5% and 26%. The pooled estimate of sensitivity of the San Francisco Syncope Rule was 0.87 (95% confidence interval [CI] 0.79–0.93), and the pooled estimate of specificity was 0.52 (95% CI 0.43–0.62). There was substantial between-study heterogeneity (resulting in a 95% prediction interval for sensitivity of 0.55–0.98). The probability of a serious outcome given a negative score with the San Francisco Syncope Rule was 5% or lower, and the probability was 2% or lower when the rule was applied only to patients for whom no cause of syncope was identified after initial evaluation in the emergency department. The most common cause of false-negative classification for a serious outcome was cardiac arrhythmia.
The San Francisco Syncope Rule should be applied only for patients in whom no cause of syncope is evident after initial evaluation in the emergency department. Consideration of all available electrocardiograms, as well as arrhythmia monitoring, should be included in application of the San Francisco Syncope Rule. Between-study heterogeneity was likely due to inconsistent classification of arrhythmia.
Optimal glycemic control prevents the onset of diabetes complications. Identifying diabetic patients at risk of poor glycemic control could help promoting dedicated interventions. The purpose of this study was to identify predictors of poor short-term and long-term glycemic control in older diabetic in-patients.
A total of 1354 older diabetic in-patients consecutively enrolled in a multicenter study formed the training population (retrospective arm); 264 patients consecutively admitted to a ward of general medicine formed the testing population (prospective arm). Glycated hemoglobin (HbA1c) was measured on admission and one year after the discharge in the testing population. Independent correlates of a discharge glycemia ≥ 140 mg/dl in the training population were assessed by logistic regression analysis and a clinical prediction rule was developed. The ability of the prediction rule and that of admission HbA1c to predict discharge glycemia ≥ 140 mg/dl and HbA1c > 7% one year after discharge was assessed in the testing population.
Selected admission variables (diastolic arterial pressure < 80 mmHg, glycemia 143–218 mg/dl, glycemia > 218 mg/dl, history of insulin or combined hypoglycemic therapy, Charlson index > 2) were combined to obtain a score predicting a discharge fasting glycemia ≥ 140 mg/dl in the training population. A modified score was obtained by adding 1 if admission HbA1c exceeded 7.8%. The modified score was the best predictor of both discharge glycemia ≥ 140 mg/dl (sensitivity = 79%, specificity = 63%) and 1-year HbA1c > 7% (sensitivity = 72%, specificity = 71%) in the testing population.
A simple clinical prediction rule might help identify older diabetic in-patients at risk of both short and long term poor glycemic control.
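A minimal sketch of how such a prediction score could be applied at the bedside, assuming one point per criterion plus the HbA1c modifier; the actual point weights and decision cut-off of the published rule are not given in the abstract and are assumptions here:

```python
# Sketch: scoring an admission against the criteria listed above.
# Point weights (1 per criterion) and the +1 HbA1c modifier applied
# uniformly are assumptions for illustration only.

def glycemic_risk_score(diastolic_bp, admission_glycemia_mg_dl,
                        insulin_or_combined_therapy, charlson_index,
                        admission_hba1c_pct):
    score = 0
    if diastolic_bp < 80:
        score += 1
    if 143 <= admission_glycemia_mg_dl <= 218:
        score += 1
    elif admission_glycemia_mg_dl > 218:
        score += 1
    if insulin_or_combined_therapy:
        score += 1
    if charlson_index > 2:
        score += 1
    if admission_hba1c_pct > 7.8:   # the "modified score" adjustment
        score += 1
    return score

print(glycemic_risk_score(75, 230, True, 3, 8.2))  # 5 with these inputs
```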
Back pain remains a challenge for primary care internationally. One model that has not been tested is stratification of the management according to the patient's prognosis (low, medium, or high risk). We compared the clinical effectiveness and cost-effectiveness of stratified primary care (intervention) with non-stratified current best practice (control).
1573 adults (aged ≥18 years) who had consulted for back pain (with or without radiculopathy) at ten general practices in England responded to invitations to attend an assessment clinic. Eligible participants were randomly assigned, by use of computer-generated stratified blocks with a 2:1 ratio, to the intervention or control group. The primary outcome was the effect of treatment on the Roland Morris Disability Questionnaire (RMDQ) score at 12 months. In the economic evaluation, we focused on estimating incremental quality-adjusted life years (QALYs) and health-care costs related to back pain. Analysis was by intention to treat. This study is registered, number ISRCTN37113406.
851 patients were assigned to the intervention (n=568) and control groups (n=283). Overall, adjusted mean changes in RMDQ scores were significantly higher in the intervention group than in the control group at 4 months (4·7 [SD 5·9] vs 3·0 [5·9], between-group difference 1·81 [95% CI 1·06–2·57]) and at 12 months (4·3 [6·4] vs 3·3 [6·2], 1·06 [0·25–1·86]), equating to effect sizes of 0·32 (0·19–0·45) and 0·19 (0·04–0·33), respectively. At 12 months, stratified care was associated with a mean increase in generic health benefit (0·039 additional QALYs) and cost savings (£240·01 vs £274·40) compared with the control group.
The results show that a stratified approach, by use of prognostic screening with matched pathways, will have important implications for the future management of back pain in primary care.
Arthritis Research UK.