Rule-based and information-integration category learning were compared under minimal and full feedback conditions. Rule-based category structures are those for which the optimal rule is verbalizable. Information-integration category structures are those for which the optimal rule is not verbalizable. With minimal feedback subjects are told whether their response was correct or incorrect, but are not informed of the correct category assignment. With full feedback subjects are informed of the correctness of their response and are also informed of the correct category assignment. An examination of the distinct neural circuits that subserve rule-based and information-integration category learning leads to the counterintuitive prediction that full feedback should facilitate rule-based learning but should also hinder information-integration learning. This prediction was supported in the experiment reported below. The implications of these results for theories of learning are discussed.
Perceptual categorization; Cognitive neuroscience of categorization; Reinforcement learning; Dopamine; Striatum; Bayesian hypothesis testing; Feedback
Patients with pulmonary embolism (PE) can be stratified into two different prognostic categories, based on the presence or absence of shock or sustained arterial hypotension. Some patients with normotensive PE have a low risk of early mortality, defined as <1% at 30 days or during the hospital stay. In this paper, we discuss new perspectives on the optimal management of low-risk PE: prognostic assessment, early discharge, and single-drug therapy with new oral anticoagulants. Several parameters have been proposed and investigated to identify low-risk PE: clinical prediction rules, imaging tests, and laboratory markers of right ventricular dysfunction or injury. Moreover, outpatient management has been suggested for low-risk PE: it may decrease unnecessary hospitalizations, hospital-acquired infections, deaths, and costs, and improve health-related quality of life. Finally, the main characteristics of the new oral anticoagulant drugs and the most recently published data from phase III trials in PE suggest that single-drug therapy is a suitable option. The oral administration, predictable anticoagulant response, and few drug-drug interactions of direct thrombin and factor Xa inhibitors may further simplify PE home therapy by avoiding administration of low-molecular-weight heparin.
Due to the increasing prevalence and severity of invasive candidiasis, investigators have developed clinical prediction rules to identify patients who may benefit from antifungal prophylaxis or early empiric therapy. The aims of this study were to validate and compare the Paphitou and Ostrosky-Zeichner clinical prediction rules in ICU patients in a 689-bed academic medical center.
We conducted a retrospective matched case-control study from May 2003 to June 2008 to evaluate the sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of each rule. Cases included adults with ICU stays of at least four days and invasive candidiasis matched to three controls by age, gender and ICU admission date. The clinical prediction rules were applied to cases and controls via retrospective chart review to evaluate the success of the rules in predicting invasive candidiasis. Paphitou's rule included diabetes, total parenteral nutrition (TPN) and dialysis with or without antibiotics. Ostrosky-Zeichner's rule included antibiotics or central venous catheter plus at least two of the following: surgery, immunosuppression, TPN, dialysis, corticosteroids and pancreatitis. Conditional logistic regression was performed to evaluate the rules. Discriminative power was evaluated by area under the receiver operating characteristic curve (AUC ROC).
A total of 352 patients were included (88 cases and 264 controls). The incidence of invasive candidiasis among adults with an ICU stay of at least four days was 2.3%. The prediction rules performed similarly, exhibiting low PPVs (0.041 to 0.054), high NPVs (0.983 to 0.990) and AUC ROCs (0.649 to 0.705). A new prediction rule (Nebraska Medical Center rule) was developed with PPVs, NPVs and AUC ROCs of 0.047, 0.994 and 0.770, respectively.
Based on low PPVs and high NPVs, the rules are most useful for identifying patients who are not likely to develop invasive candidiasis, potentially preventing unnecessary antifungal use, optimizing patient ICU care and facilitating the design of forthcoming antifungal clinical trials.
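As a rough sketch of how the four reported metrics relate to a 2×2 table of rule results against outcomes (the counts below are made up for illustration, not the study's data):

```python
# Hypothetical 2x2 counts for a prediction rule (illustrative only):
# rule-positive/negative rows vs. disease yes/no columns.
tp, fp = 4, 80   # rule positive: true positives, false positives
fn, tn = 2, 900  # rule negative: false negatives, true negatives

sensitivity = tp / (tp + fn)   # P(rule+ | disease)
specificity = tn / (tn + fp)   # P(rule- | no disease)
ppv = tp / (tp + fp)           # P(disease | rule+)
npv = tn / (tn + fn)           # P(no disease | rule-)

# A rule with low PPV but high NPV is mainly useful for ruling disease out.
print(f"sens={sensitivity:.3f} spec={specificity:.3f} "
      f"PPV={ppv:.3f} NPV={npv:.3f}")
```

With a low-incidence condition such as invasive candidiasis, even a reasonably specific rule yields few true positives per alert, which is why the PPVs above stay low while the NPVs stay high.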
candidiasis; clinical prediction rules; prophylaxis
Tools for early identification of workers with back pain who are at high risk of adverse occupational outcome would help concentrate clinical attention on the patients who need it most, while helping reduce unnecessary interventions (and costs) among the others. This study was conducted to develop and validate clinical rules to predict the 2-year work disability status of people consulting for nonspecific back pain in primary care settings.
This was a 2-year prospective cohort study conducted in 7 primary care settings in the Quebec City area. The study enrolled 1007 workers (participation, 68.4% of potential participants expected to be eligible) aged 18–64 years who consulted for nonspecific back pain associated with at least 1 day's absence from work. The majority (86%) completed 5 telephone interviews documenting a large array of variables. Clinical information was abstracted from the medical files. The outcome measure was “return to work in good health” at 2 years, a variable that combined patients' occupational status, functional limitations and recurrences of work absence. Predictive models of 2-year outcome were developed with a recursive partitioning approach on a 40% random sample of our study subjects, then validated on the rest.
The best predictive model included 7 baseline variables (patient's recovery expectations, radiating pain, previous back surgery, pain intensity, frequent change of position because of back pain, irritability and bad temper, and difficulty sleeping) and was particularly efficient at identifying patients with no adverse occupational outcome (negative predictive value 78%–94%).
A clinical prediction rule accurately identified a large proportion of workers with back pain consulting in a primary care setting who were at a low risk of an adverse occupational outcome.
Background and Purpose
Successful outcomes from bacterial meningitis require rapid antibiotic treatment; however, unnecessary treatment of viral meningitis may lead to increased toxicity and expense. Thus, improved diagnostics are required to optimize treatment and minimize side effects and cost. Thirteen clinical decision rules have been reported to distinguish bacterial from viral meningitis. However, few rules have been tested and compared in a single study, and several rules have yet to be tested by independent researchers or in pediatric populations. Thus, simultaneous testing and comparison of these rules are required to enable clinicians to select an optimal diagnostic rule for bacterial meningitis in settings and populations similar to ours.
A retrospective cross-sectional study was conducted at the Infectious Department of Pediatric Hospital Number 1, Ho Chi Minh City, Vietnam. The performance of the clinical rules was evaluated by area under a receiver operating characteristic curve (ROC-AUC) using the method of DeLong and McNemar test for specificity comparison.
Our study included 129 patients, of whom 80 had bacterial meningitis and 49 had presumed viral meningitis. Spanos's rule had the highest AUC, at 0.938, but it was not significantly greater than that of the other rules. No rule provided 100% sensitivity with a specificity higher than 50%. Based on our calculation of theoretical sensitivity and specificity, we suggest that a perfect rule requires at least four independent variables that each possess both sensitivity and specificity higher than 85–90%.
No clinical decision rule provided an acceptable specificity (>50%) with 100% sensitivity when applied to our data set in children. More studies in Vietnam and other developing countries are required to develop and/or validate clinical rules, and better biomarkers are required to construct such a perfect rule.
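The claim about combining independent variables can be illustrated with a small calculation: for a rule that fires when any of k conditionally independent predictors is positive, sensitivity compounds while specificity erodes. The 85%/90% figures below follow the abstract's suggested thresholds; the independence assumption is an idealization.

```python
# OR-rule over k conditionally independent predictors: the rule is
# positive if ANY predictor is positive. Illustrative values only.
def or_rule_performance(sens, spec, k):
    """Combined sensitivity/specificity of an OR-rule over k
    conditionally independent predictors, each with the given
    per-predictor sensitivity and specificity."""
    combined_sens = 1 - (1 - sens) ** k  # misses only if all k miss
    combined_spec = spec ** k            # negative only if all k are negative
    return combined_sens, combined_spec

# Four predictors at 85% sensitivity / 90% specificity give near-perfect
# combined sensitivity while keeping specificity above 50%.
s, sp = or_rule_performance(0.85, 0.90, 4)
```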
Clinical practice guidelines must comprehensively address all logically possible situations, but this completeness may result in sizable and cumbersome rule sets. We applied rule set reduction techniques to a 576-rule set regarding recommendations for medication treatment of hypercholesterolemia. Using decision tables augmented with information regarding test costs and rule application frequencies, we sorted the rule sets prior to identifying irrelevant tests and eliminating unnecessary rules. Alternatively, we examined the semantic relationships among risk factors in hypercholesterolemia and applied a subsumption technique to reduce the rule set. Both methodologies resulted in substantial rule set compression (mean, 48-70%). Subsumption techniques proved superior for compacting a large rule set based on risk factors.
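A toy sketch of the subsumption idea, under the simplifying assumption that a rule is subsumed when a strictly more general rule (fewer conditions, same action) covers every situation it covers; the rules themselves are hypothetical, not drawn from the hypercholesterolemia set:

```python
# Subsumption-based rule-set reduction on hypothetical rules.
def is_subsumed(specific, general):
    """specific/general are dicts of condition -> required value; a rule
    with fewer conditions is more general. Same action is checked by the
    caller."""
    return all(specific.get(k) == v for k, v in general.items()) \
        and len(general) < len(specific)

rules = [
    {"cond": {"ldl": "high", "diabetes": True}, "action": "statin"},
    {"cond": {"ldl": "high"}, "action": "statin"},   # more general rule
    {"cond": {"ldl": "normal"}, "action": "lifestyle"},
]

# Keep only rules not subsumed by a more general rule with the same action.
reduced = [r for r in rules
           if not any(r is not g and r["action"] == g["action"]
                      and is_subsumed(r["cond"], g["cond"]) for g in rules)]
# The diabetes-specific rule is redundant: the general LDL rule covers it.
```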
Chest pain remains a diagnostic challenge: physicians do not want to miss an acute coronary syndrome (ACS), but they also wish to avoid unnecessary additional diagnostic procedures. In approximately 75% of patients presenting with chest pain at the emergency department (ED), there is no underlying cardiac cause. Therefore, diagnostic strategies focus on identifying patients in whom an ACS can be safely ruled out based on findings from history, physical examination and early cardiac marker measurement. The HEART score, a clinical prediction rule, was developed to provide the clinician with a simple, early and reliable predictor of cardiac risk. We set out to quantify the impact of the use of the HEART score in daily practice on patient outcomes and costs.
We designed a prospective, multi-centre, stepped wedge, cluster randomised trial. Our aim is to include a total of 6600 unselected chest pain patients presenting at the ED in 10 Dutch hospitals during an 11-month period. All clusters (i.e. hospitals) start with a period of ‘usual care’ and are randomised in their timing of when to switch to ‘intervention care’. The latter involves the calculation of the HEART score in each patient to guide clinical decision-making: notably, reassurance and discharge of patients with low scores, and intensive monitoring and early intervention in patients with high HEART scores. The primary outcome is the occurrence of major adverse cardiac events (MACE), including acute myocardial infarction, revascularisation or death, within 6 weeks after presentation. Secondary outcomes include the occurrence of MACE in low-risk patients, quality of life, use of health care resources and costs.
Stepped wedge designs are increasingly used to evaluate the real-life effectiveness of non-pharmacological interventions because of the following potential advantages: (a) each hospital has both a usual care and an intervention period, therefore, outcomes can be compared within and across hospitals; (b) each hospital will have an intervention period which enhances participation in case of a promising intervention; (c) all hospitals generate data about potential implementation problems. This large impact trial will generate evidence whether the anticipated benefits (in terms of safety and cost-effectiveness) of using the HEART score will indeed be achieved in real-life clinical practice.
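One way to picture the stepped wedge allocation is as a randomised crossover schedule in which every cluster eventually switches from usual care to the intervention. The sketch below assumes evenly spaced switch times, which may differ from the trial's actual randomisation procedure:

```python
import random

def stepped_wedge_schedule(n_clusters, n_steps, seed=1):
    """Return {cluster: step at which it switches to intervention}.
    Clusters are randomly ordered; switch times are spread evenly
    across the steps (a simplifying assumption)."""
    rng = random.Random(seed)
    clusters = list(range(n_clusters))
    rng.shuffle(clusters)  # randomise which cluster switches when
    return {c: 1 + i * n_steps // n_clusters
            for i, c in enumerate(clusters)}

# e.g. 10 hospitals crossing over at 10 successive steps
schedule = stepped_wedge_schedule(10, 10)
```

Because every cluster contributes both pre- and post-switch periods, outcomes can be compared within as well as across hospitals, which is advantage (a) noted above.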
HEART score; Chest pain; Clinical prediction rule; Risk score implementation; Impact; Stepped wedge design; Cluster randomised trial
Objective: To develop a knowledge representation model for clinical practice guidelines that is linguistically adequate, comprehensible, reusable, and maintainable.
Design: Decision tables provide the basic framework for the proposed knowledge representation model. Guideline logic is represented as rules in conventional decision tables. These tables are augmented by layers where collateral information is recorded in slots beneath the logic.
Results: Decision tables organize rules into cohesive rule sets wherein complex logic is clarified. Decision table rule sets may be verified to assure completeness and consistency. Optimization and display of rule sets as sequential decision trees may enhance the comprehensibility of the logic. The modularity of the rule formats may facilitate maintenance. The augmentation layers provide links to descriptive language, information sources, decision variable characteristics, costs and expected values of policies, and evidence sources and quality.
Conclusion: Augmented decision tables can serve as a unifying knowledge representation for developers and implementers of clinical practice guidelines.
Many patients undergo non‐invasive testing for the detection of coronary artery disease before non‐cardiac surgery. This is despite the low predictive value of positive tests in this population and the lack of any evidence of benefit of coronary revascularisation before non‐cardiac surgical procedures. Further, this strategy often triggers a clinical cascade, exposing the patient to progressively riskier testing and intervention, and results in increased costs and unnecessary delays. On the other hand, administration of β blockers, and more recently statins, has been shown to reduce the occurrence of perioperative ischaemic events. Therefore, there is a need for a shift in emphasis from risk stratification by non‐invasive testing to risk modification by the application of interventions that prevent perioperative ischaemia—principally, perioperative β adrenergic blockade and perhaps treatment with statins. Clinical risk stratification tools reliably identify patients at high risk of perioperative ischaemic events and can guide the appropriate use of perioperative medical treatment.
perioperative risk, non‐cardiac surgery, preoperative evaluation, β adrenergic blockers
This study sought to develop a functional taxonomy of rule-based clinical decision support.
The rule-based clinical decision support content of a large integrated delivery network with a long history of computer-based point-of-care decision support was reviewed and analyzed along four functional dimensions: trigger, input data elements, interventions, and offered choices.
A total of 181 rule types were reviewed, comprising 7,120 different instances of rule usage. A total of 42 taxa were identified across the four categories. Many rules fell into multiple taxa in a given category. Entered order and stored laboratory result were the most common triggers; laboratory result, drug list, and hospital unit were the most frequent data elements used. Notify and log were the most common interventions, and write order, defer warning, and override rule were the most common offered choices.
A relatively small number of taxa successfully described a large body of clinical knowledge. These taxa can be directly mapped to functions of clinical systems and decision support systems, providing feature guidance for developers, implementers, and certifiers of clinical information systems.
Assess a scoring system to triage women with a pregnancy of unknown location.
Validation of prediction rule.
Women with a pregnancy of unknown location.
Main Outcome Measures
Scores were assigned to factors identified at clinical presentation. A total score was calculated to assess the risk of ectopic pregnancy in women with a pregnancy of unknown location, and a 3-tiered clinical action plan was proposed, with recommendations for low-risk, intermediate-risk and high-risk groups. The recommendation based on the model score was compared with the clinical diagnosis.
The cohort of 1400 women (284 ectopic pregnancies (EP), 759 miscarriages, and 357 intrauterine pregnancies (IUP)) was more diverse than the original cohort used to develop the decision rule. A total of 29.4% of IUPs were identified for less frequent follow-up, and 18.4% of nonviable gestations were identified for more frequent follow-up (to rule out an ectopic pregnancy), compared with the intermediate-risk recommendation (i.e. monitoring in the current standard fashion). For the decision of possible less frequent monitoring, specificity was 90.8% (89.0–92.6) with a negative predictive value of 79.0% (76.7–81.3). For the decision of more intense follow-up, specificity was 95.0% (92.7–97.2). The test characteristics of the scoring system were replicated in the diverse validation cohort.
A scoring system based on symptoms at presentation has value to stratify risk and influence the intensity of outpatient surveillance for women with pregnancy of unknown location but does not serve as a diagnostic tool.
ectopic pregnancy; pregnancy of unknown location; risk factors; scoring system
In British Columbia (BC), we are developing Get Checked Online (GCO), an Internet-based testing program that provides Web-based access to sexually transmitted infection (STI) testing. Much is still unknown about how to implement risk assessment and recommend tests in Web-based settings. Prediction tools have been shown to successfully increase the efficiency and cost-effectiveness of STI case finding in clinical settings.
This project was designed with three main objectives: (1) to derive a risk prediction rule for screening chlamydia and gonorrhea among clients attending two public sexual health clinics between 2000 and 2006 in Vancouver, BC, (2) to assess the temporal generalizability of the prediction rule among more recent visits in the Vancouver clinics (2007-2012), and (3) to assess the geographical generalizability of the rule in seven additional clinics in BC.
This study is a population-based, cross-sectional analysis of electronic records of visits collected at nine publicly funded STI clinics in BC between 2000 and 2012. We will derive a risk score from the multivariate logistic regression of clinic visit data between 2000 and 2006 at two clinics in Vancouver using newly diagnosed chlamydia and gonorrhea infections as the outcome. The area under the receiver operating characteristic curve (AUC) and the Hosmer-Lemeshow statistic will examine the model’s discrimination and calibration, respectively. We will also examine the sensitivity and proportion of patients that would need to be screened at different cutoffs of the risk score. Temporal and geographical validation will be assessed using patient visit data from more recent visits (2007-2012) at the Vancouver clinics and at clinics in the rest of BC, respectively. Statistical analyses will be performed using SAS, version 9.3.
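The AUC planned for assessing discrimination has a useful rank interpretation: it equals the probability that a randomly chosen case is assigned a higher risk score than a randomly chosen non-case (the Mann-Whitney statistic). A minimal sketch with made-up scores:

```python
# AUC as the probability that a case outranks a non-case; ties count 0.5.
def auc(case_scores, control_scores):
    wins = sum((c > n) + 0.5 * (c == n)
               for c in case_scores for n in control_scores)
    return wins / (len(case_scores) * len(control_scores))

cases = [0.9, 0.8, 0.6]          # predicted risks for infected visits
controls = [0.7, 0.4, 0.3, 0.2]  # predicted risks for uninfected visits
# 11 of 12 case/control pairs are correctly ordered here.
```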
This is an ongoing research project with initial results expected in 2014.
The results from this research will have important implications for scaling up of Internet-based testing in BC. If a prediction rule with good calibration, discrimination, and high sensitivity to detect infection is found during this project, the prediction rule could be programmed into GCO so that the program offers individualized testing recommendations to clients. Further, the prediction rule could be adapted into educational materials to inform other Web-based content by creating awareness about STI risk factors, which may stimulate health care seeking behavior among individuals accessing the website.
prediction models; Internet-based testing; sexually transmitted infections
Several studies documented substantial variation in medical practice patterns, but physicians often do not have adequate information on the cumulative clinical and financial effects of their decisions. The purpose of developing an expert system for the analysis of clinical practice patterns was to assist providers in analyzing and improving the process and outcome of patient care. The developed QFES (Quality Feedback Expert System) helps users in the definition and evaluation of measurable quality improvement objectives. Based on objectives and actual clinical data, several measures can be calculated (utilization of procedures, annualized cost effect of using a particular procedure, and expected utilization based on peer-comparison and case-mix adjustment). The quality management rules help to detect important discrepancies among members of the selected provider group and compare performance with objectives. The system incorporates a variety of data and knowledge bases: (i) clinical data on actual practice patterns, (ii) frames of quality parameters derived from clinical practice guidelines, and (iii) rules of quality management for data analysis. An analysis of practice patterns of 12 family physicians in the management of urinary tract infections illustrates the use of the system.
A methodological framework for the cost-effectiveness evaluation of diagnostic tests for mass screening is presented. The decision rule is based on disease incidence, probabilities of test error, the cost of the test and of treatment for found cases, and the economic value (expected lifetime earnings or equivalent) of additional length or quality of life for those cured of the disease. The decision rule is applied to the Pap test for cervical cancer, with results showing that as a one-time screening device the test is cost-effective from society's standpoint. Extensions of the method would permit estimation of the disease incidence at which a given test or treatment would be cost-effective; would permit estimation of the breakeven price of test and treatment with given disease incidence; and would allow determination of optimal testing frequency.
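The flavor of such a decision rule can be sketched as an expected-value calculation per person screened; the simple cost model and all numbers below are illustrative assumptions, not the paper's parameters:

```python
# Screening is worthwhile when the expected value gained exceeds the
# expected cost; all inputs here are hypothetical round numbers.
def screening_net_benefit(incidence, sens, spec, test_cost,
                          treat_cost, value_of_cure):
    p = incidence
    true_pos = p * sens                # cases found (and treated) per person
    false_pos = (1 - p) * (1 - spec)   # healthy people treated needlessly
    benefit = true_pos * value_of_cure
    cost = test_cost + (true_pos + false_pos) * treat_cost
    return benefit - cost

# Screen if the net benefit per person is positive at the assumed incidence.
nb = screening_net_benefit(incidence=0.001, sens=0.8, spec=0.98,
                           test_cost=20, treat_cost=5000,
                           value_of_cure=200000)
```

Solving `screening_net_benefit(...) = 0` for the incidence gives the breakeven disease incidence mentioned as an extension of the method.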
Invasive candidiasis is a frequent life-threatening complication in critically ill patients. Early diagnosis followed by prompt treatment aimed at improving outcome by minimizing unnecessary antifungal use remains a major challenge in the ICU setting. Timely patient selection thus plays a key role for clinically efficient and cost-effective management. Approaches combining clinical risk factors and Candida colonization data have improved our ability to identify such patients early. While the negative predictive value of scores and predicting rules is up to 95 to 99%, the positive predictive value is much lower, ranging between 10 and 60%. Accordingly, if a positive score or rule is used to guide the start of antifungal therapy, many patients may be treated unnecessarily. Candida biomarkers display higher positive predictive values; however, they lack sensitivity and are thus not able to identify all cases of invasive candidiasis. The (1→3)-β-D-glucan (BG) assay, a panfungal antigen test, is recommended as a complementary tool for the diagnosis of invasive mycoses in high-risk hemato-oncological patients. Its role in the more heterogeneous ICU population remains to be defined. More efficient clinical selection strategies combined with performant laboratory tools are needed in order to treat the right patients at the right time by keeping costs of screening and therapy as low as possible. The new approach proposed by Posteraro and colleagues in the previous issue of Critical Care meets these requirements. A single positive BG value in medical patients admitted to the ICU with sepsis and expected to stay for more than 5 days preceded the documentation of candidemia by 1 to 3 days with an unprecedented diagnostic accuracy. Applying this one-point fungal screening on a selected subset of ICU patients with an estimated 15 to 20% risk of developing candidemia is an appealing and potentially cost-effective approach. 
If confirmed by multicenter investigations, and extended to surgical patients at high risk of invasive candidiasis after abdominal surgery, this Bayesian-based risk stratification approach aimed at maximizing clinical efficiency by minimizing health care resource utilization may substantially simplify the management of critically ill patients at risk of invasive candidiasis.
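The contrast between high NPV and low PPV discussed above is a direct consequence of Bayes' rule: for a test of fixed accuracy, PPV rises steeply with pretest risk, which is why restricting screening to a subset with 15 to 20% risk is appealing. A small sketch with assumed round-number sensitivity and specificity, not the BG assay's actual figures:

```python
# PPV as a function of prevalence for a fixed-accuracy test (Bayes' rule).
def ppv(prevalence, sens, spec):
    tp = prevalence * sens             # true positives per person tested
    fp = (1 - prevalence) * (1 - spec) # false positives per person tested
    return tp / (tp + fp)

low = ppv(0.02, 0.90, 0.90)   # unselected ICU population, 2% pretest risk
high = ppv(0.18, 0.90, 0.90)  # enriched subset, 18% pretest risk
# The same test goes from a mostly-false-alarm result to a mostly-true one.
```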
Objective: To measure the accuracy of automated tuberculosis case detection.
Setting: An inner-city medical center.
Intervention: An electronic medical record and a clinical event monitor with a natural language processor were used to detect tuberculosis cases according to Centers for Disease Control criteria.
Measurement: Cases identified by the automated system were compared to the local health department's tuberculosis registry, and positive predictive value and sensitivity were calculated.
Results: The best automated rule was based on tuberculosis cultures; it had a sensitivity of 0.89 (95% CI 0.75–0.96) and a positive predictive value of 0.96 (0.89–0.99). All other rules had a positive predictive value less than 0.20. A rule based on chest radiographs had a sensitivity of 0.41 (0.26–0.57) and a positive predictive value of 0.03 (0.02–0.05), and a rule that represented the overall Centers for Disease Control criteria had a sensitivity of 0.91 (0.78–0.97) and a positive predictive value of 0.15 (0.12–0.18). The culture-based rule was the most useful rule for automated case reporting to the health department, and the chest radiograph-based rule was the most useful rule for improving tuberculosis respiratory isolation compliance.
Conclusions: Automated tuberculosis case detection is feasible and useful, although the predictive value of most of the clinical rules was low. The usefulness of an individual rule depends on the context in which it is used. The major challenge facing automated detection is the availability and accuracy of electronic clinical data.
Atrial fibrillation (AF) has been recognised as an important independent risk factor for thromboembolic disease, particularly stroke, for which it confers a five-fold increase in risk. This study aimed to determine the baseline prevalence and the incidence of AF based on a variety of screening strategies and, in doing so, to evaluate the incremental cost-effectiveness of different screening strategies, including targeted or whole-population screening, compared with routine clinical practice, for detection of AF in people aged 65 and over. The value of clinical assessment and echocardiography as additional methods of risk stratification for thromboembolic disease in patients with AF was also evaluated.
The study design was a multi-centre randomised controlled trial with a study population of patients aged 65 and over from 50 General Practices in the West Midlands. These purposefully selected general practices were randomly allocated to 25 intervention practices and 25 control practices. GPs and practice nurses within the intervention practices received education on the importance of AF detection and ECG interpretation. Patients in the intervention practices were randomly allocated to systematic (n = 5000) or opportunistic screening (n = 5000). Prospective identification of pre-existing risk factors for AF within the screened population enabled comparison between high-risk targeted screening and total population screening. AF detection rates in the systematically and opportunistically screened populations in the intervention practices were compared with the AF detection rate among 5,000 patients in the control practices.
Although paediatric patients have an increased risk for adverse drug events, few detection methodologies target this population. To utilise computerised adverse event surveillance, specialised trigger rules are required to accommodate the unique needs of children. The aim was to develop new, tailored rules sustainable for review and robust enough to support aggregate event rate monitoring.
The authors utilised a voluntary staff incident-reporting system, lab values and physician insight to design trigger rules. During Phase 1, problem areas were identified by reviewing 5 years of paediatric voluntary incident reports. Based on these findings, historical lab electrolyte values were analysed to devise critical value thresholds. This evidence informed Phase 2 rule development. For 3 months, surveillance alerts were evaluated for occurrence of adverse drug events.
In Phase 1, replacement preparations and total parenteral nutrition accounted for the largest proportion (36.6%) of adverse drug events in 353 paediatric patients. During Phase 2, nine new trigger rules produced 225 alerts in 103 paediatric inpatients. Of these, 14 adverse drug events were found by the paediatric hypoglycaemia rule, but all other electrolyte trigger rules were ineffective. Compared with the adult-focused hypoglycaemia rule, the new, tailored version increased the paediatric event detection rate from 0.43 to 1.51 events per 1000 patient days.
Relying solely on absolute lab values to detect electrolyte-related adverse drug events did not meet our goals. Use of compound rule logic improved detection of hypoglycaemia. More success may be found in designing real-time rules that leverage lab trends and additional clinical information.
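A sketch of what compound rule logic might look like compared with an absolute-threshold trigger, along with the per-1000-patient-days rate used above; the thresholds and fields are hypothetical, not the authors' actual rule:

```python
# Compound trigger: flag hypoglycaemia only when a low glucose value
# co-occurs with supporting context (insulin exposure or a sharp drop),
# rather than on the absolute lab value alone. Thresholds are assumed.
def hypoglycaemia_trigger(glucose_mg_dl, on_insulin, prior_glucose):
    low = glucose_mg_dl < 50
    context = on_insulin or (prior_glucose is not None
                             and glucose_mg_dl < prior_glucose - 40)
    return low and context

def events_per_1000_patient_days(n_events, patient_days):
    return 1000 * n_events / patient_days
```

The context condition is what suppresses the false alerts that a bare absolute-value rule would raise on, for example, chronically low but stable values.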
Paediatrics; adverse drug event; computerised surveillance; trigger tool; information technology; medication error
To explore classification rules, based on data mining methodologies, that can be used to define strata in stratified sampling of healthcare providers with improved sampling efficiency.
We performed k-means clustering to group providers with similar characteristics and then constructed decision trees on the cluster labels to generate stratification rules. We assessed the variance explained by the stratification proposed in this study and by conventional stratification to evaluate the performance of the sampling design. We constructed a study database from health insurance claims data and providers' profile data made available to this study by the Health Insurance Review and Assessment Service of South Korea, together with population data from Statistics Korea. From our database, we used the data for single-specialty clinics or hospitals in two specialties, general surgery and ophthalmology, for the year 2011.
Data mining resulted in five strata in general surgery with two stratification variables, the number of inpatients per specialist and population density of provider location, and five strata in ophthalmology with two stratification variables, the number of inpatients per specialist and number of beds. The percentages of variance in annual changes in the productivity of specialists explained by the stratification in general surgery and ophthalmology were 22% and 8%, respectively, whereas conventional stratification by the type of provider location and number of beds explained 2% and 0.2% of variance, respectively.
This study demonstrated that data mining methods can be used in designing efficient stratified sampling with variables readily available to the insurer and government; it offers an alternative to the existing stratification method that is widely used in healthcare provider surveys in South Korea.
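The "percentage of variance explained" used above to compare stratifications can be computed as between-strata variance over total variance, i.e. an R² for the grouping; the data here are made up:

```python
# Proportion of variance in an outcome explained by a stratification:
# 1 - (within-strata sum of squares / total sum of squares).
def variance_explained(values, strata):
    n = len(values)
    grand = sum(values) / n
    ss_total = sum((v - grand) ** 2 for v in values)
    ss_within = 0.0
    for s in set(strata):
        group = [v for v, g in zip(values, strata) if g == s]
        mean = sum(group) / len(group)
        ss_within += sum((v - mean) ** 2 for v in group)
    return 1 - ss_within / ss_total

# Two well-separated strata explain most of the variance in this toy data.
r2 = variance_explained([1.0, 1.2, 3.0, 3.4], ["a", "a", "b", "b"])
```

A stratification with higher variance explained yields more homogeneous strata, and hence more efficient stratified sampling for a fixed sample size.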
Sampling Studies; Decision Trees; Data Mining
To examine the potential utility of echocardiography and NT-proBNP for heart failure (HF) risk stratification in concert with a validated clinical HF risk score in older adults.
Without clinical guidance, echocardiography and natriuretic peptides have suboptimal test characteristics for population-wide HF risk stratification. However, the value of these tests has not been examined in concert with a clinical HF risk score.
We evaluated the improvement in 5-year HF risk prediction offered by adding an echocardiographic score and/or NT-proBNP levels to the clinical Health ABC HF Risk Score (base model) in 3752 participants of the Cardiovascular Health Study (age, 72.6±5.4 years; 40.8% men; 86.5% white). The echocardiographic score was derived as the weighted sum of independent echocardiographic predictors of HF. We assessed changes in the Bayesian information criterion (BIC), C index, integrated discrimination improvement (IDI), and net reclassification improvement (NRI). We also examined the weighted NRI across baseline HF risk categories under multiple scenarios of event versus nonevent weighting.
Reduced left ventricular ejection fraction, abnormal E/A ratio, enlarged left atrium, and increased left ventricular mass were independent echocardiographic predictors of HF. Adding the echocardiographic score and NT-proBNP levels to the clinical model improved the BIC (echocardiography: −43, NT-proBNP: −64.1, combined: −68.9; all p<0.001) and C index (baseline 0.746; echocardiography: +0.031, NT-proBNP: +0.027, combined: +0.043; all p<0.01) and yielded robust IDI (echocardiography: 43.3%, NT-proBNP: 42.2%, combined: 61.7%; all p<0.001) and NRI (based on Health ABC HF risk groups; echocardiography: 11.3%; NT-proBNP: 10.6%, combined: 16.3%; all p<0.01). Participants at intermediate risk by the clinical model (5% to 20% 5-year HF risk; 35.7% of the cohort) derived the most reclassification benefit. Echocardiography yielded modest reclassification when used sequentially after NT-proBNP.
In older adults, echocardiography and NT-proBNP offer significant HF risk reclassification over a clinical prediction model, especially for intermediate-risk individuals.
epidemiology; heart failure; risk score; risk prediction; risk stratification
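The category-based NRI reported above has a simple closed form: the net proportion of events reclassified upward plus the net proportion of nonevents reclassified downward. The sketch below illustrates this, using the 5% and 20% 5-year risk cut points named in the abstract; the risks and outcomes in the usage example are hypothetical, not study data.

```python
import numpy as np

def categorize(risk, cuts=(0.05, 0.20)):
    """Map predicted 5-year HF risk to low / intermediate / high
    categories using the 5% and 20% cut points from the abstract."""
    return np.digitize(risk, cuts)

def category_nri(risk_old, risk_new, event):
    """Category-based net reclassification improvement:
    NRI = [P(up|event) - P(down|event)]
        + [P(down|nonevent) - P(up|nonevent)]."""
    old = categorize(np.asarray(risk_old))
    new = categorize(np.asarray(risk_new))
    event = np.asarray(event, dtype=bool)
    up, down = new > old, new < old
    nri_events = up[event].mean() - down[event].mean()
    nri_nonevents = down[~event].mean() - up[~event].mean()
    return nri_events + nri_nonevents

# Hypothetical example: 4 subjects, base-model vs augmented-model risks.
nri = category_nri([0.03, 0.10, 0.25, 0.10],
                   [0.10, 0.03, 0.25, 0.25],
                   [1, 0, 1, 1])
```

In practice the event indicator would come from 5-year follow-up, and confidence intervals would be obtained by bootstrap or analytic standard errors.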
Acute decompensated heart failure is a common reason for presentation to the emergency department and is associated with high rates of admission to hospital. Distinguishing between higher-risk patients needing hospitalization and lower-risk patients suitable for discharge home is important to optimize both cost-effectiveness and clinical outcomes. However, this can be challenging and few validated risk stratification tools currently exist to help clinicians. Some prognostic variables predict risks broadly in those who are admitted or discharged from the emergency department. Risk stratification methods such as the Emergency Heart Failure Mortality Risk Grade and Acute Heart Failure Index clinical decision support tools, which utilize many of these predictors, have been found to be accurate in identifying low-risk patients. The use of observation units may also be a cost-effective adjunctive strategy that can assist in determining disposition from the emergency department.
Heart failure; Emergency department; Risk stratification; Hospitalization; Hospital discharge
Because many illnesses show heterogeneous response to treatment, there is increasing interest in individualizing treatment to patients. An individualized treatment rule is a decision rule that recommends treatment according to patient characteristics. We consider the use of clinical trial data in the construction of an individualized treatment rule leading to the highest mean response. This is a difficult computational problem because the objective function is the expectation of a weighted indicator function that is non-concave in the parameters. Furthermore, there are frequently many pretreatment variables that may or may not be useful in constructing an optimal individualized treatment rule, yet cost and interpretability considerations imply that only a few variables should be used by the rule. To address these challenges, we consider estimation based on l1 penalized least squares. This approach is justified via a finite sample upper bound on the difference between the mean response due to the estimated individualized treatment rule and the mean response due to the optimal individualized treatment rule.
decision making; l1 penalized least squares; Value
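The l1-penalized least-squares approach can be sketched as follows: regress the response on pretreatment variables and treatment-by-covariate interactions with an l1 penalty, then recommend whichever treatment gives the larger predicted mean response. Everything below (the simulated data, the feature construction, and the penalty level `alpha`) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 10
X = rng.normal(size=(n, p))                  # pretreatment variables
A = rng.choice([-1.0, 1.0], size=n)          # randomized binary treatment
# Simulated truth: only X[:, 0] modifies the treatment effect.
Y = 1 + X[:, 0] + A * (0.8 * X[:, 0] - 0.4) + rng.normal(scale=0.5, size=n)

def features(x, a):
    """Intercept, main effects, treatment interactions, treatment main effect."""
    x = np.atleast_2d(x)
    a = np.full((len(x), 1), a) if np.isscalar(a) else np.asarray(a).reshape(-1, 1)
    return np.hstack([np.ones((len(x), 1)), x, a * x, a])

def lasso_cd(Phi, y, alpha, n_sweeps=200):
    """l1 penalized least squares by cyclic coordinate descent:
    minimize (1/2n)||y - Phi b||^2 + alpha * ||b||_1."""
    n_obs, d = Phi.shape
    beta = np.zeros(d)
    col_ss = (Phi ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(d):
            # partial residual excluding coordinate j, then soft-threshold
            r_j = y - Phi @ beta + Phi[:, j] * beta[j]
            rho = Phi[:, j] @ r_j
            beta[j] = np.sign(rho) * max(abs(rho) - alpha * n_obs, 0.0) / col_ss[j]
    return beta

beta = lasso_cd(features(X, A), Y, alpha=0.02)

def itr(x):
    """Individualized treatment rule: pick the treatment (+1 or -1)
    with the larger predicted mean response."""
    return np.where(features(x, 1.0) @ beta >= features(x, -1.0) @ beta, 1, -1)
```

With this simulated truth, the rule should recommend treatment +1 roughly when 0.8·x₀ − 0.4 > 0; the sparsity induced by the penalty is what allows only a few of the many pretreatment variables to enter the final rule.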
Accurate clinical problem lists are critical for patient care, clinical decision support, population reporting, quality improvement, and research. However, problem lists are often incomplete or out of date.
To determine whether a clinical alerting system, which uses inference rules to notify providers of undocumented problems, improves problem list documentation.
Inference rules for 17 conditions were constructed, and an electronic health record-based intervention to improve problem documentation was evaluated. A cluster randomized trial was conducted in 11 participating clinics affiliated with a large academic medical center, totaling 28 primary care clinical areas, with 14 receiving the intervention and 14 serving as controls. The intervention was a clinical alert directed to the provider that suggested adding a problem to the electronic problem list based on the inference rules. The primary outcome measure was acceptance of the alert. The number of study problems added in each arm was also assessed as a pre-specified secondary outcome. Data were collected during 6-month pre-intervention (11/2009–5/2010) and intervention (5/2010–11/2010) periods.
A total of 17 043 alerts were presented, of which 41.1% were accepted. In the intervention arm, providers documented significantly more study problems (adjusted OR=3.4, p<0.001), an absolute difference of 6277 additional problems. In the intervention group, 70.4% of all study problems were added via the problem list alerts. Significant increases in problem notation were observed for 13 of the 17 conditions.
Problem inference alerts significantly increase notation of important patient problems in primary care, which in turn has the potential to facilitate quality improvement.
Problem list; clinical decision support; data mining; automated inference; meaningful use; quality of care; quality measurement; electronic health records; knowledge representations; classical experimental and quasi-experimental study methods (lab and field); designing usable (responsive) resources and systems; statistical analysis of large datasets
Risk stratification and adequate thromboembolism prophylaxis are the cornerstones of treatment in patients with atrial fibrillation (AF). Current risk stratification schemes, such as the CHADS2 and CHA2DS2-VASc scores, are based on clinical risk factors and suboptimally weight the risk/benefit of anticoagulation. Recently, the RE-LY biomarker sub-analysis has demonstrated the potential of biomarkers (troponin and NT-proBNP). Echocardiography is also being evaluated as a possible approach to improving risk score performance. The authors present an overview of AF risk stratification and discuss future developments that may be incorporated into current risk stratification schemes.
Anticoagulation; Atrial fibrillation; Risk stratification; Stroke; Thromboembolism
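For readers unfamiliar with the clinical scores named above, the CHA2DS2-VASc score is a simple weighted sum of risk factors. The function below is an illustrative sketch of the standard point assignments (2 points each for age ≥75 and prior stroke/TIA; 1 point each for heart failure, hypertension, diabetes, vascular disease, age 65–74, and female sex); it is not part of the cited work.

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 stroke_tia, vascular_disease):
    """CHA2DS2-VASc stroke-risk score for atrial fibrillation (0-9).

    Boolean arguments indicate presence of each clinical risk factor.
    """
    score = 0
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # A2 / A
    score += 2 if stroke_tia else 0                       # S2
    score += 1 if chf else 0                              # C
    score += 1 if hypertension else 0                     # H
    score += 1 if diabetes else 0                         # D
    score += 1 if vascular_disease else 0                 # V
    score += 1 if female else 0                           # Sc
    return score

# Example: a 76-year-old woman with hypertension and prior TIA scores 6.
example = cha2ds2_vasc(76, female=True, chf=False, hypertension=True,
                       diabetes=False, stroke_tia=True, vascular_disease=False)
```

The abstract's point about suboptimal weighting is visible here: each factor contributes a fixed integer, regardless of how strongly it predicts stroke in a given patient, which is what biomarker and echocardiographic refinements aim to address.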
Febrile neutropenia is a common and potentially life-threatening complication of treatment for childhood cancer, which has increasingly been subject to targeted treatment based on clinical risk stratification. Our previous meta-analysis demonstrated that 16 rules had been described and that 2 of them had been validated in more than one study. We aimed to advance our knowledge of the evidence on the discriminatory ability and predictive accuracy of such risk stratification clinical decision rules (CDR) for children and young people with cancer by updating our systematic review.
The review was conducted in accordance with Centre for Reviews and Dissemination methods, searching multiple electronic databases, using two independent reviewers, formal critical appraisal with QUADAS and meta-analysis with random effects models where appropriate. It was registered with PROSPERO: CRD42011001685.
We found 9 new publications describing 7 new CDR and validations of 7 existing rules. Six CDR have now been tested across more than two data sets. Most validations demonstrated a rule to be less efficient than when initially proposed; geographical differences appeared to be one explanation for this.
The use of clinical decision rules will require local validation before widespread use. Considerable uncertainty remains over the most effective rule to use in each population, and an ongoing individual-patient-data meta-analysis should develop and test a more reliable CDR to improve stratification and optimise therapy. Despite current challenges, we believe it will be possible to define an internationally effective CDR to harmonise the treatment of children with febrile neutropenia.
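The review's random-effects meta-analysis pools study-level estimates with weights that account for between-study heterogeneity, which is exactly what the observed geographical variation calls for. A minimal DerSimonian–Laird sketch (the inputs in the example are hypothetical, not the review's data):

```python
import numpy as np

def dersimonian_laird(y, v):
    """Random-effects pooling of effect estimates y with
    within-study variances v (DerSimonian-Laird tau^2)."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                   # fixed-effect weights
    ybar = (w * y).sum() / w.sum()
    q = (w * (y - ybar) ** 2).sum()               # Cochran's Q statistic
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = (w_star * y).sum() / w_star.sum()
    se = np.sqrt(1.0 / w_star.sum())
    return pooled, tau2, se

# Hypothetical log-odds-ratio estimates from two validation studies.
pooled, tau2, se = dersimonian_laird([0.4, 0.6], [0.1, 0.1])
```

When heterogeneity (tau²) is large, the random-effects weights flatten toward equality and the pooled confidence interval widens, which is one quantitative expression of the review's conclusion that local validation is needed before widespread use of any single CDR.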