To assess whether the Agency for Healthcare Research and Quality patient safety indicators (PSIs) could be used for case finding in International Classification of Diseases, 10th revision (ICD-10) hospital discharge abstract data.
We identified and randomly selected 490 patients with a foreign body left during a procedure (PSI 5—foreign body), selected infections (IV site) due to medical care (PSI 7—infection), postoperative pulmonary embolism (PE) or deep vein thrombosis (DVT; PSI 12—PE/DVT), postoperative sepsis (PSI 13—sepsis) and accidental puncture or laceration (PSI 15—laceration) among patients discharged from three adult acute care hospitals in Calgary, Canada in 2007 and 2008. Their charts were reviewed to determine the presence of PSIs, with chart review serving as the reference standard. Positive predictive value (PPV) statistics were calculated to determine the proportion of positives in the administrative data representing ‘true positives’.
The PPV for PSI 5—foreign body was 62.5% (95% CI 35.4% to 84.8%), PSI 7—infection was 79.1% (67.4% to 88.1%), PSI 12—PE/DVT was 89.5% (66.9% to 98.7%), PSI 13—sepsis was 12.5% (1.6% to 38.4%) and PSI 15—laceration was 86.4% (75.0% to 94.0%) after excluding those who presented to the hospital with the condition.
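As a minimal sketch of the PPV statistic used here, with hypothetical chart-review counts rather than the study's actual data, the PPV and a Wilson score 95% confidence interval can be computed as:

```python
import math

def ppv_with_wilson_ci(true_pos: int, false_pos: int, z: float = 1.96):
    """Positive predictive value with a Wilson score 95% CI.

    true_pos:  flagged cases confirmed by chart review
    false_pos: flagged cases not confirmed by chart review
    """
    n = true_pos + false_pos
    p = true_pos / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, centre - half, centre + half

# Hypothetical example: 10 of 16 flagged records confirmed on chart review
ppv, lo, hi = ppv_with_wilson_ci(10, 6)
print(f"PPV = {ppv:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

Note the published abstract does not state which interval method was used; the Wilson interval here is one common choice for small samples.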
Several PSIs had high PPVs in the ICD administrative data and are thus powerful tools for finding true positive cases. The tools could be used to identify potential cases from the large volume of admissions for verification through chart reviews. In contrast, their sensitivity has not been well characterised, and users of PSIs should be cautious about using them for quality-of-care reporting of PSI rates, because under-coded data would generate falsely low PSI rates.
Because alcohol and drug use disorders (SUDs) can influence quality of care, we compared patients with and without SUDs on frequency of catheterization, revascularization, and in-hospital mortality after acute myocardial infarction (AMI).
This study employed hospital discharge data identifying all adult AMI admissions (ICD-9-CM code 410) between April 1996 and December 2001. Patients were classified as having an SUD if they had alcohol and/or drug (not nicotine) abuse or dependence using a validated ICD-9-CM coding definition. Catheterization and revascularization data were obtained by linkage with a clinically-detailed cardiac registry. Analyses (controlling for comorbidities and disease severity) compared patients with and without SUDs for post-MI catheterization, revascularization, and in-hospital mortality.
Of 7,876 AMI unique patient admissions, 2.6% had an SUD. In adjusted analyses mortality was significantly higher among those with an SUD (odds ratio (OR) 2.02; 95%CI: 1.10–3.69), while there was a trend toward lower catheterization rates among those with an SUD (OR 0.75; 95%CI: 0.55–1.01). Among the subset of AMI admissions who underwent catheterization, the adjusted hazard ratio for one-year revascularization was 0.85 (95%CI: 0.65–1.11) with an SUD compared to without.
Alcohol and drug use disorders are associated with significantly higher in-hospital mortality following AMI in adults of all ages, and may also be associated with decreased access to catheterization and revascularization. This higher mortality in the face of poorer access to procedures suggests that these individuals may be under-treated following AMI. Targeted efforts are required to explore the interplay of patient and provider factors that underlie this finding.
Histamine fish poisoning, also known as scombroid poisoning, is a histamine toxicity syndrome that results from eating specific types of spoiled fish. Although typically a benign syndrome, characterized by self-limited flushing, headache, and gastrointestinal symptoms, we describe a case unique in its severity and as a precipitant of an asthma exacerbation.
A 25-year-old woman presented to the emergency department (ED) with one hour of tongue and face swelling, an erythematous pruritic rash, and dyspnea with wheezing after consuming a tuna sandwich. She developed abdominal pain, diarrhea and hypotension in the ED requiring admission to the hospital. A diagnosis of histamine fish poisoning was made and the patient was treated supportively and discharged within 24 hours, but was readmitted within 3 hours due to an asthma exacerbation. Her course was complicated by recurrent admissions for asthma exacerbations.
histamine fish poisoning; scombroid; asthma
Real-time locating systems (RTLS) have the potential to enhance healthcare systems through the live tracking of assets, patients and staff. This study evaluated a commercially available RTLS deployed in a clinical setting, with three objectives: (1) assessment of the location accuracy of the technology in a clinical setting; (2) assessment of the value of asset tracking to staff; and (3) assessment of threshold monitoring applications developed for patient tracking and inventory control. Simulated daily activities were monitored by RTLS and compared with direct research team observations. Staff surveys and interviews concerning the system's effectiveness and accuracy were also conducted and analyzed. The study showed only modest location accuracy and mixed reactions in staff interviews. These findings reveal that the technology needs to be refined further for better location accuracy before full-scale implementation can be recommended.
Accuracy testing; asset tracking; data mining; machine learning; patient tracking; real-time location; RFID; simulation; statistics; threshold monitoring
An estimated 6.9 million children die annually in low- and middle-income countries because of treatable illnesses including pneumonia, diarrhea, and malaria. To reduce morbidity and mortality, the Integrated Management of Childhood Illness (IMCI) strategy was developed, which included a component to strengthen the skills of health workers in identifying and managing these conditions. A systematic review and meta-analysis were conducted to determine whether IMCI training actually improves performance.
Database searches of CINAHL, CENTRAL, EMBASE, Global Health, Medline, Ovid Healthstar, and PubMed were performed from 1990 to February 2013, and supplemented with grey literature searches and reviews of bibliographies. Studies were included if they compared the performance of IMCI and non-IMCI health workers in illness classification, prescription of medications, vaccinations, and counseling on nutrition and administration of oral therapies. DerSimonian-Laird random effects models were used to summarize the effect estimates.
The systematic review and meta-analysis included 46 and 26 studies, respectively. Four cluster-randomized controlled trials, seven pre-post studies, and 15 cross-sectional studies were included. Findings were heterogeneous across performance domains with evidence of effect modification by health worker performance at baseline. Overall, IMCI-trained workers were more likely to correctly classify illnesses (RR = 1.93, 95% CI: 1.66–2.24). Studies of workers with lower baseline performance showed greater improvements in prescribing medications (RR = 3.08, 95% CI: 2.04–4.66), vaccinating children (RR = 3.45, 95% CI: 1.49–8.01), and counseling families on adequate nutrition (RR = 10.12, 95% CI: 6.03–16.99) and administering oral therapies (RR = 3.76, 95% CI: 2.30–6.13). Trends toward greater training benefits were observed in studies that were conducted in lower resource settings and reported greater supervision.
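The DerSimonian-Laird random-effects pooling named in the methods can be sketched in a few lines; the effect estimates below are hypothetical log relative risks, not values from this review:

```python
import math

def dersimonian_laird(effects, ses):
    """Pool study effects with the DerSimonian-Laird random-effects model.

    effects: per-study effect estimates on an additive scale (e.g. ln RR)
    ses:     their standard errors
    Returns (pooled_effect, tau_squared).
    """
    w = [1 / se**2 for se in ses]                      # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    w_star = [1 / (se**2 + tau2) for se in ses]        # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2

# Hypothetical: three studies with ln(RR) estimates and standard errors
pooled, tau2 = dersimonian_laird([0.60, 0.70, 0.65], [0.10, 0.10, 0.10])
print(math.exp(pooled), tau2)
```

When Q falls below its degrees of freedom, as in this homogeneous toy example, the between-study variance is truncated at zero and the pooled estimate reduces to the fixed-effect result.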
Findings suggest that IMCI training improves health worker performance. However, these estimates need to be interpreted cautiously given the observational nature of the studies and presence of heterogeneity.
Noninvasive imaging of atherosclerosis is being increasingly used in clinical practice, with some experts recommending screening of all healthy adults for atherosclerosis and some jurisdictions mandating insurance coverage for atherosclerosis screening. Data on the impact of such screening have not been systematically synthesized.
We aimed to assess whether atherosclerosis screening improves cardiovascular risk factors (CVRF) and clinical outcomes.
This study is a systematic review.
We searched MEDLINE and the Cochrane Clinical Trial Register without language restrictions.
STUDY ELIGIBILITY CRITERIA
We included studies examining the impact of atherosclerosis screening with noninvasive imaging (e.g., carotid ultrasound, coronary calcification) on CVRF, cardiovascular events, or mortality in adults without cardiovascular disease.
We identified four randomized controlled trials (RCT, n = 709) and eight non-randomized studies comparing participants with evidence of atherosclerosis on screening to those without (n = 2,994). In RCTs, atherosclerosis screening did not improve CVRF, but smoking cessation rates increased (18% vs. 6%, p = 0.03) in one RCT. Non-randomized studies found improvements in several intermediate outcomes, such as increased motivation to change lifestyle and increased perception of cardiovascular risk. However, such data were conflicting and limited by the lack of a randomized control group. No studies examined the impact of screening on cardiovascular events or mortality. Heterogeneity in screening methods and studied outcomes did not permit pooling of results.
Available evidence about atherosclerosis screening is limited, with mixed results on CVRF control, increased smoking cessation in one RCT, and no data on cardiovascular events. Such screening should be validated by large clinical trials before widespread use.
atherosclerosis; diagnostic techniques; cardiovascular; coronary disease; health behavior; smoking cessation; clinical trial; systematic review
The transition between acute care and community care represents a vulnerable period in health care delivery. The vulnerability of this period has been attributed to changes to patients’ medication regimens during hospitalization, failure to reconcile discrepancies between admission and discharge, and the burden placed on patients/families of taking over care responsibilities at discharge and relaying important information to the primary care physician. Electronic communication platforms can provide an immediate link between acute care and community care physicians (and other community providers), designed to ensure consistent information transfer. This study examines whether a transfer-of-care (TOC) communication tool is efficacious and cost-effective for reducing hospital readmission, adverse events, adverse drug events and death.
A randomized controlled trial conducted on the Medical Teaching Unit of a Canadian tertiary care centre will evaluate the efficacy and cost-effectiveness of a TOC communication tool. Medical in-patients admitted to the unit will be considered for this study. Data will be collected upon admission, and a total of 1400 patients will be randomized. The control group’s acute care stay will be summarized using a traditional dictated summary, while the intervention group will have a summary generated using the TOC communication tool. The primary outcome will be a composite, at 3 months, of death or readmission to any Alberta acute-care hospital. Secondary outcomes will be the occurrence of post-discharge adverse events and adverse drug events at 1 month post discharge. Patients with adverse outcomes will have their cases reviewed by two Royal College certified internists or College-certified family physicians, blinded to patients’ group assignments, to determine the type, severity, preventability and ameliorability of all detected adverse outcomes. An accompanying economic evaluation will assess the cost per life saved, cost per readmission avoided and cost per QALY gained with the TOC communication tool compared to traditional dictation summaries.
This paper outlines the study protocol for a randomized controlled trial evaluating an electronic transfer-of-care communication tool, with sufficient statistical power to assess the impact of the tool on the significant outcomes of post-discharge death or readmission. The study findings will inform health systems around the world on the potential benefits of such tools, and the value for money associated with their widespread implementation.
Medical informatics; Care transitions; Electronic health records; Randomized controlled trials; Hospital discharge
Evaluating geographic access to health services often requires determining the patient travel time to a specified service. For urgent care, many research studies have modeled patient pre-hospital time by ground emergency medical services (EMS) using geographic information systems (GIS). The purpose of this study was to determine if the modeling assumptions proposed through prior United States (US) studies are valid in a non-US context, and to use the resulting information to provide revised recommendations for modeling travel time using GIS in the absence of actual EMS trip data.
The study sample contained all emergency adult patient trips within the Calgary area for 2006. Each record included four components of pre-hospital time (activation, response, on-scene and transport interval). The actual activation and on-scene intervals were compared with those used in published models. The transport interval was calculated within GIS using the Network Analyst extension of Esri ArcGIS 10.0 and the response interval was derived using previously established methods. These GIS derived transport and response intervals were compared with the actual times using descriptive methods. We used the information acquired through the analysis of the EMS trip data to create an updated model that could be used to estimate travel time in the absence of actual EMS trip records.
There were 29,765 complete EMS records for scene locations inside the city and 529 outside. The actual median on-scene intervals were 7–8 minutes longer than the averages previously reported. Actual EMS pre-hospital times across our study area were significantly higher than the times modeled in GIS using the original travel time assumptions. Our updated model, although still underestimating the total pre-hospital time, more accurately represents the true pre-hospital time in our study area.
The widespread use of generalized EMS pre-hospital time assumptions based on US data may not be appropriate in a non-US context. The preference for researchers should be to use actual EMS trip records from the proposed research study area. In the absence of EMS trip data researchers should determine which modeling assumptions more accurately reflect the EMS protocols across their study area.
Pre-hospital time; Geographic Information Systems; Validation; Emergency medical services
Randomized controlled trials (RCTs) are thought to provide the most accurate estimation of “true” treatment effect. The relative quality of effect estimates derived from nonrandomized studies (nRCTs) remains unclear, particularly in surgery, where the obstacles to performing high-quality RCTs are compounded. We performed a meta-analysis of effect estimates of RCTs comparing surgical procedures for breast cancer relative to those of corresponding nRCTs.
English-language RCTs of breast cancer treatment in human patients published from 2003 to 2008 were identified in MEDLINE, EMBASE and Cochrane databases. We identified nRCTs using the National Library of Medicine’s “related articles” function and reference lists. Two reviewers conducted all steps of study selection. We included studies comparing 2 surgical arms for the treatment of breast cancer. Information on treatment efficacy estimates, expressed as relative risk (RR) for outcomes of interest in both the RCTs and nRCTs was extracted.
We identified 12 RCTs representing 10 topic/outcome combinations with comparable nRCTs. On visual inspection, 4 of 10 outcomes showed substantial differences in summary RR. The pooled RR estimates for RCTs versus nRCTs differed more than 2-fold in 2 of 10 outcomes and failed to demonstrate consistency of statistical differences in 3 of 10 cases. A statistically significant difference, as assessed by the z score, was not detected for any of the outcomes.
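The z score comparison referenced above amounts to testing the difference between two pooled relative risks on the log scale; a sketch with hypothetical RCT and nRCT summary values (the standard errors here are assumptions, not extracted from the review):

```python
import math

def z_for_rr_difference(rr1, se_ln_rr1, rr2, se_ln_rr2):
    """z statistic for the difference between two pooled RRs on the log
    scale, given each pooled estimate and its standard error of ln(RR)."""
    diff = math.log(rr1) - math.log(rr2)
    return diff / math.sqrt(se_ln_rr1**2 + se_ln_rr2**2)

# Hypothetical pooled RRs from an RCT set and a matching nRCT set
z = z_for_rr_difference(0.80, 0.15, 0.95, 0.12)
print(z)  # |z| < 1.96 would indicate no significant difference at p < 0.05
```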
Randomized controlled trials comparing surgical procedures for breast cancer may demonstrate clinically relevant differences in effect estimates in 20%–40% of cases relative to those generated by nRCTs, depending on which metric is used.
The objective of this study was to conduct a systematic review with meta-analysis of studies assessing the association between living in an urban environment and the development of Crohn’s disease (CD) or ulcerative colitis (UC).
A systematic literature search of MEDLINE (1950-Oct. 2009) and EMBASE (1980-Oct. 2009) was conducted to identify studies investigating the relationship between urban environment and inflammatory bowel disease (IBD). Cohort and case–control studies were analyzed using incidence rate ratios (IRRs) or odds ratios (ORs) with 95% confidence intervals (CIs), respectively. Stratified and sensitivity analyses were performed to explore heterogeneity between studies and assess effects of study quality.
The search strategy retrieved 6940 unique citations and 40 studies were selected for inclusion. Of these, 25 investigated the relationship between urban environment and UC and 30 investigated this relationship with CD. Included in our analysis were 7 case–control UC studies, 9 case–control CD studies, 18 cohort UC studies and 21 cohort CD studies. Based on a random effects model, the pooled IRRs for urban compared to rural environment for UC and CD studies were 1.17 (1.03, 1.32) and 1.42 (1.26, 1.60), respectively. These associations persisted across multiple stratified and sensitivity analyses exploring clinical and study quality factors. Heterogeneity was observed in the cohort studies for both UC and CD, whereas statistically significant heterogeneity was not observed for the case–control studies.
A positive association between urban environment and both CD and UC was found. Heterogeneity may be explained by differences in study design and quality factors.
Inflammatory bowel disease; Urban population; Risk factors
Intravenous immune globulin (IVIG) is an expensive and sometimes scarce blood product that carries some risk. It may often be used inappropriately. We evaluated the appropriateness of IVIG use before and after the introduction of a utilization control program to reduce inappropriate use.
We used the RAND/UCLA Appropriateness Method to measure the appropriateness of IVIG use in the province of British Columbia (BC) in 2001 and 2003, before and after the introduction of a utilization control program designed to reduce inappropriate use. For comparison, we measured the appropriateness of use during the same periods in the province of Alberta, which had no control program.
Of 2256 instances of IVIG use, 54.1% were deemed to be appropriate, 17.4% were of uncertain benefit, and 28.5% were deemed inappropriate. The frequency of inappropriate use in BC after the introduction of the utilization control program did not differ significantly from the frequency before the program or the frequency in Alberta.
Almost half of IVIG use in BC and Alberta was judged to be inappropriate or of uncertain benefit, and the frequency of inappropriate use did not decrease after implementation of a utilization control program in BC. More effective utilization controls are necessary to prevent wasted resources and unnecessary risk to patients.
Increasing population rates of cardiac catheterization can lead to the detection of more people with high risk coronary disease and opportunity for subsequent revascularization. However, such a strategy should only be undertaken if it is cost-effective.
Based on data from a cohort of patients undergoing cardiac catheterization, and efficacy data from clinical trials, we used a Markov model that considered 1) the yield of high-risk cases as the catheterization rate increases, 2) the long-term survival, quality of life and costs for patients with high risk disease, and 3) the impact of revascularization on survival, quality of life and costs. The cost per quality-adjusted life year was calculated overall, and by indication, age, and sex subgroups.
Increasing the catheterization rate was associated with a cost per QALY of CAN$26,470. The cost per QALY was most attractive in females with Acute Coronary Syndromes (ACS) ($20,320 per QALY gained), and for ACS patients over 75 years of age ($16,538 per QALY gained). However, there is significant model uncertainty associated with the efficacy of revascularization.
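The cost-per-QALY figure reported above is an incremental cost-effectiveness ratio (ICER); as a sketch with hypothetical per-patient costs and QALYs (not the Markov model's actual inputs):

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY
    of the new strategy relative to the old one."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical: the higher-catheterization strategy costs $13,235 more per
# patient and yields 0.5 additional quality-adjusted life years
print(icer(33_235, 20_000, 8.5, 8.0))  # dollars per QALY gained
```

A lower ICER means more health gained per dollar, which is why the subgroups with the smallest cost per QALY are described as "most attractive".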
A strategy of increasing cardiac catheterization rates among eligible patients is associated with a cost per QALY similar to that of other funded interventions. However, there is significant model uncertainty. A decision to increase population rates of catheterization requires consideration of the accompanying opportunity costs, and careful thought towards the most appropriate strategy.
There is variation in cardiac catheterization utilization across jurisdictions. Previous work from Alberta, Canada, showed no evidence of a plateau in the yield of high-risk disease at cardiac catheterization rates as high as 600 per 100,000 population, suggesting that the optimal rate is higher. This work aims 1) to determine whether a previously demonstrated linear relationship between the yield of high-risk coronary disease and cardiac catheterization rates persists with contemporary data and 2) to explore whether the linear relationship exists in other jurisdictions.
Detailed clinical information on all patients undergoing cardiac catheterization in 3 Canadian provinces was available through the Alberta Provincial Project for Outcomes Assessment in Coronary Heart Disease (APPROACH) and partner initiatives in British Columbia and Nova Scotia. Population rates of catheterization and high-risk coronary disease detection were calculated for each health region in these three provinces, and age-adjusted rates were produced using direct standardization. A mixed effects regression analysis was performed to assess the relationship between catheterization rate and high-risk coronary disease detection.
In the contemporary Alberta data, we found a linear relationship between the population catheterization rate and the high-risk yield. Although the yield was slightly less in time period 2 (2002-2006) than in time period 1 (1995-2001), there was no statistical evidence of a plateau. The linear relationship between catheterization rate and high-risk yield was similarly demonstrated in British Columbia and Nova Scotia and appears to extend, without a plateau in yield, to rates over 800 procedures per 100,000 population.
Our study demonstrates a consistent finding, over time and across jurisdictions, of linearly increasing detection of high-risk CAD as population rates of cardiac catheterization increase. This internationally-relevant finding can inform country-level planning of invasive cardiac care services.
Co-morbidity information derived from administrative data needs to be validated to allow its regular use. We assessed evolution in the accuracy of coding for Charlson and Elixhauser co-morbidities at three time points over a 5-year period, following the introduction of the International Classification of Diseases, 10th Revision (ICD-10), coding of hospital discharges.
Cross-sectional time trend evaluation study of coding accuracy using hospital chart data of 3,499 randomly selected patients who were discharged in 1999, 2001 and 2003 from two teaching and one non-teaching hospital in Switzerland. We measured sensitivity, positive predictive values and kappa values for agreement between administrative data coded with ICD-10 and chart data as the 'reference standard' for recording 36 co-morbidities.
For the 17 Charlson co-morbidities, the sensitivity - median (min-max) - was 36.5% (17.4-64.1) in 1999, 42.5% (22.2-64.6) in 2001 and 42.8% (8.4-75.6) in 2003. For the 29 Elixhauser co-morbidities, the sensitivity was 34.2% (1.9-64.1) in 1999, 38.6% (10.5-66.5) in 2001 and 41.6% (5.1-76.5) in 2003. Between 1999 and 2003, sensitivity estimates increased for 30 co-morbidities and decreased for 6 co-morbidities. The increase in sensitivities was statistically significant for six conditions and the decrease significant for one. Kappa values increased for 29 co-morbidities and decreased for seven.
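The kappa statistic used for agreement between administrative coding and chart review can be sketched from a 2x2 table; the counts below are hypothetical, not drawn from the study:

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa from a 2x2 agreement table between administrative data
    and chart review: a = both record the condition, b = admin only,
    c = chart only, d = neither records it."""
    n = a + b + c + d
    po = (a + d) / n                                       # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2    # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical counts for one co-morbidity in one discharge year
kappa = cohens_kappa(40, 10, 20, 130)
print(round(kappa, 3))
```

Kappa discounts the agreement expected by chance alone, which is why it is reported alongside sensitivity and PPV rather than raw percent agreement.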
Accuracy of administrative data in recording clinical conditions improved slightly between 1999 and 2003. These findings are of relevance to all jurisdictions introducing new coding systems, because they demonstrate a phenomenon of improved administrative data accuracy that may relate to a coding 'learning curve' with the new coding system.
ICD-10; Agreement; Administrative Data; Co-morbidity
Recent clinical trials have demonstrated benefit with early revascularization following acute myocardial infarction (AMI). Trends in early (ie, within 30 days) revascularization after AMI, and the association between early revascularization and early death, were determined.
METHODS AND RESULTS:
The Statistics Canada Health Person-Oriented Information Database, consisting of hospital discharge records for seven provinces from the Canadian Institute for Health Information Hospital Morbidity Database, was used. If there was no AMI in the preceding year, the first AMI visit within a fiscal year for a patient 20 years of age or older was included. Times to death in hospital and to revascularization procedures were counted from the admission date of the first AMI visit. Mixed model regression analyses with random slopes were used to assess the relationship between early revascularization and mortality. The overall rate of revascularization within 30 days of AMI increased significantly from 12.5% in 1995 to 37.4% in 2003, while the 30-day mortality rate decreased significantly from 13.5% to 10.6%. There was a linearly decreasing relationship – higher regional use of revascularization was associated with lower mortality in both men and women.
These population-based utilization and outcome findings are consistent with clinical trial evidence of improved 30-day in-hospital mortality with increased early revascularization after AMI.
Acute myocardial infarction; Administrative data; Mortality; Outcomes research; Revascularization
Previous studies evaluated cardiac procedure use and outcome over the short term, with relatively few Asian patients included.
To determine the likelihood of undergoing percutaneous coronary intervention and coronary artery bypass grafting, and survival during 10.5 years of follow-up after coronary angiography among South Asian, Chinese and other Canadian patients.
Using prospective cohort study data from two large Canadian provinces, 3061 South Asian, 1473 Chinese and 77,314 other Canadian patients with angiographically proven coronary artery disease from 1995 to 2004 were assessed, and their revascularization and mortality rates during 10.5 years of follow-up were determined.
Compared with other Canadian patients, South Asian and Chinese patients were slightly less likely to undergo revascularization (risk-adjusted HR 0.94, 95% CI 0.90 to 0.98 for South Asian patients; and HR 0.94, 95% CI 0.88 to 1.00 for Chinese patients). However, South Asian patients underwent coronary artery bypass grafting (HR 1.00, 95% CI 0.94 to 1.07) and Chinese patients underwent percutaneous coronary intervention (HR 0.96, 95% CI 0.89 to 1.04) as frequently as other Canadian patients. Although the 30-day mortality rate was similar across the three ethnic groups, the mortality rate in the follow-up period was significantly lower for South Asian patients (HR 0.76, 95% CI 0.61 to 0.95) and marginally lower for Chinese patients (HR 0.80, 95% CI 0.60 to 1.07) compared with other Canadian patients.
South Asian and Chinese patients used revascularization slightly less but had better survival outcomes than other Canadian patients. The factors underlying the better outcomes for South Asian and Chinese patients warrant further study.
Chinese Canadians; Coronary artery disease; Invasive cardiac procedure; South Asian Canadians
Glucose-insulin infusions (with potassium [GIK] or without [GI]) have been advocated in the setting of coronary artery bypass graft (CABG) surgery to optimize myocardial glucose use and to minimize ischemic injury.
To conduct a meta-analysis assessing whether the perioperative use of GIK/GI infusions reduces in-hospital mortality or atrial fibrillation (AF) after CABG surgery.
Electronic databases (Medline, EMBASE and Cochrane Central Register of Controlled Trials [CENTRAL]) and references of retrieved articles were searched for randomized controlled trials that evaluated the effects of GIK or GI infusions, before or during CABG surgery, on in-hospital mortality and/or postoperative AF. Pooled ORs and 95% CIs were calculated for each outcome.
Twenty trials were identified and eligible for review. The summary OR for in-hospital mortality was 0.88 (95% CI 0.56 to 1.40), based on 44 deaths among 2326 patients. While postoperative AF was a more frequent outcome (occurring in 519 of 1540 patients in the 10 trials reporting this outcome), the overall pooled estimate of effect was nonsignificant (OR 0.79, 95% CI 0.54 to 1.15). This latter finding needs to be interpreted cautiously because it is accompanied by significant heterogeneity across trials.
Perioperative use of GIK/GI does not significantly reduce mortality or atrial fibrillation in patients undergoing CABG surgery. Unless future trial data in support of GIK/GI infusions become available, the routine use of these treatments in patients undergoing CABG surgery should be discouraged because the safety of these infusions has not been systematically examined.
Coronary artery bypass graft surgery; GIK; Insulin; Meta-analysis
Aspirin has been recommended for the prevention of major adverse cardiovascular events (MACE, composite of non-fatal myocardial infarction, non-fatal stroke, and cardiovascular death) in diabetic patients without previous cardiovascular disease. However, recent meta-analyses have prompted re-evaluation of this practice. The study objective was to evaluate the relative and absolute benefits and harms of aspirin for the prevention of incident MACE in patients with diabetes.
We performed a systematic review and meta-analysis on seven studies (N = 11,618) reporting on the use of aspirin for the primary prevention of MACE in patients with diabetes. Two reviewers conducted a systematic search of electronic databases (MEDLINE, EMBASE, the Cochrane Library, and BIOSIS) and hand searched bibliographies and clinical trial registries. Reviewers extracted data in duplicate, evaluated the quality of the trials, and calculated pooled estimates.
A total of 11,618 participants were included in the analysis. The overall risk ratio (RR) for MACE was 0.91 (95% confidence interval [CI], 0.82-1.00) with little heterogeneity among trials (I² = 0.0%). Secondary outcomes of interest included myocardial infarction (RR, 0.85; 95% CI, 0.66-1.10), stroke (RR, 0.84; 95% CI, 0.64-1.11), cardiovascular death (RR, 0.95; 95% CI, 0.71-1.27), and all-cause mortality (RR, 0.95; 95% CI, 0.85-1.06). There were higher rates of hemorrhagic and gastrointestinal events. In absolute terms, these relative risks indicate that for every 10,000 diabetic patients treated with aspirin, 109 MACE may be prevented at the expense of 19 major bleeding events (with the caveat that the relative risk for the latter is not statistically significant).
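The step from relative to absolute benefit applies the risk ratio to an assumed control event rate; a sketch with a hypothetical baseline MACE rate (the abstract does not state the rate used in its own calculation):

```python
def events_prevented_per_10k(control_event_rate, risk_ratio):
    """Absolute events prevented per 10,000 treated patients, given the
    control-group event rate over the trial horizon and the treatment RR."""
    arr = control_event_rate * (1 - risk_ratio)   # absolute risk reduction
    return 10_000 * arr

# Hypothetical baseline MACE rate of 10% combined with the pooled RR of 0.91
prevented = events_prevented_per_10k(0.10, 0.91)
print(prevented)
```

This is why the same relative risk yields very different absolute benefits in primary versus secondary prevention populations: the baseline rate drives the absolute numbers.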
The studies reviewed suggest that aspirin reduces the risk of MACE in patients with diabetes without cardiovascular disease, while also causing a trend toward higher rates of bleeding and gastrointestinal complications. These findings and our absolute benefit and risk calculations suggest that those with diabetes but without cardiovascular disease lie somewhere between primary and secondary prevention patients on the spectrum of benefit and risk. This underscores the importance of considering individual risk in clinical decision making regarding aspirin in those with diabetes.
The advent of clinical trials and evidence-informed medicine has resulted in a vast wealth of medical literature. Here, we summarize five notable articles for general internal medicine published in late 2009 and in 2010, and reflect on the remarkable advances made by an increasingly prolific medical research community.
Objective To conduct a comprehensive systematic review and meta-analysis of studies assessing the effect of alcohol consumption on multiple cardiovascular outcomes.
Design Systematic review and meta-analysis.
Data sources A search of Medline (1950 through September 2009) and Embase (1980 through September 2009) supplemented by manual searches of bibliographies and conference proceedings.
Inclusion criteria Prospective cohort studies on the association between alcohol consumption and overall mortality from cardiovascular disease, incidence of and mortality from coronary heart disease, and incidence of and mortality from stroke.
Studies reviewed Of 4235 studies reviewed for eligibility, quality, and data extraction, 84 were included in the final analysis.
Results The pooled adjusted relative risks for alcohol drinkers relative to non-drinkers in random effects models for the outcomes of interest were 0.75 (95% confidence interval 0.70 to 0.80) for cardiovascular disease mortality (21 studies), 0.71 (0.66 to 0.77) for incident coronary heart disease (29 studies), 0.75 (0.68 to 0.81) for coronary heart disease mortality (31 studies), 0.98 (0.91 to 1.06) for incident stroke (17 studies), and 1.06 (0.91 to 1.23) for stroke mortality (10 studies). Dose-response analysis revealed that the lowest risk of coronary heart disease mortality occurred with 1–2 drinks a day, but for stroke mortality it occurred with ≤1 drink per day. Secondary analysis of mortality from all causes showed lower risk for drinkers compared with non-drinkers (relative risk 0.87 (0.83 to 0.92)).
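The pooled relative risks above come from random-effects models. As an illustration of how such pooling works, here is a minimal DerSimonian-Laird sketch on the log relative-risk scale; the per-study inputs are hypothetical, for demonstration only, not data from the 84 included studies:

```python
import math

def dersimonian_laird(log_rrs, ses):
    """Random-effects pooled RR and 95% CI via the DerSimonian-Laird method."""
    w = [1.0 / se**2 for se in ses]                            # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (y - fixed)**2 for wi, y in zip(w, log_rrs))  # Cochran's Q
    df = len(log_rrs) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                              # between-study variance
    w_star = [1.0 / (se**2 + tau2) for se in ses]              # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, log_rrs)) / sum(w_star)
    se_pooled = math.sqrt(1.0 / sum(w_star))
    lo = math.exp(pooled - 1.96 * se_pooled)
    hi = math.exp(pooled + 1.96 * se_pooled)
    return math.exp(pooled), (lo, hi)

# Hypothetical per-study log relative risks and standard errors (illustration only)
rr, (lo, hi) = dersimonian_laird([-0.31, -0.25, -0.40], [0.10, 0.12, 0.15])
```

When between-study heterogeneity (tau²) is zero, the random-effects estimate collapses to the fixed-effect inverse-variance average; otherwise it down-weights large studies relative to a fixed-effect analysis.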
Conclusions Light to moderate alcohol consumption is associated with a reduced risk of multiple cardiovascular outcomes.
Objective To systematically review interventional studies of the effects of alcohol consumption on 21 biological markers associated with risk of coronary heart disease in adults without known cardiovascular disease.
Design Systematic review and meta-analysis.
Data sources Medline (1950 to October 2009) and Embase (1980 to October 2009) without limits.
Study selection Two reviewers independently selected studies that examined adults without known cardiovascular disease and that compared fasting levels of specific biological markers associated with coronary heart disease after alcohol use with those after a period of no alcohol use (controls). 4690 articles were screened for eligibility, the full texts of 124 studies reviewed, and 63 relevant articles selected.
Results Of 63 eligible studies, 44 on 13 biomarkers were meta-analysed in fixed or random effects models. Quality was assessed by sensitivity analysis of studies grouped by design. Analyses were stratified by type of beverage (wine, beer, spirits). Alcohol significantly increased levels of high density lipoprotein cholesterol (pooled mean difference 0.094 mmol/L, 95% confidence interval 0.064 to 0.123), apolipoprotein A1 (0.101 g/L, 0.073 to 0.129), and adiponectin (0.56 mg/L, 0.39 to 0.72). Alcohol showed a dose-response relation with high density lipoprotein cholesterol (test for trend P=0.013). Alcohol decreased fibrinogen levels (−0.20 g/L, −0.29 to −0.11) but did not affect triglyceride levels. Results were similar for crossover and before-and-after studies, and across beverage types.
Conclusions Favourable changes in several cardiovascular biomarkers (higher levels of high density lipoprotein cholesterol and adiponectin and lower levels of fibrinogen) provide indirect pathophysiological support for a protective effect of moderate alcohol use on coronary heart disease.
Primary percutaneous coronary intervention (PCI) is a proven therapy for acute ST-segment elevation myocardial infarction. However, outcomes associated with primary PCI may differ depending on time of day.
Methods and results
Using the Alberta Provincial Project for Outcomes Assessment in Coronary Heart Disease, a clinical data-collection initiative capturing all cardiac catheterisation patients in Alberta, Canada, the authors described and compared crude and risk-adjusted survival for ST-segment elevation myocardial infarction patients undergoing primary PCI after-hours versus regular working hours. From 1 January 1999 to 31 March 2006, 1664 primary PCI procedures were performed (54.4% after-hours). Thirty-day mortality was 3.6% for regular-hours procedures and 5.0% for after-hours procedures (p=0.16). One-year mortality was 6.2% and 7.3% in the regular-hours and after-hours groups, respectively (p=0.35). After adjusting for baseline risk factor differences, HRs for after-hours mortality were 1.26 (95% CI 0.78 to 2.02) for survival to 30 days and 1.08 (0.73 to 1.59) for survival to 1 year. A meta-analysis of our after-hours HR point estimate with other published risk estimates for after-hours primary PCI outcomes yielded an RR of 1.23 (1.00 to 1.51) for shorter-term outcomes.
After-hours primary PCI was not associated with a statistically significant increase in mortality. However, a meta-analysis of this study with other published after-hours outcome studies yields an RR that raises questions about unexplored factors that may influence after-hours primary PCI care.
Given the vast and growing volume of medical literature, it is essential to develop reliable strategies for identifying articles of importance and relevance. Here, we summarize 5 notable articles for general internal medicine published in 2008 and 2009. Clinical vignettes are presented to illustrate situations in which each study might apply, and each summary ends with a description of how a physician might use the study findings to resolve the vignette case. Finally, we describe a surveillance strategy that physicians can use to identify articles important to their own practices.
Physicians are often unable to eat and drink properly during their work day. Nutrition has been linked to cognition. We aimed to examine the effect of a nutrition-based intervention, scheduled nutrition breaks during the work day, on physician cognition, glucose, and hypoglycemic symptoms.
A volunteer sample of twenty staff physicians from a large urban teaching hospital was recruited from the doctors' lounge. During both the baseline and the intervention day, we measured subjects' cognitive function, capillary blood glucose, "hypoglycemic" nutrition-related symptoms, fluid and nutrient intake, level of physical activity, weight, and urinary output.
Cognition scores as measured by a composite score of speed and accuracy (Tput statistic) were superior on the intervention day on simple (220 vs. 209, p = 0.01) and complex (92 vs. 85, p < 0.001) reaction time tests. Group mean glucose was 0.3 mmol/L lower (p = 0.03) and less variable (coefficient of variation 12.2% vs. 18.0%) on the intervention day. Although not statistically significant, there was also a trend toward fewer reported hypoglycemic-type symptoms. Nutrient intake was higher on intervention versus baseline days as measured by mean caloric intake (1345 vs. 935 kilocalories, p = 0.008), and hydration improved as measured by mean change in body mass (+352 vs. -364 grams, p < 0.001).
Our study provides evidence in support of adequate workplace nutrition as a contributor to improved physician cognition, adding to the body of research suggesting that physician wellness may ultimately benefit not only the physicians themselves but also their patients and the health care systems in which they work.
We sought to evaluate agreement between a new and widely implemented method of temperature measurement in critical care, temporal artery thermometry, and an established method of core temperature measurement, bladder thermometry, as performed in clinical practice.
Temperatures were simultaneously recorded hourly (n = 736 observations) using both devices as part of routine clinical monitoring in 14 critically ill adult patients whose temperatures had ranged over ≥1°C prior to consent.
The mean difference between temporal artery and bladder temperatures measured was -0.44°C (95% confidence interval, -0.47°C to -0.41°C), with temporal artery readings lower than bladder temperatures. Agreement between the two devices was greatest for normothermia (36.0°C to < 38.3°C) (mean difference -0.35°C [95% confidence interval, -0.37°C to -0.33°C]). The temporal artery thermometer recorded higher temperatures during hypothermia (< 36°C) (mean difference 0.66°C [95% confidence interval, 0.53°C to 0.79°C]) and lower temperatures during hyperthermia (≥38.3°C) (mean difference -0.90°C [95% confidence interval, -0.99°C to -0.81°C]). The sensitivity for detecting fever (core temperature ≥38.3°C) using the temporal artery thermometer was 0.26 (95% confidence interval, 0.20 to 0.33), and the specificity was 0.99 (95% confidence interval, 0.98 to 0.99). The positive likelihood ratio for fever was 24.6 (95% confidence interval, 10.7 to 56.8); the negative likelihood ratio was 0.75 (95% confidence interval, 0.68 to 0.82).
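The likelihood ratios reported above follow from sensitivity and specificity via LR+ = sensitivity / (1 − specificity) and LR− = (1 − sensitivity) / specificity. A minimal sketch using the rounded summary values from this abstract (the published LRs, 24.6 and 0.75, were presumably computed from raw counts, so the positive LR differs slightly due to rounding):

```python
def likelihood_ratios(sensitivity: float, specificity: float):
    """Positive and negative likelihood ratios from sensitivity and specificity."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

# Rounded summary values for fever detection by temporal artery thermometry
lr_pos, lr_neg = likelihood_ratios(0.26, 0.99)
print(round(lr_pos, 1), round(lr_neg, 2))  # 26.0 0.75
```

The very high LR+ alongside an LR− near 1 matches the abstract's pattern: a positive temporal artery reading strongly suggests fever, but a negative reading does little to rule it out.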
Temporal artery thermometry produces somewhat surprising disagreement with an established method of core temperature measurement and should not be used in situations where body temperature needs to be measured with accuracy.