Noninvasive imaging of atherosclerosis is increasingly being used in clinical practice, with some experts recommending that all healthy adults be screened for atherosclerosis and some jurisdictions mandating insurance coverage for such screening. Data on the impact of such screening have not been systematically synthesized.
We aimed to assess whether atherosclerosis screening improves cardiovascular risk factors (CVRF) and clinical outcomes.
This study is a systematic review.
We searched MEDLINE and the Cochrane Clinical Trial Register without language restrictions.
We included studies examining the impact of atherosclerosis screening with noninvasive imaging (e.g., carotid ultrasound, coronary calcification) on CVRF, cardiovascular events, or mortality in adults without cardiovascular disease.
We identified four randomized controlled trials (RCT, n = 709) and eight non-randomized studies comparing participants with evidence of atherosclerosis on screening to those without (n = 2,994). In RCTs, atherosclerosis screening did not improve CVRF, but smoking cessation rates increased (18% vs. 6%, p = 0.03) in one RCT. Non-randomized studies found improvements in several intermediate outcomes, such as increased motivation to change lifestyle and increased perception of cardiovascular risk. However, such data were conflicting and limited by the lack of a randomized control group. No studies examined the impact of screening on cardiovascular events or mortality. Heterogeneity in screening methods and studied outcomes did not permit pooling of results.
Available evidence about atherosclerosis screening is limited, with mixed results on CVRF control, increased smoking cessation in one RCT, and no data on cardiovascular events. Such screening should be validated by large clinical trials before widespread use.
atherosclerosis; diagnostic techniques; cardiovascular; coronary disease; health behavior; smoking cessation; clinical trial; systematic review
The transition between acute care and community care represents a vulnerable period in health care delivery. This vulnerability has been attributed to changes to patients’ medication regimens during hospitalization, failure to reconcile discrepancies between admission and discharge, and the burden placed on patients and families at discharge to take over care responsibilities and to relay important information to the primary care physician. Electronic communication platforms can provide an immediate link between acute care and community care physicians (and other community providers), designed to ensure consistent information transfer. This study examines whether a transfer-of-care (TOC) communication tool is efficacious and cost-effective for reducing hospital readmission, adverse events, adverse drug events and death.
A randomized controlled trial conducted on the Medical Teaching Unit of a Canadian tertiary care centre will evaluate the efficacy and cost-effectiveness of a TOC communication tool. Medical in-patients admitted to the unit will be considered for this study. Data will be collected upon admission, and a total of 1400 patients will be randomized. The control group’s acute care stay will be summarized using a traditional dictated summary, while the intervention group will have a summary generated using the TOC communication tool. The primary outcome will be a composite, at 3 months, of death or readmission to any Alberta acute-care hospital. Secondary outcomes will be the occurrence of post-discharge adverse events and adverse drug events at 1 month post discharge. Patients with adverse outcomes will have their cases reviewed by two Royal College certified internists or College-certified family physicians, blinded to patients’ group assignments, to determine the type, severity, preventability and ameliorability of all detected adverse outcomes. An accompanying economic evaluation will assess the cost per life saved, cost per readmission avoided and cost per QALY gained with the TOC communication tool compared to traditional dictation summaries.
This paper outlines the study protocol for a randomized controlled trial evaluating an electronic transfer-of-care communication tool, with sufficient statistical power to assess the impact of the tool on the significant outcomes of post-discharge death or readmission. The study findings will inform health systems around the world on the potential benefits of such tools, and the value for money associated with their widespread implementation.
Medical informatics; Care transitions; Electronic health records; Randomized controlled trials; Hospital discharge
Evaluating geographic access to health services often requires determining the patient travel time to a specified service. For urgent care, many research studies have modeled patient pre-hospital time by ground emergency medical services (EMS) using geographic information systems (GIS). The purpose of this study was to determine if the modeling assumptions proposed through prior United States (US) studies are valid in a non-US context, and to use the resulting information to provide revised recommendations for modeling travel time using GIS in the absence of actual EMS trip data.
The study sample contained all emergency adult patient trips within the Calgary area for 2006. Each record included four components of pre-hospital time (activation, response, on-scene and transport interval). The actual activation and on-scene intervals were compared with those used in published models. The transport interval was calculated within GIS using the Network Analyst extension of Esri ArcGIS 10.0 and the response interval was derived using previously established methods. These GIS derived transport and response intervals were compared with the actual times using descriptive methods. We used the information acquired through the analysis of the EMS trip data to create an updated model that could be used to estimate travel time in the absence of actual EMS trip records.
There were 29,765 complete EMS records for scene locations inside the city and 529 outside. The actual median on-scene intervals were 7–8 minutes longer than the averages previously reported. Actual EMS pre-hospital times across our study area were significantly higher than the times estimated using GIS with the original travel time assumptions. Our updated model, although still underestimating total pre-hospital time, more accurately represents the true pre-hospital time in our study area.
The widespread use of generalized EMS pre-hospital time assumptions based on US data may not be appropriate in a non-US context. The preference for researchers should be to use actual EMS trip records from the proposed research study area. In the absence of EMS trip data researchers should determine which modeling assumptions more accurately reflect the EMS protocols across their study area.
Pre-hospital time; Geographic Information Systems; Validation; Emergency medical services
Randomized controlled trials (RCTs) are thought to provide the most accurate estimation of “true” treatment effect. The relative quality of effect estimates derived from nonrandomized studies (nRCTs) remains unclear, particularly in surgery, where the obstacles to performing high-quality RCTs are compounded. We performed a meta-analysis of effect estimates of RCTs comparing surgical procedures for breast cancer relative to those of corresponding nRCTs.
English-language RCTs of breast cancer treatment in human patients published from 2003 to 2008 were identified in MEDLINE, EMBASE and Cochrane databases. We identified nRCTs using the National Library of Medicine’s “related articles” function and reference lists. Two reviewers conducted all steps of study selection. We included studies comparing 2 surgical arms for the treatment of breast cancer. Information on treatment efficacy estimates, expressed as relative risk (RR) for outcomes of interest in both the RCTs and nRCTs was extracted.
We identified 12 RCTs representing 10 topic/outcome combinations with comparable nRCTs. On visual inspection, 4 of 10 outcomes showed substantial differences in summary RR. The pooled RR estimates for RCTs versus nRCTs differed more than 2-fold in 2 of 10 outcomes and failed to demonstrate consistency of statistical differences in 3 of 10 cases. A statistically significant difference, as assessed by the z score, was not detected for any of the outcomes.
Randomized controlled trials comparing surgical procedures for breast cancer may demonstrate clinically relevant differences in effect estimates in 20%–40% of cases relative to those generated by nRCTs, depending on which metric is used.
The objective of this study was to conduct a systematic review with meta-analysis of studies assessing the association between living in an urban environment and the development of Crohn’s disease (CD) or ulcerative colitis (UC).
A systematic literature search of MEDLINE (1950-Oct. 2009) and EMBASE (1980-Oct. 2009) was conducted to identify studies investigating the relationship between urban environment and IBD. Cohort and case–control studies were analyzed using incidence rate ratios (IRR) or odds ratios (OR) with 95% confidence intervals (CIs), respectively. Stratified and sensitivity analyses were performed to explore heterogeneity between studies and assess effects of study quality.
The search strategy retrieved 6940 unique citations and 40 studies were selected for inclusion. Of these, 25 investigated the relationship between urban environment and UC and 30 investigated this relationship with CD. Included in our analysis were 7 case–control UC studies, 9 case–control CD studies, 18 cohort UC studies and 21 cohort CD studies. Based on a random effects model, the pooled IRRs for urban compared to rural environment for UC and CD studies were 1.17 (1.03, 1.32) and 1.42 (1.26, 1.60), respectively. These associations persisted across multiple stratified and sensitivity analyses exploring clinical and study quality factors. Heterogeneity was observed in the cohort studies for both UC and CD, whereas statistically significant heterogeneity was not observed for the case–control studies.
A positive association between urban environment and both CD and UC was found. Heterogeneity may be explained by differences in study design and quality factors.
Inflammatory bowel disease; Urban population; Risk factors
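The pooled IRRs above come from a random effects model, which down-weights studies in proportion to an estimated between-study variance. A minimal sketch of DerSimonian-Laird pooling follows; the log-IRRs and standard errors below are hypothetical, not the review's data.

```python
import math

def dersimonian_laird(effects, ses):
    """Pool log-scale effect estimates (e.g., log IRRs) with
    DerSimonian-Laird random effects. Returns pooled estimate
    and 95% CI on the ratio scale."""
    w = [1 / se**2 for se in ses]                      # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed)**2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    w_re = [1 / (se**2 + tau2) for se in ses]          # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
    return math.exp(pooled), math.exp(lo), math.exp(hi)

# Hypothetical log-IRRs and standard errors for three cohort studies
irr, lo, hi = dersimonian_laird([0.10, 0.25, 0.40], [0.10, 0.15, 0.20])
print(f"pooled IRR {irr:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```

When the heterogeneity statistic Q does not exceed its degrees of freedom, tau² is truncated at zero and the result collapses to the fixed-effect estimate.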
Intravenous immune globulin (IVIG) is an expensive and sometimes scarce blood product that carries some risk. It may often be used inappropriately. We evaluated the appropriateness of IVIG use before and after the introduction of a utilization control program to reduce inappropriate use.
We used the RAND/UCLA Appropriateness Method to measure the appropriateness of IVIG use in the province of British Columbia (BC) in 2001 and 2003, before and after the introduction of a utilization control program designed to reduce inappropriate use. For comparison, we measured the appropriateness of use during the same periods in the province of Alberta, which had no control program.
Of 2256 instances of IVIG use, 54.1% were deemed to be appropriate, 17.4% were of uncertain benefit, and 28.5% were deemed inappropriate. The frequency of inappropriate use in BC after the introduction of the utilization control program did not differ significantly from the frequency before the program or the frequency in Alberta.
Almost half of IVIG use in BC and Alberta was judged to be inappropriate or of uncertain benefit, and the frequency of inappropriate use did not decrease after implementation of a utilization control program in BC. More effective utilization controls are necessary to prevent wasted resources and unnecessary risk to patients.
Increasing population rates of cardiac catheterization can lead to the detection of more people with high risk coronary disease and opportunity for subsequent revascularization. However, such a strategy should only be undertaken if it is cost-effective.
Based on data from a cohort of patients undergoing cardiac catheterization, and efficacy data from clinical trials, we used a Markov model that considered 1) the yield of high-risk cases as the catheterization rate increases, 2) the long-term survival, quality of life and costs for patients with high risk disease, and 3) the impact of revascularization on survival, quality of life and costs. The cost per quality-adjusted life year was calculated overall, and by indication, age, and sex subgroups.
Increasing the catheterization rate was associated with a cost per QALY of CAN$26,470. The cost per QALY was most attractive in females with Acute Coronary Syndromes (ACS) ($20,320 per QALY gained), and for ACS patients over 75 years of age ($16,538 per QALY gained). However, there is significant model uncertainty associated with the efficacy of revascularization.
A strategy of increasing cardiac catheterization rates among eligible patients is associated with a cost per QALY similar to that of other funded interventions. However, there is significant model uncertainty. A decision to increase population rates of catheterization requires consideration of the accompanying opportunity costs, and careful thought towards the most appropriate strategy.
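The cost per QALY figures above are incremental cost-effectiveness ratios (ICERs): the extra cost of the higher-catheterization strategy divided by the extra QALYs it yields. The per-patient figures below are illustrative, chosen only to show the arithmetic, not the model's actual inputs.

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: additional dollars spent
    per additional QALY gained by the new strategy over the old one."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical per-patient averages for the two strategies
ratio = icer(28_000, 7.1, 22_706, 6.9)
print(f"CAN${ratio:,.0f} per QALY gained")
```

A strategy is usually judged against a willingness-to-pay threshold; the sensitivity of the ratio to the QALY denominator is one reason the reported model uncertainty matters.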
There is variation in cardiac catheterization utilization across jurisdictions. Previous work from Alberta, Canada, showed no evidence of a plateau in the yield of high-risk disease at cardiac catheterization rates as high as 600 per 100,000 population, suggesting that the optimal rate is higher. This work aims 1) to determine whether the previously demonstrated linear relationship between the yield of high-risk coronary disease and cardiac catheterization rates persists with contemporary data and 2) to explore whether the linear relationship exists in other jurisdictions.
Detailed clinical information on all patients undergoing cardiac catheterization in three Canadian provinces was available through the Alberta Provincial Project for Outcomes Assessment in Coronary Heart Disease (APPROACH) and partner initiatives in British Columbia and Nova Scotia. Population rates of catheterization and high-risk coronary disease detection were calculated for each health region in these three provinces, and age-adjusted rates were produced using direct standardization. A mixed effects regression analysis was performed to assess the relationship between catheterization rate and high-risk coronary disease detection.
In the contemporary Alberta data, we found a linear relationship between the population catheterization rate and the high-risk yield. Although the yield was slightly lower in time period 2 (2002-2006) than in time period 1 (1995-2001), there was no statistical evidence of a plateau. The linear relationship between catheterization rate and high-risk yield was similarly demonstrated in British Columbia and Nova Scotia and appears to extend, without a plateau in yield, to rates over 800 procedures per 100,000 population.
Our study demonstrates a consistent finding, over time and across jurisdictions, of linearly increasing detection of high-risk CAD as population rates of cardiac catheterization increase. This internationally-relevant finding can inform country-level planning of invasive cardiac care services.
Co-morbidity information derived from administrative data needs to be validated to allow its regular use. We assessed evolution in the accuracy of coding for Charlson and Elixhauser co-morbidities at three time points over a 5-year period, following the introduction of the International Classification of Diseases, 10th Revision (ICD-10), coding of hospital discharges.
Cross-sectional time trend evaluation study of coding accuracy using hospital chart data of 3499 randomly selected patients discharged in 1999, 2001 and 2003 from two teaching and one non-teaching hospital in Switzerland. We measured sensitivity, positive predictive values and kappa values for agreement between administrative data coded with ICD-10 and chart data as the 'reference standard' for recording 36 co-morbidities.
For the 17 Charlson co-morbidities, the sensitivity (median [min-max]) was 36.5% (17.4-64.1) in 1999, 42.5% (22.2-64.6) in 2001 and 42.8% (8.4-75.6) in 2003. For the 29 Elixhauser co-morbidities, the sensitivity was 34.2% (1.9-64.1) in 1999, 38.6% (10.5-66.5) in 2001 and 41.6% (5.1-76.5) in 2003. Between 1999 and 2003, sensitivity estimates increased for 30 co-morbidities and decreased for 6. The increase in sensitivities was statistically significant for six conditions and the decrease significant for one. Kappa values increased for 29 co-morbidities and decreased for seven.
Accuracy of administrative data in recording clinical conditions improved slightly between 1999 and 2003. These findings are of relevance to all jurisdictions introducing new coding systems, because they demonstrate a phenomenon of improved administrative data accuracy that may relate to a coding 'learning curve' with the new coding system.
ICD-10; Agreement; Administrative Data; Co-morbidity
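Each of the sensitivity, positive predictive and kappa values reported above reduces to arithmetic on a 2x2 table of administrative coding versus chart review for one co-morbidity. A minimal sketch, with hypothetical counts rather than the study's data:

```python
def agreement_stats(tp, fp, fn, tn):
    """Sensitivity, positive predictive value and Cohen's kappa for a
    single co-morbidity, treating chart review as the reference standard.
    tp = coded and charted, fp = coded only, fn = charted only,
    tn = neither."""
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    po = (tp + tn) / n                                  # observed agreement
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)           # chance agreement, "present"
    p_no = ((fn + tn) / n) * ((fp + tn) / n)            # chance agreement, "absent"
    pe = p_yes + p_no
    kappa = (po - pe) / (1 - pe)                        # agreement beyond chance
    return sensitivity, ppv, kappa

# Hypothetical counts: 40 coded & charted, 10 coded only,
# 60 charted only, 890 neither
sens, ppv, kappa = agreement_stats(40, 10, 60, 890)
print(f"sensitivity {sens:.1%}, PPV {ppv:.1%}, kappa {kappa:.2f}")
```

Because rare conditions yield high chance agreement on "absent", kappa can look moderate even when sensitivity is poor, as in the figures above.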
Recent clinical trials have demonstrated benefit with early revascularization following acute myocardial infarction (AMI). Trends in and the association between early (ie, 30 days or fewer) revascularization after AMI and early death were determined.
The Statistics Canada Health Person-Oriented Information Database, consisting of hospital discharge records for seven provinces from the Canadian Institute for Health Information Hospital Morbidity Database, was used. The first AMI visit within a fiscal year for a patient 20 years of age or older was included if there was no AMI in the preceding year. Times to in-hospital death and to revascularization procedures were counted from the admission date of the first AMI visit. Mixed model regression analyses with random slopes were used to assess the relationship between early revascularization and mortality. The overall rate of revascularization within 30 days of AMI increased significantly from 12.5% in 1995 to 37.4% in 2003, while the 30-day mortality rate decreased significantly from 13.5% to 10.6%. There was a linear inverse relationship: higher regional use of revascularization was associated with lower mortality in both men and women.
These population-based utilization and outcome findings are consistent with clinical trial evidence of improved 30-day in-hospital mortality with increased early revascularization after AMI.
Acute myocardial infarction; Administrative data; Mortality; Outcomes research; Revascularization
Previous studies evaluated cardiac procedure use and outcome over the short term, with relatively few Asian patients included.
To determine the likelihood of undergoing percutaneous coronary intervention and coronary artery bypass grafting, and survival during 10.5 years of follow-up after coronary angiography among South Asian, Chinese and other Canadian patients.
Using prospective cohort study data from two large Canadian provinces, 3061 South Asian, 1473 Chinese and 77,314 other Canadian patients with angiographically proven coronary artery disease from 1995 to 2004 were assessed, and their revascularization and mortality rates during 10.5 years of follow-up were determined.
Compared with other Canadian patients, South Asian and Chinese patients were slightly less likely to undergo revascularization (risk-adjusted HR 0.94, 95% CI 0.90 to 0.98 for South Asian patients; and HR 0.94, 95% CI 0.88 to 1.00 for Chinese patients). However, South Asian patients underwent coronary artery bypass grafting (HR 1.00, 95% CI 0.94 to 1.07) and Chinese patients underwent percutaneous coronary intervention (HR 0.96, 95% CI 0.89 to 1.04) as frequently as other Canadian patients. Although the 30-day mortality rate was similar across the three ethnic groups, the mortality rate in the follow-up period was significantly lower for South Asian patients (HR 0.76, 95% CI 0.61 to 0.95) and marginally lower for Chinese patients (HR 0.80, 95% CI 0.60 to 1.07) compared with other Canadian patients.
South Asian and Chinese patients used revascularization slightly less but had better survival outcomes than other Canadian patients. The factors underlying the better outcomes for South Asian and Chinese patients warrant further study.
Chinese Canadians; Coronary artery disease; Invasive cardiac procedure; South Asian Canadians
Glucose-insulin infusions (with potassium [GIK] or without [GI]) have been advocated in the setting of coronary artery bypass graft (CABG) surgery to optimize myocardial glucose use and to minimize ischemic injury.
To conduct a meta-analysis assessing whether perioperative use of GIK/GI infusions reduces in-hospital mortality or atrial fibrillation (AF) after CABG surgery.
Electronic databases (Medline, EMBASE and Cochrane Central Register of Controlled Trials [CENTRAL]) and references of retrieved articles were searched for randomized controlled trials that evaluated the effects of GIK or GI infusions, before or during CABG surgery, on in-hospital mortality and/or postoperative AF. Pooled ORs and 95% CIs were calculated for each outcome.
Twenty trials were identified and eligible for review. The summary OR for in-hospital mortality was 0.88 (95% CI 0.56 to 1.40), based on 44 deaths among 2326 patients. While postoperative AF was a more frequent outcome (occurring in 519 of 1540 patients in the 10 trials reporting this outcome), the overall pooled estimate of effect was nonsignificant (OR 0.79, 95% CI 0.54 to 1.15). This latter finding needs to be interpreted cautiously because it is accompanied by significant heterogeneity across trials.
Perioperative use of GIK/GI does not significantly reduce mortality or atrial fibrillation in patients undergoing CABG surgery. Unless future trial data in support of GIK/GI infusions become available, the routine use of these treatments in patients undergoing CABG surgery should be discouraged because the safety of these infusions has not been systematically examined.
Coronary artery bypass graft surgery; GIK; Insulin; Meta-analysis
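The summary ORs above are pooled across trials; one standard approach for sparse binary outcomes like in-hospital death is the Mantel-Haenszel method. A minimal sketch, with hypothetical event counts rather than the trials in the review:

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel pooled odds ratio across trials.
    Each table is (events_treated, n_treated, events_control, n_control)."""
    num = den = 0.0
    for a, n1, c, n0 in tables:
        b, d = n1 - a, n0 - c        # non-events in each arm
        n = n1 + n0                  # trial total
        num += a * d / n
        den += b * c / n
    return num / den

# Hypothetical trials: (deaths on GIK/GI, n on GIK/GI, deaths control, n control)
trials = [(3, 100, 4, 100), (2, 150, 3, 150), (5, 200, 5, 200)]
pooled = mantel_haenszel_or(trials)
print(f"pooled OR {pooled:.2f}")
```

With only 44 deaths among 2326 patients, the wide CI around the reported OR of 0.88 is expected: sparse events make the pooled estimate imprecise regardless of the pooling method.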
Aspirin has been recommended for the prevention of major adverse cardiovascular events (MACE, composite of non-fatal myocardial infarction, non-fatal stroke, and cardiovascular death) in diabetic patients without previous cardiovascular disease. However, recent meta-analyses have prompted re-evaluation of this practice. The study objective was to evaluate the relative and absolute benefits and harms of aspirin for the prevention of incident MACE in patients with diabetes.
We performed a systematic review and meta-analysis on seven studies (N = 11,618) reporting on the use of aspirin for the primary prevention of MACE in patients with diabetes. Two reviewers conducted a systematic search of electronic databases (MEDLINE, EMBASE, the Cochrane Library, and BIOSIS) and hand searched bibliographies and clinical trial registries. Reviewers extracted data in duplicate, evaluated the quality of the trials, and calculated pooled estimates.
A total of 11,618 participants were included in the analysis. The overall risk ratio (RR) for MACE was 0.91 (95% confidence interval [CI], 0.82-1.00), with little heterogeneity among trials (I² = 0.0%). Secondary outcomes of interest included myocardial infarction (RR, 0.85; 95% CI, 0.66-1.10), stroke (RR, 0.84; 95% CI, 0.64-1.11), cardiovascular death (RR, 0.95; 95% CI, 0.71-1.27), and all-cause mortality (RR, 0.95; 95% CI, 0.85-1.06). There were higher rates of hemorrhagic and gastrointestinal events with aspirin. In absolute terms, these relative risks indicate that for every 10,000 diabetic patients treated with aspirin, 109 MACE may be prevented at the expense of 19 major bleeding events (with the caveat that the relative risk for the latter is not statistically significant).
The studies reviewed suggest that aspirin reduces the risk of MACE in patients with diabetes without cardiovascular disease, while also causing a trend toward higher rates of bleeding and gastrointestinal complications. These findings and our absolute benefit and risk calculations suggest that those with diabetes but without cardiovascular disease lie somewhere between primary and secondary prevention patients on the spectrum of benefit and risk. This underscores the importance of considering individual risk in clinical decision making regarding aspirin in those with diabetes.
The advent of clinical trials and evidence-informed medicine has resulted in a vast wealth of medical literature. Here, we summarize five notable articles for general internal medicine published in late 2009 and in 2010, and reflect on the remarkable advances made by an increasingly prolific medical research community.
Objective To conduct a comprehensive systematic review and meta-analysis of studies assessing the effect of alcohol consumption on multiple cardiovascular outcomes.
Design Systematic review and meta-analysis.
Data sources A search of Medline (1950 through September 2009) and Embase (1980 through September 2009) supplemented by manual searches of bibliographies and conference proceedings.
Inclusion criteria Prospective cohort studies on the association between alcohol consumption and overall mortality from cardiovascular disease, incidence of and mortality from coronary heart disease, and incidence of and mortality from stroke.
Studies reviewed Of 4235 studies reviewed for eligibility, quality, and data extraction, 84 were included in the final analysis.
Results The pooled adjusted relative risks for alcohol drinkers relative to non-drinkers in random effects models for the outcomes of interest were 0.75 (95% confidence interval 0.70 to 0.80) for cardiovascular disease mortality (21 studies), 0.71 (0.66 to 0.77) for incident coronary heart disease (29 studies), 0.75 (0.68 to 0.81) for coronary heart disease mortality (31 studies), 0.98 (0.91 to 1.06) for incident stroke (17 studies), and 1.06 (0.91 to 1.23) for stroke mortality (10 studies). Dose-response analysis revealed that the lowest risk of coronary heart disease mortality occurred with 1–2 drinks a day, but for stroke mortality it occurred with ≤1 drink per day. Secondary analysis of mortality from all causes showed lower risk for drinkers compared with non-drinkers (relative risk 0.87 (0.83 to 0.92)).
Conclusions Light to moderate alcohol consumption is associated with a reduced risk of multiple cardiovascular outcomes.
Objective To systematically review interventional studies of the effects of alcohol consumption on 21 biological markers associated with risk of coronary heart disease in adults without known cardiovascular disease.
Design Systematic review and meta-analysis.
Data sources Medline (1950 to October 2009) and Embase (1980 to October 2009) without limits.
Study selection Two reviewers independently selected studies that examined adults without known cardiovascular disease and that compared fasting levels of specific biological markers associated with coronary heart disease after alcohol use with those after a period of no alcohol use (controls). 4690 articles were screened for eligibility, the full texts of 124 studies reviewed, and 63 relevant articles selected.
Results Of 63 eligible studies, 44 on 13 biomarkers were meta-analysed in fixed or random effects models. Quality was assessed by sensitivity analysis of studies grouped by design. Analyses were stratified by type of beverage (wine, beer, spirits). Alcohol significantly increased levels of high density lipoprotein cholesterol (pooled mean difference 0.094 mmol/L, 95% confidence interval 0.064 to 0.123), apolipoprotein A1 (0.101 g/L, 0.073 to 0.129), and adiponectin (0.56 mg/L, 0.39 to 0.72). Alcohol showed a dose-response relation with high density lipoprotein cholesterol (test for trend P=0.013). Alcohol decreased fibrinogen levels (−0.20 g/L, −0.29 to −0.11) but did not affect triglyceride levels. Results were similar for crossover and before and after studies, and across beverage types.
Conclusions Favourable changes in several cardiovascular biomarkers (higher levels of high density lipoprotein cholesterol and adiponectin and lower levels of fibrinogen) provide indirect pathophysiological support for a protective effect of moderate alcohol use on coronary heart disease.
Primary percutaneous coronary intervention (PCI) is a proven therapy for acute ST-segment elevation myocardial infarction. However, outcomes associated with primary PCI may differ depending on time of day.
Using the Alberta Provincial Project for Outcomes Assessment in Coronary Heart Disease, a clinical data-collection initiative capturing all cardiac catheterisation patients in Alberta, Canada, the authors described and compared crude and risk-adjusted survival for ST-segment elevation myocardial infarction patients undergoing primary PCI after-hours versus during regular working hours. From 1 January 1999 to 31 March 2006, 1664 primary PCI procedures were performed (54.4% after-hours). Thirty-day mortality was 3.6% for regular-hours procedures and 5.0% for after-hours procedures (p=0.16). One-year mortality was 6.2% and 7.3% in the regular-hours and after-hours groups, respectively (p=0.35). After adjusting for baseline risk factor differences, HRs for after-hours mortality were 1.26 (95% CI 0.78 to 2.02) for survival to 30 days and 1.08 (0.73 to 1.59) for survival to 1 year. A meta-analysis of our after-hours HR point estimate with other published risk estimates for after-hours primary PCI outcomes yielded an RR of 1.23 (1.00 to 1.51) for shorter-term outcomes.
After-hours primary PCI was not associated with a statistically significant increase in mortality. However, a meta-analysis of this study with other published after-hours outcome studies yields an RR that leaves some questions about unexplored factors that may influence after-hours primary PCI care.
Infarction; mortality; angioplasty; stents; adverse event; morbidity and mortality; patient outcomes
Given the vast and growing volume of medical literature, it is essential to develop reliable strategies for identifying articles of importance and relevance. Here, we summarize 5 notable articles for general internal medicine published in 2008 and 2009. Clinical vignettes are presented to illustrate situations in which each study might apply, and each summary ends with a description of how a physician might use the study findings to resolve the vignette case. Finally, we describe a surveillance strategy that physicians can use to identify articles important to their own practices.
Physicians are often unable to eat and drink properly during their work day, and nutrition has been linked to cognition. We aimed to examine the effect of a nutrition-based intervention, scheduled nutrition breaks during the work day, on physician cognition, glucose, and hypoglycemic symptoms.
A volunteer sample of twenty staff physicians from a large urban teaching hospital was recruited from the doctors' lounge. During both the baseline and the intervention day, we measured subjects' cognitive function, capillary blood glucose, "hypoglycemic" nutrition-related symptoms, fluid and nutrient intake, level of physical activity, weight, and urinary output.
Cognition scores as measured by a composite score of speed and accuracy (Tput statistic) were superior on the intervention day on simple (220 vs. 209, p = 0.01) and complex (92 vs. 85, p < 0.001) reaction time tests. Group mean glucose was 0.3 mmol/L lower (p = 0.03) and less variable (coefficient of variation 12.2% vs. 18.0%) on the intervention day. Although not statistically significant, there was also a trend toward the reporting of fewer hypoglycemic type symptoms. There was higher nutrient intake on intervention versus baseline days as measured by mean caloric intake (1345 vs. 935 kilocalories, p = 0.008), and improved hydration as measured by mean change in body mass (+352 vs. -364 grams, p < 0.001).
Our study provides evidence in support of adequate workplace nutrition as a contributor to improved physician cognition, adding to the body of research suggesting that physician wellness may ultimately benefit not only the physicians themselves but also their patients and the health care systems in which they work.
We sought to evaluate agreement between a new and widely implemented method of temperature measurement in critical care, temporal artery thermometry, and an established method of core temperature measurement, bladder thermometry, as performed in clinical practice.
Temperatures were simultaneously recorded hourly (n = 736 observations) using both devices as part of routine clinical monitoring in 14 critically ill adult patients whose temperatures had ranged over ≥1°C prior to consent.
The mean difference between temporal artery and bladder temperatures measured was -0.44°C (95% confidence interval, -0.47°C to -0.41°C), with temporal artery readings lower than bladder temperatures. Agreement between the two devices was greatest for normothermia (36.0°C to < 38.3°C) (mean difference -0.35°C [95% confidence interval, -0.37°C to -0.33°C]). The temporal artery thermometer recorded higher temperatures during hypothermia (< 36°C) (mean difference 0.66°C [95% confidence interval, 0.53°C to 0.79°C]) and lower temperatures during hyperthermia (≥38.3°C) (mean difference -0.90°C [95% confidence interval, -0.99°C to -0.81°C]). The sensitivity for detecting fever (core temperature ≥38.3°C) using the temporal artery thermometer was 0.26 (95% confidence interval, 0.20 to 0.33), and the specificity was 0.99 (95% confidence interval, 0.98 to 0.99). The positive likelihood ratio for fever was 24.6 (95% confidence interval, 10.7 to 56.8); the negative likelihood ratio was 0.75 (95% confidence interval, 0.68 to 0.82).
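The likelihood ratios reported above follow from the sensitivity and specificity. As a sanity-check sketch (the published values were computed from the raw 2x2 counts, so rounding makes them differ slightly from the figures below):

```python
# Likelihood ratios for fever detection, derived from the reported
# sensitivity (0.26) and specificity (0.99). The published LR+ of 24.6
# came from raw counts, so it differs slightly from this rounded figure.
def likelihood_ratios(sensitivity: float, specificity: float) -> tuple[float, float]:
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

lr_pos, lr_neg = likelihood_ratios(0.26, 0.99)
print(round(lr_pos, 1), round(lr_neg, 2))  # 26.0 0.75
```

The very high LR+ with an LR- near 1 captures the clinical message: a temporal artery reading of ≥38.3°C strongly suggests fever, but a lower reading does little to rule it out.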
Temporal artery thermometry produces somewhat surprising disagreement with an established method of core temperature measurement and should not be used in situations where body temperature needs to be measured with accuracy.
The goal of this study was to assess the validity of International Classification of Diseases, 10th Revision (ICD-10) administrative hospital discharge data and to determine whether there were improvements in the validity of coding for clinical conditions compared with ICD-9 Clinical Modification (ICD-9-CM) data.
We reviewed 4,008 randomly selected charts for patients admitted from January 1 to June 30, 2003 at four teaching hospitals in Alberta, Canada to determine the presence or absence of 32 clinical conditions and to assess the agreement between ICD-10 data and chart data. We then recoded the same charts using ICD-9-CM and determined the agreement between the ICD-9-CM data and chart data for recording those same conditions. The accuracy of ICD-10 data relative to chart data was compared with the accuracy of ICD-9-CM data relative to chart data.
Sensitivity values ranged from 9.3 to 83.1 percent for ICD-9-CM and from 12.7 to 80.8 percent for ICD-10 data. Positive predictive values ranged from 23.1 to 100 percent for ICD-9-CM and from 32.0 to 100 percent for ICD-10 data. Specificity and negative predictive values were consistently high for both ICD-9-CM and ICD-10 databases. Of the 32 conditions assessed, ICD-10 data had significantly higher sensitivity for one condition and lower sensitivity for seven conditions relative to ICD-9-CM data. The two databases had similar sensitivity values for the remaining 24 conditions.
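The validity measures above treat chart review as the reference standard for each condition. A minimal sketch of how they derive from a 2x2 agreement table (the counts below are hypothetical, for illustration only):

```python
# Validity of administrative coding against chart review (the reference
# standard). tp = coded and in chart, fp = coded but not in chart,
# fn = in chart but not coded, tn = in neither. Counts are hypothetical.
def validity(tp: int, fp: int, fn: int, tn: int) -> dict[str, float]:
    return {
        "sensitivity": tp / (tp + fn),  # chart-positive cases that were coded
        "specificity": tn / (tn + fp),  # chart-negative cases left uncoded
        "ppv": tp / (tp + fp),          # coded cases confirmed by the chart
        "npv": tn / (tn + fn),          # uncoded cases confirmed negative
    }

print(validity(tp=83, fp=17, fn=117, tn=3791))
```

Because most conditions are absent in most charts, true negatives dominate, which is why specificity and negative predictive value were consistently high in both databases even when sensitivity was low.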
The validity of ICD-9-CM and ICD-10 administrative data in recording clinical conditions was generally similar, though validity differed between coding versions for some conditions. The implementation of ICD-10 coding has not significantly improved the quality of administrative data relative to ICD-9-CM. Future assessments like this one are needed because the validity of ICD-10 data may improve as coders gain experience with the new coding system.
ICD-9-CM; ICD-10; chart data; validity; Canada
At least implicitly, most clinical decisions represent an integration of disease and treatment-based risk assessments. Often, as is the case with acute coronary syndrome (ACS), these decisions need to be made quickly at a time when data elements are limited, and published risk models are very useful in clarifying time-dependent determinants of risk. The present review emphasizes the value of explicit risk assessment and reinforces the fact that patients at highest risk are often those most likely to benefit from newer and more invasive therapies. Suggested ways to incorporate published ACS risk models into clinical practice are included. In addition, the need to adopt a longer-term view of risk in ACS patients is stressed, with particular regard to the important role of heart failure prediction and treatment.
Acute coronary syndrome; Risk prediction
Admission hyperglycemia has been associated with worse outcomes in ischemic stroke. We hypothesized that hyperglycemia (glucose >8.0 mmol/l) in the hyperacute phase would be independently associated with increased mortality, symptomatic intracerebral hemorrhage (SICH), and poor functional status at 90 days in stroke patients treated with intravenous tissue plasminogen activator (IV-tPA).
RESEARCH DESIGN AND METHODS
Using data from the prospective, multicenter Canadian Alteplase for Stroke Effectiveness Study (CASES), the association between admission glucose >8.0 mmol/l and mortality, SICH, and poor functional status at 90 days (modified Rankin Scale >1) was examined. Similar analyses examining glucose as a continuous measure were conducted.
Of 1,098 patients, 296 (27%) had admission hyperglycemia, including 18% of those without diabetes and 70% of those with diabetes. After multivariable logistic regression, admission hyperglycemia was found to be independently associated with increased risk of death (adjusted risk ratio 1.5 [95% CI 1.2–1.9]), SICH (1.69 [0.95–3.00]), and a decreased probability of a favorable outcome at 90 days (0.7 [0.5–0.9]). An incremental risk of death and SICH and unfavorable 90-day outcomes was observed with increasing admission glucose. This observation held true for patients with and without diabetes.
In this cohort of IV-tPA–treated stroke patients, admission hyperglycemia was independently associated with increased risk of death, SICH, and poor functional status at 90 days. Treatment trials continue to be urgently needed to determine whether this is a modifiable risk factor for poor outcome.
Primary percutaneous coronary intervention (PCI) is preferred over fibrinolysis for the treatment of ST-segment elevation myocardial infarction (STEMI). In the United States, nearly 80% of people aged 18 years and older have access to a PCI facility within 60 minutes. We conducted this study to evaluate the areas in Canada and the proportion of the population aged 40 years and older with access to a PCI facility within 60, 90 and 120 minutes.
We used geographic information systems to estimate travel times by ground transport to PCI facilities across Canada. Time to dispatch, time to patient and time at the scene were considered in the overall access times. Using 2006 Canadian census data, we extracted the number of adults aged 40 years and older who lived in areas with access to a PCI facility within 60, 90 and 120 minutes. We also examined the effect on these estimates of the hypothetical addition of new PCI facilities in underserved areas.
Only a small proportion of the country’s geographic area was within 60 minutes of a PCI facility. Despite this, 63.9% of Canadians aged 40 and older had such access. This proportion varied widely across provinces, from a low of 15.8% in New Brunswick to a high of 72.6% in Ontario. The hypothetical addition of a single facility to each of 4 selected provinces could increase the proportion by 3.2% to 4.3%, depending on the province. About 470 000 adults would gain access in such a scenario of new facilities.
We found that nearly two-thirds of Canada’s population aged 40 years and older had timely access to PCI facilities. The proportion varied widely across the country. Such information can inform the development of regionalized STEMI care models.