Background
The extent to which adult height, a biomarker of the interplay of genetic endowment and early-life experiences, is related to risk of chronic diseases in adulthood is uncertain.
Methods
We calculated hazard ratios (HRs) for height, assessed in increments of 6.5 cm, using individual-participant data on 174 374 deaths or major non-fatal vascular outcomes recorded among 1 085 949 people in 121 prospective studies.
Results
For people born between 1900 and 1960, mean adult height increased 0.5–1 cm with each successive decade of birth. After adjustment for age, sex, smoking and year of birth, HRs per 6.5 cm greater height were 0.97 (95% confidence interval: 0.96–0.99) for death from any cause, 0.94 (0.93–0.96) for death from vascular causes, 1.04 (1.03–1.06) for death from cancer and 0.92 (0.90–0.94) for death from other causes. Height was negatively associated with death from coronary disease, stroke subtypes, heart failure, stomach and oral cancers, chronic obstructive pulmonary disease, mental disorders, liver disease and external causes. In contrast, height was positively associated with death from ruptured aortic aneurysm, pulmonary embolism, melanoma and cancers of the pancreas, endocrine and nervous systems, ovary, breast, prostate, colorectum, blood and lung. HRs per 6.5 cm greater height ranged from 1.26 (1.12–1.42) for risk of melanoma death to 0.84 (0.80–0.89) for risk of death from chronic obstructive pulmonary disease. HRs were not appreciably altered after further adjustment for adiposity, blood pressure, lipids, inflammation biomarkers, diabetes mellitus, alcohol consumption or socio-economic indicators.
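Because the HRs above are reported per fixed 6.5 cm increment, they imply a log-linear association, under which an HR can be rescaled to any other height difference. A minimal sketch (the function name is illustrative, not from the study):

```python
def hr_for_height_difference(hr_per_6_5_cm: float, delta_cm: float) -> float:
    """Rescale a hazard ratio reported per 6.5 cm of greater height to an
    arbitrary height difference, assuming log(HR) is linear in height."""
    return hr_per_6_5_cm ** (delta_cm / 6.5)

# e.g. the cancer HR of 1.04 per 6.5 cm, applied to a 13 cm difference:
cancer_hr_13cm = hr_for_height_difference(1.04, 13.0)  # 1.04 ** 2
```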
Conclusion
Adult height has directionally opposing relationships with risk of death from several major chronic diseases.
Height; cardiovascular disease; cancer; cause-specific mortality; epidemiological study; meta-analysis
Raynaud phenomenon (RP) is a temporary vasoconstrictive condition that often manifests itself in the fingers in response to cold or stress. It often co-occurs with certain chronic diseases that impact mortality. Our objective was to determine whether RP has any independent association with survival.
From 1987–1989, a total of 830 participants of the Charleston Heart Study cohort completed an in-person RP screening questionnaire. Two definitions of RP were used: a broad definition that included both blanching (pallor) and cyanotic color changes and a narrow definition that included only blanching. All-cause and cardiovascular disease (CVD) mortality were compared between subjects with and without RP using race-specific survival models that adjusted for age, sex, baseline CVD, and 10-year risk of coronary heart disease.
Using the narrow RP definition, we identified a significant interaction between older age and the presence of RP on all-cause mortality. In the broad RP definition model, the presence of RP was not associated with CVD mortality among blacks; however, among whites, the presence of RP was associated with a 1.6-fold increase in the hazard associated with CVD-related death (hazard ratio: 1.55, 95% confidence interval: 1.10–2.20, P=0.013).
RP was independently associated with mortality among older adults in our cohort. Among whites, RP was associated with increased CVD-related death. It is possible that RP may be a sign of undiagnosed vascular disease.
Raynaud disease; cohort studies; cardiovascular diseases; survival analysis
Background and Objectives
A hardcopy or paper cognitive aid has been shown to improve performance during the management of simulated local anesthetic systemic toxicity (LAST) when given to the team leader. However, there remains room for improvement toward a system that achieves perfect adherence to the published guidelines for LAST management. Recent research has shown that implementing a checklist via a designated reader may be of benefit. Accordingly, we sought to investigate the effect of an electronic decision support tool (DST) and a designated 'Reader' role on team performance during an in-situ simulation of LAST.
Participants were randomized to Reader+DST (N = 16, rDST) and Control (N = 15, memory alone). The rDST group received the assistance of a dedicated Reader on the response team who was equipped with an electronic DST. The primary outcome measure was adherence to guidelines.
For overall and critical percent correct scores, the rDST group scored higher than Control (99.3% vs 72.2%, P < 0.0001; 99.5% vs 70%, P < 0.0001, respectively). In the LAST scenario, 0 of 15 (0%) in the control group performed 100% of critical management steps, while 15 of 16 (93.8%) in the rDST group did so (P < 0.0001).
In a prospective, randomized, single-blinded study, a designated Reader equipped with an electronic DST improved adherence to guidelines in the management of an in-situ simulation of LAST. Such tools hold promise for clinical medicine, but further research is needed to determine the best methods for implementing them in the clinical arena.
It has been noted that increased focus on learning acute care skills is needed in undergraduate medical curricula. This study investigated whether a simulation-based curriculum improved senior medical students' ability to manage acute coronary syndrome (ACS) as measured during a Clinical Practice Exam (CPX). We hypothesized that simulation training would improve overall performance as compared to targeted didactics or historical controls.
All fourth-year medical students (N=291) over 2 years at our institution were included in this study. In the third year of medical school, the “Control” group received no intervention, the “Didactic” group received a targeted didactic curriculum, and the “Simulation” group participated in small-group simulation training and the didactic curriculum. For intergroup comparison on the CPX, we calculated the percentage of correct actions completed by the student. Data are presented as mean ± SD.
There was a significant improvement in overall performance with Simulation (53.5 ± 8.9%) versus both Didactics (47.7 ± 9.0%) and Control (47.9 ± 9.8%) (P<0.001). Performance on the physical exam component was significantly better in Simulation (48.5 ± 16.2%) versus both Didactics (37.6 ± 13.1%) and Control (37.7 ± 15.7%), as was diagnosis: Simulation (75.7 ± 24.2%) versus both Didactics (64.6 ± 25.1%) and Control (62.1 ± 24.2%) (P<0.02 for all comparisons).
Simulation training had a modest impact on overall CPX performance in the management of a simulated ACS. Further studies are needed to determine how to improve curricula for the management of unstable patients.
medical student; simulation; deliberate practice; curriculum; acute coronary syndrome
The 2007 American College of Cardiology/American Heart Association Guidelines on Perioperative Cardiac Evaluation and Care for Noncardiac Surgery are the standard for perioperative cardiac evaluation. Recent work has shown that residents and anesthesiologists do not apply these guidelines when tested. We hypothesized that a decision support tool would improve adherence to this consensus guideline.
Anesthesiology residents at 4 training programs participated in an unblinded, prospective, randomized cross-over trial in which they completed two quizzes covering clinical scenarios. One quiz was completed from memory and one with the aid of an electronic decision support tool. Performance was evaluated by overall score (% correct), the number of incorrect answers with possibly increased cost or risk of care, and the time required to complete the quizzes with and without the cognitive aid. The primary outcome was the proportion of correct responses attributable to use of the decision support tool.
All anesthesiology residents at four institutions were recruited and 111 residents participated. Use of the decision support tool resulted in a 25% improvement in adherence to guidelines compared to memory alone (p<0.0001), and participants made 77% fewer incorrect responses that would have resulted in increased costs. Use of the tool was associated with a 3.4-minute increase in time to complete the test (p<0.001).
Use of an electronic decision support tool significantly improved adherence to the guidelines as compared to memory alone. The decision support tool also prevented inappropriate management steps possibly associated with increased healthcare costs.
To determine the prospective risk of intrauterine fetal demise (IUFD) at ≥34 weeks' gestation for monochorionic (MC) and dichorionic (DC) twins receiving intensive antenatal fetal surveillance. The secondary objective was to calculate the incidence of prematurity-related neonatal morbidity/mortality, stratified by gestational week and chorionicity.
A retrospective cohort study of all twins ≥34 weeks delivered at MUSC (1987–2010) was performed. Twins were cared for in a longstanding Twin Clinic with standardized management and surveillance protocols, supervised by a consistent Maternal-Fetal Medicine specialist. Gestational age-specific fetal/neonatal mortality and composite neonatal morbidity rates were compared by chorionicity. A generalized linear mixed model was used to identify variables associated with increased composite neonatal morbidity.
Among 768 twin gestations (601 DC and 167 MC), only one dichorionic IUFD occurred. The prospective risk of IUFD ≥34 weeks was 0.17% for DC twins and 0% for MC twins. Composite neonatal morbidity decreased with each gestational week (p<0.0001). Morbidity was increased by white race, gestational diabetes and elective indication for delivery. The nadir of composite neonatal morbidity occurred at 36/0-36/6 weeks for MC twins and 37/0-37/6 weeks for DC twins.
Our data do not support concern for an increased risk of stillbirth in uncomplicated intensively managed MC twins ≥34 weeks’ gestation. However, our data do show significantly increased rates of neonatal morbidity in late preterm MC twins that cannot be justified by a corresponding reduction in the risk of stillbirth. We feel that our data support delivery of uncomplicated MC twins at 37 weeks’ gestation.
Delivery timing; Dichorionic twins; Monochorionic twins; Stillbirth
Novel statistical methods are constantly being developed within the context of biomedical research; however, the characteristics of biostatistical methods that have been adopted into the field of general/internal medicine (GIM) are unclear. This study highlights the statistical journal articles, the statistical journals, and the types of statistical methods that appear to be having the most direct impact on GIM research.
Descriptive techniques, including analyses of articles’ keywords and controlled vocabulary terms, were used to characterize the articles published in statistics and probability journals that were subsequently referenced within GIM journal articles during a recent 10-year period (2000–2009).
From the 45 statistics and probability journals of interest, a total of 989 unique articles were identified as being cited by 2,183 (out of a total of about 127,469) unique GIM journal articles. The most frequently cited statistical topics included general/other statistical methods, followed by randomized trials, epidemiologic methods, meta-analysis, generalized linear models, and computer simulation.
As statisticians continue to develop and refine techniques, the promotion and adoption of these methods should also be addressed so that the effort spent developing them is not in vain.
bibliometrics; biostatistical methods; general/internal medicine; journal impact factor
To determine the relationship between main pulmonary artery diameter and pulmonary hypertension (PH) in scleroderma patients with and without interstitial lung disease (ILD).
We retrospectively reviewed 48 subjects with scleroderma who underwent chest CT and right heart catheterization within six months of each other. Patients were divided into two groups based on the absence or presence of ILD on chest CT. Subset analysis was performed based on available pulmonary function tests (PFTs), with groups divided by forced vital capacity (FVC). CT scans were scored for extent of fibrosis and ground glass opacity. Pulmonary artery and ascending aorta measurements were obtained by two independent observers. Univariate and multivariable models were used to evaluate the correlation between main pulmonary artery diameter (MPAD) and mean pulmonary artery pressure (mPAP) measured by right heart catheterization. Receiver operating characteristic analysis was performed to assess the diagnostic accuracy of MPAD in predicting PH.
Strong correlations between mPAP and MPAD were found in this study regardless of the presence or absence of mild to moderate interstitial fibrosis on chest CT. When patients were divided based on FVC, the correlation between mPAP and MPAD was substantially attenuated. An MPAD cutoff of 30.8 mm yielded the highest sensitivity and specificity, at 81.3% and 87.5%, respectively.
In scleroderma patients, an enlarged main pulmonary artery (>30 mm) predicts pulmonary hypertension even in the presence of mild to moderate fibrosis, although the relationship may be attenuated in patients with a low % FVC.
Advanced Cardiac Life Support (ACLS) algorithms are the default standard of care for in-hospital cardiac arrest (IHCA) management. However, adherence to published guidelines is relatively poor. The records of 149 patients who experienced IHCA were examined to begin to understand the association between overall adherence to ACLS protocols and successful return of spontaneous circulation (ROSC).
A retrospective chart review of medical records and code team worksheets was conducted for 75 patients who had ROSC after an IHCA event (SE group) and 74 who did not survive an IHCA event (DNS group). Protocol adherence was assessed using a detailed checklist based on the 2005 ACLS Update protocols. Several additional patient characteristics and circumstances were also examined as potential predictors of ROSC.
In unadjusted analyses, the percentage of correct steps performed was positively correlated with ROSC from an IHCA (p <0.01), and the number of errors of commission and omission were both negatively correlated with ROSC from an IHCA (p <0.01). In multivariable models, the percentage of correct steps performed and the number of errors of commission and omission remained significantly predictive of ROSC (p<0.01 and p<0.0001, respectively) even after accounting for confounders such as the difference in age and location of the IHCAs.
Our results show that adherence to ACLS protocols throughout an event is correlated with increased ROSC in the setting of cardiac arrest. Furthermore, the results suggest that, in addition to correct actions, both wrong actions and omissions of indicated actions lead to decreased ROSC after IHCA.
The objective of this study was to determine whether a composite outcome, derived of objective signs of inadequate cardiac output, would be associated with other important measures of outcomes and therefore be an appropriate end point for clinical trials in neonatal cardiac surgery.
Neonates (n = 76) undergoing cardiac operations requiring cardiopulmonary bypass were prospectively enrolled. Patients were defined to have met the composite outcome if they had any of the following events before hospital discharge: death, the use of mechanical circulatory support, cardiac arrest requiring chest compressions, hepatic injury (2 times the upper limit of normal for aspartate aminotransferase or alanine aminotransferase), renal injury (creatinine >1.5 mg/dL), or lactic acidosis (an increasing lactate >5 mmol/L in the postoperative period). Associations between the composite outcome and the duration of mechanical ventilation, intensive care unit stay, hospital stay, and total hospital charges were determined.
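The composite outcome defined above can be expressed as a simple predicate. The thresholds follow the abstract; the parameter names are illustrative assumptions, not from the study:

```python
def met_composite_outcome(death: bool, mech_support: bool, cardiac_arrest: bool,
                          ast: float, alt: float, uln_ast: float, uln_alt: float,
                          creatinine_mg_dl: float, peak_lactate_mmol_l: float) -> bool:
    """True if any component of the composite outcome occurred before discharge."""
    hepatic_injury = ast > 2 * uln_ast or alt > 2 * uln_alt   # >2x upper limit of normal
    renal_injury = creatinine_mg_dl > 1.5                     # creatinine >1.5 mg/dL
    lactic_acidosis = peak_lactate_mmol_l > 5                 # increasing lactate >5 mmol/L
    return any([death, mech_support, cardiac_arrest,
                hepatic_injury, renal_injury, lactic_acidosis])
```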
The median age at the time of surgery was 7 days, and the median weight was 3.2 kg. The composite outcome was met in 39% of patients (30/76). Patients who met the composite outcome compared with those who did not had a longer duration of mechanical ventilation (4.9 vs 2.9 days, P<.01), intensive care unit stay (8.8 vs 5.7 days, P<.01), hospital stay (23 vs 12 days, P<.01), and increased hospital charges ($258,000 vs $170,000, P<.01). In linear regression analysis, controlling for surgical complexity, these differences remained significant (R2 = 0.29–0.42, P<.01).
The composite outcome is highly associated with important early operative outcomes and may serve as a useful end point for future clinical research in neonates undergoing cardiac operations.
Adherence to Advanced Cardiac Life Support (ACLS) guidelines during in-hospital cardiac arrest (IHCA) is associated with improved outcomes, but current evidence shows that sub-optimal care is common. Successful execution of such protocols during IHCA requires rapid patient assessment and the performance of a number of ordered, time-sensitive interventions. Accordingly, we sought to determine whether the use of an electronic decision support tool (DST) improves performance during high-fidelity simulations of IHCA.
After IRB approval and written informed consent was obtained, 47 senior medical students were enrolled. All participants were ACLS certified and within one month of graduation. Each participant was issued an iPod Touch device with a DST installed that contained all ACLS management algorithms. Participants managed two scenarios of IHCA and were allowed to use the DST in one scenario and prohibited from using it in the other. All participants managed the same scenarios. Simulation sessions were video recorded and graded by trained raters according to previously validated checklists.
Performance of correct protocol steps was significantly greater with the DST than without (84.7% vs 73.8%, p < 0.001), and participants committed significantly fewer additional errors when using the DST (2.5 vs 3.8 errors, p < 0.012).
Use of an electronic DST provided a significant improvement in the management of simulated IHCA by senior medical students as measured by adherence to published guidelines.
We investigate whether the distributions to the states from the Tobacco Master Settlement Agreement (MSA) in 1998 are associated with stronger tobacco control efforts. We use state-level data from 50 states and the District of Columbia from four time periods post MSA (1999, 2002, 2004, and 2006) for the analysis. Using fixed effect regression models, we estimate the relationship between MSA disbursements and a new aggregate measure of strength of state tobacco control known as the Strength of Tobacco Control (SoTC) Index. Results show that an increase of $1 in the annual per capita MSA disbursement to a state is associated with a decrease of 0.316 in the SoTC mean value, indicating that higher MSA payments were associated with weaker tobacco control measures within states. To achieve the initial objectives of the MSA payments, policy makers should focus on utilizing MSA payments strictly for tobacco control activities across states.
Overconsumption of alcohol is well known to lead to numerous health and social problems. Prevalence studies of United States adults found that 20% of patients meet criteria for an alcohol use disorder. Routine screening for alcohol use is recommended in primary care settings, yet little is known about the organizational factors that are related to successful implementation of screening and brief intervention (SBI) and treatment in these settings. The purpose of this study was to evaluate organizational attributes in primary care practices that participated in a practice-based research network trial to implement alcohol SBI. The Survey of Organizational Attributes in Primary Care (SOAPC) has reliably measured four factors: communication, decision-making, stress/chaos, and history of change. This 21-item instrument was administered to 178 practice members at the baseline of the trial to evaluate the relationship of organizational attributes to implementation of alcohol SBI and treatment. No significant relationships were found between alcohol screening, identification of high-risk drinkers, or brief intervention and the factors measured by the SOAPC instrument. These results highlight the challenges related to the use of organizational survey instruments in explaining or predicting variations in clinical improvement. Comprehensive mixed-methods approaches may be more effective in evaluations of implementation of SBI and treatment.
alcohol screening; high-risk drinkers; primary care; organizational attributes
Quality chest compressions (CC) are the most important factor in successful cardiopulmonary resuscitation. Adjustment of CC based upon an invasive arterial blood pressure (ABP) display would be theoretically beneficial. Additionally, having one compressor present for longer than a 2-min cycle with an ABP display may allow for a learning process to further maximize CC. Accordingly, we tested the hypothesis that CC can be improved with a real-time display of invasively measured blood pressure and with an unchanged, physically fit compressor.
A manikin was attached to an ABP display derived from a hemodynamic model responding to parameters of CC rate, depth, and compression-decompression ratio. The area under the blood pressure curve over time (AUC) was used for data analysis. Each participant (N = 20) performed 4 CPR sessions: (1) No ABP display, exchange of compressor every 2 min; (2) ABP display, exchange of compressor every 2 min; (3) no ABP display, no exchange of the compressor; (4) ABP display, no exchange of the compressor. Data were analyzed by ANOVA. Significance was set at a p-value < 0.05.
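The AUC outcome used for analysis can be computed from a sampled pressure trace. Trapezoidal integration is an assumption here; the abstract does not specify the numerical method:

```python
def pressure_auc(times_s: list, pressures_mmhg: list) -> float:
    """Area under an arterial blood pressure trace over time, in mmHg*s,
    by trapezoidal integration of (time, pressure) samples."""
    auc = 0.0
    for i in range(1, len(times_s)):
        dt = times_s[i] - times_s[i - 1]
        auc += 0.5 * (pressures_mmhg[i] + pressures_mmhg[i - 1]) * dt
    return auc
```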
The average AUC for cycles without the ABP display was 5201 mmHg·s (95% confidence interval (CI): 4804–5597 mmHg·s), and for cycles with the ABP display 6110 mmHg·s (95% CI: 5715–6507 mmHg·s) (p < 0.0001). The average AUC increase with the ABP display was 20.2 ± 17.4% per participant (p < 0.0001).
Our study confirms the hypothesis that a real-time display of simulated ABP during CPR that responds to participant performance improves achieved and sustained ABP. However, without any real-time visual feedback, even fit compressors demonstrated degradation of CC quality.
Cardiopulmonary resuscitation; Chest compressions; Simulation; Arterial blood pressure
Preeclampsia (PE) affects 2–8% of pregnancies worldwide and is a significant source of maternal and neonatal morbidity and mortality. However, the mechanisms underlying PE are poorly understood, and major questions regarding etiology and risk factors remain to be addressed. Our objective was to examine whether abnormal expression of the cardiovascular developmental transcription factor Nkx2-5 was associated with early onset and severe preeclampsia (EOSPE).
Using qPCR and immunohistochemical assay, we examined the expression of Nkx2-5 and its target genes in EOSPE and control placental tissue. We tested the resulting mechanistic hypotheses in cultured cells using shRNA knockdown, qPCR, and western blot.
Nkx2-5 is highly expressed, in a racially disparate fashion (Caucasians > African Americans), in a subset of EOSPE placentae. Nkx2-5 mRNA expression is highly correlated, in the same racially disparate fashion, with mRNA expression of the preeclampsia marker sFlt-1 and of the Nkx2-5 target and RNA splicing factor Sam68. Knockdown of Sam68 expression in cultured cells significantly impacts sFlt-1 mRNA isoform generation in vitro, supporting the mechanistic hypothesis that Nkx2-5 impacts EOSPE severity in a subset of patients via upregulation of Sam68 to increase sFlt-1 expression. Expression of additional Nkx2-5 targets that potentially regulate the metabolic stress response is also elevated, in a racially disparate fashion, in EOSPE.
Expression of Nkx2-5 and its target genes may directly influence the genesis and racially disparate severity of EOSPE, and may define a mechanistically distinct subclass of the disease.
Preeclampsia; placenta; sFlt-1; Sam68; Nkx2-5; Xbp-1; Ccdc117; racial disparity
To prospectively evaluate in a phantom the effects of reconstruction kernel, field of view (FOV), and section thickness on automated measurements of pulmonary nodule volume.
Materials and Methods
Spherical and lobulated pulmonary nodules 3–15 mm in diameter were placed in a commercially available lung phantom and scanned by using a 16-section computed tomographic (CT) scanner. Nodule volume (V) was determined by using the diameters of 27 spherical nodules and the mass and density values of 29 lobulated nodules measured by using the formulas V = (4/3)πr³ (spherical nodules) and V = 1000 × (M/D) (lobulated nodules) as reference standards, where r is the nodule radius; M, the nodule mass; and D, the wax density. Experiments were performed to evaluate seven reconstruction kernels and the independent effects of FOV and section thickness. Automated nodule volume measurements were performed by using computer-assisted volume measurement software. General linear regression models were used to examine the independent effects of each parameter, with percentage overestimation of volume as the dependent variable of interest.
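The two reference-standard formulas and the dependent variable of the regression models can be sketched as follows; function names and unit conventions are illustrative assumptions:

```python
import math

def sphere_volume_mm3(diameter_mm: float) -> float:
    """Reference volume of a spherical nodule from its diameter: V = (4/3)*pi*r^3."""
    r = diameter_mm / 2.0
    return (4.0 / 3.0) * math.pi * r ** 3

def lobulated_volume_mm3(mass_g: float, wax_density_g_per_cm3: float) -> float:
    """Reference volume of a lobulated nodule from mass and wax density:
    V = 1000 * (M / D), where the factor 1000 converts cm^3 to mm^3."""
    return 1000.0 * (mass_g / wax_density_g_per_cm3)

def percent_overestimation(measured_mm3: float, reference_mm3: float) -> float:
    """Software-measured volume relative to the reference standard, in percent."""
    return 100.0 * (measured_mm3 - reference_mm3) / reference_mm3
```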
There was no substantial difference in the accuracy of volume estimations across the seven reconstruction kernels. The bone reconstruction kernel was deemed optimal on the basis of a series of statistical analyses and other qualitative findings. Overall, volume accuracy was significantly associated (P < .0001) with larger reference standard–measured nodule diameter. There was substantial overestimation of the volumes of the 3–5-mm nodules measured by using the volume measurement software. Decreasing the FOV did not significantly improve the precision of lobulated nodule volume measurements. The accuracy of volume estimations, particularly those for small nodules, was significantly (P < .0001) affected by section thickness.
Substantial, highly variable overestimation of volume occurs with decreasing nodule diameter. A section thickness that enables the acquisition of at least three measurements along the z-axis should be used to measure the volumes of larger pulmonary nodules.
To assess the accuracy of dual-energy CT (DECT) for the quantification of iodine concentrations in a thoracic phantom across various cardiac DECT protocols and simulated patient sizes.
Materials and methods
Experiments were performed on first- and second-generation dual-source CT (DSCT) systems in DECT mode using various cardiac DECT protocols. An anthropomorphic thoracic phantom was equipped with tubular inserts containing known iodine concentrations (0–20 mg/mL) in the cardiac chamber and up to two fat-equivalent rings to simulate different patient sizes. DECT-derived iodine concentrations were measured using dedicated software and compared to true concentrations. General linear regression models were used to identify predictors of measurement accuracy.
Correlation between measured and true iodine concentrations (n = 72) across CT systems and protocols was excellent (R = 0.994–0.997, P < 0.0001). Mean measurement errors were 3.0 ± 7.0% and −2.9 ± 3.8% for first- and second-generation DSCT, respectively. This error increased with simulated patient size. The second-generation DSCT showed the most stable measurements across a wide range of iodine concentrations and simulated patient sizes.
Overall, DECT provides accurate measurements of iodine concentrations across cardiac CT protocols, strengthening the case for DECT-derived blood volume estimates as a surrogate of myocardial blood supply.
dual-energy CT; dual-source CT; cardiac CT; iodine; quantification
Colorectal cancer (CRC) screening is recommended for all adults 50-75 years old, yet only slightly more than one-half of eligible people are current with screening. Since CRC screening is usually initiated upon recommendations of primary care physicians, interventions in these settings are needed to improve screening.
To assess the impact of a quality improvement (QI) intervention combining electronic medical record (EMR) based audit and feedback, practice site visits for academic detailing and participatory planning, and “best-practice” dissemination on CRC screening in primary care practice.
Two year group-randomized trial.
Physicians, mid-level providers and clinical staff members in 32 primary care practices in 19 States caring for 68,150 patients 50 years of age or older.
Proportion of active patients up to date (UTD) with CRC screening (colonoscopy within 10 years, sigmoidoscopy within 5 years, or at home fecal occult blood testing within 1 year) and having screening recommended within past year among those not UTD.
Patients 50-75 years in intervention practices exhibited significantly greater improvement (from 60.7% to 71.2%) in being UTD with CRC screening than patients in control practices (from 57.7% to 62.8%), the adjusted difference being 4.9% (95% CI: 3.8% to 6.1%). Recommendations for screening also increased more in intervention practices with the adjusted difference being 7.9% (95%CI: 6.3% to 9.5%). There was wide inter-practice variation in CRC screening throughout the intervention.
A multi-component QI intervention in practices that use EMR can improve colorectal cancer screening.
Colorectal Cancer Screening; EMR; Quality Improvement
The sign test is a well-known nonparametric approach for testing whether one of two conditions is preferable to another. In medicine, this method may be used when one is interested in testing in the context of a clinical trial whether either of two treatments that are provided to study subjects is favored over the other. When neither treatment outperforms the other within a given individual, a “tie” is said to have occurred. When planning such a trial and estimating statistical power and/or sample size, one should consider the probability of a tie occurring (PT). This paper quantifies the degree to which uncertainty in PT affects a study’s statistical power.
Binomial theory was used to calculate power given varying levels of uncertainty and varying distributional forms (i.e. beta, uniform) for PT.
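A minimal sketch of the binomial power calculation, assuming ties are discarded before applying a two-sided exact sign test (function and parameter names are illustrative; the paper's exact computational details are not given here):

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Binomial probability mass function."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def sign_test_power(n: int, p_prefer: float, p_tie: float, alpha: float = 0.05) -> float:
    """Approximate power of the two-sided exact sign test when ties are discarded.

    n        -- number of enrolled pairs
    p_prefer -- P(one treatment preferred | no tie); 0.5 corresponds to the null
    p_tie    -- assumed probability of a tie (PT)
    """
    total = 0.0
    # The number of untied pairs m follows Binomial(n, 1 - p_tie).
    for m in range(1, n + 1):
        pm = binom_pmf(m, n, 1 - p_tie)
        # Exact two-sided rejection region for Binomial(m, 0.5): reject when the
        # doubled smaller tail probability is at most alpha.
        crit = [k for k in range(m + 1)
                if 2 * min(sum(binom_pmf(j, m, 0.5) for j in range(k + 1)),
                           sum(binom_pmf(j, m, 0.5) for j in range(k, m + 1))) <= alpha]
        total += pm * sum(binom_pmf(k, m, p_prefer) for k in crit)
    return total
```

Averaging the conditional power over the distribution of untied pairs is what makes the result sensitive to PT: a larger tie probability shrinks the effective sample size and so erodes power.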
Across a range of prior distributions for PT, power was reduced (i.e. <80%) for 46 (71.9%) of 64 experimental conditions, with large reductions (i.e. power <70%) for 10 (15.6%) of them.
When designing a clinical trial that will incorporate the sign test to compare two conditions, ignoring potential variation in the probability of a tie occurring will tend to result in an underpowered study. These findings have implications for the design of any clinical trial in which assumptions are made when calculating an appropriate sample size.
statistical power; sign test; non-parametric statistics; uncertainty; binomial distribution; sample size estimation
Antibiotics are often inappropriately prescribed for acute respiratory infections (ARIs).
To assess the impact of a clinical decision support system (CDSS) on antibiotic prescribing for ARIs.
A two-phase, 27-month demonstration project.
Nine primary care practices in PPRNet, a practice-based research network whose members use a common electronic health record (EHR).
Thirty-nine providers were included in the project.
A CDSS was designed as an EHR progress note template. To facilitate CDSS implementation, each practice participated in two to three site visits, sent representatives to two project meetings, and received quarterly performance reports on antibiotic prescribing for ARIs.
MAIN OUTCOME MEASURES
1) Use of antibiotics for inappropriate indications. 2) Use of broad spectrum antibiotics when inappropriate. 3) Use of antibiotics for sinusitis and bronchitis.
The CDSS was used 38,592 times during the 27-month intervention, and its use was sustained for the study duration. Use of antibiotics at encounters with diagnoses for which antibiotics are rarely appropriate did not change significantly over the course of the study (estimated 27-month change, 1.57% [95% CI, −5.35% to 8.49%] in adults and −1.89% [95% CI, −9.03% to 5.26%] in children). However, use of broad-spectrum antibiotics for ARI encounters improved significantly (estimated 27-month change, −16.30% [95% CI, −24.81% to −7.79%] in adults and −16.30% [95% CI, −23.29% to −9.31%] in children). Prescribing for bronchitis did not change significantly, but use of broad-spectrum antibiotics for sinusitis declined.
This multi-method intervention appears to have had a sustained impact on reducing the use of broad-spectrum antibiotics for ARIs. This intervention shows promise for promoting judicious antibiotic use in primary care.
acute respiratory infections; antibiotic prescribing; electronic health records; clinical decision support
With advances in technology, detection of small pulmonary nodules is increasing. Nodule detection software (NDS) has been developed to assist radiologists with pulmonary nodule diagnosis. Although NDS may increase sensitivity for small nodules, this often comes with an accompanying increase in false-positive findings. We designed a study to examine the extent to which computed tomography (CT) NDS influences the confidence of radiologists in identifying small pulmonary nodules.
Materials and Methods
Eight radiologists (readers) with different levels of experience examined thoracic CT scans of 131 cases and identified all the clinically relevant pulmonary nodules. The reference standard was established by an expert, dedicated thoracic radiologist. For each nodule, the readers recorded nodule size, density, location, and confidence level. Two weeks (or more) later, the readers reinterpreted the same scans; this time, however, they were shown any marks generated by the NDS and asked to reassess their level of confidence. The effect of NDS on changes in reader confidence was assessed using multivariable generalized linear regression models.
A total of 327 unique nodules were identified. Declines in confidence were significantly (P<0.05) associated with the absence of an NDS mark and smaller nodules (odds ratio = 71.0, 95% confidence interval = 14.8–339.7). Among nodules with pre-NDS confidence less than 100%, increases in confidence were significantly (P<0.05) associated with the presence of an NDS mark (odds ratio = 6.0, 95% confidence interval = 2.7–13.6) and larger nodules. Secondary findings showed that NDS did not improve reader diagnostic accuracy.
Although NDS did not appear to enhance reader accuracy in this study, it strongly influenced radiologists' confidence in identifying small pulmonary nodules on CT.
clinical decision making; computed tomography scan; diagnostic imaging; lung neoplasm; diagnostic errors
Biostatistics—the application of statistics to understanding health and biology—provides powerful tools for developing research questions, designing studies, refining measurements, analyzing data, and interpreting findings. Biostatistics plays an important role in health-related research, yet biostatistics resources are often fragmented, ad hoc, or oversubscribed within academic health centers (AHCs). Given the increasing complexity and quantity of health-related data, the emphasis on accelerating clinical and translational science, and the importance of conducting reproducible research, the need for the thoughtful development of biostatistics resources within AHCs is growing.
In this article, the authors identify strategies for developing biostatistics resources in three areas: (1) recruiting and retaining biostatisticians; (2) efficiently using biostatistics resources; and (3) improving biostatistical contributions to science. AHCs should consider these three domains in building strong biostatistics resources, which they can leverage to support a broad spectrum of research. For each of the three domains, the authors describe the advantages and disadvantages of AHCs creating centralized biostatistics units rather than dispersing such resources across clinical departments or other research units. They also address the challenges biostatisticians face in contributing to research without sacrificing their individual professional growth or the trajectory of their research team. The authors ultimately recommend that AHCs create centralized biostatistics units, as this approach offers distinct advantages both to investigators who collaborate with biostatisticians and to the biostatisticians themselves, and it is better suited to accomplish the research and education missions of AHCs.
To assess the effect of a clinical decision support system (CDSS) integrated into an electronic health record (EHR) on antibiotic prescribing for acute respiratory infections (ARIs) in primary care.
Materials and methods
Quasi-experimental design with nine intervention practices and 61 control practices in the Practice Partner Research Network, a network of practices that all use the same EHR (Practice Partner). The nine intervention practices were located in nine US states. The design included a 3-month baseline data collection period (October through December 2009) before the introduction of the intervention and 15 months of follow-up (January 2010 through March 2011). The main outcome measures were the prescribing of antibiotics in ARI episodes for which antibiotics are inappropriate and the prescribing of broad-spectrum antibiotics in all ARI episodes.
In adult patients, prescribing of antibiotics in ARI episodes for which antibiotics are inappropriate declined (−0.6%) among intervention practices while increasing (+4.2%) in control practices (p=0.03). Among adults, the CDSS intervention also improved prescribing of broad-spectrum antibiotics, with a decline of 16.6% among intervention practices versus an increase of 1.1% in control practices (p<0.0001). A similar effect on broad-spectrum antibiotic prescribing was found in pediatric patients, with a decline of 19.7% among intervention practices versus an increase of 0.9% in control practices (p<0.0001).
A CDSS embedded in an EHR had a modest effect in changing prescribing for adults where antibiotics were inappropriate but had a substantial impact on changing the overall prescribing of broad-spectrum antibiotics among pediatric and adult patients.
Respiratory infections; primary care
Purpose: In the effort to reduce radiation exposure to patients undergoing myocardial perfusion imaging (MPI) with SPECT/CT, we evaluated the feasibility of a single CT for attenuation correction (AC) of single-day rest (R)/stress (S) perfusion. Methods: Processing of 20 single-isotope and 20 dual-isotope MPI studies with perfusion defects was retrospectively repeated in three steps: (1) the standard method, using a concurrent R-CT for AC of R-SPECT and an S-CT for S-SPECT; (2) the standard method repeated; and (3) the R-CT used for AC of S-SPECT, and the S-CT used for AC of R-SPECT. Intraclass correlation coefficients (ICCs) and Cohen's kappa were used to measure intra-operator variability in sum scoring. Results: The highest level of intra-operator reliability was seen with the reproduction of the sum rest score (SRS) and sum stress score (SSS) (ICC > 95%). ICCs were > 85% for SRS and SSS when alternate CTs were used for AC, but when sum difference scores were calculated, ICC values were much lower (~22% to 27%), implying that neither CT substitution resulted in a reproducible difference score. Similar results were seen when evaluating dichotomous outcomes (sum score difference ≥ 4) across the different processing techniques (kappas ~0.32 to 0.43). Conclusions: When a single CT is used for AC of both rest and stress SPECT, there is disproportionately high variability in sum scoring that is independent of user error. This information can be used to direct further investigation into radiation reduction for common nuclear medicine imaging examinations.
Tomography, emission-computed, single-photon; myocardial perfusion imaging; reproducibility of results; tomography, X-ray computed; radiation dosage
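For readers unfamiliar with the agreement statistic used above, Cohen's kappa for two sets of paired categorical ratings can be computed directly. This is a generic textbook sketch with hypothetical ratings, not the study's data:

```python
def cohens_kappa(x, y):
    """Cohen's kappa for two paired categorical ratings (generic sketch)."""
    assert len(x) == len(y) and len(x) > 0
    n = len(x)
    cats = set(x) | set(y)
    # Observed agreement: fraction of pairs on which the two ratings agree.
    po = sum(a == b for a, b in zip(x, y)) / n
    # Chance agreement: sum over categories of the product of marginal rates.
    pe = sum((x.count(c) / n) * (y.count(c) / n) for c in cats)
    if pe == 1.0:           # both ratings constant and identical
        return 1.0
    return (po - pe) / (1 - pe)

# Hypothetical dichotomous ratings (e.g. sum-score difference >= 4: 1/0).
rater1 = [1, 0, 1, 1, 0, 0, 1, 0]
rater2 = [1, 0, 0, 1, 0, 1, 1, 0]
kappa = cohens_kappa(rater1, rater2)
```

Kappa discounts the agreement expected by chance, which is why values near 0.32–0.43, as reported above, indicate only fair-to-moderate agreement despite a possibly high raw agreement rate.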
When designing cluster randomized trials, it is important for researchers to be familiar with strategies to achieve valid study designs given limited resources. Constrained randomization is a technique to help ensure balance on pre-specified baseline covariates.
The goal was to develop a randomization scheme that balanced 16 intervention and 16 control practices with respect to 7 factors that may influence improvement in study outcomes during a 4-year cluster randomized trial to improve colorectal cancer screening within a primary care practice-based research network. We used a novel approach: simulating 30,000 randomization schemes, removing duplicates, identifying which schemes were sufficiently balanced, and randomly selecting one scheme for use in the trial. For a given factor, balance was considered achieved when the frequency of each of its sub-classifications differed by no more than 1 between the intervention and control groups. The study population comprises 32 primary care practices located in 19 US states that care for approximately 56,000 patients aged 50 years or older.
Of 29,782 unique simulated randomization schemes, 116 were determined to be balanced according to pre-specified criteria for all 7 baseline covariates. The final randomization scheme was randomly selected from these 116 acceptable schemes.
Using this technique, we successfully found a randomization scheme that allocated 32 primary care practices into intervention and control groups while preserving balance across 7 baseline covariates. This process may be a useful tool for ensuring covariate balance in moderately large cluster randomized trials.
Randomization techniques; cluster randomized trials; covariate balance; study design; practice-based research networks; colorectal cancer screening
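The constrained-randomization procedure described above (simulate many candidate schemes, deduplicate, keep those balanced on all factors, then randomly select one) can be sketched as follows. The practice data and factor names here are hypothetical stand-ins for the study's 7 covariates:

```python
import random

def is_balanced(scheme, practices, factors, tol=1):
    """A scheme (set of intervention indices) is balanced when, for every
    level of every factor, intervention and control counts differ by <= tol."""
    for f in factors:
        for lev in {p[f] for p in practices}:
            n_int = sum(1 for i, p in enumerate(practices)
                        if i in scheme and p[f] == lev)
            n_ctl = sum(1 for i, p in enumerate(practices)
                        if i not in scheme and p[f] == lev)
            if abs(n_int - n_ctl) > tol:
                return False
    return True

def constrained_randomization(practices, factors, n_sim=10000, rng=None):
    """Simulate n_sim half/half allocations, deduplicate, keep balanced
    schemes, and randomly select one.  Returns the chosen scheme and the
    number of balanced schemes found."""
    rng = rng or random.Random(0)
    n, half = len(practices), len(practices) // 2
    seen, balanced = set(), []
    for _ in range(n_sim):
        scheme = frozenset(rng.sample(range(n), half))
        if scheme in seen:
            continue                      # remove duplicate schemes
        seen.add(scheme)
        if is_balanced(scheme, practices, factors):
            balanced.append(scheme)
    if not balanced:
        raise RuntimeError("no balanced scheme found; simulate more or relax criteria")
    return rng.choice(balanced), len(balanced)

# Hypothetical data: 32 practices with two illustrative baseline factors.
gen = random.Random(42)
practices = [{"region": gen.choice(["NE", "S", "MW", "W"]),
              "size": gen.choice(["small", "large"])} for _ in range(32)]
scheme, n_ok = constrained_randomization(practices, ["region", "size"],
                                         rng=random.Random(1))
```

Selecting the final allocation at random from the balanced subset, rather than hand-picking it, preserves the validity of randomization-based inference while guaranteeing the pre-specified covariate balance.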