Background The extent to which adult height, a biomarker of the interplay of genetic endowment and early-life experiences, is related to risk of chronic diseases in adulthood is uncertain.
Methods We calculated hazard ratios (HRs) for height, assessed in increments of 6.5 cm, using individual-participant data on 174 374 deaths or major non-fatal vascular outcomes recorded among 1 085 949 people in 121 prospective studies.
Results For people born between 1900 and 1960, mean adult height increased 0.5–1 cm with each successive decade of birth. After adjustment for age, sex, smoking and year of birth, HRs per 6.5 cm greater height were 0.97 (95% confidence interval: 0.96–0.99) for death from any cause, 0.94 (0.93–0.96) for death from vascular causes, 1.04 (1.03–1.06) for death from cancer and 0.92 (0.90–0.94) for death from other causes. Height was negatively associated with death from coronary disease, stroke subtypes, heart failure, stomach and oral cancers, chronic obstructive pulmonary disease, mental disorders, liver disease and external causes. In contrast, height was positively associated with death from ruptured aortic aneurysm, pulmonary embolism, melanoma and cancers of the pancreas, endocrine and nervous systems, ovary, breast, prostate, colorectum, blood and lung. HRs per 6.5 cm greater height ranged from 1.26 (1.12–1.42) for risk of melanoma death to 0.84 (0.80–0.89) for risk of death from chronic obstructive pulmonary disease. HRs were not appreciably altered after further adjustment for adiposity, blood pressure, lipids, inflammation biomarkers, diabetes mellitus, alcohol consumption or socio-economic indicators.
Conclusion Adult height has directionally opposing relationships with risk of death from several different major causes of chronic diseases.
Height; cardiovascular disease; cancer; cause-specific mortality; epidemiological study; meta-analysis
Raynaud phenomenon (RP) is a temporary vasoconstrictive condition that often manifests itself in the fingers in response to cold or stress. It often co-occurs with certain chronic diseases that impact mortality. Our objective was to determine whether RP has any independent association with survival.
From 1987–1989, a total of 830 participants of the Charleston Heart Study cohort completed an in-person RP screening questionnaire. Two definitions of RP were used: a broad definition that included both blanching (pallor) and cyanotic color changes and a narrow definition that included only blanching. All-cause and cardiovascular disease (CVD) mortality were compared between subjects with and without RP using race-specific survival models that adjusted for age, sex, baseline CVD, and 10-year risk of coronary heart disease.
Using the narrow RP definition, we identified a significant interaction between older age and the presence of RP on all-cause mortality. In the broad RP definition model, the presence of RP was not associated with CVD mortality among blacks; however, among whites, the presence of RP was associated with a 1.6-fold increase in the hazard associated with CVD-related death (hazard ratio: 1.55, 95% confidence interval: 1.10–2.20, P=0.013).
RP was independently associated with mortality among older adults in our cohort. Among whites, RP was associated with increased CVD-related death. RP may be a sign of undiagnosed vascular disease.
Raynaud disease; cohort studies; cardiovascular diseases; survival analysis
Advanced Cardiac Life Support (ACLS) algorithms are the default standard of care for in-hospital cardiac arrest (IHCA) management. However, adherence to published guidelines is relatively poor. The records of 149 patients who experienced IHCA were examined to begin to understand the association between overall adherence to ACLS protocols and successful return of spontaneous circulation (ROSC).
A retrospective chart review of medical records and code team worksheets was conducted for 75 patients who had ROSC after an IHCA event (SE group) and 74 who did not survive an IHCA event (DNS group). Protocol adherence was assessed using a detailed checklist based on the 2005 ACLS Update protocols. Several additional patient characteristics and circumstances were also examined as potential predictors of ROSC.
In unadjusted analyses, the percentage of correct steps performed was positively correlated with ROSC from an IHCA (p < 0.01), and the numbers of errors of commission and omission were both negatively correlated with ROSC from an IHCA (p < 0.01). In multivariable models, the percentage of correct steps performed and the numbers of errors of commission and omission remained significantly predictive of ROSC (p < 0.01 and p < 0.0001, respectively), even after accounting for confounders such as differences in age and location of the IHCAs.
Our results show that adherence to ACLS protocols throughout an event is correlated with increased ROSC in the setting of cardiac arrest. Furthermore, the results suggest that, in addition to correct actions, both wrong actions and omissions of indicated actions lead to decreased ROSC after IHCA.
The objective of this study was to determine whether a composite outcome, derived from objective signs of inadequate cardiac output, would be associated with other important outcome measures and therefore be an appropriate end point for clinical trials in neonatal cardiac surgery.
Neonates (n = 76) undergoing cardiac operations requiring cardiopulmonary bypass were prospectively enrolled. Patients were defined to have met the composite outcome if they had any of the following events before hospital discharge: death, the use of mechanical circulatory support, cardiac arrest requiring chest compressions, hepatic injury (2 times the upper limit of normal for aspartate aminotransferase or alanine aminotransferase), renal injury (creatinine >1.5 mg/dL), or lactic acidosis (an increasing lactate >5 mmol/L in the postoperative period). Associations between the composite outcome and the duration of mechanical ventilation, intensive care unit stay, hospital stay, and total hospital charges were determined.
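The component definitions above amount to a simple decision rule. The sketch below is illustrative only: the AST/ALT upper limit of normal is assay-specific (the value used here is an assumption), and all names are hypothetical rather than the study's actual data dictionary.

```python
# Illustrative encoding of the composite outcome; thresholds follow the
# abstract, but AST_ALT_ULN is an assumed, site-specific value.
AST_ALT_ULN = 40.0  # U/L (assumption; not stated in the abstract)

def met_composite_outcome(death, mech_support, cardiac_arrest,
                          ast, alt, creatinine, peak_postop_lactate):
    """True if any component occurred before hospital discharge."""
    hepatic_injury = ast > 2 * AST_ALT_ULN or alt > 2 * AST_ALT_ULN
    renal_injury = creatinine > 1.5            # mg/dL
    lactic_acidosis = peak_postop_lactate > 5  # mmol/L, rising postoperatively
    return any([death, mech_support, cardiac_arrest,
                hepatic_injury, renal_injury, lactic_acidosis])
```

Because the composite is an "any component" rule, a patient meets it as soon as a single threshold is crossed, which is what makes it usable as a binary trial end point.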
The median age at the time of surgery was 7 days, and the median weight was 3.2 kg. The composite outcome was met in 39% of patients (30/76). Patients who met the composite outcome compared with those who did not had a longer duration of mechanical ventilation (4.9 vs 2.9 days, P<.01), intensive care unit stay (8.8 vs 5.7 days, P<.01), hospital stay (23 vs 12 days, P<.01), and increased hospital charges ($258,000 vs $170,000, P<.01). In linear regression analysis, controlling for surgical complexity, these differences remained significant (R2 = 0.29–0.42, P<.01).
The composite outcome is highly associated with important early operative outcomes and may serve as a useful end point for future clinical research in neonates undergoing cardiac operations.
Adherence to Advanced Cardiac Life Support (ACLS) guidelines during in-hospital cardiac arrest (IHCA) is associated with improved outcomes, but current evidence shows that sub-optimal care is common. Successful execution of such protocols during IHCA requires rapid patient assessment and the performance of a number of ordered, time-sensitive interventions. Accordingly, we sought to determine whether the use of an electronic decision support tool (DST) improves performance during high-fidelity simulations of IHCA.
After IRB approval and written informed consent were obtained, 47 senior medical students were enrolled. All participants were ACLS certified and within one month of graduation. Each participant was issued an iPod Touch device with a DST installed that contained all ACLS management algorithms. Participants managed two scenarios of IHCA and were allowed to use the DST in one scenario and prohibited from using it in the other. All participants managed the same scenarios. Simulation sessions were video recorded and graded by trained raters according to previously validated checklists.
Performance of correct protocol steps was significantly greater with the DST than without (84.7% vs 73.8%, p < 0.001), and participants committed significantly fewer additional errors when using the DST (2.5 vs 3.8 errors, p < 0.012).
Use of an electronic DST provided a significant improvement in the management of simulated IHCA by senior medical students as measured by adherence to published guidelines.
We investigate whether the distributions to the states from the Tobacco Master Settlement Agreement (MSA) in 1998 are associated with stronger tobacco control efforts. We use state-level data from 50 states and the District of Columbia from four time periods post MSA (1999, 2002, 2004, and 2006) for the analysis. Using fixed-effect regression models, we estimate the relationship between MSA disbursements and a new aggregate measure of the strength of state tobacco control, the Strength of Tobacco Control (SoTC) Index. Results show that an increase of $1 in the annual per capita MSA disbursement to a state is associated with a decrease of 0.316 in the mean SoTC value, indicating that higher MSA payments were associated with weaker tobacco control measures within states. To achieve the initial objectives of the MSA payments, policy makers should focus on using MSA payments strictly for tobacco control activities across states.
Overconsumption of alcohol is well known to lead to numerous health and social problems. Prevalence studies of United States adults have found that 20% of patients meet criteria for an alcohol use disorder. Routine screening for alcohol use is recommended in primary care settings, yet little is known about the organizational factors related to successful implementation of screening, brief intervention (SBI) and treatment in these settings. The purpose of this study was to evaluate organizational attributes in primary care practices that participated in a practice-based research network trial to implement alcohol SBI. The Survey of Organizational Attributes in Primary Care (SOAPC) has reliably measured four factors: communication, decision-making, stress/chaos and history of change. This 21-item instrument was administered to 178 practice members at the baseline of the trial to evaluate the relationship of organizational attributes to implementation of alcohol SBI and treatment. No significant relationships were found between the factors measured by the SOAPC instrument and alcohol screening, identification of high-risk drinkers, or brief intervention. These results highlight the challenges of using organizational survey instruments to explain or predict variations in clinical improvement. Comprehensive mixed-methods approaches may be more effective in evaluations of SBI and treatment implementation.
alcohol screening; high-risk drinkers; primary care; organizational attributes
Quality chest compressions (CC) are the most important factor in successful cardiopulmonary resuscitation. Adjustment of CC based upon an invasive arterial blood pressure (ABP) display would be theoretically beneficial. Additionally, having one compressor present for longer than a 2-min cycle with an ABP display may allow for a learning process to further maximize CC. Accordingly, we tested the hypothesis that CC can be improved with a real-time display of invasively measured blood pressure and with an unchanged, physically fit compressor.
A manikin was attached to an ABP display derived from a hemodynamic model responding to parameters of CC rate, depth, and compression-decompression ratio. The area under the blood pressure curve over time (AUC) was used for data analysis. Each participant (N = 20) performed 4 CPR sessions: (1) No ABP display, exchange of compressor every 2 min; (2) ABP display, exchange of compressor every 2 min; (3) no ABP display, no exchange of the compressor; (4) ABP display, no exchange of the compressor. Data were analyzed by ANOVA. Significance was set at a p-value < 0.05.
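The AUC metric described above can be sketched as trapezoidal integration of a pressure-time curve. The waveform below is synthetic and purely illustrative; the study's display was driven by a hemodynamic model responding to compression rate, depth, and duty cycle, not this formula.

```python
import math

def pressure_auc(t, p):
    """Trapezoidal area under the pressure-time curve, in mmHg s."""
    return sum((t2 - t1) * (p1 + p2) / 2
               for t1, t2, p1, p2 in zip(t, t[1:], p, p[1:]))

# Synthetic 2-min trace at ~100 compressions/min over an assumed 40 mmHg
# baseline (invented numbers for illustration only).
t = [i * 0.01 for i in range(12001)]
p = [40 + 25 * max(math.sin(2 * math.pi * (100 / 60) * ti), 0) for ti in t]
```

Because the metric accumulates pressure over the whole cycle, both shallower compressions and pauses reduce it, which is why it can capture fatigue-related degradation in compression quality.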
The average AUC for cycles without ABP display was 5201 mmHg s (95% confidence interval (CI): 4804–5597 mmHg s) and for cycles with ABP display was 6110 mmHg s (95% CI: 5715–6507 mmHg s) (p < 0.0001). The average AUC increase with ABP display for each participant was 20.2 ± 17.4% (p < 0.0001).
Our study confirms the hypothesis that a real-time display of simulated ABP during CPR that responds to participant performance improves achieved and sustained ABP. However, without any real-time visual feedback, even fit compressors demonstrated degradation of CC quality.
Cardiopulmonary resuscitation; Chest compressions; Simulation; Arterial blood pressure
Preeclampsia (PE) affects 2–8% of pregnancies worldwide and is a significant source of maternal and neonatal morbidity and mortality. However, the mechanisms underlying PE are poorly understood and major questions regarding etiology and risk factors remain to be addressed. Our objective was to examine whether abnormal expression of the cardiovascular developmental transcription factor, Nkx2-5, was associated with early onset and severe pre-eclampsia (EOSPE).
Using qPCR and immunohistochemical assay, we examined expression of Nkx2-5 and target gene expression in EOSPE and control placental tissue. We tested resulting mechanistic hypotheses in cultured cells using shRNA knockdown, qPCR and western blot.
Nkx2-5 is highly expressed, in a racially disparate fashion (Caucasians > African Americans), in a subset of EOSPE placentae. Nkx2-5 mRNA expression is highly correlated (Caucasians > African Americans) with mRNA expression of the preeclampsia marker sFlt-1 and of the Nkx2-5 target and RNA splicing factor Sam68. Knockdown of Sam68 expression in cultured cells significantly impacts sFlt-1 mRNA isoform generation in vitro, supporting a mechanistic hypothesis that Nkx2-5 impacts EOSPE severity in a subset of patients via upregulation of Sam68 to increase sFlt-1 expression. Expression of additional Nkx2-5 targets potentially regulating the metabolic stress response is also elevated in racially disparate fashion in EOSPE.
Expression of Nkx2-5 and its target genes may directly influence the genesis and racially disparate severity of EOSPE and may define a mechanistically distinct subclass of the disease.
Preeclampsia; placenta; sFlt-1; Sam68; Nkx2-5; Xbp-1; Ccdc117; racial disparity
To prospectively evaluate in a phantom the effects of reconstruction kernel, field of view (FOV), and section thickness on automated measurements of pulmonary nodule volume.
Materials and Methods
Spherical and lobulated pulmonary nodules 3–15 mm in diameter were placed in a commercially available lung phantom and scanned by using a 16-section computed tomographic (CT) scanner. Nodule volume (V) was determined by using the diameters of 27 spherical nodules and the mass and density values of 29 lobulated nodules measured by using the formulas V = (4/3)πr³ (spherical nodules) and V = 1000 × (M/D) (lobulated nodules) as reference standards, where r is nodule radius; M, nodule mass; and D, wax density. Experiments were performed to evaluate seven reconstruction kernels and the independent effects of FOV and section thickness. Automated nodule volume measurements were performed by using computer-assisted volume measurement software. General linear regression models were used to examine the independent effects of each parameter, with percentage overestimation of volume as the dependent variable of interest.
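The two reference-standard formulas translate directly into code. This is a minimal sketch; the test values are invented examples, not the phantom's actual measurements.

```python
import math

def sphere_volume(diameter_mm):
    """V = (4/3) * pi * r^3: reference volume of a spherical nodule (mm^3)."""
    r = diameter_mm / 2.0
    return (4.0 / 3.0) * math.pi * r ** 3

def lobulated_volume(mass_g, wax_density_g_per_cm3):
    """V = 1000 * (M / D): mass over wax density, scaled from cm^3 to mm^3."""
    return 1000.0 * mass_g / wax_density_g_per_cm3
```

Percentage overestimation, the dependent variable in the regression models, is then 100 × (software volume − reference volume) / reference volume.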
There was no substantial difference in the accuracy of volume estimations across the seven reconstruction kernels. The bone reconstruction kernel was deemed optimal on the basis of a series of statistical analyses and other qualitative findings. Overall, volume accuracy was significantly associated (P < .0001) with larger reference standard–measured nodule diameter. There was substantial overestimation of the volumes of the 3–5-mm nodules measured by using the volume measurement software. Decreasing the FOV did not significantly improve the precision of lobulated nodule volume measurements. The accuracy of volume estimations, particularly those for small nodules, was significantly (P < .0001) affected by section thickness.
Substantial, highly variable overestimation of volume occurs with decreasing nodule diameter. A section thickness that enables the acquisition of at least three measurements along the z-axis should be used to measure the volumes of larger pulmonary nodules.
To assess the accuracy of dual-energy CT (DECT) for the quantification of iodine concentrations in a thoracic phantom across various cardiac DECT protocols and simulated patient sizes.
Materials and methods
Experiments were performed on first- and second-generation dual-source CT (DSCT) systems in DECT mode using various cardiac DECT protocols. An anthropomorphic thoracic phantom was equipped with tubular inserts containing known iodine concentrations (0–20 mg/mL) in the cardiac chamber and up to two fat-equivalent rings to simulate different patient sizes. DECT-derived iodine concentrations were measured using dedicated software and compared to true concentrations. General linear regression models were used to identify predictors of measurement accuracy.
Correlation between measured and true iodine concentrations (n = 72) across CT systems and protocols was excellent (R = 0.994–0.997, P < 0.0001). Mean measurement errors were 3.0% ± 7.0% and −2.9% ± 3.8% for first- and second-generation DSCT, respectively. This error increased with simulated patient size. The second-generation DSCT showed the most stable measurements across a wide range of iodine concentrations and simulated patient sizes.
Overall, DECT provides accurate measurements of iodine concentrations across cardiac CT protocols, strengthening the case for DECT-derived blood volume estimates as a surrogate of myocardial blood supply.
dual-energy CT; dual-source CT; cardiac CT; iodine; quantification
Novel statistical methods are constantly being developed within the context of biomedical research; however, the characteristics of biostatistical methods that have been adopted into the field of general/internal medicine (GIM) are unclear. This study highlights the statistical journal articles, the statistical journals, and the types of statistical methods that appear to be having the most direct impact on GIM research.
Descriptive techniques, including analyses of articles’ keywords and controlled vocabulary terms, were used to characterize the articles published in statistics and probability journals that were subsequently referenced within GIM journal articles during a recent 10-year period (2000–2009).
From the 45 statistics and probability journals of interest, a total of 989 unique articles were identified as being cited by 2,183 (out of a total of about 127,469) unique GIM journal articles. The most frequently cited statistical topics included general/other statistical methods, followed by randomized trials, epidemiologic methods, meta-analysis, generalized linear models, and computer simulation.
As statisticians continue to develop and refine techniques, the promotion and adoption of these methods should also be addressed so that the effort spent developing them is not in vain.
bibliometrics; biostatistical methods; general/internal medicine; journal impact factor
Colorectal cancer (CRC) screening is recommended for all adults 50-75 years old, yet only slightly more than one-half of eligible people are current with screening. Since CRC screening is usually initiated upon recommendations of primary care physicians, interventions in these settings are needed to improve screening.
To assess the impact of a quality improvement (QI) intervention combining electronic medical record (EMR) based audit and feedback, practice site visits for academic detailing and participatory planning, and “best-practice” dissemination on CRC screening in primary care practice.
Two-year group-randomized trial.
Physicians, mid-level providers and clinical staff members in 32 primary care practices in 19 states caring for 68,150 patients 50 years of age or older.
Proportion of active patients up to date (UTD) with CRC screening (colonoscopy within 10 years, sigmoidoscopy within 5 years, or at home fecal occult blood testing within 1 year) and having screening recommended within past year among those not UTD.
Patients 50-75 years in intervention practices exhibited significantly greater improvement (from 60.7% to 71.2%) in being UTD with CRC screening than patients in control practices (from 57.7% to 62.8%), the adjusted difference being 4.9% (95% CI: 3.8% to 6.1%). Recommendations for screening also increased more in intervention practices with the adjusted difference being 7.9% (95%CI: 6.3% to 9.5%). There was wide inter-practice variation in CRC screening throughout the intervention.
A multi-component QI intervention in practices that use EMR can improve colorectal cancer screening.
Colorectal Cancer Screening; EMR; Quality Improvement
Antibiotics are often inappropriately prescribed for acute respiratory infections (ARIs).
To assess the impact of a clinical decision support system (CDSS) on antibiotic prescribing for ARIs.
A two-phase, 27-month demonstration project.
Nine primary care practices in PPRNet, a practice-based research network whose members use a common electronic health record (EHR).
Thirty-nine providers were included in the project.
A CDSS was designed as an EHR progress note template. To facilitate CDSS implementation, each practice participated in two to three site visits, sent representatives to two project meetings, and received quarterly performance reports on antibiotic prescribing for ARIs.
MAIN OUTCOME MEASURES
1) Use of antibiotics for inappropriate indications. 2) Use of broad spectrum antibiotics when inappropriate. 3) Use of antibiotics for sinusitis and bronchitis.
The CDSS was used 38,592 times during the 27-month intervention; its use was sustained for the study duration. Use of antibiotics for encounters with diagnoses for which antibiotics are rarely appropriate did not change significantly over the course of the study (estimated 27-month change, 1.57% [95% CI, −5.35% to 8.49%] in adults and −1.89% [95% CI, −9.03% to 5.26%] in children). However, use of broad-spectrum antibiotics for ARI encounters improved significantly (estimated 27-month change, −16.30% [95% CI, −24.81% to −7.79%] in adults and −16.30% [95% CI, −23.29% to −9.31%] in children). Prescribing for bronchitis did not change significantly, but use of broad-spectrum antibiotics for sinusitis declined.
This multi-method intervention appears to have had a sustained impact on reducing the use of broad spectrum antibiotics for ARIs. This intervention shows promise for promoting judicious antibiotic use in primary care.
acute respiratory infections; antibiotic prescribing; electronic health records; clinical decision support
Biostatistics—the application of statistics to understanding health and biology—provides powerful tools for developing research questions, designing studies, refining measurements, analyzing data, and interpreting findings. Biostatistics plays an important role in health-related research, yet biostatistics resources are often fragmented, ad hoc, or oversubscribed within academic health centers (AHCs). Given the increasing complexity and quantity of health-related data, the emphasis on accelerating clinical and translational science, and the importance of conducting reproducible research, the need for the thoughtful development of biostatistics resources within AHCs is growing.
In this article, the authors identify strategies for developing biostatistics resources in three areas: (1) recruiting and retaining biostatisticians; (2) efficiently using biostatistics resources; and (3) improving biostatistical contributions to science. AHCs should consider these three domains in building strong biostatistics resources, which they can leverage to support a broad spectrum of research. For each of the three domains, the authors describe the advantages and disadvantages of AHCs creating centralized biostatistics units rather than dispersing such resources across clinical departments or other research units. They also address the challenges biostatisticians face in contributing to research without sacrificing their individual professional growth or the trajectory of their research team. The authors ultimately recommend that AHCs create centralized biostatistics units, as this approach offers distinct advantages both to investigators who collaborate with biostatisticians and to the biostatisticians themselves, and it is better suited to accomplish the research and education missions of AHCs.
To assess the effect of a clinical decision support system (CDSS) integrated into an electronic health record (EHR) on antibiotic prescribing for acute respiratory infections (ARIs) in primary care.
Materials and methods
Quasi-experimental design with nine intervention practices and 61 control practices in the Practice Partner Research Network, a network of practices that all use the same EHR (Practice Partner). The nine intervention practices were located in nine US states. The design included a 3-month baseline data collection period (October through December 2009) before the introduction of the intervention and 15 months of follow-up (January 2010 through March 2011). The main outcome measures were the prescribing of antibiotics in ARI episodes for which antibiotics are inappropriate and prescribing of broad-spectrum antibiotics in all ARI episodes.
In adult patients, prescribing of antibiotics in ARI episodes for which antibiotics are inappropriate declined more (−0.6%) among intervention practices than in control practices (+4.2%) (p = 0.03). In addition, among adults, the CDSS intervention improved prescribing of broad-spectrum antibiotics, with a decline of 16.6% among intervention practices versus an increase of 1.1% in control practices (p < 0.0001). A similar effect on broad-spectrum antibiotic prescribing was found in pediatric patients, with a decline of 19.7% among intervention practices versus an increase of 0.9% in control practices (p < 0.0001).
A CDSS embedded in an EHR had a modest effect in changing prescribing for adults where antibiotics were inappropriate but had a substantial impact on changing the overall prescribing of broad-spectrum antibiotics among pediatric and adult patients.
Respiratory infections; primary care
Purpose: In the effort to reduce radiation exposure to patients undergoing myocardial perfusion imaging (MPI) with SPECT/CT, we evaluated the feasibility of a single CT for attenuation correction (AC) of single-day rest (R)/stress (S) perfusion. Methods: Processing of 20 single-isotope and 20 dual-isotope MPI studies with perfusion defects was retrospectively repeated in three steps: (1) the standard method, using a concurrent R-CT for AC of R-SPECT and S-CT for S-SPECT; (2) the standard method repeated; and (3) the R-CT used for AC of S-SPECT and the S-CT used for AC of R-SPECT. Intra-class correlation coefficients (ICC) and Cohen's kappa were used to measure intra-operator variability in sum scoring. Results: The highest level of intra-operator reliability was seen with the reproduction of the sum rest score (SRS) and sum stress score (SSS) (ICC > 95%). ICCs were > 85% for SRS and SSS when alternate CTs were used for AC, but when sum difference scores were calculated, ICC values were much lower (~22% to 27%), which may imply that neither CT substitution resulted in a reproducible difference score. Similar results were seen when evaluating dichotomous outcomes (sum score difference of ≥ 4) when comparing different processing techniques (kappas ~0.32 to 0.43). Conclusions: When a single CT is used for AC of both rest and stress SPECT, there is disproportionately high variability in sum scoring that is independent of user error. This information can be used to direct further investigation into radiation reduction for common imaging exams in nuclear medicine.
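For the dichotomous comparisons above, Cohen's kappa corrects observed agreement for the agreement expected by chance. A minimal pure-Python version might look like the following (an illustrative sketch, not the study's analysis code):

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical ratings of the same items."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n      # observed agreement
    pe = sum((r1.count(c) / n) * (r2.count(c) / n)    # chance agreement
             for c in set(r1) | set(r2))
    return (po - pe) / (1 - pe)
```

Kappa is 1.0 for perfect agreement and 0.0 when agreement is no better than chance, which is why values around 0.32 to 0.43 indicate only fair-to-moderate reproducibility.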
Tomography, emission-computed, single-photon; myocardial perfusion imaging; reproducibility of results; tomography, X-ray computed; radiation dosage
Anion gap (AG) metabolic acidosis is common in critically ill patients. The relationship between initial AG at the time of admission to the medical intensive care unit (MICU) and mortality or length of stay (LOS) is unclear. This study was undertaken to evaluate this relationship.
Materials and Methods
We prospectively examined the acid-base status of 100 consecutive patients at the time of MICU admission and recorded their mortality and LOS. The etiology of each AG was also recorded. Anion gap was corrected for albumin levels. The patients were divided into 4 stages based on severity of AG. Outcomes based on severity of AG were measured, and comparisons that adjusted for baseline characteristics were performed.
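The abstract does not give the study's exact albumin correction; a widely used convention adds about 2.5 mEq/L to the measured gap for each 1 g/dL of albumin below normal, as in this hedged sketch (variable names are illustrative):

```python
def anion_gap(na, cl, hco3):
    """AG = Na - (Cl + HCO3), in mEq/L."""
    return na - (cl + hco3)

def albumin_corrected_ag(ag, albumin_g_dl, normal_albumin=4.0):
    """Assumed conventional correction: +2.5 mEq/L per 1 g/dL albumin deficit."""
    return ag + 2.5 * (normal_albumin - albumin_g_dl)
```

Correcting for hypoalbuminemia matters in the ICU because a low albumin can mask a substantially elevated gap.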
This study showed that an increased AG was associated with higher mortality and that an AG greater than 30 had the highest mortality. Mortality was significantly (P = .013) increased even after accounting for AG etiology. Patients with the highest AG also had the longest LOS in the MICU, and patients with normal acid-base status had the shortest MICU LOS (P < .01).
A high AG at the time of admission to the MICU was associated with higher mortality and LOS. Initial risk stratification based on AG and metabolic acidosis may help guide appropriate patient disposition (especially in patients without other definitive criteria for MICU admission) and assist with prognosis. Mixed AG metabolic acidosis with concomitant acid-base disorder was associated with increased MICU LOS.
Anion gap; Mortality; Lactic acidosis; Risk stratification; Length of stay; Prognostication
Success in deep biliary cannulation via a native ampulla of Vater is an accepted measure of competence in ERCP training and practice, yet prior studies have focused on predicting adverse events alone rather than success. Our aim was to determine factors associated with deep biliary cannulation success, with and without precut sphincterotomy.
The ERCP Quality Network is a unique prospective database of over 10,000 procedures by over 80 endoscopists in several countries. After data cleaning and elimination of previously stented or cut papillae, two multilevel fixed-effects multivariate models were used to control for clustering within physicians and to predict biliary cannulation success, with and without allowing "precut" to assist an initially failed cannulation.
13,018 ERCPs were performed by 85 endoscopists (March 2007–May 2011). Conventional (without precut) and overall cannulation rates were 89.8% and 95.6%, respectively. Precut was performed in 876 cases (6.7%). Conventional success was more likely in outpatients (OR 1.21) but less likely in complex contexts (OR 0.59), sicker patients (ASA grade II: OR 0.81; III–V: OR 0.77), teaching cases (OR 0.53), and certain indications (strictures, active pancreatitis). Overall cannulation success (some precut-assisted) was more likely with higher-volume endoscopists (> 239/year: OR 2.79) and more efficient fluoroscopy practices (OR 1.72), and lower with moderate (versus deeper) sedation (OR 0.67).
Biliary cannulation success appears influenced by both patient and practitioner factors. Patient- and case-specific factors have greater impact on conventional (precut-free) cannulation success, but volume influences ultimate success; both may be used to select appropriate cases and can help guide credentialing policies.
To examine, in a randomized controlled feasibility clinical trial, the efficacy of a cognitive-behavioral intervention designed to manage pain, enhance disease adjustment and adaptation, and improve quality of life among female adolescents with systemic lupus erythematosus (SLE).
Female adolescents (N = 53) ranging in age from 12 to 18 years were randomized to one of three groups: a cognitive-behavioral intervention, an education-only arm, or a no-contact control group. Participants were assessed at baseline, post-intervention, and at three- and six-month intervals following completion of the intervention.
No significant differences were revealed among the three treatment arms for any of the dependent measures at any of the assessment points. For the mediator variables, a post-hoc secondary analysis did reveal increases in coping skills from baseline to post-intervention among the participants in the cognitive-behavioral intervention group compared to both the no-contact control group and the education-only group.
Although no differences were detected in the primary outcome, a possible effect on coping among female adolescents with SLE was detected in this feasibility study. Whether the impact of training in coping was of sufficient magnitude to generalize to other areas of functioning, such as adjustment and adaptation, is unclear. Future Phase III randomized trials will be needed to assess additional coping models and to evaluate the dose of training and its influence on pain management, adjustment, and health-related quality of life.
lupus; cognitive-behavioral; quality of life
Systemic lupus erythematosus (SLE) is an autoimmune disease caused, in part, by abnormalities in cells of the immune system, including B and T cells. Globally reducing the expression of the ETS transcription factor FLI1 by 50% in two lupus mouse models significantly improves disease measures and survival through an unknown mechanism. In this study we analyzed the effects of reducing FLI1 on T cell function in the MRL/lpr lupus-prone model. We demonstrate that adoptive transfer of MRL/lpr Fli1+/+ or Fli1+/- T cells and B cells into Rag1-deficient mice results in significantly decreased serum immunoglobulin levels in animals receiving Fli1+/- lupus T cells compared to animals receiving Fli1+/+ lupus T cells, regardless of the genotype of the co-transferred lupus B cells. Ex vivo analyses demonstrated that MRL/lpr Fli1+/- T cells produce significantly less IL-4 during early and late disease and exhibit significantly decreased TCR-specific activation during early disease compared to Fli1+/+ T cells. Moreover, Fli1+/- T cells express significantly less neuraminidase 1 (Neu1) message and show decreased NEU activity during early disease, and significantly decreased levels of glycosphingolipids during late disease, compared to Fli1+/+ T cells. FLI1 dose-dependently activated the Neu1 promoter in mouse and human T cell lines. Together, our results suggest that reducing FLI1 in lupus decreases the pathogenicity of T cells by decreasing TCR-specific activation and IL-4 production, in part through modulation of glycosphingolipid metabolism. Reducing the expression of FLI1, or targeting the glycosphingolipid metabolic pathway, may serve as a therapeutic approach to treating lupus.
People frequently present to voice clinics complaining of irritating laryngeal sensations. Clinicians attempt to reduce these sensations and their common sequelae, coughing and throat clearing, by advocating techniques that relieve the irritation with less harm to the vocal fold tissue. Despite the prevalence of patients with these complaints, it is not known whether the less harmful techniques recommended by clinicians are effective at clearing irritating laryngeal sensations, or whether such sensations are, in fact, more frequent in people with voice disorders than in people without voice disorders.
Assessments of participant-reported laryngeal sensation, before and after a clearing task, were obtained from 22 people with and 24 people without a voice disorder. Six clearing tasks were used to preliminarily evaluate the differing effects of tasks believed to be deleterious or ameliorative.
People with and without voice disorders reported pre-clear laryngeal sensation at similar rates. Post-clear sensation was less likely to be completely or partially removed in people with voice disorders than in those without. A hard throat clear and a swallow with water were the most effective techniques for removing laryngeal sensation.
The findings provide initial evidence for some clinical practices common to treating patients with voice disorders and chronic clearing, such as advocating swallowing a sip of water as a replacement behavior for coughing or throat clearing. However, the findings raise questions about other practices, such as associating irritating laryngeal sensation with a voice disorder.
voice; larynx; sensation; throat clear; cough
Purpose of review
Racial disparities appear to exist in the susceptibility to and severity of systemic sclerosis (SSc, scleroderma) and are responsible for a greater health burden in blacks as compared to whites. Disparities in socioeconomic status and access to health care do not sufficiently explain the observed differences in prevalence and mortality. It is therefore important to determine whether there is a biologic basis for the racial disparities observed in SSc.
We present data suggesting that the increased susceptibility to and severity of SSc in blacks may result, in part, from an imbalance of pro-fibrotic and anti-fibrotic factors. Racial differences in the expression of transforming growth factor-β1 (TGF-β1) and caveolin-1, as well as of hepatocyte growth factor (HGF) and PPAR-γ, have been demonstrated both in blacks with SSc and in normal black subjects. A genetic predisposition to fibrosis may account for much of the racial disparity between black and white patients with SSc.
A better understanding of the biologic basis for the racial disparities observed in SSc may lead to improved therapies, along with the recognition that different therapies may need to be adapted for different groups of patients.
Systemic Sclerosis; Health Disparities; TGF-β; Caveolin-1; HGF
The sign test is a well-known nonparametric approach for testing whether one of two conditions is preferable to another. In medicine, this method may be used in the context of a clinical trial to test whether either of two treatments provided to study subjects is favored over the other. When neither treatment outperforms the other within a given individual, a “tie” is said to have occurred. When planning such a trial and estimating statistical power and/or sample size, one should consider the probability of a tie occurring (PT). This paper quantifies the degree to which uncertainty in PT affects a study’s statistical power.
Binomial theory was used to calculate power given varying levels of uncertainty and varying distributional forms (i.e. beta, uniform) for PT.
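The binomial power calculation described above can be sketched as follows. This is a minimal illustration, not the study's actual code: the effect size (the probability that one treatment is preferred within a non-tied pair), the sample size, the alpha level and the uniform prior bounds for PT are all assumed values chosen for demonstration. Ties are discarded, so the number of informative pairs is itself binomially distributed given PT, and expected power is averaged over draws of PT from its prior.

```python
import math
import random

def binom_pmf(k, n, p):
    """Binomial probability mass function."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def sign_test_power(m, p_effect, alpha=0.05):
    """Exact power of a two-sided sign test on m non-tied pairs,
    where p_effect is the true probability that treatment A is
    preferred within a non-tied pair."""
    if m == 0:
        return 0.0
    # Find the smallest upper critical value k_crit with
    # 2 * P(X >= k_crit | p = 0.5) <= alpha.
    tail = 0.0
    k_crit = m + 1
    for k in range(m, -1, -1):
        tail += binom_pmf(k, m, 0.5)
        if 2 * tail > alpha:
            break
        k_crit = k
    if k_crit > m:
        return 0.0  # the test can never reject with so few pairs
    # Power = P(X >= k_crit) + P(X <= m - k_crit) under p_effect.
    upper = sum(binom_pmf(k, m, p_effect) for k in range(k_crit, m + 1))
    lower = sum(binom_pmf(k, m, p_effect) for k in range(0, m - k_crit + 1))
    return upper + lower

def expected_power(n, p_effect, pt_draws, alpha=0.05):
    """Average sign-test power over draws of the tie probability PT.
    Ties are discarded, so the number of usable pairs is Binomial(n, 1 - PT)."""
    total = 0.0
    for pt in pt_draws:
        total += sum(binom_pmf(m, n, 1 - pt) * sign_test_power(m, p_effect, alpha)
                     for m in range(n + 1))
    return total / len(pt_draws)

random.seed(1)
n, p_effect = 60, 0.70                       # assumed design values
fixed = expected_power(n, p_effect, [0.20])  # PT treated as known
uniform_prior = [random.uniform(0.05, 0.35) for _ in range(500)]  # uniform prior on PT
uncertain = expected_power(n, p_effect, uniform_prior)
print(f"power with PT known: {fixed:.3f}; with uniform prior on PT: {uncertain:.3f}")
```

Comparing the two printed values for a given design shows how far a planned power figure can drift once PT is treated as uncertain rather than fixed; a beta prior can be substituted for the uniform draws in the same way.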
Across a range of prior distributions for PT, power was reduced (i.e. <80%) for 46 (71.9%) of 64 experimental conditions, with large reductions (i.e. power <70%) for 10 (15.6%) of them.
When designing a clinical trial that will use the sign test to compare two conditions, ignoring potential variation in the probability of a tie occurring will tend to result in an underpowered study. These findings have implications for the design of any clinical trial in which assumptions are made when calculating an appropriate sample size.
statistical power; sign test; non-parametric statistics; uncertainty; binomial distribution; sample size estimation
Human cytomegalovirus (HCMV) is an important cause of morbidity and mortality in patients with chronic graft-versus-host disease (cGVHD), but the underlying mechanisms are not understood. The aim of this investigation was to determine whether humoral immune responses to the HCMV antigens were quantitatively different in hematopoietic cell transplant (HCT) recipients who developed cGVHD from those who did not. Antibodies to HCMV and its proteins UL94 and UL70 were quantitated in 79 cGVHD and 30 non-cGVHD patients by enzyme-linked immunosorbent assays (ELISAs). Mean levels of antibodies to the whole HCMV and to its protein UL94 were not significantly different between the cGVHD and the non-cGVHD subjects. However, the levels of antibodies to HCMV UL70 were significantly higher in non-cGVHD subjects than in those with cGVHD (20.91±15.63 versus 15.00±10.35 ng/mL; p=0.03). This suggests that anti-UL70 antibodies might play a protective role in the development of cGVHD.