Anion gap (AG) metabolic acidosis is common in critically ill patients. The relationship between initial AG at the time of admission to the medical intensive care unit (MICU) and mortality or length of stay (LOS) is unclear. This study was undertaken to evaluate this relationship.
Materials and Methods
We prospectively examined the acid-base status of 100 consecutive patients at the time of MICU admission and recorded their mortality and LOS. The etiology of each AG was also recorded. Anion gap was corrected for albumin levels. The patients were divided into 4 stages based on severity of AG. Outcomes based on severity of AG were measured, and comparisons that adjusted for baseline characteristics were performed.
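The abstract does not state the correction formula or the stage boundaries used. As a hedged illustration only, the sketch below applies the widely used albumin correction (adding 2.5 mEq/L for each 1 g/dL that albumin falls below 4.0 g/dL) and an assumed four-stage grading; the study's actual correction factor and cutoffs may differ.

```python
# Minimal sketch of an albumin-corrected anion gap with an assumed
# four-stage severity grading; the correction constant is the common
# one, but the stage boundaries below are illustrative assumptions.

def corrected_anion_gap(na, cl, hco3, albumin_g_dl):
    """Anion gap corrected for hypoalbuminemia. Electrolytes in mEq/L,
    albumin in g/dL."""
    ag = na - (cl + hco3)
    return ag + 2.5 * (4.0 - albumin_g_dl)

def ag_stage(ag_corr):
    """Hypothetical 4-stage grading; boundaries are assumed, not the
    study's published cutpoints."""
    if ag_corr <= 12:
        return 1  # normal
    elif ag_corr <= 20:
        return 2
    elif ag_corr <= 30:
        return 3
    else:
        return 4  # AG > 30, the stratum with the highest observed mortality

print(ag_stage(corrected_anion_gap(na=140, cl=100, hco3=10, albumin_g_dl=2.0)))
```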
This study showed that an increased AG was associated with higher mortality and that an AG greater than 30 carried the highest mortality. Mortality remained significantly (P = .013) increased even after accounting for AG etiology. Patients with the highest AG also had the longest LOS in the MICU, and patients with normal acid-base status had the shortest MICU LOS (P < .01).
A high AG at the time of admission to the MICU was associated with higher mortality and longer LOS. Initial risk stratification based on AG and metabolic acidosis may help guide appropriate patient disposition (especially in patients without other definitive criteria for MICU admission) and assist with prognosis. Mixed AG metabolic acidosis with a concomitant acid-base disorder was associated with increased MICU LOS.
Anion gap; Mortality; Lactic acidosis; Risk stratification; Length of stay; Prognostication
Systemic lupus erythematosus (SLE) is an autoimmune disease caused, in part, by abnormalities in cells of the immune system, including B and T cells. Genetically reducing global expression of the ETS transcription factor FLI1 by 50% in two lupus mouse models significantly improves disease measures and survival through an unknown mechanism. In this study we analyzed the effects of reducing FLI1 on T cell function in the MRL/lpr lupus-prone model. We demonstrate that adoptive transfer of MRL/lpr Fli1+/+ or Fli1+/- T cells and B cells into Rag1-deficient mice results in significantly decreased serum immunoglobulin levels in animals receiving Fli1+/- lupus T cells compared to animals receiving Fli1+/+ lupus T cells, regardless of the genotype of the co-transferred lupus B cells. Ex vivo analyses of MRL/lpr T cells demonstrated that Fli1+/- T cells produced significantly less IL-4 during early and late disease and exhibited significantly decreased TCR-specific activation during early disease compared to Fli1+/+ T cells. Moreover, the Fli1+/- T cells expressed significantly less neuraminidase 1 (Neu1) message and decreased NEU activity during early disease, and significantly decreased levels of glycosphingolipids during late disease, compared to Fli1+/+ T cells. FLI1 dose-dependently activated the Neu1 promoter in mouse and human T cell lines. Together, our results suggest that reducing FLI1 in lupus decreases the pathogenicity of T cells by decreasing TCR-specific activation and IL-4 production, in part through modulation of glycosphingolipid metabolism. Reducing the expression of FLI1 or targeting the glycosphingolipid metabolic pathway may serve as a therapeutic approach to treating lupus.
People frequently present to voice clinics with complaints of irritating laryngeal sensations. Clinicians attempt to reduce the irritating sensations and their common sequelae, coughing and throat clearing, by advocating techniques that remove the irritation with less harm to the vocal fold tissue. Despite the prevalence of patients with these complaints, it is not known whether the less harmful techniques recommended by clinicians are effective at clearing irritating laryngeal sensations, or whether irritating laryngeal sensations are, in fact, more frequent in people with voice disorders than in people without voice disorders.
Assessments of participant-reported laryngeal sensation, pre- and post-clearing task, were obtained from 22 people with and 24 people without a voice disorder. Six clearing tasks were used to preliminarily evaluate the differing effects of tasks believed to be deleterious and ameliorative.
People with and without voice disorders reported pre-clear laryngeal sensation at a similar rate. Post-clear sensation was less likely to be completely or partially removed in people with voice disorders than in the group without voice disorders. Hard throat clear and swallow with water were the most effective techniques at removing laryngeal sensation.
The findings provide initial evidence for some of the clinical practices common to treating patients with voice disorders and chronic clearing such as advocating for swallowing a sip of water as a replacement behavior instead of coughing or throat clearing. However, the findings raise questions about other practices such as associating irritating laryngeal sensation with a voice disorder.
voice; larynx; sensation; throat clear; cough
Purpose of review
Racial disparities appear to exist in the susceptibility and severity of systemic sclerosis (SSc, scleroderma) and are responsible for a greater health burden in blacks as compared to whites. Disparities in socioeconomic status and access to health care do not sufficiently explain the observed differences in prevalence and mortality. It is important to determine if there might be a biologic basis for the racial disparities observed in SSc.
We present data to suggest that the increased susceptibility and severity of SSc in blacks may result in part from an imbalance of pro-fibrotic and anti-fibrotic factors. Racial differences in the expression of transforming growth factor-β1 (TGF-β1) and caveolin-1, as well as differences in the expression of hepatocyte growth factor (HGF) and PPAR-γ have been demonstrated in blacks with SSc, as well as in normal black subjects. A genetic predisposition to fibrosis may account for much of the racial disparities between black and white patients with SSc.
A better understanding of the biologic basis for the racial disparities observed in SSc may lead to improved therapies, along with the recognition that different therapies may need to be adapted for different groups of patients.
Systemic Sclerosis; Health Disparities; TGF-β; Caveolin-1; HGF
Human cytomegalovirus (HCMV) is an important cause of morbidity and mortality in patients with chronic graft-versus-host disease (cGVHD), but the underlying mechanisms are not understood. The aim of this investigation was to determine whether humoral immune responses to the HCMV antigens were quantitatively different in hematopoietic cell transplant (HCT) recipients who developed cGVHD from those who did not. Antibodies to HCMV and its proteins UL94 and UL70 were quantitated in 79 cGVHD and 30 non-cGVHD patients by enzyme-linked immunosorbent assays (ELISAs). Mean levels of antibodies to the whole HCMV and to its protein UL94 were not significantly different between the cGVHD and the non-cGVHD subjects. However, the levels of antibodies to HCMV UL70 were significantly higher in non-cGVHD subjects than in those with cGVHD (20.91±15.63 versus 15.00±10.35 ng/mL; p=0.03). This suggests that anti-UL70 antibodies might play a protective role in the development of cGVHD.
Distal oesophageal spasm (DES) is a rare and under-investigated motility abnormality. Recent studies indicate effective bolus transit in varying percentages of DES patients.
To explore functional aspects, including contraction onset velocity and contraction amplitude cut-off values for simultaneous contractions, to predict complete bolus transit.
We re-examined data from 107 impedance-manometry recordings with a diagnosis of DES. Receiver operating characteristic (ROC) analysis was conducted on the effects of onset velocity on bolus transit, taking into account distal oesophageal amplitude (DEA) and correcting for intra-individual repeated measures.
Mean areas under the ROC curve for saline and viscous swallows were 0.84±0.05 and 0.84±0.04, respectively. Velocity criteria of >30 cm/s when DEA >100 mmHg and >8 cm/s when DEA <100 mmHg for saline, and >32 cm/s when DEA >100 mmHg and >7 cm/s when DEA <100 mmHg for viscous, had a sensitivity of 75% and specificity of 80% for identifying complete bolus transit. Using these criteria, the final diagnosis changed in 44.9% of patients. Abnormal bolus transit was observed in 50.9% of newly diagnosed DES patients versus 7.5% of patients classified as normal. DES patients with DEA >100 mmHg suffered twice as often from chest pain as those with DEA <100 mmHg.
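Read as a decision rule, the amplitude-dependent velocity criteria can be expressed compactly. The sketch below is a hedged rendering only; whether a velocity exceeding the cutoff maps to complete or abnormal transit follows the study's classification, which the abstract summarizes only briefly, so the function simply reports whether the relevant cutoff is exceeded.

```python
# Hedged sketch of the amplitude-dependent velocity cutoffs above.
# Units follow the abstract: onset velocity in cm/s, DEA in mmHg.

def exceeds_velocity_cutoff(onset_velocity_cm_s, dea_mmhg, bolus="saline"):
    """True if contraction onset velocity exceeds the study's
    amplitude-dependent cutoff for the given bolus type."""
    if bolus == "saline":
        cutoff = 30.0 if dea_mmhg > 100 else 8.0
    else:  # "viscous"
        cutoff = 32.0 if dea_mmhg > 100 else 7.0
    return onset_velocity_cm_s > cutoff

print(exceeds_velocity_cutoff(35.0, 120.0, "saline"))  # True
```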
The proposed velocity cut-offs for diagnosing distal oesophageal spasm improve the ability to identify patients with spasm and abnormal bolus transit.
DES; diffuse oesophageal spasm; distal oesophageal spasm; bolus transit; Impedance Manometry; MII-EM; dysphagia; chest pain; GERD
Many patients receiving cardiac rhythm devices have conditions requiring antiplatelet (AP) and/or anticoagulant (AC) therapy. Current guidelines recommend a heparin bridging strategy (HBS) for anticoagulated patients at moderate/high risk for thrombosis. Several studies have reported lower bleeding risk with continued oral anticoagulation rather than HBS. The best strategy for perioperative management of patients on AP therapy is less clear. The present study was designed as a meta-analysis of device implantation-associated bleeding complications under different AC/AP strategies.
Methods and Results
PubMed and Cochrane Database searches identified articles based on design, outcomes and available data. Device recipients were grouped as follows: no therapy (NT), aspirin only, AC held, AC continued, dual AP, HBS. The primary outcome was defined as a bleeding complication including hematoma, transfusion or prolonged hospital stay. Thirteen articles were identified for analysis including 5978 patients. The combined incidence of bleeding complications was 274/5978 (4.6%), ranging from 2.2% (NT) to 14.6% (HBS). The estimated odds of bleeding were increased by 8.3 (95% CI 5.5-12.9) times in the HBS group, 5.0 (95% CI 3.0-8.3) for dual AP therapy, 1.7 (95% CI 1.0-3.1) for AC held, 1.6 (95% CI 0.9-2.6) for AC continued, and 1.5 (95% CI 0.9-2.3) for aspirin only, relative to the NT group. HBS significantly increased bleeding events compared with holding or continuing AC. Continuing AC did not increase bleeding events compared with NT.
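The abstract does not specify the pooling model used to estimate the group-wise odds. As a hedged illustration of one common approach, the sketch below pools a single comparison's odds ratio across studies with the Mantel-Haenszel fixed-effect estimator; the 2x2 counts are fabricated placeholders for structure only, not study data.

```python
# Mantel-Haenszel fixed-effect pooled odds ratio across studies.
# Each table is (a, b, c, d): a/b = bleed/no-bleed in the exposed group
# (e.g. HBS), c/d = bleed/no-bleed in the comparator (e.g. no therapy).

def mantel_haenszel_or(tables):
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Illustrative, made-up counts for three hypothetical studies:
print(mantel_haenszel_or([(12, 88, 3, 97), (9, 41, 2, 48), (20, 180, 5, 195)]))
```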
Continuing AC appears safer than HBS for device implantation. Dual AP therapy but not continuing AC carries a significant risk of bleeding.
anticoagulants; heparin; implantable cardioverter-defibrillator; pacemakers
Stage IIIA non-small cell lung cancer (NSCLC) comprises a heterogeneous group of patients with predominant ipsilateral mediastinal (N2) disease. The spectrum of lymph node presentation has led to a host of trials involving various therapeutic combinations, and optimal management has been unclear.
In 2007 and 2008, ten live research events surveyed the practice preferences of American medical oncologists using two hypothetical scenarios. The first scenario was of a stage IIIA NSCLC in the right upper lobe with a single enlarged (>1cm) 4R lymph node found to be malignant by mediastinoscopy. The second was of a bulky stage IIIA NSCLC with multi-station N2 pathologically positive nodes.
In the first scenario, 373 (92%) of the oncologists incorporated surgery into their treatment plan. Only 34 (8%) offered chemoradiotherapy alone. Neoadjuvant chemotherapy, followed by surgery, then additional chemoradiotherapy (32%) was the most commonly offered treatment strategy. In the second scenario, 209 (52%) medical oncologists chose definitive chemoradiation, and 193 (48%) included surgery as part of the treatment plan.
The current standard of care for stage IIIA N2 NSCLC recognized prior to treatment is concurrent chemoradiotherapy. This study demonstrated that a significant proportion of oncologists treating locally advanced lung cancer include surgery as part of the treatment plan, more so in single-station than in multi-station N2 disease. Since node-positive locally advanced disease is such a common presentation for patients with lung cancer, well-designed clinical trials are needed to define the most advantageous treatment strategy for individual subsets of patients with stage IIIA disease.
The sign test is a well-known nonparametric approach for testing whether one of two conditions is preferable to another. In medicine, this method may be used when one is interested in testing in the context of a clinical trial whether either of two treatments that are provided to study subjects is favored over the other. When neither treatment outperforms the other within a given individual, a “tie” is said to have occurred. When planning such a trial and estimating statistical power and/or sample size, one should consider the probability of a tie occurring (PT). This paper quantifies the degree to which uncertainty in PT affects a study’s statistical power.
Binomial theory was used to calculate power given varying levels of uncertainty and varying distributional forms (i.e. beta, uniform) for PT.
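To make the calculation concrete: with n enrolled pairs, ties occurring with probability PT, and preference probability p among non-tied pairs, power can be computed by conditioning on the number of informative (non-tied) pairs and then averaging over a prior on PT. The sketch below is an illustrative reconstruction under those assumptions, not the paper's exact code; the Beta(2, 8) prior and the n = 60, p = 0.7 inputs are hypothetical.

```python
# Expected power of the exact two-sided sign test when the tie
# probability PT is uncertain. Ties are assumed dropped before testing.

import numpy as np
from scipy.stats import binom, binomtest, beta

def rejection_prob(m, p, alpha=0.05):
    """P(exact two-sided sign test rejects | m informative pairs)."""
    if m == 0:
        return 0.0  # no informative pairs -> never reject
    return sum(binom.pmf(k, m, p) for k in range(m + 1)
               if binomtest(k, m, 0.5).pvalue < alpha)

def expected_power(n, p, pt_a, pt_b, alpha=0.05, grid=101):
    """Average power over a Beta(pt_a, pt_b) prior on PT."""
    rej = [rejection_prob(m, p, alpha) for m in range(n + 1)]
    pts = np.linspace(0.001, 0.999, grid)
    w = beta.pdf(pts, pt_a, pt_b)
    w /= w.sum()
    power_at = lambda pt: sum(binom.pmf(m, n, 1 - pt) * rej[m]
                              for m in range(n + 1))
    return float(sum(wi * power_at(pt) for pt, wi in zip(pts, w)))

print(expected_power(n=60, p=0.7, pt_a=2, pt_b=8))
```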
Across a range of prior distributions for PT, power was reduced (i.e. <80%) for 46 (71.9%) of 64 experimental conditions, with large reductions (i.e. power <70%) for 10 (15.6%) of them.
When designing a clinical trial that will incorporate the sign test to compare 2 conditions, ignoring potential variation in the probability of a tie occurring will tend to result in an underpowered study. These findings have implications for the design of any clinical trial for which assumptions are made in calculating an appropriate sample size.
statistical power; sign test; non-parametric statistics; uncertainty; binomial distribution; sample size estimation
Vasoconstrictor therapy has been advocated as treatment for hepatorenal syndrome (HRS). Our aim was to explore whether across all tested vasoconstrictors, achievement of a substantial rise in arterial blood pressure is associated with recovery of kidney function in HRS.
Pooled analysis of published studies identified by electronic database search.
Setting & Population
Data pooled across 501 subjects from 21 studies.
Selection Criteria for Studies
Human studies evaluating the efficacy of a vasoconstrictor administered for ≥ 72 hours in adults with HRS Type 1 or 2.
Outcomes & Measurements
Cohorts’ mean arterial pressure (MAP), serum creatinine, urinary output and plasma renin activity (PRA) at baseline and at subsequent time points during treatment. Linear regression models were constructed to estimate the mean daily change in MAP, serum creatinine, urinary output and PRA for each study subgroup. Correlations were used to assess for association between variables.
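A minimal sketch of this two-step analysis, with toy numbers standing in for the published cohorts (the data below are illustrative, not study values): fit an ordinary least-squares slope per cohort to estimate the mean daily change, then correlate the slopes across cohorts.

```python
# Per-cohort linear slopes (mean daily change), then cross-cohort
# correlation between MAP change and serum creatinine change.

from scipy.stats import linregress, pearsonr

def daily_change(days, values):
    """Slope of an ordinary least-squares fit: mean change per day."""
    return linregress(days, values).slope

# Hypothetical cohorts: (treatment days, MAP readings, creatinine readings)
cohorts = [
    ([0, 3, 7, 14], [70, 74, 78, 80], [3.5, 3.0, 2.4, 1.9]),
    ([0, 3, 7, 14], [72, 73, 74, 74], [3.2, 3.1, 3.0, 2.9]),
    ([0, 3, 7, 14], [68, 75, 81, 84], [4.0, 3.2, 2.3, 1.7]),
]
map_slopes = [daily_change(d, m) for d, m, _ in cohorts]
cr_slopes = [daily_change(d, c) for d, _, c in cohorts]
r, p = pearsonr(map_slopes, cr_slopes)
print(f"r = {r:.2f}, p = {p:.3f}")  # expect negative r: MAP up, creatinine down
```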
An increase in MAP is strongly associated with a decline in serum creatinine but not associated with an increase in urinary output. The associations were stronger when analyses were restricted to randomized clinical trials and were not limited to cohorts with either lower baseline MAP or lower baseline serum creatinine. The majority of the studies tested terlipressin as vasoconstrictor, whereas fewer studies tested ornipressin, midodrine, octreotide or norepinephrine. Excluding cohorts of subjects treated with terlipressin or ornipressin did not eliminate the association. Furthermore, a fall in PRA correlated with improvement in kidney function.
Studies were not originally designed to test our question. We lacked access to individual patient data.
A rise in MAP during vasoconstrictor therapy in HRS is associated with improvement in kidney function, across the spectrum of drugs tested to date. These results support consideration for a goal-directed approach to the treatment of HRS.
With the current focus on personalized medicine, patient/subject-level inference is often of key interest in translational research. As a result, random effects models (REM) are becoming popular for patient-level inference. However, for very large data sets characterized by large sample sizes, it can be difficult to fit REM using commonly available statistical software such as SAS, because these models require inordinate amounts of computer time and memory allocations beyond what is available, preventing model convergence. For example, in a retrospective cohort study of over 800,000 Veterans with type 2 diabetes with longitudinal data over 5 years, fitting REM via generalized linear mixed modeling using currently available standard procedures in SAS (e.g. PROC GLIMMIX) was very difficult, and the same problems exist in Stata's gllamm and R's lme packages. Thus, this study proposes a meta-regression approach, assesses its performance, and makes comparisons with methods based on sampling of the full data.
We used both simulated and real data from a national cohort of Veterans with type 2 diabetes (n=890,394), created by linking multiple patient and administrative files into a cohort with longitudinal data collected over 5 years.
Methods and Results
The outcome of interest was mean annual HbA1c measured over a 5-year period. Using this outcome, we compared parameter estimates from the proposed random effects meta-regression (REMR) with estimates based on simple random sampling and VISN (Veterans Integrated Service Networks)-based stratified sampling of the full data. Our results indicate that REMR provides parameter estimates that are less likely to be biased, with tighter confidence intervals, when the VISN-level estimates are homogeneous.
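The abstract describes REMR only at a high level; one common reading is a two-stage scheme: fit the mixed model within each VISN stratum, then pool the VISN-level fixed-effect estimates with a random-effects (DerSimonian-Laird) combination. The sketch below follows that reading as an assumption; the column names (hba1c, year, patient_id, visn) are likewise assumed.

```python
# Two-stage sketch of a random effects meta-regression (REMR):
# stage 1 fits a mixed model per VISN; stage 2 pools the estimates.

import numpy as np
import statsmodels.formula.api as smf

def visn_estimates(df):
    """Stage 1: per-VISN mixed model, random intercept per patient.
    Returns (estimate, variance) pairs for the 'year' fixed effect."""
    out = []
    for visn, sub in df.groupby("visn"):
        m = smf.mixedlm("hba1c ~ year", sub, groups=sub["patient_id"]).fit()
        out.append((m.params["year"], m.bse["year"] ** 2))
    return out

def dersimonian_laird(estimates):
    """Stage 2: random-effects pooling of (estimate, variance) pairs."""
    b = np.array([e for e, _ in estimates])
    v = np.array([s for _, s in estimates])
    w = 1 / v
    q = np.sum(w * (b - np.sum(w * b) / w.sum()) ** 2)
    tau2 = max(0.0, (q - (len(b) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))
    w_star = 1 / (v + tau2)
    return np.sum(w_star * b) / w_star.sum(), np.sqrt(1 / w_star.sum())

# Usage (df: long-format DataFrame of patient-year HbA1c records):
#   pooled, se = dersimonian_laird(visn_estimates(df))
```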
When the goal is to fit REM to repeated-measures data with very large sample sizes, REMR can be used as a good alternative. It leads to reasonable inference for both Gaussian and non-Gaussian responses when parameter estimates are homogeneous across VISNs.
Generalized linear mixed model; Homogeneity; Random effect meta regression; Longitudinal data; Very large dataset
With advances in technology, detection of small pulmonary nodules is increasing. Nodule detection software (NDS) has been developed to assist radiologists with pulmonary nodule diagnosis. Although it may increase sensitivity for small nodules, often there is an accompanying increase in false-positive findings. We designed a study to examine the extent to which computed tomography (CT) NDS influences the confidence of radiologists in identifying small pulmonary nodules.
Materials and Methods
Eight radiologists (readers) with different levels of experience examined thoracic CT scans of 131 cases and identified all the clinically relevant pulmonary nodules. The reference standard was established by an expert, dedicated thoracic radiologist. For each nodule, the readers recorded nodule size, density, location, and confidence level. Two weeks (or more) later, the readers reinterpreted the same scans; however, this time they were provided marks, when present, as indicated by NDS and asked to reassess their level of confidence. The effect of NDS on changes in reader confidence was assessed using multivariable generalized linear regression models.
A total of 327 unique nodules were identified. Declines in confidence were significantly (P<0.05) associated with the absence of an NDS mark and smaller nodules (odds ratio = 71.0, 95% confidence interval = 14.8–339.7). Among nodules with pre-NDS confidence less than 100%, increases in confidence were significantly (P<0.05) associated with the presence of an NDS mark (odds ratio = 6.0, 95% confidence interval = 2.7–13.6) and larger nodules. Secondary findings showed that NDS did not improve reader diagnostic accuracy.
Although in this study NDS does not seem to enhance reader accuracy, the confidence of the radiologists in identifying small pulmonary nodules with CT is greatly influenced by NDS.
clinical decision making; computed tomography scan; diagnostic imaging; lung neoplasm; diagnostic errors
To examine, in a randomized controlled feasibility clinical trial, the efficacy of a cognitive-behavioral intervention designed to manage pain, enhance disease adjustment and adaptation, and improve quality of life among female adolescents with systemic lupus erythematosus (SLE).
Female adolescents (N = 53) ranging in age from 12 to 18 years were randomized to one of three groups: a cognitive-behavioral intervention, an education-only arm, and a no-contact control group. Participants were assessed at baseline, post-intervention, and at three- and six-month intervals following completion of the intervention.
No significant differences were revealed among the three treatment arms for any of the dependent measures at any of the assessment points. For the mediator variables, a post-hoc secondary analysis did reveal increases in coping skills from baseline to post-intervention among the participants in the cognitive-behavioral intervention group compared to both the no-contact control group and the education-only group.
Although no differences were detected in the primary outcome, a possible effect on female SLE adolescent coping was detected in this feasibility study. Whether the impact of training in the area of coping was of sufficient magnitude to generalize to other areas of functioning, such as adjustment and adaptation, is unclear. Future Phase III randomized trials will be needed to assess additional coping models, and to evaluate the dose of training and its influence on pain management, adjustment, and health-related quality of life.
lupus; cognitive-behavioral; quality of life
When designing cluster randomized trials, it is important for researchers to be familiar with strategies to achieve valid study designs given limited resources. Constrained randomization is a technique to help ensure balance on pre-specified baseline covariates.
The goal was to develop a randomization scheme that balanced 16 intervention and 16 control practices with respect to 7 factors that may influence improvement in study outcomes during a 4-year cluster randomized trial to improve colorectal cancer screening within a primary care practice-based research network. We used a novel approach that included simulating 30,000 randomization schemes, removing duplicates, identifying which schemes were sufficiently balanced, and randomly selecting one scheme for use in the trial. For a given factor, balance was considered achieved when the frequency of each factor’s sub-classifications differed by no more than 1 between intervention and control groups. The population being studied includes approximately 32 primary care practices located in 19 states within the U.S. that care for approximately 56,000 patients at least 50 years old.
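A hedged sketch of this simulate-deduplicate-filter-select procedure is given below (balance rule: each factor's sub-class counts may differ by at most 1 between arms). The single synthetic binary factor is illustrative only, since the trial's 7 actual factors are not listed here.

```python
# Constrained randomization: simulate allocations, drop duplicates,
# keep balanced schemes, randomly select one for use.

import random
from collections import Counter

def simulate_schemes(n_practices=32, n_sims=30000, seed=1):
    rng = random.Random(seed)
    schemes = set()  # the set removes duplicate allocations
    for _ in range(n_sims):
        arm = rng.sample(range(n_practices), n_practices // 2)
        schemes.add(frozenset(arm))
    return schemes

def balanced(scheme, factors):
    """factors: one list of per-practice category labels per factor."""
    n = len(factors[0])
    for levels in factors:
        inter = Counter(levels[i] for i in scheme)
        ctrl = Counter(levels[i] for i in range(n) if i not in scheme)
        if any(abs(inter[c] - ctrl[c]) > 1 for c in set(levels)):
            return False
    return True

# Illustrative single binary factor (e.g. rural vs. urban practices):
rng = random.Random(2)
factors = [[rng.choice(["rural", "urban"]) for _ in range(32)]]
ok = [s for s in simulate_schemes() if balanced(s, factors)]
final = random.Random(3).choice(ok)
print(len(ok), sorted(final))  # number of acceptable schemes; chosen arm
```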
Of 29,782 unique simulated randomization schemes, 116 were determined to be balanced according to pre-specified criteria for all 7 baseline covariates. The final randomization scheme was randomly selected from these 116 acceptable schemes.
Using this technique, we successfully found a randomization scheme that allocated 32 primary care practices into intervention and control groups in a way that preserved balance across 7 baseline covariates. This process may be a useful tool for ensuring covariate balance within moderately large cluster randomized trials.
Randomization techniques; cluster randomized trials; covariate balance; study design; practice based research networks; colorectal cancer screening
There has been a resurgence of interest in lung cancer screening using low-dose computed tomography. The implications of directing a screening programme at smokers have been little explored.
A nationwide telephone survey was conducted. Demographics, certain clinical characteristics and attitudes about screening for lung cancer were ascertained. Responses of current, former and never smokers were compared.
A total of 2001 people from the US were interviewed. Smokers were significantly (p<0.05) more likely than never smokers to be male, non-white, less educated, and to report poor health status or having had cancer, and less likely to be able to identify a usual source of healthcare. Compared with never smokers, current smokers were less likely to believe that early detection would result in a good chance of survival (p<0.05). Smokers were less likely to be willing to consider computed tomography screening for lung cancer (71.2% (current smokers) v 87.6% (never smokers); odds ratio (OR) 0.48; 95% confidence interval (CI) 0.32 to 0.71). More never smokers than current smokers believed that the risk of disease (88% v 56%) and the accuracy of the test (92% v 71%) were important determinants in deciding whether to be screened (p<0.05). Only half of the current smokers would opt for surgery for a screen-diagnosed cancer.
The findings suggest that there may be substantial obstacles to the successful implementation of a mass‐screening programme for lung cancer that will target cigarette smokers.
Activation of the coagulation cascade leading to generation of thrombin has been extensively documented in various forms of lung injury, including that associated with systemic sclerosis. We previously demonstrated that the direct thrombin inhibitor (DTI) dabigatran inhibits thrombin-induced profibrotic signaling in lung fibroblasts. This study tested whether dabigatran attenuates lung injury in a murine model of interstitial lung disease.
Lung injury was induced in 6- to 8-week-old female C57BL/6 mice by a single intratracheal instillation of bleomycin. Dabigatran etexilate was given as supplemented chow beginning on day 1 (early treatment, anti-inflammatory effect) or on day 8 (late treatment, anti-fibrotic effect) following bleomycin instillation. Two and three weeks after bleomycin instillation, mice were euthanized; lung tissue, bronchoalveolar lavage fluid (BALF), and plasma were investigated.
Both early and late treatment with dabigatran etexilate attenuated the development of bleomycin-induced pulmonary fibrosis. Dabigatran etexilate significantly reduced thrombin activity and levels of TGF-β1 and PDGF-AA in BALF while simultaneously decreasing inflammatory cells and protein concentrations. Histological lung inflammation and fibrosis were significantly decreased in dabigatran etexilate-treated mice. Additionally, dabigatran etexilate reduced collagen, CTGF, and α-SMA expression in mice with bleomycin-induced lung fibrosis, whereas it had no effect on basal levels of these proteins.
Inhibition of thrombin using the oral DTI dabigatran etexilate has marked anti-inflammatory and anti-fibrotic effects in a bleomycin model of pulmonary fibrosis. Our data provide preclinical information about the feasibility and efficacy of dabigatran etexilate as a new therapeutic approach for the treatment of interstitial lung diseases.
Colorectal cancer (CRC) is the second leading cause of cancer death in the United States (US). Half of Americans above age 50 are not current with recommended screening; research is needed to assess the impact of interventions designed to increase receipt of CRC screening. The Colorectal Cancer Screening in Primary Care (C-TRIP) study is a theoretically-informed group randomized trial within 32 primary care practices. Baseline median proportion of active patients aged 50 years or older up-to-date with CRC screening among the 32 practices was 50.8% (N=55,746). Men were more likely to be screened than women (52.9% vs. 49.2% respectively). Patients 50–59 years of age were less likely to be up-to-date with screening (45.4%) than those in the 60–69 year and 70–79 years groups (58.5% in both groups). Opportunities exist to increase the proportion of CRC screening received in adults age 50 and older. C-TRIP evaluates the effectiveness of a model for improvement for increasing this proportion.
We assessed whether extra-immunization can serve as a clinical indicator for fragmentation of care.
Using public-use files of the 1999–2003 National Immunization Survey, we classified children 19–35 months of age by their vaccination providers according to degree of fragmentation of care, ordered from lowest with one vaccine provider, to increasing fragmentation with multiple providers in one facility type, to multiple providers in more than one facility type. Extra-immunization was defined conservatively based on the year-specific recommendations of the Advisory Committee on Immunization Practices (ACIP) for immunizations due before 18 months of age. Of note, the 1999–2003 period spanned the transition from oral to inactivated poliovirus vaccine.
The rate for extra-immunization was 9.4% (95% confidence interval [CI] 9.2, 9.7). Of single vaccines, the rate for polio vaccine was highest (5.7%, 95% CI 5.5, 6.0). Extra-immunization was lowest for the 69% of children with only one vaccination provider (6.4%, 95% CI 6.1, 6.7), was higher in children who had more than one vaccination provider with one vaccination facility type (13.9%, 95% CI 13.2, 14.6), and highest with more than one facility type (24.1%, 95% CI 22.5, 25.6). Logistic regression (including race/ethnicity, language, provider type, survey year, and a parent-held immunization record) confirmed that multiple providers (adjusted odds ratio [AOR] = 2.30), multiple facility types (AOR=4.67), Spanish language (AOR=1.29), and race/ethnicity (black AOR=1.16, Hispanic AOR=1.31) were each associated with extra-immunization. Excluding poliovirus vaccine from the analysis, AORs for multiple providers and multiple facility types increased to 3.64 and 8.95, respectively.
Extra-immunization is associated with receiving immunizations from multiple providers and multiple facility types.
To evaluate the effectiveness of an innovative reform in 2000 to the Dental Medicaid program in South Carolina.
Data Sources/Study Setting
South Carolina Medicaid enrollment data and dental services utilization data from 1998, 1999, and 2000.
The study was observational and retrospective in nature. Quarterly data were used in general linear regression models to examine time trends in the percent of Medicaid enrollees ages 21 and younger who received dental services. Trends in the total number of dental procedures provided per Medicaid enrollee were also analyzed, with sub-analyses performed on the four most frequent categories of procedures.
Data Collection/Extraction Methods
Data were provided by the state's Quality Improvement Organization.
From 1998 to 1999, there was a downward trend in the number and percent of Medicaid enrollees ages 21 and younger receiving dental services and in the total number of services provided. This trend was dramatically reversed in 2000.
The January 2000 dental Medicaid reform in South Carolina had a marked impact on Medicaid enrollees' access to dental services.
Medicaid; dental care for children; health care reform; insurance; health; reimbursement; health services accessibility
Defining valid, reliable, defensible, and generalizable standards for the evaluation of learner performance is a key issue in assessing both baseline competence and mastery in medical education. However, prior to setting these standards of performance, the reliability of the scores yielded by a grading tool must be assessed. Accordingly, the purpose of this study was to assess the reliability of scores generated from a set of grading checklists used by non-expert raters during simulations of American Heart Association (AHA) MegaCodes.
The reliability of scores generated from a detailed set of checklists, when used by four non-expert raters, was tested by grading team leader performance in eight MegaCode scenarios. Videos of the scenarios were reviewed and rated by trained faculty facilitators and by a group of non-expert raters. The videos were reviewed “continuously” and “with pauses.” Two content experts served as the reference standard for grading, and four non-expert raters were used to test the reliability of the checklists.
Our results demonstrate that non-expert raters are able to produce reliable grades when using the checklists under consideration, demonstrating excellent intra-rater reliability and agreement with a reference standard. The results also demonstrate that non-expert raters can be trained in the proper use of the checklist in a short amount of time, with no discernible learning curve thereafter. Finally, our results show that a single trained rater can achieve reliable scores of team leader performance during AHA MegaCodes when using our checklist in continuous mode, as measures of agreement in total scoring were very strong (Lin’s Concordance Correlation Coefficient = 0.96; Intraclass Correlation Coefficient = 0.97).
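For reference, Lin's concordance correlation coefficient captures both correlation and agreement in location/scale: rho_c = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2). A minimal sketch follows; the paired checklist totals are illustrative, not the study's ratings.

```python
# Lin's concordance correlation coefficient for paired rater scores.

import numpy as np

def lins_ccc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Illustrative paired checklist totals from two raters:
print(round(lins_ccc([40, 35, 28, 44, 31], [41, 34, 29, 45, 30]), 3))
```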
We have shown that our checklists can yield reliable scores, are appropriate for use by non-expert raters, and are able to be employed during continuous assessment of team leader performance during the review of a simulated MegaCode. This checklist may be more appropriate for use by Advanced Cardiac Life Support (ACLS) instructors during MegaCode assessments than current tools provided by the AHA.
simulation; education; checklist; reliability; ACLS
Host genetic factors responsible for the interindividual differences in naturally occurring antibody responses to the human epidermal growth factor receptor 2 (HER-2) in humans have not been identified. The aim of the present investigation was to determine whether GM and KM allotypes — genetic markers of IgG heavy chains and κ-type light chains, respectively — contribute to these differences. A total of 152 Estonian women with breast cancer were characterized for IgG antibodies to HER-2 and allotyped for several GM and KM markers. IgG3 determinant GM 13 was significantly associated with higher HER-2 IgG levels (median IgG titer 800 vs. 400, p = 0.007). Other GM allotypes, known to be in linkage disequilibrium with GM 13, were also associated with higher anti-HER-2 antibody levels, albeit not as strongly. These results show that GM allotypes are associated with humoral immunity to HER-2, a finding with potentially significant implications for immunotherapy of breast cancer.
breast cancer; HER-2; humoral immune response; genetic regulation; GM allotypes; KM allotypes; IgG antibodies
Treatment of hypertension among hospitalized patients represents an opportunity to improve blood pressure recognition and treatment. To address this issue, we examined patterns of antihypertensive medication prescribing among 5,668 hypertensive inpatients. Outcomes were treatment with any antihypertensive medication and treatment with first-line therapy, defined as an ACE inhibitor, beta blocker, thiazide diuretic, or calcium channel blocker. Logistic regression models adjusting for age, sex, race, length of stay (LOS), service line, and co-morbidity were used for all comparisons. The multivariate-adjusted odds ratios for treatment were higher for men (1.4, p<0.001), older patients (2.5 for age >80 vs. 1.0 for age <40, p<0.001), non-white race (1.2 vs. 1.0 for white race, p<0.004), and generalist service line (1.4 vs. 1.0 for all other services, p<0.001). Multivariate-adjusted odds ratios for receiving first-line agents were higher for older patients and the generalist service line. Among surgical patients, receipt of medical consultation was only marginally associated with higher odds of antihypertensive or first-line treatment after adjustment for relevant clinical variables. Demographic factors and service line appear to play a major role in determining the likelihood of inpatient hypertension treatment. Understanding and addressing these disparities has the potential to incrementally improve hypertension control rates in the population.
Forty years of follow-up data from the Charleston Heart Study (CHS) were used to examine the risk for early mortality associated with marital separation or divorce in a sample of over 1,300 adults assessed on several occasions between 1960 and 2000. Participants who were separated or divorced at study inception evidenced significantly higher rates of early mortality, and these results held after adjusting for baseline health status and other demographic variables. Being separated or divorced throughout the CHS follow-up window was one of the strongest predictors of early mortality. However, the excess mortality risk associated with remaining separated/divorced was completely eliminated when participants were re-classified as having ever experienced a marital separation or divorce. These findings suggest a key determinant of early death is the amount of time people live as separated or divorced and/or dimensions of personality that predict divorce as well as a decreased likelihood of future remarriage.
Divorce; marital separation; mortality; survival analysis; adults; longitudinal analysis
Objective. To determine if adherence as measured by pill count would show a significant association with serum-based measures of adherence.
Methods. Data were obtained from a prenatal vitamin D supplementation trial in which subjects were stratified by race and randomized into three dosing groups: 400 (control), 2000, or 4000 IU vitamin D3/day. Adherence was measured via counts of pills remaining and compared against a novel definition of adherence based on serum 25-hydroxyvitamin D (25(OH)D) levels (the absolute change in 25(OH)D over the study period and the subject's steady-state variation in 25(OH)D levels). A multivariate logistic regression model examined whether mean percent adherence by pill count was significantly associated with the adherence measure based on serum metabolite levels.
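As a structural illustration of this comparison (not the trial's code, data, or covariate set), a logistic model with a binary serum-based adherence flag as the outcome and percent pill-count adherence as the predictor might look like the sketch below; the column names and the randomly generated placeholder values are assumptions.

```python
# Hedged sketch: logistic regression of serum-defined adherence on
# pill-count adherence. Placeholder data only; no real association
# is encoded here.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "pct_pill_count": rng.uniform(40, 100, n),   # % of pills taken (assumed)
    "serum_adherent": rng.integers(0, 2, n),     # placeholder binary flag
})
fit = smf.logit("serum_adherent ~ pct_pill_count", df).fit(disp=0)
print(fit.summary())
```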
Results. Subjects' mean percentage of adherence by pill count was not a significant predictor of adherence by serum metabolite levels. This finding was robust across a series of sensitivity analyses.
Conclusions. Based on our novel definition of adherence, pill count was not a reliable predictor of adherence to protocol, which calls into question how adherence is measured in clinical research. Our findings have implications for determining the efficacy of medications under study and offer an alternative approach to measuring adherence for long half-life supplements/medications.