Background And Objectives
To understand the association between shared decision-making (SDM) and health care expenditures and use among children with special health care needs (CSHCN).
We identified CSHCN <18 years in the 2002–2006 Medical Expenditure Panel Survey by using the CSHCN Screener. Outcomes included health care expenditures (total, out-of-pocket, office-based, inpatient, emergency department [ED], and prescription) and utilization (hospitalization, ED visit, office visit, and prescription rates). The main exposure was the pattern of SDM over the 2 study years (increasing, decreasing, or unchanged high or low). We assessed the impact of these patterns on the change in expenditures and utilization over the 2 study years.
Among 2858 subjects representing 12 million CSHCN, 15.9% had increasing, 15.2% decreasing, 51.9% unchanged high, and 17.0% unchanged low SDM. At baseline, mean per child total expenditures were $2131. Over the 2 study years, increasing SDM was associated with a decrease of $339 (95% confidence interval: $21, $660) in total health care costs. Rates of hospitalization and ED visits declined by 4.0 (0.1, 7.9) and 11.3 (4.3, 18.3) per 100 CSHCN, and office visits by 1.2 (0.3, 2.0) per child with increasing SDM. Relative to decreasing SDM, increasing SDM was associated with significantly lower total and out-of-pocket costs, and fewer office visits.
We found that increasing SDM was associated with decreased utilization and expenditures for CSHCN. Prospective study is warranted to confirm whether fostering SDM reduces the costs of caring for CSHCN for the health system and families.
children with special health care needs; communication; decision-making; health care expenditures
To characterize the effect of corticosteroid exposure on clinical outcomes in children hospitalized with new-onset Henoch-Schönlein purpura (HSP).
PATIENTS AND METHODS
We conducted a retrospective cohort study of children discharged with an International Classification of Diseases, Clinical Modification code of HSP between 2000 and 2007 by using inpatient administrative data from 36 tertiary care children’s hospitals. We used stratified Cox proportional hazards regression models to estimate the relative effect of time-varying corticosteroid exposure on the risks of clinical outcomes that occur during hospitalization for acute HSP.
During the 8-year study period, there were 1895 hospitalizations for new-onset HSP. After multivariable regression modeling adjustment, early corticosteroid exposure significantly reduced the hazard ratios for abdominal surgery (0.39 [95% confidence interval (CI): 0.17–0.91]), endoscopy (0.27 [95% CI: 0.13–0.55]), and abdominal imaging (0.50 [95% CI: 0.29–0.88]) during hospitalization.
In the hospital setting, early corticosteroid exposure was associated with benefits for several clinically relevant HSP outcomes, specifically those related to the gastrointestinal manifestations of the disease.
cohort; corticosteroids; adolescents; epidemiology
The influence of community context on the effectiveness of evidence-based maternal and child home visitation programs following implementation is poorly understood. This study compared prenatal smoking cessation between home visitation program recipients and local-area comparison women across 24 implementation sites within one state, while also estimating the independent effect of community smoking norms on smoking cessation behavior.
Retrospective cohort design using propensity score matching of Nurse-Family Partnership (NFP) clients and local-area matched comparison women who smoked cigarettes in the first trimester of pregnancy. Birth certificate data were used to classify smoking status. The main outcome measure was smoking cessation in the third trimester of pregnancy. Multivariable logistic regression analysis examined, over two time periods, the associations of NFP exposure and of baseline county prenatal smoking rate with prenatal smoking cessation.
The association of NFP participation and prenatal smoking cessation was stronger in a later implementation period (35.5% for NFP clients vs. 27.5% for comparison women, p < 0.001) than in an earlier implementation period (28.4% vs. 25.8%, p = 0.114). Cessation was also negatively associated with county prenatal smoking rate, controlling for NFP program effect (OR = 0.84 per 5-percentage-point change in county smoking rate, p = 0.002).
Following a statewide implementation, program recipients of NFP demonstrated increased smoking cessation compared to comparison women, with a stronger program effect in later years. The significant association of county smoking rate with cessation suggests that community behavioral norms may present a challenge for evidence-based programs as models are translated into diverse communities.
Home visitation; Community context; Implementation; Evaluation; Smoking cessation
We developed an electronic medical record (EMR)-based HPV vaccine decision support intervention targeting clinicians (immunization alerts, education, and feedback) and families (phone reminders and referral to an educational website). Through telephone surveys completed by 162 parents of adolescent girls, we assessed the acceptability of the family-focused intervention and its effect on information-seeking behavior, communication, and HPV vaccine decision-making. The intervention was acceptable to parents, and 46% remembered receiving the reminder call. Parents reported that the call prompted them to seek out information regarding the HPV vaccine, discuss the vaccine with friends and family, and reach a decision. Parents whose adolescent girls attended practices receiving the clinician-focused intervention were more likely to report that their clinician discussed the HPV vaccine at preventive visits. The results of this study demonstrate the acceptability and potential impact on clinical care of a comprehensive decision support system directed at both clinicians and families.
Transplant centers are reluctant to perform heart transplantation in patients with hepatitis C virus (HCV) infection because augmented immunosuppression could potentially increase mortality. However, there have been few studies examining whether HCV infection reduces survival after heart transplantation.
We used data from the U.S. Scientific Registry of Transplant Recipients to perform a multicenter cohort study evaluating the association between recipient pre-transplant HCV status and survival after heart transplantation. Adults undergoing heart transplantation between January 1, 1993 and December 31, 2007 were eligible for inclusion.
Among 20,687 heart transplant recipients (443 HCV-positive and 20,244 HCV-negative) at 103 institutions followed for a mean of 5.6 years, mortality was higher among HCV-positive than HCV-negative recipients (177 [40%] vs 6,367 [31.5%]; p = 0.0001). After matching on propensity score, hospital and gender, the hazard ratio (HR) of death for HCV-positive heart transplant recipients was 1.32 (95% confidence interval [CI] 1.08 to 1.61). Mortality rates were higher among HCV-positive heart transplant recipients at 1 year (9.4% vs 8.2%), 5 years (26.3% vs 22.9%), 10 years (53.1% vs 43.4%) and 15 years (74.8% vs 62.3%) post-transplantation. HRs did not vary by gender or overall number of heart transplantations performed at the center.
Pre-transplant HCV positivity is associated with decreased survival after heart transplantation.
heart transplantation; HCV; survival; mortality; outcome; cohort study
Antiviral therapy reduces symptom duration and hospitalization risk among previously healthy and chronically ill children infected with seasonal influenza. The impact of oseltamivir on outcomes of hospitalized children is unknown. The primary objective of this study was to determine whether oseltamivir improves outcomes of critically ill children hospitalized with influenza.
We performed a retrospective cohort study of children admitted to a pediatric intensive care unit with influenza during 6 consecutive winter seasons (2001-2007). We used the Pediatric Health Information System (PHIS) database, which contains resource utilization data from 41 children's hospitals. We matched oseltamivir-treated and oseltamivir-non-treated patients by the probability of oseltamivir exposure using a propensity score we derived from patient and hospital characteristics. We subsequently compared the outcomes of critically ill children treated with oseltamivir within 24 hours of admission with propensity score matched children who were not treated with oseltamivir.
We identified 1,257 children with influenza infection, 264 of whom were treated with oseltamivir within 24 hours of hospital admission. Multivariable analysis of 252 oseltamivir-treated patients and 252 propensity score-matched untreated patients demonstrated that patients treated with oseltamivir experienced an 18% reduction in total hospital days (time ratio: 0.82, p=0.02), whereas intensive care unit stay, in-hospital mortality, and readmission rates did not differ.
For critically ill children infected with seasonal influenza, treatment with oseltamivir within 24 hours of hospitalization was associated with a shorter duration of hospital stay. Additional study is needed to determine the impact of delayed initiation of oseltamivir on clinical outcomes.
influenza; child; treatment; epidemiology; oseltamivir
The Institute of Medicine has prioritized shared decision making (SDM), yet little is known about the impact of SDM over time on behavioral outcomes for children. This study examined the longitudinal association of SDM with behavioral impairment among children with special health care needs (CSHCN).
CSHCN aged 5-17 years in the 2002-2006 Medical Expenditure Panel Survey were followed for 2 years. The validated Columbia Impairment Scale measured impairment. SDM was measured with 7 items addressing the 4 components of SDM. The main exposures were (1) the mean level of SDM across the 2 study years and (2) the change in SDM over the 2 years. Using linear regression, we measured the association of SDM and behavioral impairment.
Among 2,454 subjects representing 10.2 million CSHCN, SDM increased among 37% of the population, decreased among 36%, and remained unchanged among 27%. For CSHCN impaired at baseline, the change in SDM was significant: each 1-point increase in SDM over time was associated with a 2-point decrease in impairment (95% CI: 0.5, 3.4), whereas the mean level of SDM was not associated with impairment. In contrast, among those below the impairment threshold, the mean level of SDM was significant: each 1-point increase in the mean level of SDM was associated with a 1.1-point decrease in impairment (0.4, 1.7), but the change in SDM was not associated with impairment.
Although the change in SDM may be more important for children with behavioral impairment and the mean level over time for those below the impairment threshold, results suggest that both the change in SDM and the mean level may impact behavioral health for CSHCN.
Children with Special Health Care Needs; Communication; Decision-Making
Adherence to hepatitis C virus (HCV) therapy with pegylated interferon and ribavirin has been incompletely examined.
To evaluate the relationship between adherence to HCV therapy and early and sustained virologic response, assess changes in adherence over time, and examine risk factors for non-adherence.
Retrospective cohort study.
National Veterans Affairs Hepatitis C Clinical Case Registry.
5,706 HCV-infected patients (genotypes 1, 2, 3, or 4) with at least one prescription for pegylated interferon and ribavirin between 2003 and 2006 and HCV RNA results prior to and after treatment initiation.
Adherence was calculated over 12-week intervals using pharmacy refill data. Endpoints included early virologic response (decrease of ≥2 log10 HCV RNA at 12 weeks) and sustained virologic response (undetectable HCV RNA 24 weeks after end of treatment).
Early virologic response increased with higher levels of ribavirin adherence over the initial 12 weeks of therapy (genotype 1, 4: 25/68 [37%] with the lowest category [≤40% adherence] versus 1,367/2,187 [63%] with the highest category [91–100% adherence], p<0.001; genotype 2, 3: 12/18 [67%] with ≤40% adherence versus 651/713 [91%] with 91–100% adherence, p<0.001). Among genotype 1 and 4 patients, sustained response increased with higher ribavirin adherence over the second, third, and fourth 12-week intervals. Results were similar for interferon adherence. Mean adherence to interferon and ribavirin decreased 3.4% and 6.6% per 12-week interval, respectively (test for trend, p<0.001 for each drug). Patients prescribed growth factors or thyroid medications during treatment had higher mean antiviral adherence.
Observational study without standardized timing for outcomes measurements.
Early and sustained virologic responses increased with higher levels of adherence to interferon and ribavirin. Adherence to both antivirals declined over time, but more so for ribavirin.
Adherence; hepatitis C virus; HCV; antiviral therapy
To identify patterns of shared decision-making (SDM) among a nationally representative sample of US children with attention-deficit/hyperactivity disorder (ADHD) or asthma and determine if demographics, health status, or access to care are associated with SDM.
PATIENTS AND METHODS
We performed a cross-sectional study of the 2002–2006 Medical Expenditure Panel Survey, which represents 2 million children with ADHD and 4 million children with asthma. The outcome, high SDM, was defined by using latent class models based on 7 Medical Expenditure Panel Survey items addressing aspects of SDM. We entered factors potentially associated with SDM into logistic regression models with high SDM as the outcome. Marginal standardization then described the standardized proportion of children’s households with high SDM for each factor.
For both ADHD and asthma, 65% of children’s households had high SDM. Those who reported poor general health for their children were 13% less likely to have high SDM for ADHD (64 vs 77%) and 8% less likely for asthma (62 vs 70%) when adjusting for other factors. Results for behavioral impairment were similar. Respondent demographic characteristics were not associated with SDM. Those with difficulty contacting their clinician by telephone were 26% (ADHD: 55 vs 81%) and 29% (asthma: 48 vs 77%) less likely to have high SDM than those without difficulty.
These findings indicate that households of children who report greater impairment or difficulty contacting their clinician by telephone are less likely to fully participate in SDM. Future research should examine how strategies to foster ongoing communication between families and clinicians affect SDM.
ADHD; asthma; communication; decision-making; telephone care
Primary care offices are critical access points for obesity treatment, but evidence for approaches that can be implemented within these settings is limited. The Think Health! (¡Vive Saludable!) Study was designed to assess the feasibility and effectiveness of a behavioral weight loss program, adapted from the Diabetes Prevention Program, for implementation in routine primary care. Recruitment of clinical sites targeted primary care practices serving African American and Hispanic adults. The randomized design compares (a) a moderate-intensity treatment consisting of primary care provider counseling plus additional counseling by an auxiliary staff member (i.e., lifestyle coach) with (b) a low-intensity, control treatment involving primary care provider counseling only. Treatment and follow-up duration are 1 to 2 years. The primary outcome is weight change from baseline at 1 and 2 years post-randomization. Between November 2006 and January 2008, 14 primary care providers (13 physicians; 1 physician assistant) were recruited at five clinical sites. Patients were recruited between October 2007 and November 2008. A total of 412 patients were pre-screened, of whom 284 (68.9%) had baseline assessments and 261 were randomized, with the following characteristics: 65% African American; 16% Hispanic American; 84% female; mean (SD) age of 47.2 (11.7) years; mean (SD) BMI of 37.2 (6.4) kg/m2; 43.7% with high blood pressure; and 18.4% with diabetes. This study will provide insights into the potential utility of moderate-intensity lifestyle counseling delivered by motivated primary care clinicians and their staff. The study will have particular relevance to African Americans and women.
Obesity; Counseling; Female; Ethnic Groups; Office Visits; Patient Education
To characterize patterns of electronic medical record (EMR) use at pediatric primary care acute visits.
Direct observational study of 529 acute visits with 27 experienced pediatric clinician users.
For each 20 s interval and at each stage of the visit according to the Davis Observation Code, we recorded whether the physician was communicating with the family only, using the computer while communicating, or using the computer without communication. Regression models assessed the impact of clinician, patient and visit characteristics on overall visit length, time spent interacting with families, and time spent using the computer while interacting.
The mean overall visit length was 11:30 (min:sec) with 9:06 spent in the exam room. Clinicians used the EMR during 27% of exam room time and at all stages of the visit (interacting, chatting, and building rapport; history taking; formulation of the diagnosis and treatment plan; and discussing prevention) except the physical exam. Communication with the family accompanied 70% of EMR use. In regression models, computer documentation outside the exam room was associated with visits that were 11% longer (p=0.001), and female clinicians spent more time using the computer while communicating (p=0.003).
The 12 study practices shared one EMR.
Among pediatric clinicians with EMR experience, conversation accompanies most EMR use. Our results suggest that efforts to improve EMR usability and clinician EMR training should focus on use in the context of doctor–patient communication. Further study of the impact of documentation inside versus outside the exam room on productivity is warranted.
This cross-sectional study used Geographic Information System methods to compare sociodemographic and clinical characteristics of children enrolled and not enrolled in a primary care network to determine the suitability of the network for estimating population-based disease rates. We validated the network surveillance system by comparing invasive pneumococcal disease rates between network and non-network children using population-based surveillance data. Among the study population of 130,300 children, network children were more likely to be female, Black, non-Hispanic, younger, and to receive Medicaid. These differences varied across neighborhoods; however, adjusting for neighborhood characteristics did not significantly change the observed differences. Rates of invasive pneumococcal disease were not significantly different between network and non-network children. Significant demographic and clinical differences existed between network and non-network children and varied over small areas. Observed population rates of an infectious disease did not differ significantly, suggesting that the network can potentially provide valid disease estimates for the community population.
The purpose of this study was to test the discriminant validity of ISHLT PGD grades in terms of lung injury biomarker profiles and survival.
The study samples consisted of a multicenter prospective cohort study for the biomarker analysis and a 450-patient cohort study for the mortality analyses. PGD was defined according to the ISHLT consensus at 24, 48, and 72 hours after transplantation. We compared the changes in plasma markers of acute lung injury between PGD grades using longitudinal data models. To test predictive validity, we compared differences in 30-day mortality and long-term survival according to PGD grade.
PGD grade 3 demonstrated greater differences in plasma ICAM-1, protein C, and PAI-1 levels than did PGD grades 0–2 at 24, 48, and 72 hours after lung transplantation (p<0.05 for each). Grade 3 had the highest 30-day (test for trend p<0.001) and overall mortality (log rank p<0.001), with PGD grades 1 and 2 demonstrating intermediate risks of mortality. The ability to discriminate both 30-day and overall mortality improved as the time of grading moved away from the time of transplantation (test for trend p<0.001).
The ISHLT grading system has good discriminant validity, based on plasma markers of lung injury and mortality. Grade 3 PGD was associated with the most severely altered plasma biomarker profile and the worst outcomes, regardless of the time point of grading. PGD grade at 48 and 72 hours discriminated mortality better than PGD grade at 24 hours.
Lung Transplantation; complications; Acute lung injury; Primary graft dysfunction; reperfusion injury
As cancer treatments evolve, it is important to re-evaluate their impact on lymphedema risk in breast cancer survivors.
A population-based random sample of 631 women from metropolitan Philadelphia, PA, diagnosed with incident breast cancer in 1999–2001, was followed for five years. Risk factor information was obtained by questionnaire and medical record review. Lymphedema was assessed with a validated questionnaire. Using Cox proportional hazards models, we estimated relative incidence rates (hazard ratios [HRs]) of lymphedema with standard adjusted multivariable analyses ignoring interactions, followed by models including clinically plausible treatment interactions.
Compared to no lymph node surgery, adjusted HRs for lymphedema were increased following axillary lymph node dissection (ALND) (HR 2.61; 95% CI 1.77, 3.84) but not sentinel lymph node biopsy (SLNB) (HR 1.04; 95% CI 0.58, 1.88). Risk was not increased following irradiation: breast/chest wall only (HR 1.18; 95% CI 0.80, 1.73), breast/chest wall plus supraclavicular field (+/− full axilla) (HR 0.86; 95% CI 0.48, 1.54). Eighty-one percent of chemotherapy was anthracycline-based. The HR for anthracycline chemotherapy vs no chemotherapy was 1.46 (95% CI 1.04, 2.04), persisting after stratifying on stage at diagnosis or number of positive nodes. Treatment combinations involving ALND or chemotherapy resulted in approximately 4- to 5-fold increases in HRs for lymphedema (e.g., HR 4.16; 95% CI 1.32, 12.45 for SLNB/chemotherapy/no radiation) compared to no treatment.
With standard multivariable analyses, ALND and chemotherapy increased lymphedema risk while radiation therapy and SLNB did not. However, risk varied by combinations of exposures.
Treatment patterns should be considered when counseling and monitoring patients for lymphedema.
lymphedema; risk factors; prospective study; breast cancer; chemotherapy; radiation therapy; lymph node surgery
Peroxiredoxin 6 (PRDX6) is involved in redox regulation of the cell and is thought to be protective against oxidant injury. Little is known about genetic variation within the PRDX6 gene and its association with acute lung injury (ALI). In this study we sequenced the PRDX6 gene to uncover common variants, and tested association with ALI following major trauma.
To examine the extent of variation in the PRDX6 gene, we performed direct sequencing of the 5' UTR, exons, introns and the 3' UTR in 25 African American cases and controls and 23 European American cases and controls (selected from a cohort study of major trauma), which uncovered 80 SNPs. In silico modeling was performed using Patrocles and the Transcriptional Element Search System (TESS). Thirty-seven novel and tagging SNPs were tested for association with ALI compared with ICU at-risk controls who did not develop ALI in a cohort study of 259 African American and 254 European American subjects who had been admitted to the ICU with major trauma.
Resequencing of critically ill subjects demonstrated 43 novel SNPs not previously reported. Coding regions demonstrated no detectable variation, indicating conservation of the protein. Block haplotype analyses revealed that recombination rates within the gene appear low in both European Americans and African Americans. Several novel SNPs appeared to have potential functional consequences based on in silico modeling. Chi-square analysis of ALI incidence and genotype showed no significant association between the SNPs in this study and ALI. Haplotype analysis did not reveal any association beyond the single-SNP analyses.
This study revealed novel SNPs within the PRDX6 gene and its 5' and 3' flanking regions via direct sequencing. There was no association found between these SNPs and ALI, possibly because of limited sample size, which restricted detection to relative risks of 1.93 or greater. Future studies may focus on the role of PRDX6 genetic variation in other diseases where oxidative stress is suspected.
Peroxiredoxin; Acute Lung Injury; Oxidant Stress; Genetic Polymorphisms
Among unanswered questions is whether menopausal use of estrogen therapy (ET) or estrogen-plus-progestin therapy (CHT) increases the risk of developing fatal breast cancer, i.e., developing and dying of breast cancer. Using a population-based case-control design, we estimated incidence rate ratios of fatal breast cancer in postmenopausal hormone therapy (HT) users compared to non-users by type, duration, and recency of HT use.
HT use prior to breast cancer diagnosis in 278 women who died of breast cancer within 6 years of diagnosis (cases) was compared with use in 2,224 controls never diagnosed with breast cancer using conditional logistic regression. Measures taken to address potential bias and confounding inherent in case-control studies included collecting and adjusting for detailed data on demographic and other factors potentially associated both with hormone therapy use and breast cancer.
Fifty-six percent of cases and 68% of controls reported HT use. Among current 3+ year HT users, odds ratios and 95% confidence intervals for death were 0.83 (0.50, 1.38) and 0.69 (0.44, 1.09), respectively, for exclusive use of CHT or of ET, and were 0.94 (0.59, 1.48) and 0.70 (0.45, 1.07) for any use of CHT or of ET regardless of other hormone use.
Point estimates suggest no increased risk of fatal breast cancer with HT use, although 50% increases in risk in longer-term current CHT users cannot be ruled out.
Hormone replacement therapy; estrogen replacement therapy; estrogen progestin combination therapy; breast neoplasms; death; menopause; case-control
Mammography has been established as the primary imaging screening method for breast cancer; however, the sensitivity of mammography is limited, especially in women with dense breast tissue. Given the limitations of mammography, interest has developed in alternative screening techniques. This interest has led to numerous studies reporting mammographically occult breast cancers detected on magnetic resonance imaging (MRI) or ultrasound. In addition, digital mammography was shown to be more sensitive than film mammography in selected populations. Our goal was to prospectively compare cancer detection of digital mammography (DM), whole-breast ultrasound (WBUS), and contrast-enhanced MRI in a high-risk screening population previously screened negative by film-screen mammography (FSM).
During a 2-year period, 609 asymptomatic high-risk women with nonactionable FSM examinations presented for a prospective multimodality screening consisting of DM, WBUS, and MRI. The FSM examinations were reinterpreted by study radiologists. Patients had benign or no suspicious findings on clinical examination. The cancer yield by modality was evaluated.
Twenty cancers were diagnosed in 18 patients (nine ductal carcinomas in situ and 11 invasive breast cancers). The overall cancer yield on a per-patient basis was 3.0% (18 of 609 patients). The cancer yield by modality was 1.0% for FSM (six of 597 women), 1.2% for DM (seven of 569 women), 0.53% for WBUS (three of 567 women), and 2.1% for MRI (12 of 571 women). Of the 20 cancers detected, 13 were detected on only one imaging modality (FSM, n = 1; DM, n = 3; WBUS, n = 1; MRI, n = 8).
The addition of MRI to mammography in the high-risk group has the greatest potential to detect additional mammographically occult cancers. The incremental cancer yield of WBUS and DM is much less.
To describe variation in the inpatient therapy and evaluation of children with Henoch-Schönlein purpura (HSP) admitted to children’s hospitals across the United States.
We conducted a retrospective cohort study of children discharged with a diagnosis of HSP between 2000 and 2007 using inpatient administrative data from 36 children’s hospitals. We examined variation among hospitals in the use of medications, diagnostic tests, and intensive care services using multivariate mixed effects logistic regression models.
During the initial HSP hospitalization (N=1,988), corticosteroids were the most common medication (56% of cases), followed by opioids (36%), NSAIDs (35%), and anti-hypertensives (11%). After adjustment for patient characteristics, hospitals varied significantly in their use of corticosteroids, opioids, and NSAIDs; the use of diagnostic abdominal imaging, endoscopy, laboratory testing, and renal biopsy; and the utilization of intensive care services. By contrast, hospitals did not differ significantly regarding administration of anti-hypertensives or performance of skin biopsy.
The significant variation identified may contribute to varying HSP clinical outcomes between hospitals, warrants further investigation, and represents a potentially important opportunity to improve quality of care.
opioids; corticosteroids; anti-hypertensives; non-steroidal anti-inflammatory drugs; adolescents; epidemiology
Before using computerized databases to study hepatitis C virus (HCV) epidemiology, the validity of the diagnosis must be assessed. We determined the accuracy of HCV diagnostic codes within The Health Improvement Network (THIN), an electronic database containing medical record data from general medical practices in the United Kingdom.
Patients with initial diagnostic codes for HCV infection and nonspecific viral hepatitis between 2000 and 2007 in the THIN database were identified. Questionnaires were mailed to general practitioners caring for a random sample of 150 of these patients (75 with an HCV code; 75 with a nonspecific viral hepatitis code) to collect information on HCV and other hepatitis diagnoses. We determined the positive predictive value of the database's HCV diagnostic codes and its ability to identify the date of a new HCV diagnosis.
Usable surveys were returned for 146 (97%) patients. Among 74 patients with an HCV code and questionnaire data, HCV was confirmed in 64 (positive predictive value, 86%; 95% CI, 77%–93%). In 40 (63%), the first recorded diagnosis in THIN was within 30 days of the date reported in the questionnaire (median difference, 11 days; interquartile range, 0–362 days). Among 72 patients with a nonspecific viral hepatitis code, 16 (22%) had HCV, but manual review of the database's electronic records correctly identified 12 of 16 (75%).
In THIN, the HCV-specific diagnostic codes are highly predictive of HCV infection. After manual review, few patients with a nonspecific viral hepatitis code were misclassified as having HCV infection.
epidemiology; hepatitis C virus; HCV; validity; database
Warfarin is widely used to prevent stroke and venous thromboembolism despite its narrow therapeutic window. Warfarin nonadherence is a substantial problem, but risk factors have not been well elucidated.
A prospective cohort study of adults initiating warfarin at two anticoagulation clinics (University and VA-affiliated) was performed to determine factors affecting nonadherence to warfarin. Nonadherence, defined by failure to record a correct pill bottle opening each day, was measured daily via electronic medication event monitoring systems (MEMS) caps. A multivariable explanatory model using logistic regression for longitudinal data was used to identify risk factors for nonadherence.
One hundred eleven subjects were followed for a median of 137 days. Warfarin nonadherence was common (4787 of 22425, or 21%, of patient-days observed). Factors independently associated with higher odds of nonadherence included education beyond high school (odds ratio [OR] 1.8; 95% CI, 1.2–2.7), lower Short Form (SF)-36 mental component score (OR 1.4; 1.1–1.6, for each 10-point decrease), and impaired cognition (≤19 points) on the Cognitive Capacity Screening Examination (CCSE) (OR 2.9; 1.7–4.8). Compared with currently employed subjects, unemployed (OR 0.6; 0.3–1.2) and retired (OR 0.5; 0.3–0.8) subjects had somewhat better adherence; disabled subjects over age 55 had worse adherence (OR 1.8; 1.1–3.1) than younger disabled subjects (OR 0.8; 0.4–1.5).
Poor adherence to warfarin is common, and its risk factors relate to education level, employment status, mental health functioning, and cognitive impairment. Within the carefully controlled anticoagulation clinic setting, such patient-specific factors may form the basis of future interventions to improve adherence.
medication adherence; MEMS; warfarin; anticoagulation
Warfarin is an anticoagulant effective in preventing stroke, but it has a narrow therapeutic range requiring optimal adherence to achieve the most favorable effects.
The goal of this study was to examine specific patient factors that might help explain warfarin non-adherence at outpatient anticoagulation clinics.
In a prospective cohort study of 156 adults, we utilized logistic regression analyses to examine the relationship between the five Treatment Prognostics scales from the Millon Behavioral Medicine Diagnostic (MBMD), as well as three additional MBMD scales (Depression, Future Pessimism, and Social Isolation), and daily warfarin non-adherence assessed using electronic medication event monitoring systems caps over a median of 139 days.
Four of the five Treatment Prognostic scales and greater social isolation were associated with warfarin non-adherence. When controlling for pertinent demographic and medical variables, the Information Discomfort scale remained significantly associated with warfarin non-adherence over time.
Although several factors were related to warfarin non-adherence, patients reporting a lack of receptivity to details regarding their medical illness seemed most at risk for warfarin non-adherence. This information might aid in the development of interventions to enhance warfarin adherence and perhaps reduce adverse medical events.
Warfarin adherence; Patient factors; MBMD-MEMS
We sought to develop a simple point score that would accurately capture the risk of hospital death for patients with acute lung injury (ALI).
This is a secondary analysis of data from two randomized trials. Baseline clinical variables collected within 24 hours of enrollment were modeled as predictors of hospital mortality using logistic regression and bootstrap resampling to arrive at a parsimonious model. We constructed a point score based on regression coefficients.
Medical centers participating in the Acute Respiratory Distress Syndrome Clinical Trials network (ARDSnet).
Model development: 414 patients with non-traumatic ALI participating in the low tidal volume arm of the ARDSnet ARMA study. Model validation: 459 patients participating in the ARDSnet ALVEOLI study.
Measurements and Main Results
Variables comprising the prognostic model were: hematocrit <26% (1 point), bilirubin ≥2 mg/dL (1 point), fluid balance >2.5 L positive (1 point), and age (1 point for 40–64 years, 2 points for ≥65 years). Predicted mortality (95% confidence interval) for 0, 1, 2, 3, and 4+ point totals was 8% (5–14%), 17% (12–23%), 31% (26–37%), 51% (43–58%), and 70% (58–80%), respectively. There was excellent agreement between predicted and observed mortality in the validation cohort; observed mortality for 0, 1, 2, 3, and 4+ point totals was 12%, 16%, 28%, 47%, and 67%, respectively. Compared with the APACHE III score, the area under the receiver operating characteristic curve for the point score was greater in the development cohort (0.72 vs 0.67, p = 0.09) and lower in the validation cohort (0.68 vs 0.75, p = 0.03).
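The scoring rule above can be sketched as a small function. This is an illustrative reconstruction from the thresholds reported in the abstract, not the authors' code; the predicted-mortality lookup uses the development-cohort point estimates, with 4 standing in for "4+".

```python
def ali_point_score(hematocrit_pct, bilirubin_mg_dl, fluid_balance_l, age_years):
    """Four-variable ALI prognostic point score (thresholds from the abstract)."""
    score = 0
    if hematocrit_pct < 26:
        score += 1
    if bilirubin_mg_dl >= 2:
        score += 1
    if fluid_balance_l > 2.5:  # net positive fluid balance, in liters
        score += 1
    if 40 <= age_years <= 64:
        score += 1
    elif age_years >= 65:
        score += 2
    return score

# Predicted hospital mortality by point total (development cohort estimates)
PREDICTED_MORTALITY = {0: 0.08, 1: 0.17, 2: 0.31, 3: 0.51, 4: 0.70}  # 4 = "4+"

# Example: Hct 24%, bilirubin 2.5 mg/dL, +3.0 L fluid balance, age 70
score = min(ali_point_score(24, 2.5, 3.0, 70), 4)  # raw score 5, binned as 4+
print(score, PREDICTED_MORTALITY[score])  # -> 4 0.7
```

The maximum raw score is 5 (all three binary criteria plus age ≥65), so scores are binned into the "4+" category before the mortality lookup.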
Mortality in patients with ALI can be predicted with good calibration using an index of four readily available clinical variables. This index may help inform prognostic discussions, but validation in populations outside of clinical trials is necessary before widespread use.
Acute respiratory distress syndrome; acute lung injury; Respiratory Distress Syndrome, Adult; Human ARDS; Statistical Model; logistic models; mortality determinants; Mortality, In-Hospital; Acute Physiology and Chronic Health Evaluation; APACHE III; Bayesian Prediction; Prognosis
The objective of this study was to compare, in a national cohort of children with autism spectrum disorder (ASD), the concurrent use of ≥3 psychotropic medications between children in foster care and children with disabilities who receive Supplemental Security Income, and to describe variation among states in the use of these medications by children in foster care.
We studied the concurrent use of ≥3 classes of psychotropic medications, identified from the 2001 Medicaid claims of 43 406 children aged 3 to 18 years with ≥1 annual claim for ASD. Medicaid enrollment as a child in foster care was compared with enrollment as a child with disabilities. Multilevel logistic regression, clustered at the state level and controlling for demographics and comorbidities, yielded standardized (adjusted) estimates of concurrent use of ≥3 medications and estimated variation in medication use within states that exceeded 1 and 2 SDs from the average across states.
Among children in foster care, 20.8% used ≥3 classes of medication concurrently, compared with 10.1% of children classified as having a disability. Differences grew in relation to overall medication use within a state: for every 5% increase in concurrent use of ≥3 medication classes by a state's population with disabilities, such use by the state's foster care population increased by 8.3%. Forty-three percent (22) of states were >1 SD from the adjusted mean proportion of children using ≥3 medications concurrently, and 14% (7) exceeded 2 SDs.
Among children with ASD, children in foster care were more likely to use ≥3 medications concurrently than children with disabilities. State-level differences underscore policy or programmatic differences that might affect the receipt of medications in this population.
autism; foster care; psychotropic drugs; Medicaid
HIV-monoinfected patients may be at risk for significant liver fibrosis, but its prevalence and determinants in these patients are unknown. Since HIV-monoinfected patients do not routinely undergo liver biopsy, we evaluated the prevalence and risk factors of significant hepatic fibrosis in this group using the aspartate aminotransferase (AST)-to-platelet ratio index (APRI).
We conducted a cross-sectional study among HIV-infected patients negative for hepatitis B surface antigen and hepatitis C antibody in the Penn Center for AIDS Research Adult/Adolescent Database. Clinical and laboratory data were collected from the database at enrollment. Hypothesized determinants of significant fibrosis were modifiable risk factors associated with liver disease progression, hepatic fibrosis, or hepatotoxicity, including immune dysfunction (i.e., CD4 T lymphocyte count <200 cells/mm3, HIV viremia), diseases associated with hepatic steatosis (e.g., obesity, diabetes mellitus), and use of antiretroviral therapy. The primary outcome was an APRI score >1.5, which suggests significant hepatic fibrosis. Multivariable logistic regression identified independent risk factors for significant fibrosis by APRI.
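The abstract does not restate the APRI formula; as conventionally defined in the literature, it is the AST level (expressed as a multiple of its upper limit of normal) divided by the platelet count, multiplied by 100. A minimal sketch follows, assuming an AST upper limit of normal of 40 IU/L, a common but not universal choice:

```python
def apri(ast_iu_l, platelets_10e9_l, ast_uln_iu_l=40.0):
    """AST-to-platelet ratio index; a score >1.5 suggests significant fibrosis."""
    return (ast_iu_l / ast_uln_iu_l) / platelets_10e9_l * 100

# Example: AST 80 IU/L (2x ULN) with platelets 120 x 10^9/L
print(apri(80, 120))  # -> 1.666..., above the 1.5 threshold used as the outcome
```

Only the >1.5 threshold for significant fibrosis is taken from the abstract; the rest of the computation reflects the standard published definition of APRI.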
Among 432 HIV-monoinfected patients enrolled in the CFAR Database between November 1999 and May 2008, significant fibrosis by APRI was identified in 36 patients (8.3%; 95% CI, 5.9%–11.4%). After controlling for all other hypothesized risk factors as well as active alcohol use and site, detectable HIV viremia (adjusted OR, 2.56; 95% CI, 1.02–8.87) and diabetes mellitus (adjusted OR, 3.15; 95% CI, 1.12–10.10) remained associated with significant fibrosis by APRI.
Significant fibrosis by APRI score was found in 8% of HIV-monoinfected patients. Detectable HIV viremia and diabetes mellitus were associated with significant fibrosis. Future studies should explore mechanisms for fibrosis in HIV-monoinfected patients.