Role conflict can motivate behavior change, but no prior studies have explored the association between parent/smoker role conflict and readiness to quit. The objective of this study was to assess the association of a measure of parent/smoker role conflict with other parent and child characteristics and to test the hypothesis that parent/smoker role conflict is associated with a parent’s intention to quit smoking in the next 30 days. As part of a cluster randomized controlled trial to address parental smoking (Clinical Effort Against Secondhand Smoke Exposure—CEASE), research assistants completed exit interviews with 1980 parents whose children had been seen in 20 Pediatric Research in Office Settings (PROS) practices and asked a novel identity-conflict question about “how strongly you agree or disagree” with the statement, “My being a smoker gets in the way of my being a parent.” Responses were dichotomized as “Strongly Agree”/“Agree” versus “Disagree”/“Strongly Disagree” for analysis. Parents were also asked whether they were “seriously planning to quit smoking in 30 days.” Chi-square tests and logistic regression were used to assess the association between role conflict and other parent and child characteristics. A similar strategy was used to determine whether role conflict was independently associated with intention to quit in the next 30 days.
As part of an RCT in 20 pediatric practices, exit interviews were conducted with smoking parents after their child’s exam. Parents who smoked were asked questions about smoking behavior, smoke-free home and car rules, and role conflict. Role conflict was assessed with the question, “Please tell me how strongly you agree or disagree with the statement: ‘My being a smoker gets in the way of my being a parent.’” Answer choices were “Strongly Agree,” “Agree,” “Disagree,” and “Strongly Disagree.”
Of 1980 eligible smokers identified, 1935 (97%) responded to the role-conflict question, and of those, 563 (29%) reported experiencing conflict. Factors significantly associated with parent/smoker role conflict in the multivariable model included being non-Hispanic white, allowing home smoking, the child being seen that day for a sick visit, parents receiving any assistance for their smoking, and planning to quit in the next 30 days. In a separate multivariable logistic regression model, parent/smoker role conflict was independently associated with intention to quit in the next 30 days [AOR 2.25 (95% CI 1.80-2.18)].
This study demonstrated an association between parent/smoker role conflict and readiness to quit. Interventions that increase parent/smoker role conflict might act to increase readiness to quit among parents who smoke.
Clinical trial registration number: NCT00664261.
Parent smoker identity; Parent smoker role conflict; Tobacco smoke exposure; Readiness to quit; Stages of change; Tobacco control; Pediatrics; Smoking cessation; Parent child dyad
To assess whether older age is independently associated with hemorrhage risk in patients with atrial fibrillation, whether or not they are taking warfarin therapy.
Integrated healthcare delivery system.
Thirteen thousand five hundred fifty-nine adults with nonvalvular atrial fibrillation.
Patient data were collected from automated clinical and administrative databases using previously validated search algorithms. Medical charts were reviewed for patients hospitalized for major hemorrhage (intracranial, fatal, requiring ≥2 units of transfused blood, or involving a critical anatomic site). Age was divided into four categories (<60, 60–69, 70–79, and ≥80), and multivariable Poisson regression was used to assess whether major hemorrhage rates increased with age, stratified by warfarin use and adjusted for other clinical risk factors for hemorrhage.
A total of 170 major hemorrhages were identified during 15,300 person-years of warfarin therapy and 162 major hemorrhages during 15,530 person-years off warfarin therapy. Hemorrhage rates rose with older age, with an average increase in hemorrhage rate of 1.2 (95% confidence interval (CI) 1.0–1.4) per older age category in patients taking warfarin and 1.5 (95% CI=1.3–1.8) in those not taking warfarin. Intracranial hemorrhage rates were significantly higher in those aged 80 and older (adjusted rate ratio=1.8, 95% CI=1.1–3.1 for those taking warfarin, adjusted rate ratio=4.7, 95% CI=2.4–9.2 for those not taking warfarin) than in those younger than 80.
Older age increases the risk of major hemorrhage, particularly intracranial hemorrhage, in patients with atrial fibrillation, whether or not they are taking warfarin. Hemorrhage rates were generally comparable with those reported in previous randomized trials, indicating that carefully monitored warfarin therapy can be used with reasonable safety in older patients.
aging; anticoagulation; hemorrhage; atrial fibrillation
We assessed 5 risk stratification schemes for their ability to predict atrial fibrillation (AF)–related thromboembolism in a large community-based cohort.
Risk schemes can help target anticoagulant therapy for patients at highest risk for AF–related thromboembolism. We tested the predictive ability of 5 risk schemes: the Atrial Fibrillation Investigators, Stroke Prevention in Atrial Fibrillation, CHADS2 (Congestive heart failure, Hypertension, Age ≥ 75 years, Diabetes mellitus, and prior Stroke or transient ischemic attack) index, Framingham score, and the 7th American College of Chest Physicians Guidelines.
We followed a cohort of 13,559 adults with AF for a median of 6.0 years. Among non-warfarin users, we identified incident thromboembolism (ischemic stroke or peripheral embolism) and risk factors from clinical databases. Each scheme was divided into low, intermediate, and high predicted risk categories and applied to the cohort. Annualized thromboembolism rates and c-statistics (to assess discrimination) were calculated for each risk scheme.
We identified 685 validated thromboembolic events that occurred during 32,721 person-years off warfarin therapy. The risk schemes had only fair discriminating ability, with c-statistics ranging from 0.56 to 0.62. The proportion of patients assigned to individual risk categories varied widely across the schemes. The proportion categorized as low risk ranged from 11.7% to 37.1% across schemes, and the proportion considered high risk ranged from 16.4% to 80.4%.
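The annualized rates reported throughout these studies are crude event rates: events divided by person-years of follow-up. A minimal sketch of that arithmetic (the function name is illustrative, not from the source):

```python
def annualized_rate_pct(events: int, person_years: float) -> float:
    """Crude annualized event rate, expressed as percent per year."""
    return 100.0 * events / person_years

# 685 thromboembolic events during 32,721 person-years off warfarin
rate = annualized_rate_pct(685, 32721)  # ≈ 2.1% per year
```

Note that this crude rate says nothing about discrimination between individuals, which is what the c-statistics above assess.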
Current risk schemes have comparable, but only limited, overall ability to predict thromboembolism in persons with AF. Recommendations for antithrombotic therapy may vary widely depending on which scheme is applied for individual patients. Better risk stratification is crucially needed to improve selection of AF patients for anticoagulant therapy.
Little is known about the outcomes of patients who have hemorrhagic complications while receiving warfarin therapy. We examined the rates of death and disability resulting from warfarin-associated intracranial and extracranial hemorrhages in a large cohort of patients with atrial fibrillation.
We assembled a cohort of 13,559 adults with nonvalvular atrial fibrillation and identified patients hospitalized for warfarin-associated intracranial and major extracranial hemorrhage. Data on functional disability at discharge and 30-day mortality were obtained from a review of medical charts and state death certificates. The relative odds of 30-day mortality by hemorrhage type were calculated using multivariable logistic regression.
We identified 72 intracranial and 98 major extracranial hemorrhages occurring in more than 15,300 person-years of warfarin exposure. At hospital discharge, 76% of patients with intracranial hemorrhage had severe disability or died, compared with only 3% of those with major extracranial hemorrhage. Of the 40 deaths from warfarin-associated hemorrhage that occurred within 30 days, 35 (88%) were from intracranial hemorrhage. Compared with extracranial hemorrhages, intracranial events were strongly associated with 30-day mortality (odds ratio 20.8 [95% confidence interval, 6.0–72]) even after adjusting for age, sex, anticoagulation intensity on admission, and other coexisting illnesses.
Among anticoagulated patients with atrial fibrillation, intracranial hemorrhages caused approximately 90% of the deaths from warfarin-associated hemorrhage and the majority of disability among survivors. When considering anticoagulation, patients and clinicians need to weigh the risk of intracranial hemorrhage far more than the risk of all major hemorrhages.
Atrial fibrillation; Death; Disability; Hemorrhage; Intracranial hemorrhage; warfarin
Practice variation in breast cancer surgery has raised concerns about the quality of treatment decisions. We sought to evaluate the quality of decisions about surgery for early stage breast cancer by measuring patient knowledge, concordance between goals and treatments, and involvement in decisions.
A mailed survey of Stage I/II breast cancer survivors was conducted at four sites. The Decision Quality Instrument measured knowledge, goals, and involvement in decisions. A multivariable logistic regression model of treatment was developed. The model-predicted probability of mastectomy was compared to treatment received for each patient. Concordance was defined as having mastectomy and predicted probability >=0.5 or partial mastectomy and predicted probability <0.5. Frequency of discussion about partial mastectomy was compared to discussion about mastectomy using chi-squared tests.
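The concordance rule described in the methods above is simple enough to state in code. This is an illustrative reconstruction under the stated 0.5 threshold (function and variable names are assumptions, not the authors' code):

```python
def is_concordant(received_mastectomy: bool,
                  predicted_prob_mastectomy: float) -> bool:
    """Treatment is goal-concordant when the treatment received matches the
    model-predicted treatment: mastectomy with predicted probability >= 0.5,
    or partial mastectomy with predicted probability < 0.5."""
    predicted_mastectomy = predicted_prob_mastectomy >= 0.5
    return received_mastectomy == predicted_mastectomy
```

For example, a patient who underwent mastectomy with a goals-based predicted probability of 0.7 is counted as concordant, while one who underwent partial mastectomy with a predicted probability of 0.6 is not.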
440 patients participated (59% response rate). Mean overall knowledge was 52.7%. 45.9% knew that local recurrence risk is higher after breast conservation. 55.7% knew that survival is equivalent for the two options. Most participants (89.0%) had treatment concordant with their goals. Participants preferring mastectomy had lower concordance (80.5%) than those preferring partial mastectomy (92.6%, p=0.001). Participants reported more frequent discussion of partial mastectomy and its advantages than of mastectomy. 48.6% reported being asked their preference.
Breast cancer survivors had major knowledge deficits, and those preferring mastectomy were less likely to have treatment concordant with goals. Patients perceived that discussions focused on partial mastectomy, and many were not asked their preference. Improvements in the quality of decisions about breast cancer surgery are needed.
Previous studies provide conflicting results about whether women are at higher risk than men for thromboembolism in the setting of atrial fibrillation (AF). We examined data from a large contemporary cohort of AF patients to address this question.
Methods and Results
We prospectively studied 13 559 adults with AF and recorded data on patients’ clinical characteristics and the occurrence of incident hospitalizations for ischemic stroke, peripheral embolism, and major hemorrhagic events through searching validated computerized databases and medical record review. We compared event rates by patient sex using multivariable log-linear regression, adjusting for clinical risk factors for stroke, and stratifying by warfarin use. We identified 394 ischemic stroke and peripheral embolic events during 15 494 person-years of follow-up off warfarin. After multivariable analysis, women had higher annual rates of thromboembolism off warfarin than did men (3.5% versus 1.8%; adjusted rate ratio [RR], 1.6; 95% CI, 1.3 to 1.9). There was no significant difference by sex in 30-day mortality after thromboembolism (23% for both). Warfarin use was associated with significantly lower adjusted thromboembolism rates for both women and men (RR, 0.4; 95% CI, 0.3 to 0.5; and RR, 0.6; 95% CI, 0.5 to 0.8, respectively), with similar annual rates of major hemorrhage (1.0% and 1.1%, respectively).
Women are at higher risk than men for AF-related thromboembolism off warfarin. Warfarin therapy appears to be as effective in women as in men, if not more so, with similar rates of major hemorrhage. Female sex is an independent risk factor for thromboembolism and should influence the decision to use anticoagulant therapy in persons with AF.
anticoagulants; atrial fibrillation; risk factors; stroke; women
A hospital admission offers smokers an opportunity to quit. Smoking cessation counseling provided in the hospital is effective, but only if it continues for more than one month after discharge. Providing smoking cessation medication at discharge may add benefit to counseling. A major barrier to translating this research into clinical practice is sustaining treatment during the transition to outpatient care. An evidence-based, practical, cost-effective model that facilitates the continuation of tobacco treatment after discharge is needed. This paper describes the design of a comparative effectiveness trial testing a hospital-initiated intervention against standard care.
A two-arm randomized controlled trial compares the effectiveness of standard post-discharge care with a multi-component smoking cessation intervention provided for three months after discharge. Current smokers admitted to Massachusetts General Hospital who receive bedside smoking cessation counseling, intend to quit after discharge and are willing to consider smoking cessation medication are eligible. Study participants are recruited following the hospital counseling visit and randomly assigned to receive Standard Care or Extended Care after hospital discharge. Standard Care includes a recommendation for a smoking cessation medication and information about community resources. Extended Care includes up to three months of free FDA-approved smoking cessation medication and five proactive computerized telephone calls that use interactive voice response technology to provide tailored motivational messages, offer additional live telephone counseling calls from a smoking cessation counselor, and facilitate medication refills. Outcomes are assessed at one, three, and six months after hospital discharge. The primary outcomes are self-reported and validated seven-day point prevalence tobacco abstinence at six months. Other outcomes include short-term and sustained smoking cessation, post-discharge utilization of smoking cessation treatment, hospital readmissions and emergency room visits, and program cost per quit.
This study tests a disseminable smoking intervention model for hospitalized smokers. If effective and widely adopted, it could help to reduce population smoking rates and thereby reduce tobacco-related mortality, morbidity, and health care costs.
United States Clinical Trials Registry NCT01177176.
Smoking cessation; Hospitalization; Pharmacotherapy; Counseling; Randomized clinical trial; Interactive voice response
To develop a risk stratification score to predict warfarin-associated hemorrhage.
Optimal decision-making regarding warfarin use for atrial fibrillation requires estimation of hemorrhage risk.
We followed 9,186 patients with atrial fibrillation contributing 32,888 person-years of follow-up on warfarin, obtaining data from clinical databases and validating hemorrhage events using medical record review. We used Cox regression models to develop a hemorrhage risk stratification score, selecting candidate variables using bootstrapping approaches. The final model was internally validated via split-sample testing and compared to six published hemorrhage risk schemes.
We observed 461 first major hemorrhages during follow-up (1.4% annually). Five independent variables were included in the final model and weighted by regression coefficients: anemia (3 points), severe renal disease (e.g., glomerular filtration rate <30 ml/min or dialysis dependence; 3 points), age ≥75 years (2 points), prior bleeding (1 point), and hypertension (1 point). Annual major hemorrhage rates ranged from 0.4% (0 points) to 17.3% (10 points). Collapsed into a 3-category risk score, annual major hemorrhage rates were 0.8% in the low-risk group (0-3 points), 2.6% in the intermediate-risk group (4 points), and 5.8% in the high-risk group (5-10 points). The c-index was 0.74 for the continuous risk score and 0.69 for the 3-category score, higher than for the other risk schemes. There was net reclassification improvement versus all six comparators (from 27% to 56%).
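The point assignments and category cutoffs above translate directly into code. A minimal sketch, assuming boolean risk-factor inputs (function names are illustrative):

```python
def hemorrhage_risk_score(anemia: bool, severe_renal_disease: bool,
                          age_ge_75: bool, prior_bleeding: bool,
                          hypertension: bool) -> int:
    """Sum the five weighted risk factors into a 0-10 point score."""
    return (3 * anemia + 3 * severe_renal_disease + 2 * age_ge_75
            + 1 * prior_bleeding + 1 * hypertension)

def risk_category(score: int) -> str:
    """Collapse the 0-10 point score into the reported 3-category scheme."""
    if score <= 3:
        return "low"           # 0.8% major hemorrhage per year
    if score == 4:
        return "intermediate"  # 2.6% per year
    return "high"              # 5.8% per year
```

For example, an 80-year-old patient with anemia and hypertension scores 3 + 2 + 1 = 6 points, placing them in the high-risk category.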
A simple 5-variable risk score was effective in quantifying the risk of warfarin-associated hemorrhage in a large community-based cohort of patients with atrial fibrillation.
anticoagulants; atrial fibrillation; hemorrhage; risk prediction; warfarin
The purpose of this paper is to examine the acceptability, feasibility, reliability and validity of a new decision quality instrument that assesses the extent to which patients are informed and receive treatments that match their goals.
Cross-sectional mail survey of recent breast cancer survivors, providers and healthy controls and a retest survey of survivors. The decision quality instrument includes knowledge questions and a set of goals, and results in two scores: a breast cancer surgery knowledge score and a concordance score, which reflects the percentage of patients who received treatments that match their goals. Hypotheses related to acceptability, feasibility, discriminant validity, content validity, predictive validity and retest reliability of the survey instrument were examined.
We had responses from 440 eligible patients, 88 providers and 35 healthy controls. The decision quality instrument was feasible to implement in this study, with little missing data. The knowledge score had good retest reliability (intraclass correlation coefficient = 0.70) and discriminated between providers and patients (mean difference 35%, p < 0.001). The majority of providers felt that the knowledge items covered content that was essential for the decision. Five of the six treatment goals met targets for content validity, and the goals had moderate to strong retest reliability (0.64 to 0.87). The concordance score was 89%, indicating that a majority of patients received treatments concordant with those predicted by their goals. Patients who had concordant treatment had similar levels of confidence and regret as those who did not.
The decision quality instrument met the criteria of feasibility, reliability, discriminant and content validity in this sample. Additional research to examine performance of the instrument in prospective studies and more diverse populations is needed.
Most individuals exposed to a traumatic event do not develop post-traumatic stress disorder (PTSD), although many individuals may experience sub-clinical levels of post-traumatic stress symptoms (PTSS). There are notable individual differences in the presence and severity of PTSS among individuals who report seemingly comparable traumatic events. Individual differences in PTSS following exposure to traumatic events could be influenced by pre-trauma vulnerabilities for developing PTSS/PTSD.
Pre-trauma psychological, psychophysiological and personality variables were prospectively assessed for their predictive relationships with post-traumatic stress symptoms (PTSS). Police and firefighter trainees were tested at the start of their professional training (i.e., pre-trauma; n = 211) and again several months after exposure to a potentially traumatic event (i.e., post-trauma, n = 99). Pre-trauma assessments included diagnostic interviews, psychological and personality measures and two psychophysiological assessment procedures. The psychophysiological assessments measured psychophysiologic reactivity to loud tones and the acquisition and extinction of a conditioned fear response. Post-trauma assessment included a measure of psychophysiologic reactivity during recollection of the traumatic event using a script-driven imagery task.
Logistic stepwise regression identified the combination of lower IQ, higher depression score and poorer extinction of forehead (corrugator) electromyogram responses as pre-trauma predictors of higher PTSS. The combination of lower IQ and increased skin conductance (SC) reactivity to loud tones were identified as pre-trauma predictors of higher post-trauma psychophysiologic reactivity during recollection of the traumatic event. A univariate relationship was also observed between pre-trauma heart rate (HR) reactivity to fear cues during conditioning and post-trauma psychophysiologic reactivity.
The current study contributes to a very limited literature reporting results from truly prospective examinations of pre-trauma physiologic, psychologic, and demographic predictors of PTSS. Findings that combinations of lower estimated IQ, greater depression symptoms, a larger differential corrugator EMG response during extinction and larger SC responses to loud tones significantly predicted higher PTSS suggests that the process(es) underlying these traits contribute to the pathogenesis of subjective and physiological PTSS. Due to the low levels of PTSS severity and relatively restricted ranges of outcome scores due to the healthy nature of the participants, results may underestimate actual predictive relationships.
Stress disorders, Post-traumatic; Conditioning; Startle; Imagery; Psychophysiology; Risk factors
Information technology offers the promise, as yet unfulfilled, of delivering efficient, evidence-based health care.
To evaluate whether a primary care network-based informatics intervention can improve breast cancer screening rates.
Cluster-randomized controlled trial of 12 primary care practices conducted from March 20, 2007 to March 19, 2008.
Women 42–69 years old with no record of a mammogram in the prior 2 years.
In intervention practices, a population-based informatics system was implemented that: connected overdue patients to appropriate care providers, presented providers with a Web-based list of their overdue patients in a non-visit-based setting, and enabled “one-click” mammography ordering or documented deferral reasons. Patients selected for mammography received automatically generated letters and follow-up phone calls. All practices had electronic health record reminders about breast cancer screening available during clinical encounters.
The primary outcome was the proportion of overdue women undergoing mammography at 1-year follow-up.
Baseline mammography rates in intervention and control practices did not differ (79.5% vs 79.3%, p = 0.73). Among 3,054 women in intervention practices and 3,676 women in control practices overdue for mammograms, intervention patients were somewhat younger, more likely to be non-Hispanic white, and have health insurance. Most intervention providers used the system (65 of 70 providers, 92.9%). Action was taken for 2,652 (86.8%) intervention patients [2,274 (74.5%) contacted and 378 (12.4%) deferred]. After 1 year, mammography rates were significantly higher in the intervention arm (31.4% vs 23.3% in control arm, p < 0.001 after adjustment for baseline differences; 8.1% absolute difference, 95% CI 5.1–11.2%). All demographic subgroups benefited from the intervention. Intervention patients completed screening sooner than control patients (p < 0.001).
A novel population-based informatics system functioning as part of a non-visit-based care model increased mammography screening rates in intervention practices.
Electronic supplementary material
The online version of this article (doi:10.1007/s11606-010-1500-0) contains supplementary material, which is available to authorized users.
primary care; screening; mammography; health information technology; randomized controlled trial
Revised World Health Organization recommendations seek to increase HIV testing. We assessed the need for expanded testing in South Africa by examining current testing and treatment trends among a high-prevalence population.
We determined the numbers of adults receiving HIV testing and antiretroviral treatment (ART) during 2001–2006 using testing registers linked to patient records from two healthcare facilities believed responsible for virtually all HIV services available to the population. We evaluated annual population testing rates using census population counts; proportions of clients testing seropositive (yield); CD4 counts and WHO stage at diagnosis; and ART initiation rates.
HIV testing rates rose from 4% in 2001 to 20% in 2006 (p<0.001) and were highest among pregnant females receiving provider-initiated testing. Yield for first-time testers decreased from 47% in 2001 to 28% in 2006. Median CD4 counts and WHO stage distributions for newly-diagnosed clients remained stable. HIV-infected clients receiving ART within six months of eligibility increased from 0% in 2001 to 68% in 2006 (p<0.001).
Population testing and ART initiation rates rose dramatically during 2001–2006. Yet, yield remained high and HIV-infected persons continued to receive late diagnoses. These findings highlight the continuing need for expanded testing and linkage to care.
Although warfarin is widely recommended to prevent atrial fibrillation-related thromboembolism, many eligible patients do not take warfarin. The objective of this study was to describe factors associated with warfarin discontinuation in people newly starting warfarin for atrial fibrillation.
Methods and Results
We identified 4,188 subjects newly starting warfarin in the ATRIA Study and tracked longitudinal warfarin use using pharmacy and laboratory databases. Data on patient characteristics, international normalized ratio (INR) tests, and incident hospitalizations for hemorrhage were obtained from clinical and laboratory databases. Multivariable Cox regression analysis was used to identify independent predictors of prolonged warfarin discontinuation, defined as ≥ 180 consecutive days off warfarin.
Within one year after warfarin initiation, 26.3% of subjects discontinued therapy despite few hospitalizations for hemorrhage (2.3% of patients). The risk of discontinuation was higher in patients aged <65 years (adjusted hazard ratio [HR] 1.33, 95% CI 1.03-1.72, compared with age ≥85 years), in patients with poorer anticoagulation control (HR 1.46, 95% CI 1.42-1.49, for every 10% decrease in time in the therapeutic INR range), and in those with lower stroke risk (HR 2.54, 95% CI 1.86-3.47, for a CHADS2 stroke risk index of 0 compared with 4-6).
More than one in four individuals newly starting warfarin for atrial fibrillation discontinued therapy in the first year despite a low overall hemorrhage rate. Individuals deriving potentially less benefit from warfarin, including those with younger age, fewer stroke risk factors, and poorer INR control, were less likely to remain on warfarin. Maximizing the benefits of anticoagulation for atrial fibrillation depends upon determining which patients are most appropriately initiated and maintained on therapy.
anticoagulation; atrial fibrillation; discontinuation; stroke prevention; warfarin
Patients with acute stroke are often transferred to tertiary care centers for advanced interventional services. We hypothesized that the presence of a proximal cerebral artery occlusion on CT angiography (CTA) is an independent predictor of the use of these services.
We performed a historical cohort study of consecutive ischemic stroke patients presenting within 24 h of symptom onset to an academic emergency department who underwent emergent CTA. Use of tertiary care interventions including intra-arterial (IA) thrombolysis, mechanical clot retrieval, and neurosurgery were captured.
During the study period, 207/290 (71%) of patients with acute ischemic stroke underwent emergent CTA. Of the patients, 74/207 (36%) showed evidence of a proximal cerebral artery occlusion, and 22/207 (11%) underwent an interventional procedure. Those with proximal occlusions were more likely to receive a neurointervention (26% vs. 2%, p < 0.001). They were more likely to undergo IA thrombolysis (9% vs. 0%, p = 0.001) or a mechanical intervention (19% vs. 0%, p < 0.0001), but not more likely to undergo neurosurgery (5% vs. 2%, p = 0.2). After controlling for the initial NIH stroke scale (NIHSS) score, proximal occlusion remained an independent predictor of the use of neurointerventional services (OR 8.5, 95% CI 2.2-33). Evidence of proximal occlusion on CTA predicted use of neurointervention with sensitivity of 82% (95% CI 59-94%), specificity of 71% (95% CI 64%-77%), positive predictive value (PPV) of 25% (95% CI 16%-37%), and negative predictive value (NPV) of 97% (95% CI 92%-99%).
Proximal cerebral artery occlusion on CTA predicts the need for advanced neurointerventional services.
A high-quality decision requires that patients who meet clinical criteria for surgery are informed about the options (including non-surgical alternatives) and receive treatments that match their goals. The aim of this study was to evaluate the psychometric properties and clinical sensibility of a patient self-report instrument that measures the quality of decisions about total joint replacement for knee or hip osteoarthritis.
The performance of the Hip/Knee Osteoarthritis Decision Quality Instrument (HK-DQI) was evaluated in two samples: (1) a cross-sectional mail survey with 489 patients and 77 providers (study 1); and (2) a randomized controlled trial of a patient decision aid with 138 osteoarthritis patients considering total joint replacement (study 2). The HK-DQI results in two scores. Knowledge items are summed to create a total knowledge score, and a set of goals and concerns are used in a logistic regression model to develop a concordance score. The concordance score measures the proportion of patients whose treatment matched their goals. Hypotheses related to acceptability, feasibility, reliability and validity of the knowledge and concordance scores were examined.
In study 1, the HK-DQI was completed by 382 patients (79%) and 45 providers (58%), and in study 2 by 127 patients (92%), with low rates of missing data. The DQI-knowledge score was reproducible (ICC = 0.81) and demonstrated discriminant validity (68% decision aid vs. 54% control, and 78% providers vs. 61% patients) and content validity. The concordance score demonstrated predictive validity, as patients whose treatments were concordant with their goals had more confidence and less regret with their decision compared to those who did not.
The HK-DQI is feasible and acceptable to patients. It can be used to assess whether patients with osteoarthritis are making informed decisions about surgery that are concordant with their goals.
shared decision making; patient centered care; quality measurement; osteoarthritis; total joint replacement; decision quality
We sought to use data captured in the electronic health record (EHR) to develop and validate a prediction rule for virologic failure in patients being treated for HIV infection.
We used EHRs at two Boston tertiary care hospitals, Massachusetts General Hospital and Brigham and Women's Hospital, to identify HIV-infected patients who were virologically suppressed (HIV RNA ≤400 copies/mL) on antiretroviral therapy between 1/1/05 and 12/31/06. We used a multivariable logistic model with data from Massachusetts General Hospital to derive a one-year virologic failure prediction rule. The model was validated using data from the Brigham and Women's Hospital. We then simplified the scoring scheme to develop a clinical prediction rule.
The one-year virologic failure prediction model, using data from 712 Massachusetts General Hospital patients, demonstrated good discrimination (c-statistic 0.78) and calibration (χ2 = 6.6, p = 0.58). The validation model, based on 362 Brigham and Women's Hospital patients, also showed good discrimination (c-statistic 0.79) and calibration (χ2 = 1.9, p = 0.93). The clinical prediction rule included seven predictors (Suboptimal Adherence, CD4 count <100/μL, Drug and/or Alcohol Abuse, Heavily ART Experienced, Missed ≥1 Appointment, Prior Virologic Failure, and Suppressed ≤12 months) and appropriately stratified patients in the validation dataset into low-, medium- and high-risk groups, with one-year virologic failure rates of 3.0%, 13.0% and 28.6%, respectively.
A risk score based on seven variables available in the EHR predicts HIV virologic failure at one year and could be used for targeted interventions to improve outcomes in HIV disease.
To determine whether the use of a goals-of-care video to supplement a verbal description can improve end-of-life decision making for patients with cancer.
Fifty participants with malignant glioma were randomly assigned to either a verbal narrative of goals-of-care options at the end of life (control), or a video after the same verbal narrative (intervention) in this randomized controlled trial. The video depicts three levels of medical care: life-prolonging care (cardiopulmonary resuscitation [CPR], ventilation), basic care (hospitalization, no CPR), and comfort care (symptom relief). The primary study outcome was participants' preferences for end-of-life care. The secondary outcome was participants' uncertainty regarding decision making (score range, 3 to 15; higher score indicating less uncertainty). Participants' comfort level with the video was also measured.
Fifty participants were randomly assigned to either the verbal narrative (n = 27) or video (n = 23). After the verbal description, 25.9% of participants preferred life-prolonging care, 51.9% basic care, and 22.2% comfort care. In the video arm, no participants preferred life-prolonging care, 4.4% preferred basic care, 91.3% preferred comfort care, and 4.4% were uncertain (P < .0001). The mean uncertainty score was higher in the video group than in the verbal group (13.7 v 11.5, respectively; P < .002). In the intervention arm, 82.6% of participants reported being very comfortable watching the video.
Compared with participants who only heard a verbal description, participants who viewed a goals-of-care video were more likely to prefer comfort care and avoid CPR, and were more certain of their end-of-life decision making. Participants reported feeling comfortable watching the video.
Responses to antiretroviral therapy (ART) among HIV-infected children in resource-limited settings have recently been reported, but outcomes vary. We sought to derive pooled estimates of the 12-month rate of virologic suppression (HIV RNA <400 copies/ml) and gain in CD4 cell percentage (ΔCD4%) for children initiating ART in resource-limited settings.
We conducted a systematic review and meta-analysis of published reports of HIV RNA and CD4 outcomes for treatment-naïve children (0–17 years) using the Medline, EMBASE, and LILACS electronic databases and the Cochrane Clinical Trials Register. Pooled estimates of the reported proportion with RNA<400/ml and ΔCD4% after 12 months of ART were derived using patient-level estimates and fixed- and random-effects models. To approximate “intention-to-treat” analyses, in sensitivity analyses, children with missing 12-month data were assumed to have RNA>400/ml or ΔCD4% of zero.
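The "intention-to-treat" approximation amounts to counting children with missing 12-month data as failures in the denominator. A minimal sketch, using hypothetical counts (the abstract reports proportions, not the underlying counts):

```python
# Sketch of the observed-case vs. approximated intention-to-treat (ITT)
# suppression proportions: children missing 12-month data are counted as
# RNA > 400 copies/ml in the ITT calculation. Counts below are hypothetical.
def pooled_suppression(suppressed, observed, enrolled):
    """Return (observed-case, ITT-approximation) proportions suppressed."""
    observed_case = suppressed / observed   # missing data excluded
    itt = suppressed / enrolled             # missing counted as failures
    return observed_case, itt

oc, itt = pooled_suppression(suppressed=70, observed=100, enrolled=132)
print(round(oc, 2), round(itt, 2))  # 0.7 0.53
```

With these illustrative counts, the shift from 70% to 53% mirrors the drop reported in the pooled results when missing data are treated as failures.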
Using patient-level estimates after 12 months of ART, the pooled proportion with virologic suppression was 70% (95%CI: 67–73); the pooled ΔCD4% was 13.7% (95%CI: 11.8–15.7). Results from the fixed- and random-effects models were similar. In approximated "intention-to-treat" analyses, the pooled estimates fell to 53% with virologic suppression (95%CI: 50–55) and to a ΔCD4% of 8.5% (95%CI: 5.5–11.4).
Pooled estimates of reported virologic and immunologic benefits after 12 months of ART among HIV-infected children in resource-limited settings are comparable to those observed among children in developed settings. Consistency in reporting on reasons for missing data will aid in the evaluation of ART outcomes in resource-limited settings.
HIV; pediatric; antiretroviral therapy; resource-limited settings; meta-analysis
To assess whether perceived changes in postpartum support were associated with postpartum return to smoking.
This was a prospective, repeated-measures, mixed-methods observational study. Sixty-five women who smoked prior to pregnancy were recruited at delivery and surveyed at 2, 6, 12, and 24 weeks postpartum; in-depth interviews were conducted when participants reported smoking.
Fifty-two percent of participants self-identified as white, non-Hispanic. Forty-seven percent resumed smoking by 24 weeks postpartum. Women who had returned to smoking by 24 weeks had a significantly larger decrease in perceived smoking-specific support than women who remained abstinent (p < 0.001). By the 24-week postpartum follow-up, only 24% of women reported that an obstetric clinician had discussed how to quit or stay quit. In qualitative interviews, more than half of the women reported having no one to support them in quitting or staying quit.
Following delivery, women lack needed smoking-specific support. Decline in perceived smoking-specific support from family and friends is associated with postpartum smoking resumption.
postpartum; relapse prevention; smoking cessation
Valid measurement of physician performance requires accurate identification of patients for whom a physician is responsible. Among all patients seen by a physician, some will be more strongly connected to their physician than others, but the effect of connectedness on measures of physician performance is not known.
To determine whether patient–physician connectedness affects measures of clinical performance.
Population-based cohort study.
Academic network of 4 community health centers and 9 hospital-affiliated primary care practices.
155 590 adults with 1 or more visits to a study practice from 2003 to 2005.
A validated algorithm was used to connect patients to either 1 of 181 physicians or 1 of 13 practices in which they received most of their care. Performance measures included breast, cervical, and colorectal cancer screening in eligible patients; hemoglobin A1c measurement and control in patients with diabetes; and low-density lipoprotein cholesterol measurement and control in patients with diabetes and coronary artery disease.
Overall, 92 315 patients (59.3%) were connected to a specific physician, whereas 53 669 patients (34.5%) were connected only to a specific practice and 9606 patients (6.2%) could not be connected to a physician or practice. The proportion of patients in a practice who could be connected to a physician varied markedly (45.6% to 71.2% of patients per practice; P < 0.001). Physician-connected patients were significantly more likely than practice-connected patients to receive guideline-consistent care (for example, adjusted mammography rates were 78.1% vs. 65.9% [P < 0.001] and adjusted hemoglobin A1c rates were 90.3% vs. 74.9% [P < 0.001]). Receipt of preventive care varied more by whether patients were more or less connected to a physician than by race or ethnicity.
Patient–physician connectedness was assessed in 1 primary care network.
Patients seen in primary care practices seem to be variably connected with a specific physician, and less connected patients are less likely to receive guideline-consistent care.
When patients are unable to make important end-of-life decisions, doctors ask surrogate decision makers to provide insight into patients’ preferences. Unfortunately, multiple studies have shown that surrogates’ knowledge of patient preferences is poor. We hypothesized that a video decision tool would improve concordance between patients and their surrogates for end-of-life preferences.
To compare the concordance of preferences among elderly patients and their surrogates listening to only a verbal description of advanced dementia or viewing a video decision support tool of the disease after hearing the verbal description.
This was a randomized controlled trial of a convenience sample of community-dwelling elderly subjects (≥65 years) and their surrogates, conducted at 2 geriatric clinics affiliated with 2 academic medical centers in Boston between September 1, 2007, and May 30, 2008. Patient–surrogate dyads were randomly assigned to either a verbal narrative alone or a video decision support tool presented after the verbal narrative. End points were the goals of care chosen by the patient and the goals of care predicted by the surrogate. Goals of care included life-prolonging care (CPR, mechanical ventilation), limited care (hospitalization and antibiotics, but not CPR), and comfort care (treatment only to relieve symptoms). The primary outcome measure was the concordance rate of preferences between patients and their surrogates.
A total of 14 pairs of patients and their surrogates were randomized to verbal narrative (n = 6) or video after verbal narrative (n = 8). Among the 6 patients receiving only the verbal narrative, 3 (50%) preferred comfort care, 1 (17%) chose limited care, and 2 (33%) desired life-prolonging care. Among the surrogates for these patients, only 2 correctly chose what their loved one would want if in a state of advanced dementia, yielding a concordance rate of 33%. Among the 8 patients receiving the video decision support tool, all 8 chose comfort care. Among the surrogates for these patients, all 8 correctly chose what their loved one would want if in a state of advanced dementia, yielding a concordance rate of 100%.
Patients and surrogates viewing a video decision support tool for advanced dementia are more likely to concur about the patient’s end-of-life preferences than when solely listening to a verbal description of the disease.
Few data are available to guide programmatic solutions to the overlapping problems of undernutrition and HIV infection. We evaluated the impact of food assistance on patient outcomes in a comprehensive HIV program in central Haiti in a prospective observational cohort study.
Adults with HIV infection were eligible for monthly food rations if they had any one of the following: tuberculosis, body mass index (BMI) <18.5 kg/m2, CD4 cell count <350/mm3 (in the prior 3 months), or severe socio-economic conditions. A total of 600 individuals (300 eligible and 300 ineligible for food assistance) were interviewed before rations were distributed, at 6 months, and at 12 months. Data collected included demographics, BMI, and food insecurity score (range 0–20).
At the 6- and 12-month time points, 488 and 340 subjects, respectively, were eligible for analysis. Multivariable analysis demonstrated that at 6 months, food security significantly improved in those who received food assistance versus those who did not (-3.55 vs. -0.16; P < 0.0001); BMI decreased significantly less in the food assistance group than in the non-food group (-0.20 vs. -0.66; P = 0.020). At 12 months, food assistance was associated with improved food security (-3.49 vs. -1.89, P = 0.011) and BMI (0.22 vs. -0.67, P = 0.036). Food assistance was also associated with improved adherence to monthly clinic visits at both 6 months (P < 0.001) and 12 months (P = 0.033).
Food assistance was associated with improved food security, increased BMI, and improved adherence to clinic visits at 6 and 12 months among people living with HIV in Haiti and should be part of routine care where HIV and food insecurity overlap.
Randomized trials and observational studies support using an international normalized ratio (INR) target of 2.0 to 3.0 for preventing ischemic stroke in atrial fibrillation (AF). We assessed whether the INR target should be adjusted based on selected patient characteristics.
Methods and Results
We conducted a case-control study nested within the ATRIA cohort's 9,217 AF patients taking warfarin to define the relationship between INR level and the odds of thromboembolism (TE, mainly stroke) and of intracranial hemorrhage (ICH) relative to INR 2.0-2.5. We identified 396 TE cases and 164 ICH cases during follow-up. Each case was compared with four randomly selected controls matched on calendar date and stroke risk factors, using matched univariable analyses and conditional logistic regression. We explored modification of the INR-outcome relationships by the following stroke risk factors: prior stroke, age, and CHADS2 risk score.
Overall, the odds of TE were low and stable above INR 1.8. Compared to INR 2.0-2.5, the relative odds of TE increased strikingly at INR <1.8 (e.g., OR=3.72; 95% CI: 2.67-5.19, at INR 1.4-1.7). The odds of ICH increased markedly at INR values >3.5 (e.g., OR=3.56; 95% CI: 1.70-7.46, at INR 3.6-4.5). The relative odds of ICH were consistently low at INR <3.6. There was no evidence of lower ICH risk at INR levels<2.0. These patterns of risk did not differ substantially by history of stroke, age, or CHADS2 risk score.
Our results confirm that the current standard of INR 2.0-3.0 for AF falls in the optimal INR range. Our findings do not support adjustment of INR targets according to previously defined stroke risk factors.
atrial fibrillation; anticoagulation; stroke prevention
The aim of this prospective, repeated-measures, mixed-methods observational study was to assess whether depressive, anxiety, and stress symptoms are associated with postpartum relapse to smoking.
A total of 65 women who smoked prior to pregnancy and had not smoked during the last month of pregnancy were recruited at delivery and followed for 24 weeks. Surveys administered at baseline and at 2, 6, 12, and 24 weeks postpartum assessed smoking status and symptoms of depression (Beck Depression Inventory [BDI]), anxiety (Beck Anxiety Inventory [BAI]), and stress (Perceived Stress Scale [PSS]). In-depth interviews were conducted with women who reported smoking.
Although 92% of the participants reported a strong desire to stay quit, 47% resumed smoking by 24 weeks postpartum. Baseline factors associated with smoking at 24 weeks were having had a prior delivery, not being happy about the pregnancy, undergoing counseling for depression or anxiety during pregnancy, and ever having struggled with depression (p < .05). In a repeated measures regression model, the slope of BDI scores from baseline to the 12-week follow-up differed between nonsmokers and smokers (−0.12 vs. +0.11 units/week, p = .03). The slope of PSS scores also differed between nonsmokers and smokers (−0.05 vs. +0.08 units/week, p = .04). In qualitative interviews, most women who relapsed attributed their relapse and continued smoking to negative emotions.
Among women who quit smoking during pregnancy, a worsening of depressive and stress symptoms over 12 weeks postpartum was associated with an increased risk of smoking by 24 weeks.
Guidelines recommend warfarin use in patients with atrial fibrillation solely on the basis of risk for ischemic stroke without antithrombotic therapy. These guidelines rely on ischemic stroke rates observed in older trials and do not explicitly account for increased risk for hemorrhage.
To quantify the net clinical benefit of warfarin therapy in a cohort of patients with atrial fibrillation.
Mixed retrospective and prospective cohort study of patients with atrial fibrillation between 1996 and 2003.
An integrated health care delivery system.
13559 adults with nonvalvular atrial fibrillation.
Warfarin exposure, patient characteristics, and outcome events were ascertained from health plan records and databases. Outcome events were validated by formal physician review. Net clinical benefit was defined as the annual rate of (ischemic strokes and systemic emboli prevented by warfarin) minus (intracranial hemorrhages attributable to warfarin multiplied by an impact weight). For the base case, the impact weight was 1.5, reflecting the greater clinical impact of intracranial hemorrhage versus thromboembolism.
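The net-clinical-benefit definition above reduces to a simple weighted difference of event rates. A minimal sketch, using hypothetical event rates (the study's underlying rates are not given in this abstract; the 1.5 weight is the base case from the text):

```python
# Net clinical benefit as defined above: annual rate of thromboembolic (TE)
# events prevented by warfarin, minus ICH events attributable to warfarin
# multiplied by an impact weight (base case 1.5). Rates are % per year.
def net_clinical_benefit(te_prevented_rate, ich_attributable_rate, ich_weight=1.5):
    """Annual net benefit (% per year) = TE prevented - weight * ICH caused."""
    return te_prevented_rate - ich_weight * ich_attributable_rate

# Hypothetical illustration (these rates are NOT from the study):
print(round(net_clinical_benefit(1.4, 0.48), 2))  # 0.68
```

Sensitivity analyses simply rerun the same calculation with `ich_weight` set to 1.0 or 2.0.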
Patients accumulated more than 66,000 person-years of follow-up. The adjusted net clinical benefit of warfarin for the cohort overall was 0.68% per year (95% CI, 0.34% to 0.87%). Adjusted net clinical benefit was greatest for patients with a history of ischemic stroke (2.48% per year [CI, 0.75% to 4.22%]) and for those 85 years or older (2.34% per year [CI, 1.29% to 3.30%]). The net clinical benefit of warfarin increased from essentially zero in CHADS2 (congestive heart failure, hypertension, age, diabetes, prior stroke [2 points]) stroke risk categories 0 and 1 to 2.22% per year (CI, 0.58% to 3.75%) in CHADS2 categories 4 to 6. The patterns of results were preserved using weighting factors for intracranial hemorrhage of 1.0 and 2.0.
Residual confounding is a possibility. Some outcome events were probably missed by the screening algorithm or when medical records were unavailable.
Expected net clinical benefit of warfarin therapy is highest among patients with the highest untreated risk for stroke, which includes the oldest age category. Risk assessment that incorporates both risk for thromboembolism and risk for intracranial hemorrhage provides a more quantitatively informed basis for the decision on antithrombotic therapy in patients with atrial fibrillation.
Primary Funding Source
National Institute on Aging; National Heart, Lung, and Blood Institute; and Massachusetts General Hospital.