Whether HIV viremia, particularly at low levels, is associated with inflammation, increased coagulation, and all-cause mortality is unclear.
The associations of HIV RNA level with C-reactive protein (CRP), fibrinogen, interleukin (IL)-6 and mortality were evaluated in 1116 HIV-infected participants from the Study of Fat Redistribution and Metabolic Change in HIV infection. HIV RNA level was categorized as undetectable (i.e., “target not detected”), 1–19, 20–399, 400–9999, and ≥10,000 copies/ml. Covariates included demographics, lifestyle, adipose tissue, and HIV-related factors.
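The five-level viral load categorization described above can be sketched as a small helper. This is an illustrative reconstruction; the function name and interface are not from the study.

```python
def categorize_hiv_rna(copies_per_ml, target_not_detected=False):
    """Map an HIV RNA result (copies/ml) to the study's five categories.

    "Undetectable" corresponds to an assay result of "target not detected";
    the remaining bounds follow the categories stated in the text.
    """
    if target_not_detected:
        return "undetectable"
    if copies_per_ml < 20:
        return "1-19"
    if copies_per_ml < 400:
        return "20-399"
    if copies_per_ml < 10_000:
        return "400-9999"
    return ">=10000"
```

A detectable result below the 20 copies/ml quantification limit falls into the 1–19 category, which is why "target not detected" is flagged separately rather than inferred from a zero value.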
HIV RNA level had little association with CRP. Categories of HIV RNA below 10,000 copies/ml had similar levels of IL-6 compared with an undetectable HIV RNA level, while HIV RNA ≥10,000 copies/ml was associated with 89% higher IL-6 (p<0.001). This association was attenuated by ∼50% after adjustment for CD4+ cell count. Higher HIV RNA was associated with higher fibrinogen. Compared to an undetectable HIV RNA level, fibrinogen was 0.6%, 1.9%, 4.5%, 4.6%, and 9.4% higher across HIV RNA categories, respectively, and statistically significant at the highest level (p = 0.0002 for HIV RNA ≥10,000 copies/ml). Higher HIV RNA was associated with mortality during follow-up in unadjusted analysis, but showed little association after adjustment for CD4+ cell count and inflammation.
HIV RNA ≥10,000 copies/ml was associated with higher IL-6 and fibrinogen, but lower levels of viremia appeared similar, and there was little association with CRP. The relationship of HIV RNA with IL-6 was strongly affected by CD4 cell depletion. After adjustment for CD4+ cell count and inflammation, viremia did not appear to be substantially associated with mortality risk over 5 years.
To determine the association of inflammatory markers, fibrinogen and C-reactive protein (CRP), with 5-year mortality risk.
Vital status was ascertained in 922 HIV-infected participants from the Study of Fat Redistribution and Metabolic Change in HIV infection. Multivariable logistic regression estimated odds ratios (OR) after adjustment for demographic, cardiovascular and HIV-related factors.
Over a 5-year period, HIV-infected participants with fibrinogen levels in the highest tertile (>406 mg/dL) had 2.6-fold higher adjusted odds of death than those with fibrinogen in the lowest tertile (<319 mg/dL). Those with high CRP (>3 mg/L) had 2.7-fold higher adjusted odds of death than those with CRP <1 mg/L. When stratified by CD4 count category, fibrinogen (as a linear variable) remained independently associated [OR (95% confidence interval) per 100 mg/dL increase in fibrinogen: 1.93 (1.57, 2.37), 1.43 (1.14, 1.79), 1.43 (1.14, 1.81), and 1.30 (1.04, 1.63) for CD4 <200, 200–350, >350–500, and >500 cells/μL, respectively]. Higher CRP also remained associated with higher odds of death overall and within each CD4 subgroup.
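Because these odds ratios come from a logistic model with fibrinogen entered linearly (per 100 mg/dL), the OR implied for any other increment follows by exponentiation. A minimal sketch, assuming log-odds linear in fibrinogen as modeled; the helper name is illustrative:

```python
def rescale_or(or_per_unit, k):
    """Odds ratio for a k-unit difference in the predictor,
    given the OR reported per one unit (log-odds linearity)."""
    return or_per_unit ** k

# Example: OR 1.93 per 100 mg/dL (the CD4 <200 stratum above) implies,
# for a 50 mg/dL difference, OR = 1.93**0.5, about 1.39:
or_50 = rescale_or(1.93, 0.5)
```

The same rescaling applies to the confidence limits, since they are also exponentiated bounds on the log-odds coefficient.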
Fibrinogen and CRP are strong and independent predictors of mortality in HIV-infected adults. Our findings suggest that even in those with relatively preserved CD4 counts >500cells/μL, inflammation remains an important risk factor for mortality. Further investigation should determine whether interventions to reduce inflammation might decrease mortality risk in HIV-infected individuals.
HIV; inflammation; C-reactive protein; fibrinogen; mortality
Compared with controls, HIV-infected persons have a greater prevalence of kidney disease as assessed by high levels of cystatin C and albuminuria, but not as assessed by creatinine level. However, the clinical importance of elevated cystatin C and albuminuria in the HIV-infected population has not been studied.
We conducted an observational cohort study to determine the association of kidney disease (measured by albuminuria, cystatin C, and serum creatinine) with mortality.
Setting & Participants
922 HIV-infected persons enrolled in the FRAM (Fat Redistribution and Metabolic Change in HIV infection) study.
Serum cystatin C and serum creatinine were used to estimate glomerular filtration rate (eGFR). Albuminuria was defined as a positive urine dipstick (≥1+) or a urine albumin-creatinine ratio > 30 mg/g.
At baseline, reduced kidney function (cystatin C–based eGFR [eGFRSCysC] <60 mL/min/1.73 m2) or albuminuria was present in 28% of participants. After five years of follow-up, mortality was 48% among those with both eGFRSCysC <60 mL/min/1.73 m2 and albuminuria, 23% in those with eGFRSCysC <60 mL/min/1.73 m2 alone, 20% in those with albuminuria alone, and 9% in those with neither condition. After multivariable adjustment for demographics, cardiovascular risk factors, HIV-related factors, and inflammatory markers, eGFRSCysC <60 mL/min/1.73 m2 and albuminuria were associated with nearly a twofold increase in mortality, whereas creatinine-based eGFR (eGFRSCr) <60 mL/min/1.73 m2 did not appear to have any substantial association with mortality. Together, eGFRSCysC <60 mL/min/1.73 m2 and albuminuria accounted for 17% of the population-level attributable risk for mortality.
Vital status was unknown in 261 participants from the original cohort.
Kidney disease marked by albuminuria or increased cystatin C levels appears to be an important risk factor for mortality in HIV-infected individuals. A substantial proportion of this risk may be unrecognized because of the current reliance on serum creatinine to estimate kidney function in clinical practice.
kidney disease; mortality; HIV infection
Although studies have reported a high prevalence of end-stage renal disease in human immunodeficiency virus (HIV)-infected individuals, little is known about moderate impairments in kidney function. Cystatin C measurement may be more sensitive than creatinine for detecting impaired kidney function in persons with HIV.
We evaluated kidney function in the Fat Redistribution and Metabolic Change in HIV Infection (FRAM) cohort, a representative sample of 1008 HIV-infected persons and 290 controls from the Coronary Artery Risk Development in Young Adults (CARDIA) study in the United States.
Cystatin C level was elevated in HIV-infected individuals; the mean±SD cystatin C level was 0.92±0.22 mg/L in those infected with HIV and 0.76±0.15 mg/L in controls (P<.001). In contrast, both mean creatinine levels and estimated glomerular filtration rates appeared similar in HIV-infected individuals and controls (0.87±0.21 vs 0.85±0.19 mg/dL [to convert to micromoles per liter, multiply by 88.4] [P=.35] and 110±26 vs 106±23 mL/min/1.73 m2 [P=.06], respectively). Persons with HIV infection were more likely to have a cystatin C level greater than 1.0 mg/L (OR, 9.8; 95% confidence interval, 4.4-22.0 [P<.001]), a threshold demonstrated to be associated with increased risk for death and cardiovascular and kidney disease. Among participants with HIV, the potentially modifiable kidney disease risk factors hypertension and low high-density lipoprotein concentration were associated with a higher cystatin C level, as were lower CD4 lymphocyte count and coinfection with hepatitis C virus (all P<.001).
Individuals infected with HIV had substantially worse kidney function when measured by cystatin C level compared with HIV-negative controls, whereas mean creatinine levels and estimated glomerular filtration rates were similar. Cystatin C measurement could be a useful clinical tool to identify HIV-infected persons at increased risk for kidney and cardiovascular disease.
Complications of hepatitis C virus (HCV) infection are primarily related to the development of advanced fibrosis.
Baseline data from a prospective community-based cohort study of 204 persons with chronic hepatitis C virus (HCV) infection were used for analysis. The outcome was fibrosis score on biopsy and the primary predictor evaluated was daily cannabis use.
The median age of the cohort was 46.8 years, 69.1% were male, 49.0% were Caucasian, and the presumed route of infection was injection drug use in 70.1%. The median lifetime duration and average daily use of alcohol were 29.1 years and 1.94 drink equivalents per day. Cannabis use frequency (within the prior 12 months) was daily in 13.7%, occasional in 45.1%, and never in 41.2%. Fibrosis stage, assessed by the Ishak method, was F0, F1–2, and F3–6 in 27.5%, 55.4%, and 17.2% of subjects, respectively. Daily compared to non-daily cannabis use was significantly associated with moderate to severe fibrosis (F3–6 versus F1–2) in univariate [OR = 3.21 (95% CI, 1.20–8.56), p = 0.020] and multivariate analyses [OR = 6.78 (95% CI, 1.89–24.31), p = 0.003]. Other independent predictors of F3–6 were ≥11 portal tracts [compared to <5: OR = 6.92 (95% CI, 1.34–35.7), p = 0.021] and lifetime duration of moderate and heavy alcohol use [OR per decade = 1.72 (95% CI, 1.02–2.90), p = 0.044].
We conclude that daily cannabis use is strongly associated with moderate to severe fibrosis and that HCV-infected individuals should be counseled to reduce or abstain from cannabis use.
fibrosis; alcohol; viral load; marijuana; cirrhosis
Visceral obesity is associated with insulin resistance, but the association of other regional adipose depots with insulin resistance is not understood. In HIV infection, buffalo hump (upper trunk fat) is associated, but the association of upper trunk fat with insulin resistance has not been examined in controls. To determine the independent association of adipose depots other than visceral with insulin resistance, we performed a cross-sectional analysis of controls and HIV-infected subjects in the Fat Redistribution and Metabolic Change in HIV Infection (FRAM) study, who had measurements of glucose, insulin, and adipose tissue volumes by whole-body magnetic resonance imaging. We studied 926 HIV-positive persons from 16 academic medical center clinics and trials units with demographic characteristics representative of US patients with HIV infection and 258 FRAM controls from the population-based Coronary Artery Risk Development in Young Adults study. We measured visceral adipose tissue (VAT) and subcutaneous adipose tissue (SAT) volume in the legs, arms, lower trunk (back and abdomen), and upper trunk (back and chest) and assessed their association with the homeostasis model of assessment (HOMA) and HOMA >4 by stepwise multivariable analysis. The prevalence of HOMA >4 as a marker of insulin resistance was 28% among controls compared with 37% among HIV-infected subjects (P = 0.005). Among controls, those in the highest tertile of upper trunk SAT volume had an odds ratio (OR) of 9.0 (95% confidence interval [CI]: 2.4 to 34; P = 0.001) for having HOMA >4 compared with the lowest tertile, whereas in HIV-positive subjects, the OR was lower (OR = 2.09, 95% CI: 1.36 to 3.19; P = 0.001). Among controls, the highest tertile of VAT volume had an OR of 12.1 (95% CI: 3.2 to 46; P = 0.0002) of having HOMA >4 compared with the lowest tertile, whereas in HIV-positive subjects, the OR was 3.12 (95% CI: 2.0 to 4.8; P < 0.0001). 
After adjusting for VAT and upper trunk SAT, the association of other SAT depots with HOMA >4 did not reach statistical significance. Thus, VAT and upper trunk SAT are independently associated with insulin resistance in controls and in HIV-infected persons.
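The homeostasis model assessment (HOMA) index used above is conventionally computed as fasting glucose (mg/dL) × fasting insulin (μU/mL) / 405; the text does not restate the formula, so the sketch below assumes that standard definition, with illustrative function names.

```python
def homa_ir(glucose_mg_dl, insulin_uU_ml):
    """Homeostasis model assessment of insulin resistance,
    using conventional US units (glucose in mg/dL, insulin in uU/mL)."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

def insulin_resistant(glucose_mg_dl, insulin_uU_ml, threshold=4.0):
    # HOMA > 4 was the marker of insulin resistance used in the study.
    return homa_ir(glucose_mg_dl, insulin_uU_ml) > threshold
```

For example, a fasting glucose of 90 mg/dL with insulin of 18 μU/mL yields a HOMA of exactly 4.0, just at the study's cutpoint.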
buffalo hump; fat distribution; insulin resistance; lipodystrophy; visceral obesity
Coinfection with hepatitis C virus (HCV) is reported to be associated with a higher prevalence of lipodystrophy than HIV infection alone. We examine the association between HCV and adipose tissue volume in HIV-infected men and women.
Cross-sectional analysis of HIV-infected subjects from the study of Fat Redistribution and Metabolic Change in HIV Infection. MRI measured regional adipose tissue volume. Detectable HCV RNA defined HCV infection.
Twenty percent of 792 men and 26% of 329 women were HIV/HCV-coinfected. HIV/HCV-coinfected and HIV-monoinfected women had similar amounts of subcutaneous adipose tissue (SAT) in the leg, lower trunk, upper trunk, and arm and similar amounts of visceral adipose tissue (VAT). Similar findings were seen in men, except in the leg and VAT. After adjustment, HCV infection remained associated with more leg fat in men (12.2%, 95% confidence interval [CI]: 0.3 to 25.3; P = 0.043). Among those on stavudine, HIV-monoinfected men had less leg fat (−7% effect per year of stavudine use, 95% CI: −9 to −5; P < 0.001); a weaker association was seen in HIV/HCV-coinfected men (−2% effect, 95% CI: −7 to 3; P = 0.45). Indinavir was associated with less leg fat (−4% in HIV-monoinfected men, 95% CI: −6 to −1; P = 0.002; −5% in HIV/HCV-coinfected men, 95% CI: −11 to 2; P = 0.14).
Our findings suggest that HIV/HCV coinfection is not associated with less SAT in men and women. HCV infection seems to mitigate the loss of leg fat seen in HIV-infected men on stavudine.
adipose tissue volume; fat distribution; hepatitis C virus; HIV; lipodystrophy
HIV infection and antiretroviral therapy are associated with dyslipidemia, but the association between regional adipose tissue depots and lipid levels is not defined.
The association of MRI-measured visceral (VAT) and regional subcutaneous adipose tissue (SAT) volume with fasting lipid parameters was analyzed by multivariable linear regression in 737 HIV-infected and 145 control men from the study of Fat Redistribution and Metabolic Change in HIV Infection (FRAM).
HIV-infected men had higher median triglycerides (TG) (170 mg/dl vs. 107 mg/dl, p<0.0001), lower high-density lipoprotein cholesterol (HDL-C) (38 mg/dl vs. 46 mg/dl, p<0.0001), and lower low-density lipoprotein cholesterol (LDL-C) (105 mg/dl vs. 125 mg/dl, p<0.0001) than controls. After adjustment, greater VAT was associated with higher TG and lower HDL-C in both HIV-infected and control men, while greater leg SAT was associated with lower TG in HIV-infected men, with a similar trend in controls. More upper trunk SAT was associated with higher LDL-C and lower HDL-C in controls, while more lower trunk SAT was associated with higher TG in controls. After adjustment, HIV infection remained strongly associated (p<0.0001) with higher TG (+76%; CI: 53, 103), lower LDL-C (−19%; CI: −25, −12), and lower HDL-C (−18%; CI: −22, −12).
HIV-infected men are more likely than controls to have higher TG and lower HDL-C, which promote atherosclerosis, but also lower LDL-C. Less leg SAT and more VAT are important factors associated with high TG and low HDL-C in HIV-infected men. The reduced leg SAT in HIV-infected men with lipoatrophy places them at increased risk for pro-atherogenic dyslipidemia.
Studies in persons without HIV infection have compared dual energy X-ray absorptiometry (DXA) and magnetic resonance imaging (MRI) measured adipose tissue (AT), but no such study has been conducted in HIV+ subjects, who have a high prevalence of regional fat loss.
We compared DXA with MRI-measured trunk, leg, arm, and total fat in HIV+ and control subjects.
Cross-sectional analysis in 877 HIV+ and 260 controls in FRAM (Fat Redistribution and Metabolic Change in HIV Infection), stratified by sex and HIV status.
Univariate associations of DXA with MRI were strongest for total and trunk fat (r≥0.92) and slightly weaker for leg (r≥0.87) and arm (r≥0.71) fat. Estimated limb fat averaged substantially higher by DXA than by MRI in HIV+ and control men and women (all p<0.0001). The trunk showed a much smaller difference between DXA and MRI, though it was still statistically significant (p<0.0001). Bland-Altman plots showed that differences and their variability increased with average fat; higher average limb fat was associated with a greater DXA vs. MRI difference in both controls and HIV+ subjects (both p<0.0001). Because controls have more limb fat than HIV+ subjects, this bias further inflates fat measured by DXA relative to MRI when controls are compared to HIV+ subjects; more HIV+ subjects had leg fat in the bottom decile of controls by DXA than by MRI (p<0.0001).
Although DXA and MRI-measured AT depots correlate strongly in HIV+ subjects and controls, differences increase as average fat increases, particularly for limb fat. DXA may estimate a higher peripheral lipoatrophy prevalence than MRI in HIV+ subjects.
DXA; MRI; adipose tissue depots; lipoatrophy; HIV infection
fibrinogen; HIV; protease inhibitors; non-nucleoside reverse transcriptase inhibitors
Background & Aims
Progressive familial intrahepatic cholestasis (PFIC) with normal serum levels of gamma-glutamyltranspeptidase can result from mutations in ATP8B1 (encoding familial intrahepatic cholestasis 1 [FIC1]) or ABCB11 (encoding bile salt export pump [BSEP]). We evaluated clinical and laboratory features of disease in patients diagnosed with PFIC, who carried mutations in ATP8B1 (FIC1 deficiency) or ABCB11 (BSEP deficiency). Our goal was to identify features that distinguish presentation and course of these 2 disorders, thus facilitating diagnosis and elucidating the differing consequences of ATP8B1 and ABCB11 mutations.
A retrospective multi-center study was conducted, using questionnaires and chart review. Available clinical and biochemical data from 145 PFIC patients with mutations in either ATP8B1 (61 “FIC1 patients”) or ABCB11 (84 “BSEP patients”) were evaluated.
At presentation, serum aminotransferase and bile salt levels were higher in BSEP patients; serum alkaline phosphatase values were higher, and serum albumin values were lower, in FIC1 patients. Elevated white blood cell counts, and giant or multinucleate cells at liver biopsy, were more common in BSEP patients. BSEP patients more often had gallstones and portal hypertension. Diarrhea, pancreatic disease, rickets, pneumonia, abnormal sweat tests, hearing impairment, and poor growth were more common in FIC1 patients. Among BSEP patients, the course of disease was less rapidly progressive in patients bearing the D482G mutation.
Severe forms of FIC1 and BSEP deficiency differed. BSEP patients manifested more severe hepatobiliary disease, while FIC1 patients showed greater evidence of extrahepatic disease.
cholestasis; genetics; transport protein; pediatrics; P-type ATPase; ATP binding cassette protein; ATP8B1; FIC1; ABCB11; BSEP
Liver disease is a leading cause of death in human immunodeficiency virus (HIV)–infected women; however, risk factors for hepatitis B virus (HBV) infection in this population have not been well studied.
We describe the seroprevalence and predictors of HBV infection in a cross-sectional analysis of 2132 women with and at risk for HIV infection enrolled in the Women’s Interagency HIV Study during the periods 1994–95 and 2001–02. Any test result positive for antibody to hepatitis B core antigen defined infection; those women with serological evidence of vaccine immunity were excluded from analysis. Women were stratified into those with a history of injection drug use (IDU), those with a history of noninjection drug use (non-IDU), and those with no history of illicit drug use.
Of 1606 HIV-infected and 526 HIV-uninfected women, 7% and 12%, respectively, appeared to be vaccine immune. After exclusion of these women, 43% of 1500 HIV-infected and 22% of 461 HIV-uninfected women had HBV infection. HBV infection prevalence differed among the IDU, non-IDU, and no illicit drug use groups (76%, 30%, and 17%, respectively; P < .0001). HBV infection was strongly associated with herpes simplex virus 2 (HSV-2) seropositivity in the IDU group (odds ratio [OR], 2.9; 95% confidence interval [CI], 1.6–5.4) and with a history of syphilis in the non-IDU group (OR, 2.7; 95% CI, 1.4–5.0).
We found a high prevalence of HBV infection in our cohort of women with and at risk for HIV infection. HSV-2 seropositivity and a history of syphilis appeared to be important correlates of HBV infection. Sexual transmission of HBV, particularly in those with a history of genital ulcer disease, should be a major focus of education in all high-risk groups.
Sex-based differences in CD4 T-cell (CD4) counts are well recognized, but the basis for these differences has not been identified. Conceivably, homeostatic factors may play a role in this process by regulating T-cell maintenance and repletion. Interleukin (IL)-7 is essential for normal T-cell production and homeostasis. We hypothesized that differences in IL-7 might contribute to sex-based differences in CD4 counts. Circulating IL-7 levels were analyzed in 299 HIV-1–infected women and men. Regression analysis estimated that IL-7 levels were 40% higher in women than in men (P = 0.0032) after controlling for CD4 count, age, and race. Given the important role of IL-7 in T-cell development and homeostasis, these findings suggest that higher IL-7 levels may contribute to higher CD4 counts in women.
interleukin-7; sexual dimorphism; CD4-positive T cells; cytokines; sex differences
Fibrosis stages from liver biopsies reflect liver damage from hepatitis C infection, but analysis is challenging due to their ordered but non-numeric nature, infrequent measurement, misclassification, and unknown infection times.
We used a non-Markov multistate model, accounting for misclassification, with multiple imputation of unknown infection times, applied to 1062 participants of whom 159 had multiple biopsies. Odds ratios (OR) quantified the estimated effects of covariates on progression risk at any given time.
Models estimated that progression risk decreased the more time participants had already spent in the current stage, African American race was protective (OR 0.75, 95% confidence interval 0.60 to 0.95, p = 0.018), and older current age increased risk (OR 1.33 per decade, 95% confidence interval 1.15 to 1.54, p = 0.0002). When controlled for current age, older age at infection did not appear to increase risk (OR 0.92 per decade, 95% confidence interval 0.47 to 1.79, p = 0.80). There was a suggestion that co-infection with human immunodeficiency virus increased risk of progression in the era of highly active antiretroviral treatment beginning in 1996 (OR 2.1, 95% confidence interval 0.97 to 4.4, p = 0.059). Other examined risk factors may influence progression risk, but evidence for or against this was weak due to wide confidence intervals. The main results were essentially unchanged using different assumed misclassification rates or imputation of age of infection.
The analysis avoided problems inherent in simpler methods, supported the previously suspected protective effect of African American race, and suggested that current age rather than age of infection increases risk. Decreasing risk of progression with longer time already spent in a stage was also previously found for post-transplant progression. This could reflect varying disease activity, with recent progression indicating active disease and high risk, while longer time already spent in a stage indicates quiescent disease and low risk.
The aim was to determine if frequently repeated glucose measurements mandated by an inpatient protocol led to falsely elevated reported rates of both hypo- and hyperglycemia.
In our academic medical center, a mandatory standardized subcutaneous insulin order form and protocol was implemented in May 2006. We analyzed point-of-care blood glucose (BG) measurements collected on all medical/surgical wards during the month of August in both 2005 and 2006 by all BGs measured, by patient admission, and by monitored patient-day. We then repeated all analyses using an algorithm that excluded BG values if another BG was measured less than 5 minutes later or 5-60 minutes earlier.
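The repeat-exclusion rule described above can be sketched for a single patient's measurements (an illustrative reconstruction, not the study's actual code):

```python
def exclude_repeats(times_min):
    """Return indices of BG measurements to keep for one patient.

    Applies the rule stated in the text: a BG value is excluded if another
    BG was measured less than 5 minutes after it, or 5-60 minutes before it.
    `times_min` holds measurement times in minutes from an arbitrary origin.
    """
    kept = []
    for i, t in enumerate(times_min):
        is_repeat = any(
            (0 < u - t < 5) or (5 <= t - u <= 60)
            for j, u in enumerate(times_min)
            if j != i
        )
        if not is_repeat:
            kept.append(i)
    return kept
```

For measurements at 0, 2, 30, and 120 minutes, the values at 0 (re-checked 2 minutes later) and 30 (28 minutes after a prior check) are dropped, leaving the 2- and 120-minute values, so a single clinical episode is counted once.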
In 2005 versus 2006, there were 7034 versus 8016 glucoses measured in 397 versus 389 patients over 1704 versus 1710 patient days, respectively. Analyses based on patient-day balanced differences in BG measurement frequency and patient length of stay. In both years, failure to exclude repeat values overestimated both the proportion of patient days with hypoglycemia (3.5% versus 1.8% in 2005, p = .003; 2.6% versus 1.3% in 2006, p = .007) and severe hyperglycemia (9.3% versus 7.4% in 2005, p = .09; 7.7% versus 5.9% in 2006, p = .08). Mean, median, and proportion of patient-day means within our target range (80-150 mg/dl) were not significantly different.
Glucometric reports should exclude repeated BG measurements from a single clinical episode of hypo- or hyperglycemia in order to accurately reflect inpatient glycemic control.
glucometrics; glucose monitoring; hypoglycemia; inpatient diabetes
HIV-1 viral load in early infection predicts the risk of subsequent disease progression but the factors responsible for the differences between individuals in viral load during this period have not been fully identified. We sought to determine the relationship between HIV-1 RNA levels in the source partner and recently infected recipient partners within transmission pairs.
We recruited donor partners of persons who presented with acute or recent (<6 months) HIV infection. Transmission was confirmed by phylogenetic comparison of virus sequences in the donor and recipient partners. We compared viral load in the donor partner and the recipient during the first 6 months of HIV infection.
We identified 24 transmission pairs. The median estimated time from infection to evaluation in the acutely/recently infected recipients was 72 days. Viral load in the donor was significantly correlated with viral load at presentation in the recipient (r=0.55, P=0.006).
The strong correlation between HIV-1 RNA levels within HIV transmission pairs indicates that virus characteristics are an important determinant of viral load in early HIV infection.
HIV-1 RNA; acute HIV-1 infection; HIV-1 transmission; viral load set-point; HIV-1 pathogenesis
Multistate modeling methods are well-suited for analysis of some chronic diseases that move through distinct stages. The memoryless or Markov assumptions typically made, however, may be suspect for some diseases, such as hepatitis C, where there is interest in whether prognosis depends on history. This paper describes methods for multistate modeling where transition risk can depend on any property of past progression history, including time spent in the current stage and the time taken to reach the current stage. Analysis of 901 measurements of fibrosis in 401 patients following liver transplantation found decreasing risk of progression as time in the current stage increased, even when controlled for several fixed covariates. Longer time to reach the current stage did not appear associated with lower progression risk. Analysis of simulation scenarios based on the transplant study showed that greater misclassification of fibrosis produced more technical difficulties in fitting the models and poorer estimation of covariate effects than did less misclassification or error-free fibrosis measurement. The higher risk of progression when less time has been spent in the current stage could be due to varying disease activity over time, with recent progression indicating an “active” period and consequent higher risk of further progression.
fibrosis; hepatitis C; liver transplant; memoryless assumptions; multistate modeling
To determine the relationship of HIV infection, demographic and cardiovascular disease (CVD) risk factors with mortality in the recent HAART era.
Vital status was ascertained from 2004–2007 in 922 HIV-infected participants and 280 controls in the Study of Fat Redistribution and Metabolic Change in HIV infection; 469 of the HIV-infected participants were included in the analysis comparing HIV infection with similar-age controls. Multivariable exponential survival regression (adjusting for demographic and CVD factors) estimated hazard ratios (HR) for death.
After 5 years of follow-up, the overall adjusted mortality HR was 3.4 (95% confidence interval [CI]: 1.35, 8.5); the HR was 6.3 among HIV-infected participants with CD4 <200 (95% CI: 2.2, 18.2), 4.3 with CD4 200–350 (95% CI: 1.14, 16.0), and 2.3 with CD4 >350 (95% CI: 0.78, 6.9). Among the HIV-infected, current smoking (HR=2.73 vs. never smokers, 95% CI: 1.64, 4.5) and older age (HR=1.61 per decade, 95% CI: 1.27, 2.1) were independent risk factors for death; higher baseline CD4 count was associated with lower risk (HR=0.65 per CD4 doubling, 95% CI: 0.58, 0.73).
HIV infection was associated with a 3-fold mortality risk compared to controls after adjustment for demographic and CVD risk factors. In addition to low baseline CD4 count, older age and current smoking were strong and independent predictors of mortality in a US cohort of HIV-infected participants in clinical care.
Cardiovascular disease; Mortality; HIV infection; FRAM
Background & Aims
Data are conflicting on the benefit of selective serotonin reuptake inhibitors (SSRIs) for patients with irritable bowel syndrome (IBS), and the role of visceral sensitivity in IBS pathophysiology is unclear. We assessed the effects of citalopram and the relationships among symptoms, quality of life (QOL), and rectal sensitivity in non-depressed patients with IBS.
Patients from primary, secondary, and tertiary care centers were randomly assigned to groups given citalopram (20 mg/day for 4 weeks, then 40 mg/day for 4 weeks) or placebo. The study was double masked with concealed allocation. Symptoms were assessed weekly; IBS-QOL and rectal sensation (from barostat measurements) were determined at the beginning and end of the study.
Patients who received citalopram did not have a higher rate of adequate relief from IBS symptoms than subjects who received placebo (12/27, 44% vs 15/27, 56%, respectively; P=0.59), regardless of IBS subtype. The odds ratio for weekly response to citalopram vs placebo was 0.80 (95% confidence interval [CI] 0.61–1.04). Citalopram did not reduce specific symptoms or increase IBS-QOL scores; it had no effect on rectal compliance and a minimal effect on sensation. Changes in IBS-QOL score and the pressure eliciting pain were correlated (r=0.33, 95% CI 0.03–0.57); changes in symptoms were not correlated with changes in rectal sensitivity or IBS-QOL scores.
Citalopram was not superior to placebo in treating non-depressed IBS patients. Changes in symptoms were not correlated with changes in rectal sensation assessed by barostat. Any benefit of citalopram in non-depressed IBS patients is likely to be modest.
Transmitted HIV-1 drug resistance (TDR) is an ongoing public health problem, representing 10–20% of new HIV infections in many geographic areas. TDR usually arises from two main sources: individuals on antiretroviral therapy (ART) who are failing to achieve virologic suppression, and individuals who acquired TDR and transmit it while still ART-naïve. TDR rates can be impacted when novel antiretroviral medications are introduced that allow for greater virologic suppression of source patients. Although several new HIV medications were introduced starting in late 2007, including raltegravir, maraviroc, and etravirine, it is not known whether the prevalence of TDR was subsequently affected in 2008–2009.
We performed population sequence genotyping on individuals who were diagnosed with acute or early HIV infection (<6 months duration) and who enrolled in the Options Project, a prospective cohort, between 2002 and 2009. We used logistic regression to compare the odds of acquiring drug-resistant HIV before versus after the arrival of the new ART agents (2005–2007 vs. 2008–2009). From 2003 to 2007, TDR rose from 7% to 24%. TDR prevalence was 15% in both 2008 and 2009. While the odds of acquiring TDR were lower in 2008–2009 compared to 2005–2007, the difference was not statistically significant (odds ratio 0.65, 95% CI 0.31–1.38; p = 0.27).
Our study suggests that transmitted drug resistance rose from 2003–2007, but this upward trend did not continue in 2008 and 2009. Nevertheless, the TDR prevalence in 2008–2009 remained substantial, emphasizing that improved management strategies for drug-resistant HIV are needed if TDR is to be further reduced. Continued surveillance for TDR will be important in understanding the full impact of new antiretroviral medications.
To determine the association of self-perceived fat gain or fat loss in central and peripheral body sites with adherence to highly active antiretroviral therapy (HAART) in HIV-seropositive women, 1,671 women from the Women’s Interagency HIV Study who reported HAART use between April 1999 and March 2006 were studied. Adherence was defined as a report of taking HAART ≥95% of the time during the prior 6 months. Participant report of any increase or decrease in the chest, abdomen, or upper back in the prior 6 months defined central fat gain and central fat loss, respectively. Report of any increase or decrease in the face, arms, legs, or buttocks in the prior 6 months defined peripheral fat gain or peripheral fat loss. Younger age, being African-American (vs. White non-Hispanic), a history of injection drug use (IDU), higher HIV RNA at the previous visit, and alcohol consumption were significant predictors of HAART non-adherence (P<0.05). After multivariate adjustment, self-perception of central fat gain was associated with a 1.5-fold increased odds of HAART non-adherence compared to no perceived change. Perception of fat gain in the abdomen was the strongest predictor of HAART non-adherence when individual body sites were examined. Women who perceive central fat gain, particularly in the abdomen, are at risk for decreased adherence to HAART, despite recent evidence suggesting that HIV and specific antiretroviral drugs are more commonly associated with fat loss than fat gain.
Lipodystrophy; HIV; Women; HAART adherence; body image perception
End-stage renal disease disproportionately affects black persons, but it is unknown when in the course of chronic kidney disease racial differences arise. Understanding the natural history of racial differences in kidney disease may help guide efforts to reduce disparities.
We compared white/black differences in the risk of end-stage renal disease and death by level of estimated glomerular filtration rate (eGFR) at baseline in a national sample of 2,015,891 veterans between 2001 and 2005.
Rates of end-stage renal disease among black patients exceeded those among white patients at all levels of baseline eGFR. The adjusted hazard ratios (HR) for end-stage renal disease associated with black versus white race for patients with an eGFR ≥90, 60–89, 45–59, 30–44, 15–29, and <15 mL/min/1.73 m², respectively, were 2.14 (95% confidence interval [CI], 1.72–2.65), 2.30 (95% CI, 2.02–2.61), 3.08 (95% CI, 2.74–3.46), 2.47 (95% CI, 2.26–2.70), 1.86 (95% CI, 1.75–1.98), and 1.23 (95% CI, 1.12–1.34). We observed a similar pattern for mortality, with equal or higher rates of death among black persons at all levels of eGFR. The highest risk of mortality associated with black race was also observed among those with an eGFR of 45–59 mL/min/1.73 m² (HR 1.32; 95% CI, 1.27–1.36).
Racial differences in the risk of end-stage renal disease appear early in the course of kidney disease and are not explained by a survival advantage among black patients. Efforts to identify and slow progression of chronic kidney disease at earlier stages may be needed to reduce racial disparities.
kidney disease; racial disparities; mortality
Small intensive pharmacokinetic (PK) studies of medications in early-phase trials cannot identify the range of factors that influence drug exposure in heterogeneous populations. We performed PK studies in large numbers of HIV-infected women on nonnucleoside reverse transcriptase inhibitors (NNRTIs) under conditions of actual use to assess patient characteristics that influence exposure, and evaluated the relationship between exposure and response.
225 women on NNRTI-based antiretroviral regimens from the Women's Interagency HIV Study (WIHS) were enrolled into 12- or 24-hour PK studies. Extensive demographic, laboratory, and medication covariate data were collected before and during the visit for use in multivariate models. Total NNRTI drug exposure was estimated by the area under the concentration–time curve (AUC).
Hepatic inflammation and renal insufficiency were independently associated with increased nevirapine (NVP) exposure in multivariate analyses; crack cocaine use, high-fat diets, and amenorrhea were associated with decreased levels (n=106). Higher efavirenz (EFV) exposure was seen with increased transaminase and albumin levels and orange juice consumption; tenofovir use, increased weight, being African-American, and amenorrhea were associated with decreased exposure (n=119). With every 10-fold increase in NVP or EFV exposure, participants were 3.3 and 3.6 times as likely to exhibit virologic suppression, respectively. Patients with higher drug exposure were also more likely to report side effects on therapy.
Our study identifies and quantifies previously unrecognized factors modifying NNRTI exposure in the "real-world" setting. Comprehensive PK studies in representative populations are feasible and may ultimately lead to dose optimization strategies in patients at risk for failure or adverse events.
HIV; antiretrovirals; nevirapine; efavirenz; pharmacokinetics; drug exposure; women
The belief remains widespread that medical research studies must have statistical power of at least 80% to be scientifically sound, and peer reviewers often question whether power is high enough.
This requirement and the methods for meeting it have severe flaws. Notably, the true nature of how sample size influences a study's projected scientific or practical value precludes any meaningful blanket designation of <80% power as "inadequate". In addition, standard calculations are inherently unreliable, and focusing only on power neglects a completed study's most important results: estimates and confidence intervals. Current conventions harm the research process in many ways: promoting misinterpretation of completed studies, eroding scientific integrity, giving reviewers arbitrary power, inhibiting innovation, perverting ethical standards, wasting effort, and wasting money. Medical research would benefit from alternative approaches, including established value of information methods, simple choices based on cost or feasibility that have recently been justified, sensitivity analyses that examine a meaningful array of possible findings, and following previous analogous studies. To promote more rational approaches, research training should cover the issues presented here, peer reviewers should be extremely careful before raising issues of "inadequate" sample size, and reports of completed studies should not discuss power.
Common conventions and expectations concerning sample size are deeply flawed, cause serious harm to the research process, and should be replaced by more rational alternatives.
Antiretroviral (ARV) therapies fail when behavioral or biologic factors lead to inadequate medication exposure. Currently available methods to assess ARV exposure are limited. Levels of ARVs in hair reflect plasma concentrations over weeks to months and may provide a novel method for predicting therapeutic responses.
The Women's Interagency HIV Study, a prospective cohort of HIV-infected women, provided the basis for developing and assessing methods to measure two commonly prescribed protease inhibitors (PIs), lopinavir (LPV) and atazanavir (ATV), in small hair samples. We examined the association between hair PI levels and initial virologic responses to therapy in multivariate logistic regression models.
ARV concentrations in hair were strongly and independently associated with treatment response for 224 women starting a new PI-based regimen. For participants initiating ritonavir-boosted LPV (LPV/RTV), the odds ratio (OR) for virologic suppression was 39.8 (95% CI 2.8–564) for those with LPV hair levels in the top tertile (>1.9 ng/mg) compared to the bottom (≤0.41 ng/mg), controlling for self-reported adherence, age, race, starting viral load and CD4 count, and prior PI experience. For women starting ATV, the adjusted OR for virologic success was 7.7 (95% CI 2.0–29.7) for those with hair concentrations in the top tertile (>3.4 ng/mg) compared to the lowest (≤1.2 ng/mg).
PI levels in small hair samples were the strongest independent predictor of virologic success in a diverse group of HIV-infected adults. This noninvasive method for determining ARV exposure may have particular relevance for the epidemic in resource-poor settings due to the ease of collecting and storing hair.
Hair levels; therapeutic drug monitoring; antiretroviral exposure; virologic response; protease inhibitors; atazanavir; lopinavir; WIHS cohort