Results 1–25 of 28
1.  HIV Viremia and T-Cell Activation Differentially Affect the Performance of Glomerular Filtration Rate Equations Based on Creatinine and Cystatin C 
PLoS ONE  2013;8(12):e82028.
Background
Serum creatinine and cystatin C are used as markers of glomerular filtration rate (GFR). The performance of these GFR markers relative to exogenously measured GFR (mGFR) in HIV-positive individuals is not well established.
Methods
We assessed the performance of the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equations based on serum concentrations of creatinine (eGFRcr), cystatin C (eGFRcys) and both biomarkers combined (eGFRcr-cys) in 187 HIV-positive and 98 HIV-negative participants. Measured GFR was calculated by plasma iohexol clearance. Bias and accuracy were defined as the difference between eGFR and mGFR and the percentage of eGFR observations within 30% of mGFR, respectively. Activated CD4 and CD8 T-cells (CD38+ HLA-DR+) were measured by flow cytometry.
Results
The median mGFR was >100 ml/min/1.73 m2 in both groups. All equations tended to be less accurate in HIV-positive than in HIV-negative subjects, with eGFRcr-cys being the most accurate overall. In the HIV-positive group, eGFRcys was significantly less accurate and more biased than eGFRcr and eGFRcr-cys. Additionally, eGFRcys bias and accuracy were strongly associated with use of antiretroviral therapy, HIV RNA suppression, and percentages of activated CD4 or CD8 T-cells. Hepatitis C seropositivity was associated with larger eGFRcys bias in both HIV-positive and HIV-negative groups. In contrast, eGFRcr accuracy and bias were not associated with HIV-related factors, T-cell activation, or hepatitis C.
Conclusions
The performance of eGFRcys relative to mGFR was strongly correlated with HIV treatment factors and markers of T-cell activation, which may limit its usefulness as a GFR marker in this population.
doi:10.1371/journal.pone.0082028
PMCID: PMC3871673  PMID: 24376511
2.  Alcohol Consumption and CD4 T-cell count response among persons initiating antiretroviral therapy 
Background
We evaluated the longitudinal association of alcohol use with immunological response to combination antiretroviral therapy (ART) among HIV infected individuals.
Methods
This was a prospective cohort study of individuals initiating ART. Participants underwent an Audio Computer-Assisted Self-Interview querying drug and alcohol use within 6 months of treatment. Immunological response to ART was defined by CD4 T-cell count (CD4). Primary independent variables were self-reported number of drinks consumed per drinking day (quantity) and days of alcohol consumption in a typical week (frequency). We used linear mixed-effects models to quantify the association between CD4 T-cell count and alcohol quantity and frequency, and Cox proportional hazards models to estimate the relative hazard of an increase of 100, 150, and 200 CD4 cells/mm3 per additional drink per drinking day. Analyses were stratified by gender. Viral suppression was examined as a time-varying covariate.
Results
Between 2000 and 2008, 1107 individuals were eligible for inclusion in this study. There was no statistically significant difference in CD4 T-cell count by average drinks per drinking day at any frequency of alcohol use, irrespective of gender or viral suppression. Similarly, we found no difference in the hazard ratio for drinks per drinking day within the categories of drinking frequency for time to CD4 T-cell count increases of 100, 150, and 200 cells/mm3.
Conclusions
Among individuals initiating antiretroviral therapy (ART) the benefits of therapy and viral suppression on the immune system outweigh detrimental effects of alcohol, reinforcing the importance of initiating ART and ensuring adequate adherence to therapy.
doi:10.1097/QAI.0b013e3182712d39
PMCID: PMC3541505  PMID: 22955054
HIV; alcohol; immune response; CD4 T-cell count; antiretroviral therapy
4.  Insurance Status, not Race, is Associated with Mortality After an Acute Cardiovascular Event in Maryland 
Journal of General Internal Medicine  2012;27(10):1368-1376.
ABSTRACT
BACKGROUND
It is unclear how lack of health insurance or otherwise being underinsured contributes to observed racial disparities in health outcomes related to cardiovascular disease.
OBJECTIVE
To determine the relative risk of death associated with insurance and race after hospital admission for an acute cardiovascular event.
DESIGN
Prospective cohort study in three hospitals in Maryland representing different demographics between 1993 and 2007.
PATIENTS
Patients with an incident admission who were either white or black and had private insurance, state-based insurance, or no insurance. 4,908 patients were diagnosed with acute myocardial infarction, 6,759 with coronary atherosclerosis, and 1,293 with stroke.
MAIN MEASURES
Demographic and clinical patient-level data were collected from an administrative billing database and neighborhood household income was collected from the 2000 US Census. The outcome of all-cause mortality was collected from the Social Security Death Master File.
KEY RESULTS
In an analysis adjusted for race, disease severity, location, and neighborhood household income, among other confounders, being underinsured was associated with an increased risk of death after myocardial infarction (relative hazard, 1.31 [95% CI: 1.09, 1.59]), coronary atherosclerosis (relative hazard, 1.50 [95% CI: 1.26, 1.80]) or stroke (relative hazard, 1.25 [95% CI: 0.91, 1.72]). Black race was not associated with an increased risk of death after myocardial infarction (relative hazard, 1.03 [95% CI: 0.85, 1.24]) or after stroke (relative hazard, 1.18 [95% CI: 0.86, 1.61]), and was associated with a decreased risk of death after coronary atherosclerosis (relative hazard, 0.82 [95% CI: 0.69, 0.98]).
CONCLUSIONS
Race was not associated with an increased risk of death, before or after adjustment. Being underinsured was strongly associated with death among those admitted with myocardial infarction or a coronary atherosclerosis event. Our results support growing evidence implicating insurance status and socioeconomic factors as important drivers of health disparities, and potentially racial disparities.
Electronic supplementary material
The online version of this article (doi:10.1007/s11606-012-2147-9) contains supplementary material, which is available to authorized users.
doi:10.1007/s11606-012-2147-9
PMCID: PMC3445670  PMID: 22821570
health disparities; insurance coverage; socioeconomic status; race; cardiovascular disease
5.  Changes in sexual and drug-related risk behavior following antiretroviral therapy initiation among HIV-infected injection drug users 
AIDS (London, England)  2012;26(18):2383-2391.
Objective
To evaluate whether HAART is associated with subsequent sexual and drug-related risk behavior compensation among injection drug users (IDUs).
Design
A community-based cohort study of 362 HIV-infected IDUs initiating HAART in Baltimore, Maryland.
Methods
HAART use and risk behavior were assessed at 8316 biannual study visits (median 23). Using logistic regression with generalized estimating equations (GEE), we examined the effect of HAART initiation on changes in risk behavior while adjusting for sociodemographics, alcohol use, CD4+ cell count, year of initiation and consistency of HAART use.
Results
At HAART initiation, participants were a median of 44.4 years old, 71.3% men and 95.3% African-American. In multivariable analysis, HAART initiation was associated with a 75% reduction in the likelihood of unprotected sex [adjusted odds ratio (aOR) 0.25; 95% confidence interval (CI), 0.19–0.32] despite no change in overall sexual activity (aOR 0.95; 0.80–1.12). Odds of any injecting decreased by 38% (aOR 0.62; 0.51–0.75) after HAART initiation. Among the subset of persistent injectors, needle-sharing increased nearly two-fold (aOR 1.99; 1.57–2.52). Behavioral changes were sustained for more than 5 years after HAART initiation and did not differ by consistency of HAART use. Reporting specific high-risk behaviors in the year prior to initiation was a robust predictor of engaging in those behaviors subsequent to HAART.
Conclusion
Overall, substantial declines in sexual risk-taking and active injecting argue against significant behavioral compensation among IDUs following HAART initiation. These data also provide evidence to support identifying persons with risky pre-HAART behavior for targeted behavioral intervention.
doi:10.1097/QAD.0b013e32835ad438
PMCID: PMC3678983  PMID: 23079804
antiretroviral therapy; HIV prevention; injecting; injection drug users; risk compensation; sexual behavior
6.  Risk of Anal Cancer in HIV-Infected and HIV-Uninfected Individuals in North America 
In a large North American cohort study, anal cancer incidence rates were substantially higher for HIV-infected men who have sex with men, other men, and women compared with HIV-uninfected individuals. Rates increased from 1996–1999 to 2000–2003 but plateaued by 2004–2007.
Background. Anal cancer is one of the most common cancers affecting individuals infected with human immunodeficiency virus (HIV), although few have evaluated rates separately for men who have sex with men (MSM), other men, and women. There are also conflicting data regarding calendar trends.
Methods. In a study involving 13 cohorts from North America with follow-up between 1996 and 2007, we compared anal cancer incidence rates among 34 189 HIV-infected (55% MSM, 19% other men, 26% women) and 114 260 HIV-uninfected individuals (90% men).
Results. Among men, the unadjusted anal cancer incidence rates per 100 000 person-years were 131 for HIV-infected MSM, 46 for other HIV-infected men, and 2 for HIV-uninfected men, corresponding to demographically adjusted rate ratios (RRs) of 80.3 (95% confidence interval [CI], 42.7–151.1) for HIV-infected MSM and 26.7 (95% CI, 11.5–61.7) for other HIV-infected men compared with HIV-uninfected men. HIV-infected women had an anal cancer rate of 30/100 000 person-years, and no cases were observed for HIV-uninfected women. In a multivariable Poisson regression model, among HIV-infected individuals, the risk was higher for MSM compared with other men (RR, 3.3; 95% CI, 1.8–6.0), but no difference was observed comparing women with other men (RR, 1.0; 95% CI, 0.5–2.2). In comparison with the period 2000–2003, HIV-infected individuals had an adjusted RR of 0.5 (95% CI, 0.3–0.9) in 1996–1999 and 0.9 (95% CI, 0.6–1.2) in 2004–2007.
Conclusions. Anal cancer rates were substantially higher for HIV-infected MSM, other men, and women compared with HIV-uninfected individuals, suggesting a need for universal prevention efforts. Rates increased after the early antiretroviral therapy era and then plateaued.
doi:10.1093/cid/cir1012
PMCID: PMC3297645  PMID: 22291097
7.  Impact of Bariatric Surgery on Healthcare Utilization and Costs among Patients with Diabetes 
Medical care  2012;50(1):58-65.
Background
The effect of bariatric surgery on health care utilization and costs among individuals with type 2 diabetes remains unclear.
Objective
To examine healthcare utilization and costs in an insured cohort of individuals with type 2 diabetes after bariatric surgery.
Research Design
Cohort study derived from administrative data from 2002–2008 from 7 Blue Cross Blue Shield Plans.
Subjects
7,806 individuals with type 2 diabetes who had bariatric surgery
Measures
Cost (inpatient, outpatient, pharmacy, other) and utilization (number of inpatient days, outpatient visits, specialist visits).
Results
Compared to pre-surgical costs, the ratio of hospital costs (excluding the initial surgery) among beneficiaries who had any hospital costs was higher in years 2 through 6 of the post-surgery period and increased over time [post 1: OR = 0.58 (95% CI: 0.50, 0.67); post 6: OR = 3.43 (95% CI: 2.60, 4.53)]. In comparison to the pre-surgical period, the odds of having any healthcare costs were lower in the post-surgery period and remained relatively flat over time. Among those with hospitalizations, the adjusted ratio of inpatient days was higher after surgery [post 1: OR = 1.05 (95% CI: 0.94, 1.16); post 6: OR = 2.77 (95% CI: 1.57, 4.90)]. Among those with primary care visits, the adjusted odds ratio was lower after surgery [post 1: OR = 0.80 (95% CI: 0.78, 0.82); post 6: OR = 0.66 (95% CI: 0.57, 0.76)].
Conclusion
In the six years following surgery, individuals with type 2 diabetes did not have lower healthcare costs than before surgery.
doi:10.1097/MLR.0b013e3182290349
PMCID: PMC3241012  PMID: 22167064
8.  Risk factors for chronic kidney disease in a large cohort of HIV-1 infected individuals initiating antiretroviral therapy in routine care 
AIDS (London, England)  2012;26(15):1907-1915.
Objective
To examine long-term effects of antiretroviral therapy (ART) on kidney function, we evaluated the incidence and risk factors for chronic kidney disease (CKD) among ART-naive, HIV-infected adults and compared changes in estimated glomerular filtration rates (eGFR) before and after starting ART.
Methods
Multicenter observational cohort study of patients with at least one serum creatinine measurement before and after initiating ART. Cox proportional hazards models and marginal structural models examined CKD risk factors; mixed-effects linear models examined eGFR slopes.
Results
Three thousand, three hundred and twenty-nine patients met entry criteria, contributing 10 099 person-years of observation on ART. ART was associated with a significantly slower rate of eGFR decline (from −2.18 to −1.37 ml/min per 1.73 m2 per year; P = 0.02). The incidence of CKD defined by eGFR thresholds of 60, 45 and 30 ml/min per 1.73 m2 was 10.5, 3.4 and 1.6 per 1000 person-years, respectively. In adjusted analyses, black race, hepatitis C coinfection, lower time-varying CD4 cell count and higher time-varying viral load on ART were associated with higher CKD risk, and the magnitude of these risks increased with more severe CKD. The combination of tenofovir and a ritonavir-boosted protease inhibitor (rPI) was also associated with higher CKD risk [hazard odds ratio for an eGFR threshold <60 ml/min per 1.73 m2: 3.35 (95% confidence interval (CI) = 1.40–8.02)], which developed in 5.7% of patients after 4 years of exposure to this regimen type.
Conclusion
ART was associated with reduced CKD risk in association with CD4 cell restoration and plasma viral load suppression, despite an increased CKD risk that was associated with initial regimens that included tenofovir and rPI.
doi:10.1097/QAD.0b013e328357f5ed
PMCID: PMC3531628  PMID: 22824630
antiretroviral therapy; chronic kidney disease; tenofovir
9.  HIV Infection, Immune Suppression, and Uncontrolled Viremia Are Associated With Increased Multimorbidity Among Aging Injection Drug Users 
Human immunodeficiency virus (HIV)-infected persons, especially those with advanced immune suppression or uncontrolled viremia, experienced increased numbers of multimorbid non-AIDS-defining conditions compared with epidemiologically comparable HIV-uninfected persons. Many of these clinically identified chronic diseases were unrecognized and untreated.
Background. Despite an increasing burden of age-associated non-AIDS outcomes, few studies have investigated the prevalence or correlates of multimorbidity among aging human immunodeficiency virus (HIV)–infected and epidemiologically comparable at-risk populations.
Methods. Among 1262 AIDS Linked to the IntraVenous Experience (ALIVE) study participants followed in a community-based observational cohort, we defined the prevalence of 7 non-AIDS-defining chronic conditions (diabetes, obstructive lung disease, liver disease, anemia, obesity, kidney dysfunction, and hypertension) using clinical and laboratory criteria. Ordinal logistic regression was used to model the odds of increased multimorbidity associated with demographic, behavioral, and clinical factors. Self-reported prevalence was compared with clinically defined prevalence.
Results. Participants were a median of 48.9 years of age; 65.1% were male, 87.5% were African-American, and 28.7% were HIV infected. In multivariable analysis, HIV infection (odds ratio [OR], 1.50; 95% confidence interval [CI], 1.13–1.99) was positively associated with increased multimorbidity. Among HIV-infected participants, multimorbidity was increased with lower nadir CD4 T-cell count (OR, 1.14 per 100-cell decrease; 95% CI, 1.00–1.29) and higher current HIV RNA (OR, 1.32 per log10 increase; 95% CI, 1.08–1.60). Older age, being female, not using cigarettes or drugs, and having depressive symptoms were also associated with increased multimorbidity. A substantial proportion of multimorbid conditions in HIV-infected and HIV-uninfected participants were unrecognized and untreated.
Conclusions. HIV-infected participants experienced increased numbers of multimorbid conditions; risk increased with advanced immunosuppression and higher viremia. These results underscore the heavy burden of multimorbidity associated with HIV and highlight the need for incorporating routine assessment and integrated management of chronic diseases as part of comprehensive healthcare for aging, HIV-infected persons.
doi:10.1093/cid/cir673
PMCID: PMC3214585  PMID: 21976463
10.  Viremia Copy-Years Predicts Mortality Among Treatment-Naive HIV-Infected Patients Initiating Antiretroviral Therapy 
Viremia copy-years predicted all-cause mortality independent of traditional, cross-sectional viral load measures and time-updated CD4+ T-lymphocyte count in antiretroviral therapy-treated patients suggesting cumulative human immunodeficiency virus replication causes harm independent of its effect on the degree of immunodeficiency.
Background. Cross-sectional plasma human immunodeficiency virus (HIV) viral load (VL) measures have proven invaluable for clinical and research purposes. However, cross-sectional VL measures fail to capture cumulative plasma HIV burden longitudinally. We evaluated the cumulative effect of exposure to HIV replication on mortality following initiation of combination antiretroviral therapy (ART).
Methods. We included treatment-naive HIV-infected patients starting ART from 2000 to 2008 at 8 Center for AIDS Research Network of Integrated Clinical Systems sites. Viremia copy-years, a time-varying measure of cumulative plasma HIV exposure, were determined for each patient using the area under the VL curve. Multivariable Cox models were used to evaluate the independent association of viremia copy-years for all-cause mortality.
Results. Among 2027 patients contributing 6579 person-years of follow-up, the median viremia copy-years was 5.3 log10 copy × y/mL (interquartile range: 4.9–6.3 log10 copy × y/mL), and 85 patients (4.2%) died. When evaluated separately, viremia copy-years (hazard ratio [HR] = 1.81 per log10 copy × y/mL; 95% confidence interval [CI], 1.51–2.18 per log10 copy × y/mL), 24-week VL (1.74 per log10 copies/mL; 95% CI, 1.48–2.04 per log10 copies/mL), and most recent VL (HR = 1.89 per log10 copies/mL; 95% CI: 1.63–2.20 per log10 copies/mL) were associated with increased mortality. When simultaneously evaluating VL measures and controlling for other covariates, viremia copy-years increased mortality risk (HR = 1.44 per log10 copy × y/mL; 95% CI, 1.07–1.94 per log10 copy × y/mL), whereas no cross-sectional VL measure was independently associated with mortality.
Conclusions. Viremia copy-years predicted all-cause mortality independent of traditional, cross-sectional VL measures and time-updated CD4+ T-lymphocyte count in ART-treated patients, suggesting cumulative HIV replication causes harm independent of its effect on the degree of immunodeficiency.
doi:10.1093/cid/cir526
PMCID: PMC3189165  PMID: 21890751
11.  Risk Factors for Tuberculosis After Highly Active Antiretroviral Therapy Initiation in the United States and Canada: Implications for Tuberculosis Screening 
The Journal of Infectious Diseases  2011;204(6):893-901.
Background. Screening for tuberculosis prior to highly active antiretroviral therapy (HAART) initiation is not routinely performed in low-incidence settings. Identifying factors associated with developing tuberculosis after HAART initiation could focus screening efforts.
Methods. Sixteen cohorts in the United States and Canada contributed data on persons infected with human immunodeficiency virus (HIV) who initiated HAART December 1995–August 2009. Parametric survival models identified factors associated with tuberculosis occurrence.
Results. Of 37 845 persons in the study, 145 were diagnosed with tuberculosis after HAART initiation. Tuberculosis risk was highest in the first 3 months of HAART (20 cases; 215 cases per 100 000 person-years; 95% confidence interval [CI]: 131–333 per 100 000 person-years). In a multivariate Weibull proportional hazards model, baseline CD4+ lymphocyte count <200, black race, other nonwhite race, Hispanic ethnicity, and history of injection drug use were independently associated with tuberculosis risk. In addition, in a piecewise Weibull model, increased baseline HIV-1 RNA was associated with increased tuberculosis risk in the first 3 months; male sex tended to be associated with increased risk.
Conclusions. Screening for active tuberculosis prior to HAART initiation should be targeted to persons with baseline CD4 <200 lymphocytes/mm3 or increased HIV-1 RNA, persons of nonwhite race or Hispanic ethnicity, history of injection drug use, and possibly male sex.
doi:10.1093/infdis/jir421
PMCID: PMC3156918  PMID: 21849286
12.  Missing Data on the Estimation of the Prevalence of Accumulated Human Immunodeficiency Virus Drug Resistance in Patients Treated With Antiretroviral Drugs in North America 
American Journal of Epidemiology  2011;174(6):727-735.
Determination of the prevalence of accumulated antiretroviral drug resistance among persons infected with human immunodeficiency virus (HIV) is complicated by the lack of routine measurement in clinical care. By using data from 8 clinic-based cohorts from the North American AIDS Cohort Collaboration on Research and Design, drug-resistance mutations from those with genotype tests were determined and scored using the Genotypic Resistance Interpretation Algorithm developed at Stanford University. For each year from 2000 through 2005, the prevalence was calculated using data from the tested subset, assumptions that incorporated clinical knowledge, and multiple imputation methods to yield a complete data set. A total of 9,289 patients contributed data to the analysis; 3,959 had at least 1 viral load above 1,000 copies/mL, of whom 2,962 (75%) had undergone at least 1 genotype test. Using these methods, the authors estimated that the prevalence of accumulated resistance to 2 or more antiretroviral drug classes had increased from 14% in 2000 to 17% in 2005 (P < 0.001). In contrast, the prevalence of resistance in the tested subset declined from 57% to 36% for 2 or more classes. The authors’ use of clinical knowledge and multiple imputation methods revealed trends in HIV drug resistance among patients in care that were markedly different from those observed using only data from patients who had undergone genotype tests.
doi:10.1093/aje/kwr141
PMCID: PMC3202147  PMID: 21813792
antiretroviral therapy, highly active; drug resistance; genotype; HIV
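The imputation step described above can be illustrated with a toy version. The sketch below is not the authors' implementation (which used clinical knowledge and full multiple-imputation machinery): it assumes a single hypothetical binary covariate drives resistance among the untested, draws the missing resistance indicators from covariate-specific rates observed in the genotyped subset, and averages the prevalence estimate across imputations in the spirit of Rubin's rules.

```python
import random

def mi_prevalence(tested, covar_all, n_imputations=20, seed=1):
    """Multiply-imputed prevalence of drug resistance.

    tested    -- dict: patient -> 0/1 resistance, for the genotyped subset only
    covar_all -- dict: patient -> 0/1 covariate (hypothetical), for everyone
    Untested patients have resistance drawn from the covariate-specific rate
    seen in the tested subset; point estimates are averaged over imputations.
    """
    rng = random.Random(seed)
    # Covariate-specific resistance rates among the tested subset
    rate = {}
    for c in (0, 1):
        sub = [r for p, r in tested.items() if covar_all[p] == c]
        rate[c] = sum(sub) / len(sub) if sub else 0.0
    estimates = []
    for _ in range(n_imputations):
        total = 0
        for p, c in covar_all.items():
            if p in tested:
                total += tested[p]            # observed value kept as-is
            else:
                total += 1 if rng.random() < rate[c] else 0  # imputed draw
        estimates.append(total / len(covar_all))
    return sum(estimates) / len(estimates)    # combined point estimate
```

If every patient is tested, the function reproduces the observed prevalence exactly; the gap between the tested-subset prevalence and the imputed full-cohort prevalence is what drives the divergent trends the paper reports.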
13.  Parametric mixture models to evaluate and summarize hazard ratios in the presence of competing risks with time-dependent hazards and delayed entry 
Statistics in medicine  2010;30(6):654-665.
In the analysis of survival data, there are often competing events that preclude an event of interest from occurring. Regression analysis with competing risks is typically undertaken using a cause-specific proportional hazards model. However, modern alternative methods exist for the analysis of the subdistribution hazard with a corresponding subdistribution proportional hazards model. In this paper, we introduce a flexible parametric mixture model as a unifying method to obtain estimates of the cause-specific and subdistribution hazards and hazard ratio functions. We describe how these estimates can be summarized over time to give a single number that is comparable to the hazard ratio that is obtained from a corresponding cause-specific or subdistribution proportional hazards model. An application to the Women’s Interagency HIV Study is provided to investigate injection drug use and the time to either initiation of effective antiretroviral therapy or clinical disease progression as a competing event.
doi:10.1002/sim.4123
PMCID: PMC3069508  PMID: 21337360
cause-specific hazards; competing risks; hazard ratio; mixture model; subdistribution; subdistribution hazards; survival analysis
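The subdistribution hazard is tied to the cumulative incidence function, which is the quantity that competing events distort if one naively applies Kaplan-Meier. As a minimal sketch (an Aalen-Johansen-style estimator, not the paper's parametric mixture model, and without its delayed-entry handling), the cumulative incidence of one cause can be computed like this:

```python
def cumulative_incidence(times, events, cause):
    """Nonparametric cumulative incidence of `cause` under competing risks.

    times  -- event or censoring times
    events -- 0 = censored, 1, 2, ... = cause codes
    Returns a list of (time, CIF) pairs at each distinct event time:
    CIF(t) accumulates S(t-) * d_cause / n_at_risk over event times.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0   # overall event-free survival just before the current time
    cif = 0.0
    out = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d_cause = d_any = n_censored = 0
        while i < len(data) and data[i][0] == t:   # group ties at time t
            if data[i][1] == 0:
                n_censored += 1
            else:
                d_any += 1
                if data[i][1] == cause:
                    d_cause += 1
            i += 1
        if d_any:
            cif += surv * d_cause / n_at_risk      # mass assigned to `cause`
            surv *= 1 - d_any / n_at_risk          # update overall survival
            out.append((t, cif))
        n_at_risk -= d_any + n_censored
    return out
```

With no censoring, the cause-specific cumulative incidences sum to the overall event probability, which is exactly the constraint the mixture-model formulation exploits.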
14.  Alcohol Consumption Among HIV-Infected Women: Impact on Time to Antiretroviral Therapy and Survival 
Journal of Women's Health  2011;20(2):279-286.
Abstract
Objective
Alcohol use is prevalent among HIV-infected people and is associated with lower antiretroviral adherence and high-risk sexual and injection behaviors. We sought to determine factors associated with alcohol use among HIV-infected women engaged in clinical care and if baseline alcohol use was associated with time to combination antiretroviral therapy (cART) and death in this population.
Methods
In an observational clinical cohort, alcohol consumption at the initial medical visit was examined and categorized as heavy, occasional, past, or no use. We used multinomial logistic regression to test preselected covariates and their association with baseline alcohol consumption. We then examined the association between alcohol use and time to cART and time to death using Kaplan-Meier statistics and Cox proportional hazards regression.
Results
Between 1997 and 2006, 1030 HIV-infected women enrolled in the cohort. Assessment of alcohol use revealed occasional and heavy consumption in 29% and 17% of the cohort, respectively; 13% were past drinkers. In multivariate regression, heavy drinkers were more likely than nondrinkers to be infected with hepatitis C (relative risk ratio [RRR] 2.06, 95% confidence interval [CI] 1.29-3.44) and to endorse current drug (RRR 3.51, 95% CI 2.09-5.91) and tobacco use (RRR 3.85, 95% CI 1.81-8.19). Multivariable Cox regression adjusting for all clinical covariates demonstrated an increased mortality risk (hazard ratio [HR] 1.40, 95% CI 1.00-1.97, p < 0.05) among heavy drinkers compared to nondrinkers but no delay in cART initiation (HR 1.04, 95% CI 0.81-1.34).
Conclusions
Among this cohort of HIV-infected women, heavy alcohol consumption was independently associated with earlier death. Baseline factors associated with heavy alcohol use included tobacco use, hepatitis C, and illicit drug use. Alcohol is a modifiable risk factor for adverse HIV-related outcomes. Providers should consistently screen for alcohol consumption and refer HIV-infected women with heavy alcohol use for treatment.
doi:10.1089/jwh.2010.2043
PMCID: PMC3064875  PMID: 21281111
15.  Virologic and immunologic response to HAART, by age and regimen class 
AIDS (London, England)  2010;24(16):2469-2479.
Objective
To determine the impact of age and initial HAART regimen class on virologic and immunologic response within 24 months after initiation.
Design
Pooled analysis of data from 19 prospective cohort studies in the North American AIDS Cohort Collaboration on Research and Design (NA-ACCORD).
Methods
Twelve thousand, one hundred and ninety-six antiretroviral-naive adults who initiated HAART between 1998 and 2008 using a boosted protease inhibitor-based regimen or a nonnucleoside reverse transcriptase inhibitor (NNRTI)-based regimen were included in our study. Discrete time-to-event models estimated adjusted hazard odds ratios (aHOR) and 95% confidence intervals (CIs) for suppressed viral load (≤500 copies/ml) and, separately, at least 100 cells/μl increase in CD4 cell count. Truncated, stabilized inverse probability weights accounted for selection biases from discontinuation of initial regimen class.
Results
Among 12 196 eligible participants (mean age = 42 years), 50% changed regimen classes after initiation (57 and 48% of whom initiated protease inhibitor and NNRTI-based regimens, respectively). Mean CD4 cell count at initiation was similar by age. Virologic response to treatment was less likely in those initiating using a boosted protease inhibitor [aHOR = 0.77 (0.73, 0.82)], regardless of age. Immunologic response decreased with increasing age [18–<30: ref; 30–<40: aHOR 0.92 (0.85, 1.00); 40–<50: aHOR = 0.85 (0.78, 0.92); 50–<60: aHOR = 0.82 (0.74, 0.90); ≥60: aHOR=0.74 (0.65, 0.85)], regardless of initial regimen.
Conclusion
We found no evidence of an interaction between age and initial antiretroviral regimen on virologic or immunologic response to HAART; however, decreased immunologic response with increasing age may have implications for age-specific when-to-start guidelines.
doi:10.1097/QAD.0b013e32833e6d14
PMCID: PMC3136814  PMID: 20829678
age; CD4 lymphocyte count; HAART; HIV; viral load
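The truncated, stabilized inverse probability weights mentioned in the methods can be sketched as a cumulative product over visits of numerator and denominator probabilities of remaining on the initial regimen class. In the sketch below the probabilities are supplied directly rather than fitted (the study would estimate them from marginal and covariate-conditional models), and the truncation bounds are illustrative, not the study's:

```python
def stabilized_weights(stayed, p_num, p_den, bounds=(0.1, 10.0)):
    """Per-visit stabilized IPW for remaining on the initial regimen.

    stayed -- 0/1 indicator of staying on the regimen at each visit
    p_num  -- numerator model: marginal P(stay) at each visit
    p_den  -- denominator model: P(stay | covariate history) at each visit
    Returns the cumulative-product weight at each visit, truncated to
    `bounds` to limit the influence of extreme weights.
    """
    lo, hi = bounds
    w, out = 1.0, []
    for s, pn, pd in zip(stayed, p_num, p_den):
        num = pn if s else 1.0 - pn      # probability of the observed choice
        den = pd if s else 1.0 - pd      # under each model
        w *= num / den                   # accumulate across visits
        out.append(min(max(w, lo), hi))  # truncate
    return out
```

When the covariates carry no information (numerator equals denominator), every weight is 1, which is the sense in which stabilization keeps the weighted pseudo-population the same size as the original.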
16.  Copy-Years Viremia as a Measure of Cumulative Human Immunodeficiency Virus Viral Burden 
American Journal of Epidemiology  2009;171(2):198-205.
Plasma human immunodeficiency virus type 1 (HIV-1) viral load is a valuable tool for HIV research and clinical care but is often used in a noncumulative manner. The authors developed copy-years viremia as a measure of cumulative plasma HIV-1 viral load exposure among 297 HIV seroconverters from the Multicenter AIDS Cohort Study (1984–1996). Men were followed from seroconversion to incident acquired immunodeficiency syndrome (AIDS), death, or the beginning of the combination antiretroviral therapy era (January 1, 1996); the median duration of follow-up was 4.6 years (interquartile range (IQR), 2.7–6.5). The median viral load and level of copy-years viremia over 2,281 semiannual follow-up assessments were 29,628 copies/mL (IQR, 8,547–80,210) and 63,659 copies × years/mL (IQR, 15,935–180,341). A total of 127 men developed AIDS or died, and 170 survived AIDS-free and were censored on January 1, 1996, or lost to follow-up. Rank correlations between copy-years viremia and other measures of viral load were 0.56–0.87. Each log10 increase in copy-years viremia was associated with a 1.70-fold increased hazard (95% confidence interval: 0.94, 3.07) of AIDS or death, independently of infection duration, age, race, CD4 cell count, set-point, peak viral load, or most recent viral load. Copy-years viremia, a novel measure of cumulative viral burden, may provide prognostic information beyond traditional single measures of viremia.
doi:10.1093/aje/kwp347
PMCID: PMC2878100  PMID: 20007202
acquired immunodeficiency syndrome; HIV; HIV infections; viral load; viremia
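Copy-years viremia is simply the area under the viral-load curve over follow-up. A minimal sketch of the trapezoidal calculation over semiannual visits looks like this; the visit times and viral loads below are hypothetical, not study data:

```python
import math

def copy_years_viremia(times_years, viral_loads):
    """Trapezoidal area under the viral-load curve.

    times_years -- visit times in years since seroconversion
    viral_loads -- plasma HIV-1 RNA at each visit (copies/mL)
    Returns cumulative viremia in copies x years / mL.
    """
    if len(times_years) != len(viral_loads) or len(times_years) < 2:
        raise ValueError("need two or more paired (time, VL) measurements")
    auc = 0.0
    for t0, t1, v0, v1 in zip(times_years, times_years[1:],
                              viral_loads, viral_loads[1:]):
        auc += (t1 - t0) * (v0 + v1) / 2.0   # trapezoid between visits
    return auc

# Hypothetical patient: semiannual visits over 2 years of follow-up
times = [0.0, 0.5, 1.0, 1.5, 2.0]
vls = [30000, 50000, 80000, 60000, 40000]
cyv = copy_years_viremia(times, vls)
print(round(cyv), round(math.log10(cyv), 2))   # → 112500 5.05
```

Analyses such as the hazard ratio per log10 increase reported above would then use `math.log10(cyv)` as the exposure.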
17.  Clinic-based Treatment for Opioid-dependent HIV-infected Patients versus Referral to an Opioid Treatment Program: A Randomized Controlled Trial 
Annals of internal medicine  2010;152(11):704-711.
Background
Opioid dependence is common in HIV clinics. Buprenorphine/naloxone (BUP) is an effective treatment for opioid dependence that may be used in routine medical settings.
Objective
To compare clinic-based treatment with BUP (clinic-based BUP) with case-management and referral to an opioid treatment program (referred-treatment).
Design
Single-center, 12-month randomized trial. Participants and investigators were aware of treatment assignments.
Setting
HIV clinic in Baltimore, Maryland.
Patients
93 HIV-infected, opioid-dependent subjects who were not receiving opioid agonist therapy and were not dependent on alcohol or benzodiazepines.
Intervention
The clinic-based BUP strategy included BUP induction and dose titration, urine drug test monitoring, and individual counseling; the referred-treatment arm included case management and referral to an opioid treatment program.
Measurements
Initiation and long-term receipt of opioid agonist therapy, urine drug test results, visit attendance with primary HIV providers, use of antiretroviral therapy, and changes in HIV RNA levels and CD4 cell counts.
Results
The average estimated participation in opioid agonist therapy was 74% (95% CI, 61%–84%) in clinic-based BUP and 41% (95% CI, 29%–53%) in referred-treatment (p<0.001). Opioid- and cocaine-positive urine drug tests were significantly less frequent in clinic-based BUP than in referred-treatment, and subjects in clinic-based BUP attended significantly more HIV primary care visits than those in referred-treatment. Use of antiretroviral therapy and changes in HIV RNA levels and CD4 cell counts did not differ between the 2 arms of the study.
Limitations
This was a small single-center study, follow-up was only fair, and there was an imbalance in recent drug injection in the study arms at baseline.
Conclusions
This study suggests that management of HIV-infected, opioid-dependent patients with a clinic-based BUP strategy facilitates access to opioid agonist therapy and improves substance abuse treatment outcomes.
Primary Funding Source
Health Resources Services Administration, Special Projects of National Significance.
doi:10.7326/0003-4819-152-11-201006010-00003
PMCID: PMC2886293  PMID: 20513828
HIV; opioid dependence; buprenorphine; opioid agonist treatment; methadone; opioid treatment program
19.  Identifying individuals with virologic failure after initiating effective antiretroviral therapy: The surprising value of mean corpuscular hemoglobin in a cross-sectional study 
Objective
Recent studies have shown that the current guidelines suggesting immunologic monitoring to determine response to highly active antiretroviral therapy (HAART) are inadequate. We assessed whether routinely collected clinical markers could improve prediction of concurrent HIV RNA levels.
Methods
We included individuals followed within the Johns Hopkins HIV Clinical Cohort who initiated antiretroviral therapy and had concurrent HIV RNA and biomarker measurements ≥4 months after HAART initiation. A two-tiered approach was used to determine whether clinical markers could improve prediction: 1) identification of predictors of HIV RNA levels >500 copies/ml and 2) construction and validation of a prediction model.
Results
Three markers (mean corpuscular hemoglobin [MCH], CD4, and change in percent CD4 from pre-HAART levels), together with the change in MCH from pre-HAART levels, contained the most predictive information for identifying an HIV RNA >500 copies/ml. MCH and change in MCH were the two most predictive, followed by CD4 and change in percent CD4. The logistic prediction model in the validation data had an area under the receiver operating characteristic curve of 0.85, and a sensitivity and specificity of 0.74 (95% CI: 0.69-0.79) and 0.89 (95% CI: 0.86-0.91), respectively.
Conclusions
Immunologic criteria have been shown to be a poor guide for identifying individuals with high HIV RNA levels. MCH and change in MCH were the strongest predictors of HIV RNA levels >500 copies/ml. When combined with CD4 and percent CD4 as covariates in a model, a high level of discrimination between those with and without HIV RNA levels >500 copies/ml was obtained. These data suggest an unexplored relationship between HIV RNA and MCH.
doi:10.1186/1742-6405-7-25
PMCID: PMC2922076  PMID: 20653950
20.  Competing Risk Regression Models for Epidemiologic Data 
American Journal of Epidemiology  2009;170(2):244-256.
Competing events can preclude the event of interest from occurring in epidemiologic data and can be analyzed by using extensions of survival analysis methods. In this paper, the authors outline 3 regression approaches for estimating 2 key quantities in competing risks analysis: the cause-specific relative hazard (csRH) and the subdistribution relative hazard (sdRH). They compare and contrast the structure of the risk sets and the interpretation of parameters obtained with these methods. They also demonstrate the use of these methods with data from the Women's Interagency HIV Study, established in 1993, treating time to initiation of highly active antiretroviral therapy or to clinical disease progression as competing events. In this example, women with an injection drug use history were less likely than those without such a history to initiate therapy prior to progression to acquired immunodeficiency syndrome or death by both measures of association (csRH = 0.67, 95% confidence interval: 0.57, 0.80; sdRH = 0.60, 95% confidence interval: 0.50, 0.71). Moreover, the relative hazards for disease progression prior to treatment were elevated (csRH = 1.71, 95% confidence interval: 1.37, 2.13; sdRH = 2.01, 95% confidence interval: 1.62, 2.51). Methods for competing risks should be used by epidemiologists, with the choice of method guided by the scientific question.
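Both quantities rest on the cumulative incidence function, which, unlike one minus the Kaplan-Meier estimate, does not overstate risk when competing events are present. A minimal nonparametric sketch (the Aalen-Johansen estimator; data and names are illustrative, not drawn from the Women's Interagency HIV Study):

```python
def cumulative_incidence(times, causes, cause_of_interest):
    """Aalen-Johansen estimate of the cumulative incidence function.

    causes: 0 = censored; positive integers label competing event types.
    Returns a list of (time, CIF) points at each observed time."""
    data = sorted(zip(times, causes))
    n_at_risk = len(data)
    surv = 1.0  # all-cause Kaplan-Meier survival just before time t
    cif = 0.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d_interest = d_all = removed = 0
        # pool ties at time t
        while i < len(data) and data[i][0] == t:
            if data[i][1] != 0:
                d_all += 1
                if data[i][1] == cause_of_interest:
                    d_interest += 1
            removed += 1
            i += 1
        # the CIF increment uses survival from *all* causes just before t
        cif += surv * d_interest / n_at_risk
        surv *= 1.0 - d_all / n_at_risk
        n_at_risk -= removed
        curve.append((t, cif))
    return curve

# Toy data: cause 1 = therapy initiation, cause 2 = disease progression
print(cumulative_incidence([1.0, 2.0, 3.0, 4.0], [1, 2, 0, 1], 1))
```

The csRH modifies the risk set by removing subjects at their competing event, while the sdRH keeps them in the risk set; both are fit as proportional hazards models on the corresponding hazard.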
doi:10.1093/aje/kwp107
PMCID: PMC2732996  PMID: 19494242
competing risks; epidemiologic methods; mixture model; proportional hazards; regression; survival analysis
21.  Effect of Early versus Deferred Antiretroviral Therapy for HIV on Survival 
The New England journal of medicine  2009;360(18):1815-1826.
Background
The optimal time for the initiation of antiretroviral therapy for asymptomatic patients with human immunodeficiency virus (HIV) infection is uncertain.
Methods
We conducted two parallel analyses involving a total of 17,517 asymptomatic patients with HIV infection in the United States and Canada who received medical care during the period from 1996 through 2005. None of the patients had undergone previous antiretroviral therapy. In each group, we stratified the patients according to the CD4+ count (351 to 500 cells per cubic millimeter or >500 cells per cubic millimeter) at the initiation of antiretroviral therapy. In each group, we compared the relative risk of death for patients who initiated therapy when the CD4+ count was above each of the two thresholds of interest (early-therapy group) with that of patients who deferred therapy until the CD4+ count fell below these thresholds (deferred-therapy group).
Results
In the first analysis, which involved 8362 patients, 2084 (25%) initiated therapy at a CD4+ count of 351 to 500 cells per cubic millimeter, and 6278 (75%) deferred therapy. After adjustment for calendar year, cohort of patients, and demographic and clinical characteristics, among patients in the deferred-therapy group there was an increase in the risk of death of 69%, as compared with that in the early-therapy group (relative risk in the deferred-therapy group, 1.69; 95% confidence interval [CI], 1.26 to 2.26; P<0.001). In the second analysis involving 9155 patients, 2220 (24%) initiated therapy at a CD4+ count of more than 500 cells per cubic millimeter and 6935 (76%) deferred therapy. Among patients in the deferred-therapy group, there was an increase in the risk of death of 94% (relative risk, 1.94; 95% CI, 1.37 to 2.79; P<0.001).
Conclusions
The early initiation of antiretroviral therapy before the CD4+ count fell below two prespecified thresholds significantly improved survival, as compared with deferred therapy.
doi:10.1056/NEJMoa0807252
PMCID: PMC2854555  PMID: 19339714
22.  Evaluation of Human Immunodeficiency Virus Biomarkers: Inferences From Interval and Clinical Cohort Studies 
Introduction
Among individuals infected with the human immunodeficiency virus (HIV), biomarkers that predict mortality are also used to determine the time when antiretroviral therapy is initiated. No studies have evaluated the impact of the frequency of marker measurements for either their predictive value of mortality or how they may influence inference of the effect of therapy initiation in analyses from observational data.
Methods
We identified 244 persons who were contemporaneously enrolled in both the AIDS Link to the IntraVenous Experience (an interval cohort) and the Johns Hopkins HIV Clinical Cohort between 1995 and 2004. Data from each study were used separately in 2 ways. We applied time-dependent proportional hazards models to examine the predictive associations between markers and mortality, and marginal structural models to examine the causal inference of therapy on mortality. Biomarkers were used to derive the inverse probability weights.
Results
The intervals between marker measurements in the interval cohort (CD4 interquartile range = 175–194 days) were less heterogeneous than in the clinical cohort (interquartile range = 38–121 days). Despite this, the results were concordant for CD4 (R2 = 0.537 [95% confidence interval = 0.345–0.707] and R2 = 0.488 [0.297–0.666], respectively). Similar concordance was found for the HIV-1 RNA and hemoglobin analyses. When evaluating the causal effect of highly active antiretroviral therapy (HAART), the relative hazards were 0.34 for the interval cohort study (95% CI = 0.15–0.77) and 0.27 for the clinical cohort study (0.11–0.66).
Conclusion
Utilizing a unique co-enrollment of patients in 2 different types of cohort studies, we find empirical evidence that inferences drawn from these different structures are similar.
doi:10.1097/EDE.0b013e3181a71519
PMCID: PMC2818534  PMID: 19478669
23.  Importance of Age of Onset in Pancreatic Cancer Kindreds 
Background
Young-onset cancer is a hallmark of many familial cancer syndromes, yet the implications of young-onset disease in predicting risk of pancreatic cancer among familial pancreatic cancer (FPC) kindred members remain unclear.
Methods
To understand the relationship between age at onset of pancreatic cancer and risk of pancreatic cancer in kindred members, we compared the observed incidence of pancreatic cancer in 9040 individuals from 1718 kindreds enrolled in the National Familial Pancreas Tumor Registry with that observed in the general US population (Surveillance, Epidemiology, and End Results). Standardized incidence ratios (SIRs) were calculated for data stratified by familial vs sporadic cancer kindred membership, number of affected relatives, youngest age of onset among relatives, and smoking status. Competing risk survival analyses were performed to examine the risk of pancreatic cancer and risk of death from other causes according to youngest age of onset of pancreatic cancer in the family and the number of affected relatives.
Results
Risk of pancreatic cancer was elevated in both FPC kindred members (SIR = 6.79, 95% confidence interval [CI] = 4.54 to 9.75, P < .001) and sporadic pancreatic cancer (SPC) kindred members (SIR = 2.41, 95% CI = 1.04 to 4.74, P = .04) compared with the general population. The presence of a young-onset patient (<50 years) in the family did not alter the risk for SPC kindred members (SIR = 2.74, 95% CI = 0.05 to 15.30, P = .59) compared with those without a young-onset case in the kindred (SIR = 2.36, 95% CI = 0.95 to 4.88, P = .06). However, risk was higher among members of FPC kindreds with a young-onset case in the kindred (SIR = 9.31, 95% CI = 3.42 to 20.28, P < .001) than those without a young-onset case in the kindred (SIR = 6.34, 95% CI = 4.02 to 9.51, P < .001). Competing risk survival analyses indicated that the lifetime risk of pancreatic cancer in FPC kindreds increased with decreasing age of onset in the kindred (hazard ratio = 1.55, 95% CI = 1.19 to 2.03 per year). However, youngest age of onset for pancreatic cancer in the kindred did not affect the risk among SPC kindred members.
Conclusions
Individuals with a family history of pancreatic cancer are at a statistically significantly increased risk of developing pancreatic cancer. Having a member of the family with a young-onset pancreatic cancer confers an added risk in FPC kindreds.
doi:10.1093/jnci/djp466
PMCID: PMC2808346  PMID: 20068195
24.  Exceeding the limits of liver histology markers 
Journal of hepatology  2008;50(1):36-41.
Background/Aims
Alternatives to liver biopsy for staging liver disease caused by hepatitis C virus (HCV) have not appeared accurate enough for widespread clinical use. We characterized the magnitude of the impact of error in the “gold standard” on the observed diagnostic accuracy of surrogate markers.
Methods
We calculated the area under the receiver operating characteristic curve (AUROC) for a surrogate marker against the gold standard (biopsy) for a range of possible performances of each test (biopsy and marker) against truth and a gradient of clinically significant disease prevalence.
Results
In the ‘best’ scenario, where liver biopsy accuracy is highest (sensitivity and specificity of biopsy are each 90%) and the prevalence of significant disease is 40%, the calculated AUROC would be 0.90 for a perfect marker (99% actual accuracy), which is within the range of what has already been observed. With lower biopsy sensitivity and specificity, AUROC determinations > 0.90 could not be achieved even for a marker that perfectly measured disease.
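This ceiling can be reproduced in closed form. Under the binary-test approximation AUROC = (sensitivity + specificity)/2, a perfect marker scored against a biopsy with 90% sensitivity and specificity at 40% prevalence attains only about 0.89, consistent with the ~0.90 ceiling reported above. This is an illustrative sketch, not the authors' code:

```python
def observed_auroc(biopsy_sens, biopsy_spec, prevalence):
    """Apparent AUROC of a *perfect* binary marker (equal to truth)
    when it is scored against an error-prone gold standard."""
    # Probability that biopsy calls disease present / absent
    p_pos = biopsy_sens * prevalence + (1 - biopsy_spec) * (1 - prevalence)
    p_neg = 1 - p_pos
    sens_obs = biopsy_sens * prevalence / p_pos        # P(marker+ | biopsy+)
    spec_obs = biopsy_spec * (1 - prevalence) / p_neg  # P(marker- | biopsy-)
    return (sens_obs + spec_obs) / 2  # AUROC of a binary test

print(round(observed_auroc(0.90, 0.90, 0.40), 2))  # → 0.89
```

With an error-free biopsy (sensitivity = specificity = 1), the same marker scores a perfect 1.0, showing that the apparent shortfall is entirely an artifact of the reference standard.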
Conclusions
We demonstrate that error in the liver biopsy result itself makes it impossible to distinguish a perfect surrogate from ones that are now judged by some as clinically unacceptable. An alternative gold standard is needed to assess the accuracy of tests used to stage HCV-related liver disease.
doi:10.1016/j.jhep.2008.07.039
PMCID: PMC2637134  PMID: 19012989
liver disease; biopsy; hepatitis C virus; validity; surrogate markers
25.  Evaluating competing adverse and beneficial outcomes using a mixture model 
Statistics in medicine  2008;27(21):4313-4327.
SUMMARY
A competing risk framework occurs when individuals have the potential to experience only one of several mutually exclusive outcomes. Standard survival methods often overestimate the cumulative incidence of events when competing events are censored. Mixture distributions have been previously applied to the competing risk framework to obtain inferences regarding the subdistribution of an event of interest. Often the competing event is treated as a nuisance, but it may be of interest to compare adverse events against the beneficial outcome when dealing with an intervention. In this paper, methods for using a mixture model to estimate an adverse-benefit ratio curve (the ratio of the cumulative incidence curves for the two competing events) and the ratio of the subhazards for the two competing events are presented. Both parametric and semi-parametric approaches are described, with remarks on extending the model to include uncertainty in the event type that occurred, left-truncation to allow for time-dependent analyses, and uncertainty in the timing of the event resulting in interval censoring. The methods are illustrated with data from an HIV clinical cohort examining whether individuals initiating effective antiretroviral therapy have a greater risk of antiretroviral discontinuation or switching compared to HIV RNA suppression.
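In the parametric version of such a mixture, each subject experiences event k with probability p_k and, given k, a failure time from a cause-specific distribution; the adverse-benefit ratio is the ratio of the two resulting subdistribution functions. A minimal sketch with exponential components (all parameter values and names are illustrative, not from the paper):

```python
import math

def subdist_cdf(t, p, rate):
    """Subdistribution F_k(t) = p_k * (1 - exp(-rate_k * t)) for an
    exponential mixture component; plateaus at p_k as t grows."""
    return p * (1.0 - math.exp(-rate * t))

def adverse_benefit_ratio(t, p_adv, rate_adv, p_ben, rate_ben):
    """Ratio of the two cumulative incidence curves at time t (t > 0)."""
    return subdist_cdf(t, p_adv, rate_adv) / subdist_cdf(t, p_ben, rate_ben)

# Example: 30% discontinue/switch therapy (median ~8 months) vs. 70%
# suppress HIV RNA (median ~4 months); time in months, rate = ln 2 / median.
p_adv, r_adv = 0.30, math.log(2) / 8.0
p_ben, r_ben = 0.70, math.log(2) / 4.0
print(adverse_benefit_ratio(6.0, p_adv, r_adv, p_ben, r_ben))
```

As t grows, the ratio converges to p_adv/p_ben, the long-run odds of the adverse outcome against the beneficial one.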
doi:10.1002/sim.3293
PMCID: PMC2551745  PMID: 18416435
survival analysis; competing risks; mixture model; cumulative incidence function; subhazard
