Background. The role of active hepatitis C virus (HCV) replication in chronic kidney disease (CKD) risk has not been clarified.
Methods. We compared CKD incidence in a large cohort of HIV-infected subjects who were HCV seronegative, HCV viremic (detectable HCV RNA), or HCV aviremic (HCV seropositive, undetectable HCV RNA). Stages 3 and 5 CKD were defined according to standard criteria. Progressive CKD was defined as a sustained 25% glomerular filtration rate (GFR) decrease from baseline to a GFR < 60 mL/min/1.73 m². We used Cox models to calculate adjusted hazard ratios (HRs) and 95% confidence intervals (CIs).
Results. A total of 52 602 HCV seronegative, 9508 HCV viremic, and 913 HCV aviremic subjects were included. Compared with HCV seronegative subjects, HCV viremic subjects were at increased risk for stage 3 CKD (adjusted HR 1.36 [95% CI, 1.26, 1.46]), stage 5 CKD (1.95 [1.64, 2.31]), and progressive CKD (1.31 [1.19, 1.44]), while HCV aviremic subjects were also at increased risk for stage 3 CKD (1.19 [0.98, 1.45]), stage 5 CKD (1.69 [1.07, 2.65]), and progressive CKD (1.31 [1.02, 1.68]).
Conclusions. Compared with HIV-infected subjects who were HCV seronegative, both HCV viremic and HCV aviremic individuals were at increased risk for moderate and advanced CKD.
HIV; hepatitis C virus; chronic kidney disease; hepatitis C RNA; cohort study; glomerular filtration rate; injection drug use
In the last decade, timely initiation of antiretroviral therapy and resulting virologic suppression have greatly improved in North America concurrent with the development of better tolerated and more potent regimens, but significant barriers to treatment uptake remain.
Background. Since the mid-1990s, effective antiretroviral therapy (ART) regimens have improved in potency, tolerability, ease of use, and class diversity. We sought to examine trends in treatment initiation and resulting human immunodeficiency virus (HIV) virologic suppression in North America between 2001 and 2009, and demographic and geographic disparities in these outcomes.
Methods. We analyzed data on HIV-infected individuals newly clinically eligible for ART (ie, first reported CD4+ count <350 cells/µL or AIDS-defining illness, based on treatment guidelines during the study period) from 17 North American AIDS Cohort Collaboration on Research and Design cohorts. Outcomes included timely ART initiation (within 6 months of eligibility) and virologic suppression (≤500 copies/mL, within 1 year). We examined time trends and considered differences by geographic location, age, sex, transmission risk, race/ethnicity, CD4+ count, and viral load, and documented psychosocial barriers to ART initiation, including non–injection drug abuse, alcohol abuse, and mental illness.
Results. Among 10 692 HIV-infected individuals, the cumulative incidence of 6-month ART initiation increased from 51% in 2001 to 72% in 2009 (Ptrend < .001). The cumulative incidence of 1-year virologic suppression increased from 55% to 81%, and among ART initiators, from 84% to 93% (both Ptrend < .001). A greater number of psychosocial barriers were associated with decreased ART initiation, but not virologic suppression once ART was initiated. We found significant heterogeneity by state or province of residence (P < .001).
Conclusions. In the last decade, timely ART initiation and virologic suppression have greatly improved in North America concurrent with the development of better-tolerated and more potent regimens, but significant barriers to treatment uptake remain, both at the individual level and systemwide.
antiretroviral therapy; healthcare disparities; HIV; time factors; viral load
To determine the diagnostic accuracy of semiautomated 18F-FDG PET/CT measurements of total lesion glycolysis (TLG) and of maximum and peak standardized uptake values normalized to lean body mass (SUL-Max and SUL-Peak), and of qualitative estimates of left/right nodal symmetry and FDG uptake, for differentiating lymphoma from reactive adenopathy in HIV-infected patients.
We retrospectively analyzed 41 whole-body 18F-FDG PET/CT studies performed in HIV-infected patients for clinical reasons. The study received institutional review board approval. Of the 41 patients, 19 had biopsy-proven untreated lymphoma, and 22 with reactive adenopathy and no malignancy on follow-up served as controls. Nodal and extranodal visual qualitative metabolic scores, SUL-Max, SUL-Peak, CT nodal size, and PERCIST 1.0 threshold-based TLG and metabolic tumor volume (MTV) were determined. The qualitative intensity of nodal involvement and symmetry of uptake were compared using receiver operating characteristic (ROC) analysis. HIV plasma viral RNA measurements were also obtained.
All of the quantitative PET metrics performed well in differentiating lymphoma from reactive adenopathy and outperformed qualitative visual intensity scores. The areas under the ROC curves (AUCs) were significantly higher for TLG (0.96), single SUL-Peak (0.96), single SUL-Max (0.97), and MTV (0.96) than for CT nodal size (0.67; p<0.001). These PET metrics separated the two populations best in aviremic patients, with AUCs of 1 (versus 0.91 for CT nodal size). Among viremic individuals, TLG, MTV, SUL-Peak, and SUL-Max remained the more reliable markers, with AUCs between 0.84 and 0.93, compared to the other metrics. PET metrics were significantly correlated with plasma viral load in HIV-reactive adenopathy controls. Asymmetrical FDG uptake had an accuracy of 90.4% for differentiating lymphoma from reactive adenopathy in HIV-infected patients.
Quantitative PET metabolic metrics as well as the qualitative assessment of symmetry of nodal uptake appear to be valuable tools for differentiating lymphoma from reactive adenopathy in HIV-infected patients using FDG PET. These parameters appear more robust in aviremic patients.
PET/CT; HIV; Lymphoma; Lymphadenopathy; TLG
Transcription activator-like effectors (TALEs) are proteins secreted by Xanthomonas bacteria to aid the infection of plant species. TALEs assist infection by binding to specific DNA sequences and activating the expression of host genes. Recent results show that TALE proteins consist of a central repeat domain, which determines the DNA targeting specificity and can be rapidly synthesized de novo. Considering the highly modular nature of TALEs, their versatility, and the ease of constructing these proteins, this technology can have important implications for synthetic biology applications. Here, we review developments in the area with a particular focus on modifications for custom and controllable gene regulation.
Potential liver toxicity is an important consideration for antiretroviral selection among patients coinfected with HIV and viral hepatitis (B and/or C). We sought to describe the hepatic safety profile of raltegravir in this population.
Using data from HIV clinical cohorts at Johns Hopkins University and the University of North Carolina at Chapel Hill, we evaluated factors associated with liver enzyme elevations (LEEs) and calculated adverse event incidence rates for patients initiated on raltegravir-containing regimens prior to January 1, 2010. LEEs were graded according to Division of AIDS definitions.
During the study period, 456 patients received raltegravir – of whom 36% were hepatitis-coinfected (138 HCV, 17 HBV, 11 HBV+HCV). Coinfected patients were more likely to have baseline abnormal LEEs, and developed severe (grade 3–4) LEEs at a rate 3.4 times that of HIV-monoinfected patients (95% confidence interval (CI), 1.28, 9.61). Among all participants, the incidence rate for first occurrence of severe LEEs was 5 per 100 person-years (95% CI, 3, 7). In adjusted analyses, coinfected patients had a 2.7-fold increased hazard of severe LEEs (95% CI, 1.03, 7.04). Sixty percent of severe abnormalities occurred within 6 months after starting raltegravir; the drug was discontinued in 3 coinfected patients (1.3%) and 18 monoinfected patients (6.2%).
Compared to HIV-monoinfected patients, those with HIV-hepatitis coinfection are at increased hazard of developing LEEs on raltegravir, at a level similar to other antiretrovirals. Severe events were uncommon, rarely leading to raltegravir discontinuation. With appropriate monitoring, raltegravir-based therapy is safe in hepatitis-coinfected patients.
integrase strand transfer inhibitors; hepatotoxicity; clinical cohort; United States
Worldwide, the notable expansion of HIV/AIDS treatment programs in resource-limited settings has led to an increasing number of patients in need of second-line cART. To adequately address and prepare for this scenario, critical assessments of the outcomes of second-line cART are particularly relevant in settings where monitoring strategies may be inadequate. We evaluated virologic outcomes of second-line combination antiretroviral therapy (cART) among HIV-infected individuals from Brazil.
This study was conducted at the Instituto Nacional de Infectologia Evandro Chagas, Fundação Oswaldo Cruz, in Rio de Janeiro, Brazil. We included all patients who started first-line and then second-line cART between 2000 and 2013. Second-line cART required a switch in the anchor drug of the first-line regimen. We evaluated time from second-line start to virologic failure and factors associated with increased risk of failure using multivariable Cox proportional hazards regression models.
Among the 1,311 patients who started first-line cART, a total of 386 (29.5%) initiated second-line cART, of whom 35.0% switched while their HIV RNA was undetectable and 60.6% switched after documented virologic failure. At second-line cART initiation, median age was 38 years [interquartile range (IQR): 31–45 years]. Median CD4 count differed significantly between patients starting second-line cART with undetectable HIV RNA [412 cells/mm3 (IQR: 240–617)] and those starting after documented virologic failure [230 cells/mm3 (IQR: 118–322.5)] (p < 0.01). Median time from second-line cART initiation to failure also differed significantly between these groups (log-rank test p < 0.01). Multivariable Cox models showed that younger age, lower education, and HIV RNA level were independently associated with an increased hazard of second-line failure among those with documented virologic failure at the start of second-line cART.
We have shown that in a middle-income country with universal access to cART, having a detectable HIV RNA at the start of second-line cART as well as younger age and lower education negatively impact second-line outcomes. Our findings could guide HIV treatment efforts as to which strategies would help maximize the durability of these regimens.
cART; Second-line; Cox proportional hazards regression; HIV/AIDS; Cohort study; Brazil
HIV care and treatment programmes worldwide are transforming as they push to deliver universal access to essential prevention, care and treatment services to persons living with HIV and their communities. The characteristics and capacity of these HIV programmes affect patient outcomes and quality of care. Despite the importance of ensuring optimal outcomes, few studies have addressed the capacity of HIV programmes to deliver comprehensive care. We sought to describe such capacity in HIV programmes in seven regions worldwide.
Staff from 128 sites in 41 countries participating in the International epidemiologic Databases to Evaluate AIDS completed a site survey from 2009 to 2010, including sites in the Asia-Pacific region (n=20), Latin America and the Caribbean (n=7), North America (n=7), Central Africa (n=12), East Africa (n=51), Southern Africa (n=16) and West Africa (n=15). We computed a measure of the comprehensiveness of care based on seven World Health Organization-recommended essential HIV services.
Most sites reported serving urban populations (61%; region range (rr): 33–100%) and both adult and paediatric populations (77%; rr: 29–96%). Only 45% of HIV clinics that reported treating children had paediatricians on staff. As for the seven essential services, survey respondents reported that CD4+ cell count testing was available at all but one site, while tuberculosis (TB) screening and community outreach services were available at 80% and 72% of sites, respectively. The remaining four essential services – nutritional support (82%), combination antiretroviral therapy adherence support (88%), prevention of mother-to-child transmission (PMTCT) (94%) and other prevention and clinical management services (97%) – were widely available. Approximately half (46%) of sites reported offering all seven services. Programme characteristics and comprehensiveness varied with the number of years a site had been in operation and the UN Human Development Index (HDI) of its setting: more recently established clinics in low-HDI settings, especially those in President's Emergency Plan for AIDS Relief focus countries, tended to offer a more comprehensive array of essential services. Survey respondents frequently identified contact tracing of patients, patient outreach, nutritional counselling, onsite viral load testing, universal TB screening and the provision of isoniazid preventive therapy as unavailable services.
This study serves as a baseline for on-going monitoring of the evolution of care delivery over time and lays the groundwork for evaluating HIV treatment outcomes in relation to site capacity for comprehensive care.
HIV/AIDS; HIV care capacity; clinic characteristics; comprehensive care; resource-limited settings
Redox control of protein function involves oxidation and reduction of amino acid residues, but mechanisms and regulators involved are insufficiently understood. Here, we report that methionine-R-sulfoxide reductase B1 (MsrB1) regulates, in conjunction with Mical proteins, mammalian actin assembly via stereoselective methionine oxidation and reduction in a reversible, site-specific manner. Two methionine residues in actin are specifically converted to methionine-R-sulfoxide by Mical1 and Mical2 and reduced back to methionine by selenoprotein MsrB1, supporting actin disassembly and assembly, respectively. Macrophages utilize this redox control during cellular activation by stimulating MsrB1 expression and activity as a part of innate immunity. We identified the regulatory role of MsrB1 as a Mical antagonist in orchestrating actin dynamics and macrophage function. More generally, our study shows that proteins can be regulated by reversible site-specific methionine-R-sulfoxidation.
A simple and highly efficient technique for the analysis of lysophosphatidic acid (LPA) subspecies in human plasma is described. The streamlined sample preparation protocol furnishes the five major LPA subspecies with excellent recoveries. Extensive analysis of the enriched sample reveals only trace levels of other phospholipids. This level of purity not only improves MS analyses but also enables HPLC post-column detection in the visible region with a commercially available fluorescent phospholipid probe. Human plasma samples from different donors were analyzed using this method and validated by ESI/MS/MS.
We sought to quantify agreement between Institute of Medicine (IOM) and Department of Health and Human Services (DHHS) retention indicators, which have not been compared in the same population, and assess clinical retention within the largest HIV cohort collaboration in the U.S.
Observational study from 2008–2010, using clinical cohort data in the North American AIDS Cohort Collaboration on Research and Design (NA-ACCORD).
Retention definitions were based on HIV primary care visits. The IOM indicator required ≥2 visits, ≥90 days apart, in each calendar year; we extended this to a 2-year period, with retention requiring that the definition be met in both years. The DHHS indicator required ≥1 visit each semester over 2 years, with visits ≥60 days apart. Kappa statistics assessed agreement between indicators, and C statistics (areas under receiver operating characteristic curves) from logistic regression analyses summarized how well the DHHS indicator discriminated retention under the IOM indicator.
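The two indicator definitions can be sketched in code. The sketch below is illustrative only, not the cohorts' analysis code: visit dates are assumed to be Python `date` objects, and the DHHS gap check is applied to one selected visit per semester (the earliest), a simplification the published definition does not specify.

```python
from datetime import date

def iom_retained(visits, years=(2008, 2009)):
    """IOM indicator: >=2 visits, >=90 days apart, in each calendar year."""
    for year in years:
        in_year = sorted(v for v in visits if v.year == year)
        if not any((b - a).days >= 90
                   for i, a in enumerate(in_year) for b in in_year[i + 1:]):
            return False
    return True

def dhhs_retained(visits, years=(2008, 2009)):
    """DHHS indicator (simplified): >=1 visit each semester over 2 years,
    with consecutive selected visits >=60 days apart."""
    picks = []
    for year in years:
        for lo, hi in ((1, 6), (7, 12)):
            semester = sorted(v for v in visits
                              if v.year == year and lo <= v.month <= hi)
            if not semester:
                return False
            picks.append(semester[0])  # earliest visit per semester (a simplification)
    return all((b - a).days >= 60 for a, b in zip(picks, picks[1:]))
```

A patient can satisfy the IOM definition but not the DHHS one (for example, all visits clustered in the first half of each year), which is consistent with the high but imperfect agreement (κ ≈ 0.8) reported in this study.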
Among 36,769 patients in 2008–2009 and 34,017 in 2009–2010, there were higher percentages of participants retained in care under the IOM indicator than the DHHS indicator (80% vs. 75% in 2008–2009; 78% vs. 72% in 2009–2010, respectively) (p<0.01), persisting across all demographic and clinical characteristics (p<0.01). There was high agreement between indicators overall (κ = 0.83 in 2008–2009; κ = 0.79 in 2009–2010, p<0.001), and C statistics revealed a very strong ability to predict retention according to the IOM indicator based on DHHS indicator status, even within characteristic strata.
Although the IOM indicator consistently reported higher retention in care compared with the DHHS indicator, there was strong agreement between IOM and DHHS retention indicators in a cohort demographically similar to persons living with HIV/AIDS in the U.S. Persons with poorer retention represent subgroups of interest for retention improvement programs nationally, particularly in light of the White House Executive Order on the HIV Care Continuum.
This study examines the effect of component downsizing in a modern total knee arthroplasty (TKA) system on the laxity envelope of the knee throughout flexion.
A robotic testing system was used to measure laxity envelopes of the implanted knee in the anterior–posterior (AP), medial–lateral (ML), internal–external (IE) and varus–valgus (VV) directions. Five fresh-frozen cadavers were tested with a modern cruciate-retaining TKA implantation, a 1-mm thinner polyethylene insert, and a femoral component 2 mm smaller in the AP dimension.
The downsized tibial insert was more lax throughout the flexion arc, with up to 2.0 mm more laxity in the AP direction at full extension, a 43.8% increase over the original implantation. A thinner insert consistently increased laxity throughout the arc of flexion in all degrees of freedom. Downsizing the femoral component resulted in an 8.5 mm increase in AP laxity at 90°, a 73.9% increase. In mid-flexion, downsizing the femur produced laxity values similar to those of the downsized insert in the AP, ML, IE and VV directions.
Downsizing the TKA components had significant effects on laxity throughout flexion. Downsizing a femoral component 2 mm had an equivalent increase in laxity in mid-flexion as downsizing the tibial insert 1 mm. This study quantifies the importance of choosing the appropriate implant component size, having the appropriate size available and the effect of downsizing. The laxity of the implanted knee contributes to how the implant feels to the patient and ultimately the patient’s satisfaction with their new knee.
Knee; Total knee arthroplasty; Kinematics; Laxity; Cadaver; Robotic testing
Motivational Interviewing (MI) can promote behavior change, but HIV care providers rarely have training in MI. Little is known about the use of MI-consistent behavior among untrained providers. This study examines the prevalence of such behaviors and their association with patient intentions to reduce high-risk sexual behavior.
Audio-recorded visits between HIV-infected patients and their healthcare providers were searched for counseling dialogue regarding sexual behavior. The association of providers’ MI-consistence with patients’ statements about behavior change was assessed.
Of 417 total encounters, 27 met inclusion criteria. The odds of patient commitment to change were higher when providers used more reflections (p=0.017), used more MI-consistent utterances (p=0.044), demonstrated more empathy (p=0.049), and spent more time discussing sexual behavior (p=0.023). Patients made more statements in favor of change (change talk) when providers used more reflections (p<0.001) and showed more empathy (p<0.001), even after adjusting for the length of relevant dialogue.
Untrained HIV providers do not consistently use MI techniques when counseling patients about sexual risk reduction. However, when they do, their patients are more likely to express intentions to reduce sexual risk behavior.
MI holds promise as one strategy to reduce transmission of HIV and other sexually-transmitted infections.
Motivational Interviewing; Physicians; HIV/AIDS; Sexual risk reduction; Counseling
We evaluated a two-stage ovarian cancer screening strategy that incorporates change in CA 125 over time and age to estimate the risk of ovarian cancer. Women with high risk scores were referred for transvaginal ultrasound (TVS).
A single-arm, prospective study of post-menopausal women was conducted. Participants underwent an annual CA 125 blood test. Based on the Risk of Ovarian Cancer Algorithm (ROCA) result, women were triaged to next annual CA 125 (low risk), repeat CA 125 in three months (intermediate risk), or TVS and referral to a gynecologic oncologist (high risk).
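The triage step described above amounts to a two-threshold decision rule. The sketch below is purely illustrative: the cutoff values are hypothetical placeholders, not the published ROCA thresholds, and the actual algorithm computes the risk score from a woman's longitudinal CA 125 profile and age.

```python
def triage(roca_risk, intermediate_cut=0.01, high_cut=0.05):
    """Map a ROCA risk score to the next screening step.

    The cutoffs here are placeholders for illustration; the real ROCA
    thresholds are calibrated to target fixed referral rates.
    """
    if roca_risk >= high_cut:
        return "TVS and referral to gynecologic oncologist"  # high risk
    if roca_risk >= intermediate_cut:
        return "repeat CA 125 in three months"               # intermediate risk
    return "next annual CA 125"                              # low risk
```

Under this rule the two referral rates reported below (5.8% to three-month repeat testing, 0.9% to TVS) correspond to how often the score crosses each threshold in the screened population.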
4051 women participated over 11 years. The average annual rate of referral to repeat CA 125 testing in three months was 5.8%, and the average annual rate of referral to TVS and review by a gynecologic oncologist was 0.9%. Ten women underwent surgery on the basis of TVS: four had invasive ovarian cancers (one stage IA, two stage IC and one stage IIB), two had ovarian tumors of low malignant potential (both stage IA), one had endometrial cancer (stage I), and three had benign ovarian tumors, providing a positive predictive value of 40% (95% CI 12.2%, 73.8%) for detecting invasive ovarian cancer. The specificity was 99.9% (95% CI 99.7%, 100%). All four women with invasive ovarian cancer had been enrolled in the study for at least three years, with low-risk annual CA 125 values prior to a rising CA 125.
ROCA followed by TVS demonstrated excellent specificity and PPV in a population of U.S. women at average risk for ovarian cancer.
Ovarian cancer screening; CA125; Cancer screening
Gastric cancer is a leading cause of cancer deaths, but analysis of its molecular and clinical characteristics has been complicated by histological and aetiological heterogeneity. Here we describe a comprehensive molecular evaluation of 295 primary gastric adenocarcinomas as part of The Cancer Genome Atlas (TCGA) project. We propose a molecular classification dividing gastric cancer into four subtypes: tumours positive for Epstein–Barr virus, which display recurrent PIK3CA mutations, extreme DNA hypermethylation, and amplification of JAK2, CD274 (also known as PD-L1) and PDCD1LG2 (also known as PD-L2); microsatellite unstable tumours, which show elevated mutation rates, including mutations of genes encoding targetable oncogenic signalling proteins; genomically stable tumours, which are enriched for the diffuse histological variant and mutations of RHOA or fusions involving RHO-family GTPase-activating proteins; and tumours with chromosomal instability, which show marked aneuploidy and focal amplification of receptor tyrosine kinases. Identification of these subtypes provides a roadmap for patient stratification and trials of targeted therapies.
Previously, herpes zoster (HZ) was found to occur at a higher rate in the HIV population than the general population. There are, however, limited data about the incidence, risk factors, and clinical outcomes of HZ in the current antiretroviral therapy (ART) era.
We identified HZ episodes in an urban HIV clinic cohort between 2002 and 2009. Three controls were matched to each case, and conditional logistic regression was used to assess risk factors associated with incident HZ. Logistic regression was used to assess factors associated with complicated HZ.
183 new HZ cases were identified among 4,353 patients with 19,752 person-years (PY) of follow-up, an incidence rate of 9.3 per 1000 PY. Cases were predominantly male (62%) and African-American (75%), with a mean age of 39 years (IQR 32–44). Fifty patients (28%) had complicated HZ, with 12% developing post-herpetic neuralgia. In multivariate regression, having started ART within 90 days of the episode (adjusted OR 4.02, 95% CI [1.31, 12.41]), having a viral load >400 copies/mL (1.49, [1.00, 2.24]), and having a CD4 count <350 cells/mm3 (2.46, [1.42, 4.23]) or 350–500 cells/mm3 (2.02, [1.14, 3.57]), as compared to CD4 >500 cells/mm3, were associated with increased risk of HZ.
The incidence of HZ is lower than previously reported in HIV cohorts, but remains higher than the general population. Over one-fourth of patients developed complicated HZ, which is remarkable given the young age of our population. Risk factors for HZ include markers of poor immune function, suggesting that appropriate ART may reduce the burden of HZ in this population.
Kaposi sarcoma and lymphoma rates were highest immediately after antiretroviral therapy (ART) initiation, particularly among patients with low CD4 cell counts, whereas other cancers increased with time on ART. Calendar year of ART initiation was not associated with subsequent cancer incidence.
Cancer is an important cause of morbidity and mortality in individuals infected with human immunodeficiency virus (HIV), but patterns of cancer incidence after combination antiretroviral therapy (ART) initiation remain poorly characterized.
We evaluated the incidence and timing of cancer diagnoses among patients initiating ART between 1996 and 2011 in a collaboration of 8 US clinical HIV cohorts. Poisson regression was used to estimate incidence rates. Cox regression was used to identify demographic and clinical characteristics associated with cancer incidence after ART initiation.
At initiation of first combination ART among 11 485 patients, median year was 2004 (interquartile range [IQR], 2000–2007) and median CD4 count was 202 cells/mm3 (IQR, 61–338). Incidence rates for Kaposi sarcoma (KS) and lymphomas were highest in the first 6 months after ART initiation (P < .001) and plateaued thereafter, while incidence rates for all other cancers combined increased from 416 to 615 cases per 100 000 person-years from 1 to 10 years after ART initiation (average 7% increase per year; 95% confidence interval, 2%–13%). Lower CD4 count at ART initiation was associated with greater risk of KS, lymphoma, and human papillomavirus–related cancer. Calendar year of ART initiation was not associated with cancer incidence.
KS and lymphoma rates were highest immediately following ART initiation, particularly among patients with low CD4 cell counts, whereas other cancers increased with time on ART, likely reflecting increased cancer risk with aging. Our results underscore recommendations for earlier HIV diagnosis followed by prompt ART initiation along with ongoing aggressive cancer screening and prevention efforts throughout the course of HIV care.
HIV-associated malignancies; AIDS-defining cancer; non-AIDS-defining cancer; combination antiretroviral therapy
Thirty-day hospital readmission rate is receiving increasing attention as a quality of care indicator. The objective of this study was to determine readmission rates and to identify factors associated with readmission among persons living with HIV.
Prospective multicenter observational cohort.
Nine U.S. HIV clinics affiliated through the HIV Research Network.
Patients engaged in HIV care during 2005–2010.
Readmission rate was defined as the proportion of hospitalizations followed by a readmission within 30 days. Factors in multivariate analyses included diagnostic categories, patient demographic and clinical characteristics, and having an outpatient follow-up visit.
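As a minimal sketch of the outcome definition (not the study's analysis code), the 30-day readmission rate can be computed from per-patient stay lists as follows; dates are assumed to be Python `date` objects, and every hospitalization is treated as a potential index admission.

```python
from datetime import date

def thirty_day_readmission_rate(patients):
    """patients: list of per-patient stay lists, each stay an
    (admit_date, discharge_date) tuple sorted by admission date."""
    index_count = readmit_count = 0
    for stays in patients:
        # compare each discharge with the next admission for the same patient
        for (_, discharge), (next_admit, _) in zip(stays, stays[1:]):
            index_count += 1
            if (next_admit - discharge).days <= 30:
                readmit_count += 1
        if stays:
            index_count += 1  # the final stay cannot be followed by an observed readmission
    return readmit_count / index_count if index_count else 0.0
```

One design caveat of this simplification: stays at the end of follow-up have no opportunity for an observed readmission, which biases the rate slightly downward unless follow-up is truncated appropriately.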
Among 11,651 total index hospitalizations, the 30-day readmission rate was 19.3%. AIDS-defining illnesses (ADIs; 9.6% of index hospitalizations) and non-AIDS-defining infections (26.4% of index hospitalizations) had readmission rates of 26.2% and 16.6%, respectively. Factors independently associated with readmission included lower CD4 count (AOR 1.80 [1.53, 2.11] for CD4 <50 vs. ≥351 cells/μl), longer length of stay (1.77 [1.53, 2.04] for ≥9 days vs. 1–3 days), and several diagnostic categories including ADI. Having an outpatient follow-up clinic visit was not associated with lower readmission risk (AHR 0.98 [0.88, 1.08]).
The 19.3% readmission rate exceeds the 13.2% rate reported for the general population of 18–64 year-olds. HIV providers may use the 19.3% rate as a basis of comparison. Policymakers may consider the impact of HIV when estimating expected readmissions for a hospital or region. Preventing or recovering from severe immune dysfunction may be the most important factor to reducing readmissions.
Readmission; HIV; AIDS defining illness; Outpatient hospital follow-up; Healthcare utilization
Lymphoma is the leading cause of cancer-related death among HIV-infected patients in the antiretroviral therapy (ART) era.
We studied lymphoma patients in the Centers for AIDS Research Network of Integrated Clinical Systems from 1996 until 2010. We examined differences stratified by histology and diagnosis year. Mortality and predictors of death were analyzed using Kaplan–Meier curves and Cox proportional hazards.
Of 23 050 HIV-infected individuals, 476 (2.1%) developed lymphoma (79 [16.6%] Hodgkin lymphoma [HL]; 201 [42.2%] diffuse large B-cell lymphoma [DLBCL]; 56 [11.8%] Burkitt lymphoma [BL]; 54 [11.3%] primary central nervous system lymphoma [PCNSL]; and 86 [18.1%] other non-Hodgkin lymphoma [NHL]). At diagnosis, HL patients had higher CD4 counts and lower HIV RNA than NHL patients. PCNSL patients had the lowest and BL patients had the highest CD4 counts among NHL categories. During the study period, CD4 count at lymphoma diagnosis progressively increased and HIV RNA decreased. Five-year survival was 61.6% for HL, 50.0% for BL, 44.1% for DLBCL, 43.3% for other NHL, and 22.8% for PCNSL. Mortality was associated with age (adjusted hazard ratio [AHR] = 1.28 per decade increase, 95% confidence interval [CI] = 1.06 to 1.54), lymphoma occurrence on ART (AHR = 2.21, 95% CI = 1.53 to 3.20), CD4 count (AHR = 0.81 per 100 cell/µL increase, 95% CI = 0.72 to 0.90), HIV RNA (AHR = 1.13 per log10 copies/mL, 95% CI = 1.00 to 1.27), and histology but not earlier diagnosis year.
HIV-associated lymphoma is heterogeneous and changing, with less immunosuppression and greater HIV control at diagnosis. Stable survival and increased mortality for lymphoma occurring on ART call for greater biologic insights to improve outcomes.
The recommended frequency of routine clinical monitoring for persons with well-controlled HIV infection is based on evidence that relies on observed rather than intended follow-up intervals. We sought to determine whether the scheduled follow-up interval is associated with subsequent virologic failure. Participants in this 6-clinic retrospective cohort study had an index clinic visit in 2008 and an HIV viral load (VL) ≤400 c/mL. Univariate and multivariate tests evaluated whether scheduling the next follow-up appointment at 3, 4, or 6 months predicted a VL >400 c/mL at 12 months (virologic failure, VF). Among 2171 participants, 66%, 26%, and 8% were scheduled next follow-up visits at 3, 4, and 6 months, respectively. With missing 12-month VL considered VF, 25%, 25%, and 24% of persons scheduled at 3, 4, and 6 months had VF, respectively (p=0.95). Excluding persons with missing 12-month VL, 7.1%, 5.7%, and 4.5% had VF, respectively (p=0.35). Multivariable models yielded nonsignificant odds of VF by scheduled follow-up interval both when missing 12-month VL were considered VF and when persons with missing 12-month VL were excluded. We conclude that clinicians are able to make safe decisions extending follow-up intervals in persons with viral suppression, at least in the short term.
Laboratory monitoring is recommended during combination antiretroviral therapy (cART), but the pattern of detected abnormalities and optimal monitoring are unknown. We assessed laboratory abnormalities during initial cART in 2000–2010 across the United States.
Observational study in the Centers for AIDS Research Network of Integrated Clinical Systems Cohort.
Among patients with normal results within a year prior to cART initiation, time to first significant abnormality was assessed by Kaplan–Meier curves stratified by event type, with censoring at the first of regimen change, loss to follow-up, or 104 weeks. Incidence rates of first events were estimated using Poisson regression; multivariable analyses identified associated factors. Results were stratified by time from therapy initiation (the first 16 weeks versus thereafter).
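As an illustration of the method just described, a minimal Kaplan–Meier estimator over right-censored follow-up times can be sketched in a few lines of Python. This is not the study's analysis code; the function name and all follow-up times below are hypothetical.

```python
# Hypothetical sketch of a Kaplan-Meier estimate of remaining
# abnormality-free, with right censoring (regimen change, loss to
# follow-up, or reaching week 104 count as censoring, not events).

def kaplan_meier(times, events):
    """Return (time, survival) pairs for right-censored data.

    times  -- follow-up time in weeks for each patient
    events -- 1 if a significant lab abnormality occurred, 0 if censored
    """
    at_risk = len(times)
    surv = 1.0
    curve = []
    # Walk through the distinct event/censoring times in order.
    for t in sorted(set(times)):
        n_events = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        if n_events:
            surv *= 1 - n_events / at_risk   # KM product-limit step
            curve.append((t, surv))
        at_risk -= sum(1 for ti in times if ti == t)
    return curve

# Hypothetical cohort: abnormalities at weeks 8 and 16; censoring at a
# regimen change (week 30), loss to follow-up (week 60), and week 104.
times = [8, 16, 30, 60, 104]
events = [1, 1, 0, 0, 0]
curve = kaplan_meier(times, events)
```

Censored patients reduce the risk set without contributing events, which is how the curves described above accommodate incomplete follow-up.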
A total of 3470 individuals contributed 3639 person-years. Median age, pre-cART CD4, and follow-up duration were 40 years, 206 cells/μl, and 51 weeks, respectively. Incidence rates for significant abnormalities (per 100 person-years) in the first 16 weeks post-cART initiation were as follows: lipid = 49 [95% confidence interval (CI) 41–58]; hematologic = 44 (40–49); hepatic = 24 (20–27); and renal = 9 (7–11), dropping substantially during weeks 17–104 of cART to lipid = 23 (18–29); hematologic = 5 (4–6); hepatic = 6 (5–8); and renal = 2 (1–3) (all P < 0.05). Among patients receiving initial cART with no prior abnormality (N = 1889), the strongest associations for hepatic abnormalities after 16 weeks were hepatitis B and C [hazard ratio = 2.3 (95% CI 1.2–4.5) and hazard ratio = 3.0 (1.9–4.5), respectively]. The strongest association for renal abnormalities was hypertension [hazard ratio = 2.8 (1.4–5.6)].
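For readers unfamiliar with the units, a rate per 100 person-years and an approximate 95% CI can be computed directly. The sketch below uses hypothetical counts and a normal approximation on the log scale, a common shortcut rather than the study's Poisson regression:

```python
import math

def rate_per_100py(events, person_years):
    """Crude incidence rate per 100 person-years with an approximate
    95% CI on the log scale (assumes Poisson-distributed event counts)."""
    rate = events / person_years * 100
    se_log = 1 / math.sqrt(events)        # SE of log(rate) for a Poisson count
    lo = rate * math.exp(-1.96 * se_log)
    hi = rate * math.exp(1.96 * se_log)
    return rate, lo, hi

# Hypothetical example: 44 first hematologic events over 100 person-years
rate, lo, hi = rate_per_100py(44, 100)
```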
New abnormalities decreased after week 16 of cART. For abnormalities not present by week 16, subsequent monitoring should be guided by comorbidities.
hematologic; hepatic; laboratory; lipid; monitoring; HIV; renal
Guidelines recommend hepatitis C virus (HCV) screening for all people living with HIV (PLWH). Understanding HCV testing practices may improve compliance with guidelines and can help identify areas for future intervention.
We evaluated HCV screening and unnecessary repeat HCV testing in 8,590 PLWH initiating care at 12 U.S. HIV clinics between 2006 and 2010, with follow-up through 2011. Multivariable logistic regression examined the association between patient factors and the outcomes: HCV screening (≥1 HCV antibody test during the study period) and unnecessary repeat HCV testing (≥1 HCV antibody test in patients with a prior positive test result).
Overall, 82% of patients were screened for HCV, 18% of those screened were HCV antibody-positive, and 40% of HCV antibody-positive patients had unnecessary repeat HCV testing. The likelihood of being screened for HCV increased with the number of outpatient visits (adjusted odds ratio 1.02, 95% confidence interval 1.01–1.03). Compared to men who have sex with men (MSM), patients with injection drug use (IDU) risk were less likely to be screened for HCV (0.63, 0.52–0.78), while individuals with Medicaid were more likely to be screened than those with private insurance (1.30, 1.04–1.62). Compared to MSM, patients with heterosexual (1.78, 1.20–2.65) or IDU (1.58, 1.06–2.34) risk, as well as those with higher numbers of outpatient (1.03, 1.01–1.04) and inpatient (1.09, 1.01–1.19) visits, were at greatest risk of unnecessary repeat HCV testing.
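As a hedged illustration of the measure reported above, a crude (unadjusted) odds ratio with a Wald 95% CI can be computed from a 2×2 table. All counts below are hypothetical; the paper's estimates come from multivariable logistic regression, which adjusts for the other covariates:

```python
import math

def odds_ratio(a, b, c, d):
    """Crude OR from a 2x2 table:
       a = exposed with outcome,   b = exposed without outcome
       c = unexposed with outcome, d = unexposed without outcome
    """
    oratio = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = oratio * math.exp(-1.96 * se)
    hi = oratio * math.exp(1.96 * se)
    return oratio, lo, hi

# Hypothetical counts: screened vs. not screened by insurance type
or_, lo, hi = odds_ratio(120, 80, 300, 400)
```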
Additional efforts to improve compliance with HCV testing guidelines are needed. Leveraging health information technology may increase HCV screening and reduce unnecessary testing.
African Americans with human immunodeficiency virus type 1 (HIV-1) infection and kidney disease are at increased risk of end-stage renal disease requiring renal replacement therapy (RRT), particularly in urban areas with high rates of poverty and injection drug use. It is unknown how the widespread use of highly active antiretroviral therapy (HAART) has affected survival during RRT in this vulnerable population.
African American patients infected with HIV-1 who required RRT were identified from 2 cohorts that included 4509 African Americans infected with HIV-1 who were recruited during the period 1988–2004 in Baltimore, Maryland. Survival after initiation of RRT was compared for those who initiated treatment in the pre-HAART and the HAART eras using Kaplan–Meier curves. Cox proportional hazards regression analysis was used to adjust for potential confounders.
RRT was initiated in 162 patients (3.6%) during 10.6 years of follow-up (119 during the HAART era). Compared with patients who started RRT in the pre-HAART era, those in the HAART era were older (P < .001) and more likely to have CD4 cell counts of ≥200 cells/mm3 (P = .01). A total of 126 patients (78%) died during follow-up; among those who initiated RRT during the HAART era, 87 deaths occurred (73%). Median survival time in the pre-HAART era was 22.4 months (95% confidence interval [CI], 9.3–30.7); during the HAART era, it was 19.9 months (95% CI, 14.7–26.5; P = .94). In the multiple Cox regression model, factors independently associated with increased mortality included age (hazard ratio [HR], 1.30; 95% CI, 1.06–1.60; P = .01), lower serum albumin level (HR, 0.72; 95% CI, 0.57–0.91; P < .007), lower CD4 cell count (HR, 0.90; 95% CI, 0.82–0.99; P < .03), and the lack of HAART (HR, 0.52; 95% CI, 0.33–0.82; P = .005).
Older age, lower serum albumin level, lower CD4 cell count, and the lack of HAART are independent predictors of poor survival among African Americans infected with HIV-1 undergoing RRT in a resource-limited urban area. RRT survival was similar in the pre-HAART and HAART eras, likely reflecting inadequate HIV treatment in this population.
Persons with HIV who develop depression have worse medical adherence and outcomes. Poor patient–provider communication may play a role in these outcomes. This cross-sectional study evaluated the influence of patient depression on the quality of patient–provider communication. Patient–provider visits (n = 406) at four HIV care sites were audio-recorded and coded with the Roter Interaction Analysis System (RIAS). Negative binomial and linear regressions using generalized estimating equations tested the association of depressive symptoms, as measured by the Center for Epidemiologic Studies Depression scale (CES-D), with RIAS measures, postvisit patient-rated quality of care, and provider-reported regard for the patient. The patients averaged 45 years of age (range = 20–77) and were predominantly male (n = 286, 68.5%), of black race (n = 250, 60%), and on antiretroviral medications (n = 334, 80%). Women had greater mean CES-D depression scores (12.0) than men (10.6; p = 0.03). There were no age, race, or education differences in depression scores. Visits with patients reporting severe depressive symptoms, compared to those reporting no or mild depressive symptoms, were longer and speech speed was slower. Patients with severe depressive symptoms did more emotional rapport building but less social rapport building, and their providers did more data gathering/counseling (ps < 0.05). In postvisit questionnaires, providers reported lower levels of positive regard for, and rated more negatively, patients reporting more depressive symptoms (p < 0.01). In turn, patients reporting more depressive symptoms felt less respected and were less likely to report that their provider knows them as a person than patients with no or mild depressive symptoms (ps < 0.05).
Greater psychosocial needs of patients presenting with depressive symptoms and limited time/resources to address these needs may partially contribute to providers’ negative attitudes regarding their patients with depressive symptoms. These negative attitudes may ultimately serve to adversely impact patient–provider communication and quality of HIV care.
depression; communication; quality of health care; patient satisfaction; HIV
Missing outcome data due to loss to follow-up occur frequently in clinical cohort studies of HIV-infected patients. Censoring patients when they become lost can produce inaccurate results if the risk of the outcome among the censored patients differs from the risk among patients remaining under observation. We examined whether patients who are considered lost to follow-up are at increased risk of mortality compared to those who remain under observation. Patients from the US Centers for AIDS Research Network of Integrated Clinical Systems (CNICS) who newly initiated combination antiretroviral therapy between January 1, 1998 and December 31, 2009 and survived for at least one year were included in the study. Mortality information was available for all participants regardless of continued observation in the CNICS. We compared mortality between patients retained in the cohort and those lost to clinic, as commonly defined by a 12-month gap in care. Patients who were considered lost to clinic had modestly elevated mortality compared to patients who remained under observation after 5 years (risk ratio (RR): 1.2; 95% CI: 0.9, 1.5). Results were similar after redefining loss to clinic as 6 months (RR: 1.0; 95% CI: 0.8, 1.3) or 18 months (RR: 1.2; 95% CI: 0.8, 1.6) without a documented clinic visit. The small increase in mortality associated with becoming lost to clinic suggests that these patients were not lost to care; rather, they likely transitioned to care at a facility outside the study. The modestly higher mortality among patients who were lost to clinic implies that when we necessarily censor these patients in studies of time-varying exposures, we are likely to incur at most a modest selection bias.
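A risk ratio of the kind reported above can be illustrated with a crude (unadjusted) calculation; the counts below are hypothetical, and the CI uses the standard log-scale Wald formula rather than the study's modeling approach:

```python
import math

def risk_ratio(a, n1, b, n2):
    """Crude RR comparing risk a/n1 (e.g. lost-to-clinic patients) with
    risk b/n2 (e.g. retained patients), with a Wald 95% CI for log(RR)."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)   # SE of log(RR)
    lo = rr * math.exp(-1.96 * se)
    hi = rr * math.exp(1.96 * se)
    return rr, lo, hi

# Hypothetical 5-year mortality: 60/500 (12%) vs. 100/1000 (10%)
rr, lo, hi = risk_ratio(60, 500, 100, 1000)
```

A CI that crosses 1, as in this hypothetical example and in the study's estimates, is what underlies the conclusion that censoring lost-to-clinic patients induces at most modest selection bias.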