1.  HIV viremia and changes in kidney function 
AIDS (London, England)  2009;23(9):1089-1096.
Objective
To evaluate the effect of HIV infection on longitudinal changes in kidney function and to identify independent predictors of kidney function changes in HIV-infected individuals.
Design
A prospective cohort.
Methods
Cystatin C was measured at baseline and at the 5-year follow-up visit of the Study of Fat Redistribution and Metabolic Change in HIV infection in 554 HIV-infected participants and 230 controls. Control participants were obtained from the Coronary Artery Risk Development in Young Adults study. Glomerular filtration rate (eGFRcys) was estimated using the formula eGFRcys = 76.7 × cysC^−1.19.
Results
Compared with controls, HIV-infected participants had a greater proportion of clinical decliners (annual decrease in eGFRcys > 3 ml/min per 1.73 m2; 18 versus 13%, P=0.002) and clinical improvers (annual increase in eGFRcys > 3 ml/min per 1.73 m2; 26 versus 6%, P< 0.0001). After multivariable adjustment, HIV infection was associated with higher odds of both clinical decline (odds ratio 2.2; 95% confidence interval 1.3, 3.9, P = 0.004) and clinical improvement (odds ratio 7.3; 95% confidence interval 3.9, 13.6, P ≤ 0.0001). Among HIV-infected participants, a decrease in HIV viral load during follow-up was independently associated with clinical improvement; conversely, higher baseline and an increase in viral load during follow-up were associated with clinical decline. No individual antiretroviral drug or drug class appeared to be substantially associated with clinical decline or improvement.
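For illustration only, the short sketch below applies the cystatin C formula from the Methods and the 3 mL/min per 1.73 m2 annual-change thresholds quoted above; the cystatin C values and the five-year interval are hypothetical examples, not study data.

```python
# Sketch: eGFR estimated from cystatin C (eGFRcys = 76.7 * cysC^-1.19) and
# classification of annual change using the thresholds quoted in the abstract.
# Example cystatin C values (mg/L) are made up for illustration only.

def egfr_cys(cystatin_c_mg_per_l):
    """Estimated GFR (mL/min per 1.73 m^2) from serum cystatin C."""
    return 76.7 * cystatin_c_mg_per_l ** -1.19

def classify_change(egfr_baseline, egfr_followup, years=5.0):
    """Label a participant as a clinical decliner, improver, or stable."""
    annual_change = (egfr_followup - egfr_baseline) / years
    if annual_change < -3:
        return "clinical decliner"   # annual decrease > 3 mL/min per 1.73 m^2
    if annual_change > 3:
        return "clinical improver"   # annual increase > 3 mL/min per 1.73 m^2
    return "stable"

baseline = egfr_cys(0.90)    # hypothetical baseline cystatin C
followup = egfr_cys(1.15)    # hypothetical 5-year follow-up cystatin C
print(round(baseline, 1), round(followup, 1), classify_change(baseline, followup))
```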
Conclusion
Compared with controls, HIV-infected persons were more likely both to have clinical decline and clinical improvement in kidney function during 5 years of follow-up. The extent of viremic control had a strong association with longitudinal changes in kidney function.
doi:10.1097/QAD.0b013e32832a3f24
PMCID: PMC3725756  PMID: 19352136
cystatin C; glomerular filtration rate; HIV; kidney; viral load
2.  Citalopram is not Effective Therapy for Non-Depressed Patients with Irritable Bowel Syndrome 
Background & Aims
Data are conflicting on the benefit of selective serotonin reuptake inhibitors (SSRIs) for patients with irritable bowel syndrome (IBS); the role of visceral sensitivity in IBS pathophysiology is unclear. We assessed the effects of citalopram and the relationships among symptoms, quality of life (QOL), and rectal sensitivity in non-depressed patients with IBS.
Methods
Patients from primary, secondary and tertiary care centers were randomly assigned to groups given citalopram (20 mg/day for 4 weeks, then 40 mg/day for 4 weeks) or placebo. The study was double masked with concealed allocation. Symptoms were assessed weekly; IBS-QOL and rectal sensation (by barostat) were assessed at the beginning and end of the study.
Results
Patients who received citalopram did not have a higher rate of adequate relief from IBS symptoms than subjects who received placebo (12/27 [44%] vs 15/27 [56%], respectively; P=0.59), regardless of IBS subtype. The odds ratio for weekly response to citalopram vs placebo was 0.80 (95% confidence interval [CI] 0.61–1.04). Citalopram did not reduce specific symptoms or increase IBS-QOL scores; it had no effect on rectal compliance and a minimal effect on sensation. Changes in IBS-QOL score and the pressure that elicited pain were correlated (r=0.33, 95% CI 0.03–0.57); changes in symptoms were not correlated with changes in rectal sensitivity or IBS-QOL scores.
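As an aside on how a confidence interval for a correlation such as r=0.33 can be computed, the sketch below uses the standard Fisher z transformation; the sample size is a placeholder, and this is not necessarily the method the authors used.

```python
import math

def pearson_r_ci(r, n, z_crit=1.96):
    """Approximate 95% CI for a Pearson correlation via Fisher's z transform.
    Assumes bivariate normality; n is the number of paired observations."""
    z = math.atanh(r)                    # Fisher z transform
    se = 1.0 / math.sqrt(n - 3)
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)  # back-transform to the r scale

# Placeholder n = 54 (both arms combined); the study's exact n may differ.
print([round(x, 2) for x in pearson_r_ci(0.33, 54)])
```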
Conclusions
Citalopram was not superior to placebo in treating non-depressed IBS patients. Changes in symptoms were not correlated with changes in rectal sensation assessed by barostat. Any benefit of citalopram in non-depressed IBS patients is likely to be modest.
doi:10.1016/j.cgh.2009.09.008
PMCID: PMC2818161  PMID: 19765674
3.  White/Black Racial Differences in Risk of End-Stage Renal Disease and Death 
The American journal of medicine  2009;122(7):672-678.
Background
End-stage renal disease disproportionately affects black persons, but it is unknown when in the course of chronic kidney disease racial differences arise. Understanding the natural history of racial differences in kidney disease may help guide efforts to reduce disparities.
Methods
We compared white/black differences in the risk of end-stage renal disease and death by level of estimated glomerular filtration rate (eGFR) at baseline in a national sample of 2,015,891 veterans between 2001 and 2005.
Results
Rates of end-stage renal disease among black patients exceeded those among white patients at all levels of baseline eGFR. The adjusted hazard ratios (HR) for end-stage renal disease associated with black versus white race for patients with an eGFR ≥90, 60-89, 45-59, 30-44, 15-29, and <15 mL/min/1.73m2, respectively, were 2.14 (95% confidence interval [CI], 1.72-2.65), 2.30 (95% CI, 2.02-2.61), 3.08 (95% CI, 2.74-3.46), 2.47 (95% CI, 2.26-2.70), 1.86 (95% CI, 1.75-1.98), and 1.23 (95% CI, 1.12-1.34). We observed a similar pattern for mortality, with equal or higher rates of death among black persons at all levels of eGFR. The highest risk of mortality associated with black race was also observed among those with an eGFR of 45-59 mL/min/1.73m2 (HR 1.32, 95% CI, 1.27-1.36).
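A minimal sketch of how an adjusted hazard ratio for black versus white race could be estimated within one baseline eGFR stratum with a Cox proportional hazards model (here via the lifelines package); the data frame, covariates, and values are hypothetical stand-ins rather than the study's analysis.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical cohort for ONE baseline eGFR stratum (e.g., 45-59 mL/min/1.73 m^2).
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "black": rng.integers(0, 2, size=n),      # 1 = black, 0 = white
    "age": rng.integers(40, 85, size=n),
    "diabetes": rng.integers(0, 2, size=n),
})
# Simulated follow-up so the example runs end to end (exponential event times,
# administratively censored at 5 years); not real event rates.
hazard = 0.02 * np.exp(0.8 * df["black"] + 0.02 * (df["age"] - 60))
time_to_esrd = rng.exponential(1.0 / hazard)
df["followup_years"] = np.minimum(time_to_esrd, 5.0)
df["esrd"] = (time_to_esrd <= 5.0).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="esrd")
print(cph.hazard_ratios_["black"])   # adjusted HR, black vs white, within the stratum
```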
Conclusion
Racial differences in the risk of end-stage renal disease appear early in the course of kidney disease and are not explained by a survival advantage among blacks. Efforts to identify and slow progression of chronic kidney disease at earlier stages may be needed to reduce racial disparities.
doi:10.1016/j.amjmed.2008.11.021
PMCID: PMC2749005  PMID: 19559170
kidney disease; racial disparities; mortality
4.  NNRTI pharmacokinetics in a large unselected cohort of HIV-infected women 
Background
Small intensive pharmacokinetic (PK) studies of medications in early-phase trials cannot identify the range of factors that influence drug exposure in heterogeneous populations. We performed PK studies in large numbers of HIV-infected women on nonnucleoside reverse transcriptase inhibitors (NNRTIs) under conditions of actual use to assess patient characteristics that influence exposure and to evaluate the relationship between exposure and response.
Methods
A total of 225 women on NNRTI-based antiretroviral regimens from the Women’s Interagency HIV Study (WIHS) were enrolled into 12- or 24-hour PK studies. Extensive demographic, laboratory and medication covariate data were collected before and during the visit for use in multivariate models. Total NNRTI drug exposure was estimated by the area under the concentration-time curve (AUC).
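As a rough illustration of the AUC exposure measure described above, the sketch below applies the trapezoidal rule to a concentration-time profile; the sampling times and concentrations are invented, not WIHS data.

```python
# Sketch: AUC over a 12-hour dosing interval by the trapezoidal rule.
# Times (h) and plasma concentrations (ng/mL) are hypothetical values.
times = [0, 1, 2, 4, 6, 8, 12]
conc  = [2800, 5200, 6100, 5400, 4300, 3600, 2500]

auc = sum(
    (t1 - t0) * (c0 + c1) / 2.0
    for (t0, c0), (t1, c1) in zip(zip(times, conc), zip(times[1:], conc[1:]))
)
print(f"AUC(0-12h) ~ {auc:.0f} ng*h/mL")
```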
Results
Hepatic inflammation and renal insufficiency were independently associated with increased nevirapine (NVP) exposure in multivariate analyses; crack cocaine use, high-fat diets, and amenorrhea were associated with decreased levels (n=106). Higher efavirenz (EFV) exposure was seen with increased transaminase and albumin levels and with orange juice consumption; tenofovir use, increased weight, being African-American, and amenorrhea were associated with decreased exposure (n=119). With every 10-fold increase in NVP or EFV exposure, participants were 3.3 and 3.6 times as likely, respectively, to exhibit virologic suppression. Patients with higher drug exposure were also more likely to report side effects on therapy.
Conclusions
Our study identifies and quantitates previously unrecognized factors modifying NNRTI exposure in the “real-world” setting. Comprehensive PK studies in representative populations are feasible and may ultimately lead to dose optimization strategies in patients at risk for failure or adverse events.
PMCID: PMC2700138  PMID: 19408353
HIV; antiretrovirals; nevirapine; efavirenz; pharmacokinetics; drug exposure; women
5.  Protease Inhibitor Levels in Hair Samples Strongly Predict Virologic Responses to HIV Treatment 
AIDS (London, England)  2009;23(4):471-478.
Objective
Antiretroviral (ARV) therapies fail when behavioral or biologic factors lead to inadequate medication exposure. Currently available methods to assess ARV exposure are limited. Levels of ARVs in hair reflect plasma concentrations over weeks to months and may provide a novel method for predicting therapeutic responses.
Design/methods
The Women's Interagency HIV Study, a prospective cohort of HIV-infected women, provided the basis for developing and assessing methods to measure commonly prescribed protease inhibitors (PIs), lopinavir (LPV) and atazanavir (ATV), in small hair samples. We examined the association between hair PI levels and initial virologic responses to therapy in multivariate logistic regression models.
Results
ARV concentrations in hair were strongly and independently associated with treatment response for 224 women starting a new PI-based regimen. For participants initiating LPV/RTV, the odds ratio (OR) for virologic suppression was 39.8 (95% CI 2.8–564) for those with LPV hair levels in the top tertile (>1.9 ng/mg) compared to the bottom (≤0.41 ng/mg) when controlling for self-reported adherence, age, race, starting viral load and CD4, and prior PI experience. For women starting ATV, the adjusted OR for virologic success was 7.7 (95% CI 2.0-29.7) for those with hair concentrations in the top tertile (>3.4 ng/mg) compared to the lowest (≤1.2 ng/mg).
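A hedged sketch of a tertile-based, covariate-adjusted logistic regression along the lines described above, using pandas and statsmodels; the data frame, column names, and simulated outcome are hypothetical placeholders rather than the WIHS analysis itself.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis frame: one row per woman starting a LPV/RTV regimen.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "lpv_hair": rng.lognormal(mean=0.0, sigma=1.0, size=n),  # ng/mg, made up
    "adherence": rng.uniform(0.5, 1.0, size=n),
    "age": rng.integers(20, 60, size=n),
    "log_baseline_vl": rng.normal(4.5, 0.8, size=n),
})
# Divide hair concentrations into tertiles (low / mid / high exposure).
df["tertile"] = pd.qcut(df["lpv_hair"], 3, labels=["low", "mid", "high"])
# Simulated outcome so the example runs end to end.
df["suppressed"] = (rng.uniform(size=n) < 0.3 + 0.2 * (df["tertile"] == "high")).astype(int)

# Logistic regression: virologic suppression vs hair-level tertile,
# adjusted for self-reported adherence, age, and baseline viral load.
fit = smf.logit(
    "suppressed ~ C(tertile, Treatment('low')) + adherence + age + log_baseline_vl",
    data=df,
).fit(disp=0)
print(np.exp(fit.params))   # adjusted odds ratios relative to the lowest tertile
```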
Conclusions
PI levels in small hair samples were the strongest independent predictor of virologic success in a diverse group of HIV-infected adults. This noninvasive method for determining ARV exposure may have particular relevance for the epidemic in resource-poor settings due to the ease of collecting and storing hair.
doi:10.1097/QAD.0b013e328325a4a9
PMCID: PMC2654235  PMID: 19165084
Hair levels; therapeutic drug monitoring; antiretroviral exposure; virologic response; protease inhibitors; atazanavir; lopinavir; WIHS cohort
6.  Estimating Complex Multi-State Misclassification Rates for Biopsy-Measured Liver Fibrosis in Patients with Hepatitis C 
For both clinical and research purposes, biopsies are used to classify liver damage known as fibrosis on an ordinal multi-state scale ranging from no damage to cirrhosis. Misclassification can arise from reading error (misreading of a specimen) or sampling error (the specimen does not accurately represent the liver). Studies of biopsy accuracy have not attempted to synthesize these two sources of error or to estimate actual misclassification rates from either source. Using data from two studies of reading error and two of sampling error, we find surprisingly large possible misclassification rates, including a greater than 50% chance of misclassification for one intermediate stage of fibrosis. We find that some readers tend to misclassify consistently low or consistently high, and some specimens tend to be misclassified low while others tend to be misclassified high. Non-invasive measures of liver fibrosis have generally been evaluated by comparison to simultaneous biopsy results, but biopsy appears to be too unreliable to be considered a gold standard. Non-invasive measures may therefore be more useful than such comparisons suggest. Both stochastic uncertainty and uncertainty about our model assumptions appear to be substantial. Improved studies of biopsy accuracy would include large numbers of both readers and specimens, greater effort to reduce or eliminate reading error in studies of sampling error, and careful estimation of misclassification rates rather than less useful quantities such as kappa statistics.
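To illustrate how sampling error and reading error can combine, the toy example below composes two misclassification matrices over the ordinal fibrosis stages (F0-F4) by matrix multiplication; the probabilities are invented, and the paper's latent-variable approach is more elaborate than this sketch.

```python
import numpy as np

stages = ["F0", "F1", "F2", "F3", "F4"]

# Hypothetical P(specimen stage | true liver stage): sampling error.
sampling = np.array([
    [0.85, 0.15, 0.00, 0.00, 0.00],
    [0.20, 0.60, 0.20, 0.00, 0.00],
    [0.05, 0.25, 0.45, 0.25, 0.00],
    [0.00, 0.05, 0.25, 0.60, 0.10],
    [0.00, 0.00, 0.05, 0.20, 0.75],
])
# Hypothetical P(reported stage | specimen stage): reading error.
reading = np.array([
    [0.90, 0.10, 0.00, 0.00, 0.00],
    [0.15, 0.70, 0.15, 0.00, 0.00],
    [0.00, 0.20, 0.60, 0.20, 0.00],
    [0.00, 0.00, 0.20, 0.70, 0.10],
    [0.00, 0.00, 0.00, 0.15, 0.85],
])

# Combined P(reported stage | true stage): sampling error then reading error.
combined = sampling @ reading
for i, stage in enumerate(stages):
    print(f"true {stage}: P(misclassified) = {1 - combined[i, i]:.2f}")
```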
doi:10.2202/1557-4679.1139
PMCID: PMC2810974  PMID: 20104258
fibrosis; hepatitis C; kappa statistic; latent variables; misclassification
7.  Clinical predictors of early second event in patients with clinically isolated syndrome 
Journal of Neurology  2009;256(7):1061-1066.
This study aimed to determine the predictors of increased risk of a second demyelinating event within the first year of an initial demyelinating event (IDE) suggestive of early multiple sclerosis (MS). Patients with MS or clinically isolated syndrome (CIS) seen at the UCSF MS Center within one year of the IDE were studied. Univariate and multivariate Cox models were used to analyze predictors of having a second event within 1 year of the IDE. Of 330 patients with MS/CIS, 111 had a second event within 1 year. Non-white race/ethnicity (HR = 2.39, 95% CI [1.58, 3.60], p < 0.0001) and younger age (HR for each 10-year decrease in age = 1.51, 95% CI [1.28, 1.80], p < 0.0001) were strongly associated with an increased risk of having a second event within one year of onset. Having a lower number of functional systems affected by the IDE was also associated with an increased risk of an early second event (HR per one fewer functional system involved = 1.31, 95% CI [1.06, 1.61], p = 0.011). These results were similar after adjusting for treatment of the IDE with steroids and disease-modifying therapy. Non-white race/ethnicity, younger age, and a lower number of functional systems affected by the IDE are associated with a substantially higher hazard of a second demyelinating event within 1 year. Since early relapse is predictive of worse long-term outcome, identifying and treating such patients after the IDE may be of benefit.
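The per-10-year hazard ratio for age quoted above is a rescaling of the per-year Cox coefficient; the short sketch below shows that arithmetic, with the coefficient back-calculated from the reported HR purely for illustration.

```python
import math

# Reported: HR = 1.51 per 10-year *decrease* in age.
hr_per_10yr_decrease = 1.51

# Implied Cox log-hazard coefficient per one-year increase in age.
beta_per_year = -math.log(hr_per_10yr_decrease) / 10
print(f"beta per year of age ~ {beta_per_year:.4f}")

# Converting back: HR for a patient 10 years younger than another.
print(f"HR per 10-year decrease ~ {math.exp(-10 * beta_per_year):.2f}")
```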
doi:10.1007/s00415-009-5063-0
PMCID: PMC2708331  PMID: 19252775
Multiple sclerosis; Epidemiology; Clinical studies; Demyelinating diseases
