1.  Non-Markov Multistate Modeling Using Time-Varying Covariates, with Application to Progression of Liver Fibrosis due to Hepatitis C Following Liver Transplant* 
Multistate modeling methods are well-suited for analysis of some chronic diseases that move through distinct stages. The memoryless or Markov assumptions typically made, however, may be suspect for some diseases, such as hepatitis C, where there is interest in whether prognosis depends on history. This paper describes methods for multistate modeling where transition risk can depend on any property of past progression history, including time spent in the current stage and the time taken to reach the current stage. Analysis of 901 measurements of fibrosis in 401 patients following liver transplantation found decreasing risk of progression as time in the current stage increased, even when controlled for several fixed covariates. Longer time to reach the current stage did not appear associated with lower progression risk. Analysis of simulation scenarios based on the transplant study showed that greater misclassification of fibrosis produced more technical difficulties in fitting the models and poorer estimation of covariate effects than did less misclassification or error-free fibrosis measurement. The higher risk of progression when less time has been spent in the current stage could be due to varying disease activity over time, with recent progression indicating an “active” period and consequent higher risk of further progression.
doi:10.2202/1557-4679.1213
PMCID: PMC2836212  PMID: 20305705
fibrosis; hepatitis C; liver transplant; memoryless assumptions; multistate modeling
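The abstract above hinges on relaxing the Markov (memoryless) assumption so that the transition hazard can depend on progression history. The sketch below is a minimal, purely illustrative Python rendering of that idea; the log-linear form, parameter names, and coefficient values are assumptions for illustration, not the authors' fitted model.

```python
import numpy as np

def transition_hazard(t_in_stage, t_to_reach, covariates, beta, base_rate=0.2):
    """Hazard of moving to the next fibrosis stage.

    Unlike a Markov model, the rate may depend on history: the time
    already spent in the current stage and the time taken to reach it.
    All coefficients and the log-linear form are illustrative only.
    """
    log_rate = (np.log(base_rate)
                + beta["time_in_stage"] * t_in_stage
                + beta["time_to_reach"] * t_to_reach
                + float(np.dot(beta["covariates"], covariates)))
    return float(np.exp(log_rate))

# A negative coefficient on time-in-stage mimics the paper's qualitative
# finding: progression risk falls the longer a patient stays in the
# current stage.  The values below are made up.
beta = {"time_in_stage": -0.15, "time_to_reach": 0.0, "covariates": np.array([0.3])}
for years_in_stage in (1, 3, 5):
    print(years_in_stage, transition_hazard(years_in_stage, 2.0, np.array([1.0]), beta))
```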
2.  Fibrosis Progression in African Americans and Caucasian Americans with Chronic Hepatitis C 
Background & Aims
Prior studies suggest the rate of liver fibrosis progression is slower in African-Americans (AA) than Caucasian-Americans (CA) with chronic hepatitis C virus (HCV) infection.
Methods
Using a multi-state Markov model, fibrosis progression was evaluated in a well-characterized cohort of 143 AA and 157 CA adults with untreated chronic HCV genotype 1 infection. In subjects with a history of injection drug use, duration of infection was imputed from a fitted risk model rather than assumed to be the reported first year of use.
Results
The distribution of Ishak fibrosis stages was 0 (8.7%), 1/2 (55.7%), 3/4 (29.3%) and 5/6 (6.3%), and was similar in AA and CA (p=0.22). After adjusting for biopsy adequacy, AA had a 10% lower rate of fibrosis progression than did CA, but the difference was not statistically significant (hazard ratio = 0.90, 95% confidence interval = 0.72, 1.12). The overall 20-year estimates of probabilities of progression from stage 0 to stages 1/2, 3/4 and 5/6 were 59.3%, 28.8% and 4.7%. The estimated median time from no fibrosis to cirrhosis was 79 years for the entire cohort, and 74 and 83 years for CA and AA, respectively. In 3-variable models including race and biopsy adequacy, the factors significantly associated with fibrosis progression were age when infected, steatosis, ALT level, and necroinflammatory score.
Conclusions
The rates of fibrosis progression were slow and did not appear to differ substantially between AA and CA.
doi:10.1016/j.cgh.2008.08.006
PMCID: PMC3166617  PMID: 19081528
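In a time-homogeneous multi-state Markov model like the one described above, stage-occupancy probabilities over a horizon follow from the transition intensity matrix via the matrix exponential, P(t) = exp(Qt). The sketch below illustrates that calculation with invented rates; it is not the study's fitted model.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative transition intensity matrix Q for a 4-state progressive
# model using the abstract's Ishak stage groups (0, 1/2, 3/4, 5/6).
# The rates are invented; rows of an intensity matrix must sum to zero.
q01, q12, q23 = 0.045, 0.020, 0.008
Q = np.array([
    [-q01,  q01,   0.0,  0.0],
    [ 0.0, -q12,   q12,  0.0],
    [ 0.0,  0.0,  -q23,  q23],
    [ 0.0,  0.0,   0.0,  0.0],
])

# P(t) = expm(Q * t); the first row gives the 20-year probabilities of
# being in each stage group given stage 0 at baseline.
P20 = expm(Q * 20)
print(np.round(P20[0], 3))
```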
3.  Estimating past hepatitis C infection risk from reported risk factor histories: implications for imputing age of infection and modeling fibrosis progression 
Background
Chronic hepatitis C virus infection is prevalent and often causes hepatic fibrosis, which can progress to cirrhosis and cause liver cancer or liver failure. Study of fibrosis progression often relies on imputing the time of infection, often as the reported age of first injection drug use. We sought to examine the accuracy of such imputation and implications for modeling factors that influence progression rates.
Methods
We analyzed cross-sectional data on hepatitis C antibody status and reported risk factor histories from two large studies, the Women's Interagency HIV Study and the Urban Health Study, using modern survival analysis methods for current status data to model past infection risk year by year. We compared fitted distributions of past infection risk to reported age of first injection drug use.
Results
Although injection drug use appeared to be a very strong risk factor, models for both studies showed that many subjects had considerable probability of having been infected substantially before or after their reported age of first injection drug use. Persons reporting younger age of first injection drug use were more likely to have been infected after, and persons reporting older age of first injection drug use were more likely to have been infected before.
Conclusion
In cross-sectional studies of fibrosis progression where date of HCV infection is estimated from risk factor histories, modern methods such as multiple imputation should be used to account for the substantial uncertainty about when infection occurred. The models presented here can provide the inputs needed by such methods. Using reported age of first injection drug use as the time of infection in studies of fibrosis progression is likely to produce a spuriously strong association of younger age of infection with slower rate of progression.
doi:10.1186/1471-2334-7-145
PMCID: PMC2238758  PMID: 18070362
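The conclusion above recommends multiply imputing the infection time from a fitted year-by-year risk model rather than using the reported age of first injection drug use. The sketch below shows, under purely hypothetical inputs, how draws from such a fitted per-year infection-risk distribution could feed multiply imputed downstream analyses; the risk profile and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def impute_infection_years(risk_by_year, years, n_imputations=10):
    """Draw plausible infection years from a fitted per-year risk profile.

    `risk_by_year` is a hypothetical fitted probability that infection
    occurred in each calendar year, conditional on the subject being
    infected; it is normalized to sum to 1.  Each draw would feed one
    imputed data set of a fibrosis-progression analysis.
    """
    p = np.asarray(risk_by_year, dtype=float)
    p = p / p.sum()
    return rng.choice(years, size=n_imputations, p=p)

years = np.arange(1970, 1991)
risk = np.exp(-0.5 * ((years - 1978) / 4.0) ** 2)  # illustrative risk profile
print(impute_infection_years(risk, years))
```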
4.  Early-Life Family Structure and Microbially Induced Cancer Risk 
PLoS Medicine  2007;4(1):e7.
Background
Cancer may follow exposure to an environmental agent after many decades. The bacterium Helicobacter pylori, known to be acquired early in life, increases risk for gastric adenocarcinoma, but other factors are also important. In this study, we considered whether early-life family structure affects the risk of later developing gastric cancer among H. pylori+ men.
Methods and Findings
We examined a long-term cohort of Japanese-American men followed for 28 y, and performed a nested case-control study among those carrying H. pylori or the subset carrying the most virulent cagA+ H. pylori strains to address whether family structure predicted cancer development. We found that among the men who were H. pylori+ and/or cagA+ (it is possible to be cagA+ and H. pylori− if the H. pylori test is falsely negative), belonging to a large sibship or higher birth order was associated with a significantly increased risk of developing gastric adenocarcinoma late in life. For those with cagA+ strains, the risk of developing gastric cancer was more than twice as high (odds ratio 2.2; 95% confidence interval 1.2–4.0) among those in a sibship of seven or more individuals than in a sibship of between one and three persons.
Conclusions
These results provide evidence that early-life social environment plays a significant role in risk of microbially induced malignancies expressing five to eight decades later, and these findings lead to new models to explain these interactions.
This study suggests that early-life social environment has a significant role in risk of microbially induced malignancies such as gastric adenocarcinoma occurring five to eight decades later.
Editors' Summary
Background.
Although the theory that certain cancers might be caused by infectious agents (such as bacteria and viruses) has been around for some time, concrete evidence linking specific cancers and infections is only recently beginning to emerge. There is now very good evidence that stomach cancer, once one of the most frequent types worldwide but now less common, is strongly associated with a particular infection of the stomach lining. This specific bacterium colonizing the stomach, Helicobacter pylori (or H. pylori), often infects people early in childhood through close contact with other people, and tends to stay in the body throughout life. However, most people do not suffer any symptoms as a result of being colonized with H. pylori. Researchers are interested in the relationship between stomach cancer and aspects of someone's upbringing, for example whether an individual has a large number of sisters and brothers and whether they are the youngest or oldest in a large group of siblings. One reason for being interested in this topic is that if H. pylori is mainly spread from one child to another in the home, we might expect children from large sibling groups, and the youngest children in a group, to be at greater risk of being infected, and then more likely to get stomach cancer later in life. Furthermore—and this was the primary reason for the study—the researchers wished to determine whether, among H. pylori+ people, the structure of the family affects the risk of developing stomach cancer much later in life. With all study participants being H. pylori+, the essential comparison was between people of high and low birth order.
Why Was This Study Done?
This group of researchers had already done a previous study that had shown that people who carry H. pylori in their stomachs are more likely to get stomach cancer, and also that younger children in a sibling group are more likely to get stomach cancer. In the period following that study, the examined population has become older and more of the people concerned have developed stomach cancer. This meant that the researchers could go back and extend their previous work to see, more reliably, whether stomach cancer was linked to family structure. It also meant that the researchers could look at the effects of each factor not only in isolation, but also the combined effect of all the different factors. The researchers also stratified for the most virulent strains (those that were cagA+).
What Did the Researchers Do and Find?
In this study, the researchers started out with a pool of 7,429 Japanese-American men living in Hawaii, USA, who had donated blood samples between 1967 and 1975. Of these men, 261 eventually developed stomach cancer. Each of the 261 men was then matched with a similarly aged man from the original pool of 7,429 men who did not have stomach cancer. The researchers then went back to the original blood samples taken many years before and tested the samples to see if the men were infected with H. pylori at the time the sample was taken and, if so, whether a particular strain of the bacterium, cagA, was present. The researchers then looked at whether the risk of getting stomach cancer was associated with the number of siblings a man had and whether he was older or younger than the other siblings.
Similar to the prior study, they found that men who had stomach cancer were three times more likely to carry H. pylori compared to men who did not develop stomach cancer. In men who had H. pylori, those with large numbers of siblings were more likely to get stomach cancer, and this was especially true for men who had the cagA strain of H. pylori. In the whole group of men with cancer, the order of birth (whether a man was older or younger in his sibling group) did not seem to be particularly linked to development of stomach cancer. However, in men who had the cagA strain of H. pylori, those from the largest sibships were at highest risk of developing gastric cancer; in this group, one particular type of cancer (the most common type—intestinal-type gastric cancer) was also associated with later birth order.
What Do These Findings Mean?
The researchers initially thought that men with H. pylori would be at a higher risk of getting stomach cancer if they had a large number of sisters and brothers, and especially if they were a younger sibling in a large group. This idea was supported by their data. These findings support the idea that people often get H. pylori from their older sisters and brothers, but there is not conclusive proof of this. There might be some other factor that explains the association between large family size and stomach cancer, for example that people from large families might be poorer and more at risk from stomach cancer for some other reason. Currently, most doctors do not recommend routinely testing people without any symptoms to see if they have H. pylori, but people with pain or discomfort in the upper abdomen would generally be screened for H. pylori and then treated to eliminate the infection if it is found. The main novel idea is that those people who are born in a large sibship, and/or are of higher birth order, are more likely to acquire their H. pylori from a genetically related person (a sibling) than from an unrelated person (friend/classmate). This “family-structure effect” could be the explanation as to why there is a higher risk of stomach cancer developing later—the strain from a genetically related person already is “preadapted” to the new host, and has a “head-start” on immunity, compared to a strain from an unrelated person. The researchers hypothesize that it is the nature of that initial interaction with the host that sets the stage for the kind of events that lead to cancers decades later.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040007.
A Perspective article by Dimitrios Trichopoulos and Pagona Lagiou discusses these findings further
MedLine Plus encyclopedia entry on stomach cancer
Wikipedia entry on Helicobacter pylori (Wikipedia is an internet encyclopedia that anyone can edit)
The US National Cancer Institute publishes information about stomach cancer
doi:10.1371/journal.pmed.0040007
PMCID: PMC1769414  PMID: 17227131
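The sibship-size result above is reported as an odds ratio with a 95% confidence interval. As a reminder of where such figures come from, the sketch below computes an odds ratio and its Wald confidence interval from a 2x2 table; the counts are hypothetical and chosen only so the output lands near the reported 2.2 (1.2–4.0), and they are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
         a = exposed cases,    b = exposed controls,
         c = unexposed cases,  d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for sibship size >= 7 versus 1-3 among cagA+ men.
print(odds_ratio_ci(a=40, b=30, c=35, d=58))
```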
5.  Progression of hepatic fibrosis in patients with hepatitis C: a prospective repeat liver biopsy study 
Gut  2004;53(3):451-455.
Background: The natural history of hepatitis C virus (HCV) infection remains uncertain. Previous data concerning rates of progression are from studies using estimated dates of infection and single liver biopsy scores. We prospectively studied the rate of progression of fibrosis in HCV infected patients by repeat liver biopsies without intervening treatment.
Patients: We studied 214 HCV infected patients (126 male; median age 36 years (range 5–8)) with predominantly mild liver disease who were prospectively followed without treatment and assessed for risk factors for progression of liver disease. Interbiopsy interval was a median of 2.5 years. Paired biopsies from the same patient were scored by the same pathologist.
Results: Seventy of 219 (33%) patients showed progression of at least 1 fibrosis point in the Ishak score; 23 progressed at least 2 points. Independent predictors of progression were age at first biopsy and any fibrosis on first biopsy. Factors not associated with progression were: necroinflammation, duration of infection, alcohol consumption, alanine aminotransferase levels, current or past hepatitis B virus infection, ferritin, HCV genotype, and steatosis or iron deposition in the initial biopsy.
Conclusions: One third of patients with predominantly mild hepatitis C showed significant fibrosis progression over a median period of 30 months. Histologically, mild hepatitis C is a progressive disease. The overall rate of fibrosis progression in patients with hepatitis C was low but increased in patients who were older or had fibrosis on their index biopsy. These data suggest that HCV infection will place an increasing burden on health care services in the next 20 years.
doi:10.1136/gut.2003.021691
PMCID: PMC1773967  PMID: 14960533
hepatitis C virus; fibrosis; hepatic fibrosis
6.  Impact of human immunodeficiency virus (HIV) infection on the progression of liver fibrosis in hepatitis C virus infected patients 
Gut  2003;52(7):1035-1040.
Objectives: To compare the rate of hepatic fibrosis progression in hepatitis C virus (HCV) infected and human immunodeficiency virus (HIV)-HCV coinfected patients, and to identify factors that may influence fibrosis progression.
Patients and methods: A total of 153 HCV infected and 55 HCV-HIV coinfected patients were identified from two London hospitals. Eligible patients had known dates of HCV acquisition, were HCV-RNA positive, and had undergone a liver biopsy, which was graded using the Ishak score. Univariate and multivariate logistic regression analyses were used to identify factors associated with fibrosis progression rate and the development of advanced fibrosis (stages 3 and 4).
Results: The estimated median fibrosis progression rate was 0.17 units/year (interquartile range (IQR) 0.10–0.25) in HIV-HCV coinfected and 0.13 (IQR 0.07–0.17) in HCV monoinfected patients (p=0.01), equating to an estimated time from HCV infection to cirrhosis of 23 and 32 years, respectively. Older age at infection (p<0.001), HIV positivity (p=0.019), higher alanine aminotransferase (ALT) level (p=0.039), and higher inflammatory activity (p<0.001) on first biopsy were all independently associated with more rapid fibrosis progression. ALT was correlated with histological index (r=0.35, p<0.001). A CD4 cell count ⩽250×106/l was independently associated with advanced liver fibrosis (odds ratio 5.36 (95% confidence interval 1.26–22.79)) and was also correlated with a higher histological index (r=−0.42, p=0.002).
Conclusion: HIV infection modifies the natural history of HCV by accelerating the rate of fibrosis progression 1.4-fold and increasing the rate of development of advanced fibrosis threefold. A low CD4 cell count was independently associated with advanced disease and correlated with a higher histological index, which suggests that early antiretroviral therapy may be of benefit in slowing HCV progression in coinfected patients.
PMCID: PMC1773713  PMID: 12801963
human immunodeficiency virus; liver fibrosis; hepatitis C virus
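The abstract above converts fibrosis progression rates (Ishak units per year) into estimated times from HCV infection to cirrhosis (23 and 32 years). A minimal sketch of that arithmetic follows; the cirrhosis threshold of 4 Ishak units is an assumption used here only to approximately reproduce the reported figures, so the exact convention should be checked against the paper.

```python
# Fibrosis progression rate = (Ishak stage at biopsy) / (years since infection).
# Time to cirrhosis is then approximated as the number of stage units to
# cirrhosis divided by that rate.  A threshold of 4 units is assumed here
# purely for illustration (0.17 units/yr -> ~23 y; 0.13 units/yr -> ~31 y).
def years_to_cirrhosis(rate_units_per_year, cirrhosis_stage=4):
    return cirrhosis_stage / rate_units_per_year

print(round(years_to_cirrhosis(0.17), 1))   # coinfected:   ~23.5 years
print(round(years_to_cirrhosis(0.13), 1))   # monoinfected: ~30.8 years
```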
7.  Genomic Predictors for Recurrence Patterns of Hepatocellular Carcinoma: Model Derivation and Validation 
PLoS Medicine  2014;11(12):e1001770.
In this study, Lee and colleagues develop a genomic predictor that can identify patients at high risk for late recurrence of hepatocellular carcinoma (HCC) and provide new biomarkers for risk stratification.
Background
Typically observed at 2 y after surgical resection, late recurrence is a major challenge in the management of hepatocellular carcinoma (HCC). We aimed to develop a genomic predictor that can identify patients at high risk for late recurrence and assess its clinical implications.
Methods and Findings
Systematic analysis of gene expression data from human liver undergoing hepatic injury and regeneration revealed a 233-gene signature that was significantly associated with late recurrence of HCC. Using this signature, we developed a prognostic predictor that can identify patients at high risk of late recurrence, and tested and validated the robustness of the predictor in patients (n = 396) who underwent surgery between 1990 and 2011 at four centers (210 recurrences during a median of 3.7 y of follow-up). In multivariate analysis, this signature was the strongest risk factor for late recurrence (hazard ratio, 2.2; 95% confidence interval, 1.3–3.7; p = 0.002). In contrast, our previously developed tumor-derived 65-gene risk score was significantly associated with early recurrence (p = 0.005) but not with late recurrence (p = 0.7). In multivariate analysis, the 65-gene risk score was the strongest risk factor for very early recurrence (<1 y after surgical resection) (hazard ratio, 1.7; 95% confidence interval, 1.1–2.6; p = 0.01). The potential significance of STAT3 activation in late recurrence was predicted by gene network analysis and validated later. We also developed and validated 4- and 20-gene predictors from the full 233-gene predictor. The main limitation of the study is that most of the patients in our study were hepatitis B virus–positive. Further investigations are needed to test our prediction models in patients with different etiologies of HCC, such as hepatitis C virus.
Conclusions
Two independently developed predictors reflected well the differences between early and late recurrence of HCC at the molecular level and provided new biomarkers for risk stratification.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Primary liver cancer—a tumor that starts when a liver cell acquires genetic changes that allow it to grow uncontrollably—is the second-leading cause of cancer-related deaths worldwide, killing more than 600,000 people annually. If hepatocellular cancer (HCC; the most common type of liver cancer) is diagnosed in its early stages, it can be treated by surgically removing part of the liver (resection), by liver transplantation, or by local ablation, which uses an electric current to destroy the cancer cells. Unfortunately, the symptoms of HCC, which include weight loss, tiredness, and jaundice (yellowing of the skin and eyes), are vague and rarely appear until the cancer has spread throughout the liver. Consequently, HCC is rarely diagnosed before the cancer is advanced and untreatable, and has a poor prognosis (likely outcome)—fewer than 5% of patients survive for five or more years after diagnosis. The exact cause of HCC is unclear, but chronic liver (hepatic) injury and inflammation (caused, for example, by infection with hepatitis B virus [HBV] or by alcohol abuse) promote tumor development.
Why Was This Study Done?
Even when it is diagnosed early, HCC has a poor prognosis because it often recurs. Patients treated for HCC can experience two distinct types of tumor recurrence. Early recurrence, which usually happens within the first two years after surgery, arises from primary cancer cells that had spread into the surrounding liver and were left behind during surgery. Late recurrence, which typically happens more than two years after surgery, involves the development of completely new tumors and seems to be the result of chronic liver damage. Because early and late recurrence have different clinical courses, it would be useful to be able to predict which patients are at high risk of which type of recurrence. Given that injury, inflammation, and regeneration seem to prime the liver for HCC development, might the gene expression patterns associated with these conditions serve as predictive markers for the identification of patients at risk of late recurrence of HCC? Here, the researchers develop a genomic predictor for the late recurrence of HCC by examining gene expression patterns in tissue samples from livers that were undergoing injury and regeneration.
What Did the Researchers Do and Find?
By comparing gene expression data obtained from liver biopsies taken before and after liver transplantation or resection and recorded in the US National Center for Biotechnology Information Gene Expression Omnibus database, the researchers identified 233 genes whose expression in liver differed before and after liver injury (the hepatic injury and regeneration, or HIR, signature). Statistical analyses indicate that the expression of the HIR signature in archived tissue samples was significantly associated with late recurrence of HCC in three independent groups of patients, but not with early recurrence (a significant association between two variables is one that is unlikely to have arisen by chance). By contrast, a tumor-derived 65-gene signature previously developed by the researchers was significantly associated with early recurrence but not with late recurrence. Notably, as few as four genes from the HIR signature were sufficient to construct a reliable predictor for late recurrence of HCC. Finally, the researchers report that many of the genes in the HIR signature encode proteins involved in inflammation and cell death, but that others encode proteins involved in cellular growth and proliferation such as STAT3, a protein with a well-known role in liver regeneration.
What Do These Findings Mean?
These findings identify a gene expression signature that was significantly associated with late recurrence of HCC in three independent groups of patients. Because most of these patients were infected with HBV, the ability of the HIR signature to predict late occurrence of HCC may be limited to HBV-related HCC and may not be generalizable to HCC related to other causes. Moreover, the predictive ability of the HIR signature needs to be tested in a prospective study in which samples are taken and analyzed at baseline and patients are followed to see whether their HCC recurs; the current retrospective study analyzed stored tissue samples. Importantly, however, the HIR signature associated with late recurrence and the 65-gene signature associated with early recurrence provide new insights into the biological differences between late and early recurrence of HCC at the molecular level. Knowing about these differences may lead to new treatments for HCC and may help clinicians choose the most appropriate treatments for their patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001770.
The US National Cancer Institute provides information about all aspects of cancer, including detailed information for patients and professionals about primary liver cancer (in English and Spanish)
The American Cancer Society also provides information about liver cancer (including information on support programs and services; available in several languages)
The UK National Health Service Choices website provides information about primary liver cancer (including a video about coping with cancer)
Cancer Research UK (a not-for-profit organization) also provides detailed information about primary liver cancer (including information about living with primary liver cancer)
MD Anderson Cancer Center provides information about symptoms, diagnosis, treatment, and prevention of primary liver cancer
MedlinePlus provides links to further resources about liver cancer (in English and Spanish)
doi:10.1371/journal.pmed.1001770
PMCID: PMC4275163  PMID: 25536056
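The hazard ratios quoted above come from multivariate Cox proportional hazards models of recurrence. The sketch below is a generic, hypothetical illustration of fitting such a model on simulated data with the lifelines library; the variable names, simulated effect size, and censoring rule are assumptions, and the code is not the authors' analysis pipeline.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)

# Simulated cohort: a binary "high-risk signature" flag plus follow-up
# time and a late-recurrence indicator.  Everything here is synthetic;
# the goal is only to show how a hazard ratio (e.g., ~2.2) is estimated.
n = 400
high_risk = rng.integers(0, 2, size=n)
true_log_hr = 0.8                                   # exp(0.8) is roughly 2.2
time = rng.exponential(1.0 / (0.08 * np.exp(true_log_hr * high_risk)))
event = (time < 10).astype(int)                     # administrative censoring at 10 y
time = np.minimum(time, 10.0)

df = pd.DataFrame({"time": time, "recurrence": event, "high_risk": high_risk})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="recurrence")
cph.print_summary()                                 # exp(coef) for high_risk should be near 2.2
```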
8.  Factors associated with the progression of fibrosis on liver biopsy in Alaska Native and American Indian persons with chronic hepatitis C 
BACKGROUND:
Various factors influence the development and rate of fibrosis progression in chronic hepatitis C virus (HCV) infection.
OBJECTIVES:
To examine factors associated with fibrosis in a long-term outcomes study of Alaska Native/American Indian persons who underwent liver biopsy, and to examine the rate of fibrosis progression in persons with subsequent biopsies.
METHODS:
A cross-sectional analysis of the demographic, inflammatory and viral characteristics of persons undergoing liver biopsy compared individuals with early (Ishak fibrosis score of lower than 3) with those with advanced (Ishak score of 3 or greater) fibrosis. Persons who underwent two or more biopsies were analyzed for factors associated with fibrosis progression.
RESULTS:
Of 253 HCV RNA-positive persons who underwent at least one liver biopsy, 76 (30%) had advanced fibrosis. On multivariate analysis, a Knodell histological activity index score of 10 to 14 and an alpha-fetoprotein level of 8 ng/mL or higher were found to be independent predictors of advanced liver fibrosis (P<0.0001 for each). When surrogate markers of liver inflammation (alanine aminotransferase, aspartate aminotransferase/alanine aminotransferase ratio and alpha-fetoprotein) were removed from the model, type 2 diabetes mellitus (P=0.001), steatosis (P=0.03) and duration of HCV infection by 10-year intervals (P=0.02) were associated with advanced fibrosis. Among 52 persons who underwent two or more biopsies a mean of 6.2 years apart, the mean Ishak fibrosis score increased between biopsies (P=0.002), with progression associated with older age at initial biopsy and HCV risk factors.
CONCLUSIONS:
The presence of type 2 diabetes mellitus, steatosis and duration of HCV infection were independent predictors of advanced fibrosis in the present cohort, with significant fibrosis progression demonstrated in persons who underwent serial biopsies.
PMCID: PMC2918486  PMID: 20652161
Hepatitis C; Liver fibrosis; Liver steatosis
9.  Motor Vehicle Crashes in Diabetic Patients with Tight Glycemic Control: A Population-based Case Control Analysis 
PLoS Medicine  2009;6(12):e1000192.
Using a population-based case control analysis, Donald Redelmeier and colleagues found that tighter glycemic control, as measured by the HbA1c, is associated with an increased risk of a motor vehicle crash.
Background
Complications from diabetes mellitus can compromise a driver's ability to safely operate a motor vehicle, yet little is known about whether euglycemia predicts normal driving risks among adults with diabetes. We studied the association between glycosylated hemoglobin (HbA1c) and the risk of a motor vehicle crash using a population-based case control analysis.
Methods and Findings
We identified consecutive drivers reported to vehicle licensing authorities between January 1, 2005 and January 1, 2007 who had a diagnosis of diabetes mellitus and a documented HbA1c. The risk of a crash was calculated taking into account potential confounders including blood glucose monitoring, complications, and treatments. A total of 57 patients were involved in a crash and 738 were not. The mean HbA1c was lower for those in a crash than for controls (7.4% versus 7.9%, unpaired t-test, p = 0.019), equal to a 26% increase in the relative risk of a crash for each 1% reduction in HbA1c (odds ratio = 1.26, 95% confidence interval 1.03–1.54). The trend was evident across the range of HbA1c values and persisted after adjustment for measured confounders (odds ratio = 1.25, 95% confidence interval 1.02–1.55). The two other significant risk factors for a crash were a history of severe hypoglycemia requiring outside assistance (odds ratio = 4.07, 95% confidence interval 2.35–7.04) and later age at diabetes diagnosis (odds ratio per decade = 1.29, 95% confidence interval 1.07–1.57).
Conclusions
In this selected population, tighter glycemic control, as measured by the HbA1c, is associated with an increased risk of a motor vehicle crash.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Around 8% of the US population has diabetes, a group of diseases in which the body cannot control levels of glucose (sugar) in the blood. It can lead to serious complications and premature death, but suitable treatment can control the disease and lower the risk of complications.
Type 1 diabetes occurs when the body's immune system prevents the production of insulin, the hormone that controls blood glucose. It accounts for 5%–10% of diabetes cases in adults and the vast majority of cases in childhood. Patients with type 1 diabetes need to inject insulin to survive. Type 2 diabetes is associated with older age, obesity, family history of diabetes, lack of physical activity, and race/ethnicity. As obesity rates rise worldwide, it is expected that the prevalence of type 2 diabetes will increase.
Why Was This Study Done?
Some complications of diabetes affect the ability to drive safely. Prolonged periods of high blood sugar levels can damage eyesight and nerves throughout the body, resulting in pain, tingling, and reduction of feeling or muscle control. Over time, some diabetics may become unaware of the early symptoms of an abnormally low blood sugar level (hypoglycemia) that can cause confusion, clumsiness, or fainting. Severe hypoglycemia can result in seizures or a coma.
It is common for driver licensing authorities to require evidence that a diabetic person's condition is well controlled before they issue a driving license. One measure of this is the percentage of hemoglobin in their blood that has joined up with glucose, known as HbA1c. This provides a measure of average blood glucose levels over the previous 8–12 weeks. A lower reading is considered an indicator of good diabetic control, but conversely, a blood glucose level that is too low can cause hypoglycemia. Normal nondiabetic HbA1c is between 3.5% and 5.5%, but 6.5% is considered good for people with diabetes.
In this study the researchers tested whether blood glucose levels, as measured by levels of HbA1c, were statistically associated with the risk of a motor vehicle crash.
What Did the Researchers Do and Find?
The authors studied 795 diabetic adults who had been in contact with the driver licensing authority in Ontario, Canada between January 1, 2005 and January 1, 2007 and for whom HbA1c levels were recorded. HbA1c levels varied between 4.4% and 14.7%.
Of the drivers considered, 57 were involved in a car crash and 738 were not. The authors found that lower HbA1c levels were associated with an increased risk of a motor vehicle crash, even when they took into account other factors such as time since diagnosis, treatment, age, age when diagnosed, and, for those taking insulin, the age at which insulin was started.
The authors also found that the risk of a crash quadrupled when a driver had a history of severe hypoglycemia that required outside help and that there was an increase in risk when diabetes had first been diagnosed at an older age.
What Do These Findings Mean?
The authors conclude by emphasizing the difficulty in knowing whether someone with diabetes is fit to drive. They suggest that a patient's HbA1c level is neither necessary nor sufficient to determine whether a diabetic person is fit to drive, and these results, which agree with some other studies, call into question the current legal frameworks of the US, UK, Canada, Germany, Holland, and Australia, which single out diabetic drivers for medical review.
The finding that lower HbA1c levels are associated with an increased risk of a crash is surprising, as it suggests that a driver is less safe if they control their diabetes well. However, a statistical link does not prove that one event causes another. Unknown social or medical factors might explain the results. In this case, the authors point out that a major drawback of their study is that it is not randomized and drivers have free will in choosing how tightly to control their diabetes and also how carefully they drive. The authors considered whether time spent driving might explain the results, but discounted this for several reasons. One more plausible explanation is that intensive treatment to attain a lower HbA1c level for better general health raises the risk of hypoglycemic episodes.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000192.
Wikipedia includes an article on diabetes (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The American Diabetes Association publishes information on diabetes in English and Spanish
The American Diabetes Association also publishes information on US states regulation of drivers with diabetes
The World Health Organization of the United Nations Diabetes Programme works to prevent diabetes, minimize complications, and maximize quality of life
doi:10.1371/journal.pmed.1000192
PMCID: PMC2780354  PMID: 19997624
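The key figure above is an odds ratio of 1.26 per 1% reduction in HbA1c, which is how a logistic-regression coefficient is usually read. The snippet below back-calculates the per-unit coefficient from that published odds ratio and shows how it scales to other HbA1c differences; it is an illustration of the arithmetic only, not a reanalysis of the study data.

```python
import math

# A logistic-regression coefficient beta for HbA1c (per 1% increase)
# converts to an odds ratio via exp(beta).  The abstract reports the OR
# per 1% *reduction*, i.e., the reciprocal direction, so the coefficient
# below is back-calculated from the reported OR purely for illustration.
or_per_percent_reduction = 1.26
beta_per_percent_increase = -math.log(or_per_percent_reduction)

def crash_odds_multiplier(delta_hba1c):
    """Multiplicative change in crash odds for a given change in HbA1c (%)."""
    return math.exp(beta_per_percent_increase * delta_hba1c)

print(round(crash_odds_multiplier(-1.0), 2))   # 1% lower HbA1c -> ~1.26x the odds
print(round(crash_odds_multiplier(+2.0), 2))   # 2% higher HbA1c -> ~0.63x the odds
```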
10.  Similar Progression of Fibrosis between HIV/HCV- and HCV-Infected Patients: Analysis of Paired Liver Biopsy Samples 
Background and Aims
Fibrosis progression might be accelerated in patients that are co-infected with HIV and hepatitis C virus (HIV/HCV). However, no studies have directly compared fibrosis progression by paired liver biopsy between patients infected with HIV and HCV vs. those infected with only HCV.
Methods
Liver biopsy samples were collected from patients with HIV/HCV (n=306) and from patients with HCV alone; biopsies from 59 HIV/HCV patients without a sustained virologic response (SVR) or cirrhosis were matched with those from patients with only HCV (controls) for initial fibrosis stage, demographics, and HCV treatment. For HIV/HCV patients, categorical variables at baseline and the area under the curve of continuous variables per unit time were analyzed for associations with fibrosis progression.
Results
Liver biopsies from HIV/HCV patients had more piecemeal necrosis than controls (P=.001) and increased lobular inflammation (P=.002); HIV/HCV patients also had shorter intervals between liver biopsies (4.7 vs. 5.9 yrs, P<.0001). Between the 1st and 2nd biopsies, fibrosis remained unchanged or progressed 1 or 2 units in 55%, 18%, and 18% of HIV/HCV patients, respectively, compared with 45%, 30%, and 9% of controls. The fibrosis progression rate was similar between HIV/HCV and control patients (0.12±0.40 vs. 0.091±0.29 units/yr; P=.72). In paired biopsies from 66 patients, including those with SVR, there were no associations between fibrosis progression and demographics; numbers of CD4+ T cells; levels of aspartate aminotransferase or alanine aminotransferase; use of highly-active anti-retroviral therapy; response to HCV therapy (no treatment, SVR, or non-response); baseline levels of FIB-4; or histological features including inflammation, fibrosis, or steatosis.
Conclusions
Based on analysis of liver biopsy samples, fibrosis progression was similar between HIV/HCV- and HCV-infected patients; no clinical or laboratory parameters predicted disease progression.
doi:10.1016/j.cgh.2010.08.004
PMCID: PMC2997143  PMID: 20728569
Hepatitis C disease progression; co-infection; human immunodeficiency virus
11.  Assessment of Liver Fibrosis: Noninvasive Means 
Liver biopsy, owing to its limitations and risks, is an imperfect gold standard for assessing the severity of the most frequent chronic liver diseases: chronic hepatitis C (HCV), chronic hepatitis B (HBV), and non-alcoholic (NAFLD) and alcoholic (ALD) fatty liver diseases. This review summarizes the advantages and the limits of the available biomarkers of liver fibrosis. Among a total of 2,237 references, 14 validated serum biomarkers were identified between 1991 and 2008; nine were not patented and five were patented. The two most extensively evaluated alternatives to liver biopsy were FibroTest and Fibroscan. For FibroTest, there was a total of 38 different populations including 7,985 subjects with both FibroTest and biopsy (4,600 HCV, 1,580 HBV, 267 NAFLD, 524 ALD, and 1,014 mixed). For Fibroscan, there was a total of 11 published studies including 2,260 subjects (1,466 HCV, 95 cholestatic liver disease, and 699 mixed). For FibroTest, the mean diagnostic value for the diagnosis of advanced fibrosis, assessed using the standardized area under the ROC curve, was 0.84 (95% confidence interval 0.83-0.86), without a significant difference between the causes of liver disease (hepatitis C, hepatitis B, and alcoholic or non-alcoholic fatty liver disease). High-risk profiles for false negatives/false positives of FibroTest, mainly Gilbert syndrome, hemolysis and acute inflammation, are present in 3% of the populations. In case of discordance between biopsy and FibroTest, half of the failures can be due to biopsy; the prognostic value of FibroTest is at least similar to that of biopsy in HCV, HBV and ALD.
In conclusion, this overview of evidence-based data suggests that biomarkers could be used as an alternative to liver biopsy for the first-line assessment of fibrosis stage in the four most common chronic liver diseases, namely HCV, HBV, NAFLD and ALD. Neither biomarkers nor biopsy alone is sufficient for making a definite decision in a given patient; all the clinical and biological data must be taken into account. There are no evidence-based data justifying biopsy as a first-line estimate of liver fibrosis. Health authorities in some countries have already approved validated biomarkers as the first-line procedure for the staging of liver fibrosis.
doi:10.4103/1319-3767.43273
PMCID: PMC2702928  PMID: 19568532
FibroScan; FibroTest; liver fibrosis; SteatoTest
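The FibroTest performance above is summarized as an area under the ROC curve (AUROC) of 0.84 against biopsy-defined advanced fibrosis. The sketch below shows how such an AUROC is computed from a continuous marker and a binary reference label using scikit-learn; the data are simulated and the marker is a generic stand-in, not FibroTest itself.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Simulated stand-in data: a serum-marker score in [0, 1] and a binary
# biopsy-based label for advanced fibrosis.  Parameters are chosen so the
# AUROC lands roughly in the 0.8 range; none of this comes from the review.
n = 500
advanced = rng.integers(0, 2, size=n)
marker = rng.normal(loc=0.35 + 0.20 * advanced, scale=0.17, size=n).clip(0.0, 1.0)

print(round(roc_auc_score(advanced, marker), 3))
```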
12.  End-stage Renal Disease and African-American Race are Independent Predictors of Mild Liver Fibrosis in Patients with Chronic Hepatitis C Infection 
Journal of Viral Hepatitis  2012;19(5):371-376.
Recipients of hemodialysis for end-stage renal disease have a higher prevalence of hepatitis C virus (HCV) infection relative to the general U.S. population. However, the natural course of HCV infection in patients with renal failure, including African-Americans and Caucasian-Americans, is not well known. We compared the degree of liver inflammation and fibrosis in patients with HCV infection, with and without end-stage renal disease. This was a cross-sectional study of 156 HCV patients with end-stage renal disease (130 African Americans and 26 Caucasian Americans) who had a liver biopsy between 1992 and 2005. The control group consisted of 138 patients (50 African Americans) with HCV infection and a serum creatinine less than 1.5 mg/dL who had a liver biopsy between 1995 and 1998. Specimens were graded for inflammation and fibrosis using the Knodell Histological Activity Index. Compared to patients without renal impairment, HCV patients with renal failure were older and more likely to be African American. Patients with renal impairment had lower mean serum transaminases, higher mean serum alkaline phosphatase levels (all p<0.0001), and less hepatic necro-inflammation (Knodell inflammation scores I, II and III; p<0.05) and fibrosis (Knodell fibrosis score >4; p<0.0001). There were no racial differences in serum liver chemistry and histology scores among patients with renal failure. In a multivariate analysis, younger age, end-stage renal disease, African American race, and a lower serum alkaline phosphatase were associated with lower odds of advanced liver fibrosis. Thus, HCV patients with end-stage renal disease had a lower degree of hepatic inflammation and fibrosis compared to those without renal disease, independent of race.
doi:10.1111/j.1365-2893.2011.01565.x
PMCID: PMC3328295  PMID: 22497817
End-stage renal disease; Hepatitis C; Liver fibrosis; Cirrhosis
13.  Clinical course of hepatitis C virus during the first decade of infection: cohort study 
BMJ : British Medical Journal  2002;324(7335):450.
Objective
To determine the clinical course of hepatitis C virus in the first decade of infection in a group of patients who acquired their infections on a known date.
Design
Cohort study.
Setting
Clinical centres throughout the United Kingdom.
Participants
924 transfusion recipients infected with the hepatitis C virus (HCV) traced during the HCV lookback programme and 475 transfusion recipients who tested negative for antibodies to HCV (controls).
Main outcome measures
Clinical evidence of liver disease and survival after 10 years of infection.
Results
All cause mortality was not significantly different between patients and controls (Cox's hazards ratio 1.41, 95% confidence interval 0.95 to 2.08). Patients were more likely to be certified with a death related to liver disease than were controls (12.84, 1.73 to 95.44), but although the risk of death directly from liver disease was higher in patients than controls this difference was not significant (5.78, 0.72 to 46.70). Forty per cent of the patients who died directly from liver disease were known to have consumed excess alcohol. Clinical follow up of 826 patients showed that liver function was abnormal in 307 (37.2%), and 115 (13.9%) reported physical signs or symptoms of liver disease. Factors associated with developing liver disease were testing positive for HCV ribonucleic acid (odds ratio 6.44, 2.67 to 15.48), having acquired infection when older (at age ⩾ 40 years; 1.80, 1.14 to 2.85), and years since transfusion (odds ratio 1.096 per year, 1.00 to 1.20). For patients with severe disease, sex was also significant (odds ratio for women 0.38, 0.17 to 0.88). Of the 362 patients who had undergone liver biopsy, 328 (91%) had abnormal histological results and 35 (10%) of these were cirrhotic.
Conclusions
Hepatitis C virus infection did not have a great impact on all cause mortality in the first decade of infection. Infected patients were at increased risk of dying directly from liver disease, particularly if they consumed excess alcohol, but this difference was not statistically significant.
What is already known on this topic
The clinical course of HCV infection is unclear because most information has come from studies of patients with established chronic liver disease. Studies that follow patients from disease onset are rare because most HCV infections are asymptomatic.
What this study adds
HCV infection does not have a great impact on all cause mortality in the first decade of infection. Infected patients have an increased risk of dying from a liver related cause, particularly if they consumed excess alcohol.
PMCID: PMC65664  PMID: 11859045
14.  Extracorporeal Photophoresis 
Executive Summary
Objective
To assess the effectiveness, safety and cost-effectiveness of extracorporeal photophoresis (ECP) for the treatment of refractory erythrodermic cutaneous T cell lymphoma (CTCL) and refractory chronic graft versus host disease (cGvHD).
Background
Cutaneous T Cell Lymphoma
Cutaneous T cell lymphoma (CTCL) is a general name for a group of skin-affecting disorders caused by malignant white blood cells (T lymphocytes). Cutaneous T cell lymphoma is relatively uncommon and represents slightly more than 2% of all lymphomas in the United States. The most frequently diagnosed form of CTCL is mycosis fungoides (MF) and its leukemic variant Sezary syndrome (SS). The relative frequency and disease-specific 5-year survival of 1,905 primary cutaneous lymphomas classified according to the World Health Organization-European Organization for Research and Treatment of Cancer (WHO-EORTC) classification are summarized in Appendix 1. Mycosis fungoides had a frequency of 44% and a disease-specific 5-year survival of 88%. Sezary syndrome had a frequency of 3% and a disease-specific 5-year survival of 24%.
Cutaneous T cell lymphoma has an annual incidence of approximately 0.4 per 100,000 and it mainly occurs in the 5th to 6th decade of life, with a male/female ratio of 2:1. Mycosis fungoides is an indolent lymphoma with patients often having several years of eczematous or dermatitic skin lesions before the diagnosis is finally established. Mycosis fungoides commonly presents as chronic eczematous patches or plaques and can remain stable for many years. Early in the disease biopsies are often difficult to interpret and the diagnosis may only become apparent by observing the patient over time.
The clinical course of MF is unpredictable. Most patients will live normal lives and experience skin symptoms without serious complications. Approximately 10% of MF patients will experience progressive disease involving lymph nodes, peripheral blood, bone marrow and visceral organs. A particular syndrome in these patients involves erythroderma (intense and usually widespread reddening of the skin from dilation of blood vessels, often preceding or associated with exfoliation), and circulating tumour cells. This is known as SS. It has been estimated that approximately 5-10% of CTCL patients have SS. Patients with SS have a median survival of approximately 30 months.
Chronic Graft Versus Host Disease
Allogeneic hematopoietic cell transplantation (HCT) is a treatment used for a variety of malignant and nonmalignant diseases of the bone marrow and immune system. The procedure is often associated with serious immunological complications, particularly graft versus host disease (GvHD). A chronic form of GvHD (cGvHD) afflicts many allogeneic HCT recipients, resulting in dysfunction of numerous organ systems or even a profound state of immunodeficiency. Chronic GvHD is the most frequent cause of poor long-term outcome and quality of life after allogeneic HCT. The syndrome typically develops several months after transplantation, when the patient may no longer be under the direct care of the transplant team.
Approximately 50% of patients with cGvHD have limited disease and a good prognosis. Of the patients with extensive disease, approximately 60% will respond to treatment and eventually be able to discontinue immunosuppressive therapy. The remaining patients will develop opportunistic infection, or require prolonged treatment with immunosuppressive agents.
Chronic GvHD occurs in at least 30% to 50% of recipients of transplants from human leukocyte antigen matched siblings and at least 60% to 70% of recipients of transplants from unrelated donors. Risk factors include older age of patient or donor, higher degree of histoincompatibility, unrelated versus related donor, use of hematopoietic cells obtained from the blood rather than the marrow, and previous acute GvHD. Bhushan and Collins estimated that the incidence of severe cGvHD has probably increased in recent years because of the use of more unrelated transplants, donor leukocyte infusions, nonmyeloablative transplants and stem cells obtained from the blood rather than the marrow. The syndrome typically occurs 4 to 7 months after transplantation but may begin as early as 2 months or as late as 2 or more years after transplantation. Chronic GvHD may occur by itself, evolve from acute GvHD, or occur after resolution of acute GvHD.
The onset of the syndrome may be abrupt but is frequently insidious with manifestations evolving gradually for several weeks. The extent of involvement varies significantly from mild involvement limited to a few patches of skin to severe involvement of numerous organ systems and profound immunodeficiency. The most commonly involved tissues are the skin, liver, mouth, and eyes. Patients with limited disease have localized skin involvement, evidence of liver dysfunction, or both, whereas those with more involvement of the skin or involvement of other organs have extensive disease.
Treatment
 
Cutaneous T Cell Lymphoma
The optimal management of MF is undetermined because of its low prevalence, and its highly variable natural history, with frequent spontaneous remissions and exacerbations and often prolonged survival.
Nonaggressive approaches to therapy are usually warranted, with treatment aimed at improving symptoms and physical appearance while limiting toxicity. Given that multiple skin sites are usually involved, the initial treatment choices are usually topical or intralesional corticosteroids or phototherapy using psoralen (a plant-derived compound that makes the skin temporarily sensitive to ultraviolet A) (PUVA). PUVA is not curative and its influence on disease progression remains uncertain. Repeated courses are usually required, which may lead to an increased risk of both melanoma and nonmelanoma skin cancer. For thicker plaques, particularly if localized, radiotherapy with superficial electrons is an option.
“Second line” therapy for early stage disease is often topical chemotherapy, radiotherapy or total skin electron beam radiation (TSEB).
Treatment of advanced stage (IIB-IV) MF usually consists of topical or systemic therapy in refractory or rapidly progressive SS.
Bone marrow transplantation and peripheral blood stem cell transplantation have been used to treat many malignant hematologic disorders (e.g., leukemias) that are refractory to conventional treatment. Reports on the use of these procedures for the treatment of CTCL are limited and mostly consist of case reports or small case series.
Chronic Graft Versus Host Disease
Patients who develop cGvHD require reinstitution of immunosuppressive medication (if already discontinued) or an increase in dosage and possibly addition of other agents. The current literature regarding cGvHD therapy is less than optimal and many recommendations about therapy are based on common practices that await definitive testing. Patients with disease that is extensive by definition but is indolent in clinical appearance may respond to prednisone. However, patients with more aggressive disease are treated with higher doses of corticosteroids and/or cyclosporine.
Numerous salvage therapies have been considered in patients with refractory cGvHD, including ECP. Due to uncertainty around salvage therapies, Bhushan and Collins suggested that ideally, patients with refractory cGvHD should be entered into clinical trials.
Two Ontario expert consultants jointly estimated that there may be approximately 30 new erythrodermic treatment resistant CTCL patients and 30 new treatment resistant cGvHD patients per year who are unresponsive to other forms of therapy and may be candidates for ECP.
Extracorporeal photopheresis is a procedure that was initially developed as a treatment for CTCL, particularly SS.
Current Technique
Extracorporeal photopheresis is an immunomodulatory technique based on pheresis of light-sensitive cells. Whole blood is removed from the patient and undergoes pheresis: lymphocytes are separated by centrifugation to create a concentrated layer of white blood cells. The lymphocyte layer is treated with methoxsalen (a drug that sensitizes the lymphocytes to light) and exposed to UVA, after which the lymphocytes are returned to the patient. Red blood cells and plasma are returned to the patient between each cycle.
Photosensitization is achieved by administering methoxsalen to the patient orally 2 hours before the procedure, or by injecting methoxsalen directly into the leucocyte-rich fraction. The latter approach avoids potential side effects such as nausea, and provides a more consistent drug level within the machine.
In general, from the time the intravenous line is inserted until the white blood cells are returned to the patient takes approximately 2.5-3.5 hours.
For CTCL, the treatment schedule is generally 2 consecutive days every 4 weeks for a median of 6 months. For cGvHD, an expert in the field estimated that the treatment schedule would be 3 times a week for the 1st month, then 2 consecutive days every 2 weeks after that (i.e., 4 treatments a month) for a median of 6 to 9 months.
Regulatory Status
The UVAR XTS Photopheresis System is licensed by Health Canada as a Class 3 medical device (license # 7703) for the “palliative treatment of skin manifestations of CTCL.” It is not licensed for the treatment of cGvHD.
UVADEX (sterile solution methoxsalen) is not licensed by Health Canada, but can be used in Canada via the Special Access Program. (Personal communication, Therakos, February 16, 2006)
According to the manufacturer, the UVAR XTS photopheresis system licensed by Health Canada can also be used with oral methoxsalen. (Personal communication, Therakos, February 16, 2006) However, oral methoxsalen is associated with side effects, must be taken by the patient in advance of ECP, and has variable absorption in the gastrointestinal tract.
According to Health Canada, UVADEX is not approved for use in Canada. In addition, a review of the Product Monographs of the methoxsalen products that have been approved in Canada showed that none of them have been approved for oral administration in combination with the UVAR XTS photopheresis system for “the palliative treatment of the skin manifestations of cutaneous T-cell lymphoma”.
In the United States, the UVAR XTS Photopheresis System is approved by the Food and Drug Administration (FDA) for “use in the ultraviolet-A (UVA) irradiation in the presence of the photoactive drug methoxsalen of extracorporeally circulating leukocyte-enriched blood in the palliative treatment of the skin manifestations of CTCL in persons who have not been responsive to other therapy.”
UVADEX is approved by the FDA for use in conjunction with UVR XTS photopheresis system for “use in the ultraviolet-A (UVA) irradiation in the presence of the photoactive drug methoxsalen of extracorporeally circulating leukocyte-enriched blood in the palliative treatment of the skin manifestations of CTCL in persons who have not been responsive to other therapy.”
The use of the UVAR XTS photopheresis system or UVADEX for cGvHD is an off-label use of a FDA approved device/drug.
Summary of Findings
The quality of the trials was examined.
As stated by the GRADE Working Group, the following definitions were used in grading the quality of the evidence.
Cutaneous T Cell Lymphoma
Overall, there is low-quality evidence that ECP improves response rates and survival in patients with refractory erythrodermic CTCL (Table 1).
Limitations in the literature related to ECP for the treatment of refractory erythrodermic CTCL include the following:
Different treatment regimens.
Variety of forms of CTCL (and not necessarily treatment resistant) - MF, erythrodermic MF, SS.
SS with peripheral blood involvement → role of T cell clonality reporting?
Case series (1 small crossover RCT with several limitations)
Small sample sizes.
Retrospective.
Response criteria not clearly defined/consistent.
Unclear how concomitant therapy contributed to responses.
Variation in definitions of concomitant therapy
Comparison to historical controls.
Some patients were excluded from analysis because of progression of disease, toxicity and other reasons.
Unclear or questionable statistical methods.
Quality of life not reported as an outcome of interest.
The reported complete response (CR) rate ranges from approximately 16% to 23%, and the overall reported CR/partial response (PR) rate ranges from approximately 33% to 80%.
The wide range in reported responses to ECP appears to be due to the variability of the patients treated and the way in which the data were presented and analyzed.
Many patients, in mostly retrospective case series, were concurrently on other therapies and were not assessed for comparability of diagnosis or disease stage (MF versus SS; erythrodermic versus not erythrodermic). Blood involvement in patients receiving ECP (e.g., T cell clonality) was not consistently reported, especially in earlier studies. The definitions of partial and complete response also are not standardized or consistent between studies.
Quality of life was reported in one study; however, the scale was developed by the authors and is not a standard validated scale.
Adverse events associated with ECP appear to be uncommon and most involve catheter related infections and hypotension caused by volume depletion.
GRADE Quality of Studies – Extracorporeal Photopheresis for Refractory Erythrodermic Cutaneous T-Cell Lymphoma
Chronic Graft-Versus-Host Disease
Overall, there is low-quality evidence that ECP improves response rates and survival in patients with refractory cGvHD (Table 2).
Patients in the studies had stem cell transplants due to a variety of hematological disorders (e.g., leukemias, aplastic anemia, thalassemia major, Hodgkin’s lymphoma, non Hodgkin’s lymphoma).
In 2001, The Blue Cross Blue Shield Technology Evaluation Centre concluded that ECP meets the TEC criteria as treatment of cGvHD that is refractory to established therapy.
The Catalan health technology assessment (also published in 2001) concluded that ECP is a new but experimental therapeutic alternative for the treatment of the erythrodermal phase of CTCL and cGvHD in allogeneic HPTC and that this therapy should be evaluated in the framework of a RCT.
Quality of life (Lansky/Karnofsky play performance score) was reported in 1 study.
The patients in the studies were all refractory to steroids and other immunosuppressive agents, and these drugs were frequently continued concomitantly with ECP.
Criteria for assessment of organ improvement in cGvHD are variable, but PR was typically defined as >50% improvement from baseline parameters and CR as complete resolution of organ involvement.
Follow-up was variable and incomplete among the studies.
GRADE Quality of Studies – ECP for Refractory cGvHD
Conclusion
As per the GRADE Working Group, overall recommendations consider 4 main factors.
The tradeoffs, taking into account the estimated size of the effect for the main outcome, the confidence limits around those estimates and the relative value placed on the outcome.
The quality of the evidence (Tables 1 and 2).
Translation of the evidence into practice in a specific setting, taking into consideration important factors that could be expected to modify the size of the expected effects such as proximity to a hospital or availability of necessary expertise.
Uncertainty about the baseline risk for the population of interest.
The GRADE Working Group also recommends that incremental costs of healthcare alternatives should be considered explicitly alongside the expected health benefits and harms. Recommendations rely on judgments about the value of the incremental health benefits in relation to the incremental costs. The last column in Table 3 is the overall trade-off between benefits and harms and incorporates any risk/uncertainty.
For refractory erythrodermic CTCL, the overall GRADE and strength of the recommendation is “weak”: the quality of the evidence is “low” (uncertainty due to methodological limitations in study design, in terms of both study quality and directness), and the corresponding risk/uncertainty is increased by an annual budget impact of approximately $1.5M Cdn (based on 30 patients), while the cost-effectiveness of ECP is unknown and difficult to estimate given the absence of high-quality studies of effectiveness. The device is licensed by Health Canada, but the sterile solution of methoxsalen is not.
The annual budget impact of treating 30 patients with refractory erythrodermic CTCL in Ontario would be approximately $1.5M Cdn, whereas the current out-of-country expenditure is $1.3M Cdn for 7 patients; on this basis, the potential annual cost savings for 30 patients is estimated at about $3.8M Cdn.
For refractory cGvHD, the overall GRADE and strength of the recommendation is likewise “weak”: the quality of the evidence is “low” (uncertainty due to methodological limitations in study design, in terms of both study quality and directness), and the corresponding risk/uncertainty is increased by a budget impact of approximately $1.5M Cdn, while the cost-effectiveness of ECP is unknown and difficult to estimate given the absence of high-quality studies of effectiveness. Neither the device nor the sterile solution is licensed by Health Canada for the treatment of cGvHD.
If all the ECP procedures for patients with refractory erythrodermic CTCL and refractory cGvHD were performed in Ontario, the annual budget impact would be approximately $3M Cdn.
Overall GRADE and Strength of Recommendation (Including Uncertainty)
PMCID: PMC3379535  PMID: 23074497
15.  Maternal Clinical Diagnoses and Hospital Variation in the Risk of Cesarean Delivery: Analyses of a National US Hospital Discharge Database 
PLoS Medicine  2014;11(10):e1001745.
Katy Kozhimannil and colleagues use a national database to examine the extent to which variability in cesarean section rates across the US from 2009–2010 was attributable to individual women's clinical diagnoses.
Please see later in the article for the Editors' Summary
Background
Cesarean delivery is the most common inpatient surgery in the United States, where 1.3 million cesarean sections occur annually, and rates vary widely by hospital. Identifying sources of variation in cesarean use is crucial to improving the consistency and quality of obstetric care. We used hospital discharge records to examine the extent to which variability in the likelihood of cesarean section across US hospitals was attributable to individual women's clinical diagnoses.
Methods and Findings
Using data from the 2009 and 2010 Nationwide Inpatient Sample from the Healthcare Cost and Utilization Project—a 20% sample of US hospitals—we analyzed data for 1,475,457 births in 1,373 hospitals. We fitted multilevel logistic regression models (patients nested in hospitals). The outcome was cesarean (versus vaginal) delivery. Covariates included diagnosis of diabetes in pregnancy, hypertension in pregnancy, hemorrhage during pregnancy or placental complications, fetal distress, and fetal disproportion or obstructed labor; maternal age, race/ethnicity, and insurance status; and hospital size and location/teaching status.
The cesarean section prevalence was 22.0% (95% confidence interval 22.0% to 22.1%) among women with no prior cesareans. In unadjusted models, the between-hospital variation in the individual risk of primary cesarean section was 0.14 (95% credible interval 0.12 to 0.15). The difference in the probability of having a cesarean delivery between hospitals was 25 percentage points. Hospital variability did not decrease after adjusting for patient diagnoses, socio-demographics, and hospital characteristics (0.16 [95% credible interval 0.14 to 0.18]). A limitation is that these data, while nationally representative, did not contain information on parity or gestational age.
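To make the modeling approach concrete, the sketch below simulates the random-intercept structure that a multilevel logistic regression of this kind assumes: each hospital receives its own intercept on the log-odds scale, drawn with the between-hospital variance reported above. It is an illustration only, not the authors' model; the hospital and birth counts echo the abstract, while the diabetes prevalence and its coefficient are hypothetical.

```python
# Minimal simulation sketch of a random-intercept (multilevel) logistic model:
# each hospital j gets an intercept u_j ~ N(0, sigma2_between) on the log-odds
# scale, and each birth i in hospital j is a cesarean with probability
# expit(beta0 + u_j + beta * x_ij).  Coefficient values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_hospitals, births_per_hospital = 1373, 1000
sigma2_between = 0.14                        # between-hospital variance (abstract)
beta0 = np.log(0.22 / 0.78)                  # baseline log-odds (22% prevalence)
beta_diabetes = 0.5                          # hypothetical covariate effect

u = rng.normal(0.0, np.sqrt(sigma2_between), n_hospitals)   # hospital intercepts
diabetes = rng.binomial(1, 0.07, (n_hospitals, births_per_hospital))
logits = beta0 + u[:, None] + beta_diabetes * diabetes
cesarean = rng.binomial(1, 1 / (1 + np.exp(-logits)))
print("simulated primary cesarean rate:", cesarean.mean())   # roughly 0.23
```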
Conclusions
Variability across hospitals in the individual risk of cesarean section is not decreased by accounting for differences in maternal diagnoses. These findings highlight the need for more comprehensive or linked data including parity and gestational age as well as examination of other factors—such as hospital policies, practices, and culture—in determining cesarean section use.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
In an ideal world, all babies would be delivered safely and naturally through their mother's vagina. However, increasing numbers of babies are being delivered by cesarean section, a surgical operation in which the baby is delivered through a cut made in the mother's abdomen and womb. In the US, a third of all babies (about 1.3 million babies in 2011) are delivered this way. A cesarean section is usually performed when a vaginal birth would endanger the life of the mother or her unborn child because, for example, the baby is in the wrong position or the labor is not progressing normally. Some cesarean sections are performed as emergency procedures, but others are planned in advance when the need for the operation becomes clear during pregnancy. Although cesarean sections can save lives, women who deliver this way have higher rates of infection, pain, and complications in future pregnancies than women who deliver vaginally, and their babies can have breathing problems.
Why Was This Study Done?
Currently, cesarean section rates vary widely from country to country and from hospital to hospital within countries. Careful assessment of the risks and benefits of cesarean delivery in individual patients can help to ensure that cesarean sections are used only when necessary, but changes to clinical and policy guidelines are also needed to ensure that cesarean delivery is neither overused nor underused. To guide these changes, we need to know whether cesarean section rates vary among hospitals because of case-mix differences (some hospitals may have high rates because they admit many women with complicated pregnancies, for example) or because of differences in modifiable nonclinical factors such as hospital policies and practices. In this retrospective multilevel analysis, the researchers examine whether the current wide variation in cesarean section rates across US hospitals is attributable to differences in maternal clinical diagnoses and patient characteristics or to hospital-level differences in the use of cesarean delivery.
What Did the Researchers Do and Find?
For their study, the researchers used hospital discharge data on nearly 1.5 million births in 1,373 hospitals collected by the 2009 and 2010 US Nationwide Inpatient Sample database, which captures administrative data (for example, length of stay in hospital and clinical complications) from a representative sample of 20% of US hospitals. To assess the chances of cesarean delivery based on hospital and patient characteristics, researchers fitted these data to multilevel logistic regression statistical models. Among women with no prior cesarean deliveries, the (primary) cesarean section rate was 22%, whereas among the whole study population, it was 33% (women who have one cesarean delivery often have a cesarean section for subsequent deliveries). In unadjusted models that compared cesarean section rates between hospitals without considering patient characteristics, the between-hospital variance for primary cesarean section rate was 0.14. Put another way, the likelihood of an individual having a first cesarean delivery varied between 11% and 36% across the hospitals considered. After adjustment for maternal clinical diagnoses, maternal age and other socio-demographic factors, and hospital characteristics such as size, the between-hospital variance for the primary cesarean section rate was 0.16.
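As a rough check of the “11% to 36%” range quoted above, the between-hospital variance can be translated back to the probability scale. The snippet below assumes the range corresponds to an approximate 95% spread of hospital intercepts around the 22% median prevalence, which is an assumption rather than the paper's exact calculation.

```python
# Back-of-the-envelope conversion of the between-hospital variance (log-odds
# scale) into a range of hospital-level cesarean probabilities.
import numpy as np

p_median, var_between = 0.22, 0.14
logit = np.log(p_median / (1 - p_median))
half_width = 1.96 * np.sqrt(var_between)      # assumed ~95% range of intercepts
expit = lambda x: 1 / (1 + np.exp(-x))
print(expit(logit - half_width), expit(logit + half_width))  # ~0.12 and ~0.37
```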
What Do These Findings Mean?
The finding that the between-hospital variance for primary cesarean section rate did not decrease after adjusting for maternal characteristics (and other findings presented by the researchers) suggests that differences in case mix or pregnancy complexity may not drive the wide variability in cesarean section rates across US hospitals. However, the lack of information in the US Nationwide Inpatient Sample database on parity (the number of babies a woman has had) or gestational age (the length of time the baby has spent developing inside its mother) limits the strength of this conclusion. Both parity and gestational age strongly predict a woman's risk of a cesarean delivery. Thus, unmeasured differences in the parity of women admitted to different hospitals and/or the gestational age of their babies may be driving some of the variability in cesarean section rates across US hospitals. The lack of hospital-level information on obstetric care policies in the database also means that the many possible administrative explanations for variations across hospitals cannot be assessed. These findings therefore highlight the need for more comprehensive patient data to be collected (including information on parity and gestational age) and on hospital policies, practices, and culture before the variation in cesarean section rate across US hospitals can be fully understood and the use of cesarean delivery can be optimized.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001745.
This study is further discussed in a PLOS Medicine Perspective by Gordon C. S. Smith
The American College of Obstetricians and Gynecologists provides a fact sheet for patients on cesarean birth
The American College of Nurse-Midwives provides a fact sheet for pregnant women on preventing cesarean birth
The US-based Childbirth Connection Project of the non-profit National Partnership for Women and Families has a booklet called “What Every Woman Should Know about Cesarean Section”
The US-based non-profit Nemours Foundation provides detailed information about cesarean sections (in English and Spanish)
The UK National Health Service Choices website provides information for patients about delivery by cesarean section
MedlinePlus provides links to additional resources about cesarean section (in English and Spanish)
The UK non-profit organization Healthtalkonline provides personal stories about women's experiences of cesarean delivery
Information about the US Nationwide Inpatient Sample database is available
doi:10.1371/journal.pmed.1001745
PMCID: PMC4205118  PMID: 25333943
16.  Advancing donor liver age and rapid fibrosis progression following transplantation for hepatitis C 
Gut  2002;51(2):248-252.
Background and aims: Cirrhosis with liver failure due to hepatitis C virus (HCV) infection is the most common indication for liver transplantation (LT). Reinfection of the transplanted liver by HCV is inevitable, and aggressive hepatitis with accelerated progression to graft cirrhosis may be observed. Of concern, recent reports suggest that the outcome of LT for HCV may have deteriorated in recent years. Determinants of rate of progression to cirrhosis in the immunocompetent non-transplant patient are well defined, and the most powerful determinant is patient age at the time of infection. Following LT for HCV, recipient age does not affect outcome of HCV reinfection. However, the impact of donor age on graft fibrosis progression rate following LT has not been examined.
Methods: We have examined post-transplant biopsies to assess histological activity, including fibrosis stage (scored 0–6 units, 6 representing established cirrhosis), and to calculate fibrosis progression rates in 101 post-transplant specimens from 56 HCV infected LT patients. Univariate and multivariate analyses examined the impact of parameters including recipient and donor age and sex on fibrosis progression rate, and on predicted time to cirrhosis.
Results: For the cohort, median fibrosis progression rate was 0.78 units/year, and median interval from transplantation to development of cirrhosis was 7.7 years. In multivariate analysis, donor age (not recipient age) was a powerful determinant (p=0.02) of fibrosis progression rate. When the liver donor was younger than 40 years, median progression rate was 0.6 units/year and interval to cirrhosis was 10 years. When the donor was aged 50 years or more, median progression rate was 2.7 units/year and interval to cirrhosis only 2.2 years. During the observation period there has been a significant increase in donor age (p=0.01) but date of transplantation per se is not a determinant of progression rate when included in multivariate analyses.
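The reported figures are consistent with the usual fibrosis-progression-rate arithmetic, in which the rate is the fibrosis stage (in units) divided by the years since transplantation and the predicted interval to cirrhosis is the time needed to reach stage 6 at that rate; the sketch below reproduces the quoted medians under that assumption.

```python
# Sketch of the fibrosis-progression-rate arithmetic implied by the abstract:
# rate = fibrosis stage (0-6 units) / years since transplant, and predicted
# time to cirrhosis = 6 units / rate (6 / 0.78 ~= 7.7 y; 6 / 0.6 = 10 y;
# 6 / 2.7 ~= 2.2 y, matching the reported medians).

def progression_rate(stage_units: float, years_since_transplant: float) -> float:
    """Fibrosis units gained per year since transplantation."""
    return stage_units / years_since_transplant

def predicted_years_to_cirrhosis(rate_units_per_year: float) -> float:
    """Years to reach stage 6 (cirrhosis) at a constant progression rate."""
    return 6.0 / rate_units_per_year

for rate in (0.78, 0.6, 2.7):   # cohort median, donor <40 y, donor >=50 y
    print(rate, round(predicted_years_to_cirrhosis(rate), 1))
```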
Conclusions: Donor age has a major influence on graft outcome following transplantation for HCV. The changing organ donor profile will affect the long term results of LT for HCV. These observations have important implications for donor liver allocation.
PMCID: PMC1773334  PMID: 12117889
hepatitis C virus; liver transplantation; donor age; fibrosis progression
17.  Early Detection, Curative Treatment, and Survival Rates for Hepatocellular Carcinoma Surveillance in Patients with Cirrhosis: A Meta-analysis 
PLoS Medicine  2014;11(4):e1001624.
Amit Singal and colleagues conducted a systematic review of the evidence that surveillance for hepatocellular carcinoma in patients with cirrhosis improves early detection, receipt of curative treatment, and overall survival.
Please see later in the article for the Editors' Summary
Background
Surveillance for hepatocellular carcinoma (HCC) has level I evidence among patients with hepatitis B but only level II evidence in patients with cirrhosis. This lack of randomized data has spurred questions regarding the utility of HCC surveillance in this patient population; however, lack of randomized data does not equate to a lack of data supporting the efficacy of surveillance. The aim of our study was to determine the effect of HCC surveillance on early stage tumor detection, receipt of curative therapy, and overall survival in patients with cirrhosis.
Methods and Findings
We performed a systematic literature review using Medline from January 1990 through January 2014 and a search of national meeting abstracts from 2009–2012. Two investigators identified studies that reported rates of early stage tumor detection, curative treatment receipt, or survival, stratified by HCC surveillance status, among patients with cirrhosis. Both investigators independently extracted data on patient populations, study methods, and results using standardized forms. Pooled odds ratios, according to HCC surveillance status, were calculated for each outcome using the DerSimonian and Laird method for a random effects model.
We identified 47 studies with 15,158 patients, of whom 6,284 (41.4%) had HCC detected by surveillance. HCC surveillance was associated with improved early stage detection (odds ratio [OR] 2.08, 95% CI 1.80–2.37) and curative treatment rates (OR 2.24, 95% CI 1.99–2.52). HCC surveillance was associated with significantly prolonged survival (OR 1.90, 95% CI 1.67–2.17), which remained significant in the subset of studies adjusting for lead-time bias. Limitations of current data included many studies having insufficient duration of follow-up to assess survival and the majority not adjusting for liver function or lead-time bias.
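For readers unfamiliar with the pooling method named above, the sketch below implements DerSimonian and Laird random-effects pooling of study-level odds ratios; the log odds ratios and variances fed to it are hypothetical, not the study data.

```python
# Minimal sketch of DerSimonian-Laird random-effects pooling of log odds ratios.
import numpy as np

def dersimonian_laird(log_or: np.ndarray, var: np.ndarray):
    w = 1.0 / var                                   # fixed-effect weights
    fe = np.sum(w * log_or) / np.sum(w)             # fixed-effect pooled log-OR
    q = np.sum(w * (log_or - fe) ** 2)              # Cochran's Q
    k = len(log_or)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (var + tau2)                       # random-effects weights
    pooled = np.sum(w_re * log_or) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return np.exp(pooled), np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)

log_or = np.log(np.array([1.8, 2.4, 2.1, 1.6]))     # hypothetical study ORs
var = np.array([0.04, 0.09, 0.02, 0.06])            # hypothetical variances
print(dersimonian_laird(log_or, var))               # pooled OR with 95% CI
```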
Conclusions
HCC surveillance is associated with significant improvements in early tumor detection, receipt of curative therapy, and overall survival in patients with cirrhosis.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Hepatocellular cancer (HCC) is the commonest form of primary liver cancer—a type of cancer that starts when a cell in the liver acquires genetic changes that allow it to grow uncontrollably. Primary liver cancer is the third leading cause of cancer-related death worldwide, killing more than 600,000 people every year. The symptoms of HCC are vague and rarely appear until the cancer has spread throughout the liver. They include unexplained weight loss, feeling sick, tiredness, and jaundice (yellowing of the skin and eyes). If liver cancer is diagnosed in its early stages, it can be treated by surgically removing part of the liver, by liver transplantation, or by a procedure called radiofrequency ablation in which an electric current is used to destroy the cancer cells. However, most people are diagnosed with HCC when the cancer is advanced and cannot be treated. These individuals are given palliative treatment to relieve pain and discomfort. Although most patients who are diagnosed with HCC at an early stage survive more than 5 years, patients with more advanced HCC have an average survival less than one year. The exact cause of HCC is unknown, but it is thought to be related to cirrhosis (scarring) of the liver. This condition is the end result of long-term (chronic) liver damage caused by, for example, alcohol abuse or infection with hepatitis B virus (HBV).
Why Was This Study Done?
Because HCC tends to be untreatable when it is diagnosed at a late stage, if the tumor can be found early by regularly measuring blood levels of alpha fetoprotein (a liver cancer biomarker) and using ultrasound, outcomes for patients at high risk of developing HCC might be improved. Indeed, American and European guidelines recommend HCC surveillance with ultrasound every 6 months in patients with HBV infection and/or cirrhosis. However, although randomized controlled trial results support HCC surveillance among patients infected with HBV, no randomized trials have investigated its use among patients with cirrhosis. Here, the researchers use predefined criteria to identify all the published cohort and case-control studies (two types of non-randomized studies) that have examined the impact of HCC surveillance on outcomes in patients with cirrhosis. They then pool the data from these studies using a statistical approach called meta-analysis to estimate whether HCC surveillance is associated with improvements in early tumor detection, curative treatment receipt, and survival rates among patients with cirrhosis.
What Did the Researchers Do and Find?
The researchers identified 47 studies that examined the association of HCC surveillance with outcomes in 15,158 patients with cirrhosis who developed HCC. In 41.4% of these patients, HCC was detected by surveillance. Among patients who had undergone HCC surveillance, the pooled rate of early detection was 70.9%, whereas among patients who had not undergone surveillance but who were diagnosed incidentally or who presented with symptoms, the pooled rate of early detection was 29.9%. The researchers calculated that the pooled odds (chances) of early detection among patients undergoing surveillance compared to early detection among patients not undergoing surveillance was 2.08 (an odds ratio [OR] of 2.08). The pooled rate of curative treatment receipt among patients undergoing surveillance was 51.3% compared to only 23.8% among patients not undergoing surveillance (OR 2.24). Finally, among those patients for whom the relevant data were available, 50.8% of patients who had undergone HCC surveillance but only 28.2% of those who had not undergone surveillance survived for at least 3 years after diagnosis (OR 1.90).
What Do These Findings Mean?
These findings show that HCC surveillance is associated with significant improvements (improvements that are unlikely to have happened by chance) in early tumor detection, receipt of curative treatment, and overall survival among patients with cirrhosis. Importantly, the association with improved overall survival remained significant after adjusting for the possibility that patients who underwent surveillance died at the same time as they would have done without surveillance but appeared to survive longer because they were diagnosed earlier (this is called adjustment for lead-time bias). These results must be interpreted cautiously, however, because many of the studies included in the meta-analysis had insufficient follow-up to assess survival adequately, not all the studies adjusted for lead-time bias, and none of the studies assessed potential downstream harms of HCC surveillance such as complications of liver biopsies. Nevertheless, overall, these findings provide sufficient evidence to support guidelines that recommend regular HCC surveillance for patients with cirrhosis.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001624.
The US National Cancer Institute provides information about all aspects of cancer, including detailed information for patients and professionals about primary liver cancer and about screening for primary liver cancer (in English and Spanish)
The American Cancer Society also provides information about liver cancer (available in several languages)
The UK National Health Service Choices website provides information about primary liver cancer and about cirrhosis (including patient stories)
Cancer Research UK (a not-for-profit organization) also provides detailed information about primary liver cancer
MedlinePlus provides links to further resources about liver cancer and cirrhosis (in English and Spanish)
Information is available at the American Liver Foundation
American Association for the Study of Liver Diseases provides practice guidelines
doi:10.1371/journal.pmed.1001624
PMCID: PMC3972088  PMID: 24691105
18.  Emergence of Drug Resistance Is Associated with an Increased Risk of Death among Patients First Starting HAART 
PLoS Medicine  2006;3(9):e356.
Background
The impact of the emergence of drug-resistance mutations on mortality is not well characterized in antiretroviral-naïve patients first starting highly active antiretroviral therapy (HAART). Patients may be able to sustain immunologic function with resistant virus, and there is limited evidence that reduced sensitivity to antiretrovirals leads to rapid disease progression or death. We undertook the present analysis to characterize the determinants of mortality in a prospective cohort study with a median of nearly 5 y of follow-up. The objective of this study was to determine the impact of the emergence of drug-resistance mutations on survival among persons initiating HAART.
Methods and Findings
Participants were antiretroviral therapy naïve at entry and initiated triple combination antiretroviral therapy between August 1, 1996, and September 30, 1999. Marginal structural modeling was used to address potential confounding between time-dependent variables in the Cox proportional hazard regression models. In this analysis resistance to any class of drug was considered as a binary time-dependent exposure to the risk of death, controlling for the effect of other time-dependent confounders. We also considered each separate class of mutation as a binary time-dependent exposure, while controlling for the presence/absence of other mutations. A total of 207 deaths were identified among 1,138 participants over the follow-up period, with an all cause mortality rate of 18.2%. Among the 679 patients with HIV-drug-resistance genotyping done before initiating HAART, HIV-drug resistance to any class was observed in 53 (7.8%) of the patients. During follow-up, HIV-drug resistance to any class was observed in 302 (26.5%) participants. Emergence of any resistance was associated with mortality (hazard ratio: 1.75 [95% confidence interval: 1.27, 2.43]). When we considered each class of resistance separately, persons who exhibited resistance to non-nucleoside reverse transcriptase inhibitors had the highest risk: mortality rates were 3.02 times higher (95% confidence interval: 1.99, 4.57) for these patients than for those who did not exhibit this type of resistance.
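The marginal structural modeling referred to above rests on inverse-probability-of-exposure weighting. The sketch below shows how stabilized weights might be constructed for a single time point with synthetic data; it does not reproduce the published analysis, which handled repeated visits, additional confounders, and a weighted Cox proportional hazards model.

```python
# Minimal sketch of stabilized inverse-probability weights for a binary
# exposure (emergent drug resistance), using synthetic data for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1138
cd4 = rng.normal(350, 120, n)                        # time-varying confounder (synthetic)
adherence = rng.binomial(1, 0.7, n)                  # synthetic confounder
resistance = rng.binomial(1, 1 / (1 + np.exp(0.004 * (cd4 - 350) + 1.5 * adherence)))

# Denominator model: P(resistance | confounders); numerator: marginal P(resistance).
X = np.column_stack([cd4, adherence])
p_denom = LogisticRegression().fit(X, resistance).predict_proba(X)[:, 1]
p_num = resistance.mean()

sw = np.where(resistance == 1, p_num / p_denom, (1 - p_num) / (1 - p_denom))
print("mean stabilized weight (should be close to 1):", sw.mean().round(2))
# These weights would then be applied to the outcome (survival) model.
```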
Conclusions
We demonstrated that emergence of resistance to non-nucleoside reverse transcriptase inhibitors was associated with a greater risk of subsequent death than was emergence of protease inhibitor resistance. Future research is needed to identify the particular subpopulations of men and women at greatest risk and to elucidate the impact of resistance over a longer follow-up period.
Emergence of resistance to both non-nucleoside reverse transcriptase inhibitors and protease inhibitors was associated with a higher risk of subsequent death, but the risk was greater in patients with NNRTI-resistant HIV.
Editors' Summary
Background.
In the 1980s, infection with the human immunodeficiency virus (HIV) was effectively a death sentence. HIV causes AIDS (acquired immunodeficiency syndrome) by replicating inside immune system cells and destroying them, which leaves infected individuals unable to fight off other viruses and bacteria. The first antiretroviral drugs were developed quickly, but it soon became clear that single antiretrovirals only transiently suppress HIV infection. HIV mutates (accumulates random changes to its genetic material) very rapidly and, although most of these changes (or mutations) are bad for the virus, by chance some make it drug resistant. Highly active antiretroviral therapy (HAART), which was introduced in the mid-1990s, combines three or four antiretroviral drugs that act at different stages of the viral life cycle. For example, they inhibit the reverse transcriptase that the virus uses to replicate its genetic material, or the protease that is necessary to assemble new viruses. With HAART, the replication of any virus that develops resistance to one drug is inhibited by the other drugs in the mix. As a consequence, for many individuals with access to HAART, AIDS has become a chronic rather than a fatal disease. However, being on HAART requires patients to take several pills a day at specific times. In addition, the drugs in the HAART regimens often have side effects.
Why Was This Study Done?
Drug resistance still develops even with HAART, often because patients don't stick to the complicated regimens. The detection of resistance to one drug is usually the prompt to change a patient's drug regimen to head off possible treatment failure. Although most patients treated with HAART live for many years, some still die from AIDS. We don't know much about how the emergence of drug-resistance mutations affects mortality in patients who are starting antiretroviral therapy for the first time. In this study, the researchers looked at how the emergence of drug resistance affected survival in a group of HIV/AIDS patients in British Columbia, Canada. Here, everyone with HIV/AIDS has access to free medical attention, HAART, and laboratory monitoring, and full details of all HAART recipients are entered into a central reporting system.
What Did the Researchers Do and Find?
The researchers enrolled people who started antiretroviral therapy for the first time between August 1996 and September 1999 into the HAART Observational Medical Evaluation and Research (HOMER) cohort. They then excluded anyone who was infected with already drug-resistant HIV strains (based on the presence of drug-resistance mutations in viruses isolated from the patients) at the start of therapy. The remaining 1,138 patients were followed for an average of five years. All the patients received either two nucleoside reverse transcriptase inhibitors and a protease inhibitor, or two nucleoside and one non-nucleoside reverse transcriptase inhibitor (NNRTI). Nearly a fifth of the study participants died during the follow-up period. Most of these patients actually had drug-sensitive viruses, possibly because they had neglected taking their drugs to such an extent that there had been insufficient drug exposure to select for drug-resistant viruses. In a quarter of the patients, however, HIV strains resistant to one or more antiretroviral drugs emerged during the study (again judged by looking for mutations). Detailed statistical analyses indicated that the emergence of any drug resistance nearly doubled the risk of patients dying, and that people carrying viruses resistant to NNRTIs were three times as likely to die as those without resistance to this class of antiretroviral drug.
What Do These Findings Mean?
These results provide new information about the emergence of drug-resistant HIV during HAART and possible effects on the long-term survival of patients. In particular, they suggest that clinicians should watch carefully for the emergence of resistance to NNRTIs in their patients. Because this type of resistance is often due to poor adherence to drug regimens, these results also suggest that increased efforts should be made to ensure that patients comply with the prescribed HAART regimens, especially those whose antiretroviral therapy includes NNRTIs. As with all studies in which a group of individuals who share a common characteristic are studied over time, it is possible that some other, unmeasured difference between the patients who died and those who didn't—rather than emerging drug resistance—is responsible for the observed differences in survival. Additional studies are needed to confirm the findings here, and to investigate whether specific subpopulations of patients are at particular risk of developing drug resistance and/or dying during HAART.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030356.
US National Institute of Allergy and Infectious Diseases fact sheet on HIV infection and AIDS
US Department of Health and Human Services information on AIDS, including details of approved drugs for the treatment of HIV infection
US Centers for Disease Control and Prevention information on HIV/AIDS
Aidsmap, information on HIV and AIDS provided by the charity NAM, which includes details on antiretroviral drugs
doi:10.1371/journal.pmed.0030356
PMCID: PMC1569883  PMID: 16984218
19.  Kidney and liver organ transplantation in persons with human immunodeficiency virus 
Executive Summary
Objective
The objective of this analysis is to determine the effectiveness of solid organ transplantation in persons with end stage organ failure (ESOF) and human immunodeficiency virus (HIV+)
Clinical Need: Condition and Target Population
Patients with end stage organ failure who have been unresponsive to other forms of treatment eventually require solid organ transplantation. Similar to persons who are HIV negative (HIV−), persons living with HIV infection (HIV+) are at risk for ESOF from viral (e.g. hepatitis B and C) and non-viral aetiologies (e.g. coronary artery disease, diabetes, hepatocellular carcinoma). Additionally, HIV+ persons also incur risks of ESOF from HIV-associated nephropathy (HIVAN), accelerated liver damage from hepatitis C virus (HCV+), with which an estimated 30% of HIV positive (HIV+) persons are co-infected, and coronary artery disease secondary to antiretroviral therapy. Concerns that the need for post transplant immunosuppression and/or the interaction of immunosuppressive drugs with antiretroviral agents may accelerate the progression of HIV disease, as well as the risk of opportunistic infections post transplantation, have led to uncertainty regarding the overall benefit of transplantation among HIV+ patients. Moreover, the scarcity of donor organs and their use in a population where the clinical benefit of transplantation is uncertain has limited the availability of organ transplantation to persons living with ESOF and HIV.
With the development of highly active anti retroviral therapy (HAART), which has been available in Canada since 1997, there has been improved survival and health-related quality of life for persons living with HIV. HAART can suppress HIV replication, enhance immune function, and slow disease progression. HAART managed persons can now be expected to live longer than those in the pre-HAART era and as a result many will now experience ESOF well before they experience life-threatening conditions related to HIV infection. Given their improved prognosis and the burden of illness they may experience from ESOF, the benefit of solid organ transplantation for HIV+ patients needs to be reassessed.
Evidence-Based Analysis Methods
Research Questions
What are the effectiveness and cost effectiveness of solid organ transplantation in HIV+ persons with ESOF?
Literature Search
A literature search was performed on September 22, 2009 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Agency for Health Technology Assessment (INAHTA) for studies published from January 1, 1996 to September 22, 2009.
Inclusion Criteria
Systematic review with or without a meta-analysis, RCT, non-RCT with controls
HIV+ population undergoing solid organ transplantation
HIV+ population managed with HAART therapy
Controls include persons undergoing solid organ transplantation who are i) HIV− ii) HCV+ mono-infected, and iii) HIV+ persons with ESOF not transplanted.
Studies that completed and reported results of a Kaplan-Meier Survival Curve analysis.
Studies with a minimum (mean or median) follow-up of 1 year.
English language citations
Exclusion Criteria
Case reports and case series were excluded from this review.
Outcomes of Interest
i) Risk of Death after transplantation
ii) Death censored graft survival (DCGS)
iii) HIV disease progression defined as the post transplant incidence of:
- opportunistic infections or neoplasms,
- CD4+ T-cell count < 200 cells/mm3, and
- any detectable level of plasma HIV viral load.
iv) Acute graft rejection,
v) Return to dialysis,
vi) Recurrence of HCV infection
Summary of Findings
No direct evidence comparing an HIV+ cohort undergoing transplantation with an HIV+ cohort not undergoing transplantation (wait list) was found in the literature search.
The results of this review are reported for the following comparison cohorts undergoing transplantation:
i) Kidney Transplantation: HIV+ cohort compared with HIV− cohort
ii) Liver Transplantation: HIV+ cohort compared with HIV− cohort
iii) Liver Transplantation: HIV+ HCV+ (co-infected) cohort compared with HCV+ (mono-infected) cohort
Kidney Transplantation: HIV+ vs. HIV−
Based on a pooled HIV+ cohort sample size of 285 patients across four studies, the risk of death after kidney transplantation in an HIV+ cohort does not differ from that of an HIV− cohort [hazard ratio (HR): 0.90; 95% CI: 0.36, 2.23]. The quality of evidence supporting this outcome is very low.
Death censored graft survival was reported in one study with an HIV+ cohort sample size of 100, and was statistically significantly different (p=.03) from that in the HIV− cohort (n=36,492). However, the quality of evidence supporting this outcome was determined to be very low. There was also uncertainty in the rate of return to dialysis after kidney transplantation in both the HIV+ and HIV− groups and the effect, if any, this may have on patient survival. Because of the very low quality evidence rating, the effect of kidney transplantation on HIV-disease progression is uncertain.
The rate of acute graft rejection was determined using the data from one study. There was a nonsignificant difference between the HIV+ and HIV− cohorts (OR 0.13; 95% CI: 0.01, 2.64), although again, because of very low quality evidence there is uncertainty in this estimate of effect.
Liver Transplantation: HIV+ vs. HIV−
Based on a combined HIV+ cohort sample size of 198 patients across five studies, the risk of death after liver transplantation in an HIV+ cohort (with at least 50% of the cohort co-infected with HCV+) is 64% greater than that in an HIV− cohort, a statistically significant difference (HR: 1.64; 95% CI: 1.32, 2.02). The quality of evidence supporting this outcome is very low.
Death censored graft survival was reported for an HIV+ cohort in one study (n=11); however, the DCGS rate of the contemporaneous control HIV− cohort was not reported. Because of sparse data, the quality of evidence supporting this outcome is very low, indicating that death censored graft survival is uncertain.
Both the CD4+ T-cell count and HIV viral load appear controlled post transplant, with an incidence of opportunistic infection of 20.5%. However, the quality of the evidence for these outcomes is very low, indicating uncertainty in these effects. Similarly, because of very low quality evidence, there is uncertainty in the rate of acute graft rejection among both the HIV+ and HIV− groups.
Liver Transplantation: HIV+/HCV+ vs. HCV+
Based on a combined HIV+/HCV+ cohort sample size of 156 from seven studies, the risk of death after liver transplantation is significantly greater (2.8 fold) in a co-infected cohort compared with an HCV+ mono-infected cohort (HR: 2.81; 95% CI: 1.47, 5.37). The quality of evidence supporting this outcome is very low. Death censored graft survival evidence was not available.
Regarding disease progression, based on a combined sample size of 71 persons in the co-infected cohort, the CD4+ T-cell count and HIV viral load appear controlled post transplant; however, again the quality of evidence supporting this outcome is very low. The rate of opportunistic infection in the co-infected cohort was 7.2%. The quality of evidence supporting this estimate is very low, indicating uncertainty in these estimates of effect.
Based on a combined HIV+/HCV+ cohort (n=57), the rate of acute graft rejection does not differ from that of an HCV+ mono-infected cohort (OR: 0.88; 95% CI: 0.44, 1.76). Also based on a combined HIV+/HCV+ cohort (n=83), the rate of HCV+ recurrence does not differ from that of an HCV+ mono-infected cohort (OR: 0.66; 95% CI: 0.27, 1.59). In both cases, the quality of the supporting evidence was very low.
Overall, because of very low quality evidence, there is uncertainty in the effect of kidney or liver transplantation in HIV+ persons with end stage organ failure compared with those not infected with HIV. Examining the economics of this issue, the costs of kidney and liver transplants in an HIV+ patient population are, on average, approximately $56,000 and $147,000 per case, respectively, based on both Canadian and American experiences.
PMCID: PMC3377507  PMID: 23074407
20.  HIV virological rebounds but not blips predict liver fibrosis progression in antiretroviral-treated HIV/hepatitis C virus-coinfected patients 
HIV Medicine  2014;16(1):24-31.
Objectives
Antiretroviral interruption is associated with liver fibrosis progression in HIV/hepatitis C virus (HCV) coinfection. It is not known what level of HIV viraemia affects fibrosis progression.
Methods
We evaluated 288 HIV/HCV-coinfected cohort participants with undetectable HIV RNA (< 50 HIV-1 RNA copies/mL) on two consecutive visits while on combination antiretroviral therapy (cART) without fibrosis [aspartate aminotransferase to platelet ratio index (APRI) < 1.5], end-stage liver disease or HCV therapy. An HIV blip was defined as a viral load of ≥ 50 and < 1000 copies/mL, preceded and followed by undetectable values. HIV rebound was defined as: (i) HIV RNA ≥ 50 copies/mL on two consecutive visits, or (ii) a single HIV RNA measurement ≥ 1000 copies/mL. Multivariate discrete-time proportional hazards models were used to assess the effect of different viraemia levels on liver fibrosis progression (APRI ≥ 1.5).
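For orientation, the snippet below computes the APRI score using its standard formula (AST relative to the upper limit of normal, times 100, divided by the platelet count in 10^9/L) and applies the blip/rebound definitions quoted above to a single measurement; the numerical inputs are hypothetical.

```python
# Sketch of the APRI calculation and the viraemia definitions in the abstract.

def apri(ast_iu_l: float, ast_uln_iu_l: float, platelets_10e9_l: float) -> float:
    """Aspartate aminotransferase to platelet ratio index (standard formula)."""
    return (ast_iu_l / ast_uln_iu_l) * 100 / platelets_10e9_l

def classify_viraemia(current: float, previous: float, following: float) -> str:
    """Simplified classification of one HIV RNA value (copies/mL) given its neighbours."""
    if current < 50:
        return "undetectable"
    if current >= 1000:
        return "rebound"                    # single measurement >= 1000 copies/mL
    if previous < 50 and following < 50:
        return "blip"                       # 50-999 copies/mL flanked by undetectable values
    return "rebound"                        # >= 50 copies/mL on consecutive visits

print(round(apri(80, 40, 120), 2))          # 1.67 -> above the 1.5 fibrosis threshold
print(classify_viraemia(400, 40, 40))       # 'blip'
```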
Results
The mean age of the patients was 45 years, 74% were male, 81% reported a history of injecting drug use, 51% currently used alcohol and the median baseline CD4 count was 440 [interquartile range (IQR) 298, 609] cells/μL. Fifty-seven (20%) participants [12.4/100 person-years (PY); 95% confidence interval (CI) 9.2−15.7/100 PY] progressed to an APRI ≥ 1.5 over a mean 1.1 (IQR 0.6, 2.0) years of follow-up time at risk. Virological rebound [hazard ratio (HR) 2.3; 95% CI 1.1, 4.7] but not blips (HR 0.5; 95% CI 0.2, 1.1) predicted progression to APRI ≥ 1.5. Each additional 1 log10 copies/mL HIV RNA exposure (cumulative) was associated with a 20% increase in the risk of fibrosis progression (HR 1.2; 95% CI 1.0–1.3).
Conclusions
Liver fibrosis progression was associated with HIV rebound, but not blips, and with increasing cumulative exposure to HIV RNA, highlighting the importance of achieving and maintaining HIV suppression in the setting of HIV/HCV coinfection.
doi:10.1111/hiv.12168
PMCID: PMC4312483  PMID: 24837567
fibrosis; hepatitis C virus; HIV; virological blips; virological rebound
21.  Assessment of Recent HIV-1 Infection by a Line Immunoassay for HIV-1/2 Confirmation 
PLoS Medicine  2007;4(12):e343.
Background
Knowledge of the number of recent HIV infections is important for epidemiologic surveillance. Over the past decade approaches have been developed to estimate this number by testing HIV-seropositive specimens with assays that discriminate the lower concentration and avidity of HIV antibodies in early infection. We have investigated whether this “recency” information can also be gained from an HIV confirmatory assay.
Methods and Findings
The ability of a line immunoassay (INNO-LIA HIV I/II Score, Innogenetics) to distinguish recent from older HIV-1 infection was evaluated in comparison with the Calypte HIV-1 BED Incidence enzyme immunoassay (BED-EIA). Both tests were conducted prospectively in all HIV infections newly diagnosed in Switzerland from July 2005 to June 2006. Clinical and laboratory information indicative of recent or older infection was obtained from physicians at the time of HIV diagnosis and used as the reference standard. BED-EIA and various recency algorithms utilizing the antibody reaction to INNO-LIA's five HIV-1 antigen bands were evaluated by logistic regression analysis. A total of 765 HIV-1 infections, 748 (97.8%) with complete test results, were newly diagnosed during the study. A negative or indeterminate HIV antibody assay at diagnosis, symptoms of primary HIV infection, or a negative HIV test during the past 12 mo classified 195 infections (26.1%) as recent (≤ 12 mo). Symptoms of CDC stages B or C classified 161 infections as older (21.5%), and 392 patients with no symptoms remained unclassified. BED-EIA ruled 65% of the 195 recent infections as recent and 80% of the 161 older infections as older. Two INNO-LIA algorithms showed 50% and 40% sensitivity combined with 95% and 99% specificity, respectively. Estimation of recent infection in the entire study population, based on actual results of the three tests and adjusted for a test's sensitivity and specificity, yielded 37% for BED-EIA compared to 35% and 33% for the two INNO-LIA algorithms. Window-based estimation with BED-EIA yielded 41% (95% confidence interval 36%–46%).
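The adjustment for a test's sensitivity and specificity described above can be illustrated with the standard Rogan-Gladen correction; whether the authors used exactly this estimator is an assumption, and the apparent positive rate in the example is hypothetical, chosen only to show how a 50%-sensitive, 95%-specific algorithm can yield an adjusted estimate near 35%.

```python
# Sketch of the standard Rogan-Gladen adjustment of an apparent "recent"
# rate for imperfect test sensitivity and specificity (illustrative only).

def rogan_gladen(apparent_positive_rate: float, sensitivity: float, specificity: float) -> float:
    """Adjusted (true) proportion positive given an apparent rate and test properties."""
    return (apparent_positive_rate + specificity - 1) / (sensitivity + specificity - 1)

# Hypothetical example: ~21% of newly diagnosed cases classified as recent by an
# algorithm with 50% sensitivity and 95% specificity implies ~35% truly recent.
print(round(rogan_gladen(0.2075, 0.50, 0.95), 2))
```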
Conclusions
Recency information can be extracted from INNO-LIA-based confirmatory testing at no additional costs. This method should improve epidemiologic surveillance in countries that routinely use INNO-LIA for HIV confirmation.
Jörg Schüpbach and colleagues show that a second-generation Western blot antibody test used to confirm HIV infection can also be used to determine rates of recent HIV infection.
Editors' Summary
Background.
Since the first diagnosed cases of AIDS (acquired immunodeficiency syndrome) in 1981, the AIDS epidemic has spread rapidly. Now, 40 million people are infected with HIV (human immunodeficiency virus), the cause of AIDS. HIV infects and kills immune system cells, leaving infected individuals susceptible to other infectious diseases and tumors. The first, often undiagnosed, stage of HIV infection (primary HIV infection) lasts a few weeks and often involves a flu-like illness. During this stage, the immune system begins to respond to HIV by producing antibodies (proteins that recognize viral molecules called antigens). The time needed for these antibodies to appear on testing, termed “seroconversion” (usually 6–12 weeks), is called the window period of the test; HIV antibody tests done during this period give false-negative results. During the second, symptom-free stage of HIV infection, which can last many years, the virus gradually destroys the immune system so that by the third stage of infection unusual infections (for example, persistent yeast infections of the mouth) begin to occur. The fourth stage is characterized by multiple AIDS-indicator conditions such as severe bacterial, fungal, or viral infections, and cancers such as Kaposi sarcoma.
Why Was This Study Done?
To monitor the AIDS/HIV epidemic and HIV prevention programs, it is necessary to know how many people in a population have been recently infected with HIV. Serologic testing algorithms for recent HIV seroconversion (STARHS) provide a way to get this information. Early during seroconversion, low levels of antibodies that bind only weakly to their viral antigens (low-affinity antibodies) are made. Later on, antibody concentrations and tightness of binding increase. STARHS calculate the number of recently infected people by analyzing data from special “detuned” HIV antibody assays (for example, a commercially available test called the BED-EIA) that preferentially detect low-concentration, low-avidity antibodies. This type of test cannot, however, be used to determine whether an individual has an HIV infection, because it will miss a substantial fraction of infected people. Diagnosing HIV in an individual person requires more sensitive tests for antibody detection. In this study, the researchers have investigated whether a test called INNO-LIA, which is already being used in some countries to diagnose HIV infection, can also provide information about the recency (newness) of HIV infections.
What Did the Researchers Do and Find?
Between July 2005 and June 2006, 765 HIV infections were newly diagnosed in Switzerland. Using clinical and laboratory information collected at diagnosis, the researchers classified 195 of these infections as recent infections (occurring within the past year) and 161 as older infections. (The remaining infections could not be classified based on the available medical information.) The researchers then compared the ability of INNO-LIA (which measures antibodies to five HIV-1 antigens) and BED-EIA to distinguish recent from older HIV infections. BED-EIA correctly identified as recent 65% of the infections classified as recent based on the clinical information, and identified as older 80% of the infections classified as older based on the clinical information. In other words, this test was 65% sensitive (able to detect 65% of the truly recent infections as defined in this study) and was 80% specific (80% accurate in eliminating non-recent infections). The two best algorithms (mathematical procedures) for converting INNO-LIA data into estimates of recent HIV infections had sensitivities of 50% and 40% and specificities of 95% and 99%, respectively. Using actual test results and taking into account these sensitivities and specificities gave estimates of 35% and 33% for the proportion of the whole study population that had been recently infected. BED-EIA gave an estimate of 37%. Finally, a widely used window-based algorithm for recency estimation that uses the numbers of cases that are defined as recent by BED-EIA and the length of the window period for BED-EIA to calculate the annual number of new infections in populations indicated that 41% of the whole study population had been recently infected.
What Do These Findings Mean?
These findings indicate that numbers of recent HIV infections can be extracted from the INNO-LIA HIV diagnostic test and are comparable to those obtained using a window-based algorithm. The test could, therefore, provide a cost-effective means to improve HIV surveillance in countries like Switzerland that already use it for HIV diagnosis. However, because this approach relies on knowing the sensitivity and specificity of the INNO-LIA algorithms, which may vary between populations, the use of these algorithms to estimate numbers of recent HIV infections must be preceded by an assessment of their sensitivity and specificity in each new setting.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040343.
HIV InSite has comprehensive information on all aspects of HIV/AIDS, including fact sheets on the symptoms of HIV infection, HIV testing, and a chapter on laboratory tests for HIV antibodies
NAM, a UK registered charity, provides information about all aspects of HIV and AIDS, including fact sheets on the stages of HIV infection and HIV testing
The US Centers for Disease Control and Prevention (CDC) provides information on HIV/AIDS, including information on HIV testing and on HIV surveillance by the CDC (in English and Spanish)
Information is available from Avert, an international AIDS charity, on the stages of HIV infection and on HIV testing
Details on the US Centers for Disease Control and Prevention and the World Health Organization HIV classification systems are available from the US Department of Veterans Affairs
doi:10.1371/journal.pmed.0040343
PMCID: PMC2100138  PMID: 18052604
22.  Association of Non-alcoholic Fatty Liver Disease with Chronic Kidney Disease: A Systematic Review and Meta-analysis 
PLoS Medicine  2014;11(7):e1001680.
In a systematic review and meta-analysis, Giovanni Musso and colleagues examine the association between non-alcoholic fatty liver disease and chronic kidney disease.
Please see later in the article for the Editors' Summary
Background
Chronic kidney disease (CKD) is a frequent, under-recognized condition and a risk factor for renal failure and cardiovascular disease. Increasing evidence connects non-alcoholic fatty liver disease (NAFLD) to CKD. We conducted a meta-analysis to determine whether the presence and severity of NAFLD are associated with the presence and severity of CKD.
Methods and Findings
English and non-English articles from international online databases from 1980 through January 31, 2014 were searched. Observational studies assessing NAFLD by histology, imaging, or biochemistry and defining CKD as either estimated glomerular filtration rate (eGFR) <60 ml/min/1.73 m2 or proteinuria were included. Two reviewers extracted studies independently and in duplicate. Individual participant data (IPD) were solicited from all selected studies. Studies providing IPD were combined with studies providing only aggregate data with the two-stage method. Main outcomes were pooled using random-effects models. Sensitivity and subgroup analyses were used to explore sources of heterogeneity and the effect of potential confounders. The influences of age, whole-body/abdominal obesity, homeostasis model of insulin resistance (HOMA-IR), and duration of follow-up on effect estimates were assessed by meta-regression. Thirty-three studies (63,902 participants, 16 population-based and 17 hospital-based, 20 cross-sectional, and 13 longitudinal) were included. For 20 studies (61% of included studies, 11 cross-sectional and nine longitudinal, 29,282 participants), we obtained IPD. NAFLD was associated with an increased risk of prevalent (odds ratio [OR] 2.12, 95% CI 1.69–2.66) and incident (hazard ratio [HR] 1.79, 95% CI 1.65–1.95) CKD. Non-alcoholic steatohepatitis (NASH) was associated with a higher prevalence (OR 2.53, 95% CI 1.58–4.05) and incidence (HR 2.12, 95% CI 1.42–3.17) of CKD than simple steatosis. Advanced fibrosis was associated with a higher prevalence (OR 5.20, 95% CI 3.14–8.61) and incidence (HR 3.29, 95% CI 2.30–4.71) of CKD than non-advanced fibrosis. In all analyses, the magnitude and direction of effects remained unaffected by diabetes status, after adjustment for other risk factors, and in other subgroup and meta-regression analyses. In cross-sectional and longitudinal studies, the severity of NAFLD was positively associated with CKD stages. Limitations of analysis are the relatively small size of studies utilizing liver histology and the suboptimal sensitivity of ultrasound and biochemistry for NAFLD detection in population-based studies.
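As an illustration of the meta-regression step mentioned above, the sketch below regresses hypothetical study-level log hazard ratios on a study-level covariate with inverse-variance weights; it is a simplified fixed-effect version, whereas the published analysis used random-effects models that additionally estimate between-study variance.

```python
# Minimal sketch of a fixed-effect meta-regression of study log hazard ratios
# on a study-level covariate (e.g., mean age); all numbers are hypothetical.
import numpy as np
import statsmodels.api as sm

log_hr = np.log(np.array([1.6, 1.8, 2.0, 1.7, 2.2]))   # hypothetical study HRs
var = np.array([0.03, 0.05, 0.02, 0.04, 0.06])          # hypothetical variances
mean_age = np.array([45, 52, 58, 49, 61])               # study-level covariate

X = sm.add_constant(mean_age)
fit = sm.WLS(log_hr, X, weights=1.0 / var).fit()        # inverse-variance weights
print(fit.params)    # slope: change in log HR per year of mean study age
```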
Conclusion
The presence and severity of NAFLD are associated with an increased risk and severity of CKD.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Chronic kidney disease (CKD)—the gradual loss of kidney function—is becoming increasingly common. In the US, for example, more than 10% of the adult population (about 26 million people) and more than 25% of individuals older than 65 years have CKD. Throughout life, the kidneys perform the essential task of filtering waste products (from the normal breakdown of tissues and from food) and excess water from the blood to make urine. CKD gradually destroys the kidneys' filtration units, the rate of blood filtration decreases, and dangerous amounts of waste products build up in the blood. Symptoms of CKD, which rarely occur until the disease is very advanced, include tiredness, swollen feet, and frequent urination, particularly at night. There is no cure for CKD, but progression of the disease can be slowed by controlling high blood pressure and diabetes (two risk factors for CKD), and by adopting a healthy lifestyle. The same interventions also reduce the chances of CKD developing in the first place.
Why Was This Study Done?
CKD is associated with an increased risk of end-stage renal (kidney) disease and of cardiovascular disease. These life-threatening complications are potentially preventable through early identification and treatment of CKD. Because early recognition of CKD has the potential to reduce its health-related burden, the search is on for new modifiable risk factors for CKD. One possible new risk factor is non-alcoholic fatty liver disease (NAFLD), which, like CKD, is becoming increasingly common. Healthy livers contain little or no fat but, in the US, 30% of the general adult population and up to 70% of patients who are obese or have diabetes have some degree of NAFLD, which ranges in severity from simple fatty liver (steatosis), through non-alcoholic steatohepatitis (NASH), to NASH with fibrosis (scarring of the liver) and finally cirrhosis (extensive scarring). In this systematic review and meta-analysis, the researchers investigate whether NAFLD is a risk factor for CKD by looking for an association between the two conditions. A systematic review identifies all the research on a given topic using predefined criteria; meta-analysis uses statistical methods to combine the results of several studies.
What Did the Researchers Do and Find?
The researchers identified 33 studies that assessed NAFLD and CKD in nearly 64,000 participants, including 20 cross-sectional studies in which participants were assessed for NAFLD and CKD at a single time point and 13 longitudinal studies in which participants were assessed for NAFLD and then followed up to see whether they subsequently developed CKD. Meta-analysis of the data from the cross-sectional studies indicated that NAFLD was associated with a 2-fold increased risk of prevalent (pre-existing) CKD (an odds ratio [OR] of 2.12; an OR indicates the chance that an outcome will occur given a particular exposure, compared to the chance of the outcome occurring in the absence of that exposure). Meta-analysis of data from the longitudinal studies indicated that NAFLD was associated with a nearly 2-fold increased risk of incident (new) CKD (a hazard ratio [HR] of 1.79; an HR indicates how often a particular event happens in one group compared to how often it happens in another group, over time). NASH was associated with a higher prevalence and incidence of CKD than simple steatosis. Similarly, advanced fibrosis was associated with a higher prevalence and incidence of CKD than non-advanced fibrosis.
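The summary above explains odds ratios and hazard ratios in words. As a small worked example of the arithmetic behind an odds ratio, the counts below are hypothetical (they are not taken from any of the studies above) and show how an OR and an approximate 95% confidence interval can be derived from a 2×2 table.

```python
# Worked example of an odds ratio from a hypothetical 2x2 table
# (counts are illustrative only; not data from the studies above).
import math

#                                      CKD present  CKD absent
exposed_cases,   exposed_controls   =  120,         380   # participants with NAFLD
unexposed_cases, unexposed_controls =  60,          440   # participants without NAFLD

odds_exposed   = exposed_cases / exposed_controls          # ~0.316
odds_unexposed = unexposed_cases / unexposed_controls      # ~0.136
odds_ratio = odds_exposed / odds_unexposed                 # ~2.32

# Approximate 95% CI on the log-odds-ratio scale (Woolf method)
se_log_or = math.sqrt(1/exposed_cases + 1/exposed_controls
                      + 1/unexposed_cases + 1/unexposed_controls)
ci = (math.exp(math.log(odds_ratio) - 1.96 * se_log_or),
      math.exp(math.log(odds_ratio) + 1.96 * se_log_or))
print(round(odds_ratio, 2), tuple(round(x, 2) for x in ci))
```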
What Do These Findings Mean?
These findings suggest that NAFLD is associated with an increased prevalence and incidence of CKD and that increased severity of liver disease is associated with an increased risk and severity of CKD. Because these associations persist after allowing for established risk factors for CKD, these findings identify NAFLD as an independent CKD risk factor. Certain aspects of the studies included in this meta-analysis (for example, only a few studies used biopsies to diagnose NAFLD; most used less sensitive tests that may have misclassified some individuals with NAFLD as normal) and the methods used in the meta-analysis may limit the accuracy of these findings. Nevertheless, these findings suggest that individuals with NAFLD should be screened for CKD even in the absence of other risk factors for the disease, and that better treatment of NAFLD may help to prevent CKD.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001680.
The US National Kidney and Urologic Diseases Information Clearinghouse provides information about all aspects of kidney disease; the US National Digestive Diseases Information Clearinghouse provides information about non-alcoholic liver disease
The US National Kidney Disease Education Program provides resources to help improve the understanding, detection, and management of kidney disease (in English and Spanish)
The UK National Health Service Choices website provides information for patients on chronic kidney disease, including some personal stories, and information on non-alcoholic fatty liver disease
The US National Kidney Foundation, a not-for-profit organization, provides information about chronic kidney disease (in English and Spanish)
The not-for-profit UK National Kidney Federation provides support and information for patients with kidney disease and for their carers
The British Liver Trust, a not-for-profit organization, provides information about non-alcoholic fatty liver disease, including a patient story
doi:10.1371/journal.pmed.1001680
PMCID: PMC4106719  PMID: 25050550
23.  FibroSURE as a noninvasive marker of liver fibrosis and inflammation in chronic hepatitis B 
BMC Gastroenterology  2014;14:118.
Background
Noninvasive markers of liver fibrosis have not been extensively studied in patients with chronic hepatitis B virus (HBV) infection. Our aim was to evaluate the capacity of FibroSURE, one of the two noninvasive fibrosis indices commercially available in the United States, to identify HBV infected patients with moderate to severe fibrosis.
Methods
Forty-five patients who underwent liver biopsy at a single tertiary care center were prospectively enrolled and had FibroSURE performed within an average interval of 11 days of the biopsy.
Results
Of the 45 patients, 40% were Asian, 40% were African American, and 13% were Caucasian; 27% were co-infected with HIV and 67% had no or mild fibrosis. We found FibroSURE to have moderate capacity to discriminate between patients with moderate to high fibrosis and those with no to mild fibrosis (area under receiver operating characteristic [AUROC] curve = 0.77; 95% confidence interval [CI] [0.61, 0.92]). When we combined the fibrosis score determined by FibroSURE with aspartate aminotransferase (AST) measurements and HIV co-infection status, the discriminatory ability significantly improved, reaching an AUROC of 0.90 (95% CI [0.80, 1.00]). FibroSURE also had a good ability to differentiate patients with no or mild inflammation from those with moderate to high inflammation (AUROC = 0.83; 95% CI [0.71, 0.95]).
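A minimal sketch of how such an AUROC comparison could be set up is shown below, assuming simulated data and a standard logistic model to combine a FibroSURE-like score with AST and HIV status; the variable names, coefficients, and values are hypothetical, not the study's records.

```python
# Minimal sketch: AUROC of a single score vs. a logistic model that adds
# covariates, on simulated data. Everything here is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 300
fibrosure = rng.uniform(0, 1, n)                 # FibroSURE-like score
ast = rng.normal(60, 25, n)                      # AST level (IU/L)
hiv = rng.integers(0, 2, n)                      # HIV co-infection (0/1)
# Simulated "true" moderate-to-high fibrosis status
logit = -4 + 3 * fibrosure + 0.02 * ast + 0.8 * hiv
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# AUROC of the score alone
auc_score_only = roc_auc_score(y, fibrosure)

# AUROC of a combined logistic model (cross-validated probabilities would be
# preferable in practice; in-sample predictions keep the sketch short)
X = np.column_stack([fibrosure, ast, hiv])
model = LogisticRegression().fit(X, y)
auc_combined = roc_auc_score(y, model.predict_proba(X)[:, 1])

print(f"AUROC score only: {auc_score_only:.2f}, combined: {auc_combined:.2f}")
```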
Conclusions
FibroSURE in combination with AST levels has an excellent capacity to identify moderate to high fibrosis stages in chronic HBV-infected patients. These data suggest that FibroSURE may be a useful substitute for liver biopsy in chronic HBV infection.
doi:10.1186/1471-230X-14-118
PMCID: PMC4086988  PMID: 24990385
Hepatitis B virus; Liver biopsy replacement; Liver fibrosis assessment; Liver histology
24.  Fibrosis progression in paired liver biopsies from HIV/HCV co-infected patients 
Hepatitis Monthly  2011;11(7):525-531.
Background
Chronic hepatitis C is more aggressive during HIV infection. Available data about risk factors of liver fibrosis in HIV/HCV co-infected patients derive from studies based on a single liver biopsy.
Objectives
To evaluate the risk factors for liver fibrosis progression (LFP) and to investigate the role of antiretroviral therapy (ARV) in HIV/HCV co-infected patients who underwent paired liver biopsies.
Patients and Methods
We retrospectively studied 58 patients followed at two Infectious Diseases Departments in Northern Italy during the period 1988-2005. All specimens were examined centrally by two pathologists in a double-blind fashion. LFP was defined as an increase of at least one stage in the second biopsy, according to the Ishak-Knodell classification.
Results
In a univariate analysis, serum alanine aminotransferase (ALT) levels > 150 IU/L at the first biopsy (P = 0.02) and a > 20% decrease in CD4+ cell count between the two biopsies (P = 0.007) were significantly associated with LFP. In multivariate analysis, a > 20% decrease in CD4+ cell count remained independently associated with LFP (odds ratio, 3.99; 95% confidence interval, 1.25-12.76; P < 0.02). Survival curve analysis confirmed the correlation between CD4+ cell count and LFP.
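As an illustration of the kind of multivariate analysis described above, the following sketch fits a logistic model on simulated data and reports an adjusted odds ratio with its 95% confidence interval; the variables, coefficients, and sample are hypothetical, not the study's data.

```python
# Minimal sketch of estimating an adjusted odds ratio (with 95% CI) for
# fibrosis progression from a multivariable logistic model, using simulated
# data; variable names and values are hypothetical, not the study's records.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 58
cd4_drop_gt20 = rng.integers(0, 2, n)            # >20% CD4+ decrease (0/1)
alt_gt150 = rng.integers(0, 2, n)                # ALT > 150 IU/L at first biopsy (0/1)
logit = -1.0 + 1.3 * cd4_drop_gt20 + 0.6 * alt_gt150
progression = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([cd4_drop_gt20, alt_gt150]))
fit = sm.Logit(progression, X).fit(disp=False)

or_cd4 = np.exp(fit.params[1])                   # adjusted OR for CD4 decline
ci_cd4 = np.exp(fit.conf_int()[1])               # 95% CI on the OR scale
print(f"Adjusted OR {or_cd4:.2f}, 95% CI {ci_cd4[0]:.2f}-{ci_cd4[1]:.2f}")
```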
Conclusions
Our findings highlight that, in HIV/HCV co-infected patients, effective antiretroviral therapy that ensures a good immuno-virological profile contributes to reducing the risk of LFP.
PMCID: PMC3212761  PMID: 22706343
HIV; HCV; Liver fibrosis; Antiretroviral therapy
25.  Analysis of Effect of Antiviral Therapy on Regression of Liver Fibrosis in Patient with HCV Infection 
Materia Socio-Medica  2014;26(3):172-176.
Background:
HCV infection is characterized by a tendency towards chronicity: acute HCV infection progresses to chronic infection in 70% of cases. Hepatitis C virus infection can cause progressive liver injury and lead to fibrosis and eventually cirrhosis. The degree of histologic fibrosis is an important marker of the stage of the disease. One current standard treatment for chronic hepatitis C (CHC) infection is the combination of PEG-IFN α and ribavirin.
Objectives:
The aim of the study was to investigate the effect of therapy with Peginterferon alfa-2a or alfa-2b plus ribavirin on the evolution of liver fibrosis in patients with chronic hepatitis C. We also aimed to examine whether there was a difference between genders in the efficacy of this antiviral therapy, and to determine the effect of the same therapy on the evolution of liver steatosis in these patients.
Patients and Methods:
A retrospective study was conducted of chronic hepatitis C patients treated from 2005 to April 2014 at the Clinic of Gastroenterohepatology, Clinical Center University of Sarajevo. We reviewed 40 patients' medical records to collect demographic, epidemiological, and clinical information, as well as data on liver biopsies performed before antiviral therapy and FibroScan® tests performed after therapy. Data were processed with SPSS (Statistical Package for the Social Sciences) for Windows, version 21.0. Comparisons between qualitative and quantitative variables were performed using the Student t-test, and the Mann-Whitney U test was used to compare differences in variables such as fibrosis stage and steatosis grade. A value of p < 0.05 was considered statistically significant.
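For readers who want to reproduce this kind of comparison outside SPSS, the sketch below runs the two tests named above (Student t-test and Mann-Whitney U test) using scipy on hypothetical data; the group labels and values are illustrative only and are not drawn from the study.

```python
# Minimal sketch of the two comparisons described above, on hypothetical data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical change in fibrosis stage after therapy for two groups
# (e.g., female vs. male patients); values are illustrative only
delta_group_a = rng.integers(-2, 1, 20)   # stage change, group A
delta_group_b = rng.integers(-1, 1, 20)   # stage change, group B

# Ordinal outcome: Mann-Whitney U test, as in the study's analysis
u_stat, p_mw = stats.mannwhitneyu(delta_group_a, delta_group_b,
                                  alternative="two-sided")

# Continuous outcome (e.g., hypothetical ALT levels): Student t-test
alt_a = rng.normal(90, 30, 20)
alt_b = rng.normal(110, 35, 20)
t_stat, p_t = stats.ttest_ind(alt_a, alt_b)

print(f"Mann-Whitney p = {p_mw:.3f}, t-test p = {p_t:.3f}")
```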
Results:
After treatment, there was a statistically significant increase in the number of patients with no fibrosis (p < 0.05). There was no statistically significant reduction in the number of patients with cirrhosis (F4) (p > 0.05). The decrease in fibrosis was significantly greater in patients with mild-to-moderate fibrosis (F1/F2/F3); patients who were in an advanced stage of fibrosis (F4) before treatment did not show a statistically significant reduction in fibrosis. We found a significant association in the evolution of fibrosis after treatment with PEG-IFN α2a (40 kD) or PEG-IFN α2b (12.5 kD) plus ribavirin (p < 0.05), and likewise a significant association in the evolution of steatosis after the same treatment (p < 0.05). There were statistically significant differences between genders in the qualitative evolution of fibrosis (p < 0.05).
Conclusions:
There was significant regression of fibrosis, especially in patients with mild-to-moderate fibrosis (F1/F2/F3); patients who were in an advanced stage of fibrosis (F4) before treatment did not show a statistically significant reduction in fibrosis after treatment with PEG-IFN α2a (40 kD) or PEG-IFN α2b (12.5 kD) plus ribavirin. Our results also showed significant improvement in steatosis in HCV-infected patients after treatment with PEG-IFN α2a (40 kD) or PEG-IFN α2b (12.5 kD) plus ribavirin. These results provide further evidence for the direct involvement of HCV and antiviral therapy in the pathogenesis of hepatic steatosis. Female gender showed a higher degree of fibrosis reduction.
doi:10.5455/msm.2014.26.172-176
PMCID: PMC4130668  PMID: 25126010
Antiviral Therapy; Regression
