1.  Blood volume-monitored regulation of ultrafiltration in fluid-overloaded hemodialysis patients: study protocol for a randomized controlled trial 
Trials  2012;13:79.
Background
Data generated with the bioimpedance-based body composition monitor (BCM, Fresenius) show that chronic fluid overload in hemodialysis patients is associated with poor survival. However, removing excess fluid by lowering dry weight can be accompanied by intradialytic and postdialytic complications. Here, we aim to test the hypothesis that, in comparison to conventional hemodialysis, blood volume-monitored regulation of ultrafiltration and dialysate conductivity (UCR) and/or regulation of ultrafiltration and temperature (UTR) will decrease complications when ultrafiltration volumes are systematically increased in fluid-overloaded hemodialysis patients.
Methods/design
BCM measurements yield results on fluid overload (in liters), relative to extracellular water (ECW). In this prospective, multicenter, triple-arm, parallel-group, crossover, randomized, controlled clinical trial, we use BCM measurements, routinely introduced in our three maintenance hemodialysis centers shortly prior to the start of the study, to recruit sixty hemodialysis patients with fluid overload (defined as ≥15% ECW). Patients are randomized 1:1:1 into UCR, UTR and conventional hemodialysis groups. BCM-determined, ‘final’ dry weight is set to normohydration weight −7% of ECW postdialysis, and reached by reducing the previous dry weight, in steps of 0.1 kg per 10 kg body weight, during 12 hemodialysis sessions (one study phase). In case of intradialytic complications, dry weight reduction is decreased, according to a prespecified algorithm. A comparison of intra- and post-dialytic complications among study groups constitutes the primary endpoint. In addition, we will assess relative weight reduction, changes in residual renal function, quality of life measures, and predialysis levels of various laboratory parameters including C-reactive protein, troponin T, and N-terminal pro-B-type natriuretic peptide, before and after the first study phase (secondary outcome parameters).
Discussion
Patients are not requested to revert to their initial degree of fluid overload after each study phase. Therefore, the crossover design of the present study merely serves the purpose of secondary endpoint evaluation, for example to determine patient choice of treatment modality. Previous studies on blood volume monitoring have yielded inconsistent results. Since we include only patients with BCM-determined fluid overload, we expect a benefit for all study participants, due to strict fluid management, which decreases the mortality risk of hemodialysis patients.
Trial registration
ClinicalTrials.gov, NCT01416753
doi:10.1186/1745-6215-13-79
PMCID: PMC3493292  PMID: 22682149
Dialysis; Ultrafiltration; Renal dialysis; Fluid shifts; Blood volume; Multicenter study; Randomized controlled trials
2.  Circulatory Mitochondrial DNA Is a Pro-Inflammatory Agent in Maintenance Hemodialysis Patients 
PLoS ONE  2014;9(12):e113179.
Chronic inflammation is highly prevalent in maintenance hemodialysis (MHD) patients, and it has been shown to be a strong predictor of morbidity and mortality. Mitochondrial DNA (mtDNA) released into the circulation after cell damage can promote inflammation in patients and animal models. However, the role and mechanisms of circulatory mtDNA in chronic inflammation in MHD patients remain unknown. Sixty MHD patients and 20 healthy controls were enrolled in this study. Circulatory mtDNA was detected by quantitative real-time PCR. Plasma interleukin 6 (IL-6) and tumor necrosis factor α (TNF-α) were quantitated by ELISA. Dialysis systems in MHD patients and in vitro were used to evaluate the effect of different dialysis patterns on circulatory mtDNA. Circulatory mtDNA was elevated in MHD patients compared with healthy controls. Regression analysis demonstrated that plasma mtDNA was positively associated with TNF-α and the product of serum calcium and phosphorus, and negatively associated with hemoglobin and serum albumin, in MHD patients. MtDNA induced the secretion of IL-6 and TNF-α in THP-1 cells. A single session of high-flux hemodialysis (HF-HD) or online hemodiafiltration (OL-HDF), but not low-flux hemodialysis (LF-HD), partially reduced plasma mtDNA in MHD patients. In vitro, both HD and hemofiltration (HF) partially removed mtDNA. Collectively, circulatory mtDNA is elevated and its level is closely correlated with chronic inflammation in MHD patients. HF-HD and OL-HDF can partially reduce circulatory mtDNA in MHD patients.
doi:10.1371/journal.pone.0113179
PMCID: PMC4259325  PMID: 25485699
3.  Treg/Th17 imbalance is associated with cardiovascular complications in uremic patients undergoing maintenance hemodialysis 
Biomedical Reports  2013;1(3):413-419.
Investigations of Treg/Th17 imbalance associated with cardiovascular complications in hemodialysis are limited. The aim of this study was to examine the association between Treg/Th17 balance and cardiovascular comorbidity in maintenance hemodialysis (MHD). Uremic patients included in the present study were divided into three groups: the WHD group, comprising 30 patients with neither cardiovascular complications nor MHD; the MHD1 group, comprising 36 patients presenting with cardiovascular complications during MHD; and the MHD2 group, comprising 30 patients without cardiovascular complications during MHD. The control group comprised 20 healthy volunteers. Th17 and Treg cells were measured by fluorescence-activated cell sorting (FACS). IL-6 and IL-10 levels were determined by enzyme-linked immunosorbent assay (ELISA). Monocyte surface expression of the costimulatory molecules CD80 and CD86 was assessed by FACS after the monocytes were cocultured with Th17 or Treg cells in the presence or absence of IL-17. Results revealed that the percentage of Th17 cells among total CD4(+) cells was significantly higher in the MHD1 (36.27±9.62%) and WHD (35.98±8.85%) groups compared with the MHD2 (19.64±5.97%) and healthy (1.12±1.52%) groups. Elevated IL-6 levels were observed in the Th17-cell cocultures for the MHD1 and WHD groups, whereas a marked decrease was evident when IL-17 was blocked. However, no significant differences in CD80 and CD86 expression were detected between the MHD groups with and without cardiovascular complications, whereas expression in the uremic subgroups was significantly higher than in the healthy controls. To the best of our knowledge, this is the first study to demonstrate that Treg/Th17 imbalance may be associated with the pathogenesis of cardiovascular complications in uremic patients undergoing hemodialysis, through B7-independent upregulation of IL-6 induced by IL-17.
doi:10.3892/br.2013.63
PMCID: PMC3917002  PMID: 24648960
cardiovascular; Th17 cell; regulatory T cell; inflammatory cytokine; costimulatory molecule
4.  Timing and Completeness of Trial Results Posted at ClinicalTrials.gov and Published in Journals 
PLoS Medicine  2013;10(12):e1001566.
Agnes Dechartres and colleagues searched ClinicalTrials.gov for completed drug RCTs with results reported and then searched for corresponding studies in PubMed to evaluate timeliness and completeness of reporting.
Please see later in the article for the Editors' Summary
Background
The US Food and Drug Administration Amendments Act requires results from clinical trials of Food and Drug Administration–approved drugs to be posted at ClinicalTrials.gov within 1 y after trial completion. We compared the timing and completeness of results of drug trials posted at ClinicalTrials.gov and published in journals.
Methods and Findings
We searched ClinicalTrials.gov on March 27, 2012, for randomized controlled trials of drugs with posted results. For a random sample of these trials, we searched PubMed for corresponding publications. Data were extracted independently from ClinicalTrials.gov and from the published articles for trials with results both posted and published. We assessed the time to first public posting or publishing of results and compared the completeness of results posted at ClinicalTrials.gov versus published in journal articles. Completeness was defined as the reporting of all key elements, according to three experts, for the flow of participants, efficacy results, adverse events, and serious adverse events (e.g., for adverse events, reporting of the number of adverse events per arm, without restriction to statistically significant differences between arms for all randomized patients or for those who received at least one treatment dose).
Among the 600 randomly sampled trials with results posted at ClinicalTrials.gov, 50% (n = 297) had no corresponding published article. For trials with both posted and published results (n = 202), the median time between primary completion date and first public posting of results was 19 mo (first quartile = 14, third quartile = 30 mo), and the median time between primary completion date and journal publication was 21 mo (first quartile = 14, third quartile = 28 mo). Reporting was significantly more complete at ClinicalTrials.gov than in the published article for the flow of participants (64% versus 48% of trials, p<0.001), efficacy results (79% versus 69%, p = 0.02), adverse events (73% versus 45%, p<0.001), and serious adverse events (99% versus 63%, p<0.001).
The main study limitation was that we considered only the publication describing the results for the primary outcomes.
Conclusions
Our results highlight the need to search ClinicalTrials.gov for both unpublished and published trials. Trial results, especially serious adverse events, are more completely reported at ClinicalTrials.gov than in the published article.
Editors' Summary
Background
When patients consult a doctor, they expect to be recommended what their doctor believes is the most effective treatment with the fewest adverse effects. To determine which treatment to recommend, clinicians rely on sources that include research studies. Among studies, the best evidence is generally agreed to come from systematic reviews and randomized controlled clinical trials (RCTs), studies that test the efficacy and safety of medical interventions by comparing clinical outcomes in groups of patients randomly chosen to receive different interventions. Decision-making based on the best available evidence is called evidence-based medicine. However, evidence-based medicine can only guide clinicians if trial results are published in a timely and complete manner. Unfortunately, underreporting of trials is common. For example, an RCT in which a new drug performs better than existing drugs is more likely to be published than one in which the new drug performs badly or has unwanted adverse effects (publication bias). There can also be a delay in publishing the results of negative trials (time-lag bias) or a failure to publish complete results for all the prespecified outcomes of a trial (reporting bias). All three types of bias threaten informed medical decision-making and the health of patients.
Why Was This Study Done?
One initiative that aims to prevent these biases was included in the 2007 US Food and Drug Administration Amendments Act (FDAAA). The Food and Drug Administration (FDA) is responsible for approving drugs and devices that are marketed in the US. The FDAAA requires that results from clinical trials of FDA-approved drugs and devices conducted in the United States be made publicly available at ClinicalTrials.gov within one year of trial completion. ClinicalTrials.gov—a web-based registry that includes US and international clinical trials—was established in 2000 in response to the 1997 FDA Modernization Act, which required mandatory registration of trial titles and designs and of the conditions and interventions under study. The FDAAA expanded these mandatory requirements by requiring researchers studying FDA-approved drugs and devices to report additional information such as the baseline characteristics of the participants in each arm of the trial and the results of primary and secondary outcome measures (the effects of the intervention on predefined clinical measurements) and their statistical significance (an indication of whether differences in outcomes might have happened by chance). Researchers of other trials registered in ClinicalTrials.gov are welcome to post trial results as well. Here, the researchers compare the timing and completeness (i.e., whether all relevant information was fully reported) of results of drug trials posted at ClinicalTrials.gov with those published in medical journals.
What Did the Researchers Do and Find?
The researchers searched ClinicalTrials.gov for reports of completed phase III and IV (late-stage) RCTs of drugs with posted results. For a random sample of 600 eligible trials, they searched PubMed (a database of biomedical publications) for corresponding publications. Only 50% of trials with results posted at ClinicalTrials.gov had a matching published article. For 202 trials with both posted and published results, the researchers compared the timing and completeness of the results posted at ClinicalTrials.gov and of results reported in the corresponding journal publication. The median time between the study completion date and the first results being publicly posted at ClinicalTrials.gov was 19 months, whereas the time between completion and publication in a journal was 21 months. The flow of participants through trials was completely reported in 64% of the ClinicalTrials.gov postings but in only 48% of the corresponding publications. Results for the primary outcome measure were completely reported in 79% and 69% of the ClinicalTrials.gov postings and corresponding publications, respectively. Finally, adverse events were completely reported in 73% of the ClinicalTrials.gov postings but in only 45% of the corresponding publications, and serious adverse events were reported in 99% and 63% of the ClinicalTrials.gov postings and corresponding publications, respectively.
What Do These Findings Mean?
These findings suggest that the reporting of trial results is significantly more complete at ClinicalTrials.gov than in published journal articles reporting the main trial results. Certain aspects of this study may affect the accuracy of this conclusion. For example, the researchers compared the results posted at ClinicalTrials.gov only with the results in the publication that described the primary outcome of each trial, even though some trials had multiple publications. Importantly, these findings suggest that, to enable patients and physicians to make informed treatment decisions, experts undertaking assessments of drugs should consider seeking efficacy and safety data posted at ClinicalTrials.gov, both for trials whose results are not published yet and for trials whose results are published. Moreover, they suggest that the use of templates to guide standardized reporting of trial results in journals and broader mandatory posting of results may help to improve the reporting and transparency of clinical trials and, consequently, the evidence available to inform treatment of patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001566.
Wikipedia has pages on evidence-based medicine and on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The US Food and Drug Administration provides information about drug approval in the US for consumers and health-care professionals, plus detailed information on the 2007 Food and Drug Administration Amendments Act
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials, and a fact sheet detailing the requirements of the 2007 Food and Drug Administration Amendments Act
PLOS Medicine recently launched a Reporting Guidelines Collection, an open access collection of reporting guidelines, commentary, and related research on guidelines from across PLOS journals that aims to help advance the efficiency, effectiveness, and equitability of the dissemination of biomedical information; a 2008 PLOS Medicine editorial discusses the 2007 Food and Drug Administration Amendments Act
doi:10.1371/journal.pmed.1001566
PMCID: PMC3849189  PMID: 24311990
5.  Volume Expansion with Albumin Compared to Gelofusine in Children with Severe Malaria: Results of a Controlled Trial  
PLoS Clinical Trials  2006;1(5):e21.
Objectives:
Previous studies have shown that in children with severe malaria, resuscitation with albumin infusion results in a lower mortality than resuscitation with saline infusion. Whether the apparent benefit of albumin is due solely to its colloidal properties, and thus might also be achieved with other synthetic colloids, or due to the many other unique physiological properties of albumin is unknown. As albumin is costly and not readily available in Africa, examination of more affordable colloids is warranted. In order to inform the design of definitive phase III trials we compared volume expansion with Gelofusine (succinylated modified fluid gelatin 4% intravenous infusion) with albumin.
Design:
This study was a phase II safety and efficacy study.
Setting:
The study was conducted at Kilifi District Hospital, Kenya.
Participants:
The participants were children admitted with severe falciparum malaria (impaired consciousness or deep breathing), metabolic acidosis (base deficit > 8 mmol/l), and clinical features of shock.
Interventions:
The interventions were volume resuscitation with either 4.5% human albumin solution or Gelofusine.
Outcome Measures:
Primary endpoints were the resolution of shock and acidosis; secondary endpoints were in-hospital mortality and adverse events including neurological sequelae.
Results:
A total of 88 children were enrolled: 44 received Gelofusine and 44 received albumin. There was no significant difference in the resolution of shock or acidosis between the groups. Whilst no participant developed pulmonary oedema or fluid overload, fatal neurological events were more common in the group receiving gelatin-based intervention fluids. Mortality was lower in patients receiving albumin (1/44; 2.3%) than in those treated with Gelofusine (7/44; 16%) by intention to treat (Fisher's exact test, p = 0.06), or 1/40 (2.5%) and 4/40 (10%), respectively, for those treated per protocol (p = 0.36). Meta-analysis of published trials to provide a summary estimate of the effect of albumin on mortality showed a pooled relative risk of death with albumin administration of 0.19 (95% confidence interval 0.06–0.59; p = 0.004 compared to other fluid boluses).
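As a rough, illustrative check of the mortality comparison reported above (1/44 deaths with albumin versus 7/44 with Gelofusine by intention to treat), the short Python sketch below recomputes the within-trial crude relative risk and a two-sided Fisher's exact p-value; the variable names and layout are ours, not the trial authors', and the pooled relative risk of 0.19 quoted above comes from their meta-analysis of several trials rather than from this single 2x2 table.

    # Illustrative re-analysis of the reported 2x2 mortality table (not the authors' code)
    from scipy.stats import fisher_exact

    deaths_albumin, n_albumin = 1, 44   # 1/44 deaths (2.3%) with albumin, as reported
    deaths_gelo, n_gelo = 7, 44         # 7/44 deaths (16%) with Gelofusine, as reported

    # 2x2 table: rows = fluid, columns = (died, survived)
    table = [[deaths_albumin, n_albumin - deaths_albumin],
             [deaths_gelo, n_gelo - deaths_gelo]]

    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    relative_risk = (deaths_albumin / n_albumin) / (deaths_gelo / n_gelo)

    print(f"Crude relative risk of death (albumin vs Gelofusine): {relative_risk:.2f}")
    print(f"Two-sided Fisher's exact p-value: {p_value:.3f}")  # should fall near the reported 0.06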
Conclusions:
In children with severe malaria, we have shown a consistent survival benefit of receiving albumin infusion compared to other resuscitation fluids, despite comparable effects on the resolution of acidosis and shock. The lack of similar mortality benefit from Gelofusine suggests that the mechanism may involve a specific neuroprotective effect of albumin, rather than solely the effect of the administered colloid. Further exploration of the benefits of albumin is warranted in larger clinical trials.
Editorial Commentary
Background: In Africa, children admitted to hospital with severe malaria are at high risk of death even though effective malaria treatment is available. Death typically occurs during a narrow time window after admission and before antimalarial treatments can start working. Acidosis (excessive acidity of the blood) is thought to predict death, but it is not clear how acidosis arises. One possibility is that hypovolemia (lowered blood fluid volume) is important, which would normally require urgent resuscitation with fluids. However, there is little evidence on what type of fluid should be given. In the trial reported here, carried out in Kenya's Kilifi District Hospital between 2004 and 2006, 88 children admitted with severe malaria were assigned to receive either albumin solution (a colloid solution made from blood protein) or Gelofusine (a synthetic colloid). The primary outcomes that the researchers were interested in were correction of shock and acidosis in the blood after 8 h. However, the researchers also looked at death rate in hospital and adverse events after treatment.
What this trial shows: The investigators found no significant differences in the primary outcomes (correction of shock and acidosis in the blood 8 h after fluids were started) between children given Gelofusine and those given albumin. However, they did see a difference in death rates between children given Gelofusine and those given albumin. Death rates in hospital were lower in the group given albumin, and this was statistically significant. The researchers then combined the data on death rates from this trial with data from two other trials with an albumin arm. This combined analysis also supported the suggestion that death rates with albumin were lower than with other fluids, either Gelofusine or salt solution.
Strengths and limitations: There is currently very little evidence from trials to guide the initial management of fluids in children with severe malaria. The results from this trial indicate that further research is a priority. However, the actual findings from this trial must be tested in larger trials that recruit enough children to establish reliably whether there is a difference in death rate between albumin treatment and treatment with other fluids. This trial was not originally planned to find a clinically relevant difference in death rate, and therefore does not definitively answer that question. Further trials would also need to use a random method to assign participants to the different treatments, rather than alternate blocks (as in this trial). A random method ensures greater comparability of the two groups in the trial, and reduces the chance of selection bias (where assignment of patients to different treatments can be distorted during the enrollment process).
Contribution to the evidence: This study adds data suggesting that fluid resuscitation with albumin solution, as compared to Gelofusine, may reduce the chance of death in children with severe malaria. However, this finding is not definitive and would need to be examined in further carefully controlled trials. If the finding is supported by further research, then a solution to the problems of high cost and limited availability of albumin will need to be found.
doi:10.1371/journal.pctr.0010021
PMCID: PMC1569382  PMID: 16998584
6.  Percutaneous Vertebroplasty for Treatment of Painful Osteoporotic Vertebral Compression Fractures 
Executive Summary
Objective of Analysis
The objective of this analysis is to examine the safety and effectiveness of percutaneous vertebroplasty for treatment of osteoporotic vertebral compression fractures (VCFs) compared with conservative treatment.
Clinical Need and Target Population
Osteoporosis and associated fractures are important health issues in ageing populations. Vertebral compression fracture secondary to osteoporosis is a cause of morbidity in older adults. VCFs can affect both genders, but are more common among elderly females and can occur as a result of a fall or a minor trauma. The fracture may occur spontaneously during a simple activity such as picking up an object or rising up from a chair. Pain originating from the fracture site frequently increases with weight bearing. It is most severe during the first few weeks and decreases with rest and inactivity.
Traditional treatment of painful VCFs includes bed rest, analgesic use, back bracing and muscle relaxants. The comorbidities associated with VCFs include deep venous thrombosis, acceleration of osteopenia, loss of height, respiratory problems and emotional problems due to chronic pain.
Percutaneous vertebroplasty is a minimally invasive surgical procedure that has gained popularity as a new treatment option in the care for these patients. The technique of vertebroplasty was initially developed in France to treat osteolytic metastasis, myeloma, and hemangioma. The indications were further expanded to painful osteoporotic VCFs and subsequently to treatment of asymptomatic VCFs.
The mechanism of pain relief, which occurs within minutes to hours after vertebroplasty, is still not known. Pain pathways in the surrounding tissue appear to be altered in response to mechanical, chemical, vascular, and thermal stimuli after the injection of the cement. It has been suggested that mechanisms other than mechanical stabilization of the fracture, such as thermal injury to the nerve endings, result in immediate pain relief.
Percutaneous Vertebroplasty
Percutaneous vertebroplasty is performed with the patient in prone position and under local or general anesthesia. The procedure involves fluoroscopic imaging to guide the injection of bone cement into the fractured vertebral body to support the fractured bone. After injection of the cement, the patient is placed in supine position for about 1 hour while the cement hardens.
Cement leakage is the most frequent complication of vertebroplasty. The leakages may remain asymptomatic or cause symptoms of nerve irritation through compression of nerve roots. There are several reports of pulmonary cement embolism (PCE) following vertebroplasty. In some cases, the PCE may remain asymptomatic. Symptomatic PCE can be recognized by clinical signs and symptoms such as chest pain, dyspnea, tachypnea, cyanosis, coughing, hemoptysis, dizziness, and sweating.
Research Methods
Literature Search
A literature search was performed on February 9, 2010 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Network of Agencies for Health Technology Assessment (INAHTA) for studies published from January 1, 2005 to February 9, 2010.
Studies were initially reviewed by titles and abstracts. For those studies meeting the eligibility criteria, full-text articles were obtained and reviewed. Reference lists were also examined for any additional relevant studies not identified through the search. Articles with an unknown eligibility were reviewed with a second clinical epidemiologist and then a group of epidemiologists until consensus was established. Data extraction was carried out by the author.
Inclusion Criteria
Study design: Randomized controlled trials (RCTs) comparing vertebroplasty with a control group or other interventions
Study population: Adult patients with osteoporotic vertebral fractures
Study sample size: Studies included 20 or more patients
English language full-reports
Published between Jan 1 2005 and Feb 9, 2010
(eligible studies identified through the Auto Alert function of the search were also included)
Exclusion Criteria
Non-randomized studies
Studies on conditions other than VCF (e.g. patients with multiple myeloma or metastatic tumors)
Studies focused on surgical techniques
Studies lacking outcome measures
Results of Evidence-Based Analysis
A systematic search yielded 168 citations. The titles and the abstracts of the citations were reviewed and full text of the identified citations was retrieved for further consideration. Upon review of the full publications and applying the inclusion and exclusion criteria, 5 RCTs were identified. Of these, two compared vertebroplasty with sham procedure, two compared vertebroplasty with conservative treatment, and one compared vertebroplasty with balloon kyphoplasty.
Randomized Controlled Trials
Recently, the results of two blinded randomized placebo-controlled trials of percutaneous vertebroplasty were reported. These trials, providing the highest quality of evidence available to date, do not support the use of vertebroplasty in patients with painful osteoporotic vertebral compression fractures. Based on the results of these trials, vertebroplasty offers no additional benefit over usual care and is not risk free.
In these trials the treatment allocation was blinded to the patients and outcome assessors. The control group received a sham procedure simulating vertebroplasty to minimize the effect of expectations and to reduce the potential for bias in self-reporting of outcomes. Both trials applied stringent exclusion criteria so that the results are generalizable to the patient populations that are candidates for vertebroplasty. In both trials vertebroplasty procedures were performed by highly skilled interventionists. Multiple valid outcome measures including pain, physical, mental, and social function were employed to test the between group differences in outcomes.
Prior to these two trials, there were two open randomized trials in which vertebroplasty was compared with conservative medical treatment. In the first randomized trial, patients were allowed to cross over to the other arm, and the trial had to be stopped after two weeks because of the high number of patients crossing over. The other study did not allow crossover and recently published the results of 12 months of follow-up.
The following is the summary of the results of these 4 trials:
Two blinded RCTs on vertebroplasty provide the highest level of evidence available to date. Results of these two trials are supported by findings of an open randomized trial with 12 months follow-up. Blinded RCTs showed:
No significant differences in pain scores between patients who received vertebroplasty and patients who received a sham procedure, as measured at 3 days, 2 weeks, and 1 month in one study and at 1 week, 1 month, 3 months, and 6 months in the other.
The observed differences in pain scores between the two groups were neither statistically significant nor clinically important at any time points.
The above findings were consistent with the findings of an open RCT in which patients were followed for 12 months. This study showed that improvement in pain was similar between the two groups at 3 months and was sustained to 12 months.
In the blinded RCTs, physical, mental, and social functioning were measured at the above time points using 4-5 of the following 7 instruments: RDQ, EQ-5D, SF-36 PCS, SF-36 MCS, AQoL, QUALEFFO, SOF-ADL
There were no significant differences in any of these measures between patients who received vertebroplasty and patients who received a sham procedure at any of the above time points (with a few exceptions in favour of control intervention).
These findings were also consistent with the findings of an open RCT which demonstrated no significant between-group differences in scores on the EQ-5D, SF-36 PCS, SF-36 MCS, DPQ, Barthel, and MMSE, which measure physical, mental, and social functioning (with a few exceptions in favour of the control intervention).
One small (n=34) open RCT with a two-week follow-up detected a significantly higher improvement in pain scores at 1 day after the intervention in the vertebroplasty group compared with the conservative treatment group. However, at 2 weeks of follow-up, this difference was smaller and was not statistically significant.
Conservative treatment was associated with fewer clinically important complications.
Risk of new VCFs following vertebroplasty was higher than with conservative treatment, but this requires further investigation.
PMCID: PMC3377535  PMID: 23074396
7.  Short-Term Efficacy of Rofecoxib and Diclofenac in Acute Shoulder Pain: A Placebo-Controlled Randomized Trial 
PLoS Clinical Trials  2007;2(3):e9.
Objectives:
To evaluate the short-term symptomatic efficacy of rofecoxib and diclofenac versus placebo in acute episodes of shoulder pain.
Design:
Randomized controlled trial of 7 days.
Setting:
A total of 47 rheumatologists and/or general practitioners.
Participants:
Patients with acute shoulder pain.
Interventions:
Rofecoxib 50 mg once daily, diclofenac 50 mg three times daily, and placebo.
Outcome measures:
Pain, functional impairment, patient's global assessment of his/her disease activity, and local steroid injection requirement for persistent pain. The primary variable was the Kaplan-Meier estimate of the percentage of patients at day 7 fulfilling the definition of success (improvement in pain intensity and a low pain level sustained to the end of the 7 days of the study; log-rank test).
Results:
There was no difference in the baseline characteristics between the three groups (rofecoxib n = 88, placebo n = 94, and diclofenac n = 89). At day 7, the Kaplan-Meier estimate of the percentage of successful patients was higher in the treatment groups than in the placebo group (54%, 56%, and 38% in the diclofenac, rofecoxib, and placebo groups, respectively; p = 0.0070 and p = 0.0239 for placebo versus rofecoxib and diclofenac, respectively). During the 7 days of the study, there was a statistically significant difference between placebo and both active arms (rofecoxib and diclofenac) in all the evaluated outcome measures. A local steroid injection had to be performed in 33 (35%) and 19 (22%) patients in the placebo and rofecoxib groups, respectively. The number needed to treat to avoid such rescue therapy was 7 patients (95% confidence interval 5–15).
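As a back-of-envelope check of the number needed to treat reported above, the snippet below recomputes it from the quoted injection counts (33/94 in the placebo arm versus 19/88 in the rofecoxib arm); this arithmetic is ours, and small discrepancies with the published value of 7 (95% CI 5–15) reflect rounding and the exact estimates the authors used.

    # NNT check from the reported steroid-injection counts (our arithmetic, not the authors')
    placebo_rate = 33 / 94               # injection rate in the placebo arm (~0.351)
    rofecoxib_rate = 19 / 88             # injection rate in the rofecoxib arm (~0.216)
    arr = placebo_rate - rofecoxib_rate  # absolute risk reduction, ~0.135
    nnt = 1 / arr                        # ~7.4, broadly consistent with the reported NNT of 7
    print(f"ARR = {arr:.3f}, NNT = {nnt:.1f}")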
Conclusion:
This study highlights the methodological aspects of clinical trials, e.g., eligibility criteria and outcome measures, in acute painful conditions. The data also establish that diclofenac and rofecoxib are effective therapies for the management of acute painful shoulder and that they reduce the requirement for local steroid injection.
Editorial Commentary
Background: Shoulder pain is a very common complaint that presents in primary care, and there are many different possible causes. Acute pain would normally be managed with nonsteroidal anti-inflammatory drugs (NSAIDs), supplemented with steroid injections (which are often reserved for the treatment of severe or persistent pain). One NSAID, diclofenac, is used frequently for this condition, but other NSAIDs might also be effective. A subgroup of NSAIDs called the Cox-2 selective inhibitors specifically inhibit one particular enzyme (cyclo-oxygenase, shortened to Cox-2) which is involved in inflammation and pain. These drugs are thought to be less likely to cause stomach irritation than other NSAIDs. Therefore the researchers in this study carried out a short-term, three-way clinical trial comparing diclofenac with one particular Cox-2 inhibitor, rofecoxib, and placebo in patients with acute shoulder pain. However, rofecoxib was withdrawn from the market in September 2004 because of evidence that use of the drug was associated with an increased risk of heart attacks and strokes, and controversy remains regarding the risk of such events among users of other Cox-2 inhibitors.
What this trial shows: The main aim of this trial was to compare the level of pain relief over seven days of treatment with either diclofenac or rofecoxib, as compared to placebo. The primary outcome measure used in the trial was the proportion of patients achieving a 50% or greater decrease in pain levels over the course of the study, measured using a numerical rating scale. A total of 273 participants were recruited into the trial and at day 7 the proportion achieving a 30% decrease in pain was 38% in the placebo arm, 54% in the diclofenac arm, and 56% in the rofecoxib arm. The differences in this outcome measure between diclofenac and placebo and between rofecoxib and placebo were statistically significant; however, the researchers did not carry out a direct comparison between diclofenac and rofecoxib. The rates of adverse events were roughly comparable between all three arms of the trial, although the study was not originally planned to be large enough to detect differences in the rates of such events, so it is not possible to conclude whether there was any true difference.
Strengths and limitations: The randomization procedures used in the study minimize the possibility of bias in assigning patients to treatment arms. Bias in assessment of outcomes was also minimized by ensuring that steps were taken to prevent investigators and patients from knowing which drugs a particular patient received until the end of the trial. A key limitation of the study is the short follow-up, only seven days, and it is therefore unclear whether efficacy and safety of these drugs would continue for the much longer periods of time (weeks or even months) for which these patients might need pain relief. Finally, patients randomized to the placebo arm received no treatment for the seven days of the study other than acetaminophen or steroid injections (which would result in withdrawal from the trial). This design does not limit interpretation of the data but could be criticized because of concern over whether the patients receiving placebo received adequate pain relief.
Contribution to the evidence: This study provides some data on the efficacy of diclofenac and rofecoxib, as compared to placebo in treatment of this condition. Given that rofecoxib is now withdrawn, the efficacy of this drug is no longer relevant. However, the information from this trial should help in designing future studies of NSAIDs in shoulder pain, for example to define appropriate trial outcomes, sample size, and other aspects of study design.
doi:10.1371/journal.pctr.0020009
PMCID: PMC1817652  PMID: 17347681
8.  Anti-Inflammatory and Anti-Oxidative Nutrition in Hypoalbuminemic Dialysis Patients (AIONID) study: results of the pilot-feasibility, double-blind, randomized, placebo-controlled trial 
Background
Low serum albumin is common and associated with protein-energy wasting, inflammation, and poor outcomes in maintenance hemodialysis (MHD) patients. We hypothesized that in-center (in dialysis clinic) provision of high-protein oral nutrition supplements (ONS) tailored for MHD patients combined with anti-oxidants and anti-inflammatory ingredients with or without an anti-inflammatory appetite stimulator (pentoxifylline, PTX) is well tolerated and can improve serum albumin concentration.
Methods
Between January 2008 and June 2010, 84 adult hypoalbuminemic (albumin <4.0 g/dL) MHD outpatients were double-blindly randomized to receive 16 weeks of interventions including ONS, PTX, ONS with PTX, or placebos. Nutritional and inflammatory markers were compared between the four groups.
Results
Out of 84 subjects (mean ± SD: age, 59 ± 12 years; dialysis vintage, 34 ± 34 months), 32% were Black, 54% were female, and 68% were diabetic. ONS, PTX, ONS plus PTX, and placebo were associated with an average change in serum albumin of +0.21 (P = 0.004), +0.14 (P = 0.008), +0.18 (P = 0.001), and +0.03 g/dL (P = 0.59), respectively. No related serious adverse events were observed. In a predetermined intention-to-treat regression analysis modeling post-trial serum albumin as a function of pre-trial albumin and the three different interventions (reference = placebo), only ONS without PTX was associated with a significant albumin rise (+0.17 ± 0.07 g/dL, P = 0.018).
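A minimal sketch of the kind of intention-to-treat regression described above (post-trial albumin modeled on pre-trial albumin plus treatment arm, with placebo as the reference) is shown below; it uses statsmodels on an entirely hypothetical data frame, so the column names and numbers are ours and not the AIONID data.

    # ANCOVA-style intention-to-treat model sketch (hypothetical data, not the AIONID data set)
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "albumin_pre":  [3.6, 3.5, 3.8, 3.4, 3.7, 3.6, 3.3, 3.9],   # hypothetical g/dL
        "albumin_post": [3.8, 3.6, 3.9, 3.5, 3.9, 3.7, 3.4, 4.0],
        "arm": ["ONS", "PTX", "ONS+PTX", "placebo", "ONS", "PTX", "ONS+PTX", "placebo"],
    })

    # Placebo is the reference level, as in the trial's predefined analysis
    model = smf.ols(
        "albumin_post ~ albumin_pre + C(arm, Treatment(reference='placebo'))",
        data=df,
    ).fit()
    print(model.summary())  # the arm coefficients estimate the albumin change attributable to each intervention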
Conclusions
In this pilot-feasibility, 2 × 2 factorial, placebo-controlled trial, daily intake of a CKD-specific high-protein ONS with anti-inflammatory and anti-oxidative ingredients for up to 16 weeks was well tolerated and associated with a slight but significant increase in serum albumin levels. Larger long-term controlled trials to examine hard outcomes are indicated.
Electronic supplementary material
The online version of this article (doi:10.1007/s13539-013-0115-9) contains supplementary material.
doi:10.1007/s13539-013-0115-9
PMCID: PMC3830006  PMID: 24052226
Albumin; Hypoalbuminemia; Inflammation; Protein intake; Hemodialysis; Oral nutrition supplements; Anti-oxidant ingredients; Anti-inflammatory ingredients
9.  Intensive Case Management Before and After Prison Release is No More Effective Than Comprehensive Pre-Release Discharge Planning in Linking HIV-Infected Prisoners to Care: A Randomized Trial 
AIDS and behavior  2011;15(2):356-364.
Imprisonment provides opportunities for the diagnosis and successful treatment of HIV; however, the benefits of antiretroviral therapy are frequently lost following release due to suboptimal access to and utilization of health care and services. In response, some have advocated for the development of intensive case-management interventions spanning incarceration and release to support treatment adherence and community re-entry for HIV-infected releasees. We conducted a randomized controlled trial of a motivational Strengths Model bridging case management intervention (BCM), beginning approximately 3 months prior to and continuing 6 months after release, versus a standard-of-care prison-administered discharge planning program (SOC) for HIV-infected state prison inmates. The primary outcome variable was self-reported access to post-release medical care. Of the 104 inmates enrolled, 89 had at least 1 post-release study visit. Of these, 65.1% of BCM-assigned and 54.4% of SOC-assigned participants attended a routine medical appointment within 4 weeks of release (P > 0.3). By week 12 post-release, 88.4% of the BCM arm and 78.3% of the SOC arm had attended at least one medical appointment (P = 0.2), increasing in both arms by week 24 (90.7% with BCM and 89.1% with SOC, P > 0.5). No participant without a routine medical visit by week 24 attended an appointment from weeks 24 to 48. The mean number of clinic visits during the 48 weeks post-release was 5.23 (SD = 3.14) for BCM and 4.07 (SD = 3.20) for SOC (P > 0.5). There were no significant differences between arms in social service utilization, and re-incarceration rates were also similar. We found that a case management intervention bridging incarceration and release was no more effective than a less intensive pre-release discharge planning program in supporting health and social service utilization for HIV-infected individuals released from prison.
doi:10.1007/s10461-010-9843-4
PMCID: PMC3532052  PMID: 21042930
Prisoners; Access to care; Case management
10.  Administered Paricalcitol Dose and Survival in Hemodialysis Patients: A Marginal Structural Model Analysis 
Pharmacoepidemiology and drug safety  2012;21(11):1232-1239.
Purpose
Several observational studies have indicated that vitamin D receptor activators (VDRA), including paricalcitol, are associated with greater survival in maintenance hemodialysis (MHD) patients. However, patients with higher serum parathyroid hormone (PTH), a surrogate of higher death risk, are usually given higher VDRA doses, which can lead to confounding by indication and attenuate the expected survival advantage of high VDRA doses.
Methods
We examined mortality-predictability of low (>1 but <10 μg/week) versus high (≥10 μg/week) dose of administered paricalcitol over time in a contemporary cohort of 15,442 MHD patients (age 64±15 years, 55% men, 44% diabetes, 35% African Americans) from all DaVita dialysis clinics across the USA (7/2001-6/2006 with survival follow-ups until 6/2007) using conventional Cox regression, propensity score (PS) matching, and marginal structural model (MSM) analyses.
Results
In our conventional Cox models and PS-matching models, a low dose of paricalcitol was not associated with mortality in either baseline (HR 1.03, 95% CI 0.97-1.09, and HR 0.99, 95% CI 0.86-1.14, respectively) or time-dependent (HR 1.04, 95% CI 0.98-1.10, and HR 1.12, 95% CI 0.98-1.28, respectively) models. In contrast, compared with a high dose of paricalcitol, a low dose was associated with a 26% higher risk of mortality (HR 1.26, 95% CI 1.19-1.35) in the MSM. The association between paricalcitol dose and mortality was robust in almost all patient subgroups using MSMs.
Conclusions
A higher dose of paricalcitol appears causally associated with greater survival in MHD patients. Randomized controlled trials to verify the survival effect of paricalcitol dose in maintenance hemodialysis patients are indicated.
doi:10.1002/pds.3349
PMCID: PMC4304639  PMID: 22996597
Mortality; propensity score; marginal structural model; mineral and bone disorders; chronic kidney disease; paricalcitol
11.  Randomised trials of human albumin for adults with sepsis: systematic review and meta-analysis with trial sequential analysis of all-cause mortality 
Objective To assess the efficacy and safety of pooled human albumin solutions as part of fluid volume expansion and resuscitation (with or without improvement of baseline hypoalbuminaemia) in critically unwell adults with sepsis of any severity.
Design Systematic review and meta-analysis of randomised clinical trials, with trial sequential analysis, subgroup, and meta-regression analyses.
Data sources PubMed, PubMed Central, Web of Science (includes Medline, Conference Proceedings Citation Index, Data Citation Index, Chinese Science Citation Database, CAB abstracts, Derwent Innovations Index), OvidSP (includes Embase, Ovid Medline, HMIC, PsycINFO, Maternity and Infant Care, Transport Database), Cochrane Library, clinicaltrials.gov, controlled-trials.com, online material, relevant conference proceedings, hand searching of reference lists, and contact with authors as necessary.
Eligibility criteria Prospective randomised clinical trials of adults with sepsis of any severity (with or without baseline hypoalbuminaemia) in critical or intensive care who received pooled human albumin solutions as part of fluid volume expansion and resuscitation (with or without improvement of hypoalbuminaemia) compared with those who received control fluids (crystalloid or colloid), were included if all-cause mortality outcome data were available. No restriction of language, date, publication status, or primary study endpoint was applied.
Data extraction Two reviewers independently assessed articles for inclusion, extracted data to assess risk of bias, trial methods, patients, interventions, comparisons, and outcome. The relative risk of all-cause mortality was calculated using a random effects model accounting for clinical heterogeneity.
Primary outcome measure All-cause mortality at final follow-up.
Results Eighteen articles reporting on 16 primary clinical trials, which included 4190 adults in critical or intensive care with sepsis, severe sepsis, or septic shock, were analysed. A median of 70.0 g daily of pooled human albumin was received over a median of 3 days by adults with a median age of 60.8 years as part of fluid volume expansion and resuscitation, with or without correction of hypoalbuminaemia. The relative risk of death was similar between albumin groups (that received a median of 175 g in total) and control fluid groups (relative risk 0.94; 95% confidence interval 0.87 to 1.01; P=0.11; I2=0%). Trial sequential analysis corrected the 95% confidence interval for random error (0.85 to 1.02; D2=0%). Eighty-eight per cent of the required information size (meta-analysis sample size) of 4894 patients was achieved, and the cumulative effect size measure (z score) entered the futility area, supporting the notion of no relative benefit of albumin (GRADE quality of evidence was moderate). Evidence of no difference was also found when albumin was compared with crystalloid fluid (relative risk 0.93; 0.86 to 1.01; P=0.07; I2=0%) in 3878 patients (GRADE quality of evidence was high; 79.9% of required information size) or with colloid fluids in 299 patients (relative risk 1.04; 0.79 to 1.38; P=0.76; I2=0%) (GRADE quality of evidence was very low; 5.8% of required information size). When studies at high risk of bias were excluded in a predefined subgroup analysis, the finding of no mortality benefit remained, and the cumulative z score was just outside the boundary of futility. Overall, the meta-analysis was robust to sensitivity, subgroup, meta-regression, and trial sequential analyses.
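For readers unfamiliar with the pooling step behind figures such as the relative risk of 0.94 (0.87 to 1.01) above, the sketch below implements standard DerSimonian-Laird random effects pooling of log relative risks; the per-trial event counts are invented placeholders, so the output will not reproduce the review's numbers.

    # DerSimonian-Laird random effects pooling of log relative risks
    # (hypothetical per-trial counts, not the trials included in this review)
    import numpy as np

    # (events_albumin, n_albumin, events_control, n_control) for each trial -- placeholder data
    trials = [(30, 100, 35, 100), (120, 400, 130, 400), (50, 200, 48, 200)]

    y, v = [], []                                  # log relative risks and their variances
    for a, n1, c, n2 in trials:
        y.append(np.log((a / n1) / (c / n2)))
        v.append(1 / a - 1 / n1 + 1 / c - 1 / n2)  # approximate variance of a log relative risk
    y, v = np.array(y), np.array(v)

    w = 1 / v                                      # fixed effect (inverse variance) weights
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)             # Cochran's Q heterogeneity statistic
    tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

    w_re = 1 / (v + tau2)                          # random effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    low, high = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)
    print(f"Pooled RR = {np.exp(pooled):.2f} (95% CI {low:.2f} to {high:.2f}), tau^2 = {tau2:.3f}")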
Conclusions In this analysis, human albumin solutions as part of fluid volume expansion and resuscitation for critically unwell adults with sepsis of any severity (with or without baseline hypoalbuminaemia) were not robustly effective at reducing all-cause mortality. Albumin seems to be safe in this setting, as a signal towards harm was not detected, but this analysis does not support a recommendation for use.
doi:10.1136/bmj.g4561
PMCID: PMC4106199  PMID: 25099709
12.  Association of Serum Phosphorus Concentration with Mortality in Elderly and Non-Elderly Hemodialysis Patients 
Objective
Hypo- and hyperphosphatemia have each been associated with increased mortality in maintenance hemodialysis (MHD) patients. There has not been previous evaluation of a differential relationship between serum phosphorus level and death risk across varying age groups in MHD patients.
Design and Settings
In a 6-year cohort of 107,817 MHD patients treated in a large dialysis organization, we examined the association of serum phosphorus levels with all-cause and cardiovascular mortality within 5 age categories (15-<45, 45-<65, 65-<70, 70-<75 and ≥75 years old) using Cox proportional hazards models adjusted for case-mix covariates and malnutrition-inflammation complex syndrome (MICS) surrogates.
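The Cox proportional hazards analysis described above can be illustrated with the lifelines package; the snippet below fits such a model to a small synthetic data set, and the covariates, follow-up times, and event indicators are hypothetical stand-ins rather than anything drawn from the 107,817-patient cohort.

    # Cox proportional hazards sketch with lifelines (synthetic data, not the study cohort)
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "phosphorus": rng.normal(5.5, 1.2, n),   # hypothetical serum phosphorus, mg/dl
        "age":        rng.uniform(15, 90, n),    # hypothetical age, years
        "diabetes":   rng.integers(0, 2, n),     # hypothetical case-mix covariate
    })
    df["time_years"] = rng.exponential(4.0, n)   # hypothetical follow-up time
    df["died"] = rng.integers(0, 2, n)           # hypothetical death indicator

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time_years", event_col="died")
    cph.print_summary()  # the exp(coef) column gives hazard ratios per unit of each covariate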
Main outcome measure
All-cause and cardiovascular mortality.
Results
The overall mean age of the cohort was 60±16 years; 45% were women, 35% were Black, and 58% were diabetic. The time-averaged serum phosphorus level (mean ± SD) within each age category was 6.26±1.4, 5.65±1.2, 5.26±1.1, 5.11±1.0 and 4.88±1.0 mg/dl, respectively (P for trend <0.001). Hyperphosphatemia (>5.5 mg/dl) was consistently associated with increased all-cause and cardiovascular mortality risk across all age categories, including after adjustment for case-mix and MICS-related covariates. In fully adjusted models, a low serum phosphorus level (<3.5 mg/dl) was associated with increased all-cause mortality only in elderly MHD patients ≥65 years old (hazard ratio [95% confidence interval]: 1.21 [1.07-1.37], 1.13 [1.02-1.25], and 1.28 [1.20-1.37] for patients 65-<70, 70-<75, and ≥75 years old, respectively), but not in younger patients (<65 years old). A similar differential association between serum phosphorus levels and cardiovascular mortality across older and younger age groups was observed.
Conclusions
The association between hyperphosphatemia and mortality is similar across all age groups of MHD patients, whereas hypophosphatemia is associated with increased mortality only in elderly MHD patients. Preventing very low serum phosphorus levels in elderly dialysis patients may be associated with better outcomes, which needs to be examined in future studies.
doi:10.1053/j.jrn.2013.01.018
PMCID: PMC3735629  PMID: 23631888
hemodialysis; mortality; phosphorus; elderly
13.  Current status of maintenance hemodialysis in Beijing, China 
Kidney International Supplements  2013;3(2):167-169.
The Beijing Hemodialysis Quality Control and Improvement Center started patient data collection in 2007. We report here the trends in incidence, prevalence, and mortality of end-stage renal disease (ESRD) patients on maintenance hemodialysis (MHD). The incidence increased from 94 per million population in 2007 to 147.3 per million population in 2010. The leading cause of ESRD changed from chronic glomerulonephritis (32.1%) to diabetes (40.1%). The point prevalence of MHD at the end of 2006 was 269 per million population and gradually increased to 509 per million population by the end of 2010. The leading cause of ESRD in 2010 prevalent patients was chronic nephritis (33.9%), followed by diabetes (29.5%). The annual mortality varied from 7.4% to 9.0%. Older and diabetic patients had higher mortality. The 2010 prevalent MHD patients achieved KDOQI targets for hemoglobin, calcium, phosphate, and intact parathyroid hormone at rates comparable to other DOPPS (Dialysis Outcomes and Practice Patterns Study) countries; Beijing MHD patients had a relatively higher albumin level.
doi:10.1038/kisup.2013.6
PMCID: PMC4089754  PMID: 25018982
epidemiology; hemodialysis; incidence; mortality; prevalence
14.  Factors Associated with Findings of Published Trials of Drug–Drug Comparisons: Why Some Statins Appear More Efficacious than Others 
PLoS Medicine  2007;4(6):e184.
Background
Published pharmaceutical industry–sponsored trials are more likely than non-industry-sponsored trials to report results and conclusions that favor drug over placebo. Little is known about potential biases in drug–drug comparisons. This study examined associations between research funding source, study design characteristics aimed at reducing bias, and other factors that potentially influence results and conclusions in randomized controlled trials (RCTs) of statin–drug comparisons.
Methods and Findings
This is a cross-sectional study of 192 published RCTs comparing a statin drug to another statin drug or non-statin drug. Data on concealment of allocation, selection bias, blinding, sample size, disclosed funding source, financial ties of authors, results for primary outcomes, and author conclusions were extracted by two coders (weighted kappa 0.80 to 0.97). Univariate and multivariate logistic regression identified associations between independent variables and favorable results and conclusions. Of the RCTs, 50% (95/192) were funded by industry, and 37% (70/192) did not disclose any funding source. Looking at the totality of available evidence, we found that almost all studies (98%, 189/192) used only surrogate outcome measures. Moreover, study design weaknesses common to published statin–drug comparisons included inadequate blinding, lack of concealment of allocation, poor follow-up, and lack of intention-to-treat analyses. In multivariate analysis of the full sample, trials with adequate blinding were less likely to report results favoring the test drug, and sample size was associated with favorable conclusions when controlling for other factors. In multivariate analysis of industry-funded RCTs, funding from the test drug company was associated with results (odds ratio = 20.16 [95% confidence interval 4.37–92.98], p < 0.001) and conclusions (odds ratio = 34.55 [95% confidence interval 7.09–168.4], p < 0.001) that favor the test drug when controlling for other factors. Studies with adequate blinding were less likely to report statistically significant results favoring the test drug.
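The odds ratios quoted above come from multivariate logistic regression; as a generic illustration of how such an odds ratio and confidence interval are obtained, the sketch below fits a logistic model with statsmodels on fabricated data and exponentiates the coefficients. None of the variables or numbers correspond to the actual study data set.

    # Generic logistic regression odds ratio sketch (fabricated data, not the statin-trial data set)
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 192
    df = pd.DataFrame({
        "favorable":         rng.integers(0, 2, n),  # 1 = results favored the test drug (hypothetical)
        "test_drug_funder":  rng.integers(0, 2, n),  # 1 = funded by the test drug's maker (hypothetical)
        "adequate_blinding": rng.integers(0, 2, n),  # 1 = adequate blinding (hypothetical)
    })

    model = smf.logit("favorable ~ test_drug_funder + adequate_blinding", data=df).fit(disp=0)
    odds_ratios = np.exp(model.params)   # exponentiated coefficients are odds ratios
    conf_int = np.exp(model.conf_int())  # 95% confidence intervals on the odds ratio scale
    print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))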
Conclusions
RCTs of head-to-head comparisons of statins with other drugs are more likely to report results and conclusions favoring the sponsor's product compared to the comparator drug. This bias in drug–drug comparison trials should be considered when making decisions regarding drug choice.
Lisa Bero and colleagues found published trials comparing one statin with another were more likely to report results and conclusions favoring the sponsor's product than the comparison drug.
Editors' Summary
Background.
Randomized controlled trials are generally considered to be the most reliable type of experimental study for evaluating the effectiveness of different treatments. Randomization involves the assignment of participants in the trial to different treatment groups by the play of chance. Properly done, this procedure means that the different groups are comparable at outset, reducing the chance that outside factors could be responsible for treatment effects seen in the trial. When done properly, randomization also ensures that the clinicians recruiting participants into the trial cannot know the treatment group to which a patient will end up being assigned. However, despite these advantages, a large number of factors can still result in bias creeping in. Bias comes about when the findings of research appear to differ in some systematic way from the true result. Other research studies have suggested that funding is a source of bias; studies sponsored by drug companies seem to more often favor the sponsor's drug than trials not sponsored by drug companies.
Why Was This Study Done?
The researchers wanted to more precisely understand the impact of different possible sources of bias in the findings of randomized controlled trials. In particular, they wanted to study the outcomes of “head-to-head” drug comparison studies for one particular class of drugs, the statins. Drugs in this class are commonly prescribed to reduce the levels of cholesterol in blood amongst people who are at risk of heart and other types of disease. This drug class is a good example for studying the role of bias in drug–drug comparison trials, because these trials are extensively used in decision making by health-policy makers.
What Did the Researchers Do and Find?
This research study was based on searching PubMed, a biomedical literature database, with the aim of finding all randomized controlled trials of statins carried out between January 1999 and May 2005 (reference lists also were searched). Only trials which compared one statin to another statin or one statin to another type of drug were included. The researchers extracted the following information from each article: the study's source of funding, aspects of study design, the overall results, and the authors' conclusions. The results were categorized to show whether the findings were favorable to the test drug (the newer statin), inconclusive, or not favorable to the test drug. Aspects of each study's design were also categorized in relation to various features, such as how well the randomization was done (in particular, the degree to which the processes used would have prevented physicians from knowing which treatment a patient was likely to receive on enrollment); whether all participants enrolled in the trial were eventually analyzed; and whether investigators or participants knew what treatment an individual was receiving.
One hundred and ninety-two trials were included in this study, and of these, 95 declared drug company funding; 23 declared government or other nonprofit funding while 74 did not declare funding or were not funded. Trials that were properly blinded (where participants and investigators did not know what treatment an individual received) were less likely to have conclusions favoring the test drug. However, large trials were more likely to favor the test drug than smaller trials. When looking specifically at the trials funded by drug companies, the researchers found various factors that predicted whether a result or conclusion favored the test drug. These included the impact of the journal publishing the results; the size of the trial; and whether funding came from the maker of the test drug. However, properly blinded trials were less likely to produce results favoring the test drug. Even once all other factors were accounted for, the funding source for the study was still linked with results and conclusions that favored the maker of the test drug.
What Do These Findings Mean?
This study shows that the type of sponsorship available for randomized controlled trials of statins was strongly linked to the results and conclusions of those studies, even when other factors were taken into account. However, it is not clear from this study why sponsorship has such a strong link to the overall findings. There are many possible reasons why this might be. Some people have suggested that drug companies may deliberately choose lower dosages for the comparison drug when they carry out “head-to-head” trials; this tactic is likely to result in the company's product doing better in the trial. Others have suggested that trials which produce unfavorable results are not published, or that unfavorable outcomes are suppressed. Whatever the reasons for these findings, the implications are important, and suggest that the evidence base relating to statins may be substantially biased.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040184.
The James Lind Library has been created to help people understand fair tests of treatments in health care by illustrating how fair tests have developed over the centuries
The International Committee of Medical Journal Editors has provided guidance regarding sponsorship, authorship, and accountability
The CONSORT statement is a research tool that provides an evidence-based approach for reporting the results of randomized controlled trials
Good Publication Practice guidelines provide standards for responsible publication of research sponsored by pharmaceutical companies
Information from Wikipedia on Statins. Wikipedia is an internet encyclopedia anyone can edit
doi:10.1371/journal.pmed.0040184
PMCID: PMC1885451  PMID: 17550302
15.  Low levels of vitamin C in dialysis patients is associated with decreased prealbumin and increased C-reactive protein 
BMC Nephrology  2011;12:18.
Background
Subclinical inflammation is a common phenomenon in patients on either continuous ambulatory peritoneal dialysis (CAPD) or maintenance hemodialysis (MHD). We hypothesized that vitamin C has an anti-inflammatory effect because of its electron-donating ability. The current study was designed to test the relationship between plasma vitamin C levels and several inflammatory markers.
Methods
In this cross-sectional study, 284 dialysis patients were recruited, including 117 MHD and 167 CAPD patients. Demographics were recorded. Plasma vitamin C was measured by high-performance liquid chromatography. We also measured body mass index (BMI, calculated as weight/height²), Kt/V, serum albumin, serum prealbumin, high-sensitivity C-reactive protein (hsCRP), ferritin, and hemoglobin. The relationships between vitamin C and albumin, prealbumin and hsCRP levels were tested by Spearman correlation analysis and multiple regression analysis.
Patients were classified into three subgroups by vitamin C level, according to previous recommendations [1,2], in MHD and CAPD patients respectively: group A, < 2 μg/ml (< 11.4 μmol/l, deficiency); group B, 2–4 μg/ml (11.4–22.8 μmol/l, insufficiency); and group C, > 4 μg/ml (> 22.8 μmol/l, normal and above).
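For readers unfamiliar with the derived quantities used above, the following minimal Python sketch (with hypothetical patient values) illustrates the BMI formula and the vitamin C grouping described in the Methods; the cutoffs mirror those stated above.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index = weight / height^2 (kg/m^2)."""
    return weight_kg / height_m ** 2

def vitamin_c_group(vc_ug_per_ml: float) -> str:
    """Classify plasma vitamin C (ug/ml) using the study's cutoffs."""
    if vc_ug_per_ml < 2:
        return "A: deficiency (<11.4 umol/l)"
    if vc_ug_per_ml <= 4:
        return "B: insufficiency (11.4-22.8 umol/l)"
    return "C: normal and above (>22.8 umol/l)"

print(round(bmi(70, 1.72), 1))   # e.g., 23.7 kg/m^2 for a hypothetical patient
print(vitamin_c_group(3.1))      # group B for a hypothetical plasma level
```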
Results
Plasma vitamin C levels were widely distributed among the 284 dialysis patients. Vitamin C deficiency (< 2 μg/ml) was present in 95 patients (33.45%) and insufficiency (2–4 μg/ml) in 88 (30.99%); 73 patients (25.70%) had plasma vitamin C levels within the normal range (4–14 μg/ml) and 28 (9.86%) had higher than normal levels (> 14 μg/ml). Similar proportions across vitamin C categories were found in both the MHD and CAPD groups.
Plasma vitamin C level was inversely associated with hsCRP concentration (Spearman r = -0.201, P = 0.001) and positively associated with prealbumin (Spearman r = 0.268, P < 0.001) and albumin levels (Spearman r = 0.161, P = 0.007). In multiple linear regression analysis, plasma vitamin C level was inversely associated with log10hsCRP (P = 0.048) and positively associated with prealbumin levels (P = 0.002), adjusted for gender, age, diabetes, dialysis modality and other potential confounders.
Conclusions
The investigation indicates that vitamin C deficiency is common in both MHD patients and CAPD patients. Plasma vitamin C level is positively associated with serum prealbumin level and negatively associated with hsCRP level in both groups. Vitamin C deficiency may play an important role in the increased inflammatory status in dialysis patients. Further studies are needed to determine whether inflammatory status in dialysis patients can be improved by using vitamin C supplements.
doi:10.1186/1471-2369-12-18
PMCID: PMC3112084  PMID: 21548917
16.  Advanced Electrophysiologic Mapping Systems 
Executive Summary
Objective
To assess the effectiveness, cost-effectiveness, and demand in Ontario for catheter ablation of complex arrhythmias guided by advanced nonfluoroscopy mapping systems. Particular attention was paid to ablation for atrial fibrillation (AF).
Clinical Need
Tachycardia
Tachycardia refers to a diverse group of arrhythmias characterized by heart rates that are greater than 100 beats per minute. It results from abnormal firing of electrical impulses from heart tissues or abnormal electrical pathways in the heart because of scars. Tachycardia may be asymptomatic, or it may adversely affect quality of life owing to symptoms such as palpitations, headaches, shortness of breath, weakness, dizziness, and syncope. Atrial fibrillation, the most common sustained arrhythmia, affects about 99,000 people in Ontario. It is associated with higher morbidity and mortality because of increased risk of stroke, embolism, and congestive heart failure. In atrial fibrillation, most of the abnormal arrhythmogenic foci are located inside the pulmonary veins, although the atrium may also be responsible for triggering or perpetuating atrial fibrillation. Ventricular tachycardia, often found in patients with ischemic heart disease and a history of myocardial infarction, is often life-threatening; it accounts for about 50% of sudden deaths.
Treatment of Tachycardia
The first line of treatment for tachycardia is antiarrhythmic drugs; for atrial fibrillation, anticoagulation drugs are also used to prevent stroke. For patients refractory to or unable to tolerate antiarrhythmic drugs, ablation of the arrhythmogenic heart tissues is the only option. Surgical ablation such as the Cox-Maze procedure is more invasive. Catheter ablation, involving the delivery of energy (most commonly radiofrequency) via a percutaneous catheter system guided by X-ray fluoroscopy, has been used in place of surgical ablation for many patients. However, this conventional approach in catheter ablation has not been found to be effective for the treatment of complex arrhythmias such as chronic atrial fibrillation or ventricular tachycardia. Advanced nonfluoroscopic mapping systems have been developed for guiding the ablation of these complex arrhythmias.
The Technology
Four nonfluoroscopic advanced mapping systems have been licensed by Health Canada:
CARTO EP mapping System (manufactured by Biosense Webster, CA) uses weak magnetic fields and a special mapping/ablation catheter with a magnetic sensor to locate the catheter and reconstruct a 3-dimensional geometry of the heart superimposed with colour-coded electric potential maps to guide ablation.
EnSite System (manufactured by Endocardial Solutions Inc., MN) includes a multi-electrode non-contact catheter that conducts simultaneous mapping. A processing unit uses the electrical data to compute more than 3,000 isopotential electrograms that are displayed on a reconstructed 3-dimensional geometry of the heart chamber. The navigational system, EnSite NavX, can be used separately with most mapping catheters.
The LocaLisa Intracardiac System (manufactured by Medtronic Inc., MN) is a navigational system that uses an electrical field to locate the mapping catheter. It reconstructs the location of the electrodes on the mapping catheter in 3-dimensional virtual space, thereby enabling an ablation catheter to be directed to the electrode that identifies abnormal electric potential.
Polar Constellation Advanced Mapping Catheter System (manufactured by Boston Scientific, MA) is a multielectrode basket catheter with 64 electrodes on 8 splines. Once deployed, each electrode is automatically traced. The information enables a 3-dimensional model of the basket catheter to be computed. Colour-coded activation maps are reconstructed online and displayed on a monitor. By using this catheter, a precise electrical map of the atrium can be obtained in several heartbeats.
Review Strategy
A systematic search of Cochrane, MEDLINE and EMBASE was conducted to identify studies that compared ablation guided by any of the advanced systems to fluoroscopy-guided ablation of tachycardia. English-language studies with sample sizes greater than or equal to 20 that were published between 2000 and 2005 were included. Observational studies on safety of advanced mapping systems and fluoroscopy were also included. Outcomes of interest were acute success, defined as termination of arrhythmia immediately following ablation; long-term success, defined as being arrhythmia free at follow-up; total procedure time; fluoroscopy time; radiation dose; number of radiofrequency pulses; complications; cost; and the cost-effectiveness ratio.
Quality of the individual studies was assessed using established criteria. Quality of the overall evidence was determined by applying the GRADE evaluation system. (3) Qualitative synthesis of the data was performed. Quantitative analysis using Revman 4.2 was performed when appropriate.
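The quantitative synthesis mentioned above was performed in RevMan; as a hedged illustration of the same idea (not the Secretariat's actual data or software), the sketch below pools log risk ratios from a few invented small trials by fixed-effect, inverse-variance weighting.

```python
import numpy as np

# Invented (events, total) pairs for intervention and control arms of three small trials
trials = [(5, 40, 9, 38), (7, 55, 11, 52), (4, 30, 6, 31)]

log_rr, weights = [], []
for e1, n1, e2, n2 in trials:
    rr = (e1 / n1) / (e2 / n2)
    se = np.sqrt(1/e1 - 1/n1 + 1/e2 - 1/n2)   # standard error of log(RR)
    log_rr.append(np.log(rr))
    weights.append(1 / se**2)                  # inverse-variance weight

pooled = np.average(log_rr, weights=weights)
pooled_se = np.sqrt(1 / np.sum(weights))
lo, hi = np.exp(pooled - 1.96 * pooled_se), np.exp(pooled + 1.96 * pooled_se)
print(f"pooled RR = {np.exp(pooled):.2f} (95% CI {lo:.2f} to {hi:.2f})")
```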
Quality of the Studies
Thirty-four studies met the inclusion criteria. These comprised 18 studies on CARTO (4 randomized controlled trials [RCTs] and 14 non-RCTs), 3 RCTs on EnSite NavX, 4 studies on LocaLisa Navigational System (1 RCT and 3 non-RCTs), 2 studies on EnSite and CARTO, 1 on Polar Constellation basket catheter, and 7 studies on radiation safety.
The quality of the studies ranged from moderate to low. Most of the studies had small sample sizes with selection bias, and there was no blinding of patients or care providers in any of the studies. Duration of follow-up ranged from 6 weeks to 29 months, with most having at least 6 months of follow-up. There was heterogeneity with respect to the approach to ablation, definition of success, and drug management before and after the ablation procedure.
Summary of Findings
Evidence is based on a small number of small RCTs and non-RCTs with methodological flaws.
Advanced nonfluoroscopy mapping/navigation systems provided real-time 3-dimensional images with integration of anatomic and electrical potential information that enable better visualization of areas of interest for ablation.
Advanced nonfluoroscopy mapping/navigation systems appear to be safe; they consistently shortened the fluoroscopy duration and radiation exposure.
Evidence suggests that nonfluoroscopy mapping and navigation systems may be used as adjuncts to rather than replacements for fluoroscopy in guiding the ablation of complex arrhythmias.
Most studies showed a nonsignificant trend toward a lower overall failure rate for advanced mapping-guided ablation compared with fluoroscopy-guided ablation.
Pooled analyses of small RCTs and non-RCTs that compared fluoroscopy- with nonfluoroscopy-guided ablation of atrial fibrillation and atrial flutter showed that advanced nonfluoroscopy mapping and navigational systems:
Yielded acute success rates of 69% to 100%, not significantly different from fluoroscopy ablation.
Had overall failure rates at 3 months to 19 months of 1% to 40% (median 25%).
Resulted in a 10% relative reduction in overall failure rate for advanced mapping-guided ablation compared with fluoroscopy-guided ablation for the treatment of atrial fibrillation.
Yielded added benefit over fluoroscopy in guiding the ablation of complex arrhythmia. The advanced systems were shown to reduce the arrhythmia burden and the need for antiarrhythmic drugs in patients with complex arrhythmia who had failed fluoroscopy-guided ablation.
Based on predominantly observational studies, circumferential PV ablation guided by a nonfluoroscopy system was shown to do the following:
Result in freedom from atrial fibrillation (with or without antiarrhythmic drug) in 75% to 95% of patients (median 79%). This effect was maintained up to 28 months.
Result in freedom from atrial fibrillation without antiarrhythmic drugs in 47% to 95% of patients (median 63%).
Improve patient survival at 28 months after the procedure as compared with drug therapy.
Require special skills; patient outcomes are operator dependent, and there is a significant learning curve effect.
Complication rates of pulmonary vein ablation guided by an advanced mapping/navigation system ranged from 0% to 10% with a median of 6% during a follow-up period of 6 months to 29 months.
The complication rate of the study with the longest follow-up was 8%.
The most common complications of advanced catheter-guided ablation were stroke, transient ischemic attack, cardiac tamponade, myocardial infarction, atrial flutter, congestive heart failure, and pulmonary vein stenosis. A small number of cases with fatal atrial-esophageal fistula had been reported and were attributed to the high radiofrequency energy used rather than to the advanced mapping systems.
Economic Analysis
An Ontario-based economic analysis suggests that the cumulative incremental upfront costs of catheter ablation of atrial fibrillation guided by advanced nonfluoroscopy mapping could be recouped in 4.7 years through cost avoidance arising from less need for antiarrhythmic drugs and fewer hospitalizations for stroke and heart failure.
Expert Opinion
Expert consultants to the Medical Advisory Secretariat noted the following:
Nonfluoroscopy mapping is not necessary for simple ablation procedures (e.g., typical flutter). However, it is essential in the ablation of complex arrhythmias including these:
Symptomatic, drug-refractory atrial fibrillation
Arrhythmias in people who have had surgery for congenital heart disease (e.g., macro re-entrant tachycardia)
Ventricular tachycardia due to myocardial infarction
Atypical atrial flutter
Advanced mapping systems represent an enabling technology in the ablation of complex arrhythmias. The ablation of these complex cases would not have been feasible or advisable with fluoroscopy-guided ablation and, therefore, comparative studies would not be feasible or ethical in such cases.
Many of the studies included patients with relatively simple arrhythmias (e.g., typical atrial flutter and atrial ventricular nodal re-entrant tachycardia), for which the success rates using the fluoroscopy approach were extremely high and unlikely to be improved upon using nonfluoroscopic mapping.
By age 50, almost 100% of people who have had surgery for congenital heart disease will develop arrhythmia.
Some centres are under greater pressure because of expertise in complex ablation procedures for subsets of patients.
The use of advanced mapping systems requires the support of additional electrophysiologic laboratory time and nursing time.
Conclusions
For patients who have symptomatic, drug-refractory atrial fibrillation and are otherwise healthy, catheter ablation offers a treatment option that is less invasive than open surgical ablation.
Small RCTs that may have been limited by type 2 errors showed significant reductions in fluoroscopy exposure in nonfluoroscopy-guided ablation and a trend toward lower overall failure rate that did not reach statistical significance.
Pooled analysis suggests that advanced mapping systems may reduce the overall failure rate in the ablation of atrial fibrillation.
Observational studies suggest that ablation guided by complex mapping/navigation systems is a promising treatment for complex arrhythmias such as highly symptomatic, drug-refractory atrial fibrillation for which rate control is not an option.
In people with atrial fibrillation, ablation guided by advanced nonfluoroscopy mapping resulted in arrhythmia-free rates of 80% or higher, reduced mortality, and better quality of life at experienced centres.
Although generally safe, serious complications such as stroke, atrial-esophageal fistula, and pulmonary vein stenosis have been reported following ablation procedures.
Experts advised that advanced mapping systems are also required for catheter ablation of:
Hemodynamically unstable ventricular tachycardia from ischemic heart disease
Macro re-entrant atrial tachycardia after surgical correction of congenital heart disease
Atypical atrial flutter
Catheter ablation of atrial fibrillation is still evolving, and it appears that different ablative techniques may be appropriate depending on the characteristics of the patient and the atrial fibrillation.
Data from centres that perform electrophysiological mapping suggest that patients with drug-refractory atrial fibrillation may be the largest group with unmet need for advanced mapping-guided catheter ablation in Ontario.
Nonfluoroscopy mapping-guided pulmonary vein ablation for the treatment of atrial fibrillation has a significant learning effect; therefore, it is advisable for the province to establish centres of excellence to ensure a critical volume, to gain efficiency and to minimize the need for antiarrhythmic drugs after ablation and the need for future repeat ablation procedures.
PMCID: PMC3379531  PMID: 23074499
17.  Low calcium dialysate combined with CaCO3 in hyperphosphatemia in hemodialysis patients 
The aim of this study was to observe the effects of the application of low calcium dialysate (LCD) combined with oral administration of CaCO3 in the treatment of hyperphosphatemia, as well as blood Ca2+, calcium-phosphate product (CPP), parathyroid hormone (PTH) and blood pressure in patients undergoing hemodialysis. Thirty-one maintenance hemodialysis (MHD) patients with hyperphosphatemia, but normal blood Ca2+, underwent dialysis with an initial dialysate Ca2+ concentration (DCa) of 1.50 mmol/l for six months and then with 1.25 mmol/l for six months. The patients who underwent dialysis with a DCa of 1.25 mmol/l were treated orally with 0.3 g CaCO3 tablets three times a day. In the third and sixth months [observation end point (OEP)] of the dialysis, the concentrations of Ca2+, phosphorus and intact PTH (iPTH) were measured; blood pressure and side-effects prior to and following dialysis were also observed. The Ca2+, CPP and iPTH levels increased (P<0.05) in the sixth month of treatment with a DCa of 1.50 mmol/l. However, the Ca2+ concentration declined to a certain degree, the CPP decreased significantly (P<0.05) and the iPTH concentration increased following treatment with a DCa of 1.25 mmol/l for six months. The incidence rate of adverse effects of LCD was 12.9% (4/31); the effects were mainly muscle spasms, hypotension and elevated PTH. The periodic application of LCD combined with the oral administration of CaCO3 effectively reduced serum phosphorus and the CPP among MHD patients with hyperphosphatemia, indicating that the treatment may be used clinically.
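As a point of reference for the calcium-phosphate product tracked above, the snippet below shows the usual calculation on hypothetical laboratory values; the units and the commonly cited ~55 mg²/dl² threshold are assumptions for illustration, not figures taken from this study.

```python
def calcium_phosphate_product(calcium_mg_dl: float, phosphorus_mg_dl: float) -> float:
    """Ca x P product in mg^2/dl^2 from serum calcium and phosphorus."""
    return calcium_mg_dl * phosphorus_mg_dl

cpp = calcium_phosphate_product(9.2, 6.5)   # hypothetical predialysis values
print(f"Ca x P = {cpp:.1f} mg^2/dl^2")      # 59.8, above the ~55 threshold often cited
```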
doi:10.3892/etm.2013.1067
PMCID: PMC3702715  PMID: 23837063
low calcium dialysis; hyperphosphatemia; calcium-phosphate product; parathyroid hormone
18.  Use of hyaluronan in the selection of sperm for intracytoplasmic sperm injection (ICSI): significant improvement in clinical outcomes—multicenter, double-blinded and randomized controlled trial 
STUDY QUESTION
Does the selection of sperm for ICSI based on their ability to bind to hyaluronan improve the clinical pregnancy rates (CPR) (primary end-point), implantation (IR) and pregnancy loss rates (PLR)?
SUMMARY ANSWER
In couples where ≤65% of sperm bound hyaluronan, the selection of hyaluronan-bound (HB) sperm for ICSI led to a statistically significant reduction in PLR.
WHAT IS KNOWN AND WHAT THIS PAPER ADDS
HB sperm demonstrate enhanced developmental parameters which have been associated with successful fertilization and embryogenesis. Sperm selected for ICSI using a liquid source of hyaluronan achieved an improvement in IR. A pilot study by the primary author demonstrated that the use of HB sperm in ICSI was associated with improved CPR. The current study represents the single largest prospective, multicenter, double-blinded and randomized controlled trial to evaluate the use of hyaluronan in the selection of sperm for ICSI.
DESIGN
Using the hyaluronan binding assay, an HB score was determined for the fresh or initial (I-HB) and processed or final semen specimen (F-HB). Patients were classified as >65% or ≤65% I-HB and stratified accordingly. Patients with I-HB scores ≤65% were randomized into control and HB selection (HYAL) groups whereas patients with I-HB >65% were randomized to non-participatory (NP), control or HYAL groups, in a ratio of 2:1:1. The NP group was included in the >65% study arm to balance the higher prevalence of patients with I-HB scores >65%. In the control group, oocytes received sperm selected via the conventional assessment of motility and morphology. In the HYAL group, HB sperm meeting the same visual criteria were selected for injection. Patient participants and clinical care providers were blinded to group assignment.
PARTICIPANTS AND SETTING
Eight hundred two couples treated with ICSI in 10 private and hospital-based IVF programs were enrolled in this study. Of the 484 patients stratified to the I-HB > 65% arm, 115 participants were randomized to the control group, 122 participants were randomized to the HYAL group and 247 participants were randomized to the NP group. Of the 318 patients stratified to the I-HB ≤ 65% arm, 164 participants were randomized to the control group and 154 participants were randomized to the HYAL group.
MAIN RESULTS AND THE ROLE OF CHANCE
HYAL patients with an F-HB score ≤65% demonstrated an IR of 37.4% compared with 30.7% for control [n = 63, 58, P > 0.05, 95% CI of the difference −7.7 to 21.3]. In addition, the CPR associated with patients randomized to the HYAL group was 50.8% when compared with 37.9% for those randomized to the control group (n = 63, 58, P > 0.05). The 12.9% difference was associated with a risk ratio (RR) of 1.340 (RR 95% CI 0.89–2.0). HYAL patients with I-HB and F-HB scores ≤65% revealed a statistically significant reduction in their PLR over control patients (I-HB: 3.3 versus 15.1%, n = 73, 60, P = 0.021, RR 0.22 [RR 95% CI 0.05–0.96]; F-HB: 0.0 versus 18.5%, n = 27, 32, P = 0.016, RR not applicable due to the 0.0% value). The study was originally planned to have 200 participants per arm, providing 86.1% power to detect an increase in CPR from 35 to 50% at α = 0.05, but was stopped early for financial reasons. As a pilot study had demonstrated that sperm preparation protocols may increase the HB score, the design of the current study incorporated a priori collection and analysis of the data by both the I-HB and the F-HB scores. Analysis by both the I-HB and F-HB score acknowledged the potential impact of sperm preparation protocols.
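The sample size statement above (200 per arm giving 86.1% power to detect 35% versus 50% CPR at α = 0.05) can be checked approximately with a standard two-proportion power calculation. The sketch below uses statsmodels; the test and sidedness are assumptions here, so it need not reproduce the exact reported figure.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.50, 0.35)   # Cohen's h for the two pregnancy rates
power = NormalIndPower().power(effect_size=effect, nobs1=200, alpha=0.05,
                               ratio=1.0, alternative="two-sided")
print(f"approximate power with 200 per arm: {power:.3f}")   # close to the reported 86.1%
```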
BIAS, CONFOUNDING AND OTHER REASONS FOR CAUTION
Selection bias was controlled by randomization. Geographic and seasonal bias was controlled by recruiting from 10 geographically unique sites and by sampling over a 2-year period. The potential for population effect was controlled by adjusting for higher prevalence rates of >65% I-HB that naturally occur by adding the NP arm and to concurrently recruit >65% and ≤65% I-HB subjects. Monitoring and site audits occurred regularly to ensure standardization of data collection, adherence to the study protocol and subject recruitment. Subgroup analysis based on the F-HB score was envisaged in the study design.
GENERALIZABILITY TO OTHER POPULATIONS
The study included clinics using different sperm preparation methods, located in different regions of the USA and proceeded in every month of the year. Therefore, the results are widely applicable.
STUDY FUNDING/COMPETING INTEREST(S)
This study was funded by Biocoat, Inc., Horsham, PA, USA. The statistical analysis plan and subsequent analyses were performed by Sherrine Eid, a biostatistician. The manuscript was prepared by Kathryn C. Worrilow, Ph.D. and the study team members. Biocoat, Inc. was permitted to review the manuscript and suggest changes, but the final decision on content was exclusively retained by the authors. K.C.W is a scientific advisor to Biocoat, Inc. S.E. is a consultant to Biocoat, Inc. D.W. has nothing to disclose. M.P., S.S., J.W., K.I., C.K. and T.E. have nothing to disclose. G.D.B. is a consultant to Cooper Surgical and Unisense. J.L. is on the scientific advisory board of Origio.
TRIAL REGISTRATION NUMBER
NCT00741494.
doi:10.1093/humrep/des417
PMCID: PMC3545641  PMID: 23203216
19.  Dual factor pulse pressure: body mass index and outcome in type 2 diabetic subjects on maintenance hemodialysis. A longitudinal study 2003–2006 
Vascular Health and Risk Management  2008;4(6):1401-1406.
Background:
Inverse associations between risk factors and mortality have been reported in epidemiological studies of patients on maintenance hemodialysis (MHD).
Objective:
The aim of this prospective study was to estimate the effect of the dual variable pulse pressure (PP) – body mass index (BMI) on cardiovascular (CV) events and death in type 2 diabetic (T2D) subjects on MHD in a Caribbean population.
Methods:
Eighty Afro-Caribbean T2D patients on MHD were studied prospectively from 2003 to 2006. Proportional-hazard modeling was used.
Results:
Overall, 23.8% had a high PP (PP ≥ 75th percentile), 76.3% had BMI < 30 kg/m², and 21.3% had the dual factor high PP – absence of obesity. During the study period, 23 patients died and 13 CV events occurred. In the presence of the dual variable, and after adjustment for age, gender, duration of MHD, and pre-existing CV complications, the adjusted hazard ratios (HR, 95% CI) for CV events and death were 2.7 (0.8–8.3; P = 0.09) and 2.4 (1.1–5.9; P = 0.04), respectively.
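The hazard ratios above come from proportional-hazards modelling. As a hedged sketch of that analysis pattern only (invented data, placeholder column names, and the lifelines package rather than whatever software the authors used):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Invented follow-up data: time in months, death indicator, exposure, one covariate
df = pd.DataFrame({
    "months":           [12, 30, 36, 8, 24, 36, 18, 36, 6, 28],
    "died":             [1, 0, 0, 1, 1, 0, 1, 0, 1, 0],
    "high_pp_nonobese": [1, 0, 0, 1, 0, 1, 1, 0, 1, 0],
    "age":              [66, 72, 61, 58, 69, 55, 64, 59, 75, 62],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
# Hazard ratios with 95% confidence intervals (exponentiated coefficients)
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```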
Conclusions:
The dual factor, high PP – absence of obesity, is a prognostic factor for outcome. In type 2 diabetics on MHD, a specific management strategy should be proposed for nonobese subjects with a wide pulse pressure in order to decrease or prevent the incidence of fatal and nonfatal events.
PMCID: PMC2663455  PMID: 19337552
dual factor; pulse pressure; body mass index; type 2 diabetes; outcome
20.  Serum catalytic Iron: A novel biomarker for coronary artery disease in patients on maintenance hemodialysis 
Indian Journal of Nephrology  2013;23(5):332-337.
Cardiovascular disease is the leading cause of morbidity and mortality in maintenance hemodialysis (MHD) patients. We evaluated the role of serum catalytic iron (SCI) as a biomarker for coronary artery disease (CAD) in patients on MHD. SCI was measured in 59 stable MHD patients. All patients underwent coronary angiography. Significant CAD was defined as a > 70% narrowing in at least one epicardial coronary artery. Levels of SCI were compared with a group of healthy controls. Significant CAD was detected in 22 (37.3%) patients, with one-vessel disease in 14 (63.63%) and multi-vessel disease in eight (36.36%) patients. The MHD patients had elevated levels of SCI (4.70 ± 1.79 μmol/L) compared with normal health survey participants (0.11 ± 0.01 μmol/L) (P < 0.0001). MHD patients who had no CAD had SCI levels of 1.36 ± 0.34 μmol/L compared with those having significant CAD (8.92 ± 4.12 μmol/L) (P < 0.0001). MHD patients with diabetes had a stronger correlation between SCI and the prevalence of CAD than non-diabetics. Patients having one-vessel disease had SCI of 8.85 ± 4.67 μmol/L versus multi-vessel disease with SCI of 9.05 ± 8.34 μmol/L, P = 0.48. In multivariate analysis, SCI and diabetes mellitus were independently associated with significant CAD. We confirm the high prevalence of significant CAD in MHD patients. Elevated SCI levels are associated with the presence of significant coronary disease in such patients. The association of SCI is stronger in the diabetic than in the non-diabetic subgroup. This is an important, potentially modifiable biomarker of CAD in MHD patients.
doi:10.4103/0971-4065.116293
PMCID: PMC3764705  PMID: 24049267
Coronary artery disease; maintenance hemodialysis; oxidative stress; serum catalytic iron
21.  Daily physical activity and physical function in adult maintenance hemodialysis patients 
Background
Maintenance hemodialysis (MHD) patients reportedly display reduced daily physical activity (DPA) and physical performance. Low daily physical activity and decreased physical performance are each associated with worse outcomes in chronic kidney disease patients. Although daily physical activity and physical performance might be expected to be related, few studies have examined such relationships in MHD patients, and methods for examining daily physical activity often utilized questionnaires rather than activity monitors. We hypothesized that daily physical activity and physical performance are reduced and correlated with each other even in relatively healthier MHD patients.
Methods
Daily physical activity, 6-min walk distance (6-MWT), sit-to-stand, and stair-climbing tests were measured in 72 MHD patients (32 % diabetics) with limited comorbidities and 39 normal adults of similar age and gender mix. Daily physical activity was examined by a physical activity monitor. The human activity profile was also employed.
Results
Daily physical activity with the activity monitor, time-averaged over 7 days, and all three physical performance tests were impaired in MHD patients, to about 60–70 % of normal values (p < 0.0001 for each measurement). Human activity profile scores were also impaired (p < 0.0001). MHD patients spent more time sleeping or in marked physical inactivity (p < 0.0001) and less time in ≥ moderate activity (p < 0.0001). These findings persisted when comparisons to normals were restricted to men or women separately. After adjustment, daily physical activity correlated with 6-MWT but not the two other physical performance tests. Human activity profile scores correlated more closely with all three performance tests than did DPA.
Conclusions
Even in relatively healthy MHD patients, daily physical activity and physical performance are substantially impaired and correlated. Whether training that increases daily physical activity or physical performance will improve clinical outcome in MHD patients needs to be examined.
doi:10.1007/s13539-014-0131-4
PMCID: PMC4159490  PMID: 24777474
Chronic kidney disease; Kidney failure; Physical performance; Exercise; Exercise capacity
22.  Switching HIV Treatment in Adults Based on CD4 Count Versus Viral Load Monitoring: A Randomized, Non-Inferiority Trial in Thailand 
PLoS Medicine  2013;10(8):e1001494.
Using a randomized controlled trial, Marc Lallemant and colleagues ask if a CD4-based monitoring and treatment switching strategy provides a similar clinical outcome compared to the standard viral load-based strategy for adults with HIV in Thailand.
Please see later in the article for the Editors' Summary
Background
Viral load (VL) is recommended for monitoring the response to highly active antiretroviral therapy (HAART) but is not routinely available in most low- and middle-income countries. The purpose of the study was to determine whether a CD4-based monitoring and switching strategy would provide a similar clinical outcome compared to the standard VL-based strategy in Thailand.
Methods and Findings
The Programs for HIV Prevention and Treatment (PHPT-3) non-inferiority randomized clinical trial compared a treatment switching strategy based on CD4-only (CD4) monitoring versus viral load (VL) monitoring. Consenting participants were antiretroviral-naïve HIV-infected adults (CD4 count 50–250/mm3) initiating non-nucleoside reverse transcriptase inhibitor (NNRTI)-based therapy. Randomization, stratified by site (21 public hospitals), was performed centrally after enrollment. Clinicians were unaware of the VL values of patients randomized to the CD4 arm. Participants switched to second-line combination with confirmed CD4 decline >30% from peak (within 200 cells from baseline) in the CD4 arm, or confirmed VL >400 copies/ml in the VL arm. Primary endpoint was clinical failure at 3 years, defined as death, new AIDS-defining event, or CD4 <50 cells/mm3. The 3-year Kaplan-Meier cumulative risks of clinical failure were compared for non-inferiority with a margin of 7.4%. In the intent-to-treat analysis, data were censored at the date of death or at last visit. The secondary endpoints were difference in future-drug-option (FDO) score, a measure of resistance profiles, virologic and immunologic responses, and the safety and tolerance of HAART. 716 participants were randomized, 356 to VL monitoring and 360 to CD4 monitoring. At 3 years, 319 participants (90%) in VL and 326 (91%) in CD4 were alive and on follow-up. The cumulative risk of clinical failure was 8.0% (95% CI 5.6–11.4) in VL versus 7.4% (5.1–10.7) in CD4, and the upper limit of the one-sided 95% CI of the difference was 3.4%, meeting the pre-determined non-inferiority criterion. Probability of switch for study criteria was 5.2% (3.2–8.4) in VL versus 7.5% (5.0–11.1) in CD4 (p = 0.097). Median time from treatment initiation to switch was 11.7 months (7.7–19.4) in VL and 24.7 months (15.9–35.0) in CD4 (p = 0.001). The median duration of viremia >400 copies/ml at switch was 7.2 months (5.8–8.0) in VL versus 15.8 months (8.5–20.4) in CD4 (p = 0.002). FDO scores were not significantly different at time of switch. No adverse events related to the monitoring strategy were reported.
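To make the non-inferiority logic above concrete, the sketch below recomputes the risk difference and a one-sided 95% upper confidence limit from the headline figures using a simple normal approximation; this is not the trial's actual Kaplan-Meier-based analysis, so it will not exactly reproduce the reported 3.4%.

```python
import math

risk_vl, n_vl = 0.080, 356     # 3-year clinical failure risk, VL arm
risk_cd4, n_cd4 = 0.074, 360   # 3-year clinical failure risk, CD4 arm
margin = 0.074                 # prespecified non-inferiority margin (7.4%)

diff = risk_cd4 - risk_vl
se = math.sqrt(risk_vl * (1 - risk_vl) / n_vl + risk_cd4 * (1 - risk_cd4) / n_cd4)
upper_95_one_sided = diff + 1.645 * se
print(f"risk difference = {diff:.3f}, one-sided 95% upper limit = {upper_95_one_sided:.3f}")
print("non-inferiority met" if upper_95_one_sided < margin else "non-inferiority not demonstrated")
```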
Conclusions
The 3-year rates of clinical failure and loss of treatment options did not differ between strategies although the longer-term consequences of CD4 monitoring would need to be investigated. These results provide reassurance to treatment programs currently based on CD4 monitoring as VL measurement becomes more affordable and feasible in resource-limited settings.
Trial registration
ClinicalTrials.gov NCT00162682
Please see later in the article for the Editors' Summary
Editors' Summary
Background
About 34 million people (most of them living in low- and middle-income countries) are currently infected with HIV, the virus that causes AIDS. HIV infection leads to the destruction of immune system cells (including CD4 cells, a type of white blood cell), leaving infected individuals susceptible to other infections. Early in the AIDS epidemic, most HIV-infected individuals died within 10 years of infection. Then, in 1996, highly active antiretroviral therapy (HAART)—combined drug regimens that suppress viral replication and allow restoration of the immune system—became available. For people living in affluent countries, HIV/AIDS became a chronic condition but, because HAART was expensive, HIV/AIDS remained a fatal illness for people living in resource-limited countries. In 2003, the international community declared HIV/AIDS a global health emergency and, in 2006, it set the target of achieving universal global access to HAART by 2010. By the end of 2011, 8 million of the estimated 14.8 million people in need of HAART in low- and middle-income countries were receiving treatment.
Why Was This Study Done?
At the time this trial was conceived, national and international recommendations were that HIV-positive individuals should start HAART when their CD4 count fell below 200 cells/mm3 and should have their CD4 count regularly monitored to optimize HAART. In 2013, the World Health Organization (WHO) recommendations were updated to promote expanded eligibility for HAART with a CD4 of 500 cells/mm3 or less for adults, adolescents, and older children although priority is given to individuals with CD4 count of 350 cells/mm3 or less. Because HIV often becomes resistant to first-line antiretroviral drugs, WHO also recommends that viral load—the amount of virus in the blood—should be monitored so that suspected treatment failures can be confirmed and patients switched to second-line drugs in a timely manner. This monitoring and switching strategy is widely used in resource-rich settings, but is still very difficult to implement for low- and middle-income countries where resources for monitoring are limited and access to costly second-line drugs is restricted. In this randomized non-inferiority trial, the researchers compare the performance of a CD4-based treatment monitoring and switching strategy with the standard viral load-based strategy among HIV-positive adults in Thailand. In a randomized trial, individuals are assigned different interventions by the play of chance and followed up to compare the effects of these interventions; a non-inferiority trial investigates whether one treatment is not worse than another.
What Did the Researchers Do and Find?
The researchers assigned about 700 HIV-positive adults who were beginning HAART for the first time to have their CD4 count (CD4 arm) or their CD4 count and viral load (VL arm) determined every 3 months. Participants were switched to a second-line therapy if their CD4 count declined by more than 30% from their peak CD4 count (CD4 arm) or if a viral load of more than 400 copies/ml was recorded (VL arm). The 3-year cumulative risk of clinical failure (defined as death, a new AIDS-defining event, or a CD4 count of less than 50 cells/mm3) was 8% in the VL arm and 7.4% in the CD4 arm. This difference in clinical failure risk met the researchers' predefined criterion for non-inferiority. The probability of a treatment switch was similar in the two arms, but the average time from treatment initiation to treatment switch and the average duration of a high viral load after treatment switch were both longer in the CD4 arm than in the VL arm. Finally, the future-drug-option score, a measure of viral drug resistance profiles, was similar in the two arms at the time of treatment switch.
What Do These Findings Mean?
These findings suggest that, in Thailand, a CD4 switching strategy is non-inferior in terms of clinical outcomes among HIV-positive adults 3 years after beginning HAART when compared to the recommended viral load-based switching strategy and that there is no difference between the strategies in terms of viral suppression and immune restoration after 3-years follow-up. Importantly, however, even though patients in the CD4 arm spent longer with a high viral load than patients in the VL arm, the emergence of HIV mutants resistant to antiretroviral drugs was similar in the two arms. Although these findings provide no information about the long-term outcomes of the two monitoring strategies and may not be generalizable to routine care settings, they nevertheless provide reassurance that using CD4 counts alone to monitor HAART in HIV treatment programs in resource-limited settings is an appropriate strategy to use as viral load measurement becomes more affordable and feasible in these settings.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001494.
The World Health Organization provides information on all aspects of HIV/AIDS (in several languages); its 2010 recommendations for antiretroviral therapy for HIV infection in adults and adolescents are available as well as the June 2013 Consolidated guidelines on the use of antiretroviral drugs for treating and preventing HIV infection: recommendations for a public health approach
The 2012 UNAIDS World AIDS Day Report provides up-to-date information about the AIDS epidemic and efforts to halt it
Information is available from the US National Institute of Allergy and Infectious Diseases on HIV infection and AIDS
NAM/aidsmap provides basic information about HIV/AIDS and summaries of recent research findings on HIV care and treatment
Information is available from Avert, an international AIDS charity on many aspects of HIV/AIDS, including information on the global HIV/AIDS epidemic, on HIV and AIDS in Thailand, on universal access to AIDS treatment, and on starting, monitoring and switching HIV treatment (in English and Spanish)
The UK National Health Service Choices website provides information (including personal stories) about HIV and AIDS
More information about this trial (the PHPT-3 trial) is available
Patient stories about living with HIV/AIDS are available through Avert; the nonprofit website Healthtalkonline also provides personal stories about living with HIV, including stories about HIV treatment
doi:10.1371/journal.pmed.1001494
PMCID: PMC3735458  PMID: 23940461
23.  A Randomised Trial Comparing Genotypic and Virtual Phenotypic Interpretation of HIV Drug Resistance: The CREST Study 
PLoS Clinical Trials  2006;1(3):e18.
Objectives:
The aim of this study was to compare the efficacy of different HIV drug resistance test reports (genotype and virtual phenotype) in patients who were changing their antiretroviral therapy (ART).
Design:
Randomised, open-label trial with 48-week follow-up.
Setting:
The study was conducted in a network of primary healthcare sites in Australia and New Zealand.
Participants:
Patients failing current ART with plasma HIV RNA > 2000 copies/mL who wished to change their current ART were eligible. Subjects were required to be > 18 years of age, previously treated with ART, have no intercurrent illnesses requiring active therapy, and to have provided written informed consent.
Interventions:
Eligible subjects were randomly assigned to receive a genotype (group A) or genotype plus virtual phenotype (group B) prior to selection of their new antiretroviral regimen.
Outcome Measures:
Patient groups were compared for patterns of ART selection and surrogate outcomes (plasma viral load and CD4 counts) on an intention-to-treat basis over a 48-week period.
Results:
Three hundred and twenty-seven patients completing more than one month of follow-up were included in these analyses. Resistance tests were the primary means by which ART regimens were selected (group A: 64%, group B: 62%; p = 0.32). At 48 weeks, there were no significant differences between the groups for mean change from baseline plasma HIV RNA (group A: 0.68 log copies/mL, group B: 0.58 log copies/mL; p = 0.23) and mean change from baseline CD4+ cell count (group A: 37 cells/mm3, group B: 50 cells/mm3; p = 0.28).
Conclusions:
In the absence of clearly demonstrated benefits arising from the use of the virtual phenotype interpretation, this study suggests that resistance testing using genotyping linked to a reliable interpretive algorithm is adequate for the management of HIV infection.
Editorial Commentary
Background: Antiretroviral drugs are used to treat patients with HIV infection, with good evidence that they improve prognosis. However, mutations develop in the HIV genome that allow it to evade successful treatment—known as drug resistance—and such mutations are known against every class of antiretroviral drug. Resistance can cause treatment failure and limit the treatment options available. Different types of tests are often used to detect resistance and to work out whether patients should switch to a different drug regimen. Currently, the different types of tests include genotype testing (direct sequencing of genes from virus samples infecting a patient); phenotype testing (a test that assesses the sensitivity of a patient's HIV sample to different drugs), and virtual phenotype testing (a way of interpreting genotype data that estimates the likely viral response to different drugs). The researchers of this study did a trial to find out whether providing an additional virtual phenotype report would be beneficial to patients, as compared with a genotype report alone. The main outcome was HIV viral load after 12 months of treatment, but the researchers also looked at differences in drug regimens prescribed, number of treatment changes in the study, and changes in CD4+ (the type of white blood cell infected by HIV) counts.
What this trial shows: The researchers found that the main endpoint of the trial (HIV viral load after 12 months) was no different in patients whose clinicians had received a virtual phenotype report as well as a genotype report, compared with those who had received a genotype report alone. In addition, the average number of drugs prescribed was no different between patients in the two different arms of the trial, and there was no difference in number of drug regimen changes, and no change in immune response (measured using CD4+ cell levels). However, more drugs predicted to be sensitive were prescribed by clinicians who got both a genotype and virtual phenotype report, as compared with clinicians who received only the genotype report.
Strengths and limitations: The size of the trial (338 patients recruited) was large enough to properly test the hypothesis that providing a virtual phenotype report as well as a genotype report would result in lower HIV viral loads. Randomization of patients to either intervention ensured that the comparison groups were well-balanced, and the researchers also tested whether selection bias had affected the results (i.e., testing for the possibility that clinicians could predict which intervention participants would receive, and change recruitment into the trial as a result). They found no evidence for selection bias occurring within the trial. However, interpreting the results is difficult because the trial did not directly compare the two different testing platforms, but rather looked at whether providing a virtual phenotype report as well as a genotype report was better than providing a genotype report alone. The investigators also acknowledge that since the trial was conducted, the cutoffs for interpreting genotype information as resistant have been lowered. The findings may therefore not translate precisely to the current situation.
Contribution to the evidence: Other cohort studies and clinical trials have shown that patients offered resistance testing respond better to antiretroviral therapy compared with those who were not, but the clinical effectiveness of different resistance testing methods is not known. This study provides additional data on the respective benefits of genotype testing versus genotype plus provision of virtual phenotype. Another trial comparing genotype versus virtual phenotype has also found that the different interpretation methods perform similarly.
doi:10.1371/journal.pctr.0010018
PMCID: PMC1523224  PMID: 16878178
24.  Bioimpedance Spectroscopy for Assessment of Volume Status in Patients before and after General Anaesthesia 
PLoS ONE  2014;9(10):e111139.
Background
Technically assisted assessment of volume status before surgery may be useful to direct intraoperative fluid administration. We therefore tested a recently developed whole-body bioimpedance spectroscopy device to determine pre- to postoperative fluid distribution.
Methods
Using a three-compartment physiologic tissue model, the body composition monitor (BCM, Fresenius Medical Care, Germany) measures total body fluid volume, extracellular volume, intracellular volume and fluid overload as a surplus or deficit of ‘normal’ extracellular volume. BCM measurements were performed before and after standardized general anaesthesia for gynaecological procedures (laparotomies, laparoscopies and vaginal surgeries). BCM results were blinded to the attending anaesthesiologist, and data were analysed using the 2-sided, paired Student’s t-test and multiple linear regression.
Results
In 71 females aged 45±15 years with body weight 67±13 kg and duration of anaesthesia 154±68 min, pre- to postoperative fluid overload increased from −0.7±1.1 L to 0.1±1.0 L, corresponding to −5.1±7.5% and 0.8±6.7% of normal extracellular volume, respectively (both p<0.001), after patients had received 1.9±0.9 L intravenous crystalloid fluid. Perioperative urinary excretion was 0.4±0.3 L. The increase in extracellular volume was paralleled by an increase in total body fluid volume, while intracellular volume increased only slightly and without reaching statistical significance (p = 0.15). Net perioperative fluid balance (administered fluid volume minus urinary excretion) was significantly associated with change in extracellular volume (r2 = 0.65), but was not associated with change in intracellular volume (r2 = 0.01).
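The association between net fluid balance and the change in extracellular volume is reported above as an r². The sketch below shows that analysis pattern on invented data points, using simple least-squares regression, which may differ from the multiple regression the authors actually used.

```python
import numpy as np
from scipy import stats

# Invented perioperative data: net fluid balance (L) and change in extracellular volume (L)
net_balance_l = np.array([0.8, 1.2, 1.5, 1.9, 2.3, 2.7, 1.1, 1.7, 2.0, 2.5])
delta_ecv_l   = np.array([0.3, 0.5, 0.7, 0.8, 1.1, 1.2, 0.4, 0.9, 0.9, 1.3])

res = stats.linregress(net_balance_l, delta_ecv_l)
print(f"slope = {res.slope:.2f} L of ECV per L net balance, r^2 = {res.rvalue ** 2:.2f}")
```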
Conclusions
Routine intraoperative fluid administration results in a significant and clinically meaningful increase in the extracellular compartment. BCM measurements yielded plausible results and may become useful for guiding intraoperative fluid therapy in future studies.
doi:10.1371/journal.pone.0111139
PMCID: PMC4215896  PMID: 25360698
25.  Factors in AIDS Dementia Complex Trial Design: Results and Lessons from the Abacavir Trial 
PLoS Clinical Trials  2007;2(3):e13.
Objectives:
To determine the efficacy of adding abacavir (Ziagen, ABC) to optimal stable background antiretroviral therapy (SBG) in AIDS dementia complex (ADC) patients, and to address trial design.
Design:
Phase III randomized, double-blind placebo-controlled trial.
Setting:
Tertiary outpatient clinics.
Participants:
ADC patients on SBG for ≥8 wk.
Interventions:
Participants were randomized to ABC or matched placebo for 12 wk.
Outcome Measures:
The primary outcome measure was the change in the summary neuropsychological Z score (NPZ). Secondary measures were HIV RNA and the immune activation markers β-2 microglobulin, soluble tumor necrosis factor (TNF) receptor 2, and quinolinic acid.
Results:
105 participants were enrolled. The median change in NPZ at week 12 was +0.76 for the ABC + SBG group and +0.63 for the SBG group (p = 0.735). The lack of efficacy was unlikely to be related to limited antiviral efficacy of ABC: at week 12 more ABC than placebo participants had plasma HIV RNA ≤400 copies/mL (p = 0.002). There were, however, other factors. First, two thirds of patients were subsequently found to have had baseline resistance to ABC. Second, there was an unanticipated beneficial effect of SBG that extended beyond 8 wk to 5 mo, thereby rendering some of the patients unstable at baseline. Third, there was an unexpectedly large variability in neuropsychological performance that underpowered the study. Fourth, there was a relative lack of ADC disease activity: 56% of all patients had baseline cerebrospinal fluid (CSF) HIV-1 RNA <100 copies/mL and 83% had CSF β-2 microglobulin <3 nmol/L at baseline.
Conclusions:
The addition of ABC to SBG for ADC patients was not efficacious, possibly because of the inefficacy of ABC per se, baseline drug resistance, prolonged benefit from existing therapy, difficulties with sample size calculations, and lack of disease activity. Assessment of these trial design factors is critical in the design of future ADC trials.
Editorial Commentary
Background: AIDS dementia complex (ADC) was first identified early in the HIV epidemic and at that time affected a substantial proportion of patients with AIDS. Patients with ADC experience dementia as well as disordered behavior and problems with movement and balance. ADC is now much less common in locations where patients have access to HAART (highly active antiretroviral therapy), consisting of combinations of several drugs that attack the virus at different stages of its life cycle. At present, however, there are no generally accepted guidelines for the best treatment of people with ADC. It has been thought for some time that the best treatment regimens for people with ADC would include drugs that cross the barrier between blood and brain well. Shortly following the introduction of HAART, a trial was carried out to find out whether adding one particular drug, abacavir, to existing combinations of drugs would be beneficial in people with ADC. This drug is known to cross the barrier between the blood and brain. The trial enrolled HIV-positive individuals with mild to moderate ADC and who were already receiving antiretroviral drug treatment. 105 participants were assigned at random to receive either high-dose abacavir or a placebo, in addition to their existing therapy. The primary outcome of the trial was a summary of performance on a set of different tests, designed to evaluate cognitive, behavior, and movement skills, at 12 weeks. Other outcomes included levels of HIV RNA (viral load) in fluid around the brain, as well as other neurological evaluations, and the level of HIV RNA and CD4+ T cells (the cells infected by HIV) in blood.
What this trial shows: When comparing the change in performance scores for individuals randomized to either abacavir or placebo, the results showed an improvement in scores for both groups, but no significant difference in improvement between the two groups. Similarly, the levels of HIV RNA in the cerebrospinal fluid did not differ between the two groups being compared, and other neurological tests did not show any differences between the two groups. However, at 12 weeks, patients receiving abacavir were more likely to have low levels of HIV RNA in their blood, suggesting that abacavir was active against the virus, but this did not translate into an additional improvement of these patients' dementia. The overall rates of adverse events were roughly comparable between the two groups in the trial, although participants receiving abacavir were more likely to experience certain types of events, such as nausea.
Strengths and limitations: The trial was appropriately randomized and controlled, using central telephone procedures for randomizing participants and subsequent blinding of patients and trial investigators. These procedures help minimize the chance of bias in assigning participants to the different arms as well as in the subsequent performance of individuals within the trial and the assessment of their outcomes. Limitations in the study design have been identified. One limitation is that individuals enrolled into the trial may not in fact have been receiving their existing HAART regimen for long enough to experience its optimal effect, and therefore the improvement seen in both groups could have resulted from an ongoing response to their existing regimen. It is also possible that patients improved in their test scores over the course of the trial simply because they became more familiar with the tests and not because their condition improved. This is a problem in all such trials that try to improve mental function. Finally, a limitation may have been the inclusion of patients who did not have active disease leading to worsening dementia.
Contribution to the evidence: The findings from this trial suggest that adding high-dose abacavir to existing HAART is not beneficial for patients with ADC. However, the trial provides several insights into the way that future studies of this type can be done, and which typically pose a number of challenging design problems. In particular, sensitive markers are needed that will allow researchers to monitor progression of ADC and patients' response to therapy.
doi:10.1371/journal.pctr.0020013
PMCID: PMC1845158  PMID: 17401456
