1.  Blood volume-monitored regulation of ultrafiltration in fluid-overloaded hemodialysis patients: study protocol for a randomized controlled trial 
Trials  2012;13:79.
Background
Data generated with the body composition monitor (BCM, Fresenius) show, based on bioimpedance technology, that chronic fluid overload in hemodialysis patients is associated with poor survival. However, removing excess fluid by lowering dry weight can be accompanied by intradialytic and postdialytic complications. Here, we aim at testing the hypothesis that, in comparison to conventional hemodialysis, blood volume-monitored regulation of ultrafiltration and dialysate conductivity (UCR) and/or regulation of ultrafiltration and temperature (UTR) will decrease complications when ultrafiltration volumes are systematically increased in fluid-overloaded hemodialysis patients.
Methods/design
BCM measurements yield results on fluid overload (in liters), relative to extracellular water (ECW). In this prospective, multicenter, triple-arm, parallel-group, crossover, randomized, controlled clinical trial, we use BCM measurements, routinely introduced in our three maintenance hemodialysis centers shortly prior to the start of the study, to recruit sixty hemodialysis patients with fluid overload (defined as ≥15% ECW). Patients are randomized 1:1:1 into UCR, UTR and conventional hemodialysis groups. BCM-determined, ‘final’ dry weight is set to normohydration weight −7% of ECW postdialysis, and reached by reducing the previous dry weight, in steps of 0.1 kg per 10 kg body weight, during 12 hemodialysis sessions (one study phase). In case of intradialytic complications, dry weight reduction is decreased, according to a prespecified algorithm. A comparison of intra- and post-dialytic complications among study groups constitutes the primary endpoint. In addition, we will assess relative weight reduction, changes in residual renal function, quality of life measures, and predialysis levels of various laboratory parameters including C-reactive protein, troponin T, and N-terminal pro-B-type natriuretic peptide, before and after the first study phase (secondary outcome parameters).
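The dry-weight step-down described above is simple arithmetic. Below is a minimal Python sketch of one reading of that schedule; the function name and example figures are illustrative, it assumes 1 L of ECW corresponds to roughly 1 kg, and it does not model the complication-driven step adjustments that the protocol prespecifies.

```python
def dry_weight_schedule(previous_dw_kg, normohydration_wt_kg, ecw_liters,
                        body_weight_kg, sessions=12):
    """Illustrative step-down of dry weight toward the BCM-determined target.

    Target = normohydration weight - 7% of extracellular water (ECW);
    step size = 0.1 kg per 10 kg body weight per hemodialysis session.
    The trial's algorithm also relaxes steps after intradialytic
    complications, which is not modeled here.
    """
    target = normohydration_wt_kg - 0.07 * ecw_liters  # assume 1 L ECW ~ 1 kg
    step = 0.1 * body_weight_kg / 10.0                 # kg removed per session
    schedule, dw = [], previous_dw_kg
    for _ in range(sessions):
        dw = max(dw - step, target)  # never overshoot the target weight
        schedule.append(round(dw, 2))
    return schedule

# Hypothetical 80 kg patient: previous dry weight 80.0 kg,
# normohydration weight 78.0 kg, ECW 15 L
print(dry_weight_schedule(80.0, 78.0, 15.0, 80.0))
```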
Discussion
Patients are not requested to revert to their initial degree of fluid overload after each study phase. Therefore, the crossover design of the present study merely serves the purpose of secondary endpoint evaluation, for example to determine patient choice of treatment modality. Previous studies on blood volume monitoring have yielded inconsistent results. Since we include only patients with BCM-determined fluid overload, we expect a benefit for all study participants, due to strict fluid management, which decreases the mortality risk of hemodialysis patients.
Trial registration
ClinicalTrials.gov, NCT01416753
doi:10.1186/1745-6215-13-79
PMCID: PMC3493292  PMID: 22682149
Dialysis; Ultrafiltration; Renal dialysis; Fluid shifts; Blood volume; Multicenter study; Randomized controlled trials.
2.  Treg/Th17 imbalance is associated with cardiovascular complications in uremic patients undergoing maintenance hemodialysis 
Biomedical Reports  2013;1(3):413-419.
Investigations of Treg/Th17 imbalance associated with cardiovascular complications in hemodialysis are limited. The aim of this study was to examine the association between Treg/Th17 balance and cardiovascular comorbidity in maintenance hemodialysis (MHD). Uremic patients included in the present study were divided into three groups: the WHD group, comprising 30 patients with neither cardiovascular complications nor MHD; the MHD1 group, comprising 36 patients presenting with cardiovascular complications during MHD; and the MHD2 group, comprising 30 patients without cardiovascular complications during MHD. The control group comprised 20 healthy volunteers. Th17 and Treg cells were measured by fluorescence-activated cell sorting (FACS). IL-6 and IL-10 levels were determined by enzyme-linked immunosorbent assay (ELISA). Monocyte surface expression of the costimulatory molecules CD80 and CD86 was assessed by FACS after the monocytes were cocultured with Th17 or Treg cells in the presence or absence of IL-17. Results revealed that the percentage of Th17 cells among total CD4(+) cells was significantly higher in the MHD1 (36.27±9.62%) and WHD (35.98±8.85%) groups than in the MHD2 (19.64±5.97%) and healthy (1.12±1.52%) groups. Elevated IL-6 levels were observed in Th17 cells for the MHD1 and WHD groups, whereas a marked decrease was evident when IL-17 was blocked. However, CD80 and CD86 expression did not differ significantly with cardiovascular complications among the MHD groups, whereas expression in the uremic subgroups was significantly higher than in healthy controls. To the best of our knowledge, this is the first study to demonstrate that Treg/Th17 imbalance may be associated with the pathogenesis of cardiovascular complications in uremic patients undergoing hemodialysis, through B7-independent upregulation of IL-6 induced by IL-17.
doi:10.3892/br.2013.63
PMCID: PMC3917002  PMID: 24648960
cardiovascular; Th17 cell; regulatory T cell; inflammatory cytokine; costimulatory molecule
3.  Timing and Completeness of Trial Results Posted at ClinicalTrials.gov and Published in Journals 
PLoS Medicine  2013;10(12):e1001566.
Agnes Dechartres and colleagues searched ClinicalTrials.gov for completed drug RCTs with results reported and then searched for corresponding studies in PubMed to evaluate timeliness and completeness of reporting.
Please see later in the article for the Editors' Summary
Background
The US Food and Drug Administration Amendments Act requires results from clinical trials of Food and Drug Administration–approved drugs to be posted at ClinicalTrials.gov within 1 y after trial completion. We compared the timing and completeness of results of drug trials posted at ClinicalTrials.gov and published in journals.
Methods and Findings
We searched ClinicalTrials.gov on March 27, 2012, for randomized controlled trials of drugs with posted results. For a random sample of these trials, we searched PubMed for corresponding publications. Data were extracted independently from ClinicalTrials.gov and from the published articles for trials with results both posted and published. We assessed the time to first public posting or publishing of results and compared the completeness of results posted at ClinicalTrials.gov versus published in journal articles. Completeness was defined as the reporting of all key elements, according to three experts, for the flow of participants, efficacy results, adverse events, and serious adverse events (e.g., for adverse events, reporting of the number of adverse events per arm, without restriction to statistically significant differences between arms for all randomized patients or for those who received at least one treatment dose).
Of the 600 trials with results posted at ClinicalTrials.gov that we randomly sampled, 50% (n = 297) had no corresponding published article. For trials with both posted and published results (n = 202), the median time between primary completion date and first results publicly posted was 19 mo (first quartile = 14, third quartile = 30 mo), and the median time between primary completion date and journal publication was 21 mo (first quartile = 14, third quartile = 28 mo). Reporting was significantly more complete at ClinicalTrials.gov than in the published article for the flow of participants (64% versus 48% of trials, p<0.001), efficacy results (79% versus 69%, p = 0.02), adverse events (73% versus 45%, p<0.001), and serious adverse events (99% versus 63%, p<0.001).
The main study limitation was that we considered only the publication describing the results for the primary outcomes.
Conclusions
Our results highlight the need to search ClinicalTrials.gov for both unpublished and published trials. Trial results, especially serious adverse events, are more completely reported at ClinicalTrials.gov than in the published article.
Editors' Summary
Background
When patients consult a doctor, they expect to be recommended what their doctor believes is the most effective treatment with the fewest adverse effects. To determine which treatment to recommend, clinicians rely on sources that include research studies. Among studies, the best evidence is generally agreed to come from systematic reviews and randomized controlled clinical trials (RCTs), studies that test the efficacy and safety of medical interventions by comparing clinical outcomes in groups of patients randomly chosen to receive different interventions. Decision-making based on the best available evidence is called evidence-based medicine. However, evidence-based medicine can only guide clinicians if trial results are published in a timely and complete manner. Unfortunately, underreporting of trials is common. For example, an RCT in which a new drug performs better than existing drugs is more likely to be published than one in which the new drug performs badly or has unwanted adverse effects (publication bias). There can also be a delay in publishing the results of negative trials (time-lag bias) or a failure to publish complete results for all the prespecified outcomes of a trial (reporting bias). All three types of bias threaten informed medical decision-making and the health of patients.
Why Was This Study Done?
One initiative that aims to prevent these biases was included in the 2007 US Food and Drug Administration Amendments Act (FDAAA). The Food and Drug Administration (FDA) is responsible for approving drugs and devices that are marketed in the US. The FDAAA requires that results from clinical trials of FDA-approved drugs and devices conducted in the United States be made publicly available at ClinicalTrials.gov within one year of trial completion. ClinicalTrials.gov—a web-based registry that includes US and international clinical trials—was established in 2000 in response to the 1997 FDA Modernization Act, which required mandatory registration of trial titles and designs and of the conditions and interventions under study. The FDAAA expanded these mandatory requirements by requiring researchers studying FDA-approved drugs and devices to report additional information such as the baseline characteristics of the participants in each arm of the trial and the results of primary and secondary outcome measures (the effects of the intervention on predefined clinical measurements) and their statistical significance (an indication of whether differences in outcomes might have happened by chance). Researchers of other trials registered in ClinicalTrials.gov are welcome to post trial results as well. Here, the researchers compare the timing and completeness (i.e., whether all relevant information was fully reported) of results of drug trials posted at ClinicalTrials.gov with those published in medical journals.
What Did the Researchers Do and Find?
The researchers searched ClinicalTrials.gov for reports of completed phase III and IV (late-stage) RCTs of drugs with posted results. For a random sample of 600 eligible trials, they searched PubMed (a database of biomedical publications) for corresponding publications. Only 50% of trials with results posted at ClinicalTrials.gov had a matching published article. For 202 trials with both posted and published results, the researchers compared the timing and completeness of the results posted at ClinicalTrials.gov and of results reported in the corresponding journal publication. The median time between the study completion date and the first results being publicly posted at ClinicalTrials.gov was 19 months, whereas the time between completion and publication in a journal was 21 months. The flow of participants through trials was completely reported in 64% of the ClinicalTrials.gov postings but in only 48% of the corresponding publications. Results for the primary outcome measure were completely reported in 79% and 69% of the ClinicalTrials.gov postings and corresponding publications, respectively. Finally, adverse events were completely reported in 73% of the ClinicalTrials.gov postings but in only 45% of the corresponding publications, and serious adverse events were reported in 99% and 63% of the ClinicalTrials.gov postings and corresponding publications, respectively.
What Do These Findings Mean?
These findings suggest that the reporting of trial results is significantly more complete at ClinicalTrials.gov than in published journal articles reporting the main trial results. Certain aspects of this study may affect the accuracy of this conclusion. For example, the researchers compared the results posted at ClinicalTrials.gov only with the results in the publication that described the primary outcome of each trial, even though some trials had multiple publications. Importantly, these findings suggest that, to enable patients and physicians to make informed treatment decisions, experts undertaking assessments of drugs should consider seeking efficacy and safety data posted at ClinicalTrials.gov, both for trials whose results are not published yet and for trials whose results are published. Moreover, they suggest that the use of templates to guide standardized reporting of trial results in journals and broader mandatory posting of results may help to improve the reporting and transparency of clinical trials and, consequently, the evidence available to inform treatment of patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001566.
Wikipedia has pages on evidence-based medicine and on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The US Food and Drug Administration provides information about drug approval in the US for consumers and health-care professionals, plus detailed information on the 2007 Food and Drug Administration Amendments Act
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials, and a fact sheet detailing the requirements of the 2007 Food and Drug Administration Amendments Act
PLOS Medicine recently launched a Reporting Guidelines Collection, an open access collection of reporting guidelines, commentary, and related research on guidelines from across PLOS journals that aims to help advance the efficiency, effectiveness, and equitability of the dissemination of biomedical information; a 2008 PLOS Medicine editorial discusses the 2007 Food and Drug Administration Amendments Act
doi:10.1371/journal.pmed.1001566
PMCID: PMC3849189  PMID: 24311990
4.  Percutaneous Vertebroplasty for Treatment of Painful Osteoporotic Vertebral Compression Fractures 
Executive Summary
Objective of Analysis
The objective of this analysis is to examine the safety and effectiveness of percutaneous vertebroplasty for treatment of osteoporotic vertebral compression fractures (VCFs) compared with conservative treatment.
Clinical Need and Target Population
Osteoporosis and associated fractures are important health issues in ageing populations. Vertebral compression fracture secondary to osteoporosis is a cause of morbidity in older adults. VCFs can affect both genders, but are more common among elderly females and can occur as a result of a fall or a minor trauma. The fracture may occur spontaneously during a simple activity such as picking up an object or rising up from a chair. Pain originating from the fracture site frequently increases with weight bearing. It is most severe during the first few weeks and decreases with rest and inactivity.
Traditional treatment of painful VCFs includes bed rest, analgesic use, back bracing and muscle relaxants. The comorbidities associated with VCFs include deep venous thrombosis, acceleration of osteopenia, loss of height, respiratory problems and emotional problems due to chronic pain.
Percutaneous vertebroplasty is a minimally invasive surgical procedure that has gained popularity as a new treatment option in the care for these patients. The technique of vertebroplasty was initially developed in France to treat osteolytic metastasis, myeloma, and hemangioma. The indications were further expanded to painful osteoporotic VCFs and subsequently to treatment of asymptomatic VCFs.
The mechanism of pain relief, which occurs within minutes to hours after vertebroplasty, is still not known. Pain pathways in the surrounding tissue appear to be altered in response to mechanical, chemical, vascular, and thermal stimuli after the injection of the cement. It has been suggested that mechanisms other than mechanical stabilization of the fracture, such as thermal injury to the nerve endings, result in immediate pain relief.
Percutaneous Vertebroplasty
Percutaneous vertebroplasty is performed with the patient in prone position and under local or general anesthesia. The procedure involves fluoroscopic imaging to guide the injection of bone cement into the fractured vertebral body to support the fractured bone. After injection of the cement, the patient is placed in supine position for about 1 hour while the cement hardens.
Cement leakage is the most frequent complication of vertebroplasty. The leakages may remain asymptomatic or cause symptoms of nerve irritation through compression of nerve roots. There are several reports of pulmonary cement embolism (PCE) following vertebroplasty. In some cases, the PCE may remain asymptomatic. Symptomatic PCE can be recognized by clinical signs and symptoms such as chest pain, dyspnea, tachypnea, cyanosis, coughing, hemoptysis, dizziness, and sweating.
Research Methods
Literature Search
A literature search was performed on Feb 9, 2010 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Agency for Health Technology Assessment (INAHTA) for studies published from January 1, 2005 to February 9, 2010.
Studies were initially reviewed by titles and abstracts. For those studies meeting the eligibility criteria, full-text articles were obtained and reviewed. Reference lists were also examined for any additional relevant studies not identified through the search. Articles of unknown eligibility were reviewed with a second clinical epidemiologist, and then with a group of epidemiologists, until consensus was established. Data extraction was carried out by the author.
Inclusion Criteria
Study design: Randomized controlled trials (RCTs) comparing vertebroplasty with a control group or other interventions
Study population: Adult patients with osteoporotic vertebral fractures
Study sample size: Studies included 20 or more patients
English language full-reports
Published between January 1, 2005 and February 9, 2010
(eligible studies identified through the Auto Alert function of the search were also included)
Exclusion Criteria
Non-randomized studies
Studies on conditions other than VCF (e.g. patients with multiple myeloma or metastatic tumors)
Studies focused on surgical techniques
Studies lacking outcome measures
Results of Evidence-Based Analysis
A systematic search yielded 168 citations. The titles and the abstracts of the citations were reviewed and full text of the identified citations was retrieved for further consideration. Upon review of the full publications and applying the inclusion and exclusion criteria, 5 RCTs were identified. Of these, two compared vertebroplasty with sham procedure, two compared vertebroplasty with conservative treatment, and one compared vertebroplasty with balloon kyphoplasty.
Randomized Controlled Trials
Recently, the results of two blinded randomized placebo-controlled trials of percutaneous vertebroplasty were reported. These trials, providing the highest quality of evidence available to date, do not support the use of vertebroplasty in patients with painful osteoporotic vertebral compression fractures. Based on the results of these trials, vertebroplasty offers no additional benefit over usual care and is not risk free.
In these trials, patients and outcome assessors were blinded to treatment allocation. The control group received a sham procedure simulating vertebroplasty to minimize the effect of expectations and to reduce the potential for bias in self-reporting of outcomes. Both trials applied stringent exclusion criteria so that the results are generalizable to the patient populations that are candidates for vertebroplasty. In both trials vertebroplasty procedures were performed by highly skilled interventionists. Multiple valid outcome measures including pain, physical, mental, and social function were employed to test the between-group differences in outcomes.
Prior to these two trials, there were two open randomized trials in which vertebroplasty was compared with conservative medical treatment. In the first randomized trial, patients were allowed to cross over to the other arm, and the trial had to be stopped after two weeks owing to the high number of patients crossing over. The other study did not allow crossover and recently published the results of 12 months of follow-up.
The following is a summary of the results of these four trials:
Two blinded RCTs on vertebroplasty provide the highest level of evidence available to date. Results of these two trials are supported by findings of an open randomized trial with 12 months follow-up. Blinded RCTs showed:
No significant differences in pain scores of patients who received vertebroplasty and patients who received a sham procedure as measured at 3 days, 2 weeks and 1 month in one study and at 1 week, 1 month, 3 months, and 6 months in the other.
The observed differences in pain scores between the two groups were neither statistically significant nor clinically important at any time points.
The above findings were consistent with the findings of an open RCT in which patients were followed for 12 months. This study showed that improvement in pain was similar between the two groups at 3 months and was sustained to 12 months.
In the blinded RCTs, physical, mental, and social functioning were measured at the above time points using 4-5 of the following 7 instruments: RDQ, EQ-5D, SF-36 PCS, SF-36 MCS, AQoL, QUALEFFO, SOF-ADL
There were no significant differences in any of these measures between patients who received vertebroplasty and patients who received a sham procedure at any of the above time points (with a few exceptions in favour of control intervention).
These findings were also consistent with the findings of an open RCT which demonstrated no significant between-group differences in scores of EQ-5D, SF-36 PCS, SF-36 MCS, DPQ, Barthel, and MMSE, which measure physical, mental, and social functioning (with a few exceptions in favour of the control intervention).
One small (n=34) open RCT with a two-week follow-up detected a significantly higher improvement in pain scores at 1 day after the intervention in the vertebroplasty group compared with the conservative treatment group. However, at the 2-week follow-up, this difference was smaller and was not statistically significant.
Conservative treatment was associated with fewer clinically important complications.
Risk of new VCFs following vertebroplasty was higher than with conservative treatment, but this requires further investigation.
PMCID: PMC3377535  PMID: 23074396
5.  Volume Expansion with Albumin Compared to Gelofusine in Children with Severe Malaria: Results of a Controlled Trial  
PLoS Clinical Trials  2006;1(5):e21.
Objectives:
Previous studies have shown that in children with severe malaria, resuscitation with albumin infusion results in a lower mortality than resuscitation with saline infusion. Whether the apparent benefit of albumin is due solely to its colloidal properties, and thus might also be achieved with other synthetic colloids, or due to the many other unique physiological properties of albumin is unknown. As albumin is costly and not readily available in Africa, examination of more affordable colloids is warranted. In order to inform the design of definitive phase III trials we compared volume expansion with Gelofusine (succinylated modified fluid gelatin 4% intravenous infusion) with albumin.
Design:
This study was a phase II safety and efficacy study.
Setting:
The study was conducted at Kilifi District Hospital, Kenya.
Participants:
The participants were children admitted with severe falciparum malaria (impaired consciousness or deep breathing), metabolic acidosis (base deficit > 8 mmol/l), and clinical features of shock.
Interventions:
The interventions were volume resuscitation with either 4.5% human albumin solution or Gelofusine.
Outcome Measures:
Primary endpoints were the resolution of shock and acidosis; secondary endpoints were in-hospital mortality and adverse events including neurological sequelae.
Results:
A total of 88 children were enrolled: 44 received Gelofusine and 44 received albumin. There was no significant difference in the resolution of shock or acidosis between the groups. Whilst no participant developed pulmonary oedema or fluid overload, fatal neurological events were more common in the group receiving gelatin-based intervention fluids. Mortality was lower in patients receiving albumin (1/44; 2.3%) than in those treated with Gelofusine (7/44; 16%) by intention to treat (Fisher's exact test, p = 0.06), or 1/40 (2.5%) and 4/40 (10%), respectively, for those treated per protocol (p = 0.36). Meta-analysis of published trials to provide a summary estimate of the effect of albumin on mortality showed a pooled relative risk of death with albumin administration of 0.19 (95% confidence interval 0.06–0.59; p = 0.004 compared to other fluid boluses).
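The headline mortality contrast (1/44 deaths with albumin versus 7/44 with Gelofusine) can be checked with a standard Fisher's exact test. A minimal sketch using scipy, assuming the 2×2 counts exactly as given in the abstract:

```python
from scipy.stats import fisher_exact

# 2x2 table from the abstract: rows = fluid, columns = [died, survived]
table = [[1, 43],   # albumin: 1 death among 44 children
         [7, 37]]   # Gelofusine: 7 deaths among 44 children
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.3f}, two-sided p = {p_value:.3f}")
# Reproduces the reported intention-to-treat comparison, p = 0.06
```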
Conclusions:
In children with severe malaria, we have shown a consistent survival benefit of receiving albumin infusion compared to other resuscitation fluids, despite comparable effects on the resolution of acidosis and shock. The lack of similar mortality benefit from Gelofusine suggests that the mechanism may involve a specific neuroprotective effect of albumin, rather than solely the effect of the administered colloid. Further exploration of the benefits of albumin is warranted in larger clinical trials.
Editorial Commentary
Background: In Africa, children admitted to hospital with severe malaria are at high risk of death even though effective malaria treatment is available. Death typically occurs during a narrow time window after admission and before antimalarial treatments can start working. Acidosis (excessive acidity of the blood) is thought to predict death, but it is not clear how acidosis arises. One possibility is that hypovolemia (lowered blood fluid volume) is important, which would normally require urgent resuscitation with fluids. However, there is little evidence on what type of fluid should be given. In the trial reported here, carried out in Kenya's Kilifi District Hospital between 2004 and 2006, 88 children admitted with severe malaria were assigned to receive either albumin solution (a colloid solution made from blood protein) or Gelofusine (a synthetic colloid). The primary outcomes that the researchers were interested in were correction of shock and acidosis in the blood after 8 h. However, the researchers also looked at death rate in hospital and adverse events after treatment.
What this trial shows: The investigators found no significant differences in the primary outcomes (correction of shock and acidosis in the blood 8 h after fluids were started) between children given Gelofusine and those given albumin. However, they did see a difference in death rates between children given Gelofusine and those given albumin. Death rates in hospital were lower in the group given albumin, and this was statistically significant. The researchers then combined the data on death rates from this trial with data from two other trials with an albumin arm. This combined analysis also supported the suggestion that death rates with albumin were lower than with other fluids, either Gelofusine or salt solution.
Strengths and limitations: There is currently very little evidence from trials to guide the initial management of fluids in children with severe malaria. The results from this trial indicate that further research is a priority. However, the actual findings from this trial must be tested in larger trials that recruit enough children to establish reliably whether there is a difference in death rate between albumin treatment and treatment with other fluids. This trial was not originally planned to find a clinically relevant difference in death rate, and therefore does not definitively answer that question. Further trials would also need to use a random method to assign participants to the different treatments, rather than alternate blocks (as in this trial). A random method ensures greater comparability of the two groups in the trial, and reduces the chance of selection bias (where assignment of patients to different treatments can be distorted during the enrollment process).
Contribution to the evidence: This study adds data suggesting that fluid resuscitation with albumin solution, as compared to Gelofusine, may reduce the chance of death in children with severe malaria. However, this finding is not definitive and would need to be examined in further carefully controlled trials. If the finding is supported by further research, then a solution to the problems of high cost and limited availability of albumin will need to be found.
doi:10.1371/journal.pctr.0010021
PMCID: PMC1569382  PMID: 16998584
6.  Short-Term Efficacy of Rofecoxib and Diclofenac in Acute Shoulder Pain: A Placebo-Controlled Randomized Trial 
PLoS Clinical Trials  2007;2(3):e9.
Objectives:
To evaluate the short-term symptomatic efficacy of rofecoxib and diclofenac versus placebo in acute episodes of shoulder pain.
Design:
Randomized controlled trial of 7 days.
Setting:
Forty-seven rheumatologists and/or general practitioners.
Participants:
Patients with acute shoulder pain.
Interventions:
Rofecoxib 50 mg once daily, diclofenac 50 mg three times daily, and placebo.
Outcome measures:
Pain, functional impairment, patient's global assessment of his/her disease activity, and local steroid injection requirement for persistent pain. The primary outcome was the Kaplan-Meier estimate of the percentage of patients fulfilling the definition of success at day 7 (improvement in pain intensity and a low pain level sustained to the end of the 7 days of the study; log-rank test).
Results:
There was no difference in baseline characteristics between the three groups (rofecoxib n = 88, placebo n = 94, and diclofenac n = 89). At day 7, the Kaplan-Meier estimate of successful patients was higher in the treatment groups than in the placebo group (54%, 56%, and 38% in the diclofenac, rofecoxib, and placebo groups, respectively; p = 0.0070 and p = 0.0239 for placebo versus rofecoxib and diclofenac, respectively). During the 7 days of the study, there was a statistically significant difference between placebo and both active arms (rofecoxib and diclofenac) in all the evaluated outcome measures. A local steroid injection had to be performed in 33 (35%) and 19 (22%) patients in the placebo and rofecoxib groups, respectively. The number needed to treat to avoid such rescue therapy was 7 patients (95% confidence interval 5–15).
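The number needed to treat quoted above follows directly from the two rescue-injection rates; a quick arithmetic check in Python, using the counts reported in the abstract (the confidence interval itself requires a further calculation not shown here):

```python
# Rescue steroid injections: 33/94 on placebo vs 19/88 on rofecoxib
risk_placebo = 33 / 94    # ~0.351
risk_rofecoxib = 19 / 88  # ~0.216

arr = risk_placebo - risk_rofecoxib  # absolute risk reduction, ~0.135
nnt = 1 / arr                        # ~7.4, reported as 7 (95% CI 5-15)
print(f"ARR = {arr:.3f}, NNT = {nnt:.1f}")
```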
Conclusion:
This study highlights the methodological aspects of clinical trials, e.g., eligibility criteria and outcome measures, in acute painful conditions. The data also establish that diclofenac and rofecoxib are effective therapies for the management of acute painful shoulder and that they reduce the requirement for local steroid injection.
Editorial Commentary
Background: Shoulder pain is a very common complaint that presents in primary care, and there are many different possible causes. Acute pain would normally be managed with nonsteroidal anti-inflammatory drugs (NSAIDs), supplemented with steroid injections (which are often reserved for the treatment of severe or persistent pain). One NSAID, diclofenac, is used frequently for this condition, but other NSAIDs might also be effective. A subgroup of NSAIDs called the Cox-2 selective inhibitors specifically inhibit one particular enzyme (cyclo-oxygenase, shortened to Cox-2) which is involved in inflammation and pain. These drugs are thought to be less likely to cause stomach irritation than other NSAIDs. Therefore the researchers in this study carried out a short-term, three-way clinical trial comparing diclofenac with one particular Cox-2 inhibitor, rofecoxib, and placebo in patients with acute shoulder pain. However, rofecoxib was withdrawn from the market in September 2004 because of evidence that use of the drug was associated with an increased risk of heart attacks and strokes, and controversy remains regarding the risk of such events among users of other Cox-2 inhibitors.
What this trial shows: The main aim of this trial was to compare the level of pain relief over seven days of treatment with either diclofenac or rofecoxib, as compared to placebo. The primary outcome measure used in the trial was the proportion of patients achieving a 50% or greater decrease in pain levels over the course of the study, measured using a numerical rating scale. A total of 273 participants were recruited into the trial and at day 7 the proportion achieving a 30% decrease in pain was 38% in the placebo arm, 54% in the diclofenac arm, and 56% in the rofecoxib arm. The differences in this outcome measure between diclofenac and placebo and between rofecoxib and placebo were statistically significant; however, the researchers did not carry out a direct comparison between diclofenac and rofecoxib. The rates of adverse events were roughly comparable between all three arms of the trial, although the study was not originally planned to be large enough to detect differences in the rates of such events, so it is not possible to conclude whether there was any true difference.
Strengths and limitations: The randomization procedures used in the study minimize the possibility of bias in assigning patients to treatment arms. Bias in assessment of outcomes was also minimized by ensuring that steps were taken to prevent investigators and patients from knowing which drugs a particular patient received until the end of the trial. A key limitation of the study is the short follow-up, only seven days, and it is therefore unclear whether efficacy and safety of these drugs would continue for the much longer periods of time (weeks or even months) for which these patients might need pain relief. Finally, patients randomized to the placebo arm received no treatment for the seven days of the study other than acetaminophen or steroid injections (which would result in withdrawal from the trial). This design does not limit interpretation of the data but could be criticized because of concern over whether the patients receiving placebo received adequate pain relief.
Contribution to the evidence: This study provides some data on the efficacy of diclofenac and rofecoxib, as compared to placebo in treatment of this condition. Given that rofecoxib is now withdrawn, the efficacy of this drug is no longer relevant. However, the information from this trial should help in designing future studies of NSAIDs in shoulder pain, for example to define appropriate trial outcomes, sample size, and other aspects of study design.
doi:10.1371/journal.pctr.0020009
PMCID: PMC1817652  PMID: 17347681
7.  Anti-Inflammatory and Anti-Oxidative Nutrition in Hypoalbuminemic Dialysis Patients (AIONID) study: results of the pilot-feasibility, double-blind, randomized, placebo-controlled trial 
Background
Low serum albumin is common and associated with protein-energy wasting, inflammation, and poor outcomes in maintenance hemodialysis (MHD) patients. We hypothesized that in-center (in dialysis clinic) provision of high-protein oral nutrition supplements (ONS) tailored for MHD patients combined with anti-oxidants and anti-inflammatory ingredients with or without an anti-inflammatory appetite stimulator (pentoxifylline, PTX) is well tolerated and can improve serum albumin concentration.
Methods
Between January 2008 and June 2010, 84 adult hypoalbuminemic (albumin <4.0 g/dL) MHD outpatients were double-blindly randomized to receive 16 weeks of interventions including ONS, PTX, ONS with PTX, or placebos. Nutritional and inflammatory markers were compared between the four groups.
Results
Out of 84 subjects (mean ± SD: age, 59 ± 12 years; dialysis vintage, 34 ± 34 months), 32% were Black, 54% were female, and 68% were diabetic. ONS, PTX, ONS plus PTX, and placebo were associated with an average change in serum albumin of +0.21 (P = 0.004), +0.14 (P = 0.008), +0.18 (P = 0.001), and +0.03 g/dL (P = 0.59), respectively. No related serious adverse events were observed. In a predetermined intention-to-treat regression analysis modeling post-trial serum albumin as a function of pre-trial albumin and the three different interventions (reference = placebo), only ONS without PTX was associated with a significant albumin rise (+0.17 ± 0.07 g/dL, P = 0.018).
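The prespecified model described above is a standard ANCOVA-style regression: post-trial albumin on pre-trial albumin plus treatment arm, with placebo as the reference level. A hedged sketch using the statsmodels formula API; the file and column names are hypothetical, not from the trial:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-subject data with columns: albumin_post, albumin_pre (g/dL)
# and arm in {"placebo", "ONS", "PTX", "ONS+PTX"}
df = pd.read_csv("aionid_subjects.csv")  # hypothetical file name

# Placebo as the reference category, mirroring the abstract's "ref = placebo"
model = smf.ols(
    "albumin_post ~ albumin_pre + C(arm, Treatment(reference='placebo'))",
    data=df,
).fit()
print(model.summary())  # arm coefficients estimate albumin change vs placebo
```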
Conclusions
In this pilot-feasibility, 2 × 2 factorial, placebo-controlled trial, daily intake of a CKD-specific high-protein ONS with anti-inflammatory and anti-oxidative ingredients for up to 16 weeks was well tolerated and associated with slight but significant increase in serum albumin levels. Larger long-term controlled trials to examine hard outcomes are indicated.
Electronic supplementary material
The online version of this article (doi:10.1007/s13539-013-0115-9) contains supplementary material.
doi:10.1007/s13539-013-0115-9
PMCID: PMC3830006  PMID: 24052226
Albumin; Hypoalbuminemia; Inflammation; Protein intake; Hemodialysis; Oral nutrition supplements; Anti-oxidant ingredients; Anti-inflammatory ingredients
8.  Intensive Case Management Before and After Prison Release is No More Effective Than Comprehensive Pre-Release Discharge Planning in Linking HIV-Infected Prisoners to Care: A Randomized Trial 
AIDS and behavior  2011;15(2):356-364.
Imprisonment provides opportunities for the diagnosis and successful treatment of HIV; however, the benefits of antiretroviral therapy are frequently lost following release due to suboptimal access and utilization of health care and services. In response, some have advocated the development of intensive case-management interventions spanning incarceration and release to support treatment adherence and community re-entry for HIV-infected releasees. We conducted a randomized controlled trial of a motivational Strengths Model bridging case management intervention (BCM), beginning approximately 3 months before and continuing 6 months after release, versus a standard-of-care prison-administered discharge planning program (SOC) for HIV-infected state prison inmates. The primary outcome variable was self-reported access to post-release medical care. Of the 104 inmates enrolled, 89 had at least 1 post-release study visit. Of these, 65.1% of BCM- and 54.4% of SOC-assigned participants attended a routine medical appointment within 4 weeks of release (P > 0.3). By week 12 post-release, 88.4% of the BCM arm and 78.3% of the SOC arm had attended at least one medical appointment (P = 0.2), increasing in both arms by week 24 (90.7% with BCM and 89.1% with SOC; P > 0.5). No participant without a routine medical visit by week 24 attended an appointment from weeks 24 to 48. The mean number of clinic visits during the 48 weeks post-release was 5.23 (SD = 3.14) for BCM and 4.07 (SD = 3.20) for SOC (P > 0.5). There were no significant differences between arms in social service utilization, and re-incarceration rates were also similar. We found that a case management intervention bridging incarceration and release was no more effective than a less intensive pre-release discharge planning program in supporting health and social service utilization for HIV-infected individuals released from prison.
doi:10.1007/s10461-010-9843-4
PMCID: PMC3532052  PMID: 21042930
Prisoners; Access to care; Case management
9.  Factors Associated with Findings of Published Trials of Drug–Drug Comparisons: Why Some Statins Appear More Efficacious than Others 
PLoS Medicine  2007;4(6):e184.
Background
Published pharmaceutical industry–sponsored trials are more likely than non-industry-sponsored trials to report results and conclusions that favor drug over placebo. Little is known about potential biases in drug–drug comparisons. This study examined associations between research funding source, study design characteristics aimed at reducing bias, and other factors that potentially influence results and conclusions in randomized controlled trials (RCTs) of statin–drug comparisons.
Methods and Findings
This is a cross-sectional study of 192 published RCTs comparing a statin drug to another statin drug or non-statin drug. Data on concealment of allocation, selection bias, blinding, sample size, disclosed funding source, financial ties of authors, results for primary outcomes, and author conclusions were extracted by two coders (weighted kappa 0.80 to 0.97). Univariate and multivariate logistic regression identified associations between independent variables and favorable results and conclusions. Of the RCTs, 50% (95/192) were funded by industry, and 37% (70/192) did not disclose any funding source. Looking at the totality of available evidence, we found that almost all studies (98%, 189/192) used only surrogate outcome measures. Moreover, study design weaknesses common to published statin–drug comparisons included inadequate blinding, lack of concealment of allocation, poor follow-up, and lack of intention-to-treat analyses. In multivariate analysis of the full sample, trials with adequate blinding were less likely to report results favoring the test drug, and sample size was associated with favorable conclusions when controlling for other factors. In multivariate analysis of industry-funded RCTs, funding from the test drug company was associated with results (odds ratio = 20.16 [95% confidence interval 4.37–92.98], p < 0.001) and conclusions (odds ratio = 34.55 [95% confidence interval 7.09–168.4], p < 0.001) that favor the test drug when controlling for other factors. Studies with adequate blinding were less likely to report statistically significant results favoring the test drug.
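For readers unfamiliar with the method, the multivariate analysis described above amounts to a logistic regression of a binary "favorable result" indicator on funding and design covariates, with exponentiated coefficients read as odds ratios. A minimal sketch with hypothetical file and variable names, not the authors' actual code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-trial data: favorable (0/1), test_company_funded (0/1),
# adequate_blinding (0/1), log_sample_size (numeric)
trials = pd.read_csv("statin_trials.csv")  # hypothetical file name

logit = smf.logit(
    "favorable ~ test_company_funded + adequate_blinding + log_sample_size",
    data=trials,
).fit()
print(np.exp(logit.params))  # exponentiated coefficients = odds ratios
```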
Conclusions
RCTs of head-to-head comparisons of statins with other drugs are more likely to report results and conclusions favoring the sponsor's product compared to the comparator drug. This bias in drug–drug comparison trials should be considered when making decisions regarding drug choice.
Lisa Bero and colleagues found published trials comparing one statin with another were more likely to report results and conclusions favoring the sponsor's product than the comparison drug.
Editors' Summary
Background.
Randomized controlled trials are generally considered to be the most reliable type of experimental study for evaluating the effectiveness of different treatments. Randomization involves the assignment of participants in the trial to different treatment groups by the play of chance. Properly done, this procedure means that the different groups are comparable at the outset, reducing the chance that outside factors could be responsible for treatment effects seen in the trial. When done properly, randomization also ensures that the clinicians recruiting participants into the trial cannot know the treatment group to which a patient will end up being assigned. However, despite these advantages, a large number of factors can still result in bias creeping in. Bias comes about when the findings of research appear to differ in some systematic way from the true result. Other research studies have suggested that funding is a source of bias; studies sponsored by drug companies seem to more often favor the sponsor's drug than trials not sponsored by drug companies.
Why Was This Study Done?
The researchers wanted to more precisely understand the impact of different possible sources of bias in the findings of randomized controlled trials. In particular, they wanted to study the outcomes of “head-to-head” drug comparison studies for one particular class of drugs, the statins. Drugs in this class are commonly prescribed to reduce the levels of cholesterol in blood amongst people who are at risk of heart and other types of disease. This drug class is a good example for studying the role of bias in drug–drug comparison trials, because these trials are extensively used in decision making by health-policy makers.
What Did the Researchers Do and Find?
This research study was based on searching PubMed, a biomedical literature database, with the aim of finding all randomized controlled trials of statins carried out between January 1999 and May 2005 (reference lists also were searched). Only trials which compared one statin to another statin or one statin to another type of drug were included. The researchers extracted the following information from each article: the study's source of funding, aspects of study design, the overall results, and the authors' conclusions. The results were categorized to show whether the findings were favorable to the test drug (the newer statin), inconclusive, or not favorable to the test drug. Aspects of each study's design were also categorized in relation to various features, such as how well the randomization was done (in particular, the degree to which the processes used would have prevented physicians from knowing which treatment a patient was likely to receive on enrollment); whether all participants enrolled in the trial were eventually analyzed; and whether investigators or participants knew what treatment an individual was receiving.
One hundred and ninety-two trials were included in this study; of these, 95 declared drug company funding, 23 declared government or other nonprofit funding, and 74 did not declare funding or were not funded. Trials that were properly blinded (where participants and investigators did not know what treatment an individual received) were less likely to have conclusions favoring the test drug. However, large trials were more likely to favor the test drug than smaller trials. When looking specifically at the trials funded by drug companies, the researchers found various factors that predicted whether a result or conclusion favored the test drug. These included the impact of the journal publishing the results; the size of the trial; and whether funding came from the maker of the test drug. However, properly blinded trials were less likely to produce results favoring the test drug. Even once all other factors were accounted for, the funding source for the study was still linked with results and conclusions that favored the maker of the test drug.
What Do These Findings Mean?
This study shows that the type of sponsorship available for randomized controlled trials of statins was strongly linked to the results and conclusions of those studies, even when other factors were taken into account. However, it is not clear from this study why sponsorship has such a strong link to the overall findings. There are many possible reasons why this might be. Some people have suggested that drug companies may deliberately choose lower dosages for the comparison drug when they carry out “head-to-head” trials; this tactic is likely to result in the company's product doing better in the trial. Others have suggested that trials which produce unfavorable results are not published, or that unfavorable outcomes are suppressed. Whatever the reasons for these findings, the implications are important, and suggest that the evidence base relating to statins may be substantially biased.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040184.
The James Lind Library has been created to help people understand fair tests of treatments in health care by illustrating how fair tests have developed over the centuries
The International Committee of Medical Journal Editors has provided guidance regarding sponsorship, authorship, and accountability
The CONSORT statement is a research tool that provides an evidence-based approach for reporting the results of randomized controlled trials
Good Publication Practice guidelines provide standards for responsible publication of research sponsored by pharmaceutical companies
Information from Wikipedia on Statins. Wikipedia is an internet encyclopedia anyone can edit
doi:10.1371/journal.pmed.0040184
PMCID: PMC1885451  PMID: 17550302
10.  Advanced Electrophysiologic Mapping Systems 
Executive Summary
Objective
To assess the effectiveness, cost-effectiveness, and demand in Ontario for catheter ablation of complex arrhythmias guided by advanced nonfluoroscopy mapping systems. Particular attention was paid to ablation for atrial fibrillation (AF).
Clinical Need
Tachycardia
Tachycardia refers to a diverse group of arrhythmias characterized by heart rates that are greater than 100 beats per minute. It results from abnormal firing of electrical impulses from heart tissues or abnormal electrical pathways in the heart because of scars. Tachycardia may be asymptomatic, or it may adversely affect quality of life owing to symptoms such as palpitations, headaches, shortness of breath, weakness, dizziness, and syncope. Atrial fibrillation, the most common sustained arrhythmia, affects about 99,000 people in Ontario. It is associated with higher morbidity and mortality because of increased risk of stroke, embolism, and congestive heart failure. In atrial fibrillation, most of the abnormal arrhythmogenic foci are located inside the pulmonary veins, although the atrium may also be responsible for triggering or perpetuating atrial fibrillation. Ventricular tachycardia, often found in patients with ischemic heart disease and a history of myocardial infarction, is often life-threatening; it accounts for about 50% of sudden deaths.
Treatment of Tachycardia
The first line of treatment for tachycardia is antiarrhythmic drugs; for atrial fibrillation, anticoagulation drugs are also used to prevent stroke. For patients refractory to or unable to tolerate antiarrhythmic drugs, ablation of the arrhythmogenic heart tissues is the only option. Surgical ablation such as the Cox-Maze procedure is more invasive. Catheter ablation, involving the delivery of energy (most commonly radiofrequency) via a percutaneous catheter system guided by X-ray fluoroscopy, has been used in place of surgical ablation for many patients. However, this conventional approach in catheter ablation has not been found to be effective for the treatment of complex arrhythmias such as chronic atrial fibrillation or ventricular tachycardia. Advanced nonfluoroscopic mapping systems have been developed for guiding the ablation of these complex arrhythmias.
The Technology
Four nonfluoroscopic advanced mapping systems have been licensed by Health Canada:
CARTO EP mapping System (manufactured by Biosense Webster, CA) uses weak magnetic fields and a special mapping/ablation catheter with a magnetic sensor to locate the catheter and reconstruct a 3-dimensional geometry of the heart superimposed with colour-coded electric potential maps to guide ablation.
EnSite System (manufactured by Endocardial Solutions Inc., MN) includes a multi-electrode non-contact catheter that conducts simultaneous mapping. A processing unit uses the electrical data to compute more than 3,000 isopotential electrograms, which are displayed on a reconstructed 3-dimensional geometry of the heart chamber. The navigational system, EnSite NavX, can be used separately with most mapping catheters.
The LocaLisa Intracardiac System (manufactured by Medtronic Inc., MN) is a navigational system that uses an electrical field to locate the mapping catheter. It reconstructs the location of the electrodes on the mapping catheter in 3-dimensional virtual space, thereby enabling an ablation catheter to be directed to the electrode that identifies abnormal electric potential.
Polar Constellation Advanced Mapping Catheter System (manufactured by Boston Scientific, MA) is a multielectrode basket catheter with 64 electrodes on 8 splines. Once deployed, each electrode is automatically traced. The information enables a 3-dimensional model of the basket catheter to be computed. Colour-coded activation maps are reconstructed online and displayed on a monitor. By using this catheter, a precise electrical map of the atrium can be obtained in several heartbeats.
Review Strategy
A systematic search of Cochrane, MEDLINE and EMBASE was conducted to identify studies that compared ablation guided by any of the advanced systems to fluoroscopy-guided ablation of tachycardia. English-language studies with sample sizes greater than or equal to 20 that were published between 2000 and 2005 were included. Observational studies on safety of advanced mapping systems and fluoroscopy were also included. Outcomes of interest were acute success, defined as termination of arrhythmia immediately following ablation; long-term success, defined as being arrhythmia free at follow-up; total procedure time; fluoroscopy time; radiation dose; number of radiofrequency pulses; complications; cost; and the cost-effectiveness ratio.
Quality of the individual studies was assessed using established criteria. Quality of the overall evidence was determined by applying the GRADE evaluation system. (3) Qualitative synthesis of the data was performed. Quantitative analysis using Revman 4.2 was performed when appropriate.
Quality of the Studies
Thirty-four studies met the inclusion criteria. These comprised 18 studies on CARTO (4 randomized controlled trials [RCTs] and 14 non-RCTs), 3 RCTs on EnSite NavX, 4 studies on LocaLisa Navigational System (1 RCT and 3 non-RCTs), 2 studies on EnSite and CARTO, 1 on Polar Constellation basket catheter, and 7 studies on radiation safety.
The quality of the studies ranged from moderate to low. Most of the studies had small sample sizes with selection bias, and there was no blinding of patients or care providers in any of the studies. Duration of follow-up ranged from 6 weeks to 29 months, with most having at least 6 months of follow-up. There was heterogeneity with respect to the approach to ablation, definition of success, and drug management before and after the ablation procedure.
Summary of Findings
Evidence is based on a small number of small RCTs and non-RCTs with methodological flaws.
Advanced nonfluoroscopy mapping/navigation systems provided real-time 3-dimensional images integrating anatomic and electrical potential information, enabling better visualization of areas of interest for ablation.
Advanced nonfluoroscopy mapping/navigation systems appear to be safe; they consistently shortened the fluoroscopy duration and radiation exposure.
Evidence suggests that nonfluoroscopy mapping and navigation systems may be used as adjuncts to rather than replacements for fluoroscopy in guiding the ablation of complex arrhythmias.
Most studies showed a nonsignificant trend toward lower overall failure rate for advanced mapping-guided ablation compared with fluoroscopy-guided mapping.
Pooled analyses of small RCTs and non-RCTs that compared fluoroscopy- with nonfluoroscopy-guided ablation of atrial fibrillation and atrial flutter showed that advanced nonfluoroscopy mapping and navigational systems:
Yielded acute success rates of 69% to 100%, not significantly different from fluoroscopy ablation.
Had overall failure rates at 3 months to 19 months of 1% to 40% (median 25%).
Resulted in a 10% relative reduction in overall failure rate for advanced mapping-guided ablation compared with fluoroscopy-guided ablation for the treatment of atrial fibrillation.
Yielded added benefit over fluoroscopy in guiding the ablation of complex arrhythmia. The advanced systems were shown to reduce the arrhythmia burden and the need for antiarrhythmic drugs in patients with complex arrhythmia who had failed fluoroscopy-guided ablation.
Based on predominantly observational studies, circumferential PV ablation guided by a nonfluoroscopy system was shown to do the following:
Result in freedom from atrial fibrillation (with or without antiarrhythmic drug) in 75% to 95% of patients (median 79%). This effect was maintained up to 28 months.
Result in freedom from atrial fibrillation without antiarrhythmic drugs in 47% to 95% of patients (median 63%).
Improve patient survival at 28 months after the procedure as compared with drug therapy.
Require special skills; patient outcomes are operator dependent, and there is a significant learning curve effect.
Complication rates of pulmonary vein ablation guided by an advanced mapping/navigation system ranged from 0% to 10% with a median of 6% during a follow-up period of 6 months to 29 months.
The complication rate of the study with the longest follow-up was 8%.
The most common complications of advanced catheter-guided ablation were stroke, transient ischemic attack, cardiac tamponade, myocardial infarction, atrial flutter, congestive heart failure, and pulmonary vein stenosis. A small number of cases with fatal atrial-esophageal fistula had been reported and were attributed to the high radiofrequency energy used rather than to the advanced mapping systems.
Economic Analysis
An Ontario-based economic analysis suggests that the cumulative incremental upfront costs of catheter ablation of atrial fibrillation guided by advanced nonfluoroscopy mapping could be recouped in 4.7 years through cost avoidance arising from less need for antiarrhythmic drugs and fewer hospitalizations for stroke and heart failure.
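The payback logic reduces to simple cost-recovery arithmetic, sketched below; the dollar figures are assumptions chosen only to illustrate the calculation, not the Secretariat's actual inputs.

```python
# Payback-period sketch for the economic analysis above.
# All dollar figures are illustrative assumptions, not study data.
incremental_upfront_cost = 14_100  # extra upfront cost of advanced-mapping ablation ($, assumed)
annual_drug_savings = 1_500        # avoided antiarrhythmic drug costs per year ($, assumed)
annual_admission_savings = 1_500   # avoided stroke/heart-failure admissions per year ($, assumed)

payback_years = incremental_upfront_cost / (annual_drug_savings + annual_admission_savings)
print(f"Payback period: {payback_years:.1f} years")  # 4.7 years
```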
Expert Opinion
Expert consultants to the Medical Advisory Secretariat noted the following:
Nonfluoroscopy mapping is not necessary for simple ablation procedures (e.g., typical flutter). However, it is essential in the ablation of complex arrhythmias including these:
Symptomatic, drug-refractory atrial fibrillation
Arrhythmias in people who have had surgery for congenital heart disease (e.g., macro re-entrant atrial tachycardia).
Ventricular tachycardia due to myocardial infarction
Atypical atrial flutter
Advanced mapping systems represent an enabling technology in the ablation of complex arrhythmias. The ablation of these complex cases would not have been feasible or advisable with fluoroscopy-guided ablation and, therefore, comparative studies would not be feasible or ethical in such cases.
Many of the studies included patients with relatively simple arrhythmias (e.g., typical atrial flutter and atrial ventricular nodal re-entrant tachycardia), for which the success rates using the fluoroscopy approach were extremely high and unlikely to be improved upon using nonfluoroscopic mapping.
By age 50, almost 100% of people who have had surgery for congenital heart disease will develop arrhythmia.
Some centres face greater demand because of their expertise in complex ablation procedures for particular subsets of patients.
The use of advanced mapping systems requires the support of additional electrophysiologic laboratory time and nursing time.
Conclusions
For patients who suffer from symptomatic, drug-refractory atrial fibrillation and are otherwise healthy, catheter ablation offers a treatment option that is less invasive than open surgical ablation.
Small RCTs that may have been limited by type II errors showed significant reductions in fluoroscopy exposure with nonfluoroscopy-guided ablation and a trend toward a lower overall failure rate that did not reach statistical significance.
Pooled analysis suggests that advanced mapping systems may reduce the overall failure rate in the ablation of atrial fibrillation.
Observational studies suggest that ablation guided by complex mapping/navigation systems is a promising treatment for complex arrhythmias such as highly symptomatic, drug-refractory atrial fibrillation for which rate control is not an option.
In people with atrial fibrillation, ablation guided by advanced nonfluoroscopy mapping resulted in arrhythmia-free rates of 80% or higher, reduced mortality, and better quality of life at experienced centres.
Although generally safe, serious complications such as stroke, atrial-esophageal fistula, and pulmonary vein stenosis have been reported following ablation procedures.
Experts advised that advanced mapping systems are also required for catheter ablation of:
Hemodynamically unstable ventricular tachycardia from ischemic heart disease
Macro re-entrant atrial tachycardia after surgical correction of congenital heart disease
Atypical atrial flutter
Catheter ablation of atrial fibrillation is still evolving, and it appears that different ablative techniques may be appropriate depending on the characteristics of the patient and the atrial fibrillation.
Data from centres that perform electrophysiological mapping suggest that patients with drug-refractory atrial fibrillation may be the largest group with unmet need for advanced mapping-guided catheter ablation in Ontario.
Nonfluoroscopy mapping-guided pulmonary vein ablation for the treatment of atrial fibrillation has a significant learning curve effect; therefore, it is advisable for the province to establish centres of excellence to ensure a critical volume, to gain efficiency, and to minimize the need for antiarrhythmic drugs after ablation and the need for future repeat ablation procedures.
PMCID: PMC3379531  PMID: 23074499
11.  Low levels of vitamin C in dialysis patients is associated with decreased prealbumin and increased C-reactive protein 
BMC Nephrology  2011;12:18.
Background
Subclinical inflammation is a common phenomenon in patients on either continuous ambulatory peritoneal dialysis (CAPD) or maintenance hemodialysis (MHD). We hypothesized that vitamin C has an anti-inflammatory effect because of its electron-donating ability. The current study was designed to test the relationship between plasma vitamin C levels and several inflammatory markers.
Methods
In this cross-sectional study, 284 dialysis patients were recruited, including 117 MHD and 167 CAPD patients. The demographics were recorded. Plasma vitamin C was measured by high-performance liquid chromatography. We also measured body mass index (BMI, calculated as weight/height²), Kt/V, serum albumin, serum prealbumin, high-sensitivity C-reactive protein (hsCRP), ferritin, and hemoglobin. The relationships between vitamin C and albumin, prealbumin, and hsCRP levels were tested by Spearman correlation analysis and multiple regression analysis.
Patients were classified into three subgroups by vitamin C level according to previous recommendations [1,2], in MHD and CAPD patients respectively: group A: < 2 µg/ml (< 11.4 µmol/l, deficiency), group B: 2-4 µg/ml (11.4-22.8 µmol/l, insufficiency), and group C: > 4 µg/ml (> 22.8 µmol/l, normal and above).
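The µg/ml cut-offs map onto the µmol/l values via the molar mass of ascorbic acid (about 176.12 g/mol); a minimal conversion sketch, assuming that molar mass:

```python
# Converting plasma vitamin C between ug/ml and umol/l.
# Assumes ascorbic acid with a molar mass of ~176.12 g/mol.
MOLAR_MASS_ASCORBIC_ACID = 176.12  # g/mol

def ug_per_ml_to_umol_per_l(conc_ug_ml: float) -> float:
    # 1 ug/ml = 1 mg/l; (mg/l) / (g/mol) = mmol/l; x1000 -> umol/l
    return conc_ug_ml / MOLAR_MASS_ASCORBIC_ACID * 1000

for cutoff in (2.0, 4.0):
    print(f"{cutoff} ug/ml = {ug_per_ml_to_umol_per_l(cutoff):.1f} umol/l")
# 2.0 ug/ml = 11.4 umol/l; 4.0 ug/ml = 22.7 umol/l (reported as 22.8 after rounding)
```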
Results
Plasma vitamin C levels were widely distributed among the 284 dialysis patients. Vitamin C deficiency (< 2 µg/ml) was present in 95 (33.45%) patients and insufficiency (2-4 µg/ml) in 88 (30.99%); 73 (25.70%) patients had plasma vitamin C levels within the normal range (4-14 µg/ml) and 28 (9.86%) had higher than normal levels (> 14 µg/ml). Similar proportions across the vitamin C categories were found in both the MHD and CAPD groups.
Plasma vitamin C level was inversely associated with hsCRP concentration (Spearman r = -0.201, P = 0.001) and positively associated with prealbumin (Spearman r = 0.268, P < 0.001) and albumin levels (Spearman r = 0.161, P = 0.007). In multiple linear regression analysis, plasma vitamin C level was inversely associated with log10 hsCRP (P = 0.048) and positively associated with prealbumin levels (P = 0.002), adjusted for gender, age, diabetes, modality of dialysis, and other confounders.
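A minimal sketch of this kind of analysis in Python follows; the data frame, column names, and simulated values are hypothetical, and the real model adjusted for more confounders than shown.

```python
# Sketch of the Spearman-correlation and multiple-regression analyses above.
# The data and column names are hypothetical, not the study's dataset.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 284
df = pd.DataFrame({
    "vitamin_c": rng.gamma(2.0, 2.0, n),    # plasma vitamin C, ug/ml (simulated)
    "hscrp": rng.lognormal(1.0, 0.8, n),    # hsCRP, mg/l (simulated)
    "age": rng.normal(55, 12, n),
})

rho, p = spearmanr(df["vitamin_c"], df["hscrp"])
print(f"Spearman r = {rho:.3f}, P = {p:.3f}")

# One plausible specification: regress log10(hsCRP) on vitamin C plus confounders
X = sm.add_constant(df[["vitamin_c", "age"]])
fit = sm.OLS(np.log10(df["hscrp"]), X).fit()
print(fit.params, fit.pvalues, sep="\n")
```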
Conclusions
The investigation indicates that vitamin C deficiency is common in both MHD patients and CAPD patients. Plasma vitamin C level is positively associated with serum prealbumin level and negatively associated with hsCRP level in both groups. Vitamin C deficiency may play an important role in the increased inflammatory status in dialysis patients. Further studies are needed to determine whether inflammatory status in dialysis patients can be improved by using vitamin C supplements.
doi:10.1186/1471-2369-12-18
PMCID: PMC3112084  PMID: 21548917
12.  Use of hyaluronan in the selection of sperm for intracytoplasmic sperm injection (ICSI): significant improvement in clinical outcomes—multicenter, double-blinded and randomized controlled trial 
STUDY QUESTION
Does the selection of sperm for ICSI based on their ability to bind to hyaluronan improve the clinical pregnancy rate (CPR) (primary end-point), implantation rate (IR) and pregnancy loss rate (PLR)?
SUMMARY ANSWER
In couples where ≤65% of sperm bound hyaluronan, the selection of hyaluronan-bound (HB) sperm for ICSI led to a statistically significant reduction in PLR.
WHAT IS KNOWN AND WHAT THIS PAPER ADDS
HB sperm demonstrate enhanced developmental parameters which have been associated with successful fertilization and embryogenesis. Sperm selected for ICSI using a liquid source of hyaluronan achieved an improvement in IR. A pilot study by the primary author demonstrated that the use of HB sperm in ICSI was associated with improved CPR. The current study represents the single largest prospective, multicenter, double-blinded and randomized controlled trial to evaluate the use of hyaluronan in the selection of sperm for ICSI.
DESIGN
Using the hyaluronan binding assay, an HB score was determined for the fresh or initial (I-HB) and processed or final semen specimen (F-HB). Patients were classified as >65% or ≤65% I-HB and stratified accordingly. Patients with I-HB scores ≤65% were randomized into control and HB selection (HYAL) groups whereas patients with I-HB >65% were randomized to non-participatory (NP), control or HYAL groups, in a ratio of 2:1:1. The NP group was included in the >65% study arm to balance the higher prevalence of patients with I-HB scores >65%. In the control group, oocytes received sperm selected via the conventional assessment of motility and morphology. In the HYAL group, HB sperm meeting the same visual criteria were selected for injection. Patient participants and clinical care providers were blinded to group assignment.
PARTICIPANTS AND SETTING
Eight hundred two couples treated with ICSI in 10 private and hospital-based IVF programs were enrolled in this study. Of the 484 patients stratified to the I-HB > 65% arm, 115 participants were randomized to the control group, 122 participants were randomized to the HYAL group and 247 participants were randomized to the NP group. Of the 318 patients stratified to the I-HB ≤ 65% arm, 164 participants were randomized to the control group and 154 participants were randomized to the HYAL group.
MAIN RESULTS AND THE ROLE OF CHANCE
HYAL patients with an F-HB score ≤65% demonstrated an IR of 37.4% compared with 30.7% for control (n = 63, 58; P > 0.05; 95% CI of the difference −7.7 to 21.3). In addition, the CPR associated with patients randomized to the HYAL group was 50.8% compared with 37.9% for those randomized to the control group (n = 63, 58; P > 0.05). The 12.9% difference was associated with a risk ratio (RR) of 1.340 (95% CI 0.89–2.0). HYAL patients with I-HB and F-HB scores ≤65% revealed a statistically significant reduction in PLR over control patients (I-HB: 3.3 versus 15.1%, n = 73, 60, P = 0.021, RR 0.22, 95% CI 0.05–0.96; F-HB: 0.0 versus 18.5%, n = 27, 32, P = 0.016, RR not applicable due to the 0.0% value). The study was originally planned to have 200 participants per arm, providing 86.1% power to detect an increase in CPR from 35 to 50% at α = 0.05, but was stopped early for financial reasons. As a pilot study had demonstrated that sperm preparation protocols may increase the HB score, the design of the current study incorporated a priori collection and analysis of the data by both the I-HB and the F-HB scores. Analysis by both scores acknowledged the potential impact of sperm preparation protocols.
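The reported risk ratios follow directly from the group proportions; a quick check of the arithmetic:

```python
# Reproducing the risk-ratio arithmetic reported above.
cpr_hyal, cpr_control = 0.508, 0.379
print(f"CPR risk ratio: {cpr_hyal / cpr_control:.3f}")   # 1.340

plr_hyal, plr_control = 0.033, 0.151
print(f"PLR risk ratio: {plr_hyal / plr_control:.2f}")   # 0.22
```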
BIAS, CONFOUNDING AND OTHER REASONS FOR CAUTION
Selection bias was controlled by randomization. Geographic and seasonal bias was controlled by recruiting from 10 geographically unique sites and by sampling over a 2-year period. The potential for population effect was controlled by adjusting for higher prevalence rates of >65% I-HB that naturally occur by adding the NP arm and to concurrently recruit >65% and ≤65% I-HB subjects. Monitoring and site audits occurred regularly to ensure standardization of data collection, adherence to the study protocol and subject recruitment. Subgroup analysis based on the F-HB score was envisaged in the study design.
GENERALIZABILITY TO OTHER POPULATIONS
The study included clinics using different sperm preparation methods, located in different regions of the USA and proceeded in every month of the year. Therefore, the results are widely applicable.
STUDY FUNDING/COMPETING INTEREST(S)
This study was funded by Biocoat, Inc., Horsham, PA, USA. The statistical analysis plan and subsequent analyses were performed by Sherrine Eid, a biostatistician. The manuscript was prepared by Kathryn C. Worrilow, Ph.D. and the study team members. Biocoat, Inc. was permitted to review the manuscript and suggest changes, but the final decision on content was exclusively retained by the authors. K.C.W is a scientific advisor to Biocoat, Inc. S.E. is a consultant to Biocoat, Inc. D.W. has nothing to disclose. M.P., S.S., J.W., K.I., C.K. and T.E. have nothing to disclose. G.D.B. is a consultant to Cooper Surgical and Unisense. J.L. is on the scientific advisory board of Origio.
TRIAL REGISTRATION NUMBER
NCT00741494.
doi:10.1093/humrep/des417
PMCID: PMC3545641  PMID: 23203216
13.  Low calcium dialysate combined with CaCO3 in hyperphosphatemia in hemodialysis patients 
The aim of this study was to observe the effects of low calcium dialysate (LCD) combined with oral administration of CaCO3 on hyperphosphatemia, as well as on blood Ca2+, calcium-phosphate product (CPP), parathyroid hormone (PTH) and blood pressure, in patients undergoing hemodialysis. Thirty-one maintenance hemodialysis (MHD) patients with hyperphosphatemia, but normal blood Ca2+, underwent dialysis with an initial dialysate Ca2+ concentration (DCa) of 1.50 mmol/l for six months and then with 1.25 mmol/l for six months. The patients who underwent dialysis with a DCa of 1.25 mmol/l were treated orally with 0.3 g CaCO3 tablets three times a day. In the third and sixth months [observation end point (OEP)] of the dialysis, the concentrations of Ca2+, phosphorus and intact PTH (iPTH) were measured; blood pressure and side-effects prior to and following dialysis were also observed. The Ca2+, CPP and iPTH levels increased (P<0.05) in the sixth month of treatment with a DCa of 1.50 mmol/l. However, the Ca2+ concentration declined to a certain degree, CPPs decreased significantly (P<0.05) and the iPTH concentration increased following treatment with a DCa of 1.25 mmol/l for six months. The incidence rate of adverse effects of LCD was 12.9% (4/31); the effects were mainly muscle spasms, hypotension and elevated PTH. The periodic application of LCD combined with the oral administration of CaCO3 effectively reduced serum phosphorus and CPPs among MHD patients with hyperphosphatemia, indicating that the treatment may be used clinically.
doi:10.3892/etm.2013.1067
PMCID: PMC3702715  PMID: 23837063
low calcium dialysis; hyperphosphatemia; calcium-phosphate product; parathyroid hormone
14.  Dual factor pulse pressure: body mass index and outcome in type 2 diabetic subjects on maintenance hemodialysis. A longitudinal study 2003–2006 
Vascular Health and Risk Management  2008;4(6):1401-1406.
Background:
Inverse associations between risk factors and mortality have been reported in epidemiological studies of patients on maintenance hemodialysis (MHD).
Objective:
The aim of this prospective study was to estimate the effect of the dual variable pulse pressure (PP) – body mass index (BMI) on cardiovascular (CV) events and death in type 2 diabetic (T2D) subjects on MHD in a Caribbean population.
Methods:
Eighty Afro-Caribbean T2D patients on MHD were studied prospectively from 2003 to 2006. Proportional-hazard modeling was used.
Results:
Overall, 23.8% had a high PP (PP ≥ 75th percentile), 76.3% had BMI < 30 kg/m², and 21.3% had the dual factor of high PP with absence of obesity. During the study period, 23 patients died and 13 CV events occurred. In the presence of the dual variable and after adjustment for age, gender, duration of MHD, and pre-existing CV complications, the adjusted hazard ratios (HR) (95% CI) for CV events and death were 2.7 (0.8–8.3; P = 0.09) and 2.4 (1.1–5.9; P = 0.04), respectively.
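A minimal sketch of the proportional-hazards modeling described above, using the lifelines package on simulated data; the variable names and values are assumptions, not the study's dataset.

```python
# Cox proportional-hazards sketch for the analysis described above.
# Simulated data; column names are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 80
df = pd.DataFrame({
    "months": rng.exponential(30, n),       # follow-up time
    "death": rng.integers(0, 2, n),         # event indicator (1 = died)
    "dual_factor": rng.integers(0, 2, n),   # high PP + absence of obesity
    "age": rng.normal(62, 10, n),
    "male": rng.integers(0, 2, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="death")
# exp(coef) for 'dual_factor' is the adjusted hazard ratio
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%", "p"]])
```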
Conclusions:
The dual factor of high PP with absence of obesity is a prognostic factor for outcome. In type 2 diabetics on MHD, a specific management strategy should be proposed for nonobese subjects with wide pulse pressure in order to decrease or prevent the incidence of fatal and nonfatal events.
PMCID: PMC2663455  PMID: 19337552
dual factor; pulse pressure; body mass index; type 2 diabetes; outcome
15.  Serum catalytic Iron: A novel biomarker for coronary artery disease in patients on maintenance hemodialysis 
Indian Journal of Nephrology  2013;23(5):332-337.
Cardiovascular disease is the leading cause of morbidity and mortality in maintenance hemodialysis (MHD) patients. We evaluated the role of serum catalytic iron (SCI) as a biomarker for coronary artery disease (CAD) in patients on MHD. SCI was measured in 59 stable MHD patients. All patients underwent coronary angiography. Significant CAD was defined as > 70% narrowing in at least one epicardial coronary artery. Levels of SCI were compared with those of a group of healthy controls. Significant CAD was detected in 22 (37.3%) patients, with one-vessel disease in 14 (63.63%) and multi-vessel disease in eight (36.36%) patients. The MHD patients had elevated levels of SCI (4.70 ± 1.79 μmol/L) compared with normal health survey participants (0.11 ± 0.01 μmol/L) (P < 0.0001). MHD patients who had no CAD had SCI levels of 1.36 ± 0.34 μmol/L compared with those having significant CAD (8.92 ± 4.12 μmol/L) (P < 0.0001). MHD patients with diabetes showed a stronger association between SCI and the prevalence of CAD than non-diabetics. Patients having one-vessel disease had SCI of 8.85 ± 4.67 μmol/L versus multi-vessel disease with SCI of 9.05 ± 8.34 μmol/L, P = 0.48. In multivariate analysis, SCI and diabetes mellitus were independently associated with significant CAD. We confirm the high prevalence of significant CAD in MHD patients. Elevated SCI levels are associated with the presence of significant coronary disease in such patients. The association is stronger in the diabetic than in the non-diabetic subgroup. This is an important, potentially modifiable biomarker of CAD in MHD patients.
doi:10.4103/0971-4065.116293
PMCID: PMC3764705  PMID: 24049267
Coronary artery disease; maintenance hemodialysis; oxidative stress; serum catalytic iron
16.  Switching HIV Treatment in Adults Based on CD4 Count Versus Viral Load Monitoring: A Randomized, Non-Inferiority Trial in Thailand 
PLoS Medicine  2013;10(8):e1001494.
Using a randomized controlled trial, Marc Lallemant and colleagues ask if a CD4-based monitoring and treatment switching strategy provides a similar clinical outcome compared to the standard viral load-based strategy for adults with HIV in Thailand.
Please see later in the article for the Editors' Summary
Background
Viral load (VL) is recommended for monitoring the response to highly active antiretroviral therapy (HAART) but is not routinely available in most low- and middle-income countries. The purpose of the study was to determine whether a CD4-based monitoring and switching strategy would provide a similar clinical outcome compared to the standard VL-based strategy in Thailand.
Methods and Findings
The Programs for HIV Prevention and Treatment (PHPT-3) non-inferiority randomized clinical trial compared a treatment switching strategy based on CD4-only (CD4) monitoring versus viral load (VL) monitoring. Consenting participants were antiretroviral-naïve HIV-infected adults (CD4 count 50–250/mm3) initiating non-nucleoside reverse transcriptase inhibitor (NNRTI)-based therapy. Randomization, stratified by site (21 public hospitals), was performed centrally after enrollment. Clinicians were unaware of the VL values of patients randomized to the CD4 arm. Participants switched to a second-line combination upon confirmed CD4 decline >30% from peak (within 200 cells from baseline) in the CD4 arm, or confirmed VL >400 copies/ml in the VL arm. The primary endpoint was clinical failure at 3 years, defined as death, a new AIDS-defining event, or CD4 <50 cells/mm3. The 3-year Kaplan-Meier cumulative risks of clinical failure were compared for non-inferiority with a margin of 7.4%. In the intent-to-treat analysis, data were censored at the date of death or at the last visit. The secondary endpoints were the difference in future-drug-option (FDO) score, a measure of resistance profiles, virologic and immunologic responses, and the safety and tolerance of HAART. 716 participants were randomized, 356 to VL monitoring and 360 to CD4 monitoring. At 3 years, 319 participants (90%) in VL and 326 (91%) in CD4 were alive and on follow-up. The cumulative risk of clinical failure was 8.0% (95% CI 5.6–11.4) in VL versus 7.4% (5.1–10.7) in CD4, and the upper limit of the one-sided 95% CI of the difference was 3.4%, meeting the pre-determined non-inferiority criterion. The probability of switching per study criteria was 5.2% (3.2–8.4) in VL versus 7.5% (5.0–11.1) in CD4 (p = 0.097). Median time from treatment initiation to switch was 11.7 months (7.7–19.4) in VL and 24.7 months (15.9–35.0) in CD4 (p = 0.001). The median duration of viremia >400 copies/ml at switch was 7.2 months (5.8–8.0) in VL versus 15.8 months (8.5–20.4) in CD4 (p = 0.002). FDO scores were not significantly different at the time of switch. No adverse events related to the monitoring strategy were reported.
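The non-inferiority criterion can be checked with simple arithmetic from the reported risks; the sketch below uses a normal approximation to the binomial, whereas the trial used Kaplan-Meier estimates, so the bound differs slightly from the published 3.4%.

```python
# Approximate non-inferiority check from the reported 3-year failure risks.
from math import sqrt
from scipy.stats import norm

risk_vl, n_vl = 0.080, 356
risk_cd4, n_cd4 = 0.074, 360
margin = 0.074                      # prespecified margin (7.4 percentage points)

diff = risk_cd4 - risk_vl           # -0.6 percentage points
se = sqrt(risk_vl * (1 - risk_vl) / n_vl + risk_cd4 * (1 - risk_cd4) / n_cd4)
upper = diff + norm.ppf(0.95) * se  # one-sided 95% upper confidence bound

print(f"diff = {diff:+.3f}, upper bound = {upper:.3f}")
print("non-inferior" if upper < margin else "non-inferiority not shown")
```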
Conclusions
The 3-year rates of clinical failure and loss of treatment options did not differ between strategies although the longer-term consequences of CD4 monitoring would need to be investigated. These results provide reassurance to treatment programs currently based on CD4 monitoring as VL measurement becomes more affordable and feasible in resource-limited settings.
Trial registration
ClinicalTrials.gov NCT00162682
Editors' Summary
Background
About 34 million people (most of them living in low-and middle-income countries) are currently infected with HIV, the virus that causes AIDS. HIV infection leads to the destruction of immune system cells (including CD4 cells, a type of white blood cell), leaving infected individuals susceptible to other infections. Early in the AIDS epidemic, most HIV-infected individuals died within 10 years of infection. Then, in 1996, highly active antiretroviral therapy (HAART)—combined drugs regimens that suppress viral replication and allow restoration of the immune system—became available. For people living in affluent countries, HIV/AIDS became a chronic condition but, because HAART was expensive, HIV/AIDS remained a fatal illness for people living in resource-limited countries. In 2003, the international community declared HIV/AIDS a global health emergency and, in 2006, it set the target of achieving universal global access to HAART by 2010. By the end of 2011, 8 million of the estimated 14.8 million people in need of HAART in low- and middle-income countries were receiving treatment.
Why Was This Study Done?
At the time this trial was conceived, national and international recommendations were that HIV-positive individuals should start HAART when their CD4 count fell below 200 cells/mm3 and should have their CD4 count regularly monitored to optimize HAART. In 2013, the World Health Organization (WHO) recommendations were updated to promote expanded eligibility for HAART with a CD4 of 500 cells/mm3 or less for adults, adolescents, and older children although priority is given to individuals with CD4 count of 350 cells/mm3 or less. Because HIV often becomes resistant to first-line antiretroviral drugs, WHO also recommends that viral load—the amount of virus in the blood—should be monitored so that suspected treatment failures can be confirmed and patients switched to second-line drugs in a timely manner. This monitoring and switching strategy is widely used in resource-rich settings, but is still very difficult to implement for low- and middle-income countries where resources for monitoring are limited and access to costly second-line drugs is restricted. In this randomized non-inferiority trial, the researchers compare the performance of a CD4-based treatment monitoring and switching strategy with the standard viral load-based strategy among HIV-positive adults in Thailand. In a randomized trial, individuals are assigned different interventions by the play of chance and followed up to compare the effects of these interventions; a non-inferiority trial investigates whether one treatment is not worse than another.
What Did the Researchers Do and Find?
The researchers assigned about 700 HIV-positive adults who were beginning HAART for the first time to have their CD4 count (CD4 arm) or their CD4 count and viral load (VL arm) determined every 3 months. Participants were switched to a second-line therapy if their CD4 count declined by more than 30% from their peak CD4 count (CD4 arm) or if a viral load of more than 400 copies/ml was recorded (VL arm). The 3-year cumulative risk of clinical failure (defined as death, a new AIDS-defining event, or a CD4 count of less than 50 cells/mm3) was 8% in the VL arm and 7.4% in the CD4 arm. This difference in clinical failure risk met the researchers' predefined criterion for non-inferiority. The probability of a treatment switch was similar in the two arms, but the average time from treatment initiation to treatment switch and the average duration of a high viral load after treatment switch were both longer in the CD4 arm than in the VL arm. Finally, the future-drug-option score, a measure of viral drug resistance profiles, was similar in the two arms at the time of treatment switch.
What Do These Findings Mean?
These findings suggest that, in Thailand, a CD4 switching strategy is non-inferior in terms of clinical outcomes among HIV-positive adults 3 years after beginning HAART when compared to the recommended viral load-based switching strategy, and that there is no difference between the strategies in terms of viral suppression and immune restoration after 3 years of follow-up. Importantly, however, even though patients in the CD4 arm spent longer with a high viral load than patients in the VL arm, the emergence of HIV mutants resistant to antiretroviral drugs was similar in the two arms. Although these findings provide no information about the long-term outcomes of the two monitoring strategies and may not be generalizable to routine care settings, they nevertheless provide reassurance that using CD4 counts alone to monitor HAART in HIV treatment programs in resource-limited settings is an appropriate strategy to use as viral load measurement becomes more affordable and feasible in these settings.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001494.
The World Health Organization provides information on all aspects of HIV/AIDS (in several languages); its 2010 recommendations for antiretroviral therapy for HIV infection in adults and adolescents are available as well as the June 2013 Consolidated guidelines on the use of antiretroviral drugs for treating and preventing HIV infection: recommendations for a public health approach
The 2012 UNAIDS World AIDS Day Report provides up-to-date information about the AIDS epidemic and efforts to halt it
Information is available from the US National Institute of Allergy and Infectious Diseases on HIV infection and AIDS
NAM/aidsmap provides basic information about HIV/AIDS and summaries of recent research findings on HIV care and treatment
Information is available from Avert, an international AIDS charity on many aspects of HIV/AIDS, including information on the global HIV/AIDS epidemic, on HIV and AIDS in Thailand, on universal access to AIDS treatment, and on starting, monitoring and switching HIV treatment (in English and Spanish)
The UK National Health Service Choices website provides information (including personal stories) about HIV and AIDS
More information about this trial (the PHPT-3 trial) is available
Patient stories about living with HIV/AIDS are available through Avert; the nonprofit website Healthtalkonline also provides personal stories about living with HIV, including stories about HIV treatment
doi:10.1371/journal.pmed.1001494
PMCID: PMC3735458  PMID: 23940461
17.  A Randomised Trial Comparing Genotypic and Virtual Phenotypic Interpretation of HIV Drug Resistance: The CREST Study 
PLoS Clinical Trials  2006;1(3):e18.
Objectives:
The aim of this study was to compare the efficacy of different HIV drug resistance test reports (genotype and virtual phenotype) in patients who were changing their antiretroviral therapy (ART).
Design:
Randomised, open-label trial with 48-week follow-up.
Setting:
The study was conducted in a network of primary healthcare sites in Australia and New Zealand.
Participants:
Patients failing current ART with plasma HIV RNA > 2000 copies/mL who wished to change their current ART were eligible. Subjects were required to be > 18 years of age, previously treated with ART, have no intercurrent illnesses requiring active therapy, and to have provided written informed consent.
Interventions:
Eligible subjects were randomly assigned to receive a genotype (group A) or genotype plus virtual phenotype (group B) prior to selection of their new antiretroviral regimen.
Outcome Measures:
Patient groups were compared for patterns of ART selection and surrogate outcomes (plasma viral load and CD4 counts) on an intention-to-treat basis over a 48-week period.
Results:
Three hundred and twenty-seven patients completing > 1 month of follow-up were included in these analyses. Resistance tests were the primary means by which ART regimens were selected (group A: 64%, group B: 62%; p = 0.32). At 48 weeks, there were no significant differences between the groups for mean change from baseline plasma HIV RNA (group A: 0.68 log copies/mL, group B: 0.58 log copies/mL; p = 0.23) and mean change from baseline CD4+ cell count (group A: 37 cells/mm3, group B: 50 cells/mm3; p = 0.28).
Conclusions:
In the absence of clearly demonstrated benefits arising from the use of the virtual phenotype interpretation, this study suggests that resistance testing using genotyping linked to a reliable interpretive algorithm is adequate for the management of HIV infection.
Editorial Commentary
Background: Antiretroviral drugs are used to treat patients with HIV infection, with good evidence that they improve prognosis. However, mutations develop in the HIV genome that allow it to evade successful treatment—known as drug resistance—and such mutations are known against every class of antiretroviral drug. Resistance can cause treatment failure and limit the treatment options available. Different types of tests are often used to detect resistance and to work out whether patients should switch to a different drug regimen. Currently, the different types of tests include genotype testing (direct sequencing of genes from virus samples infecting a patient); phenotype testing (a test that assesses the sensitivity of a patient's HIV sample to different drugs), and virtual phenotype testing (a way of interpreting genotype data that estimates the likely viral response to different drugs). The researchers of this study did a trial to find out whether providing an additional virtual phenotype report would be beneficial to patients, as compared with a genotype report alone. The main outcome was HIV viral load after 12 months of treatment, but the researchers also looked at differences in drug regimens prescribed, number of treatment changes in the study, and changes in CD4+ (the type of white blood cell infected by HIV) counts.
What this trial shows: The researchers found that the main endpoint of the trial (HIV viral load after 12 months) was no different in patients whose clinicians had received a virtual phenotype report as well as a genotype report, compared with those who had received a genotype report alone. In addition, the average number of drugs prescribed was no different between patients in the two different arms of the trial, and there was no difference in number of drug regimen changes, and no change in immune response (measured using CD4+ cell levels). However, more drugs predicted to be sensitive were prescribed by clinicians who got both a genotype and virtual phenotype report, as compared with clinicians who received only the genotype report.
Strengths and limitations: The size of the trial (338 patients recruited) was large enough to properly test the hypothesis that providing a virtual phenotype report as well as a genotype report would result in lower HIV viral loads. Randomization of patients to either intervention ensured that the comparison groups were well-balanced, and the researchers also tested whether selection bias had affected the results (i.e., testing for the possibility that clinicians could predict which intervention participants would receive, and change recruitment into the trial as a result). They found no evidence for selection bias occurring within the trial. However, interpreting the results is difficult because the trial did not directly compare the two different testing platforms, but rather looked at whether providing a virtual phenotype report as well as a genotype report was better than providing a genotype report alone. The investigators also acknowledge that since the trial was conducted, the cutoffs for interpreting genotype information as resistant have been lowered. The findings may therefore not translate precisely to the current situation.
Contribution to the evidence: Other cohort studies and clinical trials have shown that patients offered resistance testing respond better to antiretroviral therapy compared with those who were not, but the clinical effectiveness of different resistance testing methods is not known. This study provides additional data on the respective benefits of genotype testing versus genotype plus provision of virtual phenotype. Another trial comparing genotype versus virtual phenotype has also found that the different interpretation methods perform similarly.
doi:10.1371/journal.pctr.0010018
PMCID: PMC1523224  PMID: 16878178
18.  Factors in AIDS Dementia Complex Trial Design: Results and Lessons from the Abacavir Trial 
PLoS Clinical Trials  2007;2(3):e13.
Objectives:
To determine the efficacy of adding abacavir (Ziagen, ABC) to optimal stable background antiretroviral therapy (SBG) in patients with AIDS dementia complex (ADC), and to address issues of trial design.
Design:
Phase III randomized, double-blind placebo-controlled trial.
Setting:
Tertiary outpatient clinics.
Participants:
ADC patients on SBG for ≥8 wk.
Interventions:
Participants were randomized to ABC or matched placebo for 12 wk.
Outcome Measures:
The primary outcome measure was the change in the summary neuropsychological Z score (NPZ). Secondary measures were HIV RNA and the immune activation markers β-2 microglobulin, soluble tumor necrosis factor (TNF) receptor 2, and quinolinic acid.
Results:
105 participants were enrolled. The median change in NPZ at week 12 was +0.76 for the ABC + SBG group and +0.63 for the SBG group (p = 0.735). The lack of efficacy was unlikely to be related to limited antiviral efficacy of ABC: at week 12, more ABC than placebo participants had plasma HIV RNA ≤400 copies/mL (p = 0.002). There were, however, other factors. First, two thirds of patients were subsequently found to have had baseline resistance to ABC. Second, there was an unanticipated beneficial effect of SBG that extended beyond 8 wk to 5 mo, meaning that some patients were not truly stable at baseline. Third, there was an unexpectedly large variability in neuropsychological performance that underpowered the study. Fourth, there was a relative lack of ADC disease activity: 56% of all patients had baseline cerebrospinal fluid (CSF) HIV-1 RNA <100 copies/mL and 83% had CSF β-2 microglobulin <3 nmol/L at baseline.
Conclusions:
The addition of ABC to SBG for ADC patients was not efficacious, possibly because of the inefficacy of ABC per se, baseline drug resistance, prolonged benefit from existing therapy, difficulties with sample size calculations, and lack of disease activity. Assessment of these trial design factors is critical in the design of future ADC trials.
Editorial Commentary
Background: AIDS dementia complex (ADC) was first identified early in the HIV epidemic and at that time affected a substantial proportion of patients with AIDS. Patients with ADC experience dementia as well as disordered behavior and problems with movement and balance. ADC is now much less common in locations where patients have access to HAART (highly active antiretroviral therapy), consisting of combinations of several drugs that attack the virus at different stages of its life cycle. At present, however, there are no generally accepted guidelines for the best treatment of people with ADC. It has been thought for some time that the best treatment regimens for people with ADC would include drugs that cross the barrier between blood and brain well. Shortly following the introduction of HAART, a trial was carried out to find out whether adding one particular drug, abacavir, to existing combinations of drugs would be beneficial in people with ADC. This drug is known to cross the barrier between the blood and brain. The trial enrolled HIV-positive individuals with mild to moderate ADC and who were already receiving antiretroviral drug treatment. 105 participants were assigned at random to receive either high-dose abacavir or a placebo, in addition to their existing therapy. The primary outcome of the trial was a summary of performance on a set of different tests, designed to evaluate cognitive, behavior, and movement skills, at 12 weeks. Other outcomes included levels of HIV RNA (viral load) in fluid around the brain, as well as other neurological evaluations, and the level of HIV RNA and CD4+ T cells (the cells infected by HIV) in blood.
What this trial shows: When comparing the change in performance scores for individuals randomized to either abacavir or placebo, the results showed an improvement in scores for both groups, but no significant difference in improvement between the two groups. Similarly, the levels of HIV RNA in the cerebrospinal fluid did not differ between the two groups being compared, and other neurological tests did not show any differences between the two groups. However, at 12 weeks, patients receiving abacavir were more likely to have low levels of HIV RNA in their blood, suggesting that abacavir was active against the virus, but this did not translate into an additional improvement of these patients' dementia. The overall rates of adverse events were roughly comparable between the two groups in the trial, although participants receiving abacavir were more likely to experience certain types of events, such as nausea.
Strengths and limitations: The trial was appropriately randomized and controlled, using central telephone procedures for randomizing participants and subsequent blinding of patients and trial investigators. These procedures help minimize the chance of bias in assigning participants to the different arms as well as in the subsequent performance of individuals within the trial and the assessment of their outcomes. Limitations in the study design have been identified. One limitation is that individuals enrolled into the trial may not in fact have been receiving their existing HAART regimen for long enough to experience its optimal effect, and therefore the improvement seen in both groups could have resulted from an ongoing response to their existing regimen. It is also possible that patients improved in their test scores over the course of the trial simply because they became more familiar with the tests and not because their condition improved. This is a problem in all such trials that try to improve mental function. Finally, a limitation may have been the inclusion of patients who did not have active disease leading to worsening dementia.
Contribution to the evidence: The findings from this trial suggest that adding high-dose abacavir to existing HAART is not beneficial for patients with ADC. However, the trial provides several insights into the way that future studies of this type can be done, and which typically pose a number of challenging design problems. In particular, sensitive markers are needed that will allow researchers to monitor progression of ADC and patients' response to therapy.
doi:10.1371/journal.pctr.0020013
PMCID: PMC1845158  PMID: 17401456
19.  Publication Bias in Antipsychotic Trials: An Analysis of Efficacy Comparing the Published Literature to the US Food and Drug Administration Database 
PLoS Medicine  2012;9(3):e1001189.
A comparison of data held by the U.S. Food and Drug Administration (FDA) against data from journal reports of clinical trials enables estimation of the extent of publication bias for antipsychotics.
Background
Publication bias compromises the validity of evidence-based medicine, yet a growing body of research shows that this problem is widespread. Efficacy data from drug regulatory agencies, e.g., the US Food and Drug Administration (FDA), can serve as a benchmark or control against which data in journal articles can be checked. Thus one may determine whether publication bias is present and quantify the extent to which it inflates apparent drug efficacy.
Methods and Findings
FDA Drug Approval Packages for eight second-generation antipsychotics—aripiprazole, iloperidone, olanzapine, paliperidone, quetiapine, risperidone, risperidone long-acting injection (risperidone LAI), and ziprasidone—were used to identify a cohort of 24 FDA-registered premarketing trials. The results of these trials according to the FDA were compared with the results conveyed in corresponding journal articles. The relationship between study outcome and publication status was examined, and effect sizes derived from the two data sources were compared. Among the 24 FDA-registered trials, four (17%) were unpublished. Of these, three failed to show that the study drug had a statistical advantage over placebo, and one showed the study drug was statistically inferior to the active comparator. Among the 20 published trials, the five that were not positive, according to the FDA, showed some evidence of outcome reporting bias. However, the association between trial outcome and publication status did not reach statistical significance. Further, the apparent increase in the effect size point estimate due to publication bias was modest (8%) and not statistically significant. On the other hand, the effect size for unpublished trials (0.23, 95% confidence interval 0.07 to 0.39) was less than half that for the published trials (0.47, 95% confidence interval 0.40 to 0.54), a difference that was significant.
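The significance of the published-versus-unpublished effect-size gap can be checked approximately from the reported confidence intervals, assuming normal sampling distributions:

```python
# Approximate z-test for the difference between the published and
# unpublished effect sizes, recovering standard errors from the 95% CIs.
from math import sqrt
from scipy.stats import norm

es_unpub, lo_u, hi_u = 0.23, 0.07, 0.39
es_pub, lo_p, hi_p = 0.47, 0.40, 0.54

se_unpub = (hi_u - lo_u) / (2 * 1.96)
se_pub = (hi_p - lo_p) / (2 * 1.96)

z = (es_pub - es_unpub) / sqrt(se_pub**2 + se_unpub**2)
p = 2 * (1 - norm.cdf(z))
print(f"z = {z:.2f}, two-sided p = {p:.3f}")  # ~2.7, ~0.007: significant
```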
Conclusions
The magnitude of publication bias found for antipsychotics was less than that found previously for antidepressants, possibly because antipsychotics demonstrate superiority to placebo more consistently. Without increased access to regulatory agency data, publication bias will continue to blur distinctions between effective and ineffective drugs.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
People assume that, when they are ill, health-care professionals will ensure that they get the best available treatment. But how do clinicians know which treatment is likely to be most effective? In the past, clinicians used their own experience to make such decisions. Nowadays, they rely on evidence-based medicine—the systematic review and appraisal of trials, studies that investigate the efficacy and safety of medical interventions in patients. Evidence-based medicine can guide clinicians, however, only if all the results from clinical trials are published in an unbiased manner. Unfortunately, “publication bias” is common. For example, the results of trials in which a new drug did not perform better than existing drugs or in which it had unwanted side effects often remain unpublished. Moreover, published trials can be subject to outcome reporting bias—the publication may only include those trial outcomes that support the use of the new treatment rather than presenting all the available data.
Why Was This Study Done?
If only strongly positive results are published and negative results and side-effects remain unpublished, a drug will seem safer and more effective than it is in reality, which could affect clinical decision-making and patient outcomes. But how big a problem is publication bias? Here, researchers use US Food and Drug Administration (FDA) reviews as a benchmark to quantify the extent to which publication bias may be altering the apparent efficacy of second-generation antipsychotics (drugs used to treat schizophrenia and other mental illnesses that are characterized by a loss of contact with reality). In the US, all new drugs have to be approved by the FDA before they can be marketed. During this approval process, the FDA collects and keeps complete information about premarketing trials, including descriptions of their design and prespecified outcome measures and all the data collected during the trials. Thus, a comparison of the results included in the FDA reviews for a group of trials and the results that appear in the literature for the same trials can provide direct evidence about publication bias.
What Did the Researchers Do and Find?
The researchers identified 24 FDA-registered premarketing trials that investigated the use of eight second-generation antipsychotics for the treatment of schizophrenia or schizoaffective disorder. They searched the published literature for reports of these trials, and, by comparing the results of these trials according to the FDA with the results in the published articles, they examined the relationship between the study outcome (did the FDA consider it positive or negative?) and publication and looked for outcome reporting bias. Four of the 24 FDA-registered trials were unpublished. Three of these unpublished trials failed to show that the study drug was more effective than a placebo (a “dummy” pill); the fourth showed that the study drug was inferior to another drug already in use in the US. Among the 20 published trials, the five that the FDA judged not positive showed some evidence of publication bias. However, the association between trial outcome and publication status did not reach statistical significance (it might have happened by chance), and the mean effect size (a measure of drug effectiveness) derived from the published literature was only slightly higher than that derived from the FDA records. By contrast, within the FDA dataset, the mean effect size of the published trials was approximately double that of the unpublished trials.
What Do These Findings Mean?
The accuracy of these findings is limited by the small number of trials analyzed. Moreover, this study considers only the efficacy and not the safety of these drugs, it assumes that the FDA database is complete and unbiased, and its findings are not generalizable to other conditions that antipsychotics are used to treat. Nevertheless, these findings show that publication bias in the reporting of trials of second-generation antipsychotic drugs enhances the apparent efficacy of these drugs. Although the magnitude of the publication bias seen here is less than that seen in a similar study of antidepressant drugs, these findings show how selective reporting of clinical trial data undermines the integrity of the evidence base and can deprive clinicians of accurate data on which to base their prescribing decisions. Increased access to FDA reviews, suggest the researchers, is therefore essential to prevent publication bias continuing to blur distinctions between effective and ineffective drugs.
Additional Information
Please access these web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001189.
The US Food and Drug Administration provides information about drug approval in the US for consumers and health-care professionals
Detailed information about the process by which drugs are approved is on the web site of the FDA Center for Drug Evaluation and Research; also, FDA Drug Approval Packages are available for many drugs; the FDA Transparency Initiative, which was launched in June 2009, is an agency-wide effort to improve the transparency of the FDA
FDA-approved product labeling on drugs marketed in the US can be found at the US National Library of Medicine's DailyMed web page
Wikipedia has a page on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
MedlinePlus provides links to sources of information on schizophrenia and on psychotic disorders (in English and Spanish)
Patient experiences of psychosis, including the effects of medication, are provided by the charity HealthtalkOnline
doi:10.1371/journal.pmed.1001189
PMCID: PMC3308934  PMID: 22448149
20.  Effects of Sevelamer and Calcium-Based Phosphate Binders on Lipid and Inflammatory Markers in Hemodialysis Patients 
American Journal of Nephrology  2007;28(2):275-279.
Introduction
Cardiovascular disease accounts for almost half of all deaths in individuals with chronic kidney disease stage 5 despite advances in both dialysis treatment and cardiology. A combination of lipid-lowering and anti-inflammatory effects along with avoidance of hypercalcemia should be taken into account when choosing phosphorus binders for maintenance hemodialysis (MHD) patients.
Methods
We examined the association of sevelamer versus calcium-based phosphorus binders with lipid profile, inflammatory markers including C-reactive protein (CRP), and mineral metabolism in MHD patients who participated in the Nutritional and Inflammatory Evaluation of Dialysis Patients (NIED) study from October 2001 to July 2005.
Results
Of the 787 MHD patients in the NIED study, 697 were on sevelamer, a calcium-based binder, or both, and were eligible for this study. We compared the groups taking sevelamer monotherapy (n = 283) or calcium binder monotherapy (n = 266) for serum phosphate control. There were no differences between the groups in dialysis vintage. There were significant differences in age, serum calcium and phosphorus levels, as well as intact parathyroid hormone levels. Using logistic regression models, the sevelamer group had higher odds of serum CRP <10 mg/l [odds ratio (OR): 1.06, 95% CI: 1.02–1.11] and LDL cholesterol <70 mg/dl (OR: 1.33, 95% CI: 1.19–1.47) compared with the calcium binder group, independent of age, vintage, body mass index, statin use and other variables.
Conclusion
The improvements in multiple surrogate markers of inflammation and lipids in the NIED study make sevelamer a promising therapy for treatment in MHD patients with high risk of cardiovascular disease and mortality.
doi:10.1159/000111061
PMCID: PMC2785908  PMID: 17992011
End-stage renal disease; Cardiac computed tomography; Coronary calcium; Phosphate binders
21.  Antioxidative vitamins for prevention of cardiovascular disease for patients after renal transplantation and patients with chronic renal failure
Introduction
The mortality from cardiovascular disease in patients with chronic renal failure is much higher than in the general population. In particular, patients with chronic renal failure on replacement therapies (dialysis patients and patients with renal transplantation) show both increased traditional risk factors and additional risk factors arising from dysfunction of the renal system. In combination with the medication necessary for renal insufficiency, oxidative stress is elevated. Progression of atherosclerosis is promoted by increased oxidation of lipids and endothelial damage. This link between lipid oxidation and atherogenesis provides the rationale for the supposed beneficial effect of supplementation with antioxidative vitamins (vitamins A, C and E). Such an effect could not be demonstrated in patients with a history of cardiovascular disease but without kidney disease. However, in high-risk patients with chronic renal failure and renal replacement therapies this could be different.
Objectives
The objective of this systematic literature review was to assess the clinical effectiveness and cost-effectiveness of supplementation with antioxidative vitamins A, C or E to reduce cardiovascular events in patients with chronic kidney diseases, dialysis-requiring patients and patients after a renal transplantation with or without cardiovascular diseases.
Methods
A systematic literature review was conducted with documented search and selection of the literature, using a priori defined inclusion and exclusion criteria as well as a documented extraction and assessment of the literature according to the methods of evidence-based medicine.
Results
21 publications met the inclusion criteria for the evaluation of clinical effectiveness. No study could be identified for the economic evaluation. Two studies (four publications) analysed the effect of oral supplementation on the secondary prevention of clinical cardiovascular endpoints. No studies analysing the effect in patients without a history of cardiovascular disease could be identified. 17 studies analysed the effect of oral supplementation or infusion of antioxidative vitamins, or the use of vitamin E-coated dialysis membranes, on intermediate outcomes such as oxidative stress or vessel parameters.
The two randomized clinical trials analysing the effect of orally supplemented vitamin E on clinical endpoints, in patients with mild-to-moderate renal insufficiency and in haemodialysis patients respectively, reported different results. After 4.5 years of supplementation with a daily dose of 400 IU vitamin E, renal insufficiency patients showed neither a beneficial nor a harmful effect on the combined event rate of myocardial infarction, stroke or death by cardiovascular causes. The second study reported a 50% risk reduction (RR = 0.46, 95% CI: 0.27-0.78, p = 0.014) in the combined event rate of fatal myocardial infarction, nonfatal myocardial infarction, stroke, peripheral vascular disease or unstable angina pectoris in the study arm with vitamin E supplementation of 800 IU daily.
In 16 of the 17 studies with intermediate endpoints, supplementation with vitamins was associated with a change in one or several of the examined endpoints in the expected direction. That is, concentrations of markers of oxidative stress decreased in the vitamin E group, the progression of aortic calcification (examined in only one study) was reduced, the intima media thickness decreased, and the lipid profile improved. No studies regarding costs or cost-effectiveness were identified.
Discussion
A possible explanation for the different results in the two studies with clinical endpoints may lie in the different study populations with different risk profiles, in the different dosages used during the intervention, or in variation by chance. In the absence of clinically meaningful endpoints, the relevance of studies analysing the effect of antioxidative vitamins on intermediate endpoints such as oxidative stress markers is essentially limited to demonstrating single intermediate steps of the postulated biological mechanisms by which a potentially preventive effect could be mediated. The mostly unsatisfactory planning and reporting quality of the 17 identified studies and a possible publication bias are further limitations.
Conclusion
The available evidence is not sufficient to support or to reject an effect of antioxidative vitamins on secondary prevention for cardiovascular disease for patients with chronic renal insufficiency or renal replacement therapy. There is a lack of randomized, placebo-controlled studies with a sufficient number of cases and clinical endpoints of cardiovascular disease, on the effect of antioxidative vitamins either orally applied or given by vitamin E-modified dialysers.
No data are available about supplementation with antioxidative vitamins for primary prevention of cardiovascular disease; the current evidence therefore does not allow conclusions to be drawn on this subject either. In contrast to patients with a history of cardiovascular disease but without kidney disease, for whom there is enough evidence to exclude a beneficial effect, the question of secondary prevention remains unanswered for patients with chronic renal insufficiency and renal replacement therapy. Conclusions about costs and cost-effectiveness also cannot be drawn.
PMCID: PMC3011345  PMID: 21289965
22.  Association of Serum Phosphorus Variability with Coronary Artery Calcification among Hemodialysis Patients 
PLoS ONE  2014;9(4):e93360.
Coronary artery calcification (CAC) is associated with increased mortality in patients on maintenance hemodialysis (MHD), but the pathogenesis of this condition is not well understood. We evaluated the relationship between CAC score (CACs) and variability in serum phosphorus in MHD patients. Seventy-seven adults on MHD at Huashan Hospital (Shanghai) were enrolled in July 2010. CAC was measured in all patients by computed tomography, and the CACs was calculated by the Agatston method at enrollment. Patients were divided into three categories according to their CACs (0–10, 11–400, and >400). Blood chemistry was recorded every 3 months from January 2008 to July 2010. Phosphorus variability was defined as the standard deviation (SD) or coefficient of variation (CV) of these past records. Ordinal multivariate logistic regression was used to identify predictors of CAC. The mean patient age (±SD) was 61.7 years (±11.3) and 51% of patients were men. The mean CACs was 609.6 (±1,062.9), the median CACs was 168.5, and 78% of patients had a CACs greater than 0. Multivariate analysis indicated that female gender (OR = 0.20, 95% CI = 0.07–0.55), age (OR = 2.31, 95% CI = 1.32–4.04), serum fibroblast growth factor 23 (OR = 2.25, 95% CI = 1.31–3.85), SD of phosphorus calculated from the most recent 6 measurements (OR = 2.12, 95% CI = 1.23–3.63), and CV of phosphorus calculated from the most recent 6 measurements (OR = 1.90, 95% CI = 1.16–3.11) were significantly and independently associated with CACs. These associations persisted for phosphorus variability calculated from the past 7, 8, 9, 10, and 11 follow-up values. Variability of serum phosphorus may contribute significantly to CAC, and keeping serum phosphorus stable may decrease coronary calcification and the associated morbidity and mortality in MHD patients.
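Since the variability metrics used here are simple summary statistics over each patient's serial measurements, they are straightforward to reproduce. A minimal sketch in Python (function and variable names are illustrative assumptions, not taken from the study's analysis code):

```python
from statistics import mean, stdev

def phosphorus_variability(values, window=6):
    """SD and CV of the most recent `window` serum phosphorus values.

    `values` is one patient's chronologically ordered phosphorus record;
    the study used the most recent 6 (and up to 11) measurements.
    """
    recent = values[-window:]
    sd = stdev(recent)          # sample standard deviation
    cv = sd / mean(recent)      # coefficient of variation
    return sd, cv

# Hypothetical quarterly phosphorus values (mmol/L) for one patient
sd, cv = phosphorus_variability([1.8, 2.4, 1.6, 2.1, 2.7, 1.9])
print(f"SD = {sd:.2f} mmol/L, CV = {cv:.2f}")
```

Either statistic can then be entered as a covariate in the ordinal logistic regression on the three CACs categories, as the study describes.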
doi:10.1371/journal.pone.0093360
PMCID: PMC3991577  PMID: 24747427
23.  Intermittent Preventive Treatment of Malaria Provides Substantial Protection against Malaria in Children Already Protected by an Insecticide-Treated Bednet in Mali: A Randomised, Double-Blind, Placebo-Controlled Trial 
PLoS Medicine  2011;8(2):e1000407.
A randomized trial reported by Alassane Dicko and colleagues shows that intermittent preventive treatment for malaria in children who are protected from mosquitoes by insecticide-treated bednets provides substantial protection from malaria.
Background
Previous studies have shown that in areas of seasonal malaria transmission, intermittent preventive treatment of malaria in children (IPTc), targeting the transmission season, reduces the incidence of clinical malaria. However, these studies were conducted in communities with low coverage with insecticide-treated nets (ITNs). Whether IPTc provides additional protection to children sleeping under an ITN has not been established.
Methods and Findings
To assess whether IPTc provides additional protection to children sleeping under an ITN, we conducted a randomised, double-blind, placebo-controlled trial of IPTc with sulphadoxine pyrimethamine (SP) plus amodiaquine (AQ) in three localities in Kati, Mali. After screening, eligible children aged 3–59 mo were given a long-lasting insecticide-treated net (LLIN) and randomised to receive three rounds of active drugs or placebos. Treatments were administered under observation at monthly intervals during the high malaria transmission season in August, September, and October 2008. Adverse events were monitored immediately after the administration of each course of IPTc and throughout the follow-up period. The primary endpoint was clinical episodes of malaria recorded through passive surveillance by study clinicians available at all times during the follow-up. Cross-sectional surveys were conducted in 150 randomly selected children weekly and in all children at the end of the malaria transmission season to assess usage of ITNs and the impact of IPTc on the prevalence of malaria, anaemia, and malnutrition. Cox regression was used to compare incidence rates between intervention and control arms. The effects of IPTc on the prevalence of malaria infection and anaemia were estimated using logistic regression. In total, 3,065 children were screened and 3,017 (1,508 in the control and 1,509 in the intervention arm) were enrolled in the study. 1,485 children (98.5%) in the control arm and 1,481 (98.1%) in the intervention arm completed follow-up. During the intervention period, the proportion of children reported to have slept under an ITN was 99.7% in the control arm and 99.3% in the intervention arm (p = 0.45). A total of 672 episodes of clinical malaria defined as fever or a history of fever and the presence of at least 5,000 asexual forms of Plasmodium falciparum per microlitre (incidence rate of 1.90; 95% confidence interval [CI] 1.76–2.05 episodes per person year) were observed in the control arm versus 126 (incidence rate of 0.34; 95% CI 0.29–0.41 episodes per person year) in the intervention arm, indicating a protective effect (PE) of 82% (95% CI 78%–85%) (p<0.001) on the primary endpoint. There were 15 episodes of severe malaria in children in the control arm compared to two in children in the intervention group, giving a PE of 87% (95% CI 42%–99%) (p = 0.001). IPTc reduced the prevalence of malaria infection by 85% (95% CI 73%–92%) (p<0.001) during the intervention period and by 46% (95% CI 31%–68%) (p<0.001) at the end of the intervention period. The prevalence of moderate anaemia (haemoglobin [Hb] <8 g/dl) was reduced by 47% (95% CI 15%–67%) (p<0.007) at the end of the intervention period. The frequencies of adverse events were similar between the two arms. There was no drug-related serious adverse event.
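The headline estimate can be sanity-checked directly from the two incidence rates, since the protective effect is one minus the incidence rate ratio. The trial's formal analysis used Cox regression, so the following Python snippet is only a back-of-the-envelope check:

```python
def protective_efficacy(rate_intervention, rate_control):
    """Protective efficacy as 1 minus the incidence rate ratio."""
    return 1 - rate_intervention / rate_control

# Published incidence rates, in episodes per person-year
pe = protective_efficacy(0.34, 1.90)
print(f"PE = {pe:.0%}")  # 82%, matching the reported estimate
```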
Conclusions
IPTc given during the malaria transmission season provided substantial protection against clinical episodes of malaria, malaria infection, and anaemia in children using an LLIN. SP+AQ was safe and well tolerated. These findings indicate that IPTc could make a valuable contribution to malaria control in areas of seasonal malaria transmission alongside other interventions.
Trial Registration
ClinicalTrials.gov NCT00738946
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Malaria accounts for one in five of all childhood deaths in Africa, and of the one million annual malaria deaths worldwide, over 75% occur in African children <5 years old infected with Plasmodium falciparum. Malaria also causes severe morbidity in children, such as anemia, low birth weight, epilepsy, and neurological problems, which compromise the health and development of millions of children living in malaria-endemic areas. Because much of the impact of malaria on African children can be effectively prevented, significant efforts have been made in recent years to improve malaria control, such as the implementation of intermittent preventive treatment (IPT) of malaria.
IPT involves the administration of antimalarial drugs at defined time intervals to individuals, regardless of whether they are known to be infected with malaria, to prevent morbidity and mortality. IPT was initially recommended for pregnant women, and the strategy was later extended to infants (IPTi). More recently, intermittent preventive treatment of malaria in children (IPTc) was introduced, which is designed to protect children, including those over one year of age, against seasonal malaria transmission.
Why Was This Study Done?
Large clinical trials have shown that IPTc involving the administration of two to three doses of an antimalarial drug (sulphadoxine pyrimethamine [SP] and artesunate [AS] or amodiaquine [AQ]) during the high malaria transmission season effectively reduces the incidence of malaria. However, these studies were conducted in countries where the use of insecticide-treated bednets (an intervention that provides at least 50% protection against morbidity from malaria and is the main tool used for malaria control in most of sub-Saharan Africa) was relatively low. It is therefore unclear whether IPTc is as effective in children who sleep under insecticide-treated bednets as it has been shown to be in communities where bednet usage is low. To answer this question, the researchers conducted a randomized, placebo-controlled trial of IPTc with SP+AQ (chosen because of the effectiveness of this combination in a pilot study) in children who slept under an insecticide-treated bednet in an area of seasonal malaria transmission in Mali.
What Did the Researchers Do and Find?
The researchers enrolled 3,017 eligible children aged 3–59 months into a randomized double-blind, placebo-controlled trial during the 2008 malaria transmission season in Mali. All children were given a long-lasting insecticide-treated bednet at the start of the study with instructions to their family on the correct use of the net. Children were then randomized into two arms—1,509 were allocated to the intervention group and 1,508 to the control group—to receive three courses of IPTc with SP plus AQ or placebos given at monthly intervals during the peak malaria transmission season. The researchers monitored the incidence of malaria throughout the malaria season and also monitored the use of long-lasting insecticide-treated bednets throughout the study period. In addition, researchers conducted a cross-sectional survey in 150 randomly selected children every week and in every child enrolled in the trial 6 weeks after the last course of IPTc, to measure their temperature, height and weight, and blood hemoglobin and parasite level.
The number of children who slept under their long-lasting insecticide-treated bednet was similar in both arms. During the intervention period, the researchers observed a total of 672 episodes of clinical malaria (defined as fever or a history of fever and the presence of at least 5,000 asexual forms of Plasmodium falciparum per microliter) in the control arm versus 126 episodes in the intervention arm, an incidence rate of 1.90 episodes per person year in the control arm versus 0.34 in the intervention arm, giving a protective efficacy of 82% against clinical malaria and 87% against severe malaria. IPTc reduced the prevalence of malaria infection during the intervention period by 85% and by 46% at the end of the intervention period. The prevalence of moderate anemia was also reduced (by 47%) at the end of the intervention period. The frequencies of adverse events were similar between the two arms and there were no drug-related serious adverse events.
What Do These Findings Mean?
The results of this study show that, during the peak malaria transmission season in Mali, IPTc provides substantial additional protection against episodes of clinical malaria and severe malaria in children sleeping under long-lasting insecticide-treated bednets. In addition, intermittent preventive treatment of malaria with SP plus AQ appears to be safe and well tolerated in children.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000407.
This topic is further discussed in two PLoS Medicine research articles by Konaté et al. and Bojang et al., and in a PLoS Medicine Perspective by Beeson
Roll Back Malaria has information about malaria in children, including intervention strategies and an information sheet on insecticide-treated bednets
UNICEF also provides comprehensive information about malaria in children
The Intermittent Preventive Treatment in Infants Consortium (IPTi) provides information on intermittent preventive treatment in infants
doi:10.1371/journal.pmed.1000407
PMCID: PMC3032550  PMID: 21304923
24.  Guided optimization of fluid status in haemodialysis patients 
Background. Achieving normohydration remains a non-trivial issue in haemodialysis therapy. Guiding the haemodialysis patient along the path between fluid overload and dehydration should be the clinical target, although this target can be difficult to achieve in practice. Objective and clinically applicable methods for determining normohydration status on an individual basis are needed to help identify an appropriate target weight.
Methods. The aim of this prospective trial was to guide the patient population of an entire dialysis centre towards normohydration over the course of approximately 1 year. Fluid status was assessed frequently (at least monthly) in haemodialysis patients (n = 52) with the body composition monitor (BCM), which is based on whole-body bioimpedance spectroscopy and provides the clinician with an objective target for normohydration. The patient population was divided into three groups: a hyperhydrated group (relative fluid overload >15% of extracellular water (ECW); n = 13; Group A), an adverse event group (patients with more than two adverse events in the previous 4 weeks; n = 12; Group B) and the remaining patients (n = 27; Group C).
Results. In the hyperhydrated group (Group A), fluid overload was reduced by 2.0 L (P < 0.001) without increasing the occurrence of intradialytic adverse events. This resulted in a reduction in systolic blood pressure of 25 mmHg (P = 0.012). Additionally, a 35% reduction in antihypertensive medication (P = 0.031) was achieved. In the adverse event group (Group B), the fluid status was increased by 1.3 L (P = 0.004) resulting in a 73% reduction in intradialytic adverse events (P < 0.001) without significantly increasing the blood pressure.
Conclusion. The BCM provides an objective and clinically applicable assessment of normohydration. Guiding patients towards this normohydration target leads to better control of hypertension in hyperhydrated patients, fewer intradialytic adverse events and improved cardiac function.
doi:10.1093/ndt/gfp487
PMCID: PMC2809248  PMID: 19793930
adverse event; bioimpedance spectroscopy; fluid overload; hypertension; normohydration
25.  Persistent organ dysfunction plus death: a novel, composite outcome measure for critical care trials 
Critical Care  2011;15(2):R98.
Introduction
Due to resource limitations, few critical care interventions have been rigorously evaluated with adequately powered randomized clinical trials (RCTs). There is a need to improve the efficiency of RCTs in critical care so that more definitive, high-quality RCTs can be completed with the available resources. The objective of this study was to validate and demonstrate the utility of a novel composite outcome measure, persistent organ dysfunction (POD) plus death, for clinical trials of critically ill patients.
Methods
We performed a secondary analysis of a dataset from a prospective randomized trial involving 38 intensive care units (ICUs) in Canada, Europe, and the United States. We define POD as the persistence of organ dysfunction requiring supportive technologies during the convalescent phase of critical illness; it is present when a patient has an ongoing requirement for vasopressors, dialysis, or mechanical ventilation at the outcome assessment time points. In 600 patients enrolled in a randomized trial of nutrition therapy and followed prospectively for six months, we evaluated the prevalence of POD and its association with outcome.
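Because POD is defined by a fixed rule over a patient's support requirements, the composite endpoint reduces to a simple classification. A minimal Python sketch under the definition above (the field names are illustrative assumptions, not from the study's database):

```python
from dataclasses import dataclass

@dataclass
class PatientStatus:
    """One patient's status at an outcome assessment time point."""
    died: bool
    on_vasopressors: bool
    on_dialysis: bool
    on_mechanical_ventilation: bool

def pod_plus_death(s: PatientStatus) -> bool:
    """True if the composite endpoint (POD or death) is met.

    POD: an ongoing requirement for vasopressors, dialysis,
    or mechanical ventilation in a surviving patient.
    """
    pod = s.on_vasopressors or s.on_dialysis or s.on_mechanical_ventilation
    return s.died or pod

# A Day-28 survivor still requiring dialysis meets the composite endpoint
print(pod_plus_death(PatientStatus(False, False, True, False)))  # True
```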
Results
At 28 days, 2.3% of patients had circulatory failure, 13.7% had renal failure, 8.7% had respiratory failure, and 27.2% had died, for an overall prevalence of POD + death of 46.0%. Of survivors at Day 28, those with POD, compared to those without POD, had a higher mortality rate during the six-month follow-up period, longer ICU and hospital stays, and a reduced quality of life at three months. Given these rates of POD + death and using a two-sided Chi-squared test at alpha = 0.05, we would require 616 patients per arm to detect a 25% relative risk reduction (RRR) in mortality, but only 286 per arm to detect the same RRR in POD + mortality.
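These sample sizes are consistent with the standard normal-approximation formula for comparing two proportions, assuming 80% power (the abstract states the alpha level but not the power, so that is an assumption here). A sketch reproducing them to within rounding:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p1, rrr, alpha=0.05, power=0.80):
    """Patients per arm for a two-sided test comparing two proportions,
    detecting a relative risk reduction `rrr` from control event rate `p1`."""
    p2 = p1 * (1 - rrr)                      # event rate in the treated arm
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2                    # pooled event rate
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

print(n_per_arm(0.272, 0.25))  # mortality alone: 615 per arm
print(n_per_arm(0.460, 0.25))  # POD + death:     285 per arm
```

The small gaps relative to the published 616 and 286 plausibly reflect rounding conventions or a slightly different variance formula in the original calculation; the contrast in required sample sizes, which is the point of the abstract, is unchanged.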
Conclusions
POD + death may be a valid composite outcome measure that, compared to mortality endpoints, may reduce the sample size requirements of clinical trials in critically ill patients. Further validation in larger clinical trials is required.
doi:10.1186/cc10110
PMCID: PMC3219367  PMID: 21418560
