
1.  Comparison of Pegfilgrastim Prescribing Practice to National Guidelines at a University Hospital Outpatient Oncology Clinic 
Journal of Oncology Practice  2012;9(4):203-206.
At one institution, approximately one half of primary prophylaxis pegfilgrastim was not indicated per published guidelines, highlighting a need to change prescribing practices to reduce costs without harming patients.
Pegfilgrastim reduces the risk of febrile neutropenia (FN) and is indicated as primary prophylaxis when the risk of FN approaches 20% in each chemotherapy cycle. There have been few reports evaluating the appropriate use of pegfilgrastim in comparison with published guidelines. We sought to determine possible over-prescribing as a way to maintain quality and reduce cost.
A retrospective medical record review was performed to determine whether pegfilgrastim was used appropriately in the primary prophylaxis of FN in chemotherapy regimens with less than 20% risk of FN. Patients were identified by means of administrative records, and data were collected from the electronic medical record at an academic cancer center outpatient clinic serving approximately 13,000 patients per year.
Two hundred ninety-two patients were identified, of whom 124 were initially evaluated and 88 were included. Thirty-three patients (37%) had no risk factors, and 20 (22%) had one risk factor that would justify pegfilgrastim use with low- or intermediate-risk regimens. The most common cancer diagnosis of patients with zero or one risk factor was lymphoma, and the most common regimens with overuse of pegfilgrastim were doxorubicin-bleomycin-vinblastine-dacarbazine (ABVD) and rituximab-cyclophosphamide-doxorubicin-vincristine-prednisone (R-CHOP). One hundred eighty-four pegfilgrastim doses (46%) were classified as avoidable. The cost to the health system for unnecessary drug use was $712,264 in 1 year.
At one institution, approximately one half of all primary prophylaxis pegfilgrastim was not indicated per published guidelines. This represents an excellent opportunity to change prescribing practices to reduce costs without harming patients.
PMCID: PMC3710170  PMID: 23942922
2.  Pegfilgrastim prophylaxis is associated with a lower risk of hospitalization of cancer patients than filgrastim prophylaxis: a retrospective United States claims analysis of granulocyte colony-stimulating factors (G-CSF) 
BMC Cancer  2013;13:11.
Myelosuppressive chemotherapy can lead to dose-limiting febrile neutropenia. Prophylactic use of recombinant human G-CSF such as daily filgrastim and once-per-cycle pegfilgrastim may reduce the incidence of febrile neutropenia. This comparative study examined the effect of pegfilgrastim versus daily filgrastim on the risk of hospitalization.
This retrospective United States claims analysis utilized 2004–2009 data for filgrastim- and pegfilgrastim-treated patients receiving chemotherapy for non-Hodgkin’s lymphoma (NHL) or breast, lung, ovarian, or colorectal cancers. Cycles in which pegfilgrastim or filgrastim was administered within 5 days from initiation of chemotherapy (considered to represent prophylaxis) were pooled for analysis. Neutropenia-related hospitalization and other healthcare encounters were defined with a “narrow” criterion for claims with an ICD-9 code for neutropenia and with a “broad” criterion for claims with an ICD-9 code for neutropenia, fever, or infection. Odds ratios (OR) for hospitalization and 95% confidence intervals (CI) were estimated by generalized estimating equation (GEE) models and adjusted for patient, tumor, and treatment characteristics. Per-cycle healthcare utilization and costs were examined for cycles with pegfilgrastim or filgrastim prophylaxis.
We identified 3,535 patients receiving G-CSF prophylaxis, representing 12,056 chemotherapy cycles (11,683 pegfilgrastim, 373 filgrastim). The mean duration of filgrastim prophylaxis in the sample was 4.8 days. The mean duration of pegfilgrastim prophylaxis in the sample was 1.0 day, consistent with the recommended dosage of pegfilgrastim - a single injection once per chemotherapy cycle. Cycles with prophylactic pegfilgrastim were associated with a decreased risk of neutropenia-related hospitalization (narrow definition: OR = 0.43, 95% CI: 0.16–1.13; broad definition: OR = 0.38, 95% CI: 0.24–0.59) and all-cause hospitalization (OR = 0.50, 95% CI: 0.35–0.72) versus cycles with prophylactic filgrastim. For neutropenia-related utilization by setting of care, there were more ambulatory visits and hospitalizations per cycle associated with filgrastim prophylaxis than with pegfilgrastim prophylaxis. Mean per-cycle neutropenia-related costs were also higher with prophylactic filgrastim than with prophylactic pegfilgrastim.
In this comparative effectiveness study, pegfilgrastim prophylaxis was associated with a reduced risk of neutropenia-related or all-cause hospitalization relative to filgrastim prophylaxis.
PMCID: PMC3559272  PMID: 23298389
3.  Results of a prospective dose intensity and neutropenia prophylaxis evaluation programme (DIEPP) in cancer patients at risk of febrile neutropenia due to myelosuppressive chemotherapy 
Wiener Klinische Wochenschrift  2016;128:238-247.
To describe the incidence of febrile neutropenia (FN) and use of pegfilgrastim in cancer patients with high overall risk of FN and to investigate the relationship between granulocyte-colony stimulating factor (G-CSF) guideline adherence and chemotherapy delivery in Central and Eastern Europe (CEE) and Austria.
Dose Intensity Evaluation Program and Prophylaxis (DIEPP) was a multicentre, prospective, observational study of adult patients with breast cancer, lymphoma, lung cancer, gastric cancer, and ovarian cancer who received chemotherapy with pegfilgrastim support and who had an overall risk of FN ≥20%. Physicians assessed patient risk factors and reported their reasons for administering pegfilgrastim.
Patients were enrolled from 113 centres in CEE and Austria between August 2010 and July 2013, and data were analysed from 1072 patients. The most common tumour types were breast cancer (50%) and lymphoma (24%). FN incidence was 5% overall. FN occurred in 3% of patients (28/875) who received pegfilgrastim as primary prophylaxis (PP) and in 13% of patients (19/142) who received it as secondary prophylaxis (SP); 79% of FN events in SP patients occurred in the first cycle, before pegfilgrastim was administered. The three most frequently chosen reasons for using pegfilgrastim were planned chemotherapy with high FN risk, female gender, and advanced disease. Overall, 40% of patients received >90% of their planned chemotherapy dose within 3 days of the planned schedule.
FN incidence was relatively low with pegfilgrastim PP in patients with a physician-assessed overall FN risk of ≥20%. The most important reasons for pegfilgrastim use were consistent with the investigators’ risk assessment and international guidelines.
PMCID: PMC4861750  PMID: 26745973
Febrile neutropenia; Neoplasms; Chemotherapy; Granulocyte colony-stimulating factor; Observational study
4.  Countering the Misincentivization of Cancer Medicine by Real-Time Personal Professional Education 
In the United States, public and private payer misincentivization of medical care and the invisibility of costs to the consumers of that care have conspired to create unsustainable growth in health care expenditure that undermines our economy, diminishes our productivity, and limits our international competitiveness. Cancer medicine provides a small yet salient example. On average, Medicare reimburses oncologists 6% above the average acquisition price for essential anticancer agents and supportive therapies. The costs of these agents vary across a stunning five orders of magnitude, from a few dollars to more than $400,000 per course of treatment. The profitability to providers varies across approximately four orders of magnitude, from cents to thousands of dollars per treatment. National guidelines (National Comprehensive Cancer Network [NCCN], American Society of Clinical Oncology [ASCO]) help providers select the most effective therapies without regard for cost.
We created an oncologist-to-oncologist professional education program to help cancer physicians optimally use expensive long-acting white blood cell growth factors, in accordance with these national guidelines. We then compared their use across a population of approximately 97,000 Medicare members before and after our intervention. Baseline use was recorded over two consecutive quarters (2009 to 2010). In March 2010, our oncologists initiated real-time discussions with the oncologists of 22 separate groups if these agents were ordered for use with regimens that placed patients at less than 10% risk of febrile neutropenia, according to NCCN guidelines. Neither NCCN nor ASCO recommends the routine use of these agents in this low-risk group. The care of 82 such patients was thoroughly discussed in the following 6 months.
The monthly costs for these agents decreased by more than 50% by the final month of our intervention, although savings began immediately, reducing costs by more than $150,000 per quarter. No episode of febrile neutropenia was recorded in any patient in the intervention group. Extrapolated to the entire Medicare population, these savings would amount to $30 million each month.
We conclude that personal, oncologist-to-oncologist, real-time professional education will favorably modify oncologic prescribing behavior and can do so with significant immediate savings at no risk to patients with cancer.
PMCID: PMC3457828
5.  Information from Pharmaceutical Companies and the Quality, Quantity, and Cost of Physicians' Prescribing: A Systematic Review 
PLoS Medicine  2010;7(10):e1000352.
Geoff Spurling and colleagues report findings of a systematic review looking at the relationship between exposure to promotional material from pharmaceutical companies and the quality, quantity, and cost of prescribing. They fail to find evidence of improvements in prescribing after exposure, and find some evidence of an association with higher prescribing frequency, higher costs, or lower prescribing quality.
Pharmaceutical companies spent $57.5 billion on pharmaceutical promotion in the United States in 2004. The industry claims that promotion provides scientific and educational information to physicians. While some evidence indicates that promotion may adversely influence prescribing, physicians hold a wide range of views about pharmaceutical promotion. The objective of this review is to examine the relationship between exposure to information from pharmaceutical companies and the quality, quantity, and cost of physicians' prescribing.
Methods and Findings
We searched for studies of physicians with prescribing rights who were exposed to information from pharmaceutical companies (promotional or otherwise). Exposures included pharmaceutical sales representative visits, journal advertisements, attendance at pharmaceutical sponsored meetings, mailed information, prescribing software, and participation in sponsored clinical trials. The outcomes measured were quality, quantity, and cost of physicians' prescribing. We searched Medline (1966 to February 2008), International Pharmaceutical Abstracts (1970 to February 2008), Embase (1997 to February 2008), Current Contents (2001 to 2008), and Central (The Cochrane Library Issue 3, 2007) using search terms developed with an expert librarian. Additionally, we reviewed reference lists and contacted experts and pharmaceutical companies for information. Randomized and observational studies evaluating information from pharmaceutical companies and measures of physicians' prescribing were independently appraised for methodological quality by two authors. Studies were excluded where insufficient study information precluded appraisal. The full text of 255 articles was retrieved from electronic databases (7,185 studies) and other sources (138 studies). Articles were then excluded because they did not fulfil inclusion criteria (179) or quality appraisal criteria (18), leaving 58 included studies with 87 distinct analyses. Data were extracted independently by two authors and a narrative synthesis performed following the MOOSE guidelines. Of the set of studies examining prescribing quality outcomes, five found associations between exposure to pharmaceutical company information and lower quality prescribing, four did not detect an association, and one found associations with lower and higher quality prescribing. Thirty-eight included studies found associations between exposure and higher frequency of prescribing and 13 did not detect an association.
Five included studies found evidence for association with higher costs, four found no association, and one found an association with lower costs. The narrative synthesis finding of variable results was supported by a meta-analysis of studies of prescribing frequency that found significant heterogeneity. The observational nature of most included studies is the main limitation of this review.
With rare exceptions, studies of exposure to information provided directly by pharmaceutical companies have found associations with higher prescribing frequency, higher costs, or lower prescribing quality or have not found significant associations. We did not find evidence of net improvements in prescribing, but the available literature does not exclude the possibility that prescribing may sometimes be improved. Still, we recommend that practitioners follow the precautionary principle and thus avoid exposure to information from pharmaceutical companies.
Please see later in the article for the Editors' Summary
Editors' Summary
A prescription drug is a medication that can be supplied only with a written instruction (“prescription”) from a physician or other licensed healthcare professional. In 2009, 3.9 billion drug prescriptions were dispensed in the US alone and US pharmaceutical companies made US$300 billion in sales revenue. Every year, a large proportion of this revenue is spent on drug promotion. In 2004, for example, a quarter of US drug revenue was spent on pharmaceutical promotion. The pharmaceutical industry claims that drug promotion—visits from pharmaceutical sales representatives, advertisements in journals and prescribing software, sponsorship of meetings, mailed information—helps to inform and educate healthcare professionals about the risks and benefits of their products and thereby ensures that patients receive the best possible care. Physicians, however, hold a wide range of views about pharmaceutical promotion. Some see it as a useful and convenient source of information. Others deny that they are influenced by pharmaceutical company promotion but claim that it influences other physicians. Meanwhile, several professional organizations have called for tighter control of promotional activities because of fears that pharmaceutical promotion might encourage physicians to prescribe inappropriate or needlessly expensive drugs.
Why Was This Study Done?
But is there any evidence that pharmaceutical promotion adversely influences prescribing? Reviews of the research literature undertaken in 2000 and 2005 provide some evidence that drug promotion influences prescribing behavior. However, these reviews only partly assessed the relationship between information from pharmaceutical companies and prescribing costs and quality and are now out of date. In this study, therefore, the researchers undertake a systematic review (a study that uses predefined criteria to identify all the research on a given topic) to reexamine the relationship between exposure to information from pharmaceutical companies and the quality, quantity, and cost of physicians' prescribing.
What Did the Researchers Do and Find?
The researchers searched the literature for studies of licensed physicians who were exposed to promotional and other information from pharmaceutical companies. They identified 58 studies that included a measure of exposure to any type of information directly provided by pharmaceutical companies and a measure of physicians' prescribing behavior. They then undertook a “narrative synthesis,” a descriptive analysis of the data in these studies. Ten of the studies, they report, examined the relationship between exposure to pharmaceutical company information and prescribing quality (as judged, for example, by physician drug choices in response to clinical vignettes). All but one of these studies suggested that exposure to drug company information was associated with lower prescribing quality or no association was detected. In the 51 studies that examined the relationship between exposure to drug company information and prescribing frequency, exposure to information was associated with more frequent prescribing or no association was detected. Thus, for example, 17 out of 29 studies of the effect of pharmaceutical sales representatives' visits found an association between visits and increased prescribing; none found an association with less frequent prescribing. Finally, eight studies examined the relationship between exposure to pharmaceutical company information and prescribing costs. With one exception, these studies indicated that exposure to information was associated with a higher cost of prescribing or no association was detected. So, for example, one study found that physicians with low prescribing costs were more likely to have rarely or never read promotional mail or journal advertisements from pharmaceutical companies than physicians with high prescribing costs.
What Do These Findings Mean?
With rare exceptions, these findings suggest that exposure to pharmaceutical company information is associated with either no effect on physicians' prescribing behavior or with adverse effects (reduced quality, increased frequency, or increased costs). Because most of the studies included in the review were observational studies—the physicians in the studies were not randomly selected to receive or not receive drug company information—it is not possible to conclude that exposure to information actually causes any changes in physician behavior. Furthermore, although these findings provide no evidence for any net improvement in prescribing after exposure to pharmaceutical company information, the researchers note that it would be wrong to conclude that improvements do not sometimes happen. The findings support the case for reforms to reduce the negative influence of pharmaceutical promotion on prescribing.
Additional Information
Please access these Web sites via the online version of this summary at
Wikipedia has pages on prescription drugs and on pharmaceutical marketing (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The UK General Medical Council provides guidelines on good practice in prescribing medicines
The US Food and Drug Administration provides information on prescription drugs and on its Bad Ad Program
Healthy Skepticism is an international nonprofit membership association that aims to improve health by reducing harm from misleading health information
The Drug Promotion Database was developed by the World Health Organization Department of Essential Drugs & Medicines Policy and Health Action International Europe to address unethical and inappropriate drug promotion
PMCID: PMC2957394  PMID: 20976098
6.  Multicentre, Prospective Observational Study of Pegfilgrastim Primary Prophylaxis in Patients at High Risk of Febrile Neutropenia in Poland: PROFIL Study 
Contemporary Oncology  2015;19(3):214-219.
Aim of the study
PROFIL was a prospective observational study conducted to investigate physicians’ evaluation of febrile neutropenia (FN) risk and reasons for giving pegfilgrastim primary prophylaxis (PP) in routine clinical practice in Poland.
Material and methods
Adult cancer patients treated with chemotherapy (CT), assessed by investigators as having high overall FN risk, and who received pegfilgrastim in cycle 1 were enrolled between March 2009 and September 2010. Investigators assessed the FN risk of the CT regimen, individual risk factors, and overall FN risk, and were asked to provide the most important reasons for providing pegfilgrastim PP. Investigator-assessed CT FN risk was compared with guideline classification.
Data were analysed from 1006 breast, ovarian, and lung cancer, and non-Hodgkin (NHL) and Hodgkin lymphoma (HL) patients. The most important reasons for using pegfilgrastim PP were high CT FN risk and advanced disease; these were consistent across tumour types and treatment intent. The investigators generally assessed high CT FN risk in agreement with guideline classification. Febrile neutropenia occurred in 4% of patients, most commonly in HL, NHL, and patients with advanced disease.
High CT FN risk and advanced stage of disease were found to be the most important reasons for providing pegfilgrastim PP by physicians in Poland.
PMCID: PMC4631289  PMID: 26557762
febrile neutropenia; chemotherapy; pegfilgrastim; GCSF; risk category
7.  A single dose of pegfilgrastim compared with daily filgrastim for supporting neutrophil recovery in patients treated for low-to-intermediate risk acute myeloid leukemia: results from a randomized, double-blind, phase 2 trial 
BMC Cancer  2008;8:195.
Patients with acute myeloid leukemia (AML) are often neutropenic as a result of their disease. Furthermore, these patients typically experience profound neutropenia following induction and/or consolidation chemotherapy and this may result in serious, potentially life-threatening, infection. This randomized, double-blind, phase 2 clinical trial compared the efficacy and tolerability of pegfilgrastim with filgrastim for assisting neutrophil recovery following induction and consolidation chemotherapy for de novo AML in patients with low-to-intermediate risk cytogenetics.
Patients (n = 84) received one or two courses of standard induction chemotherapy (idarubicin + cytarabine), followed by one course of consolidation therapy (high-dose cytarabine) if complete remission was achieved. They were randomized to receive either single-dose pegfilgrastim 6 mg or daily filgrastim 5 μg/kg, beginning 24 hours after induction and consolidation chemotherapy.
The median time to recovery from severe neutropenia was 22.0 days for both pegfilgrastim (n = 42) and filgrastim (n = 41) groups during Induction 1 (difference 0.0 days; 95% CI: -1.9 to 1.9). During Consolidation, recovery occurred after a median of 17.0 days for pegfilgrastim versus 16.5 days for filgrastim (difference 0.5 days; 95% CI: -1.1 to 2.1). Therapeutic pegfilgrastim serum concentrations were maintained throughout neutropenia. Pegfilgrastim was well tolerated, with an adverse event profile similar to that of filgrastim.
These data suggest no clinically meaningful difference between a single dose of pegfilgrastim and multiple daily doses of filgrastim for shortening the duration of severe neutropenia following chemotherapy in de novo AML patients with low-to-intermediate risk cytogenetics.
Trial registration NCT00114764
PMCID: PMC2483721  PMID: 18616811
8.  Assessing the impact of a targeted electronic medical record intervention on the use of growth factor in cancer patients 
Patients receiving chemotherapy are at risk for febrile neutropenia following treatment. The American Society of Clinical Oncology (ASCO) and National Comprehensive Cancer Network (NCCN) recommend screening patients for risk of febrile neutropenia and risk stratification based on likelihood of febrile neutropenia events. The impact of the implementation of an electronic medical record (EMR) system on physician compliance with growth factor support guidelines has not been studied.
To investigate whether implementation of automated orders in EMRs can improve adherence to national guidelines in prophylactic G-CSF use in chemotherapy patients.
A retrospective chart review of cancer patients receiving chemotherapy from January 1, 2007 to August 1, 2008 (pre-EMR) and January 1, 2011 to December 31, 2011 (post-EMR) was conducted. Institutional adherence to ASCO and NCCN guidelines for G-CSF use after the implementation of automatic electronic orders for pegfilgrastim in patients who received a high-risk chemotherapy regimen was examined. The results were compared with a similar study that had been conducted before the implementation of the EMR system.
The proportion of regimens with guideline-concordant growth factor use or nonuse was 75.6% in the post-intervention arm, compared with 67.5% in the pre-intervention arm, a statistically significant difference in compliance with national guidelines (P = .041, chi-square test). The post-EMR implementation data on 1,042 individual new chemotherapy regimens showed correct use of G-CSF in 89.13% of high-risk regimens and 58.74% of intermediate-risk regimens with risk factors, and incorrect use in 26.23% of intermediate-risk regimens without risk factors and 19.34% of low-risk regimens. Compliance was highest for high- and low-risk regimens, because growth factor was built into the chemotherapy plans of high-risk regimens and omitted from low-risk regimens.
This project was limited by a change in EMR systems at West Virginia University hospitals on January 1, 2009. All pre-EMR data was collected before 2009 and could not be further collected once the project began in 2013.
Appropriateness of growth factor usage can be improved when integrated into an EMR. This can improve compliance and adherence to national recommendations. Further development and understanding of EMR is needed to improve usage to meet national guidelines, with particular attention paid to integration of risk factors into EMR to improve growth factor usage compliance.
PMCID: PMC4792513  PMID: 26287033
9.  Risk of chemotherapy-induced febrile neutropenia in cancer patients receiving pegfilgrastim prophylaxis: does timing of administration matter? 
Supportive Care in Cancer  2015;24:2309-2316.
Contrary to the approved indication for pegfilgrastim prophylaxis, some patients receive it on the same day as the last administration of chemotherapy in clinical practice, which could adversely impact risk of febrile neutropenia (FN). An evaluation of the timing of pegfilgrastim prophylaxis and FN risk was undertaken.
A retrospective cohort design and data from two US private health care claims repositories were employed. The study population comprised adults who received intermediate/high-risk chemotherapy regimens for solid tumors or non-Hodgkin’s lymphoma (NHL) and received pegfilgrastim prophylaxis in ≥1 cycle; all cycles with pegfilgrastim were pooled for analyses. Odds ratios (OR) for FN during the cycle were estimated for patients who received pegfilgrastim on the same day (day 1) as the last administration of chemotherapy versus days 2–4 from chemotherapy completion.
The study population included 45,592 patients who received pegfilgrastim in 179,152 cycles (n = 37,095 in cycle 1); in 12% of cycles, patients received pegfilgrastim on the same day as chemotherapy. Odds of FN were higher for patients receiving pegfilgrastim prophylaxis on the same day as chemotherapy versus days 2–4 from chemotherapy in cycle 1 (OR = 1.6, 95% CI = 1.3–1.9, p < 0.001) and all cycles (OR = 1.5, 95% CI = 1.3–1.6, p < 0.001).
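The arithmetic behind an odds ratio such as the OR = 1.6 reported above can be sketched from a 2×2 table. This is a minimal unadjusted illustration only: the study estimated its ORs with GEE models adjusted for patient, tumor, and treatment characteristics, and the cell counts below are hypothetical, not taken from the paper.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio for a 2x2 table with a Wald 95% CI.

    a, b = FN events and non-events with same-day pegfilgrastim;
    c, d = FN events and non-events with days 2-4 pegfilgrastim.
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR): sqrt of the summed reciprocal cell counts
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts chosen only to illustrate the calculation:
or_, lo, hi = odds_ratio_ci(120, 880, 800, 9200)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR = 1.57 (95% CI 1.28-1.92)
```

A GEE model, as used in the study, additionally accounts for correlation among repeated cycles from the same patient, which a simple 2×2 calculation ignores.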
In this large-scale evaluation of adults who received intermediate/high-risk regimens for solid tumors or NHL in US clinical practice, FN incidence was found to be significantly higher among those who received pegfilgrastim prophylaxis on the same day as chemotherapy completion versus days 2–4 from chemotherapy completion, underscoring the importance of adhering to the indicated administration schedule.
Electronic supplementary material
The online version of this article (doi:10.1007/s00520-015-3036-7) contains supplementary material, which is available to authorized users.
PMCID: PMC4805705  PMID: 26607482
Febrile neutropenia; Pegfilgrastim; Neulasta; Granulocyte colony-stimulating factor
10.  Granulocyte colony-stimulating factors for febrile neutropenia prophylaxis following chemotherapy: systematic review and meta-analysis 
BMC Cancer  2011;11:404.
Febrile neutropenia (FN) occurs following myelosuppressive chemotherapy and is associated with morbidity, mortality, costs, and chemotherapy reductions and delays. Granulocyte colony-stimulating factors (G-CSFs) stimulate neutrophil production and may reduce FN incidence when given prophylactically following chemotherapy.
A systematic review and meta-analysis assessed the effectiveness of G-CSFs (pegfilgrastim, filgrastim or lenograstim) in reducing FN incidence in adults undergoing chemotherapy for solid tumours or lymphoma. G-CSFs were compared with no primary G-CSF prophylaxis and with one another. Nine databases were searched in December 2009. Meta-analysis used a random effects model due to heterogeneity.
Twenty studies compared primary G-CSF prophylaxis with no primary G-CSF prophylaxis: five studies of pegfilgrastim; ten of filgrastim; and five of lenograstim. All three G-CSFs significantly reduced FN incidence, with relative risks of 0.30 (95% CI: 0.14 to 0.65) for pegfilgrastim, 0.57 (95% CI: 0.48 to 0.69) for filgrastim, and 0.62 (95% CI: 0.44 to 0.88) for lenograstim. Overall, the relative risk of FN for any primary G-CSF prophylaxis versus no primary G-CSF prophylaxis was 0.51 (95% CI: 0.41 to 0.62). In terms of comparisons between different G-CSFs, five studies compared pegfilgrastim with filgrastim. FN incidence was significantly lower for pegfilgrastim than filgrastim, with a relative risk of 0.66 (95% CI: 0.44 to 0.98).
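As a sketch of the relative-risk arithmetic underlying figures like the 0.30 reported for pegfilgrastim, a risk ratio and its log-scale confidence interval can be computed from event counts. The counts below are hypothetical, for illustration only; the meta-analysis pooled study-level estimates under a random effects model rather than computing a single ratio from raw counts.

```python
import math

def relative_risk_ci(e1, n1, e0, n0, z=1.96):
    """Relative risk of FN with a 95% CI computed on the log scale.

    e1/n1 = FN events / patients with primary G-CSF prophylaxis;
    e0/n0 = FN events / patients without primary prophylaxis.
    """
    rr = (e1 / n1) / (e0 / n0)
    # Standard error of log(RR) for two independent binomial samples
    se = math.sqrt(1/e1 - 1/n1 + 1/e0 - 1/n0)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 30/500 FN with prophylaxis vs 100/500 without
rr, lo, hi = relative_risk_ci(30, 500, 100, 500)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # RR = 0.30 (95% CI 0.20-0.44)
```

A random effects meta-analysis then combines such per-study log(RR) estimates, weighting each by the inverse of its variance plus a between-study heterogeneity term.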
Primary prophylaxis with G-CSFs significantly reduces FN incidence in adults undergoing chemotherapy for solid tumours or lymphoma. Pegfilgrastim reduces FN incidence to a significantly greater extent than filgrastim.
PMCID: PMC3203098  PMID: 21943360
11.  Efficacy and safety of lipegfilgrastim versus pegfilgrastim: a randomized, multicenter, active-control phase 3 trial in patients with breast cancer receiving doxorubicin/docetaxel chemotherapy 
BMC Cancer  2013;13:386.
Lipegfilgrastim is a novel glyco-pegylated granulocyte-colony stimulating factor in development for neutropenia prophylaxis in cancer patients receiving chemotherapy. This phase III, double-blind, randomized, active-controlled, noninferiority trial compared the efficacy and safety of lipegfilgrastim versus pegfilgrastim in chemotherapy-naïve breast cancer patients receiving doxorubicin/docetaxel chemotherapy.
Patients with high-risk stage II, III, or IV breast cancer and an absolute neutrophil count ≥1.5 × 10⁹ cells/L were randomized to a single 6-mg subcutaneous injection of lipegfilgrastim (n = 101) or pegfilgrastim (n = 101) on day 2 of each 21-day chemotherapy cycle (4 cycles maximum). The primary efficacy endpoint was the duration of severe neutropenia during cycle 1.
Cycle 1: The mean duration of severe neutropenia for the lipegfilgrastim and pegfilgrastim groups was 0.7 and 0.8 days, respectively (λ = −0.218 [95% confidence interval: −0.498, 0.062], p = 0.126), and no severe neutropenia was observed in 56% and 49% of patients in the lipegfilgrastim and pegfilgrastim groups, respectively. All cycles: In the efficacy population, febrile neutropenia occurred in three pegfilgrastim-treated patients (all in cycle 1) and zero lipegfilgrastim-treated patients. Drug-related adverse events in the safety population were reported in 28% and 26% of patients in the lipegfilgrastim and pegfilgrastim groups, respectively.
This study demonstrates that lipegfilgrastim 6 mg is as effective as pegfilgrastim in reducing neutropenia in patients with breast cancer receiving myelosuppressive chemotherapy.
Trial Registration
EudraCT No. 2009-015999-10
The study protocol, two global amendments (Nos. 1 and 2), informed consent documents, and other appropriate study-related documents were reviewed and approved by the Ministry of Health of Ukraine Central Ethics Committee and local independent ethics committees (IECs).
PMCID: PMC3751756  PMID: 23945072
Neutropenia; Febrile neutropenia; Breast cancer; Recombinant granulocyte-colony stimulating factor; Lipegfilgrastim; Pegfilgrastim
12.  A Phase III Study of Balugrastim Versus Pegfilgrastim in Breast Cancer Patients Receiving Chemotherapy With Doxorubicin and Docetaxel 
The Oncologist  2015;21(1):7-15.
This study aimed to evaluate the efficacy and safety of once-per-cycle balugrastim versus pegfilgrastim for neutrophil support in breast cancer patients receiving myelosuppressive chemotherapy. Once-per-cycle balugrastim is not inferior to pegfilgrastim in reducing cycle 1 duration of severe neutropenia in breast cancer patients receiving chemotherapy. Both drugs have comparable safety profiles.
This study aimed to evaluate the efficacy and safety of once-per-cycle balugrastim versus pegfilgrastim for neutrophil support in breast cancer patients receiving myelosuppressive chemotherapy.
Breast cancer patients (n = 256) were randomized to 40 or 50 mg of subcutaneous balugrastim or 6 mg of pegfilgrastim ≈24 hours after chemotherapy (60 mg/m² doxorubicin and 75 mg/m² docetaxel, every 21 days for up to 4 cycles). The primary efficacy parameter was the duration of severe neutropenia (DSN) in cycle 1. Secondary parameters included DSN (cycles 2–4), absolute neutrophil count (ANC) nadir, febrile neutropenia rates, and time to ANC recovery (cycles 1–4). Safety, pharmacokinetics, and immunogenicity were assessed.
Mean cycle 1 DSN was 1.0 day with 40 mg of balugrastim, 1.3 with 50 mg of balugrastim, and 1.2 with pegfilgrastim (upper limit of 95% confidence intervals for between-group DSN differences was <1.0 day for both balugrastim doses versus pegfilgrastim). Between-group efficacy parameters were comparable except for time to ANC recovery in cycle 1 (40 mg of balugrastim, 2.0 days; 50 mg of balugrastim, 2.1; pegfilgrastim, 2.6). Median terminal elimination half-life was ≈37 hours for 40 mg of balugrastim, ≈36 for 50 mg of balugrastim, and ≈45 for pegfilgrastim. Antibody response to balugrastim was low and transient, with no neutralizing effect.
Once-per-cycle balugrastim is not inferior to pegfilgrastim in reducing cycle 1 DSN in breast cancer patients receiving chemotherapy; both drugs have comparable safety profiles.
Implications for Practice:
This paper provides efficacy and safety data for a new, once-per-cycle granulocyte colony-stimulating factor, balugrastim, for the prevention of chemotherapy-induced neutropenia in patients with breast cancer receiving myelosuppressive chemotherapy. In this phase III trial, balugrastim was shown to be not inferior to pegfilgrastim in the duration of severe neutropenia in cycle 1 of doxorubicin/docetaxel chemotherapy, and the safety profiles of the two agents were similar. Once-per-cycle balugrastim is a safe and effective alternative to pegfilgrastim for hematopoietic support in patients with breast cancer receiving myelosuppressive chemotherapy associated with a greater than 20% risk of developing febrile neutropenia.
PMCID: PMC4709202  PMID: 26668251
Balugrastim; Breast cancer; Granulocyte colony-stimulating factor; Myelosuppressive therapy; Pegfilgrastim
13.  Feasibility of four cycles of docetaxel and cyclophosphamide every 14 days as an adjuvant regimen for breast cancer: A Wisconsin Oncology Network Study 
Clinical Breast Cancer  2013;14(3):205-211.
Dose-dense therapies have had a major impact on reducing toxicity and improving outcomes in breast cancer. A combination of docetaxel plus cyclophosphamide (TC) every 3 weeks has emerged as a common chemotherapy regimen used for treatment of node-negative or lower-risk node-positive breast cancer. We tested whether it is feasible to deliver TC on a dose-dense schedule, with therapy completed within 10 weeks.
We enrolled women with early stage breast cancer on a single-arm phase II study of adjuvant dose-dense TC (ddTC) through a regional oncology network. All women completed primary surgery prior to accrual, and subsequent therapy with TC was deemed appropriate by the treating physician. Planned treatment was docetaxel 75 mg/m² plus cyclophosphamide 600 mg/m² every 2 weeks for 4 cycles with subcutaneous pegfilgrastim 6 mg administered 24-48 hours after each chemotherapy cycle.
Of 42 women enrolled, 41 were evaluable by prespecified criteria. Of these, 37 (90.2%) completed therapy within 10 weeks and 34 (83%) completed therapy at 8 weeks without dose modification. Rates of neuropathy were similar to those reported previously. The rate of neutropenic fever was low (2.5%). Rash and palmar-plantar erythrodysesthesia were common and reached grade 3 in four subjects (9.8%).
Dose-dense TC is feasible with tolerability profiles similar to standard TC and a low likelihood of neutropenic fever. This study supports further clinical development of this 8-week adjuvant chemotherapy regimen.
PMCID: PMC4000576  PMID: 24342730
chemotherapy; granulocyte-colony stimulating factor; pegfilgrastim
14.  Internet-Based Device-Assisted Remote Monitoring of Cardiovascular Implantable Electronic Devices 
Executive Summary
The objective of this Medical Advisory Secretariat (MAS) report was to conduct a systematic review of the available published evidence on the safety, effectiveness, and cost-effectiveness of Internet-based device-assisted remote monitoring systems (RMSs) for therapeutic cardiac implantable electronic devices (CIEDs) such as pacemakers (PMs), implantable cardioverter-defibrillators (ICDs), and cardiac resynchronization therapy (CRT) devices. The MAS evidence-based review was performed to support public financing decisions.
Clinical Need: Condition and Target Population
Sudden cardiac death (SCD) is a major cause of fatalities in developed countries. In the United States, almost half a million people die of SCD annually, resulting in more deaths than stroke, lung cancer, breast cancer, and AIDS combined. In Canada, each year more than 40,000 people die from a cardiovascular-related cause; approximately half of these deaths are attributable to SCD.
Most cases of SCD occur in the general population typically in those without a known history of heart disease. Most SCDs are caused by cardiac arrhythmia, an abnormal heart rhythm caused by malfunctions of the heart’s electrical system. Up to half of patients with significant heart failure (HF) also have advanced conduction abnormalities.
Cardiac arrhythmias are managed by a variety of drugs, ablative procedures, and therapeutic CIEDs. The range of CIEDs includes pacemakers (PMs), implantable cardioverter-defibrillators (ICDs), and cardiac resynchronization therapy (CRT) devices. Bradycardia is the main indication for PMs and individuals at high risk for SCD are often treated by ICDs.
Heart failure (HF) is also a significant health problem and is the most frequent cause of hospitalization in those over 65 years of age. Patients with moderate to severe HF may also have cardiac arrhythmias, although the cause may be related more to heart pump or haemodynamic failure. The presence of HF, however, increases the risk of SCD five-fold, regardless of aetiology. Patients with HF who remain highly symptomatic despite optimal drug therapy are sometimes also treated with CRT devices.
With an increasing prevalence of age-related conditions such as chronic HF and the expanding indications for ICD therapy, the rate of ICD placement has been dramatically increasing. The appropriate indications for ICD placement, as well as the rate of ICD placement, are increasingly an issue. In the United States, after the introduction of expanded coverage of ICDs, a national ICD registry was created in 2005 to track these devices. A recent survey based on this national ICD registry reported that 22.5% (25,145) of patients had received a non-evidence based ICD and that these patients experienced significantly higher in-hospital mortality and post-procedural complications.
In addition to the increased ICD device placement and the upfront device costs, there is the need for lifelong follow-up or surveillance, placing a significant burden on patients and device clinics. In 2007, over 1.6 million CIEDs were implanted in Europe and the United States, which translates to over 5.5 million patient encounters per year if the recommended follow-up practices are considered. A safe and effective RMS could potentially improve the efficiency of long-term follow-up of patients and their CIEDs.
In addition to being therapeutic devices, CIEDs have extensive diagnostic abilities. All CIEDs can be interrogated and reprogrammed during an in-clinic visit using an inductive programming wand. Remote monitoring would allow patients to transmit information recorded in their devices from the comfort of their own homes. Currently most ICD devices also have the potential to be remotely monitored. Remote monitoring (RM) can be used to check system integrity, to alert on arrhythmic episodes, and to potentially replace in-clinic follow-ups and manage disease remotely. These devices cannot currently be reprogrammed remotely, although this feature is being tested in pilot settings.
Every RMS is specifically designed by a manufacturer for their cardiac implant devices. For Internet-based device-assisted RMSs, this customization includes details such as web application, multiplatform sensors, custom algorithms, programming information, and types and methods of alerting patients and/or physicians. The addition of peripherals for monitoring weight and pressure or communicating with patients through the onsite communicators also varies by manufacturer. Internet-based device-assisted RMSs for CIEDs are intended to function as a surveillance system rather than an emergency system.
Health care providers therefore need to learn each application, and as more than one application may be used at one site, multiple applications may need to be reviewed for alarms. All RMSs deliver system integrity alerting; however, some systems seem to be better geared to fast arrhythmic alerting, whereas other systems appear to be more intended for remote follow-up or supplemental remote disease management. The different RMSs may therefore have different impacts on workflow organization because of their varying frequency of interrogation and methods of alerts. The integration of these proprietary RM web-based registry systems with hospital-based electronic health record systems has so far not been commonly implemented.
Currently there are 2 general types of RMSs: those that transmit device diagnostic information automatically and without patient assistance to secure Internet-based registry systems, and those that require patient assistance to transmit information. Both systems employ the use of preprogrammed alerts that are either transmitted automatically or at regular scheduled intervals to patients and/or physicians.
The current web applications, programming, and registry systems differ greatly between the manufacturers of transmitting cardiac devices. In Canada there are currently 4 manufacturers—Medtronic Inc., Biotronik, Boston Scientific Corp., and St Jude Medical Inc.—which have regulatory approval for remote transmitting CIEDs. Remote monitoring systems are proprietary to the manufacturer of the implant device. An RMS for one device will not work with another device, and the RMS may not work with all versions of the manufacturer’s devices.
All Internet-based device-assisted RMSs have common components. The implanted device is equipped with a micro-antenna that communicates with a small external device (at bedside or wearable) commonly known as the transmitter. Transmitters are able to interrogate programmed parameters and diagnostic data stored in the patients’ implant device. The information transfer to the communicator can occur at preset time intervals with the participation of the patient (waving a wand over the device) or it can be sent automatically (wirelessly) without their participation. The encrypted data are then uploaded to an Internet-based database on a secure central server. The data processing facilities at the central database, depending on the clinical urgency, can trigger an alert for the physician(s) that can be sent via email, fax, text message, or phone. The details are also posted on the secure website for viewing by the physician (or their delegate) at their convenience.
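The implant-to-physician data flow described above can be sketched as a minimal triage function. This is a hypothetical illustration of the architecture, not any vendor's API; all names and the urgency rule are assumptions:

```python
from dataclasses import dataclass
from enum import Enum

class Urgency(Enum):
    ROUTINE = 1   # posted to the secure web registry for scheduled review
    ALERT = 2     # pushed to the physician immediately

@dataclass
class Transmission:
    device_id: str
    payload: dict      # interrogated parameters and stored diagnostics
    urgency: Urgency   # assigned by the central server's processing rules

def route(tx: Transmission) -> str:
    # Mirrors the flow described above: urgent events trigger a push
    # notification (email/fax/text/phone); everything else is posted to
    # the secure website for the physician to view at their convenience.
    if tx.urgency is Urgency.ALERT:
        return f"push notification for {tx.device_id} (email/fax/text/phone)"
    return f"{tx.device_id} posted to secure web registry"

print(route(Transmission("ICD-0001", {"lead_impedance_ohm": 1500}, Urgency.ALERT)))
```

The key design point, as the summary notes, is that these systems are surveillance rather than emergency systems: the central server classifies each upload, and only alert-level events interrupt the clinic's workflow.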
Research Questions
The research directions and specific research questions for this evidence review were as follows:
To identify the Internet-based device-assisted RMSs available for follow-up of patients with therapeutic CIEDs such as PMs, ICDs, and CRT devices.
To identify the potential risks, operational issues, or organizational issues related to Internet-based device-assisted RM for CIEDs.
To evaluate the safety, acceptability, and effectiveness of Internet-based device-assisted RMSs for CIEDs such as PMs, ICDs, and CRT devices.
To evaluate the safety, effectiveness, and cost-effectiveness of Internet-based device-assisted RMSs for CIEDs compared to usual outpatient in-office monitoring strategies.
To evaluate the resource implications or budget impact of RMSs for CIEDs in Ontario, Canada.
Research Methods
Literature Search
The review included a systematic review of published scientific literature and consultations with experts and manufacturers of all 4 approved RMSs for CIEDs in Canada. Information on CIED cardiac implant clinics was also obtained from Provincial Programs, a division within the Ministry of Health and Long-Term Care with a mandate for cardiac implant specialty care. Various administrative databases and registries were used to outline the current clinical follow-up burden of CIEDs in Ontario. The provincial population-based ICD database developed and maintained by the Institute for Clinical Evaluative Sciences (ICES) was used to review the current follow-up practices with Ontario patients implanted with ICD devices.
Search Strategy
A literature search was performed on September 21, 2010 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Network of Agencies for Health Technology Assessment (INAHTA) for studies published from 1950 to September 2010. Search alerts were generated and reviewed for additional relevant literature until December 31, 2010. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search.
Inclusion Criteria
published between 1950 and September 2010;
English language full-reports and human studies;
original reports including clinical evaluations of Internet-based device-assisted RMSs for CIEDs in clinical settings;
reports including standardized measurements on outcome events such as technical success, safety, effectiveness, cost, measures of health care utilization, morbidity, mortality, quality of life or patient satisfaction;
randomized controlled trials (RCTs), systematic reviews and meta-analyses, cohort and controlled clinical studies.
Exclusion Criteria
non-systematic reviews, letters, comments and editorials;
reports not involving standardized outcome events;
clinical reports not involving Internet-based device assisted RM systems for CIEDs in clinical settings;
reports involving studies testing or validating algorithms without RM;
studies with small samples (<10 subjects).
Outcomes of Interest
The outcomes of interest included: technical outcomes, emergency department visits, complications, major adverse events, symptoms, hospital admissions, clinic visits (scheduled and/or unscheduled), survival, morbidity (disease progression, stroke, etc.), patient satisfaction, and quality of life.
Summary of Findings
The MAS evidence review was performed to review available evidence on Internet-based device-assisted RMSs for CIEDs published until September 2010. The search identified 6 systematic reviews, 7 randomized controlled trials, and 19 reports for 16 cohort studies—3 of these being registry-based and 4 being multi-centered. The evidence is summarized in the 3 sections that follow.
1. Effectiveness of Remote Monitoring Systems of CIEDs for Cardiac Arrhythmia and Device Functioning
In total, 15 reports on 13 cohort studies involving investigations with 4 different RMSs for CIEDs in cardiology implant clinic groups were identified in the review. The 4 RMSs were: Care Link Network® (Medtronic Inc., Minneapolis, MN, USA); Home Monitoring® (Biotronik, Berlin, Germany); HouseCall II® (St Jude Medical Inc., St Paul, MN, USA); and a manufacturer-independent RMS. Eight of these reports were with the Home Monitoring® RMS (12,949 patients), 3 were with the Care Link® RMS (167 patients), 1 was with the HouseCall II® RMS (124 patients), and 1 was with a manufacturer-independent RMS (44 patients). All of the studies, except for 2 in the United States (1 with Home Monitoring® and 1 with HouseCall II®), were performed in European countries.
The RMSs in the studies were evaluated with different cardiac implant device populations: ICDs only (6 studies), ICD and CRT devices (3 studies), PM and ICD and CRT devices (4 studies), and PMs only (2 studies). The patient populations were predominately male (range, 52%–87%) in all studies, with mean ages ranging from 58 to 76 years. One study population was unique in that RMSs were evaluated for ICDs implanted solely for primary prevention in young patients (mean age, 44 years) with Brugada syndrome, an inherited condition that increases the risk of sudden cardiac death in young adults.
Most of the cohort studies reported on the feasibility of RMSs in clinical settings with limited follow-up. In the short follow-up periods of the studies, the majority of the events were related to detection of medical events rather than system configuration or device abnormalities. The results of the studies are summarized below:
The interrogation of devices on the web platform, both for continuous and scheduled transmissions, was significantly quicker with remote follow-up, both for nurses and physicians.
In a case-control study focusing on a Brugada population–based registry with patients followed-up remotely, there were significantly fewer outpatient visits and greater detection of inappropriate shocks. One death occurred in the control group not followed remotely and post-mortem analysis indicated early signs of lead failure prior to the event.
Two studies examined the role of RMSs in following ICD leads under regulatory advisory in a European clinical setting and noted:
– Fewer inappropriate shocks were administered in the RM group.
– Urgent in-office interrogations and surgical revisions were performed within 12 days of remote alerts.
– No signs of lead fracture were detected at in-office follow-up; all were detected at remote follow-up.
Only 1 study reported evaluating quality of life in patients followed up remotely at 3 and 6 months; no values were reported.
Patient satisfaction was evaluated in 5 cohort studies, all in short-term follow-up: 1 for the Home Monitoring® RMS, 3 for the Care Link® RMS, and 1 for the HouseCall II® RMS.
– Patients reported receiving a sense of security from the transmitter, a good relationship with nurses and physicians, positive implications for their health, and satisfaction with RM and organization of services.
– Although patients reported that the system was easy to implement and required less than 10 minutes to transmit information, a variable proportion of patients (range, 9%–39%) reported that they needed the assistance of a caregiver for their transmission.
– The majority of patients would recommend RM to other ICD patients.
– Patients with hearing or other physical or mental conditions hindering the use of the system were excluded from studies, but the frequency of this was not reported.
Physician satisfaction was evaluated in 3 studies, all with the Care Link® RMS:
– Physicians reported an ease of use and high satisfaction with a generally short-term use of the RMS.
– Physicians reported being able to address the problems in unscheduled patient transmissions or physician initiated transmissions remotely, and were able to handle the majority of the troubleshooting calls remotely.
– Both nurses and physicians reported a high level of satisfaction with the web registry system.
2. Effectiveness of Remote Monitoring Systems in Heart Failure Patients for Cardiac Arrhythmia and Heart Failure Episodes
Remote follow-up of HF patients implanted with ICD or CRT devices, generally managed in specialized HF clinics, was evaluated in 3 cohort studies: 1 involved the Home Monitoring® RMS and 2 involved the Care Link® RMS. In these RMSs, in addition to the standard diagnostic features, the cardiac devices continuously assess other variables such as patient activity, mean heart rate, and heart rate variability. Intra-thoracic impedance, a proxy measure for lung fluid overload, was also measured in the Care Link® studies. The overall diagnostic performance of these measures cannot be evaluated, as the information was not reported for patients who did not experience intra-thoracic impedance threshold crossings or did not undergo interventions. The trial results involved descriptive information on transmissions and alerts in patients experiencing high morbidity and hospitalization in the short study periods.
3. Comparative Effectiveness of Remote Monitoring Systems for CIEDs
Seven RCTs were identified evaluating RMSs for CIEDs: 2 were for PMs (1276 patients) and 5 were for ICD/CRT devices (3733 patients). Studies performed in the clinical setting in the United States involved both the Care Link® RMS and the Home Monitoring® RMS, whereas all studies performed in European countries involved only the Home Monitoring® RMS.
3A. Randomized Controlled Trials of Remote Monitoring Systems for Pacemakers
Two trials, both multicenter RCTs, were conducted in different countries with different RMSs and study objectives. The PREFER trial was a large trial (897 patients) performed in the United States examining the ability of Care Link®, an Internet-based remote PM interrogation system, to detect clinically actionable events (CAEs) sooner than the current in-office follow-up supplemented with transtelephonic monitoring transmissions, a limited form of remote device interrogation. The trial results are summarized below:
In the 375-day mean follow-up, 382 patients were identified with at least 1 CAE—111 patients in the control arm and 271 in the remote arm.
The event rate detected per patient for every type of CAE, except for loss of atrial capture, was higher in the remote arm than the control arm.
The median time to first detection of CAEs (4.9 vs. 6.3 months) was significantly shorter in the RMS group compared to the control group (P < 0.0001).
Additionally, only 2% (3/190) of the CAEs in the control arm were detected during a transtelephonic monitoring transmission (the rest were detected at in-office follow-ups), whereas 66% (446/676) of the CAEs were detected during remote interrogation.
The second study, the OEDIPE trial, was a smaller trial (379 patients) performed in France evaluating the ability of the Home Monitoring® RMS to shorten PM post-operative hospitalization while preserving the safety of conventional management of longer hospital stays.
Implementation and operationalization of the RMS was reported to be successful in 91% (346/379) of the patients and represented 8144 transmissions.
In the RM group 6.5% of patients failed to send messages (10 due to improper use of the transmitter, 2 with unmanageable stress). Of the 172 patients transmitting, 108 patients sent a total of 167 warnings during the trial, with a greater proportion of warnings being attributed to medical rather than technical causes.
Forty percent had no warning message transmission and among these, 6 patients experienced a major adverse event and 1 patient experienced a non-major adverse event. Of the 6 patients having a major adverse event, 5 contacted their physician.
The mean medical reaction time was faster in the RM group (6.5 ± 7.6 days vs. 11.4 ± 11.6 days).
The mean duration of hospitalization was significantly shorter (P < 0.001) for the RM group than the control group (3.2 ± 3.2 days vs. 4.8 ± 3.7 days).
Quality of life estimates by the SF-36 questionnaire were similar for the 2 groups at 1-month follow-up.
3B. Randomized Controlled Trials Evaluating Remote Monitoring Systems for ICD or CRT Devices
The 5 studies evaluating the impact of RMSs with ICD/CRT devices were conducted in the United States and in European countries and involved 2 RMSs—Care Link® and Home Monitoring®. The objectives of the trials varied and 3 of the trials were smaller pilot investigations.
The first of the smaller studies (151 patients) evaluated patient satisfaction, achievement of patient outcomes, and the cost-effectiveness of the Care Link® RMS compared to quarterly in-office device interrogations with 1-year follow-up.
Individual outcomes such as hospitalizations, emergency department visits, and unscheduled clinic visits were not significantly different between the study groups.
Except for a significantly higher detection of atrial fibrillation in the RM group, data on ICD detection and therapy were similar in the study groups.
Health-related quality of life evaluated by the EuroQoL at 6-month or 12-month follow-up was not different between study groups.
Patients were more satisfied with their ICD care in the clinic follow-up group than in the remote follow-up group at 6-month follow-up, but were equally satisfied at 12-month follow-up.
The second small pilot trial (20 patients) examined the impact of RM follow-up with the HouseCall II® system on work schedules and cost savings in patients randomized to 2 study arms varying in the degree of remote follow-up.
The total time including device interrogation, transmission time, data analysis, and physician time required was significantly shorter for the RM follow-up group.
The in-clinic waiting time was eliminated for patients in the RM follow-up group.
The physician talk time was significantly reduced in the RM follow-up group (P < 0.05).
The time for the actual device interrogation did not differ in the study groups.
The third small trial (115 patients) examined the impact of RM with the Home Monitoring® system compared to scheduled trimonthly in-clinic visits on the number of unplanned visits, total costs, health-related quality of life (SF-36), and overall mortality.
There was a 63.2% reduction in in-office visits in the RM group.
Hospitalizations or overall mortality (values not stated) were not significantly different between the study groups.
Patient-induced visits were higher in the RM group than the in-clinic follow-up group.
The TRUST Trial
The TRUST trial was a large multicenter RCT conducted at 102 centers in the United States involving the Home Monitoring® RMS for ICD devices for 1450 patients. The primary objectives of the trial were to determine if remote follow-up could be safely substituted for in-office clinic follow-up (3 in-office visits replaced) and still enable earlier physician detection of clinically actionable events.
Adherence to the protocol follow-up schedule was significantly higher in the RM group than the in-office follow-up group (93.5% vs. 88.7%, P < 0.001).
Actionability of trimonthly scheduled checks was low (6.6%) in both study groups. Overall, actionable causes were reprogramming (76.2%), medication changes (24.8%), and lead/system revisions (4%), and these were not different between the 2 study groups.
The overall mean number of in-clinic and hospital visits was significantly lower in the RM group than the in-office follow-up group (2.1 per patient-year vs. 3.8 per patient-year, P < 0.001), representing a 45% visit reduction at 12 months.
The median time from onset of first arrhythmia to physician evaluation was significantly shorter (P < 0.001) in the RM group than in the in-office follow-up group for all arrhythmias (1 day vs. 35.5 days).
The median time to detect clinically asymptomatic arrhythmia events—atrial fibrillation (AF), ventricular fibrillation (VF), ventricular tachycardia (VT), and supra-ventricular tachycardia (SVT)—was also significantly shorter (P < 0.001) in the RM group compared to the in-office follow-up group (1 day vs. 41.5 days) and was significantly quicker for each of the clinical arrhythmia events—AF (5.5 days vs. 40 days), VT (1 day vs. 28 days), VF (1 day vs. 36 days), and SVT (2 days vs. 39 days).
System-related problems occurred infrequently in both groups—in 1.5% of patients (14/908) in the RM group and in 0.7% of patients (3/432) in the in-office follow-up group.
The overall adverse event rate over 12 months was not significantly different between the 2 groups and individual adverse events were also not significantly different between the RM group and the in-office follow-up group: death (3.4% vs. 4.9%), stroke (0.3% vs. 1.2%), and surgical intervention (6.6% vs. 4.9%), respectively.
The 12-month cumulative survival was 96.4% (95% confidence interval [CI], 95.5%–97.6%) in the RM group and 94.2% (95% CI, 91.8%–96.6%) in the in-office follow-up group, and was not significantly different between the 2 groups (P = 0.174).
The CONNECT trial, another major multicenter RCT, involved the Care Link® RMS for ICD/CRT devices in a 15-month follow-up study of 1,997 patients at 133 sites in the United States. The primary objective of the trial was to determine whether automatically transmitted physician alerts decreased the time from the occurrence of clinically relevant events to medical decisions. The trial results are summarized below:
Of the 575 clinical alerts sent in the study, 246 did not trigger an automatic physician alert. Transmission failures were related to technical issues such as the alert not being programmed or not being reset, and/or a variety of patient factors such as not being at home and the monitor not being plugged in or set up.
The overall mean time from the clinically relevant event to the clinical decision was significantly shorter (P < 0.001) by 17.4 days in the remote follow-up group (4.6 days for 172 patients) than the in-office follow-up group (22 days for 145 patients).
– The median time to a clinical decision was shorter in the remote follow-up group than in the in-office follow-up group for an AT/AF burden greater than or equal to 12 hours (3 days vs. 24 days) and a fast VF rate greater than or equal to 120 beats per minute (4 days vs. 23 days).
Although infrequent, similar low numbers of events involving low battery and VF detection/therapy turned off were noted in both groups. More alerts, however, were noted for out-of-range lead impedance in the RM group (18 vs. 6 patients), and the time to detect these critical events was significantly shorter in the RM group (same day vs. 17 days).
Total in-office clinic visits were reduced by 38% from 6.27 visits per patient-year in the in-office follow-up group to 3.29 visits per patient-year in the remote follow-up group.
Health care utilization visits (N = 6,227) that included cardiovascular-related hospitalization, emergency department visits, and unscheduled clinic visits were not significantly higher in the remote follow-up group.
The overall mean length of hospitalization was significantly shorter (P = 0.002) for those in the remote follow-up group (3.3 days vs. 4.0 days) and was shorter both for patients with ICD (3.0 days vs. 3.6 days) and CRT (3.8 days vs. 4.7 days) implants.
The mortality rate was not significantly different between the follow-up groups for the ICDs (P = 0.31) or the CRT devices with defibrillator (P = 0.46).
There is limited clinical trial information on the effectiveness of RMSs for PMs. However, for RMSs for ICD devices, multiple cohort studies and 2 large multicenter RCTs demonstrated feasibility and significant reductions in in-office clinic follow-ups with RMSs in the first year post implantation. The detection rates of clinically significant events (and asymptomatic events) were higher, and the time to a clinical decision for these events was significantly shorter, in the remote follow-up groups than in the in-office follow-up groups. The earlier detection of clinical events in the remote follow-up groups, however, was not associated with lower morbidity or mortality rates in the 1-year follow-up. The substitution of almost all the first year in-office clinic follow-ups with RM was also not associated with an increased health care utilization such as emergency department visits or hospitalizations.
The follow-up in the trials was generally short-term, up to 1 year, providing only a limited assessment of potential longer-term device/lead integrity complications or issues. None of the studies compared the different RMSs, particularly the different RMSs involving patient-scheduled transmissions or automatic transmissions. Patients’ acceptance of and satisfaction with RM were reported to be high, but the impact of RM on patients’ health-related quality of life, particularly the psychological aspects, was not evaluated thoroughly. Patients who are not technologically competent, or who have hearing or other physical/mental impairments, were identified as potentially disadvantaged with remote surveillance. Cohort studies consistently identified subgroups of patients who preferred in-office follow-up. Costs and workflow impact on the health care system were evaluated only in European or American clinical settings, and only in a limited way.
Internet-based device-assisted RMSs involve a new approach to monitoring patients, their disease progression, and their CIEDs. Remote monitoring also has the potential to improve the current postmarket surveillance systems of evolving CIEDs and their ongoing hardware and software modifications. At this point, however, there is insufficient information to evaluate the overall impact on the health care system, although the time saving and convenience to patients and physicians associated with a substitution of in-office follow-up by RM is more certain. The broader issues surrounding infrastructure, impacts on existing clinical care systems, and regulatory concerns need to be considered for the implementation of Internet-based RMSs in jurisdictions involving different clinical practices.
PMCID: PMC3377571  PMID: 23074419
15.  The impact of primary prophylaxis with granulocyte colony-stimulating factors on febrile neutropenia during chemotherapy: a systematic review and meta-analysis of randomized controlled trials 
Supportive Care in Cancer  2015;23(11):3131-3140.
The study aims to assess the relative efficacy of granulocyte colony-stimulating factor (G-CSF) products administered as primary prophylaxis (PP) to patients with cancer receiving myelosuppressive chemotherapy.
A systematic literature review identified publications (January 1990 to September 2013) of randomized controlled trials evaluating PP with filgrastim, pegfilgrastim, lenograstim, or lipegfilgrastim in adults receiving myelosuppressive chemotherapy for solid tumors or non-Hodgkin lymphoma. Direct, indirect, and mixed-treatment comparisons (MTC) were used to estimate the odds ratio and 95% credible interval of febrile neutropenia (FN) during cycle 1 and all cycles of chemotherapy combined, without adjusting for differences in relative dose intensity (RDI) between study treatment arms.
Twenty-seven publications representing 30 randomized controlled trials were included. Using MTC over all chemotherapy cycles, PP with filgrastim, pegfilgrastim, lenograstim, and lipegfilgrastim versus no G-CSF PP or placebo were associated with statistically significantly reduced FN risk. FN risk was also significantly reduced with pegfilgrastim PP versus filgrastim PP. Over all chemotherapy cycles, there was a numerical but statistically nonsignificant increase in the FN risk for lipegfilgrastim PP versus pegfilgrastim PP. Using MTC in cycle 1, PP with filgrastim, pegfilgrastim, and lipegfilgrastim versus no G-CSF PP or placebo were associated with statistically significantly reduced FN risk.
In this meta-analysis, using MTC without adjustment for RDI, PP with all G-CSFs evaluated reduced the FN risk in patients receiving myelosuppressive chemotherapy. Future studies are needed to assess the influence of RDI on FN outcomes and to eliminate potential bias between G-CSF arms receiving more intensive chemotherapy than control arms.
Electronic supplementary material
The online version of this article (doi:10.1007/s00520-015-2686-9) contains supplementary material, which is available to authorized users.
PMCID: PMC4584106  PMID: 25821144
Meta-analysis; Pegfilgrastim; Filgrastim; G-CSF; Febrile neutropenia; Lipegfilgrastim
16.  Phase III placebo-controlled, double-blind, randomized trial of pegfilgrastim to reduce the risk of febrile neutropenia in breast cancer patients receiving docetaxel/cyclophosphamide chemotherapy 
Supportive Care in Cancer  2015;23(4):1137-1143.
Pegfilgrastim is a pegylated form of filgrastim, a recombinant protein of granulocyte colony-stimulating factor, that is used to reduce the risk of febrile neutropenia (FN). Here, we report the results of a phase III trial of pegfilgrastim in breast cancer patients receiving docetaxel and cyclophosphamide (TC) chemotherapy.
We conducted a double-blind, placebo-controlled, randomized trial to determine the efficacy of pegfilgrastim in reducing the risk of FN in early-stage breast cancer patients. A total of 351 women (177 in the pegfilgrastim group and 174 in the placebo group) between 20 and 69 years of age with stage I–III invasive breast carcinoma who were to receive TC chemotherapy (docetaxel 75 mg/m2 and cyclophosphamide 600 mg/m2 every 3 weeks) as either neoadjuvant or adjuvant therapy were enrolled; 346 of these patients were treated with either pegfilgrastim (n = 173) or placebo (n = 173).
The incidence of FN was significantly lower in the pegfilgrastim group than in the placebo group (1.2% vs. 68.8%, respectively; P < 0.001). In addition, patients in the pegfilgrastim group required less hospitalization and antibiotics for FN. Most adverse events were consistent with those expected for breast cancer subjects receiving TC chemotherapy.
Pegfilgrastim is safe and significantly reduces the incidence of FN in breast cancer patients.
Electronic supplementary material
The online version of this article (doi:10.1007/s00520-014-2597-1) contains supplementary material, which is available to authorized users.
PMCID: PMC4381099  PMID: 25576433
Adjuvant therapy; Breast cancer; Febrile neutropenia; Granulocyte colony-stimulating factor (G-CSF); Pegfilgrastim; Docetaxel/cyclophosphamide therapy
17.  Clinical Utility of Vitamin D Testing 
Executive Summary
This report from the Medical Advisory Secretariat (MAS) was intended to evaluate the clinical utility of vitamin D testing in average risk Canadians and in those with kidney disease. As a separate analysis, this report also includes a systematic literature review of the prevalence of vitamin D deficiency in these two subgroups.
This evaluation did not set out to determine the serum vitamin D thresholds that might apply to non-bone health outcomes. For bone health outcomes, no high or moderate quality evidence could be found to support a target serum level above 50 nmol/L. Similarly, no high or moderate quality evidence could be found to support vitamin D’s effects in non-bone health outcomes, other than falls.
Vitamin D
Vitamin D is a lipid-soluble vitamin that acts as a hormone. It stimulates intestinal calcium absorption and is important in maintaining adequate phosphate levels for bone mineralization, bone growth, and remodelling. It is also believed to be involved in the regulation of cell proliferation and apoptosis (programmed cell death), as well as modulation of the immune system and other functions. Alone or in combination with calcium, vitamin D has also been shown to reduce the risk of fractures in elderly men (≥ 65 years) and postmenopausal women, and the risk of falls in community-dwelling seniors. However, in a comprehensive systematic review, inconsistent results were found concerning the effects of vitamin D in conditions such as cancer, all-cause mortality, and cardiovascular disease. In fact, no high or moderate quality evidence could be found concerning the effects of vitamin D in such non-bone health outcomes. Given the uncertainties surrounding the effects of vitamin D in non-bone health related outcomes, it was decided that this evaluation should focus on falls and the effects of vitamin D in bone health, exclusively within average-risk individuals and patients with kidney disease.
Synthesis of vitamin D occurs naturally in the skin through exposure to ultraviolet B (UVB) radiation from sunlight, but it can also be obtained from dietary sources including fortified foods, and supplements. Foods rich in vitamin D include fatty fish, egg yolks, fish liver oil, and some types of mushrooms. Since it is usually difficult to obtain sufficient vitamin D from non-fortified foods, either due to low content or infrequent use, most vitamin D is obtained from fortified foods, exposure to sunlight, and supplements.
Clinical Need: Condition and Target Population
Vitamin D deficiency may lead to rickets in infants and osteomalacia in adults. Factors believed to be associated with vitamin D deficiency include:
darker skin pigmentation,
winter season,
living at higher latitudes,
skin coverage,
kidney disease,
malabsorption syndromes such as Crohn’s disease, cystic fibrosis, and
genetic factors.
Patients with chronic kidney disease (CKD) are at a higher risk of vitamin D deficiency due to either renal losses or decreased synthesis of 1,25-dihydroxyvitamin D.
Health Canada currently recommends that, until the daily recommended intakes (DRI) for vitamin D are updated, Canada’s Food Guide (Eating Well with Canada’s Food Guide) should be followed with respect to vitamin D intake. Issued in 2007, the Guide recommends that Canadians consume two cups (500 ml) of fortified milk or fortified soy beverages daily in order to obtain a daily intake of 200 IU. In addition, men and women over the age of 50 should take 400 IU of vitamin D supplements daily. Additional recommendations were made for breastfed infants.
A Canadian survey evaluated the median vitamin D intake derived from diet alone (excluding supplements) among 35,000 Canadians, 10,900 of whom were from Ontario. Among Ontarian males ages 9 and up, the median daily dietary vitamin D intake ranged between 196 IU and 272 IU per day. Among females, it varied from 152 IU to 196 IU per day. In boys and girls ages 1 to 3, the median daily dietary vitamin D intake was 248 IU, while among those 4 to 8 years it was 224 IU.
Vitamin D Testing
Two laboratory tests for vitamin D are available: 25-hydroxyvitamin D, referred to as 25(OH)D, and 1,25-dihydroxyvitamin D. Vitamin D status is assessed by measuring the serum 25(OH)D levels, which can be assayed using radioimmunoassays, competitive protein-binding assays (CPBA), high pressure liquid chromatography (HPLC), and liquid chromatography-tandem mass spectrometry (LC-MS/MS). These may yield different results, with inter-assay variation reaching up to 25% (at lower serum levels) and intra-assay variation reaching 10%.
The optimal serum concentration of vitamin D has not been established and it may change across different stages of life. Similarly, there is currently no consensus on target serum vitamin D levels. There does, however, appear to be a consensus on the definition of vitamin D deficiency at 25(OH)D < 25 nmol/L, which is based on the risk of diseases such as rickets and osteomalacia. Higher target serum levels have also been proposed based on subclinical endpoints such as parathyroid hormone (PTH). Therefore, in this report, two conservative target serum levels have been adopted: 25 nmol/L (based on the risk of rickets and osteomalacia), and 40 to 50 nmol/L (based on vitamin D’s interaction with PTH).
Ontario Context
Volume & Cost
The volume of vitamin D tests done in Ontario has been increasing over the past 5 years, with a steep increase from 169,000 tests in 2007 to more than 393,400 tests in 2008. The number of tests continues to rise, with the projected number of tests for 2009 exceeding 731,000. According to the Ontario Schedule of Benefits, the billing cost of each test is $51.70 for 25(OH)D (L606, 100 LMS units, $0.517/unit) and $77.60 for 1,25-dihydroxyvitamin D (L605, 150 LMS units, $0.517/unit). Province wide, the total annual cost of vitamin D testing has increased from approximately $1.7M in 2004 to over $21.0M in 2008. The projected annual cost for 2009 is approximately $38.8M.
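The per-test billing figures follow directly from the fee-code unit counts and the per-unit rate given above. A minimal Python cross-check of that arithmetic (the function name is ours, for illustration only):

```python
# Cross-check of the Ontario Schedule of Benefits fee arithmetic quoted above.
# Unit counts (L606: 100 units; L605: 150 units) and the $0.517/unit rate are
# taken from the text.
RATE_PER_LMS_UNIT = 0.517  # dollars per LMS unit

def billing_cost(lms_units: int, rate: float = RATE_PER_LMS_UNIT) -> float:
    """Return the billed cost in dollars for a test worth `lms_units` units."""
    return round(lms_units * rate, 2)

print(billing_cost(100))  # 51.7  -> 25(OH)D, fee code L606
print(billing_cost(150))  # 77.55 -> 1,25-dihydroxyvitamin D, fee code L605
                          #          (quoted rounded as $77.6 in the report)
```

Note that 150 units at $0.517/unit comes to $77.55; the report quotes the rounded figure.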
Evidence-Based Analysis
The objective of this report is to evaluate the clinical utility of vitamin D testing in the average risk population and in those with kidney disease. As a separate analysis, the report also sought to evaluate the prevalence of vitamin D deficiency in Canada. The specific research questions addressed were thus:
What is the clinical utility of vitamin D testing in the average risk population and in subjects with kidney disease?
What is the prevalence of vitamin D deficiency in the average risk population in Canada?
What is the prevalence of vitamin D deficiency in patients with kidney disease in Canada?
Clinical utility was defined as the ability to improve bone health outcomes with the focus on the average risk population (excluding those with osteoporosis) and patients with kidney disease.
Literature Search
A literature search was performed on July 17th, 2009 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Agency for Health Technology Assessment (INAHTA) for studies published from January 1, 1998 until July 17th, 2009. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search. Articles with unknown eligibility were reviewed with a second clinical epidemiologist, then a group of epidemiologists until consensus was established. The quality of evidence was assessed as high, moderate, low or very low according to GRADE methodology.
Observational studies that evaluated the prevalence of vitamin D deficiency in Canada in the population of interest were included based on the inclusion and exclusion criteria listed below. The baseline values were used in this report in the case of interventional studies that evaluated the effect of vitamin D intake on serum levels. Studies published in grey literature were included if no studies published in the peer-reviewed literature were identified for specific outcomes or subgroups.
Considering that vitamin D status may be affected by factors such as latitude, sun exposure, food fortification, among others, the search focused on prevalence studies published in Canada. In cases where no Canadian prevalence studies were identified, the decision was made to include studies from the United States, given the similar policies in vitamin D food fortification and recommended daily intake.
Inclusion Criteria
Studies published in English
Publications that reported the prevalence of vitamin D deficiency in Canada
Studies that included subjects from the general population or with kidney disease
Studies in children or adults
Studies published between January 1998 and July 17th 2009
Exclusion Criteria
Studies that included subjects defined according to a specific disease other than kidney disease
Letters, comments, and editorials
Studies that measured the serum vitamin D levels but did not report the percentage of subjects with serum levels below a given threshold
Outcomes of Interest
Prevalence of serum vitamin D less than 25 nmol/L
Prevalence of serum vitamin D less than 40 to 50 nmol/L
Serum 25-hydroxyvitamin D was the metabolite used to assess vitamin D status. Results from adult and children studies were reported separately. Subgroup analyses according to factors that affect serum vitamin D levels (e.g., seasonal effects, skin pigmentation, and vitamin D intake) were reported if enough information was provided in the studies.
Quality of Evidence
The quality of the prevalence studies was based on the method of subject recruitment and sampling, possibility of selection bias, and generalizability to the source population. The overall quality of the trials was examined according to the GRADE Working Group criteria.
Summary of Findings
Fourteen prevalence studies examining Canadian adults and children met the eligibility criteria. With the exception of one longitudinal study, the studies had a cross-sectional design. Two studies were conducted among Canadian adults with renal disease but none studied Canadian children with renal disease (though three such US studies were included). No systematic reviews or health technology assessments that evaluated the prevalence of vitamin D deficiency in Canada were identified. Two studies were published in grey literature, consisting of a Canadian survey designed to measure serum vitamin D levels and a study in infants presented as an abstract at a conference. Also included were the results of vitamin D tests performed in community laboratories in Ontario between October 2008 and September 2009 (provided by the Ontario Association of Medical Laboratories).
Different threshold levels were used in the studies, thus we reported the percentage of subjects with serum levels of between 25 and 30 nmol/L and between 37.5 and 50 nmol/L. Some studies stratified the results according to factors affecting vitamin D status, and two used multivariate models to investigate the effects of these characteristics (including age, season, BMI, vitamin D intake, and skin pigmentation) on serum 25(OH)D levels. It is unclear, however, if these studies were adequately powered for these subgroup analyses.
Study participants generally consisted of healthy, community-dwelling subjects, and most studies excluded individuals with conditions or medications that alter vitamin D or bone metabolism, such as kidney or liver disease. Although the studies were conducted in different parts of Canada, fewer were performed in Northern latitudes, i.e., above 53°N, which corresponds to the latitude of Edmonton.
Serum vitamin D levels of < 25 to 30 nmol/L were observed in 0% to 25.5% of the subjects included in five studies; the weighted average was 3.8% (95% CI: 3.0, 4.6). The preliminary results of the Canadian survey showed that approximately 5% of the subjects had serum levels below 29.5 nmol/L. The results of over 600,000 vitamin D tests performed in Ontarian community laboratories between October 2008 and September 2009 showed that 2.6% of adults (> 18 years) had serum levels < 25 nmol/L.
The prevalence of serum vitamin D levels below 37.5-50 nmol/L reported among studies varied widely, ranging from 8% to 73.6% with a weighted average of 22.5%. The preliminary results of the CHMS survey showed that between 10% and 25% of subjects had serum levels below 37 to 48 nmol/L. The results of the vitamin D tests performed in community laboratories showed that 10% to 25% of the individuals had serum levels between 39 and 50 nmol/L.
In an attempt to explain this inter-study variation, the study results were stratified according to factors affecting serum vitamin D levels, as summarized below. These results should be interpreted with caution as none were adjusted for other potential confounders. Adequately powered multivariate analyses would be necessary to determine the contribution of risk factors to lower serum 25(OH)D levels.
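The pooled figures quoted throughout this summary are sample-size-weighted averages of study-level prevalences. A minimal Python sketch of that calculation, using hypothetical study data rather than the actual studies reviewed:

```python
# Sample-size-weighted average of study-level prevalences, as used for the
# pooled figures in this summary. The study data below are HYPOTHETICAL
# illustrations, not the actual studies included in the report.
def weighted_prevalence(studies):
    """studies: list of (n_subjects, prevalence_percent) tuples."""
    total_n = sum(n for n, _ in studies)
    return sum(n * p for n, p in studies) / total_n

hypothetical = [(500, 2.0), (300, 5.0), (200, 8.0)]
print(round(weighted_prevalence(hypothetical), 1))  # 4.1
```

Larger studies thus pull the pooled estimate toward their own prevalence, which is why the weighted averages can sit well below the upper end of the reported study-level ranges.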
Seasonal variation
Three adult studies evaluating serum vitamin D levels in different seasons observed a trend towards a higher prevalence of serum levels < 37.5 to 50 nmol/L during the winter and spring months, specifically 21% to 39%, compared to 8% to 14% in the summer. The weighted average was 23.6% over the winter/spring months and 9.6% over summer. The difference between the seasons was not statistically significant in one study and not reported in the other two studies.
Skin Pigmentation
Four studies observed a trend toward a higher prevalence of serum vitamin D levels < 37.5 to 50 nmol/L in subjects with darker skin pigmentation compared to those with lighter skin pigmentation, with weighted averages of 46.8% among adults with darker skin colour and 15.9% among those with fairer skin.
Vitamin D intake and serum levels
Four adult studies evaluated serum vitamin D levels according to vitamin D intake and showed an overall trend toward a lower prevalence of serum levels < 37.5 to 50 nmol/L with higher levels of vitamin D intake. One study observed a dose-response relationship between higher vitamin D intake from supplements, diet (milk), and sun exposure (results not adjusted for other variables). It was observed that subjects taking 50 to 400 IU or > 400 IU of vitamin D per day had a 6% and 3% prevalence of serum vitamin D level < 40 nmol/L, respectively, versus 29% in subjects not on vitamin D supplementation. Similarly, among subjects drinking one or two glasses of milk per day, the prevalence of serum vitamin D levels < 40 nmol/L was found to be 15%, versus 6% in those who drink more than two glasses of milk per day and 21% among those who do not drink milk. On the other hand, one study observed little variation in serum vitamin D levels during winter according to milk intake, with the proportion of subjects exhibiting vitamin D levels of < 40 nmol/L being 21% among those drinking 0-2 glasses per day, 26% among those drinking > 2 glasses, and 20% among non-milk drinkers.
The overall quality of evidence for the studies conducted among adults was deemed to be low, although it was considered moderate for the subgroups of skin pigmentation and seasonal variation.
Newborn, Children and Adolescents
Five Canadian studies evaluated serum vitamin D levels in newborns, children, and adolescents. In four of these, it was found that between 0% and 36% of children exhibited deficiency across age groups, with a weighted average of 6.4%. The results of over 28,000 vitamin D tests performed in children 0 to 18 years old in Ontario laboratories (Oct. 2008 to Sept. 2009) showed that 4.4% had serum levels of < 25 nmol/L.
According to two studies, 32% of infants 24 to 30 months old and 35.3% of newborns had serum vitamin D levels of < 50 nmol/L. Two studies of children 2 to 16 years old reported that 24.5% and 34% had serum vitamin D levels below 37.5 to 40 nmol/L. In both studies, older children exhibited a higher prevalence than younger children, with weighted averages of 34.4% and 10.3%, respectively. The overall weighted average of the prevalence of serum vitamin D levels < 37.5 to 50 nmol/L among pediatric studies was 25.8%. The preliminary results of the Canadian survey showed that between 10% and 25% of subjects between 6 and 11 years (N = 435) had serum levels below 50 nmol/L, while for those 12 to 19 years, 25% to 50% exhibited serum vitamin D levels below 50 nmol/L.
The effects of season, skin pigmentation, and vitamin D intake were not explored in Canadian pediatric studies. A Canadian surveillance study did, however, report 104 confirmed cases (2.9 cases per 100,000 children) of vitamin D-deficient rickets among Canadian children ages 1 to 18 between 2002 and 2004, 57 (55%) of which were from Ontario. The highest incidence occurred among children living in the North, i.e., the Yukon, Northwest Territories, and Nunavut. In 92 (89%) cases, skin pigmentation was categorized as intermediate to dark, 98 (94%) had been breastfed, and 25 (24%) were offspring of immigrants to Canada. There were no cases of rickets in children receiving ≥ 400 IU of vitamin D supplementation per day.
Overall, the quality of evidence of the studies of children was considered very low.
Kidney Disease
Two studies evaluated serum vitamin D levels in Canadian adults with kidney disease. The first included 128 patients with chronic kidney disease stages 3 to 5, 38% of which had serum vitamin D levels of < 37.5 nmol/L (measured between April and July). This is higher than what was reported in Canadian studies of the general population during the summer months (i.e. between 8% and 14%). In the second, which examined 419 subjects who had received a renal transplantation (mean time since transplantation: 7.2 ± 6.4 years), the prevalence of serum vitamin D levels < 40 nmol/L was 27.3%. The authors concluded that the prevalence observed in the study population was similar to what is expected in the general population.
No studies evaluating serum vitamin D levels in Canadian pediatric patients with kidney disease could be identified, although three such US studies among children with chronic kidney disease stages 1 to 5 were included. The mean age varied between 10.7 and 12.5 years in two studies but was not reported in the third. Across all three studies, the prevalence of serum vitamin D levels below the range of 37.5 to 50 nmol/L varied between 21% and 39%, which is not considerably different from what was observed in studies of healthy Canadian children (24% to 35%).
Overall, the quality of evidence in adults and children with kidney disease was considered very low.
Clinical Utility of Vitamin D Testing
A high quality comprehensive systematic review published in August 2007 evaluated the association between serum vitamin D levels and different bone health outcomes in different age groups. A total of 72 studies were included. The authors observed that there was a trend towards improvement in some bone health outcomes with higher serum vitamin D levels. Nevertheless, precise thresholds for improved bone health outcomes could not be defined across age groups. Further, no new studies on the association were identified during an updated systematic review on vitamin D published in July 2009.
With regard to non-bone health outcomes, there is no high or even moderate quality evidence that supports the effectiveness of vitamin D in outcomes such as cancer, cardiovascular outcomes, and all-cause mortality. Even if there is any residual uncertainty, there is no evidence that testing vitamin D levels encourages adherence to Health Canada’s guidelines for vitamin D intake. A normal serum vitamin D threshold required to prevent non-bone health related conditions cannot be resolved until a causal effect or correlation has been demonstrated between vitamin D levels and these conditions. This is an ongoing research issue around which there is currently too much uncertainty to base any conclusions that would support routine vitamin D testing.
For patients with chronic kidney disease (CKD), there is again no high or moderate quality evidence supporting improved outcomes through the use of calcitriol or vitamin D analogs. In the absence of such data, the authors of the guidelines for CKD patients consider it best practice to maintain serum calcium and phosphate at normal levels, while supplementation with active vitamin D should be considered if serum PTH levels are elevated. As previously stated, the authors of guidelines for CKD patients believe that there is not enough evidence to support routine vitamin D [25(OH)D] testing. According to what is stated in the guidelines, decisions regarding the commencement or discontinuation of treatment with calcitriol or vitamin D analogs should be based on serum PTH, calcium, and phosphate levels.
Limitations associated with the evidence of vitamin D testing include ambiguities in the definition of an ‘adequate threshold level’ and both inter- and intra-assay variability. The MAS considers that both the lack of a consensus on target serum vitamin D levels and the assay limitations directly affect and undermine the clinical utility of testing. The evidence supporting the clinical utility of vitamin D testing is thus considered to be of very low quality.
Daily vitamin D intake, either through diet or supplementation, should follow Health Canada’s recommendations for healthy individuals of different age groups. For those with medical conditions such as renal disease, liver disease, and malabsorption syndromes, and for those taking medications that may affect vitamin D absorption/metabolism, physician guidance should be followed with respect to both vitamin D testing and supplementation.
Studies indicate that vitamin D, alone or in combination with calcium, may decrease the risk of fractures and falls among older adults.
There is no high or moderate quality evidence to support the effectiveness of vitamin D in other outcomes such as cancer, cardiovascular outcomes, and all-cause mortality.
Studies suggest that the prevalence of vitamin D deficiency in Canadian adults and children is relatively low (approximately 5%), and between 10% and 25% have serum levels below 40 to 50 nmol/L (based on very low to low grade evidence).
Given the limitations associated with serum vitamin D measurement, ambiguities in the definition of a ‘target serum level’, and the availability of clear guidelines on vitamin D supplementation from Health Canada, vitamin D testing is not warranted for the average risk population.
Health Canada has issued recommendations regarding the adequate daily intake of vitamin D, but current studies suggest that the mean dietary intake is below these recommendations. Accordingly, Health Canada’s guidelines and recommendations should be promoted.
Based on a moderate level of evidence, individuals with darker skin pigmentation appear to have a higher risk of low serum vitamin D levels than those with lighter skin pigmentation and therefore may need to be specially targeted with respect to optimum vitamin D intake. The causal nature of this association is currently unclear.
Individuals with medical conditions such as renal and liver disease, osteoporosis, and malabsorption syndromes, as well as those taking medications that may affect vitamin D absorption/metabolism, should follow their physician’s guidance concerning both vitamin D testing and supplementation.
PMCID: PMC3377517  PMID: 23074397
18.  Evaluation of Prescriber Responses to Pharmacist Recommendations Communicated by Fax in a Medication Therapy Management Program (MTMP) 
As defined by the Medicare Prescription Drug, Improvement, and Modernization Act of 2003, medication therapy management programs (MTMPs) must be designed to decrease adverse drug events and improve patient outcomes by promoting appropriate medication use. WellPoint Inc. contracted with the pharmacist-run University of Arizona College of Pharmacy Medication Management Center (UA MMC) to provide a pilot telephone-based MTMP to approximately 5,000 high-risk beneficiaries from among its nearly 2 million Medicare prescription drug plan (PDP) beneficiaries. Eligibility for the program was determined by a minimum of 2 of 6 chronic diseases (dyslipidemia, cardiovascular disease, depression, diabetes mellitus, congestive heart failure, and chronic obstructive pulmonary disease; at least 1 of the latter 2 diseases must be present), at least 3 Part-D covered medications, and greater than $4,000 per year in predicted drug spending. In addition to these criteria, WellPoint Inc. used the Johns Hopkins adjusted clinical groups (ACG) predictive model to identify the high-risk beneficiaries to be enrolled in the program. Medication therapy reviews were conducted for these patients. If any medication-related problems (MRPs) were identified, the patient’s prescribers were contacted via a fax communication with recommendation(s) to resolve these MRPs. The UA MMC fax interventions were categorized as cost saving, guideline adherence, or safety concerns.
To (a) determine prescriber responses to pharmacist-initiated recommendations in an MTMP for the 3 intervention categories, (b) compare prescriber responses between intervention categories, and (c) compare prescriber response by prescriber type (primary care physician [PCP] vs. specialist) within each intervention category.
A retrospective analysis of pharmacist-initiated interventions from August through December 2008 was performed using data collected from the UA MMC database. Data were collected on intervention category (cost saving, guideline adherence, or safety concerns), and responses of prescribers were recorded as either approval or decline (no response was considered decline). Prescriber specialty was identified from searching records of state medical boards. Logistic regression analyses with the robust variance option to adjust for correlation within prescribers were conducted to compare prescriber approval rates between and within intervention categories. Significance was assessed at alpha 0.05.
Of 4,967 Medicare Part D beneficiaries determined to be MTMP-eligible, 4,277 beneficiaries (86.1%) were available for assessment (400 declined, 186 disenrolled, and 104 were deceased). Pharmacists initiated 1,548 valid medication recommendations (i.e., recommendations were excluded for deceased patients, incorrect prescribers, and where prescriber specialty was not identified). These recommendations for 1,174 beneficiaries (27.5% of those available) were faxed to prescribers requesting approval. Mean (SD) age for beneficiaries having recommendations was 72.9 (9.4) years, and the majority (57.6%) were female. By category of recommendation, 58.3% (n = 902) were guideline adherence, 33.3% (n = 515) were cost saving, and 8.5% (n = 131) were safety concerns. Prescriber approval rates were 47.2% overall (n = 731/1,548), 41.4% (n = 373/902) for guideline adherence, 58.3% (n = 300/515) for cost saving, and 44.3% (n = 58/131) for safety concerns; 817 recommendations were not approved (n = 255 denials [16.5%] and n = 562 no response [36.3%]). Prescriber approval was significantly higher for cost-saving interventions compared with guideline adherence interventions (odds ratio [OR] = 1.98, 95% CI = 1.56–2.51, P < 0.001) and compared with safety interventions (OR = 1.76, 95% CI = 1.19–2.59, P = 0.004); there was no significant difference in prescriber approval rates for safety versus guideline adherence interventions. The overall approval rate was higher for PCPs (49.8%, n = 525/1,054) versus specialists (41.7%, n = 206/494; OR = 1.39, 95% CI = 1.08–1.78, P = 0.011) and for guideline adherence interventions (44.0% for PCPs vs. 35.9% for specialists; OR = 1.40, 95% CI = 1.01–1.95, P = 0.044), but not for the other 2 intervention categories.
Prescriber approval rates for pharmacist recommendations for drug therapy changes for MTMP beneficiaries were approximately 47% overall and higher for recommendations that involved cost savings compared with recommendations for safety concerns or guideline adherence. Compared with specialists, PCPs had higher approval rates for pharmacist recommendations overall and for the intervention category guideline adherence.
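As an arithmetic check, the crude (unadjusted) odds ratios can be recomputed from the approval counts reported in this abstract. This is a minimal sketch; the published ORs were estimated with robust variance adjustment for clustering within prescribers, so exact agreement with crude values is not guaranteed in general, though it holds here to two decimal places.

```python
def odds_ratio(a_yes, a_no, b_yes, b_no):
    """Crude odds ratio for group A vs. group B,
    from approved / not-approved counts."""
    return (a_yes / a_no) / (b_yes / b_no)

# Approved and not-approved counts taken from the abstract.
cost_saving = (300, 515 - 300)
guideline   = (373, 902 - 373)
safety      = (58, 131 - 58)

print(round(odds_ratio(*cost_saving, *guideline), 2))  # 1.98
print(round(odds_ratio(*cost_saving, *safety), 2))     # 1.76
```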
PMCID: PMC5013826  PMID: 21657804
19.  Clinical Utility of Serologic Testing for Celiac Disease in Ontario 
Executive Summary
Objective of Analysis
The objective of this evidence-based evaluation is to assess the accuracy of serologic tests in the diagnosis of celiac disease in subjects with symptoms consistent with this disease. Furthermore, the impact of these tests on the diagnostic pathway of the disease and on decision making was also evaluated.
Celiac Disease
Celiac disease is an autoimmune disease that develops in genetically predisposed individuals. The immunological response is triggered by ingestion of gluten, a protein that is present in wheat, rye, and barley. The treatment consists of strict lifelong adherence to a gluten-free diet (GFD).
Patients with celiac disease may present with a myriad of symptoms such as diarrhea, abdominal pain, weight loss, iron deficiency anemia, dermatitis herpetiformis, among others.
Serologic Testing in the Diagnosis of Celiac Disease
There are a number of serologic tests used in the diagnosis of celiac disease.
Anti-gliadin antibody (AGA)
Anti-endomysial antibody (EMA)
Anti-tissue transglutaminase antibody (tTG)
Anti-deamidated gliadin peptides antibodies (DGP)
Serologic tests are automated with the exception of the EMA test, which is more time-consuming and operator-dependent than the other tests. For each serologic test, either immunoglobulin A (IgA) or immunoglobulin G (IgG) antibodies can be measured; however, IgA is the standard antibody measured in celiac disease.
Diagnosis of Celiac Disease
According to celiac disease guidelines, the diagnosis of celiac disease is established by small bowel biopsy. Serologic tests are used to initially detect and to support the diagnosis of celiac disease. A small bowel biopsy is indicated in individuals with a positive serologic test. In some cases an endoscopy and small bowel biopsy may be required even with a negative serologic test. The diagnosis of celiac disease must be made while the patient is on a gluten-containing diet, since the small intestine abnormalities and the serologic antibody levels may resolve or improve on a GFD.
Since IgA measurement is the standard for the serologic celiac disease tests, false negatives may occur in IgA-deficient individuals.
Incidence and Prevalence of Celiac Disease
The incidence and prevalence of celiac disease in the general population and in subjects with symptoms consistent with or at higher risk of celiac disease based on systematic reviews published in 2004 and 2009 are summarized below.
Incidence of Celiac Disease in the General Population
Adults or mixed population: 1 to 17/100,000/year
Children: 2 to 51/100,000/year
In one of the studies, a stratified analysis showed that there was a higher incidence of celiac disease in younger children compared to older children, i.e., 51 cases/100,000/year in 0 to 2 year-olds, 33/100,000/year in 2 to 5 year-olds, and 10/100,000/year in children 5 to 15 years old.
Prevalence of Celiac Disease in the General Population
The prevalence of celiac disease reported in population-based studies identified in the 2004 systematic review varied between 0.14% and 1.87% (median: 0.47%, interquartile range: 0.25%, 0.71%). According to the authors of the review, the prevalence did not vary by age group, i.e., adults and children.
Prevalence of Celiac Disease in High Risk Subjects
Type 1 diabetes (adults and children): 1 to 11%
Autoimmune thyroid disease: 2.9 to 3.3%
First degree relatives of patients with celiac disease: 2 to 20%
Prevalence of Celiac Disease in Subjects with Symptoms Consistent with the Disease
The prevalence of celiac disease in subjects with symptoms consistent with the disease varied widely among studies, i.e., 1.5% to 50% in adult studies, and 1.1% to 17% in pediatric studies. Differences in prevalence may be related to the referral pattern as the authors of a systematic review noted that the prevalence tended to be higher in studies whose population originated from tertiary referral centres compared to general practice.
Research Questions
What is the sensitivity and specificity of serologic tests in the diagnosis of celiac disease?
What is the clinical validity of serologic tests in the diagnosis of celiac disease? The clinical validity was defined as the ability of the test to change diagnosis.
What is the clinical utility of serologic tests in the diagnosis of celiac disease? The clinical utility was defined as the impact of the test on decision making.
What is the budget impact of serologic tests in the diagnosis of celiac disease?
What is the cost-effectiveness of serologic tests in the diagnosis of celiac disease?
Literature Search
A literature search was performed on November 13th, 2009 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Network of Agencies for Health Technology Assessment (INAHTA) for studies published from January 1st, 2003 to November 13th, 2010. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search. Articles with unknown eligibility were reviewed with a second clinical epidemiologist, then a group of epidemiologists until consensus was established. The quality of evidence was assessed as high, moderate, low, or very low according to GRADE methodology.
Inclusion Criteria
Studies that evaluated diagnostic accuracy, i.e., both sensitivity and specificity of serology tests in the diagnosis of celiac disease.
Study population consisted of untreated patients with symptoms consistent with celiac disease.
Studies in which both serologic celiac disease tests and small bowel biopsy (gold standard) were used in all subjects.
Systematic reviews, meta-analyses, randomized controlled trials, prospective observational studies, and retrospective cohort studies.
At least 20 subjects included in the celiac disease group.
English language.
Human studies.
Studies published from 2000 on.
Clearly defined cut-off value for the serology test. If more than one test was evaluated, only those tests for which a cut-off was provided were included.
Description of small bowel biopsy procedure clearly outlined (location, number of biopsies per patient), unless it was specified that celiac disease diagnosis guidelines were followed.
Patients in the treatment group had untreated CD.
Exclusion Criteria
Studies on screening of the general asymptomatic population.
Studies that evaluated rapid diagnostic kits for use either at home or in physician’s offices.
Studies that evaluated diagnostic modalities other than serologic tests such as capsule endoscopy, push enteroscopy, or genetic testing.
Cut-off for serologic tests defined based on controls included in the study.
Study population defined based on positive serology or subjects pre-screened by serology tests.
Celiac disease status known before study enrolment.
Sensitivity or specificity estimates based on repeated testing for the same subject.
Non-peer-reviewed literature such as editorials and letters to the editor.
The population consisted of adults and children with untreated, undiagnosed celiac disease with symptoms consistent with the disease.
Serologic Celiac Disease Tests Evaluated
Anti-gliadin antibody (AGA)
Anti-endomysial antibody (EMA)
Anti-tissue transglutaminase antibody (tTG)
Anti-deamidated gliadin peptides antibody (DGP)
Combinations of some of the serologic tests listed above were evaluated in some studies.
Both IgA and IgG antibodies were evaluated for the serologic tests listed above.
Outcomes of Interest
Positive and negative likelihood ratios
Diagnostic odds ratio (OR)
Area under the sROC curve (AUC)
Small bowel biopsy was used as the gold standard in order to estimate the sensitivity and specificity of each serologic test.
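With biopsy as the gold standard, each test's operating characteristics follow from a 2×2 table of test results against biopsy results. A small illustrative sketch (the counts below are hypothetical, not taken from any included study):

```python
# Hypothetical 2x2 counts for one serologic test against the gold
# standard (small bowel biopsy); illustrative numbers only.
tp, fn = 92, 8    # biopsy-positive patients: test positive / test negative
fp, tn = 5, 95    # biopsy-negative patients: test positive / test negative

sensitivity = tp / (tp + fn)              # 0.92
specificity = tn / (tn + fp)              # 0.95
lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio, 18.4
lr_neg = (1 - sensitivity) / specificity  # negative likelihood ratio, ~0.084
```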
Statistical Analysis
Pooled estimates of sensitivity, specificity, and diagnostic odds ratios (DORs) for the different serologic tests were calculated using a bivariate, binomial generalized linear mixed model. Statistical significance for differences in sensitivity and specificity between serologic tests was defined by P values less than 0.05, with "false discovery rate" adjustments made for multiple hypothesis testing. The bivariate regression analyses were performed using SAS version 9.2 (SAS Institute Inc.; Cary, NC, USA). Using the bivariate model parameters, summary receiver operating characteristic (sROC) curves were produced using Review Manager 5.0.22 (The Nordic Cochrane Centre, The Cochrane Collaboration, 2008). The area under the sROC curve (AUC) was estimated using a bivariate mixed-effects binary regression modeling framework; model specification, estimation, and prediction were carried out with xtmelogit in Stata release 10 (StataCorp, 2007). Statistical tests for the differences in AUC estimates could not be carried out.
The study results were stratified according to patient or disease characteristics, such as age and severity of Marsh grade abnormalities, if reported in the studies. The literature indicates that the diagnostic accuracy of serologic tests for celiac disease may be affected in patients with chronic liver disease; therefore, the studies identified through the systematic literature review that evaluated the diagnostic accuracy of serologic tests for celiac disease in patients with chronic liver disease were summarized. The effect of the GFD in patients diagnosed with celiac disease was also summarized if reported in the studies eligible for the analysis.
Summary of Findings
Published Systematic Reviews
Five systematic reviews of studies that evaluated the diagnostic accuracy of serologic celiac disease tests were identified through our literature search. Seventeen individual studies identified in adults and children were eligible for this evaluation.
In general, the studies included evaluated the sensitivity and specificity of at least one serologic test in subjects with symptoms consistent with celiac disease. The gold standard used to confirm the celiac disease diagnosis was small bowel biopsy. Serologic tests evaluated included tTG, EMA, AGA, and DGP, using either IgA or IgG antibodies. Indirect immunofluorescence was used for the EMA serologic tests, whereas enzyme-linked immunosorbent assay (ELISA) was used for the other serologic tests.
Common symptoms described in the studies were chronic diarrhea, abdominal pain, bloating, unexplained weight loss, unexplained anemia, and dermatitis herpetiformis.
The main conclusions of the published systematic reviews are summarized below.
IgA tTG and/or IgA EMA have a high accuracy (pooled sensitivity: 90% to 98%, pooled specificity: 95% to 99% depending on the pooled analysis).
Most reviews found that AGA (IgA or IgG) are not as accurate as IgA tTG and/or EMA tests.
A 2009 systematic review concluded that DGP (IgA or IgG) seems to have accuracy similar to that of tTG; however, since only 2 of the studies identified evaluated its accuracy, the authors believe that additional data are required to draw firm conclusions.
Two systematic reviews also concluded that combining two serologic celiac disease tests contributes little to the accuracy of the diagnosis.
MAS Analysis
The pooled analysis performed by MAS showed that IgA tTG has a sensitivity of 92.1% [95% confidence interval (CI) 88.0, 96.3], compared to 89.2% (83.3, 95.1, p = 0.12) for IgA DGP, 85.1% (79.5, 94.4, p = 0.07) for IgA EMA, and 74.9% (63.6, 86.2, p = 0.0003) for IgA AGA. Among the IgG-based tests, the results suggest that IgG DGP has a sensitivity of 88.4% (95% CI: 82.1, 94.6), compared to 44.7% (30.3, 59.2) for IgG tTG and 69.1% (56.0, 82.2) for IgG AGA. The difference was significant when IgG DGP was compared to IgG tTG but not IgG AGA. Combining serologic celiac disease tests yielded a slightly higher sensitivity compared to individual IgA-based serologic tests.
IgA deficiency
The prevalence of total or severe IgA deficiency was low in the studies identified, varying between 0% and 1.7%, as reported in 3 studies in which IgA deficiency was not used as a referral indication for celiac disease serologic testing. As reported in four studies, the results of IgG-based serologic tests were positive in all patients with IgA deficiency in whom celiac disease was confirmed by small bowel biopsy.
The MAS pooled analysis indicates a high specificity across the different serologic tests including the combination strategy, pooled estimates ranged from 90.1% to 98.7% depending on the test.
Likelihood Ratios
According to the likelihood ratio estimates, both IgA tTG and serologic test combinations were considered very useful tests (positive likelihood ratio above ten and negative likelihood ratio below 0.1).
Moderately useful tests included IgA EMA, IgA DGP, and IgG DGP (positive likelihood ratio between five and ten and the negative likelihood ratio between 0.1 and 0.2).
Somewhat useful tests: IgA AGA, IgG AGA, generating small but sometimes important changes from pre- to post-test probability (positive LR between 2 and 5 and negative LR between 0.2 and 0.5)
Not Useful: IgG tTG, altering pre- to post-test probability to a small and rarely important degree (positive LR between 1 and 2 and negative LR between 0.5 and 1).
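The usefulness categories above can be expressed as a simple rule, together with the odds form of Bayes' theorem that underlies likelihood ratios. This is an illustrative sketch; the handling of values falling exactly on a category boundary is an assumption, since the source gives only approximate ranges.

```python
def classify_lr(lr_pos, lr_neg):
    """Map positive/negative likelihood ratios to the usefulness
    categories listed above (both criteria must be met)."""
    if lr_pos > 10 and lr_neg < 0.1:
        return "very useful"
    if 5 <= lr_pos <= 10 and 0.1 <= lr_neg <= 0.2:
        return "moderately useful"
    if 2 <= lr_pos <= 5 and 0.2 <= lr_neg <= 0.5:
        return "somewhat useful"
    return "not useful"

def post_test_probability(pre_test_prob, lr):
    """Bayes' theorem in odds form: post-test odds = pre-test odds x LR."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

print(classify_lr(12, 0.05))                      # very useful
# With a 5% pre-test probability, a positive result with LR+ = 10
# raises the probability to about 34%.
print(round(post_test_probability(0.05, 10), 3))  # 0.345
```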
Diagnostic Odds Ratios (DOR)
Among the individual serologic tests, IgA tTG had the highest DOR, 136.5 (95% CI: 51.9, 221.2). The statistical significance of the difference in DORs among tests was not calculated, however, considering the wide confidence intervals obtained, the differences may not be statistically significant.
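For reference, the DOR combines sensitivity and specificity into a single number: the odds of a positive test in diseased subjects divided by the odds of a positive test in non-diseased subjects. A sketch with illustrative inputs (note that the pooled DORs above come from the bivariate model, not from plugging pooled sensitivity and specificity into this formula):

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR = (sens / (1 - sens)) / ((1 - spec) / spec)."""
    return (sensitivity / (1 - sensitivity)) / ((1 - specificity) / specificity)

# e.g. a hypothetical test with 92% sensitivity and 93% specificity:
print(round(diagnostic_odds_ratio(0.92, 0.93)))  # 153
```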
Area Under the sROC Curve (AUC)
The sROC AUCs obtained ranged between 0.93 and 0.99 for most IgA-based tests with the exception of IgA AGA, with an AUC of 0.89.
Sensitivity and Specificity of Serologic Tests According to Age Groups
Serologic test accuracy did not seem to vary according to age (adults or children).
Sensitivity and Specificity of Serologic Tests According to Marsh Criteria
Four studies observed a trend towards a higher sensitivity of serologic celiac disease tests when Marsh 3c grade abnormalities were found in the small bowel biopsy compared to Marsh 3a or 3b. The sensitivity of serologic tests was much lower when Marsh 1 grade abnormalities were found compared to Marsh 3 grade abnormalities. The statistical significance of these findings was not reported in the studies.
Diagnostic Accuracy of Serologic Celiac Disease Tests in Subjects with Chronic Liver Disease
A total of 14 observational studies that evaluated the specificity of serologic celiac disease tests in subjects with chronic liver disease were identified. All studies evaluated the frequency of false positive results (1-specificity) of IgA tTG, however, IgA tTG test kits using different substrates were used, i.e., human recombinant, human, and guinea-pig substrates. The gold standard, small bowel biopsy, was used to confirm the result of the serologic tests in only 5 studies. The studies do not seem to have been designed or powered to compare the diagnostic accuracy among different serologic celiac disease tests.
The results of the studies identified in the systematic literature review suggest that there is a trend towards a lower frequency of false positive results if the IgA tTG test using human recombinant substrate is used compared to the guinea pig substrate in subjects with chronic liver disease. However, the statistical significance of the difference was not reported in the studies. When IgA tTG with human recombinant substrate was used, the number of false positives seems to be similar to what was estimated in the MAS pooled analysis for IgA-based serologic tests in a general population of patients. These results should be interpreted with caution since most studies did not use the gold standard, small bowel biopsy, to confirm or exclude the diagnosis of celiac disease, and since the studies were not designed to compare the diagnostic accuracy among different serologic tests. The sensitivity of the different serologic tests in patients with chronic liver disease was not evaluated in the studies identified.
Effects of a Gluten-Free Diet (GFD) in Patients Diagnosed with Celiac Disease
Six studies identified evaluated the effects of GFD on clinical, histological, or serologic improvement in patients diagnosed with celiac disease. Improvement was observed in 51% to 95% of the patients included in the studies.
Grading of Evidence
Overall, the quality of the evidence ranged from moderate to very low depending on the serologic celiac disease test. Reasons to downgrade the quality of the evidence included the use of a surrogate endpoint (diagnostic accuracy) since none of the studies evaluated clinical outcomes, inconsistencies among study results, imprecise estimates, and sparse data. The quality of the evidence was considered moderate for IgA tTg and IgA EMA, low for IgA DGP, and serologic test combinations, and very low for IgA AGA.
Clinical Validity and Clinical Utility of Serologic Testing in the Diagnosis of Celiac Disease
The clinical validity of serologic tests in the diagnosis of celiac disease was considered high in subjects with symptoms consistent with this disease due to the following:
High accuracy of some serologic tests.
Serologic tests detect possible celiac disease cases and avoid unnecessary small bowel biopsy if the test result is negative, unless an endoscopy/ small bowel biopsy is necessary due to the clinical presentation.
Serologic tests support the results of small bowel biopsy.
The clinical utility of serologic tests for the diagnosis of celiac disease, as defined by their impact on decision making, was also considered high in subjects with symptoms consistent with this disease, given the considerations listed above and since a celiac disease diagnosis leads to treatment with a gluten-free diet.
Economic Analysis
A decision analysis was constructed to compare costs and outcomes between the tests based on the sensitivity, specificity, and prevalence summary estimates from the MAS Evidence-Based Analysis (EBA). A budget impact was then calculated by multiplying the expected costs by the expected volumes in Ontario. The outcomes of the analysis were expected costs and false negatives (FN). Costs were reported in 2010 Canadian dollars (CAD). All analyses were performed using TreeAge Pro Suite 2009.
Four strategies made up the efficiency frontier: IgG tTG, IgA tTG, EMA, and small bowel biopsy. All other strategies were dominated. IgG tTG was the least costly and least effective strategy ($178.95, FN avoided = 0). Small bowel biopsy was the most costly and most effective strategy ($396.60, FN avoided = 0.1553). The costs per FN avoided were $293, $369, and $1,401 for EMA, IgA tTG, and small bowel biopsy, respectively. One-way sensitivity analyses did not change the ranking of strategies.
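The cost-per-FN-avoided figures are incremental cost-effectiveness ratios along the frontier: ICER = ΔCost / ΔFN avoided. As an illustration, comparing the two frontier endpoints reported above happens to reproduce the $1,401 figure; which comparator the authors used for each ICER is not stated here.

```python
def icer(cost_a, effect_a, cost_b, effect_b):
    """Incremental cost-effectiveness ratio of strategy B vs. strategy A:
    extra cost per additional false negative avoided."""
    return (cost_b - cost_a) / (effect_b - effect_a)

# Frontier endpoints from the abstract (2010 CAD):
# IgG tTG: $178.95, FN avoided = 0
# Small bowel biopsy: $396.60, FN avoided = 0.1553
print(round(icer(178.95, 0.0, 396.60, 0.1553)))  # 1401
```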
All testing strategies with small bowel biopsy are cheaper than biopsy alone; however, they also result in more FNs. The most cost-effective strategy will depend on decision makers' willingness to pay. Findings suggest that IgA tTG was the most cost-effective and feasible strategy based on its incremental cost-effectiveness ratio (ICER) and the convenience of conducting the test. The potential impact of the IgA tTG test in the province of Ontario would be $10.4M, $11.0M, and $11.7M, respectively, in the following three years based on past volumes and trends in the province and base-case expected costs.
The panel of tests is the strategy commonly used in the province of Ontario; therefore, the impact to the system would be $13.6M, $14.5M, and $15.3M, respectively, in the next three years based on past volumes and trends in the province and base-case expected costs.
The clinical validity and clinical utility of serologic tests for celiac disease was considered high in subjects with symptoms consistent with this disease as they aid in the diagnosis of celiac disease and some tests present a high accuracy.
The study findings suggest that IgA tTG is the most accurate and the most cost-effective test.
The AGA test (IgA) has lower accuracy than the other IgA-based tests.
Serologic test combinations appear to be more costly with little gain in accuracy. In addition there may be problems with generalizability of the results of the studies included in this review if different test combinations are used in clinical practice.
IgA deficiency seems to be uncommon in patients diagnosed with celiac disease.
The generalizability of study results is contingent on performing both the serologic test and small bowel biopsy in subjects on a gluten-containing diet as was the case in the studies identified, since the avoidance of gluten may affect test results.
PMCID: PMC3377499  PMID: 23074399
20.  Granulocyte colony-stimulating factor use in a large British hospital: comparison with published experience 
Pharmacy Practice  2010;8(4):213-219.
Granulocyte colony-stimulating factors (G-CSF) are high-cost agents recommended as prophylaxis of febrile neutropenia or as adjunctive treatment of severe neutropenic sepsis. Their use in high-risk situations such as acute myeloid leukaemia, acute lymphocytic leukaemia, myelodysplastic syndrome and stem cell transplantation is also indicated.
This audit assessed the use of G-CSF within the Oncology and Haematology Service Delivery Unit at Guy’s and St. Thomas’ hospital (London, United Kingdom).
Patients who received G-CSF in April-May 2008 were identified retrospectively from the pharmacy labelling system, and chemotherapy front sheets, clinic letters and transplantation protocols were reviewed. Patients on lenograstim, in clinical trials or under non-approved chemotherapy protocols were excluded.
A total of 104 G-CSF treatments were assessed. The most commonly treated malignancy was breast cancer (41.3%), with docetaxel 100 mg/m2 (34.6%) being the most frequent chemotherapy regimen. The chemotherapy intent was curative in 66.3% of cases. Pegfilgrastim was used in 73.1% of cases, and primary prophylaxis was the most common indication (54.8%). Stem cell transplantation was the indication with the highest compliance with the audit criterion (93.3%), followed by primary prophylaxis (89.5%). There was considerable nonadherence for secondary prophylaxis (6.7%).
The overall level of compliance with the audit criteria was 72.1%. The results for primary and secondary prophylaxis would have been different if FEC100 (fluorouracil, epirubicin, cyclophosphamide) and docetaxel 100 mg/m2 had been considered a single chemotherapy regimen. Also, the lack of access to medical notes may have affected the reliability of the results for 'therapeutic' use.
PMCID: PMC4127058  PMID: 25126143
Hematopoietic Cell Growth Factors; Neutropenia; Clinical Audit; Drug Utilization Review; United Kingdom
21.  Women's Access and Provider Practices for the Case Management of Malaria during Pregnancy: A Systematic Review and Meta-Analysis 
PLoS Medicine  2014;11(8):e1001688.
Jenny Hill and colleagues conduct a systematic review and meta-analysis of women’s access and healthcare provider adherence to WHO case-management policy of malaria during pregnancy.
Please see later in the article for the Editors' Summary
WHO recommends prompt diagnosis and quinine plus clindamycin for treatment of uncomplicated malaria in the first trimester and artemisinin-based combination therapies in subsequent trimesters. We undertook a systematic review of women's access to and healthcare provider adherence to WHO case management policy for malaria in pregnant women.
Methods and Findings
We searched the Malaria in Pregnancy Library, the Global Health Database, and the International Network for the Rational Use of Drugs Bibliography from 1 January 2006 to 3 April 2014, without language restriction. Data were appraised for quality and content. Frequencies of women's and healthcare providers' practices were explored using narrative synthesis and random-effects meta-analysis. Barriers to women's access and providers' adherence to policy were explored by content analysis using NVivo. Determinants of women's access and providers' case management practices were extracted and compared across studies. We did not perform a meta-ethnography. Thirty-seven studies were included, conducted in Africa (30), Asia (4), Yemen (1), and Brazil (2). Between one-quarter and three-quarters of women reported malaria episodes during pregnancy, and treatment was sought by >85% of them. Barriers to access among women included poor knowledge of drug safety, prohibitive costs, and self-treatment practices, used by 5%–40% of women. Determinants of women's treatment-seeking behaviour were education and previous experience of miscarriage and antenatal care. Healthcare providers' reliance on clinical diagnosis and poor adherence to treatment policy, especially in the first versus other trimesters (28%, 95% CI 14%–47%, versus 72%, 95% CI 39%–91%, p = 0.02), was consistently reported. Prescribing practices were driven by concerns over side effects and drug safety, patient preference, drug availability, and cost. Determinants of provider practices were access to training and facility type (public versus private). Findings were limited by the availability, quality, scope, and methodological inconsistencies of the included studies.
A systematic assessment of the extent of substandard case management practices of malaria in pregnancy is required, as well as quality improvement interventions that reach all providers administering antimalarial drugs in the community. Pregnant women need access to information on which anti-malarial drugs are safe to use at different stages of pregnancy.
Editors' Summary
Malaria, a mosquito-borne parasite, kills about 600,000 people every year. Most of these deaths occur among young children in sub-Saharan Africa, but pregnant women and their unborn babies are also vulnerable to malaria. Infection with malaria during pregnancy can cause severe maternal anemia, miscarriages, and preterm births, and kills about 10,000 women and 100,000 children each year. Since 2006, the World Health Organization (WHO) has recommended that uncomplicated malaria (an infection that causes a fever but does not involve organ damage or severe anemia) should be treated with quinine and clindamycin if it occurs during the first trimester (first three months) of pregnancy and with an artemisinin-based combination therapy (ACT) if it occurs during the second or third trimester; ACTs should be used during the first trimester only if no other treatment is immediately available because their safety during early pregnancy has not been established. Since 2010, WHO has also recommended that clinical diagnosis of malaria should be confirmed before treatment by looking for parasites in patients' blood (parasitology).
Why Was This Study Done?
Prompt diagnosis and treatment of malaria in pregnancy in regions where malaria is always present (endemic regions) is extremely important, yet little is known about women's access to the recommended interventions for malaria in pregnancy or about healthcare providers' adherence to the WHO case management guidelines. In this systematic review and meta-analysis of qualitative, quantitative, and mixed methods studies, the researchers explore the factors that affect women's access to treatment and healthcare provider practices for case management of malaria during pregnancy. A systematic review uses predefined criteria to identify all the research on a given topic. Meta-analysis is a statistical method for combining the results of several studies. A qualitative study collects non-quantitative data such as reasons for refusing an intervention, whereas a quantitative study collects numerical data such as the proportion of a population receiving an intervention.
What Did the Researchers Do and Find?
The researchers identified 37 studies (mostly conducted in Africa) that provided data on the range of healthcare providers visited, antimalarials used, and the factors influencing the choice of healthcare provider and medicines among pregnant women seeking treatment for malaria and/or the type and quality of diagnostic and case management services offered to them by healthcare providers. The researchers explored the data in these studies using narrative synthesis (which summarizes the results from several qualitative studies) and content analysis (which identifies key themes within texts). Among the studies that provided relevant data, one-quarter to three-quarters of women reported malaria episodes during pregnancy. More than 85% of the women who reported a malaria episode during pregnancy sought some form of treatment. Barriers to access to WHO-recommended treatment among women included poor knowledge about drug safety, and the use of self-treatment practices such as taking herbal remedies. Factors that affected the treatment-seeking behavior of pregnant women ("determinants") included prior use of antenatal care, education, and previous experience of a miscarriage. Among healthcare providers, reliance on clinical diagnosis of malaria was consistently reported, as was poor adherence to the treatment policy. Specifically, 28% and 72% of healthcare providers followed the treatment guidelines for malaria during the first and second/third trimesters of pregnancy, respectively. Finally, the researchers report that concerns over side effects and drug safety, patient preference, drug availability, and cost drove the prescribing practices of the healthcare providers, and that the determinants of provider practices included the type (cadre) of healthcare worker, access to training, and whether they were based in a public or private facility.
What Do These Findings Mean?
These findings reveal important limitations in the implementation of the WHO policy on the treatment of malaria in pregnancy across many parts of Africa and in several other malaria endemic regions. Notably, they show that women do not uniformly seek care within the formal healthcare system and suggest that, when they do seek care, they may not be given the appropriate treatment because healthcare providers frequently fail to adhere to the WHO diagnostic and treatment guidelines. Although limited by the sparseness of data and by inconsistencies in study methodologies, these findings nevertheless highlight the need for further systematic assessments of the extent of substandard case management of malaria in pregnancy in malaria endemic countries, and the need to develop interventions to improve access to and delivery of quality case management of malaria among pregnant women.
Additional Information
Please access these websites via the online version of this summary at
Information is available from the World Health Organization on malaria (in several languages) and on malaria in pregnancy; the 2010 Guidelines for the Treatment of Malaria are available; the World Malaria Report 2013 provides details of the current global malaria situation
The US Centers for Disease Control and Prevention also provides information on malaria; a personal story about malaria in pregnancy is available
Information is available from the Roll Back Malaria Partnership on all aspects of global malaria control, including information on malaria in pregnancy
The Malaria in Pregnancy Consortium is undertaking research into the prevention and treatment of malaria in pregnancy and provides links to the consortium's publications and an online library on malaria in pregnancy
MedlinePlus provides links to additional information on malaria (in English and Spanish)
PMCID: PMC4122360  PMID: 25093720
22.  Implementing the 2009 Institute of Medicine recommendations on resident physician work hours, supervision, and safety 
Long working hours and sleep deprivation have been a facet of physician training in the US since the advent of the modern residency system. However, the scientific evidence linking fatigue with deficits in human performance, accidents and errors in industries from aeronautics to medicine, nuclear power, and transportation has mounted over the last 40 years. This evidence has also spawned regulations to help ensure public safety across safety-sensitive industries, with the notable exception of medicine.
In late 2007, at the behest of the US Congress, the Institute of Medicine embarked on a year-long examination of the scientific evidence linking resident physician sleep deprivation with clinical performance deficits and medical errors. The Institute of Medicine’s report, entitled “Resident duty hours: Enhancing sleep, supervision and safety”, published in January 2009, recommended new limits on resident physician work hours and workload, increased supervision, a heightened focus on resident physician safety, training in structured handovers and quality improvement, more rigorous external oversight of work hours and other aspects of residency training, and the identification of expanded funding sources necessary to implement the recommended reforms successfully and protect the public and resident physicians themselves from preventable harm.
Given that resident physicians comprise almost a quarter of all physicians who work in hospitals, and that taxpayers, through Medicare and Medicaid, fund graduate medical education, the public has a deep investment in physician training. Patients expect to receive safe, high-quality care in the nation’s teaching hospitals. Because it is their safety that is at issue, their voices should be central in policy decisions affecting patient safety. It is likewise important to integrate the perspectives of resident physicians, policy makers, and other constituencies in designing new policies. However, since its release, discussion of the Institute of Medicine report has been largely confined to the medical education community, led by the Accreditation Council for Graduate Medical Education (ACGME).
To begin gathering these perspectives and developing a plan to implement safer work hours for resident physicians, a conference entitled “Enhancing sleep, supervision and safety: What will it take to implement the Institute of Medicine recommendations?” was held at Harvard Medical School on June 17–18, 2010. This White Paper is a product of a diverse group of 26 representative stakeholders bringing relevant new information and innovative practices to bear on a critical patient safety problem. Given that our conference included experts from across disciplines with diverse perspectives and interests, not every recommendation was endorsed by each invited conference participant. However, every recommendation made here was endorsed by the majority of the group, and many were endorsed unanimously. Conference members participated in the process, reviewed the final product, and provided input before publication. Participants provided their individual perspectives, which do not necessarily represent the formal views of any organization.
In September 2010 the ACGME issued new rules to go into effect on July 1, 2011. Unfortunately, they stop considerably short of the Institute of Medicine’s recommendations and those endorsed by this conference. In particular, the ACGME only applied the limitation of 16 hours to first-year resident physicians. Thus, it is clear that policymakers, hospital administrators, and residency program directors who wish to implement safer health care systems must go far beyond what the ACGME will require. We hope this White Paper will serve as a guide and provide encouragement for that effort.
Resident physician workload and supervision
By the end of training, a resident physician should be able to practice independently. Yet much of resident physicians’ time is dominated by tasks with little educational value. The caseload can be so great that inadequate reflective time is left for learning based on clinical experiences. In addition, supervision is often vaguely defined and discontinuous. Medical malpractice data indicate that resident physicians are frequently named in lawsuits, most often for lack of supervision. The recommendations are:
- The ACGME should adjust resident physician workload requirements to optimize educational value. Resident physicians as well as faculty should be involved in work redesign that eliminates nonessential and noneducational activity from resident physician duties
- Mechanisms should be developed for identifying in real time when a resident physician’s workload is excessive, and processes developed to activate additional providers
- Teamwork should be actively encouraged in delivery of patient care. Historically, much of medical training has focused on individual knowledge, skills, and responsibility. As health care delivery has become more complex, it will be essential to train resident and attending physicians in effective teamwork that emphasizes collective responsibility for patient care and recognizes the signs, both individual and systemic, of a schedule and working conditions that are too demanding to be safe
- Hospitals should embrace the opportunities that resident physician training redesign offers. Hospitals should recognize and act on the potential benefits of work redesign, eg, increased efficiency, reduced costs, improved quality of care, and resident physician and attending job satisfaction
- Attending physicians should supervise all hospital admissions. Resident physicians should directly discuss all admissions with attending physicians. Attending physicians should be both cognizant of and have input into the care patients are to receive upon admission to the hospital
- In-house supervision should be required for all critical care services, including emergency rooms, intensive care units, and trauma services. Resident physicians should not be left unsupervised to care for critically ill patients. In settings in which the acuity is high, physicians who have completed residency should provide direct supervision for resident physicians. Supervising physicians should always be physically in the hospital for supervision of resident physicians who care for critically ill patients
- The ACGME should explicitly define “good” supervision by specialty and by year of training. Explicit requirements for intensity and level of training for supervision of specific clinical scenarios should be provided
- The Centers for Medicare and Medicaid Services (CMS) should use graduate medical education funding to provide incentives to programs with proven, effective levels of supervision. Although this action would require federal legislation, reimbursement rules would help to ensure that hospitals pay attention to the importance of good supervision and require it from their training programs
Resident physician work hours
Although the IOM “Sleep, supervision and safety” report provides a comprehensive review and discussion of all aspects of graduate medical education training, the report’s focal point is its recommendations regarding the hours that resident physicians are currently required to work. A considerable body of scientific evidence, much of it cited by the Institute of Medicine report, describes deteriorating performance in fatigued humans, as well as specific studies on resident physician fatigue and preventable medical errors.
The question before this conference was: what work redesign and cultural changes are needed to reform work hours as recommended by the Institute of Medicine’s evidence-based report? Extensive scientific data demonstrate that shifts exceeding 12–16 hours without sleep are unsafe. Several principles should be followed in efforts to reduce consecutive hours below this level and achieve safer work schedules. The recommendations are:
- Limit resident physician work hours to 12–16 hour maximum shifts
- A minimum of 10 hours off duty should be scheduled between shifts
- Resident physician input into work redesign should be actively solicited
- Schedules should be designed that adhere to principles of sleep and circadian science; this includes careful consideration of the effects of multiple consecutive night shifts, and provision of adequate time off after night work, as specified in the IOM report
- Resident physicians should not be scheduled up to the maximum permissible limits; emergencies frequently occur that require resident physicians to stay longer than their scheduled shifts, and this should be anticipated in scheduling resident physicians’ work shifts
- Hospitals should anticipate the need for iterative improvement as new schedules are initiated; be prepared to learn from the initial phase-in, and change the plan as needed
- As resident physician work hours are redesigned, attending physicians should also be considered; a potential consequence of resident physician work hour reduction and increased supervisory requirements may be an increase in work for attending physicians; this should be carefully monitored, and adjustments to attending physician work schedules made as needed to prevent unsafe work hours or working conditions for this group
- “Home call” should be brought under the overall limits of working hours; work load and hours should be monitored in each residency program to ensure that resident physicians and fellows on home call are getting sufficient sleep
- Medicare funding for graduate medical education in each hospital should be linked with adherence to the Institute of Medicine limits on resident physician work hours
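The two core scheduling limits above (shifts capped at 16 hours, at least 10 hours off between shifts) amount to a simple validity check over a roster. The sketch below illustrates that check only; the function and constant names are ours, not part of the report, and the limits are the conference's recommended bounds.

```python
from datetime import datetime, timedelta

MAX_SHIFT_HOURS = 16  # recommended ceiling for any single shift
MIN_REST_HOURS = 10   # recommended minimum time off between shifts

def schedule_violations(shifts):
    """Return rule violations for a list of (start, end) shifts.

    Assumes `shifts` is sorted by start time. Both limits come from the
    conference recommendations summarized above.
    """
    violations = []
    for i, (start, end) in enumerate(shifts):
        worked = (end - start).total_seconds() / 3600
        if worked > MAX_SHIFT_HOURS:
            violations.append(f"shift {i}: {worked:.1f} h exceeds {MAX_SHIFT_HOURS} h limit")
        if i > 0:
            rest = (start - shifts[i - 1][1]).total_seconds() / 3600
            if rest < MIN_REST_HOURS:
                violations.append(f"shift {i}: only {rest:.1f} h rest before shift")
    return violations

day = datetime(2011, 7, 1, 7, 0)
shifts = [
    (day, day + timedelta(hours=14)),                        # 14 h shift: within limits
    (day + timedelta(hours=20), day + timedelta(hours=38)),  # 18 h shift after 6 h rest
]
print(schedule_violations(shifts))  # flags both the long shift and the short rest
```

A real implementation would also need the report's night-shift and cumulative-hour provisions, which this check deliberately omits.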
Moonlighting by resident physicians
The Institute of Medicine report recommended including external as well as internal moonlighting in working hour limits. The recommendation is: All moonlighting work hours should be included in the ACGME working hour limits and actively monitored; hospitals should formalize a moonlighting policy and establish systems for actively monitoring resident physician moonlighting.
Safety of resident physicians
The “Sleep, supervision and safety” report also addresses fatigue-related harm done to resident physicians themselves. The report focuses on two main sources of physical injury to resident physicians impaired by fatigue, ie, needle-stick exposure to blood-borne pathogens and motor vehicle crashes. Providing safe transportation home for resident physicians is a logistical and financial challenge for hospitals. Educating physicians at all levels on the dangers of fatigue is clearly required to change driving behavior so that safe hospital-funded transport home is used effectively. The recommendations are:
- Fatigue-related injury prevention (including not driving while drowsy) should be taught in medical school and during residency, and reinforced with attending physicians; hospitals and residency programs must be informed that resident physicians’ ability to judge their own level of impairment is itself impaired when they are sleep deprived; hence, leaving decisions about the capacity to drive to impaired resident physicians is not recommended
- Hospitals should provide transportation to all resident physicians who report feeling too tired to drive safely; in addition, although consecutive work should not exceed 16 hours, hospitals should provide transportation for all resident physicians who, because of unforeseen reasons or emergencies, work for longer than 24 consecutive hours; transportation under these circumstances should be automatically provided to house staff, and should not rely on self-identification or request
Training in effective handovers and quality improvement
Handover practice for resident physicians, attendings, and other health care providers has long been identified as a weak link in patient safety throughout health care settings. Policies to improve handovers of care must be tailored to fit the appropriate clinical scenario, recognizing that information overload can also be a problem. At the heart of improving handovers is the organizational effort to improve quality, an effort in which resident physicians have typically been insufficiently engaged. The recommendations are:
- Hospitals should train attending and resident physicians in effective handovers of care
- Hospitals should create uniform processes for handovers that are tailored to meet each clinical setting; all handovers should be done verbally and face-to-face, but should also utilize written tools
- When possible, hospitals should integrate handover tools into their electronic medical record (EMR) systems; these systems should be standardized to the extent possible across residency programs in a hospital, but may be tailored to the needs of specific programs and services; the federal government should help subsidize adoption of electronic medical records by hospitals to improve signout
- When feasible, handovers should be a team effort including nurses, patients, and families
- Hospitals should include residents in their quality improvement and patient safety efforts; the ACGME should specify in their core competency requirements that resident physicians work on quality improvement projects; likewise, the Joint Commission should require that resident physicians be included in quality improvement and patient safety programs at teaching hospitals; hospital administrators and residency program directors should create opportunities for resident physicians to become involved in ongoing quality improvement projects and root cause analysis teams; feedback on successful quality improvement interventions should be shared with resident physicians and broadly disseminated
- Quality improvement/patient safety concepts should be integral to the medical school curriculum; medical school deans should elevate the topics of patient safety, quality improvement, and teamwork; these concepts should be integrated throughout the medical school curriculum and reinforced throughout residency; mastery of these concepts by medical students should be tested on the United States Medical Licensing Examination (USMLE) steps
- The federal government should support involvement of resident physicians in quality improvement efforts; initiatives to improve quality by including resident physicians in quality improvement projects should be financially supported by the Department of Health and Human Services
Monitoring and oversight of the ACGME
While the ACGME is a key stakeholder in residency training, external voices are essential to ensure that public interests are heard in the development and monitoring of standards. Consequently, the Institute of Medicine report recommended external oversight and monitoring through the Joint Commission and the Centers for Medicare and Medicaid Services (CMS). The recommendations are:
- Make comprehensive fatigue management a Joint Commission National Patient Safety Goal; fatigue is a safety concern not only for resident physicians, but also for nurses, attending physicians, and other health care workers; the Joint Commission should seek to ensure that all health care workers, not just resident physicians, are working as safely as possible
- The federal government, including the Centers for Medicare and Medicaid Services and the Agency for Healthcare Research and Quality, should encourage development of comprehensive fatigue management programs which all health systems would eventually be required to implement
- Make ACGME compliance with working hours a “condition of participation” for reimbursement of direct and indirect graduate medical education costs; financial incentives will greatly increase the adoption of and compliance with ACGME standards
Future financial support for implementation
The Institute of Medicine’s report estimates that $1.7 billion (in 2008 dollars) would be needed to implement its recommendations. Twenty-five percent of that amount ($376 million) will be required just to bring hospitals into compliance with the existing 2003 ACGME rules. Downstream savings to the health care system could potentially result from safer care, but these benefits typically do not accrue to hospitals and residency programs, which have historically been asked to bear the burden of residency reform costs. The recommendations are:
- The Institute of Medicine should convene a panel of stakeholders, including private and public funders of health care and graduate medical education, to lay down the concrete steps necessary to identify and allocate the resources needed to implement the recommendations contained in the IOM “Resident duty hours: Enhancing sleep, supervision and safety” report; conference participants suggested several approaches to engage public and private support for this initiative
- Efforts to find additional funding to implement the Institute of Medicine recommendations should focus more broadly on patient safety and health care delivery reform; policy efforts focused narrowly upon resident physician work hours are less likely to succeed than broad patient safety initiatives that include residency redesign as a key component
- Hospitals should view the Institute of Medicine recommendations as an opportunity to begin resident physician work redesign projects as the core of a business model that embraces safety and ultimately saves resources
- Both the Secretary of Health and Human Services and the Director of the Centers for Medicare and Medicaid Services should take the Institute of Medicine recommendations into consideration when promulgating rules for innovation grants
- The National Health Care Workforce Commission should consider the Institute of Medicine recommendations when analyzing the nation’s physician workforce needs
Recommendations for future research
Conference participants concurred that convening the stakeholders and agreeing on a research agenda was key. Some observed that certain sectors within the medical education community have been reluctant to act on the data. Several logical funders for future research were identified. Above all, the Centers for Medicare and Medicaid Services is the only stakeholder that funds graduate medical education upstream and would reap savings downstream if preventable medical errors are reduced as a result of reform of resident physician work hours.
PMCID: PMC3630963  PMID: 23616719
resident; hospital; working hours; safety
23.  Lessons from the Leucovorin Shortages Between 2009 and 2012 in a Medicare Advantage Population: Where Do We Go from Here? 
American Health & Drug Benefits  2014;7(5):264-270.
Three distinct shortages of the generic drug leucovorin, a reduced form of folic acid used in several chemotherapy regimens, were reported by the US Food and Drug Administration (FDA) between 2008 and 2014. Levoleucovorin, an alternative therapy to leucovorin, failed to demonstrate superiority over leucovorin in clinical trials and is substantially more expensive.
To calculate the impact of the leucovorin shortages on primary treatment costs to patients and a health plan, and to present strategies for health plans to deal with future drug shortages.
This retrospective descriptive study was conducted using Humana's Medicare Advantage prescription drug plan administrative claims database between January 1, 2009, and December 31, 2012. A total of 1542 patients with at least 1 medical or pharmacy claim for either leucovorin or levoleucovorin during the first 3 months of the respective plan year (between 2009 and 2012), and with continuous enrollment for the entirety of the same plan year, were included in this study. Trends in primary treatment costs—defined as the drug cost of leucovorin or levoleucovorin—over the 4-year evaluation period were assessed. The mean annual patient out-of-pocket (OOP) costs and the mean plan-paid per member per month (PMPM) costs were also calculated.
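The cohort definition above reduces to a two-condition filter over claims: a study-drug claim in the first three months of the plan year, plus continuous enrollment for that year. A minimal sketch, with entirely hypothetical records and field names (not Humana's actual claims schema):

```python
# Hypothetical, simplified claim records; fields are illustrative only.
claims = [
    {"patient": "A", "drug": "leucovorin",     "month": 2, "enrolled_full_year": True},
    {"patient": "B", "drug": "levoleucovorin", "month": 1, "enrolled_full_year": False},
    {"patient": "C", "drug": "leucovorin",     "month": 7, "enrolled_full_year": True},
]

STUDY_DRUGS = {"leucovorin", "levoleucovorin"}

def eligible_patients(claims):
    """Patients with a study-drug claim in months 1-3 of the plan year
    who stayed enrolled for the entire year (the study's inclusion rule)."""
    return {
        c["patient"]
        for c in claims
        if c["drug"] in STUDY_DRUGS
        and c["month"] <= 3
        and c["enrolled_full_year"]
    }

print(eligible_patients(claims))  # only patient "A" meets both criteria
```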
The percentage of patients receiving leucovorin decreased annually, with a 15.8% drop from 2010 to 2011. This reduction was accompanied by a 6.6% increase in patients receiving levoleucovorin. The mean annual patient OOP costs were $167 to $714 higher for levoleucovorin than for leucovorin. Similarly, the mean plan-paid PMPM costs were higher (up to $1667 PMPM) for levoleucovorin than for leucovorin. The aggregate costs for the 2 drugs increased steadily, including the patient OOP costs and the plan-paid PMPM costs. The most prominent cost increase occurred between 2010 and 2011, with a 3.8-fold increase in patient OOP costs and a 5-fold increase in the plan-paid PMPM costs. This corresponded to the timing of the second leucovorin shortage announcement by the FDA in June 2010.
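Per-member-per-month (PMPM) cost, the metric reported above, is total plan spend divided by total member-months of enrollment. A worked sketch with made-up figures chosen only to mirror the reported 5-fold 2010-to-2011 increase, not the study's actual data:

```python
def pmpm(total_plan_paid, members, months=12):
    """Plan-paid per member per month: total spend / member-months."""
    return total_plan_paid / (members * months)

# Hypothetical figures, purely to illustrate the calculation.
cost_2010 = pmpm(total_plan_paid=120_000, members=50)  # 120000 / 600 = $200 PMPM
cost_2011 = pmpm(total_plan_paid=600_000, members=50)  # 600000 / 600 = $1000 PMPM
fold_increase = cost_2011 / cost_2010
print(cost_2010, cost_2011, fold_increase)  # 200.0 1000.0 5.0
```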
Health plans can play an important role in minimizing the impact of drug shortages by identifying the affected patient population, identifying therapeutic alternatives, assisting providers with alternative sourcing strategies when possible, adjusting approval processes, and implementing quality management or pathway programs.
PMCID: PMC4163778  PMID: 25237422
24.  Relapsing Acute Kidney Injury Associated with Pegfilgrastim 
We report a previously unrecognized complication of severe acute kidney injury (AKI) after the administration of pegfilgrastim with biopsy findings of mesangioproliferative glomerulonephritis (GN) and tubular necrosis. A 51-year-old white female with a history of breast cancer presented to the hospital with nausea, vomiting and dark urine 2 weeks after her third cycle of cyclophosphamide and docetaxel along with pegfilgrastim. She was found to have AKI with a serum creatinine (Cr) level of 6.9 mg/dl (baseline 0.7). At that time, her AKI was believed to be related to prior sepsis and/or daptomycin exposure that had occurred 5 weeks earlier. She was dialyzed for 6 weeks, after which her kidney function recovered to near baseline, but her urinalysis (UA) still showed 3.5 g protein/day and dysmorphic hematuria. Repeat blood cultures and serological workup (complement levels, hepatitis panel, ANA, ANCA and anti-GBM) were negative. She received her next cycle of chemotherapy with the same drugs. Two weeks later, she developed recurrent AKI with a Cr level of 6.7 mg/dl. A kidney biopsy showed mesangioproliferative GN, along with tubular epithelial damage and a rare electron-dense glomerular deposit. Pegfilgrastim was suspected as the inciting agent after exclusion of other causes. Her Cr improved to 1.4 mg/dl over the next 3 weeks, this time without dialysis. She had the next 2 cycles of chemotherapy without pegfilgrastim, with no further episodes of AKI. A literature review revealed a few cases of a possible association of filgrastim with mild self-limited acute GN. In conclusion, pegfilgrastim may cause GN with severe AKI. Milder cases may be missed and therefore routine monitoring of renal function and UA is important.
PMCID: PMC3542938  PMID: 23326257
Pegfilgrastim; Glomerulonephritis; Renal dysfunction
25.  Canadian supportive care recommendations for the management of neutropenia in patients with cancer 
Current Oncology  2008;15(1):9-23.
Hematologic toxicities of cancer chemotherapy are common and often limit the ability to provide treatment in a timely and dose-intensive manner. These limitations may be of utmost importance in the adjuvant and curative intent settings. Hematologic toxicities may result in febrile neutropenia, infections, fatigue, and bleeding, all of which may lead to additional complications and prolonged hospitalization. The older cancer patient and patients with significant comorbidities may be at highest risk of neutropenic complications. Colony-stimulating factors (csfs) such as filgrastim and pegfilgrastim can effectively attenuate most of the neutropenic consequences of chemotherapy, improve the ability to continue chemotherapy on the planned schedule, and minimize the risk of febrile neutropenia and infectious morbidity and mortality. The present consensus statement reviews the use of csfs in the management of neutropenia in patients with cancer and sets out specific recommendations based on published international guidelines tailored to the specifics of the Canadian practice landscape. We review existing international guidelines, the indications for primary and secondary prophylaxis, the importance of maintaining dose intensity, and the use of csfs in leukemia, stem-cell transplantation, and radiotherapy. Specific disease-related recommendations are provided related to breast cancer, non-Hodgkin lymphoma, lung cancer, and gastrointestinal cancer. Finally, csf dosing and schedules, duration of therapy, and associated acute and potential chronic toxicities are examined.
PMCID: PMC2259432  PMID: 18317581
Canadian recommendations; neutropenia; febrile neutropenia; supportive care; colony-stimulating factors; chemotherapy-induced neutropenia; safety
