1.  Implementing the 2009 Institute of Medicine recommendations on resident physician work hours, supervision, and safety 
Long working hours and sleep deprivation have been a facet of physician training in the US since the advent of the modern residency system. However, the scientific evidence linking fatigue with deficits in human performance, accidents and errors in industries from aeronautics to medicine, nuclear power, and transportation has mounted over the last 40 years. This evidence has also spawned regulations to help ensure public safety across safety-sensitive industries, with the notable exception of medicine.
In late 2007, at the behest of the US Congress, the Institute of Medicine embarked on a year-long examination of the scientific evidence linking resident physician sleep deprivation with clinical performance deficits and medical errors. The Institute of Medicine’s report, entitled “Resident duty hours: Enhancing sleep, supervision and safety”, published in January 2009, recommended new limits on resident physician work hours and workload, increased supervision, a heightened focus on resident physician safety, training in structured handovers and quality improvement, more rigorous external oversight of work hours and other aspects of residency training, and the identification of expanded funding sources necessary to implement the recommended reforms successfully and protect the public and resident physicians themselves from preventable harm.
Given that resident physicians comprise almost a quarter of all physicians who work in hospitals, and that taxpayers, through Medicare and Medicaid, fund graduate medical education, the public has a deep investment in physician training. Patients expect to receive safe, high-quality care in the nation’s teaching hospitals. Because it is their safety that is at issue, their voices should be central in policy decisions affecting patient safety. It is likewise important to integrate the perspectives of resident physicians, policy makers, and other constituencies in designing new policies. However, since its release, discussion of the Institute of Medicine report has been largely confined to the medical education community, led by the Accreditation Council for Graduate Medical Education (ACGME).
To begin gathering these perspectives and developing a plan to implement safer work hours for resident physicians, a conference entitled “Enhancing sleep, supervision and safety: What will it take to implement the Institute of Medicine recommendations?” was held at Harvard Medical School on June 17–18, 2010. This White Paper is a product of a diverse group of 26 representative stakeholders bringing relevant new information and innovative practices to bear on a critical patient safety problem. Given that our conference included experts from across disciplines with diverse perspectives and interests, not every recommendation was endorsed by each invited conference participant. However, every recommendation made here was endorsed by the majority of the group, and many were endorsed unanimously. Conference members participated in the process, reviewed the final product, and provided input before publication. Participants provided their individual perspectives, which do not necessarily represent the formal views of any organization.
In September 2010 the ACGME issued new rules to go into effect on July 1, 2011. Unfortunately, the new rules stop considerably short of the Institute of Medicine’s recommendations and those endorsed by this conference. In particular, the ACGME applied the 16-hour shift limit only to first-year resident physicians. Thus, it is clear that policymakers, hospital administrators, and residency program directors who wish to implement safer health care systems must go far beyond what the ACGME will require. We hope this White Paper will serve as a guide and provide encouragement for that effort.
Resident physician workload and supervision
By the end of training, a resident physician should be able to practice independently. Yet much of resident physicians’ time is dominated by tasks with little educational value. The caseload can be so great that inadequate reflective time is left for learning based on clinical experiences. In addition, supervision is often vaguely defined and discontinuous. Medical malpractice data indicate that resident physicians are frequently named in lawsuits, most often for lack of supervision. The recommendations are:
• The ACGME should adjust resident physicians’ workload requirements to optimize educational value. Resident physicians as well as faculty should be involved in work redesign that eliminates nonessential and noneducational activity from resident physician duties
• Mechanisms should be developed for identifying in real time when a resident physician’s workload is excessive, and processes developed to activate additional providers
• Teamwork should be actively encouraged in delivery of patient care. Historically, much of medical training has focused on individual knowledge, skills, and responsibility. As health care delivery has become more complex, it will be essential to train resident and attending physicians in effective teamwork that emphasizes collective responsibility for patient care and recognizes the signs, both individual and systemic, of a schedule and working conditions that are too demanding to be safe
• Hospitals should embrace the opportunities that resident physician training redesign offers. Hospitals should recognize and act on the potential benefits of work redesign, eg, increased efficiency, reduced costs, improved quality of care, and resident physician and attending job satisfaction
• Attending physicians should supervise all hospital admissions. Resident physicians should directly discuss all admissions with attending physicians. Attending physicians should be both cognizant of and have input into the care patients are to receive upon admission to the hospital
• Inhouse supervision should be required for all critical care services, including emergency rooms, intensive care units, and trauma services. Resident physicians should not be left unsupervised to care for critically ill patients. In settings in which the acuity is high, physicians who have completed residency should provide direct supervision for resident physicians. Supervising physicians should always be physically in the hospital for supervision of resident physicians who care for critically ill patients
• The ACGME should explicitly define “good” supervision by specialty and by year of training. Explicit requirements for intensity and level of training for supervision of specific clinical scenarios should be provided
• The Centers for Medicare and Medicaid Services (CMS) should use graduate medical education funding to provide incentives to programs with proven, effective levels of supervision. Although this action would require federal legislation, reimbursement rules would help to ensure that hospitals pay attention to the importance of good supervision and require it from their training programs
Resident physician work hours
Although the IOM “Sleep, supervision and safety” report provides a comprehensive review and discussion of all aspects of graduate medical education training, the report’s focal point is its recommendations regarding the hours that resident physicians are currently required to work. A considerable body of scientific evidence, much of it cited by the Institute of Medicine report, describes deteriorating performance in fatigued humans, as well as specific studies on resident physician fatigue and preventable medical errors.
The question before this conference was: what work redesign and cultural changes are needed to reform work hours as recommended by the Institute of Medicine’s evidence-based report? Extensive scientific data demonstrate that shifts exceeding 12–16 hours without sleep are unsafe. Several principles should be followed in efforts to reduce consecutive hours below this level and achieve safer work schedules. The recommendations are:
• Limit resident physician work hours to 12–16 hour maximum shifts
• A minimum of 10 hours off duty should be scheduled between shifts
• Resident physician input into work redesign should be actively solicited
• Schedules should be designed that adhere to principles of sleep and circadian science; this includes careful consideration of the effects of multiple consecutive night shifts, and provision of adequate time off after night work, as specified in the IOM report
• Resident physicians should not be scheduled up to the maximum permissible limits; emergencies frequently occur that require resident physicians to stay longer than their scheduled shifts, and this should be anticipated in scheduling resident physicians’ work shifts
• Hospitals should anticipate the need for iterative improvement as new schedules are initiated; be prepared to learn from the initial phase-in, and change the plan as needed
• As resident physician work hours are redesigned, attending physicians should also be considered; a potential consequence of resident physician work hour reduction and increased supervisory requirements may be an increase in work for attending physicians; this should be carefully monitored, and adjustments to attending physician work schedules made as needed to prevent unsafe work hours or working conditions for this group
• “Home call” should be brought under the overall limits of working hours; workload and hours should be monitored in each residency program to ensure that resident physicians and fellows on home call are getting sufficient sleep
• Medicare funding for graduate medical education in each hospital should be linked with adherence to the Institute of Medicine limits on resident physician work hours
Moonlighting by resident physicians
The Institute of Medicine report recommended including external as well as internal moonlighting in working hour limits. The recommendation is:
• All moonlighting work hours should be included in the ACGME working hour limits and actively monitored. Hospitals should formalize a moonlighting policy and establish systems for actively monitoring resident physician moonlighting.
Safety of resident physicians
The “Sleep, supervision and safety” report also addresses fatigue-related harm done to resident physicians themselves. The report focuses on two main sources of physical injury to resident physicians impaired by fatigue, ie, needle-stick exposure to blood-borne pathogens and motor vehicle crashes. Providing safe transportation home for resident physicians is a logistical and financial challenge for hospitals. Educating physicians at all levels on the dangers of fatigue is clearly required to change driving behavior so that safe hospital-funded transport home is used effectively. The recommendations are:
• Fatigue-related injury prevention (including not driving while drowsy) should be taught in medical school and during residency, and reinforced with attending physicians; hospitals and residency programs must be informed that resident physicians’ ability to judge their own level of impairment is itself impaired when they are sleep deprived; hence, leaving decisions about the capacity to drive to impaired resident physicians is not recommended
• Hospitals should provide transportation to all resident physicians who report feeling too tired to drive safely; in addition, although consecutive work should not exceed 16 hours, hospitals should provide transportation for all resident physicians who, because of unforeseen reasons or emergencies, work longer than 24 consecutive hours; transportation under these circumstances should be automatically provided to house staff, and should not rely on self-identification or request
Training in effective handovers and quality improvement
Handover practice for resident physicians, attendings, and other health care providers has long been identified as a weak link in patient safety throughout health care settings. Policies to improve handovers of care must be tailored to fit the appropriate clinical scenario, recognizing that information overload can also be a problem. At the heart of improving handovers is the organizational effort to improve quality, an effort in which resident physicians have typically been insufficiently engaged. The recommendations are:
• Hospitals should train attending and resident physicians in effective handovers of care
• Hospitals should create uniform processes for handovers that are tailored to meet each clinical setting; all handovers should be done verbally and face-to-face, but should also utilize written tools
• When possible, hospitals should integrate handover tools into their electronic medical record (EMR) systems; these systems should be standardized to the extent possible across residency programs in a hospital, but may be tailored to the needs of specific programs and services; the federal government should help subsidize adoption of electronic medical records by hospitals to improve signout
• When feasible, handovers should be a team effort including nurses, patients, and families
• Hospitals should include residents in their quality improvement and patient safety efforts; the ACGME should specify in its core competency requirements that resident physicians work on quality improvement projects; likewise, the Joint Commission should require that resident physicians be included in quality improvement and patient safety programs at teaching hospitals; hospital administrators and residency program directors should create opportunities for resident physicians to become involved in ongoing quality improvement projects and root cause analysis teams; feedback on successful quality improvement interventions should be shared with resident physicians and broadly disseminated
• Quality improvement/patient safety concepts should be integral to the medical school curriculum; medical school deans should elevate the topics of patient safety, quality improvement, and teamwork; these concepts should be integrated throughout the medical school curriculum and reinforced throughout residency; mastery of these concepts by medical students should be tested on the United States Medical Licensing Examination (USMLE) steps
• The federal government should support involvement of resident physicians in quality improvement efforts; initiatives to improve quality by including resident physicians in quality improvement projects should be financially supported by the Department of Health and Human Services
Monitoring and oversight of the ACGME
While the ACGME is a key stakeholder in residency training, external voices are essential to ensure that public interests are heard in the development and monitoring of standards. Consequently, the Institute of Medicine report recommended external oversight and monitoring through the Joint Commission and the Centers for Medicare and Medicaid Services (CMS). The recommendations are:
• Make comprehensive fatigue management a Joint Commission National Patient Safety Goal; fatigue is a safety concern not only for resident physicians, but also for nurses, attending physicians, and other health care workers; the Joint Commission should seek to ensure that all health care workers, not just resident physicians, are working as safely as possible
• The federal government, including the Centers for Medicare and Medicaid Services and the Agency for Healthcare Research and Quality, should encourage development of comprehensive fatigue management programs which all health systems would eventually be required to implement
• Make ACGME compliance with working hours a “condition of participation” for reimbursement of direct and indirect graduate medical education costs; financial incentives will greatly increase the adoption of and compliance with ACGME standards
Future financial support for implementation
The Institute of Medicine’s report estimates that $1.7 billion (in 2008 dollars) would be needed to implement its recommendations. Twenty-five percent of that amount ($376 million) will be required just to bring hospitals into compliance with the existing 2003 ACGME rules. Downstream savings to the health care system could potentially result from safer care, but these benefits typically do not accrue to hospitals and residency programs, which have historically been asked to bear the burden of residency reform costs. The recommendations are:
• The Institute of Medicine should convene a panel of stakeholders, including private and public funders of health care and graduate medical education, to lay down the concrete steps necessary to identify and allocate the resources needed to implement the recommendations contained in the IOM “Resident duty hours: Enhancing sleep, supervision and safety” report. Conference participants suggested several approaches to engage public and private support for this initiative
• Efforts to find additional funding to implement the Institute of Medicine recommendations should focus more broadly on patient safety and health care delivery reform; policy efforts focused narrowly upon resident physician work hours are less likely to succeed than broad patient safety initiatives that include residency redesign as a key component
• Hospitals should view the Institute of Medicine recommendations as an opportunity to begin resident physician work redesign projects as the core of a business model that embraces safety and ultimately saves resources
• Both the Secretary of Health and Human Services and the Director of the Centers for Medicare and Medicaid Services should take the Institute of Medicine recommendations into consideration when promulgating rules for innovation grants
• The National Health Care Workforce Commission should consider the Institute of Medicine recommendations when analyzing the nation’s physician workforce needs
Recommendations for future research
Conference participants concurred that convening the stakeholders and agreeing on a research agenda are key next steps. Some observed that certain sectors within the medical education community have been reluctant to act on the data. Several logical funders for future research were identified. Above all, the Centers for Medicare and Medicaid Services stands out as the one stakeholder that funds graduate medical education upstream and would reap savings downstream if preventable medical errors are reduced as a result of reform of resident physician work hours.
doi:10.2147/NSS.S19649
PMCID: PMC3630963  PMID: 23616719
resident; hospital; working hours; safety
2.  Rational Prescribing in Primary Care (RaPP): A Cluster Randomized Trial of a Tailored Intervention 
PLoS Medicine  2006;3(6):e134.
Background
A gap exists between evidence and practice regarding the management of cardiovascular risk factors. This gap could be narrowed if systematically developed clinical practice guidelines were effectively implemented in clinical practice. We evaluated the effects of a tailored intervention to support the implementation of systematically developed guidelines for the use of antihypertensive and cholesterol-lowering drugs for the primary prevention of cardiovascular disease.
Methods and Findings
We conducted a cluster-randomized trial comparing a tailored intervention to passive dissemination of guidelines in 146 general practices in two geographical areas in Norway. Each practice was randomized to either the tailored intervention (70 practices; 257 physicians) or control group (69 practices; 244 physicians). Patients starting medication for hypertension or hypercholesterolemia during the study period, as well as all patients already on treatment who consulted their physician during the trial, were included. A multifaceted intervention was tailored to address identified barriers to change. Key components were an educational outreach visit with audit and feedback, and computerized reminders linked to the medical record system. Pharmacists conducted the visits. Outcomes were measured for all eligible patients seen in the participating practices during 1 y before and after the intervention. The main outcomes were the proportions of (1) first-time prescriptions for hypertension where thiazides were prescribed, (2) patients assessed for cardiovascular risk before prescribing antihypertensive or cholesterol-lowering drugs, and (3) patients treated for hypertension or hypercholesterolemia for 3 mo or more who had achieved recommended treatment goals.
The intervention led to an increase in adherence to guideline recommendations on choice of antihypertensive drug. Thiazides were prescribed to 17% of patients in the intervention group versus 11% in the control group (relative risk 1.94; 95% confidence interval 1.49–2.49, adjusted for baseline differences and clustering effect). Little or no difference was found for risk assessment prior to prescribing or for achievement of treatment goals.
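As a quick arithmetic check on these figures, the short sketch below (an illustration only, not the trial’s actual analysis code) computes the crude relative risk from the reported prescribing proportions; the published estimate of 1.94 is higher because it additionally adjusts for baseline differences between arms and for the clustering of patients within practices.

```python
# Minimal sketch: crude relative risk from the reported thiazide
# prescribing proportions. This is NOT the RaPP trial's analysis code;
# the published RR (1.94) additionally adjusts for baseline differences
# and for clustering of patients within practices.

def crude_relative_risk(p_intervention: float, p_control: float) -> float:
    """Ratio of event proportions in two independent groups."""
    return p_intervention / p_control

rr = crude_relative_risk(0.17, 0.11)  # 17% vs 11% thiazide prescribing
print(f"Crude RR = {rr:.2f}")  # ~1.55, below the adjusted estimate of 1.94
```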
Conclusions
Our tailored intervention had a significant impact on prescribing of antihypertensive drugs, but was ineffective in improving the quality of other aspects of managing hypertension and hypercholesterolemia in primary care.
Editors' Summary
Background.
An important issue in health care is “getting research into practice,” in other words, making sure that, when evidence from research has established the best way to treat a disease, doctors actually use that approach with their patients. In reality, there is often a gap between evidence and practice.
  An example concerns the treatment of people who have high blood pressure (hypertension) and/or high cholesterol. These are common conditions, and both increase the risk of having a heart attack or a stroke. Research has shown that the risks can be lowered if patients with these conditions are given drugs that lower blood pressure (antihypertensives) and drugs that lower cholesterol. There are many types of these drugs now available. In many countries, the health authorities want family doctors (general practitioners) to make better use of these drugs. They want doctors to prescribe them to everyone who would benefit, using the type of drugs found to be most effective. When there is a choice of drugs that are equally effective, they want doctors to use the cheapest type. (In the case of antihypertensives, an older type, known as thiazides, is very effective and also very cheap, but many doctors prefer to give their patients newer, more expensive alternatives.) Health authorities have issued guidelines to doctors that address these issues. However, it is not easy to change prescribing practices, and research in several countries has shown that issuing guidelines has only limited effects.
Why Was This Study Done?
In two parts of Norway, the researchers compared the effects on prescribing practices of what they called the “passive dissemination of guidelines” with a more active approach, in which the use of the guidelines was strongly promoted and encouraged.
What Did the Researchers Do and Find?
They worked with 146 general practices. In half of them the guidelines were actively promoted. The remaining practices served as a control group; they were given the guidelines, but no special efforts were made to encourage their use. It was decided at random which practices would be in which group; this approach is called a randomized controlled trial. The methods used to actively promote use of the guidelines included personal visits to the practices by pharmacists and use of a computerized reminder system. Information was then collected on the number of patients who, when first treated for hypertension, were prescribed a thiazide. Other information collected included whether patients had been properly assessed for their level of risk (for strokes and heart attacks) before antihypertensive or cholesterol-lowering drugs were given. In addition, the researchers recorded whether the recommended targets for improvement in blood pressure and cholesterol level had been reached.
Of the patients in the control practices who, according to the guidelines, should have been prescribed thiazides, only 11% actually received them. Of those seen by doctors in the practices where the guidelines were actively promoted, 17% received thiazides. According to statistical analysis, the increase achieved by active promotion is significant. Little or no difference was found for risk assessment prior to prescribing or for achievement of treatment goals.
What Do These Findings Mean?
Even in the active promotion group, the great majority of patients (83%) were still not receiving treatment according to the guidelines. However, active promotion of guidelines is more effective than simply issuing the guidelines by themselves. The study also demonstrates that it is very hard to change prescribing practices. The efforts made here to encourage the doctors to change were considerable, and although the results were significant, they were still disappointing. Also disappointing is the fact that achievement of treatment goals was no better in the active-promotion group. These issues are discussed further in a Perspective about this study (DOI: 10.1371/journal.pmed.0030229).
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030134.
• The Web site of the American Academy of Family Physicians has a page on heart disease
• The MedlinePlus Medical Encyclopedia's pages on heart diseases and vascular diseases
• Information from NHS Direct (UK National Health Service) about heart attack and stroke
• Another PLoS Medicine article has also addressed trends in thiazide prescribing
Passive dissemination of management guidelines for hypertension and hypercholesterolaemia was compared with active promotion. Active promotion led to significant improvement in antihypertensive prescribing but not other aspects of management.
doi:10.1371/journal.pmed.0030134
PMCID: PMC1472695  PMID: 16737346
3.  Clinical Utility of Vitamin D Testing 
Executive Summary
This report from the Medical Advisory Secretariat (MAS) was intended to evaluate the clinical utility of vitamin D testing in average risk Canadians and in those with kidney disease. As a separate analysis, this report also includes a systematic literature review of the prevalence of vitamin D deficiency in these two subgroups.
This evaluation did not set out to determine the serum vitamin D thresholds that might apply to non-bone health outcomes. For bone health outcomes, no high or moderate quality evidence could be found to support a target serum level above 50 nmol/L. Similarly, no high or moderate quality evidence could be found to support vitamin D’s effects in non-bone health outcomes, other than falls.
Vitamin D
Vitamin D is a lipid-soluble vitamin that acts as a hormone. It stimulates intestinal calcium absorption and is important in maintaining adequate phosphate levels for bone mineralization, bone growth, and remodelling. It is also believed to be involved in the regulation of cell growth and proliferation, apoptosis (programmed cell death), modulation of the immune system, and other functions. Alone or in combination with calcium, vitamin D has also been shown to reduce the risk of fractures in elderly men (≥ 65 years) and postmenopausal women, and the risk of falls in community-dwelling seniors. However, in a comprehensive systematic review, inconsistent results were found concerning the effects of vitamin D in conditions such as cancer, all-cause mortality, and cardiovascular disease. In fact, no high or moderate quality evidence could be found concerning the effects of vitamin D in such non-bone health outcomes. Given the uncertainties surrounding the effects of vitamin D in non-bone health related outcomes, it was decided that this evaluation should focus on falls and the effects of vitamin D on bone health, exclusively within average-risk individuals and patients with kidney disease.
Synthesis of vitamin D occurs naturally in the skin through exposure to ultraviolet B (UVB) radiation from sunlight, but it can also be obtained from dietary sources including fortified foods, and supplements. Foods rich in vitamin D include fatty fish, egg yolks, fish liver oil, and some types of mushrooms. Since it is usually difficult to obtain sufficient vitamin D from non-fortified foods, either due to low content or infrequent use, most vitamin D is obtained from fortified foods, exposure to sunlight, and supplements.
Clinical Need: Condition and Target Population
Vitamin D deficiency may lead to rickets in infants and osteomalacia in adults. Factors believed to be associated with vitamin D deficiency include:
darker skin pigmentation,
winter season,
living at higher latitudes,
skin coverage,
kidney disease,
malabsorption syndromes such as Crohn’s disease, cystic fibrosis, and
genetic factors.
Patients with chronic kidney disease (CKD) are at a higher risk of vitamin D deficiency due to either renal losses or decreased synthesis of 1,25-dihydroxyvitamin D.
Health Canada currently recommends that, until the daily recommended intakes (DRI) for vitamin D are updated, Canada’s Food Guide (Eating Well with Canada’s Food Guide) should be followed with respect to vitamin D intake. Issued in 2007, the Guide recommends that Canadians consume two cups (500 ml) of fortified milk or fortified soy beverages daily in order to obtain a daily intake of 200 IU. In addition, men and women over the age of 50 should take 400 IU of vitamin D supplements daily. Additional recommendations were made for breastfed infants.
A Canadian survey evaluated the median vitamin D intake derived from diet alone (excluding supplements) among 35,000 Canadians, of whom 10,900 were from Ontario. Among Ontarian males ages 9 and up, the median daily dietary vitamin D intake ranged between 196 IU and 272 IU per day. Among females, it varied from 152 IU to 196 IU per day. In boys and girls ages 1 to 3, the median daily dietary vitamin D intake was 248 IU, while among those 4 to 8 years it was 224 IU.
Vitamin D Testing
Two laboratory tests for vitamin D are available, 25-hydroxy vitamin D, referred to as 25(OH)D, and 1,25-dihydroxyvitamin D. Vitamin D status is assessed by measuring the serum 25(OH)D levels, which can be assayed using radioimmunoassays, competitive protein-binding assays (CPBA), high pressure liquid chromatography (HPLC), and liquid chromatography-tandem mass spectrometry (LC-MS/MS). These may yield different results with inter-assay variation reaching up to 25% (at lower serum levels) and intra-assay variation reaching 10%.
The optimal serum concentration of vitamin D has not been established and it may change across different stages of life. Similarly, there is currently no consensus on target serum vitamin D levels. There does, however, appear to be a consensus on the definition of vitamin D deficiency at 25(OH)D < 25 nmol/L, which is based on the risk of diseases such as rickets and osteomalacia. Higher target serum levels have also been proposed based on subclinical endpoints such as parathyroid hormone (PTH). Therefore, in this report, two conservative target serum levels have been adopted: 25 nmol/L (based on the risk of rickets and osteomalacia), and 40 to 50 nmol/L (based on vitamin D’s interaction with PTH).
Ontario Context
Volume & Cost
The volume of vitamin D tests done in Ontario has been increasing over the past 5 years, rising steeply from 169,000 tests in 2007 to more than 393,400 tests in 2008. The number of tests continues to rise, with the projected number of tests for 2009 exceeding 731,000. According to the Ontario Schedule of Benefits, the billing cost of each test is $51.70 for 25(OH)D (L606, 100 LMS units, $0.517/unit) and $77.55 for 1,25-dihydroxyvitamin D (L605, 150 LMS units, $0.517/unit). Province-wide, the total annual cost of vitamin D testing has increased from approximately $1.7M in 2004 to over $21.0M in 2008. The projected annual cost for 2009 is approximately $38.8M.
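The per-test figures above follow directly from the listed LMS units and the $0.517 unit value; the sketch below (an illustrative check, not taken from the MAS report) reproduces that arithmetic and the implied average annual growth in total spending.

```python
# Sketch of the Ontario Schedule of Benefits fee arithmetic quoted above.
# The unit value ($0.517/LMS unit), unit counts, and annual totals come
# from the report; the calculations themselves are only an illustration.

UNIT_VALUE = 0.517  # dollars per LMS unit

def billing_cost(lms_units: int) -> float:
    """Per-test billing cost = LMS units x dollar value per unit."""
    return lms_units * UNIT_VALUE

print(f"25(OH)D (L606, 100 units): ${billing_cost(100):.2f}")                  # $51.70
print(f"1,25-dihydroxyvitamin D (L605, 150 units): ${billing_cost(150):.2f}")  # $77.55

# Implied average annual growth of total spending, 2004 -> 2008
total_2004, total_2008 = 1.7e6, 21.0e6
growth = (total_2008 / total_2004) ** (1 / 4) - 1
print(f"Average annual growth, 2004-2008: {growth:.0%}")  # roughly 88% per year
```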
Evidence-Based Analysis
The objective of this report is to evaluate the clinical utility of vitamin D testing in the average risk population and in those with kidney disease. As a separate analysis, the report also sought to evaluate the prevalence of vitamin D deficiency in Canada. The specific research questions addressed were thus:
What is the clinical utility of vitamin D testing in the average risk population and in subjects with kidney disease?
What is the prevalence of vitamin D deficiency in the average risk population in Canada?
What is the prevalence of vitamin D deficiency in patients with kidney disease in Canada?
Clinical utility was defined as the ability to improve bone health outcomes with the focus on the average risk population (excluding those with osteoporosis) and patients with kidney disease.
Literature Search
A literature search was performed on July 17th, 2009 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Network of Agencies for Health Technology Assessment (INAHTA) for studies published from January 1, 1998 until July 17th, 2009. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search. Articles with unknown eligibility were reviewed with a second clinical epidemiologist, and then with a group of epidemiologists, until consensus was established. The quality of evidence was assessed as high, moderate, low, or very low according to GRADE methodology.
Observational studies that evaluated the prevalence of vitamin D deficiency in Canada in the population of interest were included based on the inclusion and exclusion criteria listed below. The baseline values were used in this report in the case of interventional studies that evaluated the effect of vitamin D intake on serum levels. Studies published in grey literature were included if no studies published in the peer-reviewed literature were identified for specific outcomes or subgroups.
Considering that vitamin D status may be affected by factors such as latitude, sun exposure, and food fortification, the search focused on prevalence studies conducted in Canada. In cases where no Canadian prevalence studies were identified, the decision was made to include studies from the United States, given the similar policies in vitamin D food fortification and recommended daily intake.
Inclusion Criteria
Studies published in English
Publications that reported the prevalence of vitamin D deficiency in Canada
Studies that included subjects from the general population or with kidney disease
Studies in children or adults
Studies published between January 1998 and July 17th 2009
Exclusion Criteria
Studies that included subjects defined according to a specific disease other than kidney disease
Letters, comments, and editorials
Studies that measured the serum vitamin D levels but did not report the percentage of subjects with serum levels below a given threshold
Outcomes of Interest
Prevalence of serum vitamin D less than 25 nmol/L
Prevalence of serum vitamin D less than 40 to 50 nmol/L
Serum 25-hydroxyvitamin D was the metabolite used to assess vitamin D status. Results from studies of adults and children were reported separately. Subgroup analyses according to factors that affect serum vitamin D levels (e.g., seasonal effects, skin pigmentation, and vitamin D intake) were reported if enough information was provided in the studies.
Quality of Evidence
The quality of the prevalence studies was based on the method of subject recruitment and sampling, possibility of selection bias, and generalizability to the source population. The overall quality of the trials was examined according to the GRADE Working Group criteria.
Summary of Findings
Fourteen prevalence studies examining Canadian adults and children met the eligibility criteria. With the exception of one longitudinal study, the studies had a cross-sectional design. Two studies were conducted among Canadian adults with renal disease but none studied Canadian children with renal disease (though three such US studies were included). No systematic reviews or health technology assessments that evaluated the prevalence of vitamin D deficiency in Canada were identified. Two studies were published in grey literature, consisting of a Canadian survey designed to measure serum vitamin D levels and a study in infants presented as an abstract at a conference. Also included were the results of vitamin D tests performed in community laboratories in Ontario between October 2008 and September 2009 (provided by the Ontario Association of Medical Laboratories).
Different threshold levels were used in the studies; thus, we reported the percentage of subjects with serum levels below 25 to 30 nmol/L and below 37.5 to 50 nmol/L. Some studies stratified the results according to factors affecting vitamin D status, and two used multivariate models to investigate the effects of these characteristics (including age, season, BMI, vitamin D intake, and skin pigmentation) on serum 25(OH)D levels. It is unclear, however, if these studies were adequately powered for these subgroup analyses.
Study participants generally consisted of healthy, community-dwelling subjects and most excluded individuals with conditions or medications that alter vitamin D or bone metabolism, such as kidney or liver disease. Although the studies were conducted in different parts of Canada, fewer were performed in Northern latitudes, i.e. above 53°N, which is equivalent to the city of Edmonton.
Adults
Serum vitamin D levels of < 25 to 30 nmol/L were observed in 0% to 25.5% of the subjects included in five studies; the weighted average was 3.8% (95% CI: 3.0, 4.6). The preliminary results of the Canadian survey showed that approximately 5% of the subjects had serum levels below 29.5 nmol/L. The results of over 600,000 vitamin D tests performed in Ontarian community laboratories between October 2008 and September 2009 showed that 2.6% of adults (> 18 years) had serum levels < 25 nmol/L.
The prevalence of serum vitamin D levels below 37.5 to 50 nmol/L varied widely among studies, ranging from 8% to 73.6% with a weighted average of 22.5%. The preliminary results of the Canadian Health Measures Survey (CHMS) showed that between 10% and 25% of subjects had serum levels below 37 to 48 nmol/L. The results of the vitamin D tests performed in community laboratories showed that 10% to 25% of the individuals had serum levels between 39 and 50 nmol/L.
In an attempt to explain this inter-study variation, the study results were stratified according to factors affecting serum vitamin D levels, as summarized below. These results should be interpreted with caution as none were adjusted for other potential confounders. Adequately powered multivariate analyses would be necessary to determine the contribution of risk factors to lower serum 25(OH)D levels.
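The weighted averages quoted in this section are presumably weighted by study sample size; the sketch below shows that computation under this assumption. The study sizes and prevalences used here are hypothetical placeholders, not data from the report.

```python
# Sample-size-weighted average prevalence: sum(n_i * p_i) / sum(n_i).
# The numbers below are HYPOTHETICAL placeholders, not the report's data.

def weighted_average(prevalences: list[float], sample_sizes: list[int]) -> float:
    """Weighted mean of per-study prevalences, with weights = sample sizes."""
    assert len(prevalences) == len(sample_sizes)
    total_n = sum(sample_sizes)
    return sum(p * n for p, n in zip(prevalences, sample_sizes)) / total_n

p = [0.08, 0.25, 0.40]  # per-study prevalence of levels < 37.5-50 nmol/L
n = [1200, 300, 150]    # per-study sample size
print(f"Weighted average = {weighted_average(p, n):.1%}")  # 14.0%
```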
Seasonal variation
Three adult studies evaluating serum vitamin D levels in different seasons observed a trend towards a higher prevalence of serum levels < 37.5 to 50 nmol/L during the winter and spring months, specifically 21% to 39%, compared to 8% to 14% in the summer. The weighted average was 23.6% over the winter/spring months and 9.6% over summer. The difference between the seasons was not statistically significant in one study and not reported in the other two studies.
Skin Pigmentation
Four studies observed a trend toward a higher prevalence of serum vitamin D levels < 37.5 to 50 nmol/L in subjects with darker skin pigmentation compared to those with lighter skin pigmentation, with weighted averages of 46.8% among adults with darker skin colour and 15.9% among those with fairer skin.
Vitamin D intake and serum levels
Four adult studies evaluated serum vitamin D levels according to vitamin D intake and showed an overall trend toward a lower prevalence of serum levels < 37.5 to 50 nmol/L with higher levels of vitamin D intake. One study observed a dose-response relationship between higher vitamin D intake from supplements, diet (milk), and sun exposure (results not adjusted for other variables). It was observed that subjects taking 50 to 400 IU or > 400 IU of vitamin D per day had a 6% and 3% prevalence of serum vitamin D level < 40 nmol/L, respectively, versus 29% in subjects not on vitamin D supplementation. Similarly, among subjects drinking one or two glasses of milk per day, the prevalence of serum vitamin D levels < 40 nmol/L was found to be 15%, versus 6% in those who drink more than two glasses of milk per day and 21% among those who do not drink milk. On the other hand, one study observed little variation in serum vitamin D levels during winter according to milk intake, with the proportion of subjects exhibiting vitamin D levels of < 40 nmol/L being 21% among those drinking 0-2 glasses per day, 26% among those drinking > 2 glasses, and 20% among non-milk drinkers.
The overall quality of evidence for the studies conducted among adults was deemed to be low, although it was considered moderate for the subgroups of skin pigmentation and seasonal variation.
Newborn, Children and Adolescents
Five Canadian studies evaluated serum vitamin D levels in newborns, children, and adolescents. In four of these, between 0% and 36% of children across age groups exhibited deficiency, with a weighted average of 6.4%. The results of over 28,000 vitamin D tests performed in children 0 to 18 years old in Ontario laboratories (Oct. 2008 to Sept. 2009) showed that 4.4% had serum levels of < 25 nmol/L.
According to two studies, 32% of infants 24 to 30 months old and 35.3% of newborns had serum vitamin D levels of < 50 nmol/L. Two studies of children 2 to 16 years old reported that 24.5% and 34% had serum vitamin D levels below 37.5 to 40 nmol/L. In both studies, older children exhibited a higher prevalence than younger children, with weighted averages of 34.4% and 10.3%, respectively. The overall weighted average of the prevalence of serum vitamin D levels < 37.5 to 50 nmol/L among pediatric studies was 25.8%. The preliminary results of the Canadian survey showed that between 10% and 25% of subjects between 6 and 11 years (N = 435) had serum levels below 50 nmol/L, while for those 12 to 19 years, 25% to 50% exhibited serum vitamin D levels below 50 nmol/L.
The effects of season, skin pigmentation, and vitamin D intake were not explored in Canadian pediatric studies. A Canadian surveillance study did, however, report 104 confirmed cases (2.9 cases per 100,000 children) of vitamin D-deficient rickets among Canadian children aged 1 to 18 between 2002 and 2004, of which 57 (55%) were from Ontario. The highest incidence occurred among children living in the North, i.e., the Yukon, Northwest Territories, and Nunavut. In 92 (89%) cases, skin pigmentation was categorized as intermediate to dark, 98 (94%) had been breastfed, and 25 (24%) were offspring of immigrants to Canada. There were no cases of rickets in children receiving ≥ 400 IU of vitamin D supplementation per day.
Overall, the quality of evidence of the studies of children was considered very low.
Kidney Disease
Adults
Two studies evaluated serum vitamin D levels in Canadian adults with kidney disease. The first included 128 patients with chronic kidney disease stages 3 to 5, of whom 38% had serum vitamin D levels of < 37.5 nmol/L (measured between April and July). This is higher than what was reported in Canadian studies of the general population during the summer months (i.e., between 8% and 14%). In the second, which examined 419 subjects who had received a renal transplant (mean time since transplantation: 7.2 ± 6.4 years), the prevalence of serum vitamin D levels < 40 nmol/L was 27.3%. The authors concluded that the prevalence observed in the study population was similar to what is expected in the general population.
Children
No studies evaluating serum vitamin D levels in Canadian pediatric patients with kidney disease could be identified; however, three US studies of children with chronic kidney disease stages 1 to 5 were included. The mean age varied between 10.7 and 12.5 years in two studies but was not reported in the third. Across all three studies, the prevalence of serum vitamin D levels below the range of 37.5 to 50 nmol/L varied between 21% and 39%, which is not considerably different from what was observed in studies of healthy Canadian children (24% to 35%).
Overall, the quality of evidence in adults and children with kidney disease was considered very low.
Clinical Utility of Vitamin D Testing
A high quality comprehensive systematic review published in August 2007 evaluated the association between serum vitamin D levels and different bone health outcomes in different age groups. A total of 72 studies were included. The authors observed that there was a trend towards improvement in some bone health outcomes with higher serum vitamin D levels. Nevertheless, precise thresholds for improved bone health outcomes could not be defined across age groups. Further, no new studies on the association were identified during an updated systematic review on vitamin D published in July 2009.
With regards to non-bone health outcomes, there is no high or even moderate quality evidence that supports the effectiveness of vitamin D in outcomes such as cancer, cardiovascular outcomes, and all-cause mortality. Whatever residual uncertainty remains, there is no evidence that testing vitamin D levels encourages adherence to Health Canada’s guidelines for vitamin D intake. The serum vitamin D threshold required to prevent non-bone health related conditions cannot be resolved until a causal effect or correlation has been demonstrated between vitamin D levels and these conditions. This is an ongoing research issue around which there is currently too much uncertainty to base any conclusions that would support routine vitamin D testing.
For patients with chronic kidney disease (CKD), there is again no high or moderate quality evidence supporting improved outcomes through the use of calcitriol or vitamin D analogs. In the absence of such data, the authors of the guidelines for CKD patients consider it best practice to maintain serum calcium and phosphate at normal levels, while supplementation with active vitamin D should be considered if serum PTH levels are elevated. As previously stated, the authors of guidelines for CKD patients believe that there is not enough evidence to support routine vitamin D [25(OH)D] testing. According to what is stated in the guidelines, decisions regarding the commencement or discontinuation of treatment with calcitriol or vitamin D analogs should be based on serum PTH, calcium, and phosphate levels.
Limitations associated with the evidence of vitamin D testing include ambiguities in the definition of an ‘adequate threshold level’ and both inter- and intra-assay variability. The MAS considers that both the lack of a consensus on target serum vitamin D levels and these assay limitations directly affect and undermine the clinical utility of testing. The evidence supporting the clinical utility of vitamin D testing is thus considered to be of very low quality.
Daily vitamin D intake, either through diet or supplementation, should follow Health Canada’s recommendations for healthy individuals of different age groups. For those with medical conditions such as renal disease, liver disease, and malabsorption syndromes, and for those taking medications that may affect vitamin D absorption/metabolism, physician guidance should be followed with respect to both vitamin D testing and supplementation.
Conclusions
Studies indicate that vitamin D, alone or in combination with calcium, may decrease the risk of fractures and falls among older adults.
There is no high or moderate quality evidence to support the effectiveness of vitamin D in other outcomes such as cancer, cardiovascular outcomes, and all-cause mortality.
Studies suggest that the prevalence of vitamin D deficiency in Canadian adults and children is relatively low (approximately 5%), and between 10% and 25% have serum levels below 40 to 50 nmol/L (based on very low to low grade evidence).
Given the limitations associated with serum vitamin D measurement, ambiguities in the definition of a ‘target serum level’, and the availability of clear guidelines on vitamin D supplementation from Health Canada, vitamin D testing is not warranted for the average risk population.
Health Canada has issued recommendations regarding the adequate daily intake of vitamin D, but current studies suggest that the mean dietary intake is below these recommendations. Accordingly, Health Canada’s guidelines and recommendations should be promoted.
Based on a moderate level of evidence, individuals with darker skin pigmentation appear to have a higher risk of low serum vitamin D levels than those with lighter skin pigmentation and therefore may need to be specially targeted with respect to optimum vitamin D intake. The causal mechanism underlying this association is currently unclear.
Individuals with medical conditions such as renal and liver disease, osteoporosis, and malabsorption syndromes, as well as those taking medications that may affect vitamin D absorption/metabolism, should follow their physician’s guidance concerning both vitamin D testing and supplementation.
PMCID: PMC3377517  PMID: 23074397
4.  How Evidence-Based Are the Recommendations in Evidence-Based Guidelines? 
PLoS Medicine  2007;4(8):e250.
Background
Treatment recommendations for the same condition from different guideline bodies often disagree, even when the same randomized controlled trial (RCT) evidence is cited. Guideline appraisal tools focus on methodology and quality of reporting, but not on the nature of the supporting evidence. This study was done to evaluate the quality of the evidence (based on consideration of its internal validity, clinical relevance, and applicability) underlying therapy recommendations in evidence-based clinical practice guidelines.
Methods and Findings
A cross-sectional analysis of cardiovascular risk management recommendations was performed for three different conditions (diabetes mellitus, dyslipidemia, and hypertension) from three pan-national guideline panels (from the United States, Canada, and Europe). Of the 338 treatment recommendations in these nine guidelines, 231 (68%) cited RCT evidence but only 105 (45%) of these RCT-based recommendations were based on high-quality evidence. RCT-based evidence was downgraded most often because of reservations about the applicability of the RCT to the populations specified in the guideline recommendation (64/126 cases, 51%) or because the RCT reported surrogate outcomes (59/126 cases, 47%).
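Because two different denominators appear in these figures, a small sketch may help keep them straight (the counts come from the abstract; the script itself is only illustrative): 68% is the share of all 338 recommendations that cited RCT evidence, while 45% is the share of the 231 RCT-based recommendations that rested on high-quality evidence.

```python
# Illustrative check of the denominators behind the percentages above.
total_recommendations = 338
rct_based = 231       # recommendations citing RCT evidence
high_quality = 105    # RCT-based recommendations rated high quality

print(f"RCT-based / all recommendations:    {rct_based / total_recommendations:.0%}")   # 68%
print(f"High quality / RCT-based:           {high_quality / rct_based:.0%}")            # 45%
print(f"High quality / all recommendations: {high_quality / total_recommendations:.0%}")# 31%
```

The last ratio, roughly 31%, is the “less than one-third” figure discussed in the Editors’ Summary below.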
Conclusions
The results of internally valid RCTs may not be applicable to the populations, interventions, or outcomes specified in a guideline recommendation and therefore should not always be assumed to provide high-quality evidence for therapy recommendations.
From an analysis of cardiovascular risk-management recommendations in guidelines produced by pan-national panels, McAlister and colleagues concluded that fewer than half were based on high-quality evidence.
Editors' Summary
Background.
Until recently, doctors largely relied on their own experience to choose the best treatment for their patients. Faced with a patient with high blood pressure (hypertension), for example, the doctor had to decide whether to recommend lifestyle changes or to prescribe drugs to reduce the blood pressure. If he or she chose the latter, he or she then had to decide which drug to prescribe, set a target blood pressure, and decide how long to wait before changing the prescription if this target was not reached. But, over the past decade, numerous clinical practice guidelines have been produced by governmental bodies and medical associations to help doctors make treatment decisions like these. For each guideline, experts have searched the medical literature for the current evidence about the diagnosis and treatment of a disease, evaluated the quality of that evidence, and then made recommendations based on the best evidence available.
Why Was This Study Done?
The recommendations made in different clinical practice guidelines vary, in part because they are based on evidence of varying quality. To help clinicians decide which recommendations to follow, some guidelines indicate the strength of their recommendations by grading them, based on the methods used to collect the underlying evidence. Thus, a randomized clinical trial (RCT)—one in which patients are randomly allocated to different treatments without the patient or clinician knowing the allocation—provides higher-quality evidence than a nonrandomized trial. Similarly, internally valid trials—in which the differences between patient groups are solely due to their different treatments and not to other aspects of the trial—provide high-quality evidence. However, grading schemes rarely consider the size of studies and whether they have focused on clinical or so-called “surrogate” measures. (For example, an RCT of a treatment to reduce heart or circulation [“cardiovascular”] problems caused by high blood pressure might have death rate as a clinical measure; a surrogate endpoint would be blood pressure reduction.) Most guidelines also do not consider how generalizable (applicable) the results of a trial are to the populations, interventions, and outcomes specified in the guideline recommendation. In this study, the researchers have investigated the quality of the evidence underlying recommendations for cardiovascular risk management in nine evidence-based clinical practice guidelines using these additional criteria.
What Did the Researchers Do and Find?
The researchers extracted the recommendations for managing cardiovascular risk from the current US, Canadian, and European guidelines for the management of diabetes, abnormal blood lipid levels (dyslipidemia), and hypertension. They graded the quality of evidence for each recommendation using the Canadian Hypertension Education Program (CHEP) grading scheme, which considers the type of study, its internal validity, its clinical relevance, and how generally applicable the evidence is considered to be. Of 338 evidence-based recommendations, two-thirds were based on evidence collected in internally valid RCTs, but only half of these RCT-based recommendations were based on high-quality evidence. The evidence underlying 64 of the guideline recommendations failed to achieve a high CHEP grade because the RCT data were collected in a population with characteristics different from those covered by the guideline. For example, a recommendation to use spironolactone to reduce blood pressure in people with hypertension was based on an RCT in which the participants initially had congestive heart failure with normal blood pressure. Another 59 recommendations were downgraded because they were based on evidence from RCTs that had not focused on clinical measures of effectiveness.
What Do These Findings Mean?
These findings indicate that although most of the recommendations for cardiovascular risk management therapies in the selected guidelines were based on evidence collected in internally valid RCTs, less than one-third were based on high-quality evidence applicable to the populations, treatments, and outcomes specified in guideline recommendations. A limitation of this study is that it analyzed a subset of recommendations in only a few guidelines. Nevertheless, the findings serve to warn clinicians that evidence-based guidelines are not necessarily based on high-quality evidence. In addition, they emphasize the need to make the evidence base underlying guideline recommendations more transparent by using an extended grading system like the CHEP scheme. If this were done, the researchers suggest, it would help clinicians apply guideline recommendations appropriately to their individual patients.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040250.
• Wikipedia contains pages on evidence-based medicine and on clinical practice guidelines (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
• The National Guideline Clearinghouse provides information on US national guidelines
• The Guidelines International Network promotes the systematic development and application of clinical practice guidelines
• Information is available on the Canadian Hypertension Education Program (CHEP) (in French and English)
• See information on the Grading of Recommendations Assessment, Development and Evaluation (GRADE) working group, an organization that has developed a grading scheme similar to the CHEP scheme (in English, Spanish, French, German, and Italian)
doi:10.1371/journal.pmed.0040250
PMCID: PMC1939859  PMID: 17683197
5.  A Randomized Controlled Trial Comparing the Effects of Counseling and Alarm Device on HAART Adherence and Virologic Outcomes 
PLoS Medicine  2011;8(3):e1000422.
Michael Chung and colleagues show that intensive early adherence counseling at HAART initiation resulted in sustained, significant impact on adherence and virologic treatment failure, whereas use of an alarm device had no effect.
Background
Behavioral interventions that promote adherence to antiretroviral medications may decrease HIV treatment failure. Antiretroviral treatment programs in sub-Saharan Africa confront increasing financial constraints to provide comprehensive HIV care, which include adherence interventions. This study compared the impact of counseling and use of an alarm device on adherence and biological outcomes in a resource-limited setting.
Methods and Findings
A randomized controlled trial with a factorial design was conducted in Nairobi, Kenya. Antiretroviral-naïve individuals initiating free highly active antiretroviral therapy (HAART) in the form of fixed-dose combination pills (d4T, 3TC, and nevirapine) were randomized to one of four arms: counseling (three counseling sessions around HAART initiation), alarm (pocket electronic pill reminder carried for 6 months), counseling plus alarm, and neither counseling nor alarm. Participants were followed for 18 months after HAART initiation. Primary study endpoints included plasma HIV-1 RNA and CD4 count every 6 months, mortality, and adherence measured by monthly pill count. Between May 2006 and September 2008, 400 individuals were enrolled, 362 initiated HAART, and 310 completed follow-up. Participants who received counseling were 29% less likely to have monthly adherence <80% (hazard ratio [HR] = 0.71; 95% confidence interval [CI] 0.49–1.01; p = 0.055) and 59% less likely to experience viral failure (HIV-1 RNA ≥5,000 copies/ml) (HR 0.41; 95% CI 0.21–0.81; p = 0.01) compared to those who received no counseling. There was no significant impact of using an alarm on poor adherence (HR 0.93; 95% CI 0.65–1.32; p = 0.7) or viral failure (HR 0.99; 95% CI 0.53–1.84; p = 1.0) compared to those who did not use an alarm. Neither counseling nor alarm was significantly associated with mortality or rate of immune reconstitution.
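The percentage reductions quoted above follow directly from the hazard ratios, since a hazard ratio below 1 corresponds to a (1 − HR) × 100% lower risk; a quick check of that arithmetic in Python (a worked illustration, not the study's own code):

    # Risk reduction implied by a hazard ratio: (1 - HR) * 100.
    hr_adherence = 0.71      # counseling vs. no counseling, monthly adherence <80%
    hr_viral_failure = 0.41  # counseling vs. no counseling, viral failure
    print(round((1 - hr_adherence) * 100))      # 29 (% less likely)
    print(round((1 - hr_viral_failure) * 100))  # 59 (% less likely)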
Conclusions
Intensive early adherence counseling at HAART initiation resulted in sustained, significant impact on adherence and virologic treatment failure during 18-month follow-up, while use of an alarm device had no effect. As antiretroviral treatment clinics expand to meet an increasing demand for HIV care in sub-Saharan Africa, adherence counseling should be implemented to decrease the development of treatment failure and spread of resistant HIV.
Trial registration
ClinicalTrials.gov NCT00273780
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Adherence to HIV treatment programs in poor countries has long been cited as an important public health concern, especially as poor adherence can lead to drug resistance and inadequate treatment of HIV. However, two factors have recently cast doubt on the scale of the adherence problem: (1) recent studies have shown that adherence is high in African HIV treatment programs and often better than in Western HIV clinics; for example, in a meta-analysis of 27 cohorts from 12 African countries, adequate adherence was noted in 77% of subjects, compared to only 55% among 31 North American cohorts. (2) The choice of antiretroviral regimen may affect the development of antiretroviral resistance. In poor countries, most antiretroviral regimens contain non-nucleoside reverse transcriptase inhibitors (NNRTIs), such as nevirapine or efavirenz, which remain in the patient's circulation for weeks after single-dose administration. Patients on such regimens may therefore not develop antiretroviral resistance unless their adherence drops below 80%, in contrast to the more stringent adherence levels of 95% or more needed to prevent resistance in regimens based on unboosted protease inhibitors; this forgiveness offsets some treatment lapses in resource-limited settings, where NNRTI-based regimens are widely used.
Why Was This Study Done?
Given that adherence may not be as crucial an issue as previously thought, antiretroviral treatment programs in sub-Saharan Africa may be spending scarce resources to promote adherence to the detriment of other, potentially more effective elements of HIV treatment and management programs. Although many treatment programs currently include adherence interventions, there is limited quality evidence that any of these methods improve long-term adherence to HIV treatment. Therefore, it is necessary to identify adherence interventions that are inexpensive and proven to be effective in resource-limited settings. As adherence counseling is already widely implemented in African HIV treatment programs and inexpensive alarm devices are thought to also improve compliance, the researchers compared the impact of adherence counseling and the use of an alarm device on adherence and biological outcomes in patients enrolled in an HIV treatment program in Nairobi, Kenya.
What Did the Researchers Do and Find?
The researchers randomly assigned 400 eligible patients (newly diagnosed with HIV, never having taken antiretroviral therapy before, and aged over 18 years) to four arms: (1) adherence counseling alone; (2) alarm device alone; (3) both adherence counseling and alarm device; and (4) a control group that received neither. The patients had blood taken to record baseline CD4 count and HIV-1 RNA and, after starting HIV treatment, returned to the study clinic every month with their pill bottles so that the study pharmacist could count and record the number of pills remaining in the bottle, and to receive another prescription. Patients were followed up for 18 months and had their CD4 count and HIV-1 RNA measured at 6, 12, and 18 months.
Patients receiving adherence counseling were 29% less likely to experience poor adherence than those who received no counseling. Furthermore, those receiving intensive early adherence counseling were 59% less likely to experience viral failure. However, there were no significant differences in mortality or in CD4 counts at 18 months of follow-up between those who received counseling and those who did not. There were also no significant differences in adherence, time to viral failure, mortality, or CD4 counts between patients who received alarm devices and those who did not.
What Do These Findings Mean?
The results of this study suggest that intensive adherence counseling around the time of HIV treatment initiation significantly reduces poor adherence and virologic treatment failure, while using an alarm device has no effect. Therefore, investment in careful counseling based on individual needs at the onset of HIV treatment appears to have sustained benefit, possibly by strengthening the relationship between health care provider and patient through communication, education, and trust. Interactive adherence counseling supports the bond between the clinic and the patient and may result in fewer patients needing to switch to expensive second-line medications and, possibly, may help to decrease the spread of resistant HIV. These findings define an effective adherence counseling protocol and are highly relevant to other HIV clinics caring for large numbers of patients in sub-Saharan Africa.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000422.
UNAIDS provides information about HIV treatment strategies
The American Public Health Association has information about adherence to HIV treatment regimens
The US Department of Health and Human Services has information for patients about adherence to HIV treatment
The World Health Organization provides information about HIV treatment pharmacovigilance
doi:10.1371/journal.pmed.1000422
PMCID: PMC3046986  PMID: 21390262
6.  Pharmacy Refill Adherence Compared with CD4 Count Changes for Monitoring HIV-Infected Adults on Antiretroviral Therapy 
PLoS Medicine  2008;5(5):e109.
Background
World Health Organization (WHO) guidelines for monitoring HIV-infected individuals taking combination antiretroviral therapy (cART) in resource-limited settings recommend using CD4+ T cell (CD4) count changes to monitor treatment effectiveness. In practice, however, falling CD4 counts are a consequence, rather than a cause, of virologic failure. Adherence lapses precede virologic failure and, unlike CD4 counts, data on adherence are immediately available to all clinics dispensing cART. However, the accuracy of adherence assessments for predicting future or detecting current virologic failure has not been determined. The goal of this study therefore was to determine the accuracy of adherence assessments for predicting and detecting virologic failure and to compare the accuracy of adherence-based monitoring approaches with approaches monitoring CD4 count changes.
Methodology and Findings
We conducted an observational cohort study among 1,982 of 4,984 (40%) HIV-infected adults initiating non-nucleoside reverse transcriptase inhibitor-based cART in the Aid for AIDS Disease Management Program, which serves nine countries in southern Africa. Pharmacy refill adherence was calculated as the number of months of cART claims submitted divided by the number of complete months between cART initiation and the last refill prior to the endpoint of interest, expressed as a percentage. The main outcome measure was virologic failure defined as a viral load > 1,000 copies/ml (1) at an initial assessment either 6 or 12 mo after cART initiation and (2) after a previous undetectable (i.e., < 400 copies/ml) viral load (breakthrough viremia). Adherence levels outperformed CD4 count changes when used to detect current virologic failure in the first year after cART initiation (area under the receiver operating characteristic [ROC] curves [AUC] were 0.79 and 0.68 [difference = 0.11; 95% CI 0.06 to 0.16; χ2 = 20.1] respectively at 6 mo, and 0.85 and 0.75 [difference = 0.10; 95% CI 0.05 to 0.14; χ2 = 20.2] respectively at 12 mo; p < 0.001 for both comparisons). When used to detect current breakthrough viremia, adherence and CD4 counts were equally accurate (AUCs of 0.68 versus 0.67, respectively [difference = 0.01; 95% CI −0.06 to 0.07]; χ2 = 0.1, p > 0.5). In addition, adherence levels assessed 3 mo prior to viral load assessments were as accurate for virologic failure occurring approximately 3 mo later as were CD4 count changes calculated from cART initiation to the actual time of the viral load assessments, indicating the potential utility of adherence assessments for predicting future, rather than simply detecting current, virologic failure. Moreover, combinations of CD4 count and adherence data appeared useful in identifying patients at very low risk of virologic failure.
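For concreteness, the refill-based adherence measure described above reduces to a simple ratio of months claimed to complete months elapsed; a minimal Python sketch (the function name and example figures are illustrative, not taken from the study):

    def refill_adherence_percent(months_claimed, complete_months):
        # Months of cART claims submitted, divided by the complete months
        # between cART initiation and the last refill before the endpoint
        # of interest, expressed as a percentage.
        return 100.0 * months_claimed / complete_months

    # Example: 10 monthly claims over 12 complete months -> 83.3% adherence.
    print(round(refill_adherence_percent(10, 12), 1))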
Conclusions
Pharmacy refill adherence assessments were as accurate as CD4 counts for detecting current virologic failure in this cohort of patients on cART and have the potential to predict virologic failure before it occurs. Approaches to cART scale-up in resource-limited settings should include an adherence-based monitoring approach.
Analyzing pharmacy and laboratory records from 1,982 patients beginning HIV therapy in southern Africa, Gregory Bisson and colleagues find medication adherence superior to CD4 count changes in identifying treatment failure.
Editors' Summary
Background.
Globally, more than 30 million people are infected with the human immunodeficiency virus (HIV), the cause of acquired immunodeficiency syndrome (AIDS). Combinations of antiretroviral drugs that hold HIV in check (viral suppression) have been available since 1996. Unfortunately, most of the people affected by HIV/AIDS live in developing countries and cannot afford these expensive drugs. As a result, life expectancy has plummeted and economic growth has reversed in these poor countries since the beginning of the AIDS pandemic. Faced with this humanitarian crisis, the lack of access to HIV treatment was declared a global health emergency in 2003. Today, through the concerted efforts of governments, international organizations, and funding bodies, about a quarter of the HIV-positive people in developing and transitional countries who are in immediate need of life-saving, combination antiretroviral therapy (cART) receive the drugs they need.
Why Was This Study Done?
To maximize the benefits of cART, health-care workers in developing countries need simple, affordable ways to monitor viral suppression in their patients—a poor virologic response to cART can lead to the selection of drug-resistant HIV, rapid disease progression, and death. In developed countries, virologic response is monitored by measuring the number of viral particles in patients' blood (viral load) but this technically demanding assay is unavailable in most developing countries. Instead, the World Health Organization recommends that CD4+ T cell (CD4) counts be used to monitor patient responses to cART in resource-limited settings. HIV results in loss of CD4 cells (a type of immune system cell), so a drop in a patient's CD4 count often indicates virologic failure (failure of treatment to suppress the virus). However, falling CD4 counts are often a result of virologic failure and therefore monitoring CD4 counts for drops is unlikely to prevent virologic failure from occurring. Rather, falling CD4 counts are often used only to guide a change to new medicines, which may be even more expensive or difficult to take. On the other hand “adherence lapses”—the failure to take cART regularly—often precede virologic failure, so detecting them early provides an opportunity for improvement in adherence that could prevent virologic failure. Because clinics that dispense cART routinely collect data that can be used to calculate adherence, in this study the researchers investigate whether assessing adherence might provide an alternative, low-cost way to monitor and predict virologic failure among HIV-infected adults on cART.
What Did the Researchers Do and Find?
The Aid for AIDS Disease Management Program provides cART to medical insurance fund subscribers in nine countries in southern Africa. Data on claims for antiretroviral drugs made through this program, plus CD4 counts assessed at about 6 or 12 months after initiating cART, and viral load measurements taken within 45 days of a CD4 count, were available for nearly 2,000 HIV-positive adults who had been prescribed a combination of HIV drugs including either efavirenz or nevirapine. The researchers defined adherence as the number of months of cART claims submitted divided by the number of complete months between cART initiation and the last pharmacy refill before a viral load assessment was performed. Virologic failure was defined in two ways: as a viral load of more than 1,000 copies per ml of blood 6 or 12 months after cART initiation, or as a rebound of viral load to similar levels after a previously very low reading (breakthrough viremia). The researchers' statistical analysis of these data shows that at 6 and 12 months after initiation of cART, adherence levels indicated virologic failure more accurately than CD4 count changes. For breakthrough viremia, both measurements were equally accurate. Adherence levels during the first 3 months of cART predicted virologic failure at 6 months as accurately as did CD4 count changes since cART initiation. Finally, the combination of adherence levels and CD4 count changes accurately identified patients at very low risk of virologic failure.
What Do These Findings Mean?
These findings suggest that adherence assessments (based in this study on insurance claims for pharmacy refills) can identify the patients on cART who are at high and low risk of virologic failure at least as accurately as CD4 counts. In addition, they suggest that adherence assessments could be used for early identification of patients at high risk of virologic failure, averting the health impact of treatment failure and the cost of changing to second-line drug regimens. Studies need to be done in other settings (in particular, in public clinics where cART is provided without charge) to confirm the generalizability of these findings. These findings do not change the fact that monitoring CD4 counts plays an important role in deciding when to start cART or indicating when cART is no longer protecting the immune system. But, write the researchers, systematic monitoring of adherence to cART should be considered as an alternative to CD4 count monitoring in patients who are receiving cART in resource-limited settings or as a way to direct the use of viral load testing where feasible.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050109.
This study is discussed further in a PLoS Medicine Perspective by David Bangsberg
Information is available from the US National Institute of Allergy and Infectious Diseases on HIV infection and AIDS
HIV InSite has comprehensive information on all aspects of HIV/AIDS, including an article about adherence to antiretroviral therapy
Information is available from Avert, an international AIDS charity, on HIV and AIDS in Africa and on providing AIDS drug treatment for millions
The World Health Organization provides information about universal access to HIV treatment (in several languages) and on its recommendations for antiretroviral therapy for HIV infection in adults and adolescents
The US Centers for Disease Control and Prevention also provides information on global efforts to deal with the HIV/AIDS pandemic (in English and Spanish)
doi:10.1371/journal.pmed.0050109
PMCID: PMC2386831  PMID: 18494555
7.  The Potential Impact of Pre-Exposure Prophylaxis for HIV Prevention among Men Who Have Sex with Men and Transwomen in Lima, Peru: A Mathematical Modelling Study 
PLoS Medicine  2012;9(10):e1001323.
Gabriela Gomez and colleagues developed a mathematical model of the HIV epidemic among men who have sex with men and transwomen in Lima, Peru to explore whether HIV pre-exposure prophylaxis could be a cost-effective addition to existing HIV prevention strategies.
Background
HIV pre-exposure prophylaxis (PrEP), the use of antiretroviral drugs by uninfected individuals to prevent HIV infection, has demonstrated effectiveness in preventing HIV acquisition in a high-risk population of men who have sex with men (MSM). Consequently, there is a need to understand if and how PrEP can be used cost-effectively to prevent HIV infection in such populations.
Methods and Findings
We developed a mathematical model representing the HIV epidemic among MSM and transwomen (male-to-female transgender individuals) in Lima, Peru, as a test case. PrEP effectiveness in the model is assumed to result from the combination of a “conditional efficacy” parameter and an adherence parameter. Annual operating costs from a health provider perspective were based on the US Centers for Disease Control and Prevention interim guidelines for PrEP use. The model was used to investigate the population-level impact, cost, and cost-effectiveness of PrEP under a range of implementation scenarios. The epidemiological impact of PrEP is largely driven by programme characteristics. For a modest PrEP coverage of 5%, over 8% of infections could be averted in a programme prioritising those at higher risk and attaining the adherence levels of the Pre-Exposure Prophylaxis Initiative study. Across all scenarios, the highest estimated cost per disability-adjusted life year averted (uniform strategy for a coverage level of 20%, US$1,036–US$4,254) is below the World Health Organization recommended threshold for cost-effective interventions, while only certain optimistic scenarios (low coverage of 5% and some or high prioritisation) are likely to be cost-effective using the World Bank threshold. The impact of PrEP is reduced if those on PrEP decrease condom use, but only extreme behaviour changes among non-adherers (over 80% reduction in condom use) and a low PrEP conditional efficacy (40%) would adversely impact the epidemic. However, PrEP will not arrest HIV transmission in isolation because of its incomplete effectiveness and dependence on adherence, and because the high cost of programmes limits the coverage levels that could potentially be attained.
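The abstract describes PrEP effectiveness as the combination of a “conditional efficacy” parameter and an adherence parameter; below is a minimal sketch of one natural, multiplicative reading of that combination. The multiplicative form and the numbers are illustrative assumptions only, not the model's published equations:

    # Illustrative assumption: effectiveness as conditional efficacy scaled by adherence.
    def prep_effectiveness(conditional_efficacy, adherence):
        return conditional_efficacy * adherence

    print(prep_effectiveness(0.90, 0.50))  # 0.45 with a high conditional efficacy
    print(prep_effectiveness(0.40, 0.50))  # 0.20 with the low (40%) efficacy scenario above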
Conclusions
A strategic PrEP intervention could be a cost-effective addition to existing HIV prevention strategies for MSM populations. However, despite being cost-effective, a substantial expenditure would be required to generate significant reductions in incidence.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Without a vaccine, the only ways to halt the global HIV epidemic are prevention strategies that reduce transmission of HIV. Until recently, behavioral strategies such as condom use and reduction of sexual partners have been at the center of HIV prevention. In the past few years, several biological prevention measures have also been shown to be effective in reducing (though not completely preventing) HIV transmission. These include male circumcision, treatment for prevention (giving antiretroviral drugs to HIV-infected people, before they need it for their own health, to reduce their infectiousness) and pre-exposure prophylaxis (or PrEP), in which HIV-negative people use antiretroviral drugs to protect themselves from infection. One PrEP regimen (a daily pill containing two different antiretrovirals) has been shown in a clinical trial to reduce new infections by 44% among men who have sex with men (MSM). In July 2012, the US Food and Drug Administration approved this PrEP regimen to reduce the risk of HIV infection in uninfected men and women who are at high risk of HIV infection and who may engage in sexual activity with HIV-infected partners. The approval makes it clear that PrEP needs to be used in combination with safe sex practices.
Why Was This Study Done?
Clinical trials have shown that PrEP can reduce HIV infections among participants, but they have not examined the consequences PrEP could have at the population level. Before decision-makers can decide whether to invest in PrEP programs, they need to know about the costs and benefits at the population level. Besides the price of the drug itself, the costs include HIV testing before starting PrEP, as well as regular tests thereafter. The health benefits of reducing new HIV infections are calculated in “disability-adjusted life years” (or DALYs) averted. One DALY is equal to one year of healthy life lost. Other benefits include future savings in lifelong HIV/AIDS treatment for every person whose infection is prevented by PrEP.
This study estimates the potential costs and health benefits of several hypothetical PrEP roll-out scenarios among the community of MSM in Lima, Peru. The scientists chose this community because many of the participants in the clinical trial that showed that PrEP can reduce infections came from it, so they have some knowledge of how PrEP affects HIV infection rates and behavior in this population. Because the HIV epidemic in Lima is concentrated among MSM, as it is in most of Latin America and in several developed countries, the results might also be relevant for the evaluation of PrEP in other places.
What Did the Researchers Do and Find?
For their scenarios, the researchers looked at “high coverage” and “low coverage” scenarios, in which 20% and 5% of uninfected individuals use PrEP, respectively. They also divided the MSM community into those at lower risk of becoming infected and those at higher risk. The latter group consisted of transwomen at higher risk (transsexuals and transvestites with many sexual partners) and male sex workers. In a “uniform coverage” scenario, PrEP is equally distributed among all MSM. “Prioritized scenarios” cover transwomen at higher risk and sex workers preferentially. Two additional important factors for the estimated benefits are treatment adherence (i.e., whether people take the pills they have been prescribed faithfully over long periods of time even though they are not sick) and changes in risk behavior (i.e., whether the perceived protection provided by PrEP leads to more unprotected sex).
The cost estimates for PrEP included the costs of the drug itself and HIV tests prior to PrEP prescription and at three-month intervals thereafter, as well as outreach and counseling services and condom and lubricant promotion and provision.
To judge whether under the various scenarios PrEP is cost-effective, the researchers applied two commonly used but different cost-effectiveness thresholds. The World Health Organization's WHO-CHOICE initiative considers an intervention cost-effective if its cost is less than three times the gross domestic product (GDP) per capita per DALY averted. For Peru, this means an intervention should cost less than US$16,302 per DALY. The World Bank has more stringent criteria: it considers an intervention cost-effective for a middle-income country like Peru if it costs less than US$500 per DALY averted.
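The threshold arithmetic quoted above can be checked directly; note that the GDP-per-capita figure below is inferred from the US$16,302 value in the text rather than stated there:

    # WHO-CHOICE: cost-effective if cost per DALY averted < 3 x GDP per capita.
    gdp_per_capita = 16302 / 3                 # implied: about US$5,434 for Peru
    who_choice_threshold = 3 * gdp_per_capita  # US$16,302 per DALY averted
    world_bank_threshold = 500                 # US$ per DALY averted, middle-income country
    print(who_choice_threshold, world_bank_threshold)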
The researchers estimate that PrEP is cost-effective in Lima's MSM population for most scenarios by WHO-CHOICE guidelines. Only scenarios that prioritize PrEP to those most likely to become infected (i.e., transwomen at higher risk and sex workers) are cost-effective (and only barely) by the more stringent World Bank criteria. If the savings on antiretroviral drugs to treat people with HIV (those who would have become infected without PrEP) are included in the calculation, most scenarios become cost-effective, even under World Bank criteria.
The most cost-effective scenario, namely, having a modest coverage of 5%, prioritizing PrEP to transwomen at higher risk and sex workers, and assuming fairly high adherence levels among PrEP recipients, is estimated to avert about 8% of new infections among this community over ten years.
What Do These Findings Mean?
These findings suggest that under some circumstances, PrEP could be a cost-effective tool to reduce new HIV infections. However, as the researchers discuss, PrEP is expensive and only partly effective. Moreover, its effectiveness depends on two behavioral factors, adherence to a strict drug regimen and continued practice of safe sex, both of which remain hard to predict. As a consequence, PrEP alone is not a valid strategy to prevent new HIV infections; it needs instead to be considered as one of several available tools. Whether and when PrEP is chosen as part of an integrated prevention strategy will depend on the specific target population, the overall funds available, and how well its cost-effectiveness compares with other prevention measures.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001323.
Information is available from the US National Institute of Allergy and Infectious Diseases on HIV infection and AIDS
NAM/aidsmap provides basic information about HIV/AIDS, summaries of recent research findings on HIV care and treatment, and a section on PrEP
Information is available from Avert, an international AIDS charity, on many aspects of HIV/AIDS, including HIV prevention
AVAC Global Advocacy for HIV Prevention provides up-to-date information on HIV prevention, including PrEP
The US Centers for Disease Control and Prevention also has information on PrEP
The World Health Organization has a page on its WHO-CHOICE criteria for cost-effectiveness
doi:10.1371/journal.pmed.1001323
PMCID: PMC3467261  PMID: 23055836
8.  Inclusion of Ethical Issues in Dementia Guidelines: A Thematic Text Analysis 
PLoS Medicine  2013;10(8):e1001498.
Background
Clinical practice guidelines (CPGs) aim to improve professionalism in health care. However, current CPG development manuals fail to address how to include ethical issues in a systematic and transparent manner. The objective of this study was to assess the representation of ethical issues in general CPGs on dementia care.
Methods and Findings
To identify national CPGs on dementia care, five databases of guidelines were searched and national psychiatric associations were contacted in August 2011 and in June 2013. A framework for the assessment of the identified CPGs' ethical content was developed on the basis of a prior systematic review of ethical issues in dementia care. Thematic text analysis and a 4-point rating score were employed to assess how ethical issues were addressed in the identified CPGs. Twelve national CPGs were included. Thirty-one ethical issues in dementia care were identified by the prior systematic review. The proportion of these 31 ethical issues that were explicitly addressed by each CPG ranged from 22% to 77%, with a median of 49.5%. National guidelines differed substantially with respect to (a) which ethical issues were represented, (b) whether ethical recommendations were included, (c) whether justifications or citations were provided to support recommendations, and (d) to what extent the ethical issues were explained.
Conclusions
Ethical issues were inconsistently addressed in national dementia guidelines, with some guidelines including most and some including few of them. Guidelines should address ethical issues and how to deal with them, both to help the medical profession approach the care of patients with dementia and to inform patients, their relatives, and the general public, all of whom might seek information and advice in national guidelines. Further research is needed to specify in what detail ethical issues and their respective recommendations can and should be addressed in dementia guidelines.
Please see later in the article for the Editors' Summary
Editors’ Summary
Background
In the past, doctors tended to rely on their own experience to choose the best treatment for their patients. Faced with a patient with dementia (a brain disorder that affects short-term memory and the ability to carry out normal daily activities), for example, a doctor would use his/her own experience to help decide whether the patient should remain at home or would be better cared for in a nursing home. Similarly, the doctor might have to decide whether antipsychotic drugs might be necessary to reduce behavioral or psychological symptoms such as restlessness or shouting. However, over the past two decades, numerous evidence-based clinical practice guidelines (CPGs) have been produced by governmental bodies and medical associations that aim to improve standards of clinical competence and professionalism in health care. During the development of each guideline, experts search the medical literature for the current evidence about the diagnosis and treatment of a disease, evaluate the quality of that evidence, and then make recommendations based on the best evidence available.
Why Was This Study Done?
Currently, CPG development manuals do not address how to include ethical issues in CPGs. A health-care professional is ethical if he/she behaves in accordance with the accepted principles of right and wrong that govern the medical profession. More specifically, medical professionalism is based on a set of binding ethical principles: respect for patient autonomy, beneficence, non-maleficence (the “do no harm” principle), and justice. In particular, CPG development manuals do not address disease-specific ethical issues (DSEIs), clinical ethical situations that are relevant to the management of a specific disease. For example, a DSEI that arises in dementia care is the conflict between the ethical principles of non-maleficence and patient autonomy (the freedom to move at will): health-care professionals may have to decide whether to physically restrain a patient with dementia to prevent the patient from harming him- or herself or someone else. Given the lack of guidance on how to address ethical issues in CPG development manuals, in this thematic text analysis, the researchers assess the representation of ethical issues in CPGs on general dementia care. Thematic text analysis uses a framework for the assessment of qualitative data (information that is word-based rather than number-based) that involves pinpointing, examining, and recording patterns (themes) among the available data.
What Did the Researchers Do and Find?
The researchers identified 12 national CPGs on dementia care by searching guideline databases and by contacting national psychiatric associations. They developed a framework for the assessment of the ethical content in these CPGs based on a previous systematic review of ethical issues in dementia care. Of the 31 DSEIs included by the researchers in their analysis, the proportion that were explicitly addressed by each CPG ranged from 22% (Switzerland) to 77% (USA); on average the CPGs explicitly addressed half of the DSEIs. Four DSEIs—adequate consideration of advanced directives in decision making, usage of GPS and other monitoring techniques, covert medication, and dealing with suicidal thinking—were not addressed in at least 11 of the CPGs. The inclusion of recommendations on how to deal with DSEIs ranged from 10% of DSEIs covered in the Swiss CPG to 71% covered in the US CPG. Overall, national guidelines differed substantially with respect to which ethical issues were included, whether ethical recommendations were included, whether justifications or citations were provided to support recommendations, and to what extent the ethical issues were clearly explained.
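To make these percentages concrete, they can be translated back into approximate counts of the 31 disease-specific ethical issues; the counts below are implied by the reported proportions, not reported directly:

    # Approximate counts implied by the reported proportions of 31 DSEIs.
    total_dseis = 31
    print(round(0.22 * total_dseis))  # about 7 issues addressed (Switzerland)
    print(round(0.77 * total_dseis))  # about 24 issues addressed (USA)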
What Do These Findings Mean?
These findings show that national CPGs on dementia care already address clinical ethical issues but that the extent to which the spectrum of DSEIs is considered varies widely within and between CPGs. They also indicate that recommendations on how to deal with DSEIs often lack the evidence that health-care professionals use to justify their clinical decisions. The researchers suggest that this situation can and should be improved, although more research is needed to determine how ethical issues and recommendations should be addressed in dementia guidelines. A more systematic and transparent inclusion of DSEIs in CPGs for dementia (and for other conditions) would further support the concept of medical professionalism as a core element of CPGs, note the researchers, but is also important for patients and their relatives who might turn to national CPGs for information and guidance at a stressful time of life.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001498.
Wikipedia contains a page on clinical practice guidelines (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The US National Guideline Clearinghouse provides information on national guidelines, including CPGs for dementia
The Guidelines International Network promotes the systematic development and application of clinical practice guidelines
The American Medical Association provides information about medical ethics; the British Medical Association provides information on all aspects of ethics and includes an essential tool kit that introduces common ethical problems and practical ways to deal with them
The UK National Health Service Choices website provides information about dementia, including a personal story about dealing with dementia
MedlinePlus provides links to additional resources about dementia and about Alzheimer's disease, a specific type of dementia (in English and Spanish)
The UK Nuffield Council on Bioethics provides the report Dementia: ethical issues and additional information on the public consultation on ethical issues in dementia care
doi:10.1371/journal.pmed.1001498
PMCID: PMC3742442  PMID: 23966839
9.  Guideline adherence and health outcomes in diabetes mellitus type 2 patients: a cross-sectional study 
Background
Diabetes mellitus type 2 (T2DM) is a complex disease that requires a high standard of care. Clinical practice guidelines define norms for diabetes care that ensure regular monitoring of T2DM patients, including annual diagnostic tests. This study aims to quantify guideline adherence in Dutch general practices providing care to T2DM patients and explores the association between guideline adherence and patients’ health outcomes.
Methods
In this cross-sectional study, we studied 363 T2DM patients in 32 general practices in 2011 and 2012. Guideline adherence was measured by comparing structure and process indicators of care with recommendations in the national diabetes care guideline. Health outcomes included biomedical measures and health behaviours. Data were extracted from medical records. The association between guideline adherence and health outcomes was analysed using hierarchical linear and logistic regression models.
Results
Guideline adherence varied between different recommendations. For example, 53% of the practices had a system for collecting patient experience feedback, while 97% had a policy for no-show patients. With regard to process indicators of care, guideline adherence was below 50% for foot, eye, and urine albumin examinations and high (>85%) for blood pressure, HbA1c, and smoking behaviour assessment. Although guideline adherence varied considerably between practices, after adjusting for patient characteristics we found that guideline adherence was not associated with patients’ health outcomes.
Conclusions
Guideline adherence in Dutch general practices offering diabetes care was not optimal. Despite considerable variations between general practices, we found no clear relationship between guideline adherence and health outcomes. More research is needed to better understand the relationship between guideline adherence and health outcomes, specifically for guidelines that are based on limited scientific evidence.
Electronic supplementary material
The online version of this article (doi:10.1186/s12913-014-0669-z) contains supplementary material, which is available to authorized users.
doi:10.1186/s12913-014-0669-z
PMCID: PMC4312465  PMID: 25608447
Integrated care; Diabetes mellitus; Guideline adherence; Measurement of quality of care; Health outcomes
10.  Clinical decision support improves physician guideline adherence for laboratory monitoring of chronic kidney disease: a matched cohort study 
BMC Nephrology  2015;16:163.
Background
Guidelines exist for chronic kidney disease (CKD) but are not well implemented in clinical practice. We evaluated the impact of a guideline-based clinical decision support system (CDSS) on laboratory monitoring and achievement of laboratory targets in stage 3–4 CKD patients.
Methods
We performed a matched cohort study of 12,353 stage 3–4 CKD patients whose physicians opted to receive an automated guideline-based CDSS with CKD-related lab results, and 42,996 matched controls whose physicians did not receive the CDSS. Physicians were from US community-based physician practices utilizing a large, commercial laboratory (LabCorp®).
We compared the percentage of laboratory tests obtained within guideline-recommended intervals and the percentage of results within guideline target ranges between CDSS and non-CDSS patients. Laboratory tests analyzed included estimated glomerular filtration rate, plasma parathyroid hormone, serum calcium, phosphorus, 25-hydroxy vitamin D (25-D), total carbon dioxide, transferrin saturation (TSAT), LDL cholesterol (LDL-C), blood hemoglobin, and urine protein measurements.
Results
Physicians who used the CDSS ordered all CKD-relevant testing more in accord with guidelines than those who did not use the system. Odds ratios favoring CDSS ranged from 1.29 (TSAT) to 1.88 (serum phosphorus) [CI, 1.20 to 2.01], p < 0.001 for all tests. The CDSS impact was greater for primary care physicians versus nephrologists. CDSS physicians met guideline targets for LDL-C and 25-D more often, but hemoglobin targets less often, than non-CDSS physicians. Use of CDSS did not impact guideline target achievement for the remaining tests.
Conclusions
Use of an automated laboratory-based CDSS may improve physician adherence to guidelines with respect to timely monitoring of CKD.
Electronic supplementary material
The online version of this article (doi:10.1186/s12882-015-0159-5) contains supplementary material, which is available to authorized users.
doi:10.1186/s12882-015-0159-5
PMCID: PMC4608162  PMID: 26471846
11.  Adherence to physiotherapy clinical guideline acute ankle injury and determinants of adherence: a cohort study 
Background
Clinical guidelines are considered important instruments to improve quality in health care. In physiotherapy, insight into adherence to guidelines is limited. Knowledge of adherence is important to identify barriers and to enhance implementation. The purpose of this study is to investigate adherence to recommendations of the guideline Acute ankle injury, and to identify patient characteristics that determine adherence to the guideline.
Methods
Twenty-two physiotherapists collected data on 174 patients in a prospective cohort study, in which the course of treatment was systematically registered. Indicators were used to investigate adherence to recommendations. Patient characteristics were used to identify prognostic factors that may determine adherence to the guideline. The association between patient characteristics and adherence to outcome indicators (treatment sessions, patient functioning, accomplished goals) was calculated using univariate logistic regression, as sketched below. To calculate the explained variance of combined patient characteristics, multivariate analysis was performed.
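A minimal sketch of the univariate logistic regression step described above, relating one binary patient characteristic to receiving more than six treatment sessions; the data are synthetic, and the use of statsmodels is an assumption for illustration only:

    import numpy as np
    import statsmodels.api as sm

    # Synthetic example: x = 1 if the patient had a recurrent sprain,
    # y = 1 if more than six treatment sessions were given.
    x = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
    y = np.array([0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0])

    model = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
    odds_ratio = np.exp(model.params[1])           # OR for the characteristic
    ci_low, ci_high = np.exp(model.conf_int()[1])  # 95% CI on the OR scale
    print(odds_ratio, ci_low, ci_high)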
Results
Adherence to individual recommendations varied from 71% to 100%. For 99 patients (57%), the physiotherapists showed adherence to all indicators. Adherence to the preset maximum of six treatment sessions for patients with severe ankle injury was 81% (132 patients).
The odds of receiving more than six sessions were statistically significantly increased for three patient characteristics: female sex (OR 3.89; 95% CI 1.41–10.72), recurrent sprain (OR 6.90; 95% CI 2.34–20.37), and co-morbidity (OR 25.92; 95% CI 6.79–98.93). Together these factors explained 40% of the variance. Including physiotherapist characteristics in the regression model showed that work experience reduced the odds of receiving more than six sessions (OR 0.2; 95% CI 0.06–0.77) and increased the explained variance to 45%.
Conclusion
Adherence to the clinical guideline Acute ankle injury showed that the guideline is applicable in daily practice. Adherence to the guideline, even in a group of physiotherapists familiar with it, showed room for improvement. The necessity to exceed the expected number of treatment sessions may be explained by co-morbidity and recurrent sprains. It is not clear why female patients were treated with more sessions. Experience of the physiotherapist reduced the number of treatment sessions. Quality indicators may be used for audit and feedback as part of the implementation strategy.
doi:10.1186/1471-2474-8-45
PMCID: PMC1885796  PMID: 17519040
12.  Guidelines, Editors, Pharma And The Biological Paradigm Shift 
Mens Sana Monographs  2007;5(1):27-30.
Private investment in biomedical research has increased over the last few decades. At most places it has been welcomed as the next best thing to technology itself. Much of the intellectual talent from academic institutions is getting absorbed in lucrative positions in industry. Applied research finds willing collaborators in venture capital funded industry, so a symbiotic growth is ensured for both.
There are significant costs involved too. As academia interacts with industry, major areas of conflict of interest, especially applicable to biomedical research, have arisen. They relate to disputes over patents and royalties, hostile encounters between academia and industry (as also between public and private enterprise), legal tangles, research misconduct of various types, an antagonistic press, patient-advocate lobbies, and a general atmosphere in which commercial interests get precedence over patient welfare.
Pharma's image stinks because of a number of errors of omission and commission. Recent examples are the suppression of negative findings about Bayer's Trasylol (Aprotinin) and the marketing maneuvers of Eli Lilly's Xigris (rhAPC). Whenever there is a conflict between patient vulnerability and profit motives, pharma often tends to tilt towards the latter. Moreover, there are documents that bring to light how companies frequently cross the line between patient welfare and profit-seeking behaviour.
A voluntary moratorium on pharma spending to pamper drug prescribers is necessary. A code of conduct adopted recently by OPPI in India to limit pharma company expenses on junkets and trinkets is a welcome step.
Clinical practice guidelines (CPGs) are considered important as they guide the diagnostic/therapeutic regimens of a large number of medical professionals and hospitals and provide recommendations on drugs, their dosages, and criteria for selection. Along with clinical trials, they are another area of growing influence by the pharmaceutical industry. For example, a 2002 survey found that about 60% of 192 authors of clinical practice guidelines reported financial connections with the companies whose drugs were under consideration. There is a strong case for basing CPGs not just on effectiveness but on cost-effectiveness. The various ramifications of this need to be spelt out. The work of bodies like the Appraisal of Guidelines Research and Evaluation (AGREE) Collaboration and the Guidelines Advisory Committee (GAC) is also worth a close look.
Even the actions of foundations that work for disease amelioration have come under scrutiny. The process of setting up ‘Best Practices’ guidelines for interactions between the pharmaceutical industry and clinicians has already begun and can have important consequences for patient care. Similarly, Good Publication Practice (GPP) guidelines for pharmaceutical companies have been set up, aimed at improving the behaviour of drug companies in reporting drug trials.
The rapidly increasing trend toward influence and control by industry has become a concern for many. It is of such importance that the Association of American Medical Colleges has issued two relatively new documents - one, in 2001, on how to deal with individual conflicts of interest; and the other, in 2002, on how to deal with institutional conflicts of interest in the conduct of clinical research. Academic Medical Centers (AMCs), as also medical education and research institutions at other places, have to adopt means that minimize their conflicts of interest.
Both medical associations and research journal editors are getting concerned with individual and institutional conflicts of interest in the conduct of clinical research and documents are now available which address these issues. The 2001 ICMJE revision calls for full disclosure of the sponsor's role in research, as well as assurance that the investigators are independent of the sponsor, are fully accountable for the design and conduct of the trial, have independent access to all trial data and control all editorial and publication decisions. However the findings of a 2002 study suggest that academic institutions routinely participate in clinical research that does not adhere to ICMJE standards of accountability, access to data and control of publication.
There is an inevitable slant to produce not necessarily useful but marketable products which ensure the profitability of industry and the outflow of research grants to academia. Industry supports new, not traditional, therapies, irrespective of what is effective. Where a traditional therapy is supported, it is most probably because the company concerned has a product with a big stake there, one which has remained a ‘gold standard’ or which that player thinks still has some ‘juice’ left.
Industry sponsorship is mainly for potential medications, not for trying to determine whether there may be non-pharmacological interventions that are equally good, if not better. In the paradigm shift towards biological psychiatry, the role of industry sponsorship is not overt but is probably more pervasive than many have realised, and perhaps more than right thinking would consider good for the long-term health of the branch.
An issue of major concern is protection of the interests of research subjects. Patients agree to become research subjects not only for personal medical benefit but, as an extension, to benefit the rest of the patient population and also advance medical research.
We all accept that industry profits have to be made, and investment in research and development by the pharma industry is massive. However, we must also accept there is a fundamental difference between marketing strategies for other entities and those for drugs.
The ultimate barometer is patient welfare, and no drug that compromises it can stand the test of time. So, how does it make even commercial sense in the long term to market substandard products? The greatest mistake long-term players in industry can make is to try to adopt the shady techniques of the upstart new entrant. Secrecy about marketing and sales tactics, the process of manufacture, plans for business expansion, and strategies to tackle competition is fine business practice. But it is critical that secrecy as a tactic not extend to the reporting of research findings, especially those contrary to one's product.
Pharma has no option but to make a quality product, compile comprehensive adverse reaction profiles, and market it only if it passes both tests.
Why does pharma adopt questionable tactics? The reasons are essentially two:
First, what with all the constraints, a drug comes to the pharmacy after huge investments: there are crippling overheads and infrastructure costs to be recovered, and massive profit margins to be maintained. If these were to depend only on genuine drug discoveries, that would be taking too great a risk. Second, industry players have to strike the right balance between profit making and credibility. In profit making, the marketing champions play their role; in credibility ratings, researchers and paid spokespersons play theirs. All is hunky dory as long as marketing is based on credibility. When there is nothing available to establish credibility, something is projected as credible and marketed anyway, in the calculated hope that profits will accrue, since profit making must continue endlessly. That is what makes pharma adopt even questionable means to make profits.
Essentially, there are four types of drugs. First, drugs that work and have minimal side-effects; second, drugs which work but have serious side-effects; third, drugs that do not work and have minimal side-effects; and fourth, drugs which work minimally but have serious side-effects. It is the second and fourth types that create major hassles for industry. Often, industry may try to project the fourth type as the second to escape censure.
The major cat and mouse game being played by conscientious researchers is in exposing the third and fourth for what they are and not allowing industry to palm them off as the first and second type respectively. The other major game is in preventing the second type from being projected as the first. The third type are essentially harmless, so they attract censure all right and some merriment at the antics to market them. But they escape anything more than a light rap on the knuckles, except when they are projected as the first type.
What is necessary for industry captains and long-term players is to realise:
1. Their major propelling force can only be producing the first type.
2. They accept the second type only till they can lay their hands on the first.
3. The third type can occasionally be played around with to shore up profits, but never by projecting them as the first type.
4. The fourth type are the laggards, a real threat to credibility, and therefore do not deserve any market hype or promotion.
In finding out why most pharma indulges in questionable tactics, we are led to some interesting solutions that may prevent such tactics with the least amount of hassle for all concerned, even as both profits and credibility are kept intact.
doi:10.4103/0973-1229.32176
PMCID: PMC3192391  PMID: 22058616
Academia; Pharmaceutical Industry; Clinical Practice Guidelines; Best Practice Guidelines; Academic Medical Centers; Medical Associations; Research Journals; Clinical Research; Public Welfare; Pharma Image; Corporate Welfare; Biological Psychiatry; Law Suits Against Industry
13.  Adherence to guidelines and protocols in the prehospital and emergency care setting: a systematic review 
A gap between guidelines or protocols and clinical practice often exists, which may result in patients not receiving appropriate care. Therefore, the objectives of this systematic review were (1) to give an overview of professionals’ adherence to (inter)national guidelines and protocols in the emergency medical dispatch, prehospital and emergency department (ED) settings, and (2) to explore which factors influencing adherence were described in studies reporting on adherence. PubMed (including MEDLINE), CINAHL, EMBASE and the Cochrane database for systematic reviews were systematically searched. Reference lists of included studies were also searched for eligible studies. Identified articles were screened on title, abstract and year of publication (≥1990) and were included when reporting on adherence in the eligible settings. Following the initial selection, articles were screened full text and included if they concerned adherence to an (inter)national guideline or protocol, and if the time interval between data collection and publication date was <10 years. Finally, articles were assessed on reporting quality. Each step was undertaken by two independent researchers. Thirty-five articles met the criteria, none of which addressed the emergency medical dispatch setting or protocols. Median adherence ranged from 7.8% to 95% in the prehospital setting, and from 0% to 98% in the ED setting. In the prehospital setting, recommendations on monitoring came with higher median adherence percentages than treatment recommendations. For both settings, cardiology treatment recommendations came with relatively low median adherence percentages. Eight studies identified patient and organisational factors influencing adherence. The results showed that professionals’ adherence to (inter)national prehospital and emergency department guidelines varies widely, while adherence in the emergency medical dispatch setting is not reported at all. As insight into factors influencing adherence in the emergency care settings is minimal, future research should identify such factors to allow the development of strategies to improve adherence and thus improve quality of care.
doi:10.1186/1757-7241-21-9
PMCID: PMC3599067  PMID: 23422062
Emergency medical technicians [MeSH]; Emergency medical services [MeSH]; Emergency medicine [MeSH]; Emergency nursing [MeSH]; Guideline adherence [MeSH]
14.  Children with Severe Malnutrition: Can Those at Highest Risk of Death Be Identified with the WHO Protocol? 
PLoS Medicine  2006;3(12):e500.
Background
With strict adherence to internationally recommended treatment guidelines, the case fatality for severe malnutrition ought to be less than 5%. In African hospitals, fatality rates of 20% are common and are often attributed to poor training and faulty case management. Improving outcomes will depend upon identifying those at greatest risk and targeting limited health resources toward them. We retrospectively examined the major risk factors associated with early (<48 h) and late in-hospital death in children with severe malnutrition, with the aim of identifying admission features that could distinguish a high-risk group in relation to the World Health Organization (WHO) guidelines.
Methods and Findings
Of 920 children in the study, 176 (19%) died, with 59 (33%) deaths occurring within 48 h of admission. Bacteraemia complicated 27% of all deaths: 52% died before 48 h despite 85% in vitro antibiotic susceptibility of cultured organisms. The sensitivity, specificity, and likelihood ratio of the WHO-recommended “danger signs” (lethargy, hypothermia, or hypoglycaemia) to predict early mortality were 52%, 84%, and 3.4 (95% confidence interval [CI] = 2.2 to 5.1), respectively. In addition, four bedside features were associated with early case fatality: bradycardia, capillary refill time greater than 2 s, weak pulse volume, and impaired consciousness level; the presence of two or more features was associated with an odds ratio of 9.6 (95% CI = 4.8 to 19) for early fatality (p < 0.0001). Conversely, the group of children without any of these seven features, or signs of dehydration, severe acidosis, or electrolyte derangements, had a low fatality (7%).
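The likelihood ratio quoted above follows from the reported sensitivity and specificity; a quick check of the standard formula (the small discrepancy with the published 3.4 reflects rounding of the inputs):

    # Positive likelihood ratio = sensitivity / (1 - specificity).
    sensitivity = 0.52
    specificity = 0.84
    print(sensitivity / (1 - specificity))  # about 3.25 from the rounded inputs; reported as 3.4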
Conclusions
Formal assessment of these features as emergency signs to improve triage and to rationalize manpower resources toward the high-risk groups is required. In addition, basic clinical research is necessary to identify and test appropriate supportive treatments.
A retrospective examination of major risk factors associated with in-hospital deaths in children with severe malnutrition has identified admission features that could help distinguish those at highest risk.
Editors' Summary
Background.
Severe malnutrition is thought to be responsible, at least in part, for a large proportion of the many millions of deaths every year among children below the age of five years. The World Health Organization (WHO) has developed guidelines for management of the severely malnourished child in the hospital. These guidelines outline ten initial steps for routine care, followed by treatment of associated conditions and rehabilitation. However, death rates among children admitted to hospital with severe malnutrition are worryingly high, commonly 20% or sometimes even higher. Many hospitals have reported that following introduction of the WHO guidelines, the death rates have been cut, but not to a level that the WHO defines as acceptable (5% or lower).
Why Was This Study Done?
In the region where this study was done, an area on the coast of Kenya, East Africa, malnutrition is very common. The local hospital, Kilifi District Hospital, currently reports a death rate of approximately 19% among children admitted with severe malnutrition, even with implementation of the WHO guidelines. A group of researchers based at the hospital wanted to see if they could identify those children who were most likely to die. Their aim was to see which aspects of the children's medical condition put them at highest risk. This information would be useful in ensuring that high-risk children received the most appropriate care.
What Did the Researchers Do and Find?
The researchers studied all severely malnourished children over three months of age who were admitted to the Kilifi District Hospital between September 2000 and June 2002. The children were treated according to the WHO guidelines, and the research group collected data on the condition of the children after treatment (their “outcomes”), as well as on relevant clinical signs and symptoms. The study involved 920 children, of whom 176 died in hospital (a death rate of 19%). The researchers then examined the data to see which characteristics on admission were associated with early death (less than 48 h) and later deaths. They found that four clinical features, which could be easily ascertained at the bedside on admission, were associated with a large proportion of the early deaths. These four signs were slow heart rate, weak pulse volume, depressed consciousness level, and a delayed capillary refilling time (as tested by pressing a fingernail bed to blanch the finger, releasing it, and observing the time taken to reperfuse the capillaries, i.e., recolor the nailbed). The researchers proposed that these findings, together with a number of other features that were associated with the later deaths, could be used to identify three groups of patients differing in their need for emergency care: a high-risk group (with any of the four signs listed above, or hypoglycemia, among whom mortality was 34%); a moderate-risk group (among whom mortality was 23%); and a low-risk group (mortality 7%).
What Do These Findings Mean?
First, the death rate amongst these children was very high even though WHO guidelines were used to guide management. The signs reported here as indicators of poor outcome may prove useful in future in identifying high-risk individuals to ensure they receive the right treatment. However, the indicators proposed here would need further evaluation before current guidelines for treatment of the severely malnourished child could be changed.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030500.
• Information on severe malnutrition is available from the World Health Organization
• Management guidelines from the WHO can also be downloaded in many languages
• UNICEF, the United Nations Children's Fund, provides relevant resources and statistics as well as information about its programs addressing malnutrition worldwide
• Information from Médecins Sans Frontières (MSF) on acute malnutrition worldwide and MSF's response to current emergencies
doi:10.1371/journal.pmed.0030500
PMCID: PMC1716191  PMID: 17194194
15.  Generalizability of guidelines and physicians' adherence. Case study on the Sixth Joint National Committee's guidelines on hypertension 
BMC Public Health  2003;3:24.
Background
Clinical practice guidelines (CPG) are thought to be an effective tool in improving efficiency and outcomes of clinical practice. Physicians' adherence to guidelines is reported to be poor. We evaluated the relationship between generalizability of guidelines on hypertension and physicians' adherence to guidelines' recommendations for pharmacological treatment.
Methods
We used the Sixth Joint National Committee's (JNC VI) guidelines on hypertension to evaluate our hypothesis. We evaluated the evidence from controlled clinical trials on which the JNC VI bases its recommendation, and compared the population enrolled in those trials with the American hypertensive population. Data on this population came from the National Health and Nutritional Examination Survey III.
Results
Twenty-three percent of the NHANES population had a diagnosis of hypertension, and 11% had hypertension requiring drug treatment according to the JNC VI. Only half of the population requiring treatment would have been enrolled in at least two trials. The rate of adherence to the CPG was 69%. We found a weak association between generalizability and physicians' adherence to guidelines. Baseline risk was the major determinant of the decision to treat.
Conclusion
JNC VI guidelines may not be generalizable to their target population. We found a relatively poor rate of adherence to these guidelines. Failure to take the clinical characteristics of patients fully into account may be partly responsible for this lack of adherence.
doi:10.1186/1471-2458-3-24
PMCID: PMC183849  PMID: 12873353
16.  Implications of early and guideline adherent physical therapy for low back pain on utilization and costs 
Background
Initial management decisions following a new episode of low back pain (LBP) are thought to have profound implications for health care utilization and costs. The purpose of this study was to evaluate the impact of early and guideline adherent physical therapy for low back pain on utilization and costs within the Military Health System (MHS).
Methods
Patients presenting to a primary care setting with a new complaint of LBP from January 1, 2007 to December 31, 2009 were identified from the MHS Management Analysis and Reporting Tool. Descriptive statistics, utilization, and costs were examined on the basis of timing of referral to physical therapy and adherence to practice guidelines over a 2-year period. Utilization outcomes (advanced imaging, lumbar injections or surgery, and opioid use) were compared using adjusted odds ratios with 99% confidence intervals. Total LBP-related health care costs over the 2-year follow-up were compared using linear regression models.
Results
A total of 753,450 eligible patients aged 18–60 years with a primary care visit for LBP were considered. Physical therapy was utilized by 16.3% (n = 122,723) of patients; among those receiving early physical therapy, 24.0% (n = 17,175) received care adherent to recommendations for active treatment. Early referral to guideline adherent physical therapy was associated with significantly lower utilization for all outcomes and 60% lower total LBP-related costs.
Conclusions
The potential for cost savings in the MHS from early guideline adherent physical therapy may be substantial. These results also extend the findings from similar studies in civilian settings by demonstrating an association between early guideline adherent care and utilization and costs in a single-payer health system. Future research is necessary to examine which patients with LBP benefit from early physical therapy and to determine strategies for providing early guideline adherent care.
Electronic supplementary material
The online version of this article (doi:10.1186/s12913-015-0830-3) contains supplementary material, which is available to authorized users.
doi:10.1186/s12913-015-0830-3
PMCID: PMC4393575  PMID: 25880898
Guideline adherence; Low back pain; Physical therapy; Timing; Utilization and costs
17.  Hypertension guidelines and their effects on the health system 
Introduction
Hypertension guidelines have existed for many years and are used primarily in the USA, Canada and Great Britain; they are now becoming an issue in Germany. Strong efforts are presently underway to produce a German version comparable to the guidelines developed in those countries. Guideline development is part of the guideline implementation system in Germany, which covers the mode of operation of the AWMF (the Association of the Scientific Medical Societies in Germany) and its clearinghouse for guidelines (CLA), as well as the cooperation with the German Agency for Quality in Medicine (ÄZQ).
This HTA report investigates the actual use of hypertension guidelines in Germany, current development trends, and further possibilities for their use with regard to medical applicability. Economic issues and ways of optimising guideline use are also discussed.
Question
In particular, the following questions are to be answered:
To what extent are the hypertension guidelines used? Can effects of the guidelines on medical procedures be established? Are statements available about costs and cost-effectiveness? Are there recommendations for further use?
Methodology
To answer these questions, a comprehensive literature search was performed; no empirical investigation was carried out. From this search, 206 articles were checked in detail, although not all of them were available in full text.
Only publications dealing directly with high blood pressure guidelines, or articles with a direct reference to the topic, were considered in the HTA report.
Publications concerning screening or methods of prevention, medical studies of the hypertension syndrome without a direct reference to guidelines, and publications concerned with putting guidelines into action were excluded.
Results
An analysis of the selected literature on hypertension guidelines made it evident that the extent of their use cannot be determined from the existing literature at present. By analogy with international studies, one can assume that the guidelines are well known and have a high level of acceptance in the medical community. Unfortunately, actual usage is not satisfactorily documented in the scientific literature.
The effects of the guidelines on medical procedures appear to be highly individual, and analyses of compliance show at least an observable effect within the last few years. No publications could be found on the cost-effectiveness of the guidelines.
Actual compliance with guidelines appears to be related to the duration of professional practice: the shorter the time in practice, the stronger the adherence to guidelines.
Discussion
At present, there is little evidence for the German health service regarding the actual effect of the hypertension guidelines. The reason is not that the effect is necessarily weak, but rather the methodological difficulty of evaluating the sustained effects of applying the guidelines; for this reason, the literature on this topic is sparse.
For Germany, it can be inferred by analogy from foreign studies that guidelines will make an increasingly essential contribution to the design of the health system. Considering that it is primarily younger physicians who have accepted guidelines, the further development, updating and implementation of guidelines are essential, particularly with regard to quality assurance. Guidelines can serve as a benchmark expressing the quality standard of a health system. The existence or absence of guidelines is also considered a quality indicator of a health system by the Organisation for Economic Co-operation and Development (OECD).
Conclusion
Guidelines, especially the hypertension guidelines, should be evaluated, and their further development and implementation should be emphasised. Methodologically oriented work on this approach is fairly recent.
It is undeniable that guidelines make an essential and important contribution to dealing successfully with significant morbidity problems in a health system.
The fact that primarily younger doctors more frequently adopt, employ and adhere to guidelines suggests that their sustainability in practical use will increase. Furthermore, intensified use of guidelines can be regarded as part of the mainstream of public health system development, also from an international perspective.
No publication contradicts the need for further development, updating and distribution of guidelines for use in practice, and no article questions the importance of guidelines.
PMCID: PMC3011314  PMID: 21289932
18.  Clinical testing patterns and cost implications of variation in the evaluation of chronic kidney disease among U.S. physicians 
Background
Clinical practice guidelines were established to improve the diagnosis and management of chronic kidney disease (CKD), but the extent, determinants, and cost implications of guideline adherence and variation in adherence have not been evaluated.
Study Design
Cross-sectional survey
Settings & Participants
Nationally representative sample of 301 U.S. primary care physicians and nephrologists
Predictor
Provider and patient characteristics
Outcomes & Measurements
Guideline adherence was assessed as present if physicians recommended at least 5 of 6 clinical tests prescribed by the National Kidney Foundation–Kidney Disease Outcomes Quality Initiative (KDOQI) guidelines for a hypothetical patient with newly identified CKD. We also assessed patterns and cost of additional non-recommended tests for the initial clinical evaluation of CKD.
Results
Most of the 86 family medicine, 89 internal medicine, and 126 nephrology physicians had practiced for more than 10 years (54%), were in non-academic practices (76%), spent more than 80% of their time performing clinical duties (78%), and correctly estimated kidney function (73%). Overall, 35% of participants were guideline adherent. Compared to nephrologists, internal medicine and family physicians had lower odds of adherence to all recommended testing (odds ratio [OR] [95% CI]: 0.6 [0.3–1.1] and 0.3 [0.1–0.6], respectively). Participants practicing for more than 10 years had lower odds of ordering all recommended testing compared to participants practicing for less than 10 years (OR [95% CI]: 0.5 [0.3–0.9]). Eighty-five percent of participants recommended additional tests, which resulted in a 23% increase in the total per-patient cost of the clinical evaluation.
Limitations
Recommendations for a hypothetical case scenario may differ from that of actual patients.
Conclusions
Adherence to the recommended clinical testing for the diagnosis and management of CKD was poor and additional testing was associated with substantially increased cost of the clinical evaluation. Improved clarity, dissemination, and uptake of existing guidelines are needed to improve quality and decrease costs of care for patients with CKD.
doi:10.1053/j.ajkd.2008.12.044
PMCID: PMC2714476  PMID: 19371991
chronic kidney disease; primary care providers; guidelines; KDOQI; cost
19.  Swedish general practitioners’ attitudes towards treatment guidelines – a qualitative study 
BMC Family Practice  2014;15:199.
Background
Drug therapy in primary care is a challenge for general practitioners (GPs) and the prescribing decision is influenced by several factors. GPs obtain drug information in different ways, from evidence-based sources, their own or others’ experiences, or interactions with opinion makers, patients or colleagues. The need for objective drug information sources instead of drug industry-provided information has led to the establishment of local drug and therapeutic committees. They annually produce and implement local treatment guidelines in order to promote rational drug use. This study describes Swedish GPs’ attitudes towards locally developed evidence-based treatment guidelines.
Methods
Three focus group interviews were performed with a total of 17 GPs working at both public and private primary health care centres in Skåne in southern Sweden. Transcripts were analysed by conventional content analysis. Codes, categories and themes were derived from data during the analysis.
Results
We found two main themes: GP-related influencing factors and External influencing factors. The first theme comprised four main categories: Expectations and perceptions about existing local guidelines, Knowledge about evidence-based prescribing, Trust in development of guidelines, and Beliefs about adherence to guidelines. The second theme included the categories Patient-related aspects, Drug industry-related aspects, and Health economic aspects. The time-saving aspect, trust in evidence-based market-neutral guidelines, and patient safety were described as key motivating factors for adherence. Patient safety was reported to be more important than adherence to guidelines or maintaining a good patient-doctor relationship. Cost containment was perceived both as a motivating factor and as a barrier to adherence to guidelines. GPs expressed concerns about difficulties with adherence to guidelines when managing patients taking drugs initiated by other prescribers. GPs also reported a lack of time to keep themselves informed and difficulties managing direct-to-consumer drug industry information.
Conclusions
Patient safety, trust in development of evidence-based recommendations, the patient-doctor encounter and cost containment were found to be key factors in GPs’ prescribing. Future studies should explore the need for transparency in forming and implementing guidelines, which might potentially increase adherence to evidence-based treatment guidelines in primary care.
doi:10.1186/s12875-014-0199-0
PMCID: PMC4276045  PMID: 25511989
Qualitative research; Focus groups; Guidelines; Attitudes; Primary care; GPs; Adherence; Drug therapy
20.  Internet-Based Device-Assisted Remote Monitoring of Cardiovascular Implantable Electronic Devices 
Executive Summary
Objective
The objective of this Medical Advisory Secretariat (MAS) report was to conduct a systematic review of the available published evidence on the safety, effectiveness, and cost-effectiveness of Internet-based device-assisted remote monitoring systems (RMSs) for therapeutic cardiac implantable electronic devices (CIEDs) such as pacemakers (PMs), implantable cardioverter-defibrillators (ICDs), and cardiac resynchronization therapy (CRT) devices. The MAS evidence-based review was performed to support public financing decisions.
Clinical Need: Condition and Target Population
Sudden cardiac death (SCD) is a major cause of fatalities in developed countries. In the United States almost half a million people die of SCD annually, resulting in more deaths than stroke, lung cancer, breast cancer, and AIDS combined. In Canada each year more than 40,000 people die from a cardiovascular related cause; approximately half of these deaths are attributable to SCD.
Most cases of SCD occur in the general population, typically in those without a known history of heart disease. Most SCDs are caused by cardiac arrhythmia, an abnormal heart rhythm caused by malfunctions of the heart’s electrical system. Up to half of patients with significant heart failure (HF) also have advanced conduction abnormalities.
Cardiac arrhythmias are managed by a variety of drugs, ablative procedures, and therapeutic CIEDs. The range of CIEDs includes pacemakers (PMs), implantable cardioverter-defibrillators (ICDs), and cardiac resynchronization therapy (CRT) devices. Bradycardia is the main indication for PMs and individuals at high risk for SCD are often treated by ICDs.
Heart failure (HF) is also a significant health problem and is the most frequent cause of hospitalization in those over 65 years of age. Patients with moderate to severe HF may also have cardiac arrhythmias, although the cause may be related more to heart pump or haemodynamic failure. The presence of HF, however, increases the risk of SCD five-fold, regardless of aetiology. Patients with HF who remain highly symptomatic despite optimal drug therapy are sometimes also treated with CRT devices.
With an increasing prevalence of age-related conditions such as chronic HF and the expanding indications for ICD therapy, the rate of ICD placement has been increasing dramatically. The appropriate indications for ICD placement, as well as the rate of ICD placement, are increasingly an issue. In the United States, after the introduction of expanded coverage of ICDs, a national ICD registry was created in 2005 to track these devices. A recent survey based on this national ICD registry reported that 22.5% (25,145) of patients had received a non-evidence-based ICD and that these patients experienced significantly higher in-hospital mortality and post-procedural complications.
In addition to the increased ICD device placement and the upfront device costs, there is the need for lifelong follow-up or surveillance, placing a significant burden on patients and device clinics. In 2007, over 1.6 million CIEDs were implanted in Europe and the United States, which translates to over 5.5 million patient encounters per year if the recommended follow-up practices are considered. A safe and effective RMS could potentially improve the efficiency of long-term follow-up of patients and their CIEDs.
Technology
In addition to being therapeutic devices, CIEDs have extensive diagnostic abilities. All CIEDs can be interrogated and reprogrammed during an in-clinic visit using an inductive programming wand. Remote monitoring would allow patients to transmit information recorded in their devices from the comfort of their own homes. Currently most ICD devices also have the potential to be remotely monitored. Remote monitoring (RM) can be used to check system integrity, to alert on arrhythmic episodes, and potentially to replace in-clinic follow-ups and manage disease remotely. These devices cannot currently be reprogrammed remotely, although this feature is being tested in pilot settings.
Every RMS is specifically designed by a manufacturer for its cardiac implant devices. For Internet-based device-assisted RMSs, this customization includes details such as the web application, multiplatform sensors, custom algorithms, programming information, and the types and methods of alerting patients and/or physicians. The addition of peripherals for monitoring weight and pressure, or for communicating with patients through the onsite communicators, also varies by manufacturer. Internet-based device-assisted RMSs for CIEDs are intended to function as a surveillance system rather than an emergency system.
Health care providers therefore need to learn each application, and as more than one application may be used at one site, multiple applications may need to be reviewed for alarms. All RMSs deliver system integrity alerting; however, some systems seem to be better geared to fast arrhythmic alerting, whereas other systems appear to be more intended for remote follow-up or supplemental remote disease management. The different RMSs may therefore have different impacts on workflow organization because of their varying frequency of interrogation and methods of alerts. The integration of these proprietary RM web-based registry systems with hospital-based electronic health record systems has so far not been commonly implemented.
Currently there are 2 general types of RMSs: those that transmit device diagnostic information automatically and without patient assistance to secure Internet-based registry systems, and those that require patient assistance to transmit information. Both systems employ the use of preprogrammed alerts that are either transmitted automatically or at regular scheduled intervals to patients and/or physicians.
The current web applications, programming, and registry systems differ greatly between the manufacturers of transmitting cardiac devices. In Canada there are currently 4 manufacturers—Medtronic Inc., Biotronik, Boston Scientific Corp., and St Jude Medical Inc.—which have regulatory approval for remote transmitting CIEDs. Remote monitoring systems are proprietary to the manufacturer of the implant device. An RMS for one device will not work with another device, and the RMS may not work with all versions of the manufacturer’s devices.
All Internet-based device-assisted RMSs have common components. The implanted device is equipped with a micro-antenna that communicates with a small external device (at bedside or wearable) commonly known as the transmitter. Transmitters are able to interrogate programmed parameters and diagnostic data stored in the patients’ implant device. The information transfer to the communicator can occur at preset time intervals with the participation of the patient (waving a wand over the device) or it can be sent automatically (wirelessly) without their participation. The encrypted data are then uploaded to an Internet-based database on a secure central server. The data processing facilities at the central database, depending on the clinical urgency, can trigger an alert for the physician(s) that can be sent via email, fax, text message, or phone. The details are also posted on the secure website for viewing by the physician (or their delegate) at their convenience.
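To make the alert pathway concrete, the following is a minimal sketch in Python of the classify-and-route step described above. It is illustrative only: the class, field, and threshold names are hypothetical and not drawn from any manufacturer's system; real RMSs use proprietary, physician-programmable alert criteria.

```python
from dataclasses import dataclass
from enum import Enum

class Urgency(Enum):
    ROUTINE = "routine"  # posted to the secure website for later review
    ALERT = "alert"      # pushed to the physician by email/fax/SMS/phone

@dataclass
class Transmission:
    """One upload of device diagnostics to the central server (hypothetical fields)."""
    patient_id: str
    battery_ok: bool
    arrhythmia_detected: bool
    lead_impedance_ohms: float

def classify(tx: Transmission) -> Urgency:
    """Grade a transmission by clinical urgency (illustrative thresholds)."""
    if tx.arrhythmia_detected or not tx.battery_ok:
        return Urgency.ALERT
    if not 200 <= tx.lead_impedance_ohms <= 2000:  # out-of-range lead impedance
        return Urgency.ALERT
    return Urgency.ROUTINE

def route(tx: Transmission) -> str:
    """Alerts notify the physician immediately; routine data are posted for review."""
    if classify(tx) is Urgency.ALERT:
        return f"notify physician for patient {tx.patient_id} (email/fax/SMS/phone)"
    return f"post to secure web registry for patient {tx.patient_id}"

if __name__ == "__main__":
    tx = Transmission("pt-001", battery_ok=True,
                      arrhythmia_detected=False,
                      lead_impedance_ohms=560.0)
    print(route(tx))  # -> post to secure web registry for patient pt-001
```

The surveillance (rather than emergency) character of these systems corresponds to the routine branch: most transmissions are simply posted for review at the clinician's convenience.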
Research Questions
The research directions and specific research questions for this evidence review were as follows:
To identify the Internet-based device-assisted RMSs available for follow-up of patients with therapeutic CIEDs such as PMs, ICDs, and CRT devices.
To identify the potential risks, operational issues, or organizational issues related to Internet-based device-assisted RM for CIEDs.
To evaluate the safety, acceptability, and effectiveness of Internet-based device-assisted RMSs for CIEDs such as PMs, ICDs, and CRT devices.
To evaluate the safety, effectiveness, and cost-effectiveness of Internet-based device-assisted RMSs for CIEDs compared to usual outpatient in-office monitoring strategies.
To evaluate the resource implications or budget impact of RMSs for CIEDs in Ontario, Canada.
Research Methods
Literature Search
The review included a systematic review of published scientific literature and consultations with experts and manufacturers of all 4 approved RMSs for CIEDs in Canada. Information on CIED cardiac implant clinics was also obtained from Provincial Programs, a division within the Ministry of Health and Long-Term Care with a mandate for cardiac implant specialty care. Various administrative databases and registries were used to outline the current clinical follow-up burden of CIEDs in Ontario. The provincial population-based ICD database developed and maintained by the Institute for Clinical Evaluative Sciences (ICES) was used to review the current follow-up practices with Ontario patients implanted with ICD devices.
Search Strategy
A literature search was performed on September 21, 2010 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Network of Agencies for Health Technology Assessment (INAHTA) for studies published from 1950 to September 2010. Search alerts were generated and reviewed for additional relevant literature until December 31, 2010. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search.
Inclusion Criteria
published between 1950 and September 2010;
English language full-reports and human studies;
original reports including clinical evaluations of Internet-based device-assisted RMSs for CIEDs in clinical settings;
reports including standardized measurements on outcome events such as technical success, safety, effectiveness, cost, measures of health care utilization, morbidity, mortality, quality of life or patient satisfaction;
randomized controlled trials (RCTs), systematic reviews and meta-analyses, cohort and controlled clinical studies.
Exclusion Criteria
non-systematic reviews, letters, comments and editorials;
reports not involving standardized outcome events;
clinical reports not involving Internet-based device assisted RM systems for CIEDs in clinical settings;
reports involving studies testing or validating algorithms without RM;
studies with small samples (<10 subjects).
Outcomes of Interest
The outcomes of interest included: technical outcomes, emergency department visits, complications, major adverse events, symptoms, hospital admissions, clinic visits (scheduled and/or unscheduled), survival, morbidity (disease progression, stroke, etc.), patient satisfaction, and quality of life.
Summary of Findings
The MAS evidence review was performed to review available evidence on Internet-based device-assisted RMSs for CIEDs published until September 2010. The search identified 6 systematic reviews, 7 randomized controlled trials, and 19 reports for 16 cohort studies—3 of these being registry-based and 4 being multi-centered. The evidence is summarized in the 3 sections that follow.
1. Effectiveness of Remote Monitoring Systems of CIEDs for Cardiac Arrhythmia and Device Functioning
In total, 15 reports on 13 cohort studies involving investigations with 4 different RMSs for CIEDs in cardiology implant clinic groups were identified in the review. The 4 RMSs were: Care Link Network® (Medtronic Inc., Minneapolis, MN, USA); Home Monitoring® (Biotronik, Berlin, Germany); House Call 11® (St Jude Medical Inc., St Paul, MN, USA); and a manufacturer-independent RMS. Eight of these reports were with the Home Monitoring® RMS (12,949 patients), 3 were with the Care Link® RMS (167 patients), 1 was with the House Call 11® RMS (124 patients), and 1 was with a manufacturer-independent RMS (44 patients). All of the studies, except for 2 in the United States (1 with Home Monitoring® and 1 with House Call 11®), were performed in European countries.
The RMSs in the studies were evaluated with different cardiac implant device populations: ICDs only (6 studies), ICD and CRT devices (3 studies), PM and ICD and CRT devices (4 studies), and PMs only (2 studies). The patient populations were predominately male (range, 52%–87%) in all studies, with mean ages ranging from 58 to 76 years. One study population was unique in that RMSs were evaluated for ICDs implanted solely for primary prevention in young patients (mean age, 44 years) with Brugada syndrome, an inherited condition that carries an increased risk of sudden cardiac death in young adults.
Most of the cohort studies reported on the feasibility of RMSs in clinical settings with limited follow-up. In the short follow-up periods of the studies, the majority of the events were related to detection of medical events rather than system configuration or device abnormalities. The results of the studies are summarized below:
The interrogation of devices on the web platform, both for continuous and scheduled transmissions, was significantly quicker with remote follow-up, both for nurses and physicians.
In a case-control study focusing on a Brugada population–based registry with patients followed-up remotely, there were significantly fewer outpatient visits and greater detection of inappropriate shocks. One death occurred in the control group not followed remotely and post-mortem analysis indicated early signs of lead failure prior to the event.
Two studies examined the role of RMSs in following ICD leads under regulatory advisory in a European clinical setting and noted:
– Fewer inappropriate shocks were administered in the RM group.
– Urgent in-office interrogations and surgical revisions were performed within 12 days of remote alerts.
– No signs of lead fracture were detected at in-office follow-up; all were detected at remote follow-up.
Only 1 study reported evaluating quality of life in patients followed up remotely at 3 and 6 months; no values were reported.
Patient satisfaction was evaluated in 5 cohort studies, all in short term follow-up: 1 for the Home Monitoring® RMS, 3 for the Care Link® RMS, and 1 for the House Call 11® RMS.
– Patients reported receiving a sense of security from the transmitter, a good relationship with nurses and physicians, positive implications for their health, and satisfaction with RM and organization of services.
– Although patients reported that the system was easy to implement and required less than 10 minutes to transmit information, a variable proportion of patients (range, 9%–39%) reported that they needed the assistance of a caregiver for their transmission.
– The majority of patients would recommend RM to other ICD patients.
– Patients with hearing or other physical or mental conditions hindering the use of the system were excluded from studies, but the frequency of this was not reported.
Physician satisfaction was evaluated in 3 studies, all with the Care Link® RMS:
– Physicians reported an ease of use and high satisfaction with a generally short-term use of the RMS.
– Physicians reported being able to address the problems in unscheduled patient transmissions or physician initiated transmissions remotely, and were able to handle the majority of the troubleshooting calls remotely.
– Both nurses and physicians reported a high level of satisfaction with the web registry system.
2. Effectiveness of Remote Monitoring Systems in Heart Failure Patients for Cardiac Arrhythmia and Heart Failure Episodes
Remote follow-up of HF patients implanted with ICD or CRT devices, generally managed in specialized HF clinics, was evaluated in 3 cohort studies: 1 involved the Home Monitoring® RMS and 2 involved the Care Link® RMS. In these RMSs, in addition to the standard diagnostic features, the cardiac devices continuously assess other variables such as patient activity, mean heart rate, and heart rate variability. Intra-thoracic impedance, a proxy measure for lung fluid overload, was also measured in the Care Link® studies. The overall diagnostic performance of these measures cannot be evaluated, as the information was not reported for patients who did not experience intra-thoracic impedance threshold crossings or did not undergo interventions. The trial results involved descriptive information on transmissions and alerts in patients experiencing high morbidity and hospitalization in the short study periods.
3. Comparative Effectiveness of Remote Monitoring Systems for CIEDs
Seven RCTs were identified evaluating RMSs for CIEDs: 2 were for PMs (1276 patients) and 5 were for ICD/CRT devices (3733 patients). Studies performed in the clinical setting in the United States involved both the Care Link® RMS and the Home Monitoring® RMS, whereas all studies performed in European countries involved only the Home Monitoring® RMS.
3A. Randomized Controlled Trials of Remote Monitoring Systems for Pacemakers
Two trials, both multicenter RCTs, were conducted in different countries with different RMSs and study objectives. The PREFER trial was a large trial (897 patients) performed in the United States examining the ability of Care Link®, an Internet-based remote PM interrogation system, to detect clinically actionable events (CAEs) sooner than the current in-office follow-up supplemented with transtelephonic monitoring transmissions, a limited form of remote device interrogation. The trial results are summarized below:
In the 375-day mean follow-up, 382 patients were identified with at least 1 CAE—111 patients in the control arm and 271 in the remote arm.
The event rate detected per patient for every type of CAE, except for loss of atrial capture, was higher in the remote arm than the control arm.
The median time to first detection of CAEs (4.9 vs. 6.3 months) was significantly shorter in the RMS group compared to the control group (P < 0.0001).
Additionally, only 2% (3/190) of the CAEs in the control arm were detected during a transtelephonic monitoring transmission (the rest were detected at in-office follow-ups), whereas 66% (446/676) of the CAEs were detected during remote interrogation.
The second study, the OEDIPE trial, was a smaller trial (379 patients) performed in France evaluating the ability of the Home Monitoring® RMS to shorten PM post-operative hospitalization while preserving the safety achieved with conventional management involving longer hospital stays.
Implementation and operationalization of the RMS was reported to be successful in 91% (346/379) of the patients and represented 8144 transmissions.
In the RM group 6.5% of patients failed to send messages (10 due to improper use of the transmitter, 2 with unmanageable stress). Of the 172 patients transmitting, 108 patients sent a total of 167 warnings during the trial, with a greater proportion of warnings being attributed to medical rather than technical causes.
Forty percent had no warning message transmission and among these, 6 patients experienced a major adverse event and 1 patient experienced a non-major adverse event. Of the 6 patients having a major adverse event, 5 contacted their physician.
The mean medical reaction time was faster in the RM group (6.5 ± 7.6 days vs. 11.4 ± 11.6 days).
The mean duration of hospitalization was significantly shorter (P < 0.001) for the RM group than the control group (3.2 ± 3.2 days vs. 4.8 ± 3.7 days).
Quality of life estimates by the SF-36 questionnaire were similar for the 2 groups at 1-month follow-up.
3B. Randomized Controlled Trials Evaluating Remote Monitoring Systems for ICD or CRT Devices
The 5 studies evaluating the impact of RMSs with ICD/CRT devices were conducted in the United States and in European countries and involved 2 RMSs—Care Link® and Home Monitoring ®. The objectives of the trials varied and 3 of the trials were smaller pilot investigations.
The first of the smaller studies (151 patients) evaluated patient satisfaction, achievement of patient outcomes, and the cost-effectiveness of the Care Link® RMS compared to quarterly in-office device interrogations with 1-year follow-up.
Individual outcomes such as hospitalizations, emergency department visits, and unscheduled clinic visits were not significantly different between the study groups.
Except for a significantly higher detection of atrial fibrillation in the RM group, data on ICD detection and therapy were similar in the study groups.
Health-related quality of life evaluated by the EuroQoL at 6-month or 12-month follow-up was not different between study groups.
Patients were more satisfied with their ICD care in the clinic follow-up group than in the remote follow-up group at 6-month follow-up, but were equally satisfied at 12-month follow-up.
The second small pilot trial (20 patients) examined the impact of RM follow-up with the House Call 11® system on work schedules and cost savings in patients randomized to 2 study arms varying in the degree of remote follow-up.
The total time including device interrogation, transmission time, data analysis, and physician time required was significantly shorter for the RM follow-up group.
The in-clinic waiting time was eliminated for patients in the RM follow-up group.
The physician talk time was significantly reduced in the RM follow-up group (P < 0.05).
The time for the actual device interrogation did not differ in the study groups.
The third small trial (115 patients) examined the impact of RM with the Home Monitoring® system compared to scheduled trimonthly in-clinic visits on the number of unplanned visits, total costs, health-related quality of life (SF-36), and overall mortality.
There was a 63.2% reduction in in-office visits in the RM group.
Hospitalizations or overall mortality (values not stated) were not significantly different between the study groups.
Patient-induced visits were higher in the RM group than the in-clinic follow-up group.
The TRUST Trial
The TRUST trial was a large multicenter RCT conducted at 102 centers in the United States involving the Home Monitoring® RMS for ICD devices for 1450 patients. The primary objectives of the trial were to determine if remote follow-up could be safely substituted for in-office clinic follow-up (3 in-office visits replaced) and still enable earlier physician detection of clinically actionable events.
Adherence to the protocol follow-up schedule was significantly higher in the RM group than the in-office follow-up group (93.5% vs. 88.7%, P < 0.001).
Actionability of trimonthly scheduled checks was low (6.6%) in both study groups. Overall, actionable causes were reprogramming (76.2%), medication changes (24.8%), and lead/system revisions (4%), and these were not different between the 2 study groups.
The overall mean number of in-clinic and hospital visits was significantly lower in the RM group than the in-office follow-up group (2.1 per patient-year vs. 3.8 per patient-year, P < 0.001), representing a 45% visit reduction at 12 months.
The median time from onset of first arrhythmia to physician evaluation was significantly shorter (P < 0.001) in the RM group than in the in-office follow-up group for all arrhythmias (1 day vs. 35.5 days).
The median time to detect clinically asymptomatic arrhythmia events—atrial fibrillation (AF), ventricular fibrillation (VF), ventricular tachycardia (VT), and supra-ventricular tachycardia (SVT)—was also significantly shorter (P < 0.001) in the RM group compared to the in-office follow-up group (1 day vs. 41.5 days) and was significantly quicker for each of the clinical arrhythmia events—AF (5.5 days vs. 40 days), VT (1 day vs. 28 days), VF (1 day vs. 36 days), and SVT (2 days vs. 39 days).
System-related problems occurred infrequently in both groups—in 1.5% of patients (14/908) in the RM group and in 0.7% of patients (3/432) in the in-office follow-up group.
The overall adverse event rate over 12 months was not significantly different between the 2 groups and individual adverse events were also not significantly different between the RM group and the in-office follow-up group: death (3.4% vs. 4.9%), stroke (0.3% vs. 1.2%), and surgical intervention (6.6% vs. 4.9%), respectively.
The 12-month cumulative survival was 96.4% (95% confidence interval [CI], 95.5%–97.6%) in the RM group and 94.2% (95% CI, 91.8%–96.6%) in the in-office follow-up group; the difference between the 2 groups was not significant (P = 0.174).
The CONNECT Trial
The CONNECT trial, another major multicenter RCT, involved the Care Link® RMS for ICD/CRT devices in a 15-month follow-up study of 1,997 patients at 133 sites in the United States. The primary objective of the trial was to determine whether automatically transmitted physician alerts decreased the time from the occurrence of clinically relevant events to medical decisions. The trial results are summarized below:
Of the 575 clinical alerts sent in the study, 246 did not trigger an automatic physician alert. Transmission failures were related to technical issues such as the alert not being programmed or not being reset, and/or a variety of patient factors such as not being at home and the monitor not being plugged in or set up.
The overall mean time from the clinically relevant event to the clinical decision was significantly shorter (P < 0.001) by 17.4 days in the remote follow-up group (4.6 days for 172 patients) than the in-office follow-up group (22 days for 145 patients).
– The median time to a clinical decision was shorter in the remote follow-up group than in the in-office follow-up group for an AT/AF burden greater than or equal to 12 hours (3 days vs. 24 days) and a fast VF rate greater than or equal to 120 beats per minute (4 days vs. 23 days).
Although infrequent, similar low numbers of events involving low battery and VF detection/therapy turned off were noted in both groups. More alerts, however, were noted for out-of-range lead impedance in the RM group (18 vs. 6 patients), and the time to detect these critical events was significantly shorter in the RM group (same day vs. 17 days).
Total in-office clinic visits were reduced by 38% from 6.27 visits per patient-year in the in-office follow-up group to 3.29 visits per patient-year in the remote follow-up group.
Health care utilization visits (N = 6,227) that included cardiovascular-related hospitalization, emergency department visits, and unscheduled clinic visits were not significantly higher in the remote follow-up group.
The overall mean length of hospitalization was significantly shorter (P = 0.002) for those in the remote follow-up group (3.3 days vs. 4.0 days) and was shorter both for patients with ICD (3.0 days vs. 3.6 days) and CRT (3.8 days vs. 4.7 days) implants.
The mortality rate was not significantly different between the follow-up groups for the ICDs (P = 0.31) or for the CRT devices with defibrillator (P = 0.46).
Conclusions
There is limited clinical trial information on the effectiveness of RMSs for PMs. However, for RMSs for ICD devices, multiple cohort studies and 2 large multicenter RCTs demonstrated feasibility and significant reductions in in-office clinic follow-ups with RMSs in the first year post implantation. The detection rates of clinically significant events (and asymptomatic events) were higher, and the time to a clinical decision for these events was significantly shorter, in the remote follow-up groups than in the in-office follow-up groups. The earlier detection of clinical events in the remote follow-up groups, however, was not associated with lower morbidity or mortality rates in the 1-year follow-up. The substitution of almost all the first year in-office clinic follow-ups with RM was also not associated with an increased health care utilization such as emergency department visits or hospitalizations.
The follow-up in the trials was generally short term (up to 1 year) and provided only a limited assessment of potential longer-term device/lead integrity complications or issues. None of the studies compared the different RMSs, particularly RMSs involving patient-scheduled transmissions versus automatic transmissions. Patients’ acceptance of and satisfaction with RM were reported to be high, but the impact of RM on patients’ health-related quality of life, particularly its psychological aspects, was not evaluated thoroughly. Patients who are not technologically competent, or who have hearing or other physical/mental impairments, were identified as potentially disadvantaged with remote surveillance. Cohort studies consistently identified subgroups of patients who preferred in-office follow-up. Costs and workflow impact on the health care system were evaluated only in a limited way, in European or American clinical settings.
Internet-based device-assisted RMSs involve a new approach to monitoring patients, their disease progression, and their CIEDs. Remote monitoring also has the potential to improve the current postmarket surveillance systems for evolving CIEDs and their ongoing hardware and software modifications. At this point, however, there is insufficient information to evaluate the overall impact on the health care system, although the time savings and convenience to patients and physicians associated with substituting RM for in-office follow-up are more certain. The broader issues surrounding infrastructure, impacts on existing clinical care systems, and regulatory concerns need to be considered before implementing Internet-based RMSs in jurisdictions with different clinical practices.
PMCID: PMC3377571  PMID: 23074419
21.  Utilization of DXA Bone Mineral Densitometry in Ontario 
Executive Summary
Issue
Systematic reviews and analyses of administrative data were performed to determine the appropriate use of bone mineral density (BMD) assessments using dual energy x-ray absorptiometry (DXA), and the associated trends in wrist and hip fractures in Ontario.
Background
Dual Energy X-ray Absorptiometry Bone Mineral Density Assessment
Dual energy x-ray absorptiometry bone densitometers measure bone density based on the differential absorption of 2 x-ray beams by bone and soft tissues. DXA is the gold standard for detecting and diagnosing osteoporosis, a systemic disease characterized by low bone density and altered bone structure, resulting in low bone strength and increased risk of fractures. The test is fast (approximately 10 minutes) and accurate (exceeding 90% at the hip), with low radiation exposure (1/3 to 1/5 of that from a chest x-ray). DXA densitometers are licensed as Class 3 medical devices in Canada. The World Health Organization has established criteria for osteoporosis and osteopenia based on DXA BMD measurements: osteoporosis is defined as a BMD more than 2.5 standard deviations below the mean BMD of normal young adults (i.e., T-score < –2.5), while osteopenia is defined as a BMD more than 1 but no more than 2.5 standard deviations below that mean (i.e., –2.5 ≤ T-score < –1). DXA densitometry is presently an insured health service in Ontario.
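For reference, the T-score underlying these definitions is the standard standardized difference from the young-adult reference population:

$$ T\text{-score} = \frac{\mathrm{BMD}_{\text{measured}} - \overline{\mathrm{BMD}}_{\text{young adult}}}{\mathrm{SD}_{\text{young adult}}} $$

so a T-score of –2.5 means the measured BMD lies 2.5 reference standard deviations below the mean of healthy young adults.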
Clinical Need
 
Burden of Disease
The Canadian Multicenter Osteoporosis Study (CaMos) found that 16% of Canadian women and 6.6% of Canadian men have osteoporosis based on the WHO criteria, with prevalence increasing with age. Osteopenia was found in 49.6% of Canadian women and 39% of Canadian men. In Ontario, it is estimated that nearly 530,000 Ontarians have some degree of osteoporosis. Osteoporosis-related fragility fractures occur most often in the wrist, femur and pelvis. These fractures, particularly those of the hip, are associated with increased mortality and decreased functional capacity and quality of life. A Canadian study showed that at 1 year after a hip fracture, the mortality rate was 20%; another 20% of patients required institutional care, 40% were unable to walk independently, and health-related quality of life was lower due to attributes such as pain, decreased mobility and decreased ability to self-care. The cost of osteoporosis and osteoporotic fractures in Canada was estimated to be $1.3 billion in 1993.
Guidelines for Bone Mineral Density Testing
With 2 exceptions, almost all guidelines address only women. None of the guidelines recommend blanket population-based BMD testing. Instead, all guidelines recommend BMD testing in people at risk of osteoporosis, predominantly women aged 65 years or older. For women under 65 years of age, BMD testing is recommended only if one major or two minor risk factors for osteoporosis exist. Osteoporosis Canada did not restrict its recommendations to women, and thus their guidelines apply to both sexes. Major risk factors are age greater than or equal to 65 years, a history of previous fractures, family history (especially parental history) of fracture, and medication or disease conditions that affect bone metabolism (such as long-term glucocorticoid therapy). Minor risk factors include low body mass index, low calcium intake, alcohol consumption, and smoking.
Current Funding for Bone Mineral Density Testing
The Ontario Health Insurance Program (OHIP) Schedule presently reimburses DXA BMD at the hip and spine. Measurements at both sites are required if feasible. Patients at low risk of accelerated bone loss are limited to one BMD test within any 24-month period, but there are no restrictions on people at high risk. The total fee including the professional and technical components for a test involving 2 or more sites is $106.00 (Cdn).
Method of Review
This review consisted of 2 parts. The first part was an analysis of Ontario administrative data relating to DXA BMD, wrist and hip fractures, and use of antiresorptive drugs in people aged 65 years and older. The Institute for Clinical Evaluative Sciences extracted data from the OHIP claims database, the Canadian Institute for Health Information hospital discharge abstract database, the National Ambulatory Care Reporting System, and the Ontario Drug Benefit database using OHIP and ICD-10 codes. The data was analyzed to examine the trends in DXA BMD use from 1992 to 2005, and to identify areas requiring improvement.
The second part included systematic reviews and analyses of evidence relating to issues identified in the analyses of utilization data. Altogether, 8 reviews and qualitative syntheses were performed, consisting of 28 published systematic reviews and/or meta-analyses, 34 randomized controlled trials, and 63 observational studies.
Findings of Utilization Analysis
Analysis of administrative data showed a 10-fold increase in the number of BMD tests in Ontario between 1993 and 2005.
OHIP claims for BMD tests are presently increasing at a rate of 6 to 7% per year. Approximately 500,000 tests were performed in 2005/06 with an age-adjusted rate of 8,600 tests per 100,000 population.
Women accounted for 90% of all BMD tests performed in the province.
In 2005/06, there was a 2-fold variation in the rate of DXA BMD tests across Local Health Integration Networks, but a 10-fold variation between the county with the highest rate (Toronto) and that with the lowest rate (Kenora). The analysis also showed that:
With the increased use of BMD, there was a concomitant increase in the use of antiresorptive drugs (as shown in people 65 years and older) and a decrease in the rate of hip fractures in people age 50 years and older.
Repeat BMD made up approximately 41% of all tests. Most of the people (>90%) who had annual BMD tests in a 2-year or 3-year period were coded as being at high risk for osteoporosis.
18% (20,865) of the people who had a repeat BMD within a 24-month period, and 34% (98,058) of the people who had one BMD test in a 3-year period, were under 65 years of age, had no fracture in that year, and were coded as low risk.
Only 19% of people older than 65 years underwent BMD testing, and only 41% received osteoporosis treatment, during the year following a fracture.
Men accounted for 24% of all hip fractures and 21% of all wrist fractures, but only 10% of BMD tests. The rates of BMD tests and treatment in men after a fracture were only half of those in women.
In both men and women, the rate of hip and wrist fractures mainly increased after age 65 with the sharpest increase occurring after age 80 years.
Findings of Systematic Review and Analysis
Serial Bone Mineral Density Testing for People Not Receiving Osteoporosis Treatment
A systematic review showed that the mean rate of bone loss in people not receiving osteoporosis treatment (including postmenopausal women) is generally less than 1% per year. Higher rates of bone loss were reported for people with disease conditions, or on medications, that affect bone metabolism. To be considered a genuine biological change, the change in BMD between serial measurements must exceed the least significant change (variability) of the testing, which ranges from 2.77% to 8% for precisions ranging from 1% to 3%, respectively. Progression in BMD was analyzed using different baseline BMD values, rates of bone loss, precisions, and BMD thresholds for initiating treatment. The analyses showed that serial BMD measurement every 24 months (as per OHIP policy for low-risk individuals) is not necessary for people with no major risk factors for osteoporosis, provided that the baseline BMD is normal (T-score ≥ –1) and the rate of bone loss is less than or equal to 1% per year. The analyses showed that for someone with a normal baseline BMD and a rate of bone loss of less than 1% per year, the change in BMD is not likely to exceed the least significant change (even for a 1% precision) in less than 3 years after the baseline test, and is not likely to drop to a BMD level that requires initiation of treatment in less than 16 years after the baseline test.
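The least-significant-change figures cited above follow the standard 95%-confidence formula for the difference between two measurements, where CV is the precision error (coefficient of variation) of the test:

$$ \mathrm{LSC} = 1.96 \times \sqrt{2} \times \mathrm{CV} \approx 2.77 \times \mathrm{CV} $$

A precision of 1% thus gives an LSC of about 2.77%, and a precision of 3% gives about 8.3% (quoted above as 8%).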
Serial Bone Mineral Density Testing in People Receiving Osteoporosis Therapy
Seven published meta-analyses of randomized controlled trials (RCTs) and 2 recent RCTs on BMD monitoring during osteoporosis therapy showed that although greater increases in BMD were generally associated with reduced fracture risk, the change in BMD explained only a small percentage of the fracture risk reduction.
Studies showed that some people with small or no increase in BMD during treatment experienced significant fracture risk reduction, indicating that other factors such as improved bone microarchitecture might have contributed to fracture risk reduction.
There is conflicting evidence relating to the role of BMD testing in improving patient compliance with osteoporosis therapy.
Even though BMD may not be a perfect surrogate for reduction in fracture risk when monitoring responses to osteoporosis therapy, experts advised that it is still the only reliable test available for this purpose.
A systematic review conducted by the Medical Advisory Secretariat showed that the magnitude of increases in BMD during osteoporosis drug therapy varied among medications. Although most of the studies yielded mean percentage increases in BMD from baseline that did not exceed the least significant change for a 2% precision after 1 year of treatment, there were some exceptions.
Bone Mineral Density Testing and Treatment After a Fragility Fracture
A review of 3 published pooled analyses of observational studies and 12 prospective population-based observational studies showed that the presence of any prevalent fracture increases the relative risk for future fractures by approximately 2-fold or more. A review of 10 systematic reviews of RCTs and 3 additional RCTs showed that therapy with antiresorptive drugs significantly reduced the risk of vertebral fractures by 40 to 50% in postmenopausal osteoporotic women and osteoporotic men, and 2 antiresorptive drugs also reduced the risk of nonvertebral fractures by 30 to 50%. Evidence from observational studies in Canada and other jurisdictions suggests that patients who had undergone BMD measurements, particularly if a diagnosis of osteoporosis is made, were more likely to be given pharmacologic bone-sparing therapy. Despite these findings, the rate of BMD investigation and osteoporosis treatment after a fracture remained low (<20%) in Ontario as well as in other jurisdictions.
Bone Mineral Density Testing in Men
There are presently no specific Canadian guidelines for BMD screening in men. A review of the literature suggests that risk factors for fracture and the rate of vertebral deformity are similar for men and women, but the mortality rate after a hip fracture is higher in men compared with women. Two bisphosphonates had been shown to reduce the risk of vertebral and hip fractures in men. However, BMD testing and osteoporosis treatment were proportionately low in Ontario men in general, and particularly after a fracture, even though men accounted for 25% of the hip and wrist fractures. The Ontario data also showed that the rates of wrist fracture and hip fracture in men rose sharply in the 75- to 80-year age group.
Ontario-Based Economic Analysis
The economic analysis focused on analyzing the economic impact of decreasing future hip fractures by increasing the rate of BMD testing in men and women age greater than or equal to 65 years following a hip or wrist fracture. A decision analysis showed the above strategy, especially when enhanced by improved reporting of BMD tests, to be cost-effective, resulting in a cost-effectiveness ratio ranging from $2,285 (Cdn) per fracture avoided (worst-case scenario) to $1,981 (Cdn) per fracture avoided (best-case scenario). A budget impact analysis estimated that shifting utilization of BMD testing from the low risk population to high risk populations within Ontario would result in a saving of $0.85 million to $1.5 million (Cdn) to the health system. The potential net saving was estimated at $1.2 million to $5 million (Cdn) when the downstream cost-avoidance due to prevention of future hip fractures was factored into the analysis.
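The headline ratios follow from straightforward division. The sketch below shows the shape of the calculation with hypothetical placeholder figures, not the report's actual inputs:

    # Illustrative cost-effectiveness arithmetic for a BMD strategy.
    # All figures are hypothetical placeholders, not the report's inputs.

    def cost_per_fracture_avoided(incremental_cost, fractures_avoided):
        """Incremental programme cost divided by fractures avoided."""
        return incremental_cost / fractures_avoided

    def net_saving(shifted_test_savings, downstream_savings):
        """Budget impact: savings from shifting tests to high-risk groups
        plus the avoided cost of treating future hip fractures."""
        return shifted_test_savings + downstream_savings

    print(cost_per_fracture_avoided(1_500_000, 700))  # ~$2,143 per fracture avoided
    print(net_saving(850_000, 350_000))               # net saving to the health system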
Other Factors for Consideration
There is a lack of standardization for BMD testing in Ontario. Two different standards are presently in use, and experts suggest that variability in results from different facilities may lead to unnecessary testing. There is also no requirement for standardized equipment, procedures or reporting format. The current reimbursement policy for BMD testing encourages serial testing in people at low risk of accelerated bone loss, although this review showed that biennial testing is not necessary in all cases. The lack of a database collecting clinical data on BMD testing makes it difficult to evaluate the clinical profiles of patients tested and the outcomes of the tests. Ministry initiatives are in progress under the Osteoporosis Program to develop a mandatory standardized requisition form for BMD tests to facilitate data collection and clinical decision-making. Work is also underway to develop guidelines for BMD testing in men and in perimenopausal women.
Conclusion
Increased use of BMD in Ontario since 1996 appears to be associated with increased use of antiresorptive medication and a decrease in hip and wrist fractures.
Data suggest that as many as 20% (98,000) of the DXA BMD tests in Ontario in 2005/06 were performed in people aged less than 65 years, with no fracture in the current year, and coded as being at low risk for accelerated bone loss; this is not consistent with current guidelines. Even though some of these people might have been incorrectly coded as low-risk, the number of tests in people truly at low risk could still be substantial.
Approximately 4% (21,000) of the DXA BMD tests in 2005/06 were repeat BMD tests in low-risk individuals within a 24-month period. Even though this complies with current OHIP reimbursement policies, the evidence showed that biennial serial BMD testing is not necessary in individuals without major risk factors for fractures, provided that the baseline BMD is normal (T-score ≥ –1). In this population, BMD measurement may be repeated 3 to 5 years after the baseline test to establish the rate of bone loss, and further serial BMD tests may not be necessary for another 7 to 10 years if the rate of bone loss is no more than 1% per year. The precision of the test needs to be considered when interpreting serial BMD results.
Although changes in BMD may not be the perfect surrogate for reduction in fracture risk as a measure of response to osteoporosis treatment, experts advised that it is presently the only reliable test for monitoring response to treatment and to help motivate patients to continue treatment. Patients should not discontinue treatment if there is no increase in BMD after the first year of treatment. Lack of response or bone loss during treatment should prompt the physician to examine whether the patient is taking the medication appropriately.
Men and women who have had a fragility fracture at the hip, spine, wrist or shoulder are at increased risk of having a future fracture, but this population is presently under-investigated and under-treated. Additional efforts have to be made to communicate to physicians (particularly orthopaedic surgeons and family physicians) and the public about the need for a BMD test after fracture, and for initiating treatment if low BMD is found.
Men had a disproportionately low rate of BMD tests and osteoporosis treatment, especially after a fracture. Evidence and fracture data showed that the risk of hip and wrist fractures in men rises sharply at age 70 years.
Some counties had BMD utilization rates that were only 10% of that of the county with the highest utilization. The reasons for low utilization need to be explored and addressed.
Initiatives such as aligning reimbursement policy with current guidelines, developing specific guidelines for BMD testing in men and perimenopausal women, improving BMD reports to assist in clinical decision making, developing a registry to track BMD tests, improving access to BMD tests in remote/rural counties, establishing mechanisms to alert family physicians of fractures, and educating physicians and the public, will improve the appropriate utilization of BMD tests, and further decrease the rate of fractures in Ontario. Some of these initiatives such as developing guidelines for perimenopausal women and men, and developing a standardized requisition form for BMD testing, are currently in progress under the Ontario Osteoporosis Strategy.
PMCID: PMC3379167  PMID: 23074491
22.  Developing an efficient scheduling template of a chemotherapy treatment unit 
The Australasian Medical Journal  2011;4(10):575-588.
This study was undertaken to improve the performance of a Chemotherapy Treatment Unit by increasing throughput and reducing the average patient waiting time. To achieve this objective, a scheduling template was built. The scheduling template is a simple tool that can be used to schedule patients' arrivals at the clinic. A simulation model of the system was built, and several scenarios that match the arrival pattern of the patients to resource availability were designed and evaluated. Detailed analysis showed that one scenario provided the best system performance, and a scheduling template was developed based on that scenario. With the new scheduling template, 22.5% more patients can be served.
Introduction
CancerCare Manitoba is a provincially mandated cancer care agency dedicated to providing quality care to those who have been diagnosed with and are living with cancer. The MacCharles Chemotherapy Unit was built specifically to provide chemotherapy treatment to the cancer patients of Winnipeg. To maintain an excellent service, it tries to ensure that patients receive their treatment in a timely manner. Meeting that goal is challenging because of the lack of a proper roster, uneven workload distribution and inefficient resource allotment. To keep both patients and healthcare providers satisfied by serving the maximum number of patients in a timely manner, it is necessary to develop an efficient scheduling template that matches demand with the availability of resources. This goal can be reached using simulation modelling. Simulation has proven to be an excellent modelling tool: it can be defined as building computer models that represent real-world or hypothetical systems and experimenting with these models to study system behaviour under different scenarios.1, 2
A study was undertaken at the Children's Hospital of Eastern Ontario to identify the issues behind the long waiting times in an emergency room.3 A 20-day field observation revealed that the availability of the staff physician, and interaction with the physician, affect patient wait times. Jyväskylä et al.4 used simulation to test different process scenarios, allocate resources and perform activity-based cost analysis in the Emergency Department (ED) at the Central Hospital. The simulation also supported the study of a new operational method, named the "triage-team" method, without interrupting the main system. The proposed triage-team method categorises patients according to the urgency of seeing a doctor and allows patients to complete the necessary tests before being seen by the doctor for the first time. The simulation study showed that the method would decrease patient throughput time, reduce specialist utilisation, and enable all the tests a patient needs to be ordered right after arrival, thus speeding referral to treatment.
Santibáñez et al.5 developed a discrete event simulation model of the British Columbia Cancer Agency's ambulatory care unit, which was used to study the impact of scenarios considering different operational factors (delay in starting the clinic), appointment schedules (appointment order, appointment adjustment, add-ons to the schedule) and resource allocation. It was found that the best outcomes were obtained when not one but multiple changes were implemented simultaneously. Sepúlveda et al.6 studied the M. D. Anderson Cancer Centre Orlando, a cancer treatment facility, and built a simulation model to analyse and improve the flow process and increase capacity in the main facility. Different scenarios were considered, such as transferring the laboratory and pharmacy areas, adding an extra blood-draw room and applying different patient-scheduling techniques. The study showed that increasing the number of short-term (four hours or less) patients in the morning could increase chair utilisation.
Discrete event simulation also helps improve a service where staff lack insight into the behaviour of the system as a whole. Niranjon et al.7 used simulation successfully where they had to face such constraints and a lack of accessible data. Carlos et al.8 used total quality management and simulation-animation to improve the quality of an emergency room: simulation was used to capture the key points of the emergency room, and animation was used to indicate the areas requiring improvement. This study revealed that long waiting times, overloaded personnel and an increasing rate of patient withdrawal were caused by the lack of capacity in the emergency room.
Baesler et al.9 developed a methodology for a cancer treatment facility to stochastically find a global optimum for the control variables. A simulation model generated the output using a goal-programming framework for all the objectives involved in the analysis, and a genetic algorithm then searched for an improved solution. The control variables considered in this research were the number of treatment chairs, blood-draw nurses, laboratory personnel and pharmacy personnel. Guo et al.10 presented a simulation framework considering demand for appointments, patient flow logic, distribution of resources and the scheduling rules followed by the scheduler. The objective of the study was to develop a scheduling rule ensuring that 95% of all appointment requests are seen within one week of the request being made, to increase patient satisfaction, and to balance each doctor's schedule to maintain a fine harmony between "busy clinic" and "quiet clinic".
Huschka et al.11 studied a healthcare system that was about to change its facility layout. In this case, a simulation study helped design the new healthcare practice by evaluating the change in layout before implementation. Historical data, such as patient arrival rates, the number of patients visiting each day and patient flow logic, were used to build a model of the current system. Different scenarios were then designed to measure how changes to the layout would affect performance.
Wijewickrama et al.12 developed a simulation model to evaluate appointment schedules (AS) for second-time consultations and patient appointment sequences (PSEQ) in a multi-facility system. Five different appointment rules (ARULE) were considered: i) Baily; ii) 3Baily; iii) Individual (Ind); iv) two patients at a time (2AtaTime); v) the Variable Interval (V-I) rule. PSEQ is based on the type of patient: appointment patients (APs) and new patients (NPs). The different PSEQs studied were: i) first-come first-served; ii) appointment patients at the beginning of the clinic (APBEG); iii) new patients at the beginning of the clinic (NPBEG); iv) assigning appointed and new patients in an alternating manner (ALTER); v) assigning a new patient after every five appointment patients (ALTER5). Patient no-shows (0% and 5%) and patient punctuality (PUNCT) (on-time and 10 minutes early) were also considered. The study found that ALTER-Ind. and ALTER5-Ind. performed best in the 0% NOSHOW, on-time PUNCT and 5% NOSHOW, on-time PUNCT situations in reducing waiting time (WT) and idle time (IT) per patient. As NOSHOW created slack time for waiting patients, their WT tended to decrease while IT increased due to unexpected cancellations. Earliness increased congestion, which in turn increased waiting time.
Ramis et al.13 studied a Medical Imaging Center (MIC) and built a simulation model used to improve the patient journey through the imaging centre by reducing waiting times and making better use of the resources. The simulation model also used a Graphical User Interface (GUI) to provide the parameters of the centre, such as arrival rates, distances, processing times, resources and schedule. The simulation was used to measure patient waiting times in different scenarios. The study found that assigning a common function to the resource personnel could improve patient waiting times.
The objective of this study is to develop an efficient scheduling template that maximises the number of served patients and minimises the average patient waiting time at the given resource availability. To accomplish this objective, we will build a simulation model that mimics the working conditions of the clinic. We will then suggest different scenarios for matching the arrival pattern of the patients with the availability of the resources, and full experiments will be performed to evaluate these scenarios. Finally, a simple and practical scheduling template will be built based on the identified best scenario. The developed simulation model is described in section 2, which consists of a description of the treatment room, the types of patients and the treatment durations. In section 3, different improvement scenarios are described, and their analysis is presented in section 4. Section 5 illustrates a scheduling template based on one of the improvement scenarios. Finally, the conclusions and future directions of our work are presented in section 6.
Simulation Model
A simulation model represents the actual system and assists in visualising and evaluating the performance of the system under different scenarios without interrupting the actual system. Building a proper simulation model of a system consists of the following steps.
Observing the system to understand the flow of the entities, key players, availability of resources and overall generic framework.
Collecting the data on the number and type of entities, time consumed by the entities at each step of their journey, and availability of resources.
Validating the model after it is built, by confirming that each entity flows as it is supposed to and that the statistical data generated by the simulation model are similar to the collected data.
Figure 1 shows the patient flow process in the treatment room. At the patient's first appointment, the oncologist draws up the treatment plan. The treatment time varies with the patient's condition and may range from 1 hour to 10 hours. Based on the type of treatment, the physician or the clinical clerk books an available treatment chair for that time period.
On the day of the appointment, the patient waits until the booked chair is free. When the chair is free, a nurse from that station comes to the patient, verifies the patient's name and date of birth, and takes the patient to the treatment chair. The nurse then connects and flushes the chemotherapy line, which takes about five minutes, sets up the treatment, and leaves to serve another patient. Chemotherapy treatment lengths vary from less than an hour to 10-hour infusions. At the end of the treatment, the nurse returns, removes the line and notifies the patient of the next appointment date and time, which also takes about five minutes. Most of the patients visit the clinic to have their PICC line (a peripherally inserted central catheter) cared for. A PICC is a line used to administer the chemotherapy drugs; it must be regularly cleaned and flushed to maintain patency, and the insertion site must be checked for signs of infection. Caring for a PICC line takes a nurse approximately 10 to 15 minutes.
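This flow maps directly onto a discrete event simulation. The authors built their model in ARENA (described below); purely as an illustration, a minimal sketch of the same chair-and-nurse logic in Python with the SimPy library might look like the following, where the capacities, arrival rate and treatment mix are placeholder assumptions rather than the clinic's calibrated values:

    import random
    import simpy

    SETUP_MIN = 5      # nurse connects and flushes the line
    TEARDOWN_MIN = 5   # nurse removes the line, books the next visit
    wait_times = []

    def patient(env, chairs, nurses, treat_min):
        arrived = env.now
        with chairs.request() as chair:           # wait for a free chair
            yield chair
            wait_times.append(env.now - arrived)
            with nurses.request() as nurse:       # nurse needed only for setup
                yield nurse
                yield env.timeout(SETUP_MIN)
            yield env.timeout(treat_min)          # infusion runs unattended
            with nurses.request() as nurse:       # nurse needed again at the end
                yield nurse
                yield env.timeout(TEARDOWN_MIN)

    def arrivals(env, chairs, nurses):
        while True:
            yield env.timeout(random.expovariate(1 / 6.0))   # ~1 arrival per 6 min
            treat = random.choice([15, 60, 120, 240, 480])   # minutes (placeholder mix)
            env.process(patient(env, chairs, nurses, treat))

    env = simpy.Environment()
    chairs = simpy.Resource(env, capacity=30)  # 5 stations x 6 chairs
    nurses = simpy.Resource(env, capacity=7)   # simplification: shift changes not modelled
    env.process(arrivals(env, chairs, nurses))
    env.run(until=12 * 60)                     # one 12-hour clinic day
    served = len(wait_times)
    print(served, sum(wait_times) / max(served, 1))

A faithful replication of the study would replace the fixed nurse capacity with the shift schedule in Table 1 and run 30 independent replications, as the authors did.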
CancerCare Manitoba provided access to its electronic scheduling system, "ARIA", a comprehensive information and image management system from Varian Medical Systems that aggregates patient data into a fully electronic medical chart. This system was used to find out how many patients are booked on each clinic day and which chair is used for how many hours. It was necessary to search each patient's history to find out how long the patient spent in which chair. Collecting this snapshot for each patient gives the complete picture of a one-day clinic schedule.
The treatment room consists of the following two main limited resources:
Treatment Chairs: Chairs that are used to seat the patients during the treatment.
Nurses: Nurses are required to inject the treatment line into the patient and remove it at the end of the treatment. They also take care of the patients when they feel uncomfortable.
The MacCharles Chemotherapy Unit has 11 nurses and 5 stations, described as follows:
Station 1: Station 1 has six chairs (numbered 1 to 6) and two nurses. The two nurses work from 8:00 to 16:00.
Station 2: Station 2 has six chairs (7 to 12) and three nurses. Two nurses work from 8:00 to 16:00 and one nurse works from 12:00 to 20:00.
Station 3: Station 3 has six chairs (13 to 18) and two nurses. The two nurses work from 8:00 to 16:00.
Station 4: Station 4 has six chairs (19 to 24) and two nurses. One nurse works from 8:00 to 16:00 and the other works from 10:00 to 18:00.
Solarium Station: the Solarium Station has six chairs (Solarium Stretcher 1, Solarium Stretcher 2, Isolation, Isolation Emergency, Fire Place 1, Fire Place 2). Only one nurse is assigned to this station, working from 12:00 to 20:00; nurses from other stations can help when the need arises.
There is one more nurse, known as the "float nurse", who works from 11:00 to 19:00 and can work at any station. Table 1 summarises the working hours of chairs and nurses. All treatment stations start at 8:00 and continue until the assigned nurse for that station completes her shift.
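For concreteness, the resource schedule just described (and summarised in Table 1) can be written down as data. The structure below is our own illustrative encoding, not something from the paper, with Station 4 carrying the two shifts listed above:

    STATIONS = {
        "Station 1": {"chairs": 6, "nurse_shifts": [(8, 16), (8, 16)]},
        "Station 2": {"chairs": 6, "nurse_shifts": [(8, 16), (8, 16), (12, 20)]},
        "Station 3": {"chairs": 6, "nurse_shifts": [(8, 16), (8, 16)]},
        "Station 4": {"chairs": 6, "nurse_shifts": [(8, 16), (10, 18)]},
        "Solarium":  {"chairs": 6, "nurse_shifts": [(12, 20)]},
    }
    FLOAT_NURSE = (11, 19)  # works 11:00-19:00, at any station

    def nurses_on_duty(hour):
        """Clinic-wide nurses available at a given hour (cf. Figure 2)."""
        shifts = [s for st in STATIONS.values() for s in st["nurse_shifts"]]
        shifts.append(FLOAT_NURSE)
        return sum(start <= hour < end for start, end in shifts)

    print(nurses_on_duty(8))   # 7 nurses at opening, matching the worked example later
    print(nurses_on_duty(13))  # all 11 nurses in the early afternoon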
Currently, the clinic uses a scheduling template to assign patients' appointments, but due to the high demand for appointments it is no longer followed. We believe this template can be improved based on the availability of nurses and chairs. Clinic workload data were collected over 21 days of field observation. The current scheduling template has 10 types of appointment time slots (15-minute, 1-hour, 1.5-hour, 2-hour, 3-hour, 4-hour, 5-hour, 6-hour, 8-hour and 10-hour) and is designed to serve 95 patients. When the scheduling template was compared with the 21 days of observations, it was found that the clinic serves more patients than the template is designed for. Therefore, the providers do not usually follow the scheduling template; indeed, they very often break the time slots to accommodate slots that do not exist in the template. As a result, some stations are very busy (mostly station 2) while others are underused. If the scheduling template can be improved, it will be possible to bring more patients into the clinic and reduce their waiting time without adding more resources.
In order to build or develop a simulation model of the existing system, it is necessary to collect the following data:
Types of treatment durations.
Numbers of patients in each treatment type.
Arrival pattern of the patients.
Steps that the patients have to go through in their treatment journey and required time of each step.
Using observations of 2,155 patients over 21 days of historical data, the types of treatment durations and the number of patients of each type were estimated. These data also assisted in determining the arrival rate and the frequency distribution of the patients. The patients were categorised into six types, and the percentage of each type and its associated service-time distribution were determined.
ARENA Rockwell Simulation Software (v13) was used to build the simulation model. Entities of the model were tracked to verify that the patients move as intended. The model was run for 30 replications and statistical data were collected to validate the model: the total number of patients that went through the model was compared with the actual number of patients served during the 21 days of observation.
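Deriving the model inputs from the 21-day log is essentially counting. The small sketch below uses a hypothetical record format of our own, (day, arrival_hour, patient_type), to illustrate how hourly arrival rates and the type mix can be estimated:

    from collections import Counter

    # Hypothetical record format: (day, arrival_hour, patient_type); toy data only.
    records = [(1, 8, 1), (1, 8, 4), (1, 9, 2), (2, 8, 1), (2, 10, 3)]
    DAYS = 21  # observation period in the study; the toy data cover only 2 days

    by_hour = Counter(hour for _, hour, _ in records)
    by_type = Counter(ptype for _, _, ptype in records)

    for hour in sorted(by_hour):
        print(hour, by_hour[hour] / DAYS)                  # mean arrivals per day in that hour
    for ptype in sorted(by_type):
        print(ptype, 100 * by_type[ptype] / len(records))  # % of patients of each type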
Improvement Scenarios
After verifying and validating the simulation model, different scenarios were designed and analysed to identify the scenario that handles more patients and reduces the average patient waiting time. Based on the clinic observations and discussions with the healthcare providers, the following constraints were stated:
The stations are already filled with treatment chairs, so it is not possible to fit any more chairs in the clinic. Moreover, the stakeholders are not interested in adding extra chairs.
The stakeholders and the caregivers are not interested in changing the layout of the treatment room.
Given these constraints, the options that can be considered to design alternative scenarios are:
Changing the arrival pattern of the patients so that it fits the nurses' availability.
Changing the nurses' schedule.
Adding one full time nurse at different starting times of the day.
Figure 2 compares the number of available nurses with the number of patient arrivals during different hours of the day. There is rapid growth in patient arrivals (from 13 to 17) between 8:00 and 10:00, even though the clinic has the same number of nurses throughout this period, and at 12:00 there is a sudden drop in patient arrivals even though more nurses are available. Clearly, there is an imbalance between the number of available nurses and the number of patient arrivals over the hours of the day. Consequently, balancing the demand (the arrival rate of patients) against the resources (the available number of nurses) will reduce patient waiting times and increase the number of served patients. The alternative scenarios that satisfy the above constraints are listed in Table 2. These scenarios respect the following rules:
Long treatments (4 to 11 hours) have to be scheduled early in the morning to avoid overtime.
Patients of type 1 (15 minutes to 1 hour of treatment) are the most common. Because their treatment times are short, they can be fitted in at any time of the day; hence, it is recommended to bring these patients in during the middle of the day, when more nurses are available.
Nurses get tired towards the end of the clinic day; therefore, fewer patients should be scheduled in the late hours of the day.
In Scenario 1, the arrival pattern of the patients was changed to fit the nurses' schedule. This arrival pattern is shown in Table 3, and Figure 3 compares the new patient arrival pattern with the current one. Similar patterns can be developed for the remaining scenarios.
Analysis of Results
The simulation model was run in ARENA (v13) with no warm-up period, because the model simulates day-to-day scenarios: the patients of any day are supposed to be served on the same day. The model was run for 30 days (replications) and statistical data were collected to evaluate each scenario. Tables 4 and 5 show a detailed comparison of system performance between the current scenario and Scenario 1. The results are quite interesting: the average throughput of the system increased from 103 to 125 patients per day, and the maximum throughput reached 135 patients. Although the average waiting time increased, the utilisation of the treatment stations increased by 15.6%. Similar analyses were performed for the remaining scenarios; due to space limitations the detailed results are not given, but Table 6 summarises and compares the different scenarios. Scenario 1 significantly increased the throughput of the system (by 21%) while still yielding an acceptably low average waiting time (13.4 minutes). It is also worth noting that adding a nurse (Scenarios 3, 4 and 5) does not significantly reduce the average waiting time or increase the throughput. The reason is that when all the chairs are busy, the nurses must wait until some patients finish their treatment, so the remaining patients must wait to start theirs. Therefore, hiring a nurse without adding more chairs will not reduce the waiting time or increase the throughput of the system; in this case, the only way to increase throughput is to adjust the arrival pattern of patients to the nurses' schedule.
Developing a Scheduling Template based on Scenario 1
Scenario 1 provides the best performance. However, a scheduling template is necessary for the care providers to book patients. Therefore, a brief description is given below of how the scheduling template is developed based on this scenario.
Table 3 gives the number of patients arriving each hour under Scenario 1. The distribution of each type of patient, shown in Table 7, is based on the percentage of each type in the collected data. For example, between 8:00 and 9:00, 12 patients arrive, of whom 54.85% are of Type 1, 34.55% of Type 2, 15.163% of Type 3, 4.32% of Type 4 and 2.58% of Type 5, with the rest of Type 6. It is worth noting that we assume the patients of each type arrive as a group at the beginning of the hourly time slot; for example, all six Type 1 patients in the 8:00 to 9:00 time slot arrive at 8:00.
The number of patients of each type is distributed in such a way that all the constraints described in Section 1.3 are respected. Most of the clinic's patients are of types 1, 2 and 3, which require less treatment time than the other types, so they are distributed throughout the day. Patients of types 4, 5 and 6 require longer treatment times and are therefore scheduled at the beginning of the day to avoid overtime. Because patients of types 4, 5 and 6 come at the beginning of the day, most type 1 and 2 patients come at mid-day (12:00 to 16:00). Another reason to make the treatment room busier between 12:00 and 16:00 is that the clinic has its maximum number of nurses during this period. Nurses become tired by the end of the clinic day, which is a reason not to schedule any patients after 19:00.
Based on the patient arrival schedule and nurse availability, a scheduling template was built, as shown in Figure 4. To build the template, whenever a nurse is available and patients are waiting for service, a priority list of the waiting patients is developed. Patients are prioritised in descending order of their estimated slack time, and secondarily by shortest service time; the secondary rule breaks the tie if two patients have the same slack. The slack time is calculated using the following equation:
Slack time = Due time - (Arrival time + Treatment time)
Due time is the clinic closing time. To explain how the process works, assume that at hour 8:00 (between 8:00 and 8:15) seven patients are scheduled: two patients in station 1 (one 8-hour and one 15-minute patient), two in station 2 (two 12-hour patients), two in station 3 (one 2-hour and one 15-minute patient) and one in station 4 (one 3-hour patient). According to Figure 2, seven nurses are available at 8:00, and it takes 15 minutes to set up a patient. Therefore, it is not possible to schedule more than seven patients between 8:00 and 8:15, and the current schedule serves exactly seven patients in this interval. The rest of the template can be justified similarly.
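The dispatch rule is easy to state in code. The sketch below follows the rule exactly as given above (larger slack first, ties broken by shorter treatment time), with times in minutes from opening and a closing time chosen purely for illustration:

    CLOSING = 12 * 60  # due time = clinic closing, here 12 hours after opening

    def slack(p):
        # Slack time = Due time - (Arrival time + Treatment time)
        return CLOSING - (p["arrival"] + p["treatment"])

    def priority_order(waiting):
        # primary key: slack, descending; secondary key: shortest treatment time
        return sorted(waiting, key=lambda p: (-slack(p), p["treatment"]))

    waiting = [
        {"name": "A", "arrival": 0,  "treatment": 480},  # 8-hour infusion
        {"name": "B", "arrival": 0,  "treatment": 15},   # 15-minute PICC care
        {"name": "C", "arrival": 30, "treatment": 120},  # 2-hour infusion
    ]
    print([p["name"] for p in priority_order(waiting)])  # -> ['B', 'C', 'A']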
doi:10.4066/AMJ.2011.837
PMCID: PMC3562880  PMID: 23386870
23.  Effect of an Educational Toolkit on Quality of Care: A Pragmatic Cluster Randomized Trial 
PLoS Medicine  2014;11(2):e1001588.
In a pragmatic cluster-randomized trial, Baiju Shah and colleagues evaluated the effectiveness of printed educational materials for clinician education focusing on cardiovascular disease screening and risk reduction in people with diabetes.
Please see later in the article for the Editors' Summary
Background
Printed educational materials for clinician education are one of the most commonly used approaches for quality improvement. The objective of this pragmatic cluster randomized trial was to evaluate the effectiveness of an educational toolkit focusing on cardiovascular disease screening and risk reduction in people with diabetes.
Methods and Findings
All 933,789 people aged ≥40 years with diagnosed diabetes in Ontario, Canada were studied using population-level administrative databases, with additional clinical outcome data collected from a random sample of 1,592 high risk patients. Family practices were randomly assigned to receive the educational toolkit in June 2009 (intervention group) or May 2010 (control group). The primary outcome in the administrative data study, death or non-fatal myocardial infarction, occurred in 11,736 (2.5%) patients in the intervention group and 11,536 (2.5%) in the control group (p = 0.77). The primary outcome in the clinical data study, use of a statin, occurred in 700 (88.1%) patients in the intervention group and 725 (90.1%) in the control group (p = 0.26). Pre-specified secondary outcomes, including other clinical events, processes of care, and measures of risk factor control, were also not improved by the intervention. A limitation is the high baseline rate of statin prescribing in this population.
Conclusions
The educational toolkit did not improve quality of care or cardiovascular outcomes in a population with diabetes. Despite being relatively easy and inexpensive to implement, printed educational materials were not effective. The study highlights the need for a rigorous and scientifically based approach to the development, dissemination, and evaluation of quality improvement interventions.
Trial Registration
http://www.ClinicalTrials.gov NCT01411865 and NCT01026688
Editors' Summary
Background
Clinical practice guidelines help health care providers deliver the best care to patients by combining all the evidence on disease management into specific recommendations for care. However, the implementation of evidence-based guidelines is often far from perfect. Take the example of diabetes. This common chronic disease, which is characterized by high levels of sugar (glucose) in the blood, impairs the quality of life of patients and shortens life expectancy by increasing the risk of cardiovascular diseases (conditions that affect the heart and circulation) and other life-threatening conditions. Patients need complex care to manage the multiple risk factors (high blood sugar, high blood pressure, high levels of fat in the blood) that are associated with the long-term complications of diabetes, and they need to be regularly screened and treated for these complications. Clinical practice guidelines for diabetes provide recommendations on screening and diagnosis, drug treatment, and cardiovascular disease risk reduction, and on helping patients self-manage their disease. Unfortunately, the care delivered to patients with diabetes frequently fails to meet the standards laid down in these guidelines.
Why Was This Study Done?
How can guideline adherence and the quality of care provided to patients be improved? A common approach is to send printed educational materials to clinicians. For example, when the Canadian Diabetes Association (CDA) updated its clinical practice guidelines in 2008, it mailed educational toolkits that contained brochures and other printed materials targeting key themes from the guidelines to family physicians. In this pragmatic cluster randomized trial, the researchers investigate the effect of the CDA educational toolkit that targeted cardiovascular disease screening and treatment on the quality of care of people with diabetes. A pragmatic trial asks whether an intervention works under real-life conditions and whether it works in terms that matter to the patient; a cluster randomized trial randomly assigns groups of people to receive alternative interventions and compares outcomes in the differently treated “clusters.”
What Did the Researchers Do and Find?
The researchers randomly assigned family practices in Ontario, Canada to receive the educational toolkit in June 2009 (intervention group) or in May 2010 (control group). They examined outcomes between July 2009 and April 2010 in all patients with diabetes in Ontario aged over 40 years (933,789 people) using population-level administrative data. In Canada, administrative databases record the personal details of people registered with provincial health plans, information on hospital visits and prescriptions, and physician service claims for consultations, assessments, and diagnostic and therapeutic procedures. They also examined clinical outcome data from a random sample of 1,592 patients at high risk of cardiovascular complications. In the administrative data study, death or non-fatal heart attack (the primary outcome) occurred in about 11,500 patients in both the intervention and control group. In the clinical data study, the primary outcome―use of a statin to lower blood fat levels―occurred in about 700 patients in both study groups. Secondary outcomes, including other clinical events, processes of care, and measures of risk factor control were also not improved by the intervention. Indeed, in the administrative data study, some processes of care outcomes related to screening for heart disease were statistically significantly worse in the intervention group than in the control group, and in the clinical data study, fewer patients in the intervention group reached blood pressure targets than in the control group.
What Do These Findings Mean?
These findings suggest that the CDA cardiovascular diseases educational toolkit did not improve quality of care or cardiovascular outcomes in a population with diabetes. Indeed, the toolkit may have led to worsening in some secondary outcomes although, because numerous secondary outcomes were examined, this may be a chance finding. Limitations of the study include its length, which may have been too short to see an effect of the intervention on clinical outcomes, and the possibility of a ceiling effect—the control group in the clinical data study generally had good care, which left little room for improvement of the quality of care in the intervention group. Overall, however, these findings suggest that printed educational materials may not be an effective way to improve the quality of care for patients with diabetes and other complex conditions and highlight the need for a rigorous, scientific approach to the development, dissemination, and evaluation of quality improvement interventions.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001588.
The US National Diabetes Information Clearinghouse provides information about diabetes for patients, health care professionals, and the general public (in English and Spanish)
The UK National Health Service Choices website provides information (including some personal stories) for patients and carers about type 2 diabetes, the commonest form of diabetes
The Canadian Diabetes Association also provides information about diabetes for patients (including some personal stories about living with diabetes) and health care professionals; its latest clinical practice guidelines are available on its website
The UK National Institute for Health and Care Excellence provides general information about clinical guidelines and about health care quality standards in the UK
The US Agency for Healthcare Research and Quality aims to improve the quality, safety, efficiency, and effectiveness of health care for all Americans (information in English and Spanish); the US National Guideline Clearinghouse is a searchable database of clinical practice guidelines
The International Diabetes Federation provides information about diabetes for patients and health care professionals, along with international statistics on the burden of diabetes
doi:10.1371/journal.pmed.1001588
PMCID: PMC3913553  PMID: 24505216
24.  Methodological quality of guidelines for management of Lyme neuroborreliosis 
BMC Neurology  2015;15:242.
Background
Many aspects of clinical management of Lyme neuroborreliosis are subject to intense debates. Guidelines show considerable variability in their recommendations, leading to divergent treatment regimes. The most pronounced differences in recommendations exist between guidelines from scientific societies and from patient advocacy groups. Assessment of the methodological quality of these contradictory guideline recommendations can be helpful for healthcare professionals.
Methods
Systematic searches were conducted in MEDLINE and databases of four international and national guideline organizations for guidelines on Lyme neuroborreliosis published from 1999–2014. Characteristics (e.g., year of publication, sponsoring organization) and key recommendations were extracted from each guideline. Two independent reviewers assessed the methodological quality of each guideline according to the Appraisal of Guidelines for Research and Evaluation II (AGREE II) tool. AGREE II scores from guidelines developed by scientific societies and from patient advocacy groups were compared across domains.
Results
We identified eight eligible guidelines, of which n = 6 were developed by scientific societies and n = 2 by patient advocacy groups. Agreement on AGREE II scores was good (Cohen's weighted kappa = 0.87, 95% CI 0.83–0.92). Three guidelines, all from scientific societies, had an overall quality score of ≥ 50%. Two of them were recommended for use according to the AGREE II criteria. Across all guidelines, the AGREE II domain with the highest scores was "Clarity of Presentation" (65%, SD 19%); all other domains had scores < 50%, with the domain "Applicability" having the lowest scores (4%, SD 4%). Guidelines developed by scientific societies had statistically significantly higher scores for clarity of presentation than guidelines from patient advocacy groups (p = 0.0151). No statistically significant differences were found in the other domains.
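For readers unfamiliar with the agreement statistic, a weighted kappa on ordinal ratings can be computed as in the sketch below. The ratings are fabricated, and the quadratic weighting is our assumption, since the abstract does not state which weighting scheme the authors used:

    from sklearn.metrics import cohen_kappa_score

    # Fabricated example: two reviewers rating ten AGREE II items on the 1-7 scale.
    reviewer_1 = [6, 5, 2, 7, 4, 3, 5, 6, 1, 4]
    reviewer_2 = [6, 4, 2, 7, 5, 3, 4, 6, 2, 4]

    # Weighted kappa penalises near-misses less than large disagreements.
    print(cohen_kappa_score(reviewer_1, reviewer_2, weights="quadratic"))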
Conclusions
Current guidelines on Lyme neuroborreliosis vary in methodological quality and content. Health care providers and patients need to be aware of this variability in quality when choosing recommendations for their treatment decisions regarding Lyme neuroborreliosis. No statement can be given on quality of content and validity of recommendations, as these issues are not subject to assessment with the AGREE II tool and are prone to individual interpretation of the available evidence by the corresponding guideline panels. To enhance guideline quality, guideline panels should put more emphasis on linking recommendations to the available evidence, transparency in reporting how evidence was searched for and evaluated, and the implementation of recommendations into clinical practice.
Electronic supplementary material
The online version of this article (doi:10.1186/s12883-015-0501-3) contains supplementary material, which is available to authorized users.
doi:10.1186/s12883-015-0501-3
PMCID: PMC4660677  PMID: 26607686
25.  Guideline-adherent initial intravenous antibiotic therapy for hospital-acquired/ventilator-associated pneumonia is clinically superior, saves lives and is cheaper than non guideline adherent therapy 
Introduction
Hospital-acquired pneumonia (HAP), often occurring as ventilator-associated pneumonia (VAP), is the most frequent hospital infection in intensive care units (ICU). Early adequate antimicrobial therapy is an essential determinant of clinical outcome. Organisations like the German PEG or ATS/IDSA provide guidelines for the initial calculated treatment in the absence of pathogen identification. We conducted a retrospective chart review for patients with HAP/VAP and assessed whether the initial intravenous antibiotic therapy (IIAT) was adequate according to the PEG guidelines.
Materials and methods
We collected data from 5 tertiary care hospitals. Electronic data filtering identified 895 patients with potential HAP/VAP. After chart review we finally identified 221 patients meeting the definition of HAP/VAP. Primary study endpoints were clinical improvement, survival and length of stay. Secondary endpoints included duration of mechanical ventilation, total costs, costs incurred on the intensive care unit (ICU), costs incurred on general wards and drug costs.
Results
We found that 107 patients received adequate initial intravenous antibiotic therapy (IIAT) vs. 114 with inadequate IIAT according to the PEG guidelines. Baseline characteristics of the two groups revealed no significant differences and good comparability. Clinical improvement was 64% across all patients: 82% (85/104) in the subpopulation with adequate IIAT, while only 47% (48/103) of inadequately treated patients improved (p < 0.001). The odds ratio of therapeutic success with guideline-adherent (GA) versus non-guideline-adherent (NGA) treatment was 5.821 (p < 0.001; 95% CI 2.712–12.497). Survival was 80% for the total population (n = 221), 86% (92/107) in the adequately treated and 74% (84/114) in the inadequately treated subpopulation (p = 0.021). The odds ratio of mortality for GA vs. NGA treatment was 0.565 (p = 0.117; 95% CI 0.276–1.155). Adequately treated patients had a significantly shorter length of stay (LOS) (23.9 vs. 28.3 days; p = 0.022), required significantly fewer hours of mechanical ventilation (175 vs. 274; p = 0.001), and incurred lower total costs (EUR 28,033 vs. EUR 36,139; p = 0.006) and lower ICU-related costs (EUR 13,308 vs. EUR 18,666; p = 0.003).
Drug costs for the hospital stay were also lower (EUR 4,069 vs. EUR 4,833), though the difference was not significant. The most frequent types of inadequate therapy were monotherapy instead of combination therapy, the wrong type of penicillin, and the wrong type of cephalosporin.
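As an aside on the statistics, an unadjusted odds ratio and its 95% confidence interval can be recovered from the 2×2 counts quoted above. Note that the paper's reported OR of 5.821 may come from an adjusted model, so the crude value computed this way need not match it:

    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """a/b = events/non-events in group 1; c/d = the same in group 2.
        Returns the odds ratio with a Woolf (log-scale) 95% CI."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        return or_, or_ * math.exp(-z * se), or_ * math.exp(z * se)

    # Clinical improvement: 85/104 with adequate IIAT vs. 48/103 with inadequate IIAT
    print(odds_ratio_ci(85, 19, 48, 55))  # crude OR ~5.1 with its 95% CI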
Discussion
These findings are consistent with those from other studies analyzing the impact of guideline adherence on survival rates, clinical success, LOS and costs. However, inadequately treated patients had a higher complicated pathogen risk score (CPRS) than those who received adequate therapy. This suggests that therapy based on local experience may be sufficient for patients with a low CPRS but inadequate for those with a high CPRS. Linear regression models showed that single items of the CPRS, such as extrapulmonary organ failure or late onset, had no significant influence on the results.
Conclusion
Guideline-adherent initial intravenous antibiotic therapy is clinically superior, saves lives and is less expensive than non-guideline-adherent therapy. A CPRS score can be a useful tool for determining the right choice of initial intravenous antibiotic therapy. The net effect on the German healthcare system is estimated at up to 2,042 lives and EUR 125,819,000 saved per year if guideline-adherent initial therapy for HAP/VAP were established in all German ICUs.
doi:10.1186/2047-783X-16-7-315
PMCID: PMC3352003  PMID: 21813372
