1.  Ethnic Disparities in Diabetes Management and Pay-for-Performance in the UK: The Wandsworth Prospective Diabetes Study 
PLoS Medicine  2007;4(6):e191.
Background
Pay-for-performance rewards health-care providers by paying them more if they succeed in meeting performance targets. A new contract for general practitioners in the United Kingdom represents the most radical shift towards pay-for-performance seen in any health-care system. The contract provides an important opportunity to address disparities in chronic disease management between ethnic and socioeconomic groups. We examined disparities in management of people with diabetes and intermediate clinical outcomes within a multiethnic population in primary care before and after the introduction of the new contract in April 2004.
Methods and Findings
We conducted a population-based longitudinal survey, using electronic general practice records, in an ethnically diverse part of southwest London. Outcome measures were prescribing levels and achievement of national treatment targets (HbA1c ≤ 7.0%; blood pressure [BP] < 140/80 mm Hg; total cholesterol ≤ 5 mmol/l or 193 mg/dl). The proportion of patients reaching treatment targets for HbA1c, BP, and total cholesterol increased significantly after the implementation of the new contract. The extent of these increases was broadly uniform across ethnic groups, with the exception of the black Caribbean patient group, which had significantly smaller improvements in HbA1c control (adjusted odds ratio [AOR] 0.75, 95% confidence interval [CI] 0.57–0.97) and BP control (AOR 0.65, 95% CI 0.53–0.81) relative to the white British patient group. Variations in prescribing and achievement of treatment targets between ethnic groups present in 2003 were not attenuated in 2005.
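The adjusted odds ratios above are the output of regression models that control for patient characteristics. As a minimal sketch of how an AOR and its 95% CI can be obtained from a logistic regression, assuming illustrative covariates and synthetic data (not the study's actual model or records):

```python
# Minimal sketch: adjusted odds ratio (AOR) with 95% CI via logistic regression.
# Variable names and data are illustrative, not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "met_target": rng.integers(0, 2, n),       # 1 = HbA1c target met
    "black_caribbean": rng.integers(0, 2, n),  # 1 = black Caribbean, 0 = white British
    "age": rng.normal(60, 10, n),
    "male": rng.integers(0, 2, n),
})

# Adjusting for age and sex; the exponentiated coefficient is the AOR.
model = smf.logit("met_target ~ black_caribbean + age + male", data=df).fit(disp=0)
aor = np.exp(model.params["black_caribbean"])
ci_low, ci_high = np.exp(model.conf_int().loc["black_caribbean"])
print(f"AOR {aor:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```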
Conclusions
Pay-for-performance incentives have not addressed disparities in the management and control of diabetes between ethnic groups. Quality improvement initiatives must place greater emphasis on minority communities to avoid continued disparities in mortality from cardiovascular disease and the other major complications of diabetes.
Based on a population-based longitudinal survey, Christopher Millett and colleagues concluded that pay-for-performance incentives for UK general practitioners had not addressed disparities in the management and control of diabetes between ethnic groups.
Editors' Summary
Background.
When used in health care, the term “pay-for-performance” means rewarding health-care providers by paying them more if they succeed in meeting performance targets set by the government and other commissioners of health care. It is an approach to health service management that is becoming common, particularly in the US and the UK. For example, the UK's general practitioners (family doctors) agreed with the government in 2004 that they would receive increases to their income that would depend on how well they were judged to be performing according to 146 quality indicators that cover clinical care for ten chronic diseases, as well as “organization of care,” and “patient experience.” One of the chronic diseases is diabetes, a condition that has reached epidemic proportions in the UK, as it has also in many other countries.
  Ethnic minorities often suffer more from health problems than the majority population of the country they live in. They are also likely to be served less well by the health services. Diabetes is a case in point; in many countries—including the US and UK—the condition is much more common in minority groups. In addition, their diabetes is usually less well “managed”—i.e., it becomes more severe more rapidly and there are more complications. In the UK, the government recognizes the need to ensure that its health policies are applied to all sectors of the population, including minority ethnic communities. Nevertheless, the advances that have been made in the management of diabetes have not benefited the UK's ethnic minorities to the same extent as they have the majority population. It is hoped that the use of pay-for-performance management by the UK National Health Service will lead to more efficient delivery of health care, and that one consequence will be that different communities will be more equally served.
Why Was This Study Done?
The researchers wanted to find out whether the introduction of pay-for-performance management in general medical practice in the UK was leading to a reduction in the gap in the quality of care provided to people with diabetes who belonged to ethnic minorities and other people with diabetes.
What Did the Researchers Do and Find?
The research was carried out in Wandsworth, an area of southwest London that is considered to be “ethnically diverse.” Over 4,200 people with diabetes are registered with general practitioners in this area. The researchers used the electronic records kept by these doctors and they focused on diabetes “treatment targets” set by the government, according to which the blood pressure and cholesterol levels of people with diabetes should be kept below defined levels. There is also a target level for glycated hemoglobin (HbA1c), which is a substance that can be used to measure the extent to which a patient's diabetes is under control. The researchers calculated the percentage of patients who were meeting these treatment targets. Overall, more patients met their treatment targets after the introduction of pay-for-performance management than were doing so before. All ethnic groups seemed to have benefited, but the black Caribbean group did not benefit as much as the other groups; the number of these patients who met the targets did improve, but the gap between them and patients with diabetes from other ethnic groups remained about the same.
What Do These Findings Mean?
The researchers concluded that, while the introduction of pay-for-performance did seem to have been beneficial, it had not addressed disparities in the management and control of diabetes between ethnic groups. They say that, in all initiatives to improve the quality of health care, special efforts must be made to reduce such gaps. The UK's use of pay-for-performance in general practice is regarded internationally as a very bold step, but, as other countries are also considering moving in this direction, the lessons from the study will be relevant in many other parts of the world.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040191.
Wikipedia has an entry on pay-for-performance in health care (note: Wikipedia is a free online encyclopedia that anyone can edit)
Information about how the NHS works in England
Diabetes UK is the largest organization in the UK working for people with diabetes and its website includes a useful Guide to Diabetes
The London Health Observatory is one of nine health observatories set up by the NHS to monitor health and health care in England. There is a page devoted to “ethnic health intelligence”
Introductory information about diabetes as a medical condition may be found on the MedlinePlus website; there are several MedlinePlus pages on diabetes as well
doi:10.1371/journal.pmed.0040191
PMCID: PMC1891316  PMID: 17564486
2.  Behavioural Interventions for Type 2 Diabetes 
Executive Summary
In June 2008, the Medical Advisory Secretariat began work on the Diabetes Strategy Evidence Project, an evidence-based review of the literature surrounding strategies for successful management and treatment of diabetes. The project began when the Health System Strategy Division at the Ministry of Health and Long-Term Care asked the Secretariat to provide an evidentiary platform for the Ministry's newly released Diabetes Strategy.
After an initial review of the strategy and consultation with experts, the secretariat identified five key areas in which evidence was needed. Evidence-based analyses have been prepared for each of these five areas: insulin pumps, behavioural interventions, bariatric surgery, home telemonitoring, and community-based care. For each area, an economic analysis was completed where appropriate and is described in a separate report.
To review these titles within the Diabetes Strategy Evidence series, please visit the Medical Advisory Secretariat Web site: http://www.health.gov.on.ca/english/providers/program/mas/mas_about.html.
Diabetes Strategy Evidence Platform: Summary of Evidence-Based Analyses
Continuous Subcutaneous Insulin Infusion Pumps for Type 1 and Type 2 Adult Diabetics: An Evidence-Based Analysis
Behavioural Interventions for Type 2 Diabetes: An Evidence-Based Analysis
Bariatric Surgery for People with Diabetes and Morbid Obesity: An Evidence-Based Summary
Community-Based Care for the Management of Type 2 Diabetes: An Evidence-Based Analysis
Home Telemonitoring for Type 2 Diabetes: An Evidence-Based Analysis
Application of the Ontario Diabetes Economic Model (ODEM) to Determine the Cost-effectiveness and Budget Impact of Selected Type 2 Diabetes Interventions in Ontario
Objective
The objective of this report is to determine whether behavioural interventions are effective in improving glycemic control in adults with type 2 diabetes.
Background
Diabetes is a serious chronic condition affecting millions of people worldwide and is the sixth leading cause of death in Canada. In 2005, an estimated 8.8% of Ontario’s population had diabetes, representing more than 816,000 Ontarians. The direct health care cost of diabetes was $1.76 billion in the year 2000 and is projected to rise to a total cost of $3.14 billion by 2016. Much of this cost arises from the serious long-term complications associated with the disease, including coronary heart disease, stroke, adult blindness, limb amputations, and kidney disease.
Type 2 diabetes accounts for 90–95% of diabetes cases. While it is more prevalent in people aged 40 years and older, prevalence in younger populations is increasing due to rising obesity and physical inactivity in children.
Data from the United Kingdom Prospective Diabetes Study (UKPDS) have shown that tight glycemic control can significantly reduce the risk of developing serious complications in type 2 diabetics. Despite physicians’ and patients’ knowledge of the importance of glycemic control, Canadian data have shown that only 38% of patients with diabetes have HbA1c levels in the optimal range of 7% or less. This statistic highlights the complexities involved in the management of diabetes, which is characterized by extensive patient involvement in addition to the support provided by physicians. An enormous demand is, therefore, placed on patients to self-manage the physical, emotional and psychological aspects of living with a chronic illness.
Despite differences in individual needs to cope with diabetes, there is general agreement on the necessity of supportive programs for patient self-management. While traditional programs were didactic models with the goal of improving patients’ knowledge of their disease, current models focus on behavioural approaches aimed at providing patients with the skills and strategies required to promote behaviour change.
Several meta-analyses and systematic reviews have demonstrated improved health outcomes with self-management support programs in type 2 diabetics. They have all, however, either looked at a specific component of self-management support programs (i.e. self-management education) or have been conducted in specific populations. Most reviews are also qualitative and do not clearly define the interventions of interest, making findings difficult to interpret. Moreover, heterogeneity in the interventions has led to conflicting evidence on the components of effective programs. Policymakers thus face considerable uncertainty regarding the optimal design and delivery of these programs.
Evidence-Based Analysis of Effectiveness
Research Questions
Are behavioural interventions effective in improving glycemic control in adults with type 2 diabetes?
Is the effectiveness of the intervention impacted by intervention characteristics (e.g. delivery of intervention, length of intervention, mode of instruction, interventionist etc.)?
Inclusion Criteria
English Language
Published between January 1996 and August 2008
Type 2 diabetic adult population (>18 years)
Randomized controlled trials (RCTs)
Systematic reviews, or meta-analyses
Describing a multi-faceted self-management support intervention as defined by the 2007 Self-Management Mapping Guide (1)
Reporting outcomes of glycemic control (HbA1c) with extractable data
Studies with a minimum of 6-month follow up
Exclusion Criteria
Studies with a control group other than usual care
Studies with a sample size <30
Studies without a clearly defined intervention
Outcomes of Interest
Primary outcome: glycemic control (HbA1c)
Secondary outcomes: systolic blood pressure (SBP) control, lipid control, change in smoking status, weight change, quality of life, knowledge, self-efficacy, managing psychosocial aspects of diabetes, assessing dissatisfaction and readiness to change, and setting and achieving diabetes goals.
Search Strategy
A search was performed in OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), The Cochrane Library, and the International Agency for Health Technology Assessment (INAHTA) for studies published between January 1996 and August 2008. Abstracts were reviewed by a single author and studies meeting the inclusion criteria outlined above were obtained. Data on population characteristics, glycemic control outcomes, and study design were extracted. Reference lists were also checked for relevant studies. The quality of the evidence was assessed as being either high, moderate, low, or very low according to the GRADE methodology.
Summary of Findings
The search identified 638 citations published between 1996 and August 2008, of which 12 met the inclusion criteria: one was a meta-analysis (Gary et al. 2003) and the remaining 11 were RCTs (9 of which were used in the meta-analysis). Only one study was small (total sample size N=47).
Summary of Participant Demographics across studies
A total of 2,549 participants were included in the 11 identified studies. The mean age of participants was approximately 58 years, and the mean duration of diabetes was approximately 6 years. Most studies reported gender; the mean percentage of female participants was approximately 67%. Of the 11 studies, two focused only on women and four included only Hispanic individuals. All studies exclusively evaluated patients with type 2 diabetes.
Study Characteristics
The studies were conducted between 2002 and 2008. Approximately six of the 11 studies were carried out in the USA, with the remaining studies conducted in the UK, Sweden, and Israel (sample sizes ranged from 47 to 824 participants). The quality of the studies ranged from moderate to low, with four studies of moderate quality and the remaining seven of low quality (based on the CONSORT checklist). Differences in quality were mainly due to methodological issues such as inadequate description of randomization, sample size calculation, allocation concealment, and blinding, and uncertainty about the use of intention-to-treat (ITT) analysis. Patients were recruited from several settings: six studies from primary or general medical practices, three studies from the community (e.g. via advertisements), and two from outpatient diabetes clinics. A usual care control group was reported in nine of the 11 studies; the other two reported some type of minimal diabetes care in addition to usual care for the control group.
Intervention Characteristics
All of the interventions examined in the studies were mapped to the 2007 Self-Management Mapping Guide. The interventions most often focused on problem solving, goal setting, and encouraging participants to engage in activities that protect and promote health (e.g. modifying behaviour, changing diet, and increasing physical activity). All of the studies examined comprehensive interventions targeting at least two self-care topics (e.g. diet, physical activity, blood glucose monitoring, foot care, etc.). Despite the homogeneity in the aims of the interventions, there was substantial clinical heterogeneity in other intervention characteristics such as duration, intensity, setting, mode of delivery (group vs. individual), interventionist, and outcomes of interest (discussed below).
Duration, Intensity and Mode of Delivery
Intervention durations ranged from 2 days to 1 year, with many falling into the range of 6 to 10 weeks. The rest of the interventions fell into categories of ≤ 2 weeks (2 studies), 6 months (2 studies), or 1 year (3 studies). Intensity of the interventions varied widely from 6 hours over 2 days, to 52 hours over 1 year; however, the majority consisted of interventions of 6 to 15 hours. Both individual and group sessions were used to deliver interventions. Group counselling was used in five studies as a mode of instruction, three studies used both individual and group sessions, and one study used individual sessions as its sole mode of instruction. Three studies also incorporated the use of telephone support as part of the intervention.
Interventionists and Setting
The following interventionists were reported (highest to lowest percentage; categories not mutually exclusive): nurse (36%), other (36%), dietician (18%), peer leader/community worker (18%), physician (9%), and pharmacist (9%). The ‘other’ category included interventionists such as consultants and facilitators with unspecified professional backgrounds. The setting of most interventions was community-based (seven studies), followed by primary care practices (three studies). One study described an intervention conducted in a pharmacy setting.
Outcomes
Duration of follow up of the studies ranged from 6 months to 8 years, with a median follow-up duration of 12 months. Nine studies followed up patients at a minimum of two time points. Despite clear reporting of outcomes at follow up time points, there was poor reporting on whether follow up was measured from participant entry into the study or from the end of the intervention. All studies reported measures of glycemic control, specifically HbA1c levels. BMI was measured in five studies, while body weight was reported in two studies. Cholesterol was examined in three studies and blood pressure reduction in two. Smoking status was only examined in one of the studies. Additional outcomes examined in the trials included patient satisfaction, quality of life, diabetes knowledge, diabetes medication reduction, and behaviour modification (i.e. daily consumption of fruits/vegetables, exercise, etc.). Meta-analysis of the studies identified a moderate but significant reduction in HbA1c levels (−0.44%; 95% CI −0.60, −0.29) for behavioural interventions in comparison to usual care for adults with type 2 diabetes. Subgroup analyses suggested the largest effects in interventions of at least 1 year in duration and in diabetics with higher baseline HbA1c (≥9.0). The quality of the evidence according to GRADE for the overall estimate was moderate, and the quality of evidence for the subgroup analyses was low.
[Table, not reproduced here: Summary of Meta-Analysis of Studies Investigating the Effectiveness of Behavioural Interventions on HbA1c in Patients with Type 2 Diabetes.]
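The pooled estimate above (−0.44%, 95% CI −0.60 to −0.29) is the kind of result produced by inverse-variance weighting of study-level mean differences. A minimal sketch of fixed-effect pooling, using invented effect sizes rather than the trials analysed in this report:

```python
# Minimal sketch: fixed-effect inverse-variance pooling of study-level mean
# differences in HbA1c (%). Effect sizes below are made up for illustration;
# they are not the trials analysed in this report.
import numpy as np

effects = np.array([-0.3, -0.6, -0.5, -0.4])  # mean HbA1c change vs usual care
ses     = np.array([0.15, 0.20, 0.10, 0.25])  # standard errors

weights = 1.0 / ses**2                         # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect {pooled:.2f}% (95% CI {lo:.2f}, {hi:.2f})")
```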
Conclusions
Based on moderate quality evidence, behavioural interventions as defined by the 2007 Self-Management Mapping Guide (Government of Victoria, Australia) produce a moderate reduction in HbA1c levels in patients with type 2 diabetes compared with usual care.
Based on low quality evidence, the interventions with the largest effects are those:
- in diabetics with higher baseline HbA1c (≥9.0)
- in which the interventions were of at least 1 year in duration
PMCID: PMC3377516  PMID: 23074526
3.  Effect of social deprivation on blood pressure monitoring and control in England: a survey of data from the quality and outcomes framework 
Objective To determine levels of blood pressure monitoring and control in primary care and to determine the effect of social deprivation on these levels.
Design Retrospective longitudinal survey, 2005 to 2007.
Setting General practices in England.
Participants Data obtained from 8515 practices (99.3% of all practices) in year 1, 8264 (98.3%) in year 2, and 8192 (97.8%) in year 3.
Main outcome measures Blood pressure indicators and chronic disease prevalence estimates contained within the UK quality and outcomes framework; social deprivation scores for each practice, ethnicity data obtained from the 2001 national census; general practice characteristics.
Results In 2005, 82.3% of adults (n=52.8 million) had an up to date blood pressure recording; by 2007, this proportion had risen to 88.3% (n=53.2 million). Initially, there was a 1.7% gap between mean blood pressure recording levels in practices located in the least deprived fifth of communities compared with the most deprived fifth, but, three years later, this gap had narrowed to 0.2%. Achievement of target blood pressure levels in 2005 for practices located in the least deprived communities ranged from 71.0% (95% CI 70.4% to 71.6%) for diabetes to 85.1% (84.7% to 85.6%) for coronary heart disease; practices in the most deprived communities achieved 68.9% (68.4% to 69.5%) and 81.8% (81.3% to 82.3%) respectively. Three years later, target achievement in the least deprived practices had risen to 78.6% (78.1% to 79.1%) and 89.4% (89.1% to 89.7%) respectively. Target achievement in the most deprived practices rose similarly, to 79.2% (78.8% to 79.6%) and 88.4% (88.2% to 88.7%) respectively. Similar changes were observed for the achievement of blood pressure targets in hypertension, cerebrovascular disease, and chronic kidney disease.
Conclusions Since the reporting of performance indicators for primary care and the incorporation of pay for performance in 2004, blood pressure monitoring and control have improved substantially. Improvements in achievement have been accompanied by the near disappearance of the achievement gap between least and most deprived areas.
doi:10.1136/bmj.a2030
PMCID: PMC2590907  PMID: 18957697
4.  Hospital Performance, the Local Economy, and the Local Workforce: Findings from a US National Longitudinal Study 
PLoS Medicine  2010;7(6):e1000297.
Blustein and colleagues examine the associations between changes in hospital performance and their local economic resources. Locationally disadvantaged hospitals perform poorly on key indicators, raising concerns that pay-for-performance models may not reduce inequality.
Background
Pay-for-performance is an increasingly popular approach to improving health care quality, and the US government will soon implement pay-for-performance in hospitals nationwide. Yet hospital capacity to perform (and improve performance) likely depends on local resources. In this study, we quantify the association between hospital performance and local economic and human resources, and describe possible implications of pay-for-performance for socioeconomic equity.
Methods and Findings
We applied county-level measures of local economic and workforce resources to a national sample of US hospitals (n = 2,705), during the period 2004–2007. We analyzed performance for two common cardiac conditions (acute myocardial infarction [AMI] and heart failure [HF]), using process-of-care measures from the Hospital Quality Alliance [HQA], and isolated temporal trends and the contributions of individual resource dimensions to performance, using multivariable mixed models. Performance scores were translated into net scores for hospitals using the Performance Assessment Model, which has been suggested as a basis for reimbursement under Medicare's “Value-Based Purchasing” program. Our analyses showed that hospital performance is substantially associated with local economic and workforce resources. For example, for HF in 2004, hospitals located in counties with longstanding poverty had mean HQA composite scores of 73.0, compared with a mean of 84.1 for hospitals in counties without longstanding poverty (p<0.001). Hospitals located in counties in the lowest quartile with respect to college graduates in the workforce had mean HQA composite scores of 76.7, compared with a mean of 86.2 for hospitals in the highest quartile (p<0.001). Performance on AMI measures showed similar patterns. Performance improved generally over the study period. Nevertheless, by 2007—4 years after public reporting began—hospitals in locationally disadvantaged areas still lagged behind their locationally advantaged counterparts. This lag translated into substantially lower net scores under the Performance Assessment Model for hospital reimbursement.
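The multivariable mixed models mentioned above account for repeated measurements on the same hospitals across years. A minimal sketch of one plausible specification using statsmodels, with placeholder variable names and synthetic data (not the HQA dataset or the authors' exact model):

```python
# Minimal sketch: mixed model of hospital composite scores on county-level
# resources, with a random intercept per hospital. All names and data are
# placeholders, not the study's HQA dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_hosp, n_years = 200, 4
df = pd.DataFrame({
    "hospital": np.repeat(np.arange(n_hosp), n_years),
    "year": np.tile(np.arange(2004, 2008), n_hosp),
    "longstanding_poverty": np.repeat(rng.integers(0, 2, n_hosp), n_years),
})
# Synthetic scores: improve over time, lower in poor counties, plus noise.
df["hqa_score"] = (80 + 2 * (df["year"] - 2004)
                   - 8 * df["longstanding_poverty"]
                   + rng.normal(0, 5, len(df)))

# Random intercept per hospital captures repeated measures over time.
m = smf.mixedlm("hqa_score ~ year + longstanding_poverty", df,
                groups=df["hospital"]).fit()
print(m.summary())
```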
Conclusions
Hospital performance on clinical process measures is associated with the quantity and quality of local economic and human resources. Medicare's hospital pay-for-performance program may exacerbate inequalities across regions, if implemented as currently proposed. Policymakers in the US and beyond may need to take into consideration the balance between greater efficiency through pay-for-performance and socioeconomic equity.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
These days, many people are rewarded for working hard and efficiently by being given bonuses when they reach preset performance targets. With a rapidly aging population and rising health care costs, policy makers in many developed countries are considering ways of maximizing value for money, including rewarding health care providers when they meet targets, under “pay-for-performance.” In the UK, for example, a major pay-for-performance initiative—the Quality and Outcomes Framework—began in 2004. All the country's general practices (primary health care facilities that deal with all medical ailments) now detail their achievements in terms of numerous clinical quality indicators for common chronic conditions (for example, the regularity of blood sugar checks for people with diabetes). They are then rewarded on the basis of these results.
Why Was This Study Done?
In the US, the government is poised to implement a nationwide pay-for-performance program in hospitals within Medicare, the government program that provides health insurance to Americans aged 65 years or older, as well as people with disabilities. However, some observers are concerned about the effect that the proposed pay-for-performance program might have on the distribution of health care resources in the US. Pay-for-performance assumes that health care providers have the economic and human resources that they need to perform or to improve their performance. But, if a hospital's capacity to perform depends on local resources, payment based on performance might worsen existing health care inequalities because hospitals in under-resourced areas might lose funds to hospitals in more affluent regions. In other words, the government might act as a reverse Robin Hood, taking from the poor and giving to the rich. In this study, the researchers examine the association between hospital performance and local economic and human resources, to explore whether this scenario is a plausible result of the pending change in US hospital reimbursement.
What Did the Researchers Do and Find?
US hospitals have voluntarily reported their performance on indicators of clinical care (“process-of-care measures”) for acute myocardial infarction (AMI, heart attack), heart failure (HF), and pneumonia under the Hospital Quality Alliance (HQA) program since 2004. The researchers identified 2,705 hospitals that had fully reported process-of-care measures for AMI and HF in both 2004 and 2007. They then used the “Performance Assessment Model” (a methodology developed by the US Centers for Medicare and Medicaid Services to score hospital performance) to calculate scores for each hospital. Finally, they looked for associations between these scores and measures of the hospital's local economic and human resources such as population poverty levels and the percentage of college graduates in the workforce. Hospital performance was associated with local economic and workforce capacity, they report. Thus, hospitals in counties with longstanding poverty had lower average performance scores for HF and AMI than hospitals in affluent counties. Similarly, hospitals in counties with a low percentage of college graduates in the workforce had lower average performance scores than hospitals in counties where more of the workforce had been to college. Finally, although performance improved generally over the study period, hospitals in disadvantaged areas still lagged behind hospitals in advantaged areas in 2007.
What Do These Findings Mean?
These findings indicate that hospital performance (as measured by the clinical process measures considered here) is associated with the quantity and quality of local human and economic resources. Thus, the proposed Medicare hospital pay-for-performance program may exacerbate existing US health care inequalities by leading to the transfer of funds from hospitals in disadvantaged locations to those in advantaged locations. Although further studies are needed to confirm this conclusion, these findings have important implications for pay-for-performance programs in health care. First, they suggest that US policy makers may need to modify how they measure performance improvement—the current Performance Assessment Model gives hospitals that start from a low baseline less credit for improvements than those that start from a high baseline, which works against hospitals in disadvantaged locations. Second, and more generally, they suggest that there may be a tension between the efficiency goals of pay-for-performance and the equity goals of health care systems. In a world where resources vary across regions, the expectation that regions can perform equally may not be realistic.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000297.
KaiserEDU.org is an online resource for learning about the US health care system. It includes educational modules on such topics as the Medicare program and efforts to improve the quality of care
The Hospital Quality Alliance provides information on the quality of care in US hospitals
Information about the UK National Health Service Quality and Outcomes Framework pay-for-performance initiative for general practice surgeries is available
doi:10.1371/journal.pmed.1000297
PMCID: PMC2893955  PMID: 20613863
5.  Ethnic Disparities in Coronary Heart Disease Management and Pay for Performance in the UK 
Background
Few pay for performance schemes have been subject to rigorous evaluation, and their impact on disparities in chronic disease management is uncertain.
Objective
To examine disparities in coronary heart disease management and intermediate clinical outcomes within a multiethnic population before and after the introduction of a major pay for performance initiative in April 2004.
Design
Comparison of two cross-sectional surveys using electronic general practice records.
Setting
Thirty-two family practices in south London, United Kingdom (UK).
Patients
Two thousand eight hundred and ninety-one individuals with coronary heart disease registered with participating practices in 2003 and 3,101 in 2005.
Measurements
Percentage achievement by ethnic group of quality indicators in the management of coronary heart disease
Results
The proportion of patients reaching national treatment targets increased significantly for blood pressure (51.2% to 58.9%) and total cholesterol (65.7% to 73.8%) after the implementation of a major pay for performance initiative in April 2004. Improvements in blood pressure control were greater in the black group than in the white group, with disparities evident at baseline being attenuated (black 54.8% vs. white 58.3% reaching target in 2005). Lower recording of blood pressure in the south Asian group evident in 2003 was attenuated in 2005. Statin prescribing remained significantly lower (p < 0.001) in the black group compared with the south Asian and white groups after the implementation of pay for performance (black 74.8%, south Asian 83.8%, white 80.2% in 2005).
Conclusions
The introduction of pay for performance incentives in UK primary care has been associated with better and more equitable management of coronary heart disease across ethnic groups.
doi:10.1007/s11606-008-0832-5
PMCID: PMC2607505  PMID: 18953616
pay for performance; coronary heart disease; primary care; ethnicity
6.  Clinical Utility of Vitamin D Testing 
Executive Summary
This report from the Medical Advisory Secretariat (MAS) was intended to evaluate the clinical utility of vitamin D testing in average risk Canadians and in those with kidney disease. As a separate analysis, this report also includes a systematic literature review of the prevalence of vitamin D deficiency in these two subgroups.
This evaluation did not set out to determine the serum vitamin D thresholds that might apply to non-bone health outcomes. For bone health outcomes, no high or moderate quality evidence could be found to support a target serum level above 50 nmol/L. Similarly, no high or moderate quality evidence could be found to support vitamin D’s effects in non-bone health outcomes, other than falls.
Vitamin D
Vitamin D is a lipid soluble vitamin that acts as a hormone. It stimulates intestinal calcium absorption and is important in maintaining adequate phosphate levels for bone mineralization, bone growth, and remodelling. It is also believed to be involved in the regulation of cell growth, proliferation, and apoptosis (programmed cell death), as well as modulation of the immune system and other functions. Alone or in combination with calcium, vitamin D has also been shown to reduce the risk of fractures in elderly men (≥ 65 years) and postmenopausal women, and the risk of falls in community-dwelling seniors. However, in a comprehensive systematic review, inconsistent results were found concerning the effects of vitamin D in conditions such as cancer, all-cause mortality, and cardiovascular disease. In fact, no high or moderate quality evidence could be found concerning the effects of vitamin D in such non-bone health outcomes. Given the uncertainties surrounding the effects of vitamin D in non-bone health related outcomes, it was decided that this evaluation should focus on falls and the effects of vitamin D on bone health, exclusively within average-risk individuals and patients with kidney disease.
Synthesis of vitamin D occurs naturally in the skin through exposure to ultraviolet B (UVB) radiation from sunlight, but it can also be obtained from dietary sources including fortified foods, and supplements. Foods rich in vitamin D include fatty fish, egg yolks, fish liver oil, and some types of mushrooms. Since it is usually difficult to obtain sufficient vitamin D from non-fortified foods, either due to low content or infrequent use, most vitamin D is obtained from fortified foods, exposure to sunlight, and supplements.
Clinical Need: Condition and Target Population
Vitamin D deficiency may lead to rickets in infants and osteomalacia in adults. Factors believed to be associated with vitamin D deficiency include:
darker skin pigmentation,
winter season,
living at higher latitudes,
skin coverage,
kidney disease,
malabsorption syndromes such as Crohn’s disease, cystic fibrosis, and
genetic factors.
Patients with chronic kidney disease (CKD) are at a higher risk of vitamin D deficiency due to either renal losses or decreased synthesis of 1,25-dihydroxyvitamin D.
Health Canada currently recommends that, until the daily recommended intakes (DRI) for vitamin D are updated, Canada’s Food Guide (Eating Well with Canada’s Food Guide) should be followed with respect to vitamin D intake. Issued in 2007, the Guide recommends that Canadians consume two cups (500 ml) of fortified milk or fortified soy beverages daily in order to obtain a daily intake of 200 IU. In addition, men and women over the age of 50 should take 400 IU of vitamin D supplements daily. Additional recommendations were made for breastfed infants.
A Canadian survey evaluated the median vitamin D intake derived from diet alone (excluding supplements) among 35,000 Canadians, 10,900 of whom were from Ontario. Among Ontarian males ages 9 and up, the median daily dietary vitamin D intake ranged between 196 IU and 272 IU per day. Among females, it varied from 152 IU to 196 IU per day. In boys and girls ages 1 to 3, the median daily dietary vitamin D intake was 248 IU, while among those 4 to 8 years it was 224 IU.
Vitamin D Testing
Two laboratory tests for vitamin D are available: 25-hydroxyvitamin D, referred to as 25(OH)D, and 1,25-dihydroxyvitamin D. Vitamin D status is assessed by measuring serum 25(OH)D levels, which can be assayed using radioimmunoassays, competitive protein-binding assays (CPBA), high pressure liquid chromatography (HPLC), and liquid chromatography-tandem mass spectrometry (LC-MS/MS). These may yield different results, with inter-assay variation reaching up to 25% (at lower serum levels) and intra-assay variation reaching 10%.
The optimal serum concentration of vitamin D has not been established and it may change across different stages of life. Similarly, there is currently no consensus on target serum vitamin D levels. There does, however, appear to be a consensus on the definition of vitamin D deficiency at 25(OH)D < 25 nmol/L, which is based on the risk of diseases such as rickets and osteomalacia. Higher target serum levels have also been proposed based on subclinical endpoints such as parathyroid hormone (PTH). Therefore, in this report, two conservative target serum levels have been adopted: 25 nmol/L (based on the risk of rickets and osteomalacia), and 40 to 50 nmol/L (based on vitamin D’s interaction with PTH).
Ontario Context
Volume & Cost
The volume of vitamin D tests performed in Ontario has been increasing over the past 5 years, rising steeply from 169,000 tests in 2007 to more than 393,400 tests in 2008. The number of tests continues to rise, with the projected number for 2009 exceeding 731,000. According to the Ontario Schedule of Benefits, the billing cost of each test is $51.70 for 25(OH)D (L606, 100 LMS units, $0.517/unit) and $77.60 for 1,25-dihydroxyvitamin D (L605, 150 LMS units, $0.517/unit). Province-wide, the total annual cost of vitamin D testing has increased from approximately $1.7M in 2004 to over $21.0M in 2008. The projected annual cost for 2009 is approximately $38.8M.
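The per-test billing costs quoted above follow directly from the listed LMS unit counts and the $0.517 unit price; a quick check of that arithmetic:

```python
# Quick check of the per-test billing arithmetic quoted above
# (Ontario Schedule of Benefits figures as cited in this summary).
UNIT_PRICE = 0.517  # dollars per LMS unit

tests = {"L606 25(OH)D": 100, "L605 1,25-dihydroxyvitamin D": 150}
for code, units in tests.items():
    print(f"{code}: {units} units x ${UNIT_PRICE}/unit = ${units * UNIT_PRICE:.2f}")
# L606: $51.70 per test; L605: $77.55, which the summary rounds to $77.60.
```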
Evidence-Based Analysis
The objective of this report is to evaluate the clinical utility of vitamin D testing in the average risk population and in those with kidney disease. As a separate analysis, the report also sought to evaluate the prevalence of vitamin D deficiency in Canada. The specific research questions addressed were thus:
What is the clinical utility of vitamin D testing in the average risk population and in subjects with kidney disease?
What is the prevalence of vitamin D deficiency in the average risk population in Canada?
What is the prevalence of vitamin D deficiency in patients with kidney disease in Canada?
Clinical utility was defined as the ability to improve bone health outcomes with the focus on the average risk population (excluding those with osteoporosis) and patients with kidney disease.
Literature Search
A literature search was performed on July 17th, 2009 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Agency for Health Technology Assessment (INAHTA) for studies published from January 1, 1998 until July 17th, 2009. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search. Articles with unknown eligibility were reviewed with a second clinical epidemiologist, then a group of epidemiologists until consensus was established. The quality of evidence was assessed as high, moderate, low or very low according to GRADE methodology.
Observational studies that evaluated the prevalence of vitamin D deficiency in Canada in the population of interest were included based on the inclusion and exclusion criteria listed below. The baseline values were used in this report in the case of interventional studies that evaluated the effect of vitamin D intake on serum levels. Studies published in grey literature were included if no studies published in the peer-reviewed literature were identified for specific outcomes or subgroups.
Considering that vitamin D status may be affected by factors such as latitude, sun exposure, and food fortification, the search focused on prevalence studies conducted in Canada. In cases where no Canadian prevalence studies were identified, the decision was made to include studies from the United States, given the similar policies in vitamin D food fortification and recommended daily intake.
Inclusion Criteria
Studies published in English
Publications that reported the prevalence of vitamin D deficiency in Canada
Studies that included subjects from the general population or with kidney disease
Studies in children or adults
Studies published between January 1998 and July 17th 2009
Exclusion Criteria
Studies that included subjects defined according to a specific disease other than kidney disease
Letters, comments, and editorials
Studies that measured the serum vitamin D levels but did not report the percentage of subjects with serum levels below a given threshold
Outcomes of Interest
Prevalence of serum vitamin D less than 25 nmol/L
Prevalence of serum vitamin D less than 40 to 50 nmol/L
Serum 25-hydroxyvitamin D was the metabolite used to assess vitamin D status. Results from adult and pediatric studies were reported separately. Subgroup analyses according to factors that affect serum vitamin D levels (e.g., seasonal effects, skin pigmentation, and vitamin D intake) were reported if enough information was provided in the studies.
Quality of Evidence
The quality of the prevalence studies was based on the method of subject recruitment and sampling, possibility of selection bias, and generalizability to the source population. The overall quality of the trials was examined according to the GRADE Working Group criteria.
Summary of Findings
Fourteen prevalence studies examining Canadian adults and children met the eligibility criteria. With the exception of one longitudinal study, the studies had a cross-sectional design. Two studies were conducted among Canadian adults with renal disease but none studied Canadian children with renal disease (though three such US studies were included). No systematic reviews or health technology assessments that evaluated the prevalence of vitamin D deficiency in Canada were identified. Two studies were published in grey literature, consisting of a Canadian survey designed to measure serum vitamin D levels and a study in infants presented as an abstract at a conference. Also included were the results of vitamin D tests performed in community laboratories in Ontario between October 2008 and September 2009 (provided by the Ontario Association of Medical Laboratories).
Different threshold levels were used across the studies, so we report the percentage of subjects with serum levels below thresholds of 25 to 30 nmol/L and of 37.5 to 50 nmol/L. Some studies stratified the results according to factors affecting vitamin D status, and two used multivariate models to investigate the effects of these characteristics (including age, season, BMI, vitamin D intake, and skin pigmentation) on serum 25(OH)D levels. It is unclear, however, whether these studies were adequately powered for these subgroup analyses.
Study participants generally consisted of healthy, community-dwelling subjects, and most studies excluded individuals with conditions or medications that alter vitamin D or bone metabolism, such as kidney or liver disease. Although the studies were conducted in different parts of Canada, fewer were performed at northern latitudes, i.e. above 53°N (the latitude of Edmonton).
Adults
Serum vitamin D levels of < 25 to 30 nmol/L were observed in 0% to 25.5% of the subjects included in five studies; the weighted average was 3.8% (95% CI: 3.0, 4.6). The preliminary results of the Canadian survey showed that approximately 5% of the subjects had serum levels below 29.5 nmol/L. The results of over 600,000 vitamin D tests performed in Ontarian community laboratories between October 2008 and September 2009 showed that 2.6% of adults (> 18 years) had serum levels < 25 nmol/L.
The prevalence of serum vitamin D levels below 37.5 to 50 nmol/L varied widely among studies, ranging from 8% to 73.6%, with a weighted average of 22.5%. The preliminary results of the Canadian Health Measures Survey (CHMS) showed that between 10% and 25% of subjects had serum levels below 37 to 48 nmol/L. The results of the vitamin D tests performed in community laboratories showed that 10% to 25% of the individuals had serum levels between 39 and 50 nmol/L.
In an attempt to explain this inter-study variation, the study results were stratified according to factors affecting serum vitamin D levels, as summarized below. These results should be interpreted with caution as none were adjusted for other potential confounders. Adequately powered multivariate analyses would be necessary to determine the contribution of risk factors to lower serum 25(OH)D levels.
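The weighted averages reported throughout this summary pool study-level prevalences; a plausible reading is weighting by sample size. A minimal sketch under that assumption, with invented study figures rather than the surveys summarised above:

```python
# Minimal sketch: sample-size-weighted average prevalence across studies, with
# a normal-approximation 95% CI on the pooled proportion. Figures are
# illustrative, not the Canadian studies summarised in this report.
import numpy as np

n_subjects = np.array([300, 1200, 450, 800])     # per-study sample sizes
prevalence = np.array([0.02, 0.04, 0.05, 0.03])  # proportion below threshold

p = np.average(prevalence, weights=n_subjects)   # weighted average prevalence
n_total = n_subjects.sum()
se = np.sqrt(p * (1 - p) / n_total)              # treats pooled data as one sample
print(f"Weighted prevalence {100*p:.1f}% "
      f"(95% CI {100*(p - 1.96*se):.1f}, {100*(p + 1.96*se):.1f})")
```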
Seasonal variation
Three adult studies evaluating serum vitamin D levels in different seasons observed a trend towards a higher prevalence of serum levels < 37.5 to 50 nmol/L during the winter and spring months, specifically 21% to 39%, compared to 8% to 14% in the summer. The weighted average was 23.6% over the winter/spring months and 9.6% over summer. The difference between the seasons was not statistically significant in one study and not reported in the other two studies.
Skin Pigmentation
Four studies observed a trend toward a higher prevalence of serum vitamin D levels < 37.5 to 50 nmol/L in subjects with darker skin pigmentation compared to those with lighter skin pigmentation, with weighted averages of 46.8% among adults with darker skin colour and 15.9% among those with fairer skin.
Vitamin D intake and serum levels
Four adult studies evaluated serum vitamin D levels according to vitamin D intake and showed an overall trend toward a lower prevalence of serum levels < 37.5 to 50 nmol/L with higher levels of vitamin D intake. One study observed a dose-response relationship between serum vitamin D levels and intake from supplements, diet (milk), and sun exposure (results not adjusted for other variables). Subjects taking 50 to 400 IU or > 400 IU of vitamin D per day had a 6% and 3% prevalence of serum vitamin D levels < 40 nmol/L, respectively, versus 29% in subjects not on vitamin D supplementation. Similarly, among subjects drinking one or two glasses of milk per day, the prevalence of serum vitamin D levels < 40 nmol/L was 15%, versus 6% in those drinking more than two glasses of milk per day and 21% among those who do not drink milk. On the other hand, one study observed little variation in serum vitamin D levels during winter according to milk intake, with the proportion of subjects exhibiting vitamin D levels < 40 nmol/L being 21% among those drinking 0-2 glasses per day, 26% among those drinking > 2 glasses, and 20% among non-milk drinkers.
The overall quality of evidence for the studies conducted among adults was deemed to be low, although it was considered moderate for the subgroups of skin pigmentation and seasonal variation.
Newborn, Children and Adolescents
Five Canadian studies evaluated serum vitamin D levels in newborns, children, and adolescents. In four of these, between 0% and 36% of children across age groups exhibited deficiency, with a weighted average of 6.4%. The results of over 28,000 vitamin D tests performed in children 0 to 18 years old in Ontario laboratories (Oct. 2008 to Sept. 2009) showed that 4.4% had serum levels of < 25 nmol/L.
According to two studies, 32% of infants 24 to 30 months old and 35.3% of newborns had serum vitamin D levels of < 50 nmol/L. Two studies of children 2 to 16 years old reported that 24.5% and 34% had serum vitamin D levels below 37.5 to 40 nmol/L. In both studies, older children exhibited a higher prevalence than younger children, with weighted averages of 34.4% and 10.3%, respectively. The overall weighted average of the prevalence of serum vitamin D levels < 37.5 to 50 nmol/L among pediatric studies was 25.8%. The preliminary results of the Canadian survey showed that between 10% and 25% of subjects between 6 and 11 years (N=435) had serum levels below 50 nmol/L, while for those 12 to 19 years, 25% to 50% exhibited serum vitamin D levels below 50 nmol/L.
The effects of season, skin pigmentation, and vitamin D intake were not explored in Canadian pediatric studies. A Canadian surveillance study did, however, report 104 confirmed cases (2.9 cases per 100,000 children) of vitamin D-deficient rickets among Canadian children aged 1 to 18 between 2002 and 2004, 57 (55%) of which were from Ontario. The highest incidence occurred among children living in the North, i.e., the Yukon, Northwest Territories, and Nunavut. In 92 (89%) of the cases, skin pigmentation was categorized as intermediate to dark, 98 (94%) had been breastfed, and 25 (24%) were offspring of immigrants to Canada. There were no cases of rickets in children receiving ≥ 400 IU of vitamin D supplementation per day.
Overall, the quality of evidence of the studies of children was considered very low.
Kidney Disease
Adults
Two studies evaluated serum vitamin D levels in Canadian adults with kidney disease. The first included 128 patients with chronic kidney disease stages 3 to 5, 38% of whom had serum vitamin D levels of < 37.5 nmol/L (measured between April and July). This is higher than what was reported in Canadian studies of the general population during the summer months (i.e. between 8% and 14%). The second examined 419 subjects who had received a renal transplant (mean time since transplantation: 7.2 ± 6.4 years); the prevalence of serum vitamin D levels < 40 nmol/L was 27.3%. The authors concluded that the prevalence observed in the study population was similar to what is expected in the general population.
Children
No studies evaluating serum vitamin D levels in Canadian pediatric patients with kidney disease could be identified, although three US studies among children with chronic kidney disease stages 1 to 5 were included. The mean age varied between 10.7 and 12.5 years in two of the studies and was not reported in the third. Across all three studies, the prevalence of serum vitamin D levels below the range of 37.5 to 50 nmol/L varied between 21% and 39%, which is not considerably different from what was observed in studies of healthy Canadian children (24% to 35%).
Overall, the quality of evidence in adults and children with kidney disease was considered very low.
Clinical Utility of Vitamin D Testing
A high quality comprehensive systematic review published in August 2007 evaluated the association between serum vitamin D levels and different bone health outcomes in different age groups. A total of 72 studies were included. The authors observed that there was a trend towards improvement in some bone health outcomes with higher serum vitamin D levels. Nevertheless, precise thresholds for improved bone health outcomes could not be defined across age groups. Further, no new studies on the association were identified during an updated systematic review on vitamin D published in July 2009.
With regards to non-bone health outcomes, there is no high or even moderate quality evidence that supports the effectiveness of vitamin D in outcomes such as cancer, cardiovascular outcomes, and all-cause mortality. Even if there is any residual uncertainty, there is no evidence that testing vitamin D levels encourages adherence to Health Canada’s guidelines for vitamin D intake. A serum vitamin D threshold for preventing non-bone health related conditions cannot be established until a causal effect or correlation has been demonstrated between vitamin D levels and these conditions. This is an ongoing research issue around which there is currently too much uncertainty to base any conclusions that would support routine vitamin D testing.
For patients with chronic kidney disease (CKD), there is again no high or moderate quality evidence supporting improved outcomes through the use of calcitriol or vitamin D analogs. In the absence of such data, the authors of the guidelines for CKD patients consider it best practice to maintain serum calcium and phosphate at normal levels, while supplementation with active vitamin D should be considered if serum PTH levels are elevated. As previously stated, the authors of guidelines for CKD patients believe that there is not enough evidence to support routine vitamin D [25(OH)D] testing. According to what is stated in the guidelines, decisions regarding the commencement or discontinuation of treatment with calcitriol or vitamin D analogs should be based on serum PTH, calcium, and phosphate levels.
Limitations associated with the evidence on vitamin D testing include ambiguities in the definition of an ‘adequate threshold level’ and both inter- and intra-assay variability. The MAS considers that both the lack of consensus on target serum vitamin D levels and these assay limitations directly undermine the clinical utility of testing. The evidence supporting the clinical utility of vitamin D testing is thus considered to be of very low quality.
Daily vitamin D intake, either through diet or supplementation, should follow Health Canada’s recommendations for healthy individuals of different age groups. For those with medical conditions such as renal disease, liver disease, and malabsorption syndromes, and for those taking medications that may affect vitamin D absorption/metabolism, physician guidance should be followed with respect to both vitamin D testing and supplementation.
Conclusions
Studies indicate that vitamin D, alone or in combination with calcium, may decrease the risk of fractures and falls among older adults.
There is no high or moderate quality evidence to support the effectiveness of vitamin D in other outcomes such as cancer, cardiovascular outcomes, and all-cause mortality.
Studies suggest that the prevalence of vitamin D deficiency in Canadian adults and children is relatively low (approximately 5%), and between 10% and 25% have serum levels below 40 to 50 nmol/L (based on very low to low grade evidence).
Given the limitations associated with serum vitamin D measurement, ambiguities in the definition of a ‘target serum level’, and the availability of clear guidelines on vitamin D supplementation from Health Canada, vitamin D testing is not warranted for the average risk population.
Health Canada has issued recommendations regarding the adequate daily intake of vitamin D, but current studies suggest that the mean dietary intake is below these recommendations. Accordingly, Health Canada’s guidelines and recommendations should be promoted.
Based on a moderate level of evidence, individuals with darker skin pigmentation appear to have a higher risk of low serum vitamin D levels than those with lighter skin pigmentation and therefore may need to be specially targeted with respect to optimum vitamin D intake. The cause-and-effect nature of this association is currently unclear.
Individuals with medical conditions such as renal and liver disease, osteoporosis, and malabsorption syndromes, as well as those taking medications that may affect vitamin D absorption/metabolism, should follow their physician’s guidance concerning both vitamin D testing and supplementation.
PMCID: PMC3377517  PMID: 23074397
7.  Pay for performance and the quality of diabetes management in individuals with and without co-morbid medical conditions 
Summary
Objective
To examine the impact of the Quality and Outcomes Framework, a major pay-for-performance incentive introduced in the UK during 2004, on diabetes management in patients with and without co-morbidity.
Design
Cohort study comparing actual achievement of treatment targets in 2004 and 2005 with that predicted by the underlying (pre-intervention) trend in diabetes patients with and without co-morbid conditions.
Setting
A total of 422 general practices participating in the General Practice Research Database.
Main outcomes measures
Achievement of diabetes treatment targets for blood pressure (< 140/80 mm Hg), HbA1c (≤ 7.0%) and cholesterol (≤ 5 mmol/L).
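As an illustration only, the sketch below shows how achievement of these three targets might be checked for a single patient record; the function and field names are hypothetical and not drawn from the study.

# Minimal sketch of the target definitions above, for a single patient
# record; the function and field names are hypothetical, not from the study.

def meets_targets(systolic_bp, diastolic_bp, hba1c_pct, total_chol_mmol_l):
    """Return which national treatment targets a patient achieves."""
    return {
        "bp": systolic_bp < 140 and diastolic_bp < 80,  # BP < 140/80 mm Hg
        "hba1c": hba1c_pct <= 7.0,                      # HbA1c <= 7.0%
        "cholesterol": total_chol_mmol_l <= 5.0,        # total cholesterol <= 5 mmol/L
    }

print(meets_targets(138, 78, 7.4, 4.6))
# {'bp': True, 'hba1c': False, 'cholesterol': True}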
Results
The percentage of diabetes patients with co-morbidity reaching blood pressure and cholesterol targets exceeded that predicted by the underlying trend during the first two years of pay for performance (by 3.1% [95% CI 1.1–5.1] for BP and 4.1% [95% CI 2.2–6.0] for cholesterol among patients with ≥ 5 co-morbidities in 2005). Similar improvements were evident in patients without co-morbidity, except for cholesterol control in 2004 (−0.2% [95% CI −1.7–1.4]). The percentage of patients meeting the HbA1c target in the first two years of this program was significantly lower than predicted by the underlying trend in all patients, with the greatest shortfall in patients without co-morbidity (3.8% [95% CI 2.6–5.0] lower in 2005). Patients with co-morbidity remained significantly more likely to meet treatment targets for cholesterol and HbA1c than those without after the introduction of pay for performance.
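The "achievement exceeding the underlying trend" comparison used here can be illustrated with a simple linear extrapolation of pre-intervention years; the figures below are invented, and the study itself used patient-level models rather than this simplification.

import numpy as np

# Pre-intervention years and the percentage of patients meeting a target;
# values are invented for illustration.
years = np.array([2000, 2001, 2002, 2003])
achieved = np.array([52.0, 54.1, 56.3, 58.2])

# Fit the underlying linear trend and extrapolate past the 2004 start of
# pay for performance.
slope, intercept = np.polyfit(years, achieved, 1)
for year, actual in [(2004, 63.0), (2005, 66.5)]:
    predicted = slope * year + intercept
    print(year, f"predicted={predicted:.1f}%", f"excess={actual - predicted:+.1f}%")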
Conclusions
Diabetes patients with co-morbid conditions appear to have benefited more from this pay-for-performance program than those without co-morbidity.
doi:10.1258/jrsm.2009.090171
PMCID: PMC2738769  PMID: 19734534
8.  Association of practice size and pay-for-performance incentives with the quality of diabetes management in primary care 
Background:
Not enough is known about the association between practice size and clinical outcomes in primary care. We examined this association between 1997 and 2005, in addition to the impact of the Quality and Outcomes Framework, a pay-for-performance incentive scheme introduced in the United Kingdom in 2004, on diabetes management.
Methods:
We conducted a retrospective open-cohort study using data from the General Practice Research Database. We enrolled 422 general practices providing care for 154 945 patients with diabetes. Our primary outcome measures were the achievement of national treatment targets for blood pressure, glycated hemoglobin (HbA1c) levels and total cholesterol.
Results:
We saw improvements in the recording of process-of-care measures, in prescribing, and in the achievement of intermediate outcomes across all practice sizes during the study period. We saw improvement in reaching national targets after the introduction of the Quality and Outcomes Framework. These improvements significantly exceeded the underlying trends in all practice sizes for achieving targets for cholesterol level and blood pressure, but not for HbA1c level. In 1997 and 2005, there were no significant differences between the smallest and largest practices in achieving targets for blood pressure (1997 odds ratio [OR] 0.98, 95% confidence interval [CI] 0.82 to 1.16; 2005 OR 0.92, 95% CI 0.80 to 1.06), cholesterol level (1997 OR 0.94, 95% CI 0.76 to 1.16; 2005 OR 1.1, 95% CI 0.97 to 1.40) and glycated hemoglobin level (1997 OR 0.79, 95% CI 0.55 to 1.14; 2005 OR 1.05, 95% CI 0.93 to 1.19).
Interpretation:
We found no evidence that size of practice is associated with the quality of diabetes management in primary care. Pay-for-performance programs appear to benefit both large and small practices to a similar extent.
doi:10.1503/cmaj.101187
PMCID: PMC3168664  PMID: 21810950
9.  Risk of Cardiovascular Disease and Total Mortality in Adults with Type 1 Diabetes: Scottish Registry Linkage Study 
PLoS Medicine  2012;9(10):e1001321.
Helen Colhoun and colleagues report findings from a Scottish registry linkage study regarding contemporary risks for cardiovascular events and all-cause mortality among individuals diagnosed with type 1 diabetes.
Background
Randomized controlled trials have shown the importance of tight glucose control in type 1 diabetes (T1DM), but few recent studies have evaluated the risk of cardiovascular disease (CVD) and all-cause mortality among adults with T1DM. We evaluated these risks in adults with T1DM compared with the non-diabetic population in a nationwide study from Scotland and examined control of CVD risk factors in those with T1DM.
Methods and Findings
The Scottish Care Information-Diabetes Collaboration database was used to identify all people registered with T1DM and aged ≥20 years in 2005–2007 and to provide risk factor data. Major CVD events and deaths were obtained from the national hospital admissions database and death register. The age-adjusted incidence rate ratio (IRR) for CVD and mortality in T1DM (n = 21,789) versus the non-diabetic population (3.96 million) was estimated using Poisson regression. The age-adjusted IRR for first CVD event associated with T1DM versus the non-diabetic population was higher in women (3.0: 95% CI 2.4–3.8, p<0.001) than men (2.3: 2.0–2.7, p<0.001), while the IRR for all-cause mortality associated with T1DM was comparable at 2.6 (2.2–3.0, p<0.001) in men and 2.7 (2.2–3.4, p<0.001) in women. Between 2005 and 2007, among individuals with T1DM, 34 of 123 deaths among the 10,173 who were <40 years and 37 of 907 deaths among the 12,739 who were ≥40 years had an underlying cause of death of coma or diabetic ketoacidosis. Among individuals 60–69 years, approximately three extra deaths per 100 per year occurred among men with T1DM (28.51/1,000 person-years at risk) and two per 100 per year among women (17.99/1,000 person-years at risk). Overall, 28% of those with T1DM were current smokers, 13% achieved the target HbA1c of <7%, and 37% had very poor (≥9%) glycaemic control. Among those aged ≥40 years, 37% had blood pressure above even conservative targets (≥140/90 mmHg) and 39% were not on a statin. Although many of these risk factors were comparable to those previously reported in other developed countries, CVD and mortality rates may not be generalizable to other countries. Limitations included lack of information on the specific insulin therapy used.
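A minimal sketch of the age-adjusted IRR estimation named in these methods, using Poisson regression with a person-years offset; all counts, person-years, and age bands below are invented for illustration.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented event counts and person-years by age band and T1DM status.
df = pd.DataFrame({
    "events":  [25, 1000, 120, 8000],                   # CVD events
    "pyears":  [10_000, 1_000_000, 12_000, 2_000_000],  # person-years at risk
    "t1dm":    [1, 0, 1, 0],                            # 1 = type 1 diabetes
    "ageband": ["20-39", "20-39", "40-59", "40-59"],
})

# Poisson regression with a log person-years offset; exponentiating the
# t1dm coefficient gives the age-adjusted IRR.
model = smf.glm("events ~ t1dm + C(ageband)", data=df,
                family=sm.families.Poisson(),
                offset=np.log(df["pyears"])).fit()
print(np.exp(model.params["t1dm"]))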
Conclusions
Although the relative risks for CVD and total mortality associated with T1DM in this population have declined relative to earlier studies, T1DM continues to be associated with higher CVD and death rates than the non-diabetic population. Risk factor management should be improved to further reduce risk but better treatment approaches for achieving good glycaemic control are badly needed.
Please see later in the article for the Editors' Summary
Editors' Summary
Background. People with diabetes are more likely to have cardiovascular disease such as heart attacks and strokes. They also have a higher risk of dying prematurely from any cause. Controlling blood sugar (glucose), blood pressure, and cholesterol can help reduce these risks. Some people with type 1 diabetes can achieve tight blood glucose control through a strict regimen that includes a carefully calculated diet, frequent physical activity, regular blood glucose testing several times a day, and multiple daily doses of insulin. Other drugs can reduce blood pressure and cholesterol levels. Keeping one's weight in the normal range and not smoking are important ways in which all people, including those with type 1 diabetes, can reduce their risks of heart disease and premature death.
Why Was This Study Done? Researchers and doctors have known for almost two decades what patients with type 1 diabetes can do to minimize the complications of the disease and thereby reduce their risks of cardiovascular disease and early death. So for some time now, patients should have been treated and counseled accordingly. This study was done to evaluate the current risks of cardiovascular disease and premature death among people living with type 1 diabetes in a high-income country (Scotland).
What Did the Researchers Do and Find? From a national register of all people with type 1 diabetes in Scotland, the researchers selected those who were older than 20 years and alive at any time from January 2005 to May 2008. This included about 19,000 people who had been diagnosed with type 1 diabetes before 2005. Another 2,600 were diagnosed between 2005 and 2008. They also obtained data on heart attacks and strokes in these patients from hospital records and on deaths from the natural death register. To obtain a good picture of the current relative risks, they compared the patients with type 1 diabetes with the non-diabetic general Scottish population with regard to the risk of heart attacks/strokes and death from all causes. They also collected information on how well the people with diabetes controlled their blood glucose, on their weight, and whether they smoked.
They found that the current risks, compared with the general Scottish population, are considerably lower than those reported for people with type 1 diabetes in earlier decades. However, people with type 1 diabetes in Scotland still have a much higher risk (more than twice that of the general population) of heart attacks, strokes, or premature death. Moreover, the researchers found a high number of deaths in younger people with diabetes from coma, caused by either too much blood sugar (hyperglycemia) or too little (hypoglycemia). Severe hyperglycemia and hypoglycemia happen when blood glucose control is poor. When the scientists looked at test results for HbA1c levels (a test that is done once or twice a year to see how well patients controlled their blood sugar over the previous 3 months) for all patients, they found that the majority did not come close to controlling their blood glucose within the recommended range.
When the researchers compared body mass index (a measure of weight that takes height into account) and smoking between the people with type 1 diabetes and the general population, they found similar proportions of smokers and overweight or obese people.
What Do These Findings Mean? The results represent a snapshot of the recent situation regarding complications from type 1 diabetes in the Scottish population. They suggest that, within this population, strategies over the past two decades to reduce complications from type 1 diabetes that cause cardiovascular disease and death are working, in principle. However, there is much room for further improvement. This includes the urgent need to understand why so few people with type 1 diabetes achieve good control of their blood sugar, and what can be done to improve this situation. It is also important to put more effort into keeping people with diabetes from taking up smoking, or getting them to quit, as well as into preventing them from becoming overweight or promoting weight reduction, because this could further reduce the risks of cardiovascular disease and premature death.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001321
National Diabetes Information Clearinghouse, a service of the US National Institute of Diabetes and Digestive and Kidney Diseases, has information on heart disease and diabetes, on general complications of diabetes, and on the HbA1c test (on this site and some others called A1C test) that measures control of blood sugar over the past 3 months
Diabetes.co.uk provides general information on type 1 diabetes, its complications, and what people with the disease can do to reduce their risks
The Canadian Diabetes Association offers a cardiovascular risk self-assessment tool and other relevant information
The American Diabetes Association has information on the benefits and challenges of tight blood sugar control and how it is tested
The Juvenile Diabetes Research Foundation funds research to prevent, cure, and treat type 1 diabetes
Diabetes UK provides extensive information on diabetes for patients, carers, and clinicians
doi:10.1371/journal.pmed.1001321
PMCID: PMC3462745  PMID: 23055834
10.  Heart Disease and Stroke Statistics—2011 Update 
Circulation  2010;123(4):e18-e209.
Summary
Each year, the American Heart Association (AHA), in conjunction with the Centers for Disease Control and Prevention, the National Institutes of Health, and other government agencies, brings together the most up-to-date statistics on heart disease, stroke, other vascular diseases, and their risk factors and presents them in its Heart Disease and Stroke Statistical Update. The Statistical Update is a valuable resource for researchers, clinicians, healthcare policy makers, media professionals, the lay public, and many others who seek the best national data available on disease morbidity and mortality and the risks, quality of care, medical procedures and operations, and costs associated with the management of these diseases in a single document. Indeed, since 1999, the Statistical Update has been cited more than 8700 times in the literature (including citations of all annual versions). In 2009 alone, the various Statistical Updates were cited ≈1600 times (data from ISI Web of Science). In recent years, the Statistical Update has undergone some major changes with the addition of new chapters and major updates across multiple areas. For this year’s edition, the Statistics Committee, which produces the document for the AHA, updated all of the current chapters with the most recent nationally representative data and inclusion of relevant articles from the literature over the past year and added a new chapter detailing how family history and genetics play a role in cardiovascular disease (CVD) risk. Also, the 2011 Statistical Update is a major source for monitoring both cardiovascular health and disease in the population, with a focus on progress toward achievement of the AHA’s 2020 Impact Goals. Below are a few highlights from this year’s Update.
Death Rates From CVD Have Declined, Yet the Burden of Disease Remains High
The 2007 overall death rate from CVD (International Classification of Diseases 10, I00–I99) was 251.2 per 100 000. The rates were 294.0 per 100 000 for white males, 405.9 per 100 000 for black males, 205.7 per 100 000 for white females, and 286.1 per 100 000 for black females. From 1997 to 2007, the death rate from CVD declined 27.8%. Mortality data for 2007 show that CVD (I00–I99; Q20–Q28) accounted for 33.6% (813 804) of all 2 243 712 deaths in 2007, or 1 of every 2.9 deaths in the United States.
On the basis of 2007 mortality rate data, more than 2200 Americans die of CVD each day, an average of 1 death every 39 seconds. More than 150 000 Americans killed by CVD (I00–I99) in 2007 were <65 years of age. In 2007, nearly 33% of deaths due to CVD occurred before the age of 75 years, which is well before the average life expectancy of 77.9 years.
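The per-day and per-second figures follow from the annual total by simple arithmetic, which can be checked directly:

# Quick check of the rate arithmetic, using the 2007 CVD death total
# quoted above.
cvd_deaths_2007 = 813_804
per_day = cvd_deaths_2007 / 365        # ~2230 deaths/day, i.e. >2200
seconds_between = 86_400 / per_day     # ~38.8 s, i.e. ~1 death every 39 s
print(round(per_day), round(seconds_between, 1))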
Coronary heart disease caused ≈1 of every 6 deaths in the United States in 2007, accounting for 406 351 deaths. Each year, an estimated 785 000 Americans will have a new coronary attack, and ≈470 000 will have a recurrent attack. It is estimated that an additional 195 000 silent first myocardial infarctions occur each year. Approximately every 25 seconds, an American will have a coronary event, and approximately every minute, someone will die of one.
Each year, ≈795 000 people experience a new or recurrent stroke. Approximately 610 000 of these are first attacks, and 185 000 are recurrent attacks. Mortality data from 2007 indicate that stroke accounted for ≈1 of every 18 deaths in the United States. On average, every 40 seconds, someone in the United States has a stroke. From 1997 to 2007, the stroke death rate fell 44.8%, and the actual number of stroke deaths declined 14.7%.
In 2007, 1 in 9 death certificates (277 193 deaths) in the United States mentioned heart failure.
Prevalence and Control of Traditional Risk Factors Remains an Issue for Many Americans
Data from the National Health and Nutrition Examination Survey (NHANES) 2005–2008 indicate that 33.5% of US adults ≥20 years of age have hypertension (Table 7-1). This amounts to an estimated 76 400 000 US adults with hypertension. The prevalence of hypertension is nearly equal between men and women. African American adults have among the highest rates of hypertension in the world, at 44%. Among hypertensive adults, ≈80% are aware of their condition, 71% are using antihypertensive medication, and only 48% of those aware that they have hypertension have their condition controlled.
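Counts such as the 76 400 000 above come from applying an NHANES prevalence to the census resident population (see the note near the end of this Update); a sketch with an approximate adult population figure, used here for illustration only:

# Prevalence x census resident population; the adult population figure
# below is approximate and used for illustration only.
adults_20_plus = 228_000_000      # approx. US resident adults >= 20 y, 2008
hypertension_prev = 0.335         # NHANES 2005-2008 prevalence
print(round(adults_20_plus * hypertension_prev))   # ~76,400,000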
Despite 4 decades of progress, in 2008, among Americans ≥18 years of age, 23.1% of men and 18.3% of women continued to be cigarette smokers. In 2009, 19.5% of students in grades 9 through 12 reported current tobacco use. The percentage of the nonsmoking population with detectable serum cotinine (indicating exposure to secondhand smoke) was 46.4% in 1999 to 2004, with declines occurring, and was highest for those 4 to 11 years of age (60.5%) and those 12 to 19 years of age (55.4%).
An estimated 33 600 000 adults ≥20 years of age have total serum cholesterol levels ≥240 mg/dL, with a prevalence of 15.0% (Table 13-1).
In 2008, an estimated 18 300 000 Americans had diagnosed diabetes mellitus, representing 8.0% of the adult population. An additional 7 100 000 had undiagnosed diabetes mellitus, and 36.8% had prediabetes, with abnormal fasting glucose levels. African Americans, Mexican Americans, Hispanic/Latino individuals, and other ethnic minorities bear a strikingly disproportionate burden of diabetes mellitus in the United States (Table 16-1).
The 2011 Update Expands Data Coverage of the Obesity Epidemic and Its Antecedents and Consequences
An estimated 149 300 000 US adults (≥20 years of age) were overweight or obese in 2008, representing 67.3% of this group. Fully 33.7% of US adults are obese (body mass index ≥30 kg/m2). Men and women of all race/ethnic groups in the population are affected by the epidemic of overweight and obesity (Table 15-1).
Among children 2 to 19 years of age, 31.9% are overweight and obese (which represents 23 500 000 children), and 16.3% are obese (12 000 000 children). Mexican American boys and girls and African American girls are disproportionately affected. Over the past 3 decades, the prevalence of obesity in children 6 to 11 years of age has increased from ≈4% to more than 20%.
Obesity (body mass index ≥30 kg/m2) is associated with marked excess mortality in the US population. Even more notable is the excess morbidity associated with overweight and obesity in terms of risk factor development and incidence of diabetes mellitus, CVD end points (including coronary heart disease, stroke, and heart failure), and numerous other health conditions, including asthma, cancer, degenerative joint disease, and many others.
The prevalence of diabetes mellitus is increasing dramatically over time, in parallel with the increases in prevalence of overweight and obesity.
On the basis of NHANES 2003–2006 data, the age-adjusted prevalence of metabolic syndrome, a cluster of major cardiovascular risk factors related to overweight/obesity and insulin resistance, is 34% (35.1% among men and 32.6% among women).
The proportion of youth (≤18 years of age) who report engaging in no regular physical activity is high, and the proportion increases with age. In 2007, among adolescents in grades 9 through 12, 29.9% of girls and 17.0% of boys reported that they had not engaged in 60 minutes of moderate-to-vigorous physical activity, defined as any activity that increased heart rate or breathing rate, even once in the previous 7 days, despite recommendations that children engage in such activity ≥5 days per week.
Thirty-six percent of adults reported engaging in no vigorous activity (activity that causes heavy sweating and a large increase in breathing or heart rate).
Data from NHANES indicate that between 1971 and 2004, average total energy consumption among US adults increased by 22% in women (from 1542 to 1886 kcal/d) and by 10% in men (from 2450 to 2693 kcal/d; see Chart 19-1).
The increases in calories consumed during this time period are attributable primarily to greater average carbohydrate intake, in particular, of starches, refined grains, and sugars. Other specific changes related to increased caloric intake in the United States include larger portion sizes, greater food quantity and calories per meal, and increased consumption of sugar-sweetened beverages, snacks, commercially prepared (especially fast food) meals, and higher energy-density foods.
The 2011 Update Provides Critical Data Regarding Cardiovascular Quality of Care, Procedure Utilization, and Costs
In light of the current national focus on healthcare utilization, costs, and quality, it is critical to monitor and understand the magnitude of healthcare delivery and costs, as well as the quality of healthcare delivery, related to CVDs. The Update provides these critical data in several sections.
Quality-of-Care Metrics for CVDs
Chapter 20 reviews many metrics related to the quality of care delivered to patients with CVDs, as well as healthcare disparities. In particular, quality data are available from the AHA’s “Get With The Guidelines” programs for coronary artery disease and heart failure and the American Stroke Association/AHA’s “Get With The Guidelines” program for acute stroke. Similar data from the Veterans Healthcare Administration, national Medicare and Medicaid data, and National Cardiovascular Data Registry Acute Coronary Treatment and Intervention Outcomes Network “Get With The Guidelines” Registry data are also reviewed. These data show impressive adherence with guideline recommendations for many, but not all, metrics of quality of care for these hospitalized patients. Data are also reviewed on screening for cardiovascular risk factor levels and control.
Cardiovascular Procedure Utilization and Costs
Chapter 21 provides data on trends and current usage of cardiovascular surgical and invasive procedures. For example, the total number of inpatient cardiovascular operations and procedures increased 27%, from 5 382 000 in 1997 to 6 846 000 in 2007 (National Heart, Lung, and Blood Institute computation based on National Center for Health Statistics annual data).
Chapter 22 reviews current estimates of direct and indirect healthcare costs related to CVDs, stroke, and related conditions using Medical Expenditure Panel Survey data. The total direct and indirect cost of CVD and stroke in the United States for 2007 is estimated to be $286 billion. This figure includes health expenditures (direct costs, which include the cost of physicians and other professionals, hospital services, prescribed medications, home health care, and other medical durables) and lost productivity resulting from mortality (indirect costs). By comparison, in 2008, the estimated cost of all cancer and benign neoplasms was $228 billion ($93 billion in direct costs, $19 billion in morbidity indirect costs, and $116 billion in mortality indirect costs). CVD costs more than any other diagnostic group.
The AHA, through its Statistics Committee, continuously monitors and evaluates sources of data on heart disease and stroke in the United States to provide the most current data available in the Statistics Update. The 2007 mortality data have been released. More information can be found at the National Center for Health Statistics Web site, http://www.cdc.gov/nchs/data/nvsr/nvsr58/nvsr58_01.pdf.
Finally, it must be noted that this annual Statistical Update is the product of an entire year’s worth of effort by dedicated professionals, volunteer physicians and scientists, and outstanding AHA staff members, without whom publication of this valuable resource would be impossible. Their contributions are gratefully acknowledged. Véronique L. Roger, MD, MPH, FAHA; Melanie B. Turner, MPH; on behalf of the American Heart Association Heart Disease and Stroke Statistics Writing Group
Note: Population data used in the compilation of NHANES prevalence estimates are for the latest year of the NHANES survey used. Extrapolations for NHANES prevalence estimates are based on the census resident population for 2008, because this is the most recent year of NHANES data used in the Statistical Update.
doi:10.1161/CIR.0b013e3182009701
PMCID: PMC4418670  PMID: 21160056
AHA Statistical Update; cardiovascular diseases; epidemiology; risk factors; statistics; stroke
11.  Continuous Subcutaneous Insulin Infusion (CSII) Pumps for Type 1 and Type 2 Adult Diabetic Populations 
Executive Summary
In June 2008, the Medical Advisory Secretariat began work on the Diabetes Strategy Evidence Project, an evidence-based review of the literature surrounding strategies for successful management and treatment of diabetes. This project came about when the Health System Strategy Division at the Ministry of Health and Long-Term Care asked the secretariat to provide an evidentiary platform for the Ministry’s newly released Diabetes Strategy.
After an initial review of the strategy and consultation with experts, the secretariat identified five key areas in which evidence was needed. Evidence-based analyses have been prepared for each of these five areas: insulin pumps, behavioural interventions, bariatric surgery, home telemonitoring, and community based care. For each area, an economic analysis was completed where appropriate and is described in a separate report.
To review these titles within the Diabetes Strategy Evidence series, please visit the Medical Advisory Secretariat Web site, http://www.health.gov.on.ca/english/providers/program/mas/mas_about.html.
Diabetes Strategy Evidence Platform: Summary of Evidence-Based Analyses
Continuous Subcutaneous Insulin Infusion Pumps for Type 1 and Type 2 Adult Diabetics: An Evidence-Based Analysis
Behavioural Interventions for Type 2 Diabetes: An Evidence-Based Analysis
Bariatric Surgery for People with Diabetes and Morbid Obesity: An Evidence-Based Summary
Community-Based Care for the Management of Type 2 Diabetes: An Evidence-Based Analysis
Home Telemonitoring for Type 2 Diabetes: An Evidence-Based Analysis
Application of the Ontario Diabetes Economic Model (ODEM) to Determine the Cost-effectiveness and Budget Impact of Selected Type 2 Diabetes Interventions in Ontario
Objective
The objective of this analysis is to review the efficacy of continuous subcutaneous insulin infusion (CSII) pumps as compared to multiple daily injections (MDI) for type 1 and type 2 adult diabetic populations.
Clinical Need and Target Population
Insulin therapy is an integral component of the treatment of many individuals with diabetes. Type 1, or juvenile-onset diabetes, is a life-long disorder that commonly manifests in children and adolescents, but onset can occur at any age. It represents about 10% of the total diabetes population and involves immune-mediated destruction of insulin producing cells in the pancreas. The loss of these cells results in a decrease in insulin production, which in turn necessitates exogenous insulin therapy.
Type 2, or ‘maturity-onset’ diabetes represents about 90% of the total diabetes population and is marked by a resistance to insulin or insufficient insulin secretion. The risk of developing type 2 diabetes increases with age, obesity, and lack of physical activity. The condition tends to develop gradually and may remain undiagnosed for many years. Approximately 30% of patients with type 2 diabetes eventually require insulin therapy.
CSII Pumps
In conventional therapy programs for diabetes, insulin is injected once or twice a day in some combination of short- and long-acting insulin preparations. Some patients require intensive therapy regimens known as multiple daily injection (MDI) programs, in which insulin is injected three or more times a day. This is a time-consuming process that usually requires an injection of slow-acting basal insulin in the morning or evening and frequent doses of short-acting insulin prior to eating. The most common form of slower-acting insulin used is neutral protamine Hagedorn (NPH), which reaches peak activity 3 to 5 hours after injection. There are some concerns surrounding the use of NPH at night-time: if injected immediately before bed, nocturnal hypoglycemia may occur. To combat nocturnal hypoglycemia and other issues related to absorption, alternative insulins have been developed, such as the slow-acting insulin glargine. Glargine has no peak action time and instead acts consistently over a twenty-four hour period, helping to reduce the frequency of hypoglycemic episodes.
Alternatively, intensive therapy regimens can be administered by continuous subcutaneous insulin infusion (CSII) pumps. These devices attempt to closely mimic the behaviour of the pancreas, continuously providing a basal level of insulin to the body with additional boluses at meal times. Modern CSII pumps comprise a small battery-driven pump designed to administer insulin subcutaneously through the abdominal wall via a butterfly needle. The insulin dose is adjusted in response to measured capillary glucose values in a fashion similar to MDI, and CSII is thus often seen as a preferred alternative to multiple injection therapy. There are, however, still risks associated with the use of CSII pumps, and despite their increased use, there is uncertainty around their effectiveness as compared to MDI for improving glycemic control.
Part A: Type 1 Diabetic Adults (≥19 years)
An evidence-based analysis on the efficacy of CSII pumps compared to MDI was carried out on both type 1 and type 2 adult diabetic populations.
Research Questions
Are CSII pumps more effective than MDI for improving glycemic control in adults (≥19 years) with type 1 diabetes?
Are CSII pumps more effective than MDI for improving additional outcomes related to diabetes such as quality of life (QoL)?
Literature Search
Inclusion Criteria
Randomized controlled trials, systematic reviews, meta-analysis and/or health technology assessments from MEDLINE, EMBASE, CINAHL
Adults (≥ 19 years)
Type 1 diabetes
Study evaluates CSII vs. MDI
Published between January 1, 2002 and March 24, 2009
Patient currently on intensive insulin therapy
Exclusion Criteria
Studies with <20 patients
Studies <5 weeks in duration
CSII applied only at night time and not 24 hours/day
Mixed group of diabetes patients (children, adults, type 1, type 2)
Pregnancy studies
Outcomes of Interest
The primary outcomes of interest were glycosylated hemoglobin (HbA1c) levels, mean daily blood glucose, glucose variability, and frequency of hypoglycaemic events. Other outcomes of interest were insulin requirements, adverse events, and quality of life.
Search Strategy
The literature search strategy employed keywords and subject headings to capture the concepts of:
1) insulin pumps, and
2) type 1 diabetes.
The search was run on July 6, 2008 in the following databases: Ovid MEDLINE (1996 to June Week 4 2008), OVID MEDLINE In-Process and Other Non-Indexed Citations, EMBASE (1980 to 2008 Week 26), OVID CINAHL (1982 to June Week 4 2008), the Cochrane Library, and the Centre for Reviews and Dissemination/International Agency for Health Technology Assessment. A search update was run on March 24, 2009, and studies published prior to 2002 were also examined for inclusion in the review. Parallel search strategies were developed for the remaining databases. Search results were limited to human studies published in English between January 2002 and March 24, 2009. Abstracts were reviewed, and studies meeting the inclusion criteria outlined above were obtained. Reference lists were also checked for relevant studies.
Summary of Findings
The database search identified 519 relevant citations published between 1996 and March 24, 2009. Of the 519 abstracts reviewed, four RCTs and one abstract met the inclusion criteria outlined above. While efficacy outcomes were reported in each of the trials, a meta-analysis was not possible due to missing data on the standard deviations of change values, as well as missing data for the first period of the crossover trials. Meta-analysis was not possible for other outcomes (quality of life, insulin requirements, frequency of hypoglycemia) due to differences in reporting.
HbA1c
In studies where no baseline data were reported, the final values were used. Two studies (Hanaire-Broutin et al. 2000, Hoogma et al. 2005) reported a slight reduction in HbA1c of 0.35% and 0.22%, respectively, for CSII pumps in comparison to MDI. A slightly larger reduction in HbA1c of 0.84% was reported by DeVries et al.; however, this was the only study to include patients with poor glycemic control, marked by higher baseline HbA1c levels. One study (Bruttomesso et al. 2008) showed no difference between CSII pumps and MDI on HbA1c levels and was the only study using insulin glargine (consistent with the results of a parallel RCT reported in abstract by Bolli 2004). While there was a statistically significant reduction in HbA1c in three of the four trials, there is no evidence to suggest these results are clinically significant.
Mean Blood Glucose
Three of four studies reported a statistically significant reduction in mean daily blood glucose for patients using CSII pumps, though these results were not clinically significant. One study (DeVries et al. 2002) did not report study data on mean blood glucose but noted that the differences were not statistically significant. Interpreting the findings is difficult because blood glucose was measured differently across studies: three of the four studies used a glucose diary, while one used a memory meter, and the frequency of self-monitoring of blood glucose (SMBG) varied from four to nine times per day. Measurements used to determine differences in mean daily blood glucose between the CSII pump and MDI groups at clinic visits were also collected at varying time points: two studies used measurements from the last day prior to the final visit (Hoogma et al. 2005, DeVries et al. 2002), while one study used measurements taken during the last 30 days and another used measurements taken during the 14 days prior to the final visit of each treatment period.
Glucose Variability
All four studies showed a statistically significant reduction in glucose variability for patients using CSII pumps compared to those using MDI, though one (Bruttomesso et al. 2008) only showed a significant reduction at the morning time point. Bruttomesso et al. also used alternative measures of glucose variability and found that both the lability index and the mean amplitude of glycemic excursions (MAGE) were in concordance with the findings based on the standard deviation (SD) of mean blood glucose, but the average daily risk range (ADRR) showed no difference between the CSII pump and MDI groups.
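For illustration, the sketch below computes the SD of a set of self-monitored readings and a crude MAGE-style summary (mean of swings exceeding 1 SD); the readings are invented, and published MAGE algorithms identify true peaks and nadirs rather than adjacent-reading swings.

import numpy as np

# Invented self-monitored glucose readings (mmol/L).
readings = np.array([5.2, 9.8, 6.1, 11.4, 4.9, 8.7, 5.5])

sd = readings.std(ddof=1)          # SD of mean blood glucose

# Crude MAGE-style summary: mean of adjacent-reading swings exceeding 1 SD.
# Published MAGE algorithms identify true peaks and nadirs instead.
swings = np.abs(np.diff(readings))
excursions = swings[swings > sd]
mage = excursions.mean() if excursions.size else 0.0

print(f"SD={sd:.2f} mmol/L, MAGE~{mage:.2f} mmol/L")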
Hypoglycemic Events
There is conflicting evidence concerning the efficacy of CSII pumps in decreasing both mild and severe hypoglycemic events. For mild hypoglycemic events, DeVries et al. observed a higher number of events per patient week in the CSII pump group than the MDI group, while Hoogma et al. observed a higher number of events per patient year in the MDI group. The remaining two studies found no differences between the two groups in the frequency of mild hypoglycemic events. For severe hypoglycemic events, Hoogma et al. found an increase in events per patient year among MDI patients; however, all of the other RCTs showed no difference between the patient groups in this respect.
Insulin Requirements and Adverse Events
In all four studies, insulin requirements were statistically significantly lower in patients receiving CSII pump treatment than in those on MDI. Adverse events were reported in three studies. DeVries et al. found no difference in ketoacidotic episodes between CSII pump and MDI users. Bruttomesso et al. reported no adverse events during the study. Hanaire-Broutin et al. found that, out of a total of 256 patients, 30 patients experienced 58 serious adverse events (SAEs) during MDI and 23 patients had 33 SAEs during CSII treatment. Most events were related to severe hypoglycemia and diabetic ketoacidosis.
Quality of Life and Patient Preference
QoL was measured in three studies and patient preference in one. All three studies found an improvement in QoL for CSII users compared to those using MDI, although various instruments were used across the studies and possible reporting bias was evident, as non-positive outcomes were not consistently reported. Moreover, there were conflicting results in the two studies using the Diabetes Treatment Satisfaction Questionnaire (DTSQ): DeVries et al. reported no difference in treatment satisfaction between CSII pump users and MDI users, while Bruttomesso et al. reported that treatment satisfaction improved among CSII pump users.
Patient preference for CSII pumps was demonstrated in just one study (Hanaire-Broutin et al. 2000), and there are considerable limitations with interpreting these data, as they were gathered through interview and 72% of patients who preferred CSII pumps were on CSII pump therapy prior to the study. As all studies were industry sponsored, findings on QoL and patient preference must be interpreted with caution.
Quality of Evidence
Overall, the body of evidence was downgraded from high to low due to study quality and issues with directness, as identified using the GRADE quality assessment tool (see Table 1). While blinding of patients to intervention/control was not feasible in these studies, blinding of study personnel during outcome assessment and allocation concealment were generally lacking. Trials reported consistent results for the outcomes HbA1c, mean blood glucose, and glucose variability, but the directness or generalizability of the studies, particularly with respect to the diabetic population, was questionable, as most trials used highly motivated populations with fairly good glycemic control. In addition, the populations in each of the studies varied with respect to prior treatment regimens, which may not be generalizable to the population eligible for pumps in Ontario. For the outcome of hypoglycaemic events the evidence was further downgraded to very low, since there was conflicting evidence between studies with respect to the frequency of mild and severe hypoglycaemic events in patients using CSII pumps as compared to MDI (see Table 2). The GRADE quality of evidence for the use of CSII in adults with type 1 diabetes is therefore low to very low, and any estimate of effect is uncertain.
GRADE Quality Assessment for CSII pumps vs. MDI on HbA1c, Mean Blood Glucose, and Glucose Variability for Adults with Type 1 Diabetes
Inadequate or unknown allocation concealment (3/4 studies); unblinded assessment (all studies), although blinding was not feasible given the nature of the study; no ITT analysis (2/4 studies); possible bias in SMBG (all studies)
HbA1c: 3/4 studies show consistency, although the magnitude of effect varies greatly; a single study used insulin glargine instead of NPH. Mean blood glucose: 3/4 studies show consistency, although the magnitude of effect varies between studies. Glucose variability: all studies show consistency, but 1 study only showed a significant effect in the morning
Generalizability in question due to varying populations: highly motivated populations, educational component of interventions/run-in phases, insulin pen use in 2/4 studies, and varying levels of baseline glycemic control and experience with intensified insulin therapy, pumps, and MDI.
GRADE Quality Assessment for CSII pumps vs. MDI on Frequency of Hypoglycemic Events for Adults with Type 1 Diabetes
Inadequate or unknown allocation concealment (3/4 studies); unblinded assessment (all studies), although blinding was not feasible given the nature of the study; no ITT analysis (2/4 studies); possible bias in SMBG (all studies)
Conflicting evidence with respect to mild and severe hypoglycemic events reported in studies
Generalizability in question due to varying populations: highly motivated populations, educational component of interventions/run-in phases, insulin pen use in 2/4 studies, and varying levels of baseline glycemic control and experience with intensified insulin therapy, pumps, and MDI.
Economic Analysis
One article from the economic literature scan was included in the analysis. Four other economic evaluations were identified but did not meet our inclusion criteria: two did not compare CSII with MDI, and the other two used summary estimates from a mixed population with type 1 and type 2 diabetes in their economic microsimulations to estimate costs and effects over time. Included were English-language articles comparing CSII with MDI, with the outcome of quality-adjusted life years (QALYs), in an adult population with type 1 diabetes.
From one study, a subset of the population with type 1 diabetes was identified that may be suitable and benefit from using insulin pumps. There is, however, limited data in the literature addressing the cost-effectiveness of insulin pumps versus MDI in type 1 diabetes. Longer term models are required to estimate the long term costs and effects of pumps compared to MDI in this population.
Conclusions
CSII pumps for the treatment of adults with type 1 diabetes
Based on low-quality evidence, CSII pumps confer a statistically significant but not clinically significant reduction in HbA1c and mean daily blood glucose as compared to MDI in adults with type 1 diabetes (≥19 years).
CSII pumps also confer a statistically significant reduction in glucose variability as compared to MDI in adults with type 1 diabetes (≥19 years); however, the clinical significance is unknown.
There is indirect evidence that the use of newer long-acting insulins (e.g., insulin glargine) in MDI regimens results in less of a difference between MDI and CSII than is seen when older insulins are used.
There is conflicting evidence regarding both mild and severe hypoglycemic events in this population when using CSII pumps as compared to MDI. These findings are based on very low-quality evidence.
Quality of life is improved for patients using CSII pumps as compared to MDI; however, limitations exist with this evidence.
Significant limitations of the literature exist specifically:
All studies sponsored by insulin pump manufacturers
All studies used crossover design
Prior treatment regimens varied
Types of insulins used in study varied (NPH vs. glargine)
Generalizability of studies in question as populations were highly motivated and half of studies used insulin pens as the mode of delivery for MDI
One short-term study concluded that pumps are cost-effective, although this was based on limited data and longer term models are required to estimate the long-term costs and effects of pumps compared to MDI in adults with type 1 diabetes.
Part B: Type 2 Diabetic Adults
Research Questions
Are CSII pumps more effective than MDI for improving glycemic control in adults (≥19 years) with type 2 diabetes?
Are CSII pumps more effective than MDI for improving other outcomes related to diabetes such as quality of life?
Literature Search
Inclusion Criteria
Randomized controlled trials, systematic reviews, meta-analysis and/or health technology assessments from MEDLINE, Excerpta Medica Database (EMBASE), Cumulative Index to Nursing & Allied Health Literature (CINAHL)
Any person with type 2 diabetes requiring intensive insulin treatment
Published between January 1, 2000 and August 2008
Exclusion Criteria
Studies with <10 patients
Studies <5 weeks in duration
CSII applied only at night time and not 24 hours/day
Mixed group of diabetes patients (children, adults, type 1, type 2)
Pregnancy studies
Outcomes of Interest
The primary outcome of interest was a reduction in glycosylated hemoglobin (HbA1c) levels. Other outcomes of interest were mean blood glucose level, glucose variability, insulin requirements, frequency of hypoglycemic events, adverse events, and quality of life.
Search Strategy
A comprehensive literature search was performed in OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, CINAHL, The Cochrane Library, and the International Agency for Health Technology Assessment (INAHTA) for studies published between January 1, 2000 and August 15, 2008. Studies meeting the inclusion criteria were selected from the search results. Data on the study characteristics, patient characteristics, primary and secondary treatment outcomes, and adverse events were abstracted. Reference lists of selected articles were also checked for relevant studies. The quality of the evidence was assessed as high, moderate, low, or very low according to the GRADE methodology.
Summary of Findings
The database search identified 286 relevant citations published between 1996 and August 2008. Of the 286 abstracts reviewed, four RCTs met the inclusion criteria outlined above. Upon examination, two studies were subsequently excluded from the meta-analysis: one due to small sample size and missing data (Berthe et al.) and one due to outlier status and a high dropout rate (Wainstein et al.), consistent with previously reported meta-analyses on this topic (Jeitler et al. 2008; Fatourechi et al. 2009).
HbA1c
The primary outcome in this analysis was reduction in HbA1c. Both studies demonstrated that both CSII pumps and MDI reduce HbA1c, but neither treatment modality was found to be superior to the other. A random-effects meta-analysis showed a mean difference in HbA1c of −0.14% (95% CI −0.40 to 0.13) between the two groups, which was neither statistically nor clinically significant. There was no statistical heterogeneity between the two studies (I² = 0%).
Forest plot of two parallel RCTs comparing CSII to MDI in type 2 diabetes
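The pooled estimate reported above can be reproduced in shape with a DerSimonian-Laird random-effects calculation; the two per-study mean differences and standard errors below are invented to roughly match the published summary, and with I² = 0% the random- and fixed-effect results coincide.

import numpy as np

# Invented per-study mean differences in HbA1c (%) and standard errors,
# chosen to roughly match the pooled summary above.
md = np.array([-0.19, -0.10])
se = np.array([0.20, 0.18])

w = 1 / se**2                                      # inverse-variance weights
q = np.sum(w * (md - np.sum(w * md) / w.sum())**2)
k = len(md) - 1
tau2 = max(0.0, (q - k) / (w.sum() - np.sum(w**2) / w.sum()))  # DL estimator
i2 = max(0.0, (q - k) / q) * 100 if q > 0 else 0.0

w_re = 1 / (se**2 + tau2)                          # random-effects weights
pooled = np.sum(w_re * md) / w_re.sum()
se_pooled = np.sqrt(1 / w_re.sum())
print(f"MD={pooled:.2f} (95% CI {pooled - 1.96*se_pooled:.2f} to "
      f"{pooled + 1.96*se_pooled:.2f}), I2={i2:.0f}%")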
Secondary Outcomes
Mean Blood Glucose and Glucose Variability
Mean blood glucose was used as an efficacy outcome in only one study (Raskin et al. 2003). The authors found that the only time point at which blood glucose values were consistently lower in the CSII group than in the MDI group was 90 minutes after breakfast. Glucose variability was not examined in either study, and the authors reported no difference in weight gain between the CSII pump and MDI groups at the end of the study. Conflicting results were reported regarding injection site reactions: Herman et al. reported no difference between the two groups in the number of subjects experiencing site problems, while Raskin et al. reported no injection site reactions in the MDI group but 15 such episodes among 8 participants in the CSII pump group.
Frequency of Hypoglycemic Events and Insulin Requirements
Both studies reported no differences in the number of mild hypoglycemic events in patients on CSII pumps versus MDI. Herman et al. also reported no differences in the number of severe hypoglycemic events in patients using CSII pumps compared to those on MDI. Raskin et al. reported that there were no severe hypoglycemic events in either group throughout the study duration. Insulin requirements were examined only by Herman et al., who found that daily insulin requirements were equal between the CSII pump and MDI treatment groups.
Quality of Life
QoL was measured by Herman et al. using the Diabetes Quality of Life Clinical Trial Questionnaire (DQOLCTQ). There were no differences reported between CSII users and MDI users for treatment satisfaction, diabetes impact, and worry-related scores. Patient satisfaction was measured by Raskin et al. using a patient satisfaction questionnaire; the results indicated that patients in the CSII pump group had significantly greater improvement in overall treatment satisfaction at the end of the study compared with the MDI group. Although patient preference was also reported, it was only examined in the CSII pump group; thus results indicating a greater preference for CSII pumps in this group (compared with prior injectable insulin regimens) are biased and must be interpreted with caution.
Quality of Evidence
Overall, the body of evidence was downgraded from high to low due to study quality and issues with directness, as identified using the GRADE quality assessment tool (see Table 3). While blinding of patients to intervention/control was not feasible in these studies, blinding of study personnel during outcome assessment and allocation concealment were generally lacking. ITT analysis was not clearly explained in one study, and heterogeneity between study populations was evident from participants’ treatment regimens prior to study initiation. Although trials reported consistent results for HbA1c outcomes, the directness or generalizability of the studies, particularly with respect to the diabetic population, was questionable, as trials required patients to adhere to an intense SMBG regimen, suggesting that patients were highly motivated. In addition, since prior treatment regimens varied between participants (patients were not required to be on MDI), study findings may not be generalizable to the population eligible for a pump in Ontario. The GRADE quality of evidence for the use of CSII in adults with type 2 diabetes is, therefore, low, and any estimate of effect is uncertain.
GRADE Quality Assessment for CSII pumps vs. MDI on HbA1c Adults with Type 2 Diabetes
Inadequate or unknown allocation concealment (all studies); unblinded assessment (all studies), although blinding was not feasible given the nature of the study; ITT not well explained in 1 of 2 studies
Indirect due to lack of generalizability of findings, since participants varied with respect to prior treatment regimens, and the intensive SMBG requirement suggests that highly motivated populations were used in the trials.
Economic Analysis
An economic analysis of CSII pumps was carried out using the Ontario Diabetes Economic Model (ODEM) and has been previously described in the report entitled “Application of the Ontario Diabetes Economic Model (ODEM) to Determine the Cost-effectiveness and Budget Impact of Selected Type 2 Diabetes Interventions in Ontario”, part of the diabetes strategy evidence series. Based on the analysis, CSII pumps are not cost-effective for adults with type 2 diabetes, either for the age 65+ sub-group or for all patients in general. Details of the analysis can be found in the full report.
Conclusions
CSII pumps for the treatment of adults with type 2 diabetes
There is low-quality evidence demonstrating that the efficacy of CSII pumps is not superior to MDI for adult type 2 diabetics.
There were no differences in the number of mild and severe hypoglycemic events in patients on CSII pumps versus MDI.
There are conflicting findings with respect to an improved quality of life for patients using CSII pumps as compared to MDI.
Significant limitations of the literature exist specifically:
All studies sponsored by insulin pump manufacturers
Prior treatment regimens varied
Types of insulins used in study varied (NPH vs. glargine)
Generalizability of studies in question as populations may not reflect eligible patient population in Ontario (participants not necessarily on MDI prior to study initiation, pen used in one study and frequency of SMBG required during study was high suggesting highly motivated participants)
Based on ODEM, insulin pumps are not cost-effective for adults with type 2 diabetes either for the age 65+ sub-group or for all patients in general.
PMCID: PMC3377523  PMID: 23074525
12.  Performance of small general practices under the UK's Quality and Outcomes Framework 
The British Journal of General Practice  2010;60(578):e335-e344.
Background
Small general practices are often perceived to provide worse care than larger practices.
Aim
To describe the comparative performance of small practices on the UK's pay-for-performance scheme, the Quality and Outcomes Framework.
Design of study
Longitudinal analysis (2004–2005 to 2006–2007) of quality scores for 48 clinical activities.
Setting
Family practices in England (n = 7502).
Method
Comparison of performance of practices by list size, in terms of points scored in the pay-for-performance scheme, reported achievement rates, and population achievement rates (which allow for patients excluded from the scheme).
Results
In the first year of the pay-for-performance scheme, the smallest practices (those with fewer than 2000 patients) had the lowest median reported achievement rates, achieving the clinical targets for 83.8% of eligible patients. Performance generally improved for practices of all sizes over time, but the smallest practices improved at the fastest rate, and by year 3 had the highest median reported achievement rates (91.5%). This improvement was not achieved by additional exception reporting. There was more variation in performance among small practices than larger ones: practices with fewer than 3000 patients (20.1% of all practices in year 3), represented 46.7% of the highest-achieving 5% of practices and 45.1% of the lowest-achieving 5% of practices.
Conclusion
Small practices were represented among both the best and the worst practices in terms of achievement of clinical quality targets. The effect of the pay-for-performance scheme appears to have been to reduce variation in performance, and to reduce the difference between large and small practices.
doi:10.3399/bjgp10X515340
PMCID: PMC2930243  PMID: 20849683
incentives; quality; primary care
13.  Causes of death in Tonga: quality of certification and implications for statistics 
Background
Detailed cause of death data by age group and sex are critical to identify key public health issues and target interventions appropriately. In this study the quality of local routinely collected cause of death data from medical certification is reviewed, and a cause of death profile for Tonga based on amended data is presented.
Methods
Medical certificates of death for all deaths in Tonga for 2001 to 2008 and medical records for all deaths in the main island Tongatapu for 2008 were sought from the national hospital. Cause of death data for 2008 were reviewed for quality through (a) a review of current tabulation procedures and (b) a medical record review. Data from each medical record were extracted and provided to an independent medical doctor to assign cause of death, with underlying cause from the medical record tabulated against underlying cause from the medical certificate. Significant associations in reporting patterns were evaluated and final cause of death for each case in 2008 was assigned based on the best quality information from the medical certificate or medical record. Cause of death data from 2001 to 2007 were revised based on findings from the evaluation of certification of the 2008 data and added to the dataset. Proportional mortality was calculated and applied to age- and sex-specific mortality for all causes from 2001 to 2008. Cause of death was tabulated by age group and sex, and age-standardized (all ages) mortality rates for each sex by cause were calculated.
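A minimal sketch of the direct age standardization described here, weighting age-specific rates by a standard population; the rates and weights below are invented, and the study's actual standard population is not specified in this summary.

# Invented age-specific death rates (per 100,000) and standard population
# weights; the study's actual standard population is not given here.
age_rates = {"0-39": 40, "40-59": 350, "60+": 1800}
std_weights = {"0-39": 0.55, "40-59": 0.30, "60+": 0.15}

asr = sum(age_rates[a] * std_weights[a] for a in age_rates)
print(f"Age-standardized rate: {asr:.0f} per 100,000")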
Results
Reported tabulations of cause of death in Tonga are of immediate cause, with ischemic heart disease and diabetes underrepresented. In the majority of cases the reported (immediate) cause fell within the same broad category as the underlying cause of death from the medical certificate. Underlying causes of death from the medical certificate attributed to neoplasms, diabetes, and cardiovascular disease were assigned to other underlying causes by the medical record review in 70% to 77% of deaths. Of the 28 (6.5%) deaths attributed to nonspecific or unknown causes on the medical certificate, 17 could be attributed elsewhere following review of the medical record. Final cause of death tabulations for 2001 to 2008 demonstrate that noncommunicable diseases are the leading causes of adult mortality, and that age-standardized rates for cardiovascular diseases, neoplasms, and diabetes increased significantly between 2001-2004 and 2005-2008. Cause of death data for 2001 to 2008 show increases in cause-specific mortality (deaths per 100,000) from 2001-2004 to 2005-2008 for cardiovascular disease (194-382 to 423-644 for males and 108-227 to 194-321 for females) and other noncommunicable diseases that cannot be accounted for by changes in the age structure of the population. Mortality from diabetes for 2005 to 2008 is estimated at 94 to 222 deaths per 100,000 population for males and 98 to 190 for females (based on the range of plausible all-cause mortality estimates), compared with 2008 estimates from the Global Burden of Disease study of 40 (males) and 53 (females) deaths per 100,000 population.
Discussion
Certification of death was generally found to be the most reliable source of cause of death data available in Tonga, with 93% of the final causes assigned following review of the 2008 data matching those listed on the medical certificate of death. Cause of death data in Tonga can be improved by routinely tabulating data by underlying cause and by ensuring that contributory causes are not recorded in Part I of the certificate during data entry to the database. There are significantly more data on cause of death available in Tonga than are routinely reported or known to international agencies.
doi:10.1186/1478-7954-10-4
PMCID: PMC3378436  PMID: 22390221
Mortality; Cause of death; Noncommunicable Diseases; Medical record review; Death Certification; Tonga; Pacific Islands
14.  Reviewing progress: 7 year trends in characteristics of adults and children enrolled at HIV care and treatment clinics in the United Republic of Tanzania 
BMC Public Health  2013;13:1016.
Background
To evaluate the on-going scale-up of HIV programs, we assessed trends in patient characteristics at enrolment and ART initiation over 7 years of implementation.
Methods
Data were from Optimal Models, a prospective open cohort study of HIV-infected (HIV+) adults (≥15 years) and children (<15 years) enrolled from January 2005 to December 2011 at 44 HIV clinics in 3 regions of mainland Tanzania (Kagera, Kigoma, Pwani) and Zanzibar. Comparative statistics for trends in characteristics of patients enrolled in 2005–2007, 2008–2009 and 2010–2011 were examined.
Results
Overall, 62,801 HIV+ patients were enrolled: 58,102 (92.5%) adults (66.5% female) and 4,699 (7.5%) children.
Among adults, enrolment of pregnant women increased: 6.8%, 2005–2007; 12.1%, 2008–2009; 17.2%, 2010–2011; as did entry into care from prevention of mother-to-child HIV transmission (PMTCT) programs: 6.6%, 2005–2007; 9.5%, 2008–2009; 12.6%, 2010–2011. WHO stage IV at enrolment declined: 27.1%, 2005–2007; 20.2%, 2008–2009; 11.1%, 2010–2011. Of the 42.5% and 29.5% of patients with CD4+ data at enrolment and ART initiation respectively, median CD4+ count at enrolment increased: 210 cells/μL, 2005–2007; 262 cells/μL, 2008–2009; 266 cells/μL, 2010–2011; but median CD4+ at ART initiation did not change (148 cells/μL overall). Stavudine initiation declined: 84.9%, 2005–2007; 43.1%, 2008–2009; 19.7%, 2010–2011.
Among children, median age (years) at enrolment decreased from 6.1 (IQR: 2.7–10.0) in 2005–2007 to 4.8 (IQR: 1.9–8.6) in 2008–2009 and 4.1 (IQR: 1.5–8.1) in 2010–2011, and the proportion of children <24 months increased from 18.5% to 26.1% and 31.5%, respectively. Entry from PMTCT was 7.0%, 2005–2007; 10.7%, 2008–2009; 15.0%, 2010–2011. WHO stage IV at enrolment declined from 22.9% in 2005–2007 to 18.3% in 2008–2009 and 13.9% in 2010–2011. The proportion initiating stavudine was 39.8%, 2005–2007; 39.5%, 2008–2009; 26.1%, 2010–2011. Median age at ART initiation also declined significantly.
Conclusions
Over time, the proportion of pregnant women and of adults and children enrolled from PMTCT programs increased. The proportions of adults and children with advanced HIV disease at enrolment, and of those initiating stavudine, declined. Pediatric age at enrolment and ART initiation declined. These results suggest HIV program maturation from an emergency response.
doi:10.1186/1471-2458-13-1016
PMCID: PMC3937235  PMID: 24160907
ART program; HIV-infected adults; HIV-infected children; Trends at enrolment; Trends at ART initiation; Tanzania
15.  Clinical Utility of Serologic Testing for Celiac Disease in Ontario 
Executive Summary
Objective of Analysis
The objective of this evidence-based evaluation is to assess the accuracy of serologic tests in the diagnosis of celiac disease in subjects with symptoms consistent with this disease. Furthermore, the impact of these tests on the diagnostic pathway of the disease and on decision making was also evaluated.
Celiac Disease
Celiac disease is an autoimmune disease that develops in genetically predisposed individuals. The immunological response is triggered by ingestion of gluten, a protein that is present in wheat, rye, and barley. The treatment consists of strict lifelong adherence to a gluten-free diet (GFD).
Patients with celiac disease may present with a myriad of symptoms, such as diarrhea, abdominal pain, weight loss, iron deficiency anemia, and dermatitis herpetiformis, among others.
Serologic Testing in the Diagnosis of Celiac Disease
There are a number of serologic tests used in the diagnosis of celiac disease.
Anti-gliadin antibody (AGA)
Anti-endomysial antibody (EMA)
Anti-tissue transglutaminase antibody (tTG)
Anti-deamidated gliadin peptides antibodies (DGP)
Serologic tests are automated, with the exception of the EMA test, which is more time-consuming and operator-dependent than the other tests. For each serologic test, either immunoglobulin A (IgA) or immunoglobulin G (IgG) can be measured; however, IgA is the standard antibody measured in celiac disease.
Diagnosis of Celiac Disease
According to celiac disease guidelines, the diagnosis of celiac disease is established by small bowel biopsy. Serologic tests are used to initially detect celiac disease and to support its diagnosis. A small bowel biopsy is indicated in individuals with a positive serologic test. In some cases an endoscopy and small bowel biopsy may be required even with a negative serologic test. The diagnosis of celiac disease must be made while the patient is on a gluten-containing diet, since the small intestine abnormalities and the serologic antibody levels may resolve or improve on a GFD.
Since IgA measurement is the standard for the serologic celiac disease tests, false negatives may occur in IgA-deficient individuals.
Incidence and Prevalence of Celiac Disease
The incidence and prevalence of celiac disease in the general population and in subjects with symptoms consistent with or at higher risk of celiac disease based on systematic reviews published in 2004 and 2009 are summarized below.
Incidence of Celiac Disease in the General Population
Adults or mixed population: 1 to 17/100,000/year
Children: 2 to 51/100,000/year
In one of the studies, a stratified analysis showed that there was a higher incidence of celiac disease in younger children compared to older children, i.e., 51 cases/100,000/year in 0 to 2 year-olds, 33/100,000/year in 2 to 5 year-olds, and 10/100,000/year in children 5 to 15 years old.
Prevalence of Celiac Disease in the General Population
The prevalence of celiac disease reported in population-based studies identified in the 2004 systematic review varied between 0.14% and 1.87% (median: 0.47%, interquartile range: 0.25%, 0.71%). According to the authors of the review, the prevalence did not vary by age group, i.e., adults and children.
Prevalence of Celiac Disease in High Risk Subjects
Type 1 diabetes (adults and children): 1 to 11%
Autoimmune thyroid disease: 2.9 to 3.3%
First degree relatives of patients with celiac disease: 2 to 20%
Prevalence of Celiac Disease in Subjects with Symptoms Consistent with the Disease
The prevalence of celiac disease in subjects with symptoms consistent with the disease varied widely among studies, i.e., 1.5% to 50% in adult studies, and 1.1% to 17% in pediatric studies. Differences in prevalence may be related to the referral pattern as the authors of a systematic review noted that the prevalence tended to be higher in studies whose population originated from tertiary referral centres compared to general practice.
Research Questions
What is the sensitivity and specificity of serologic tests in the diagnosis of celiac disease?
What is the clinical validity of serologic tests in the diagnosis of celiac disease? The clinical validity was defined as the ability of the test to change diagnosis.
What is the clinical utility of serologic tests in the diagnosis of celiac disease? The clinical utility was defined as the impact of the test on decision making.
What is the budget impact of serologic tests in the diagnosis of celiac disease?
What is the cost-effectiveness of serologic tests in the diagnosis of celiac disease?
Methods
Literature Search
A literature search was performed on November 13th, 2009 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Agency for Health Technology Assessment (INAHTA) for studies published from January 1st, 2003 to November 13th, 2010. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search. Articles with unknown eligibility were reviewed with a second clinical epidemiologist, then a group of epidemiologists, until consensus was established. The quality of evidence was assessed as high, moderate, low, or very low according to GRADE methodology.
Inclusion Criteria
Studies that evaluated diagnostic accuracy, i.e., both sensitivity and specificity of serology tests in the diagnosis of celiac disease.
Study population consisted of untreated patients with symptoms consistent with celiac disease.
Studies in which both serologic celiac disease tests and small bowel biopsy (gold standard) were used in all subjects.
Systematic reviews, meta-analyses, randomized controlled trials, prospective observational studies, and retrospective cohort studies.
At least 20 subjects included in the celiac disease group.
English language.
Human studies.
Studies published from 2000 on.
Clearly defined cut-off value for the serology test. If more than one test was evaluated, only those tests for which a cut-off was provided were included.
Description of the small bowel biopsy procedure clearly outlined (location, number of biopsies per patient), unless it was specified that celiac disease diagnosis guidelines were followed.
Patients in the treatment group had untreated CD.
Exclusion Criteria
Studies on screening of the general asymptomatic population.
Studies that evaluated rapid diagnostic kits for use either at home or in physician’s offices.
Studies that evaluated diagnostic modalities other than serologic tests such as capsule endoscopy, push enteroscopy, or genetic testing.
Cut-off for serologic tests defined based on controls included in the study.
Study population defined based on positive serology or subjects pre-screened by serology tests.
Celiac disease status known before study enrolment.
Sensitivity or specificity estimates based on repeated testing for the same subject.
Non-peer-reviewed literature such as editorials and letters to the editor.
Population
The population consisted of adults and children with untreated, undiagnosed celiac disease with symptoms consistent with the disease.
Serologic Celiac Disease Tests Evaluated
Anti-gliadin antibody (AGA)
Anti-endomysial antibody (EMA)
Anti-tissue transglutaminase antibody (tTG)
Anti-deamidated gliadin peptides antibody (DGP)
Combinations of some of the serologic tests listed above were evaluated in some studies.
Both IgA and IgG antibodies were evaluated for the serologic tests listed above.
Outcomes of Interest
Sensitivity
Specificity
Positive and negative likelihood ratios
Diagnostic odds ratio (DOR)
Area under the sROC curve (AUC)
Small bowel biopsy was used as the gold standard in order to estimate the sensitivity and specificity of each serologic test.
Statistical Analysis
Pooled estimates of sensitivity, specificity, and diagnostic odds ratios (DORs) for the different serologic tests were calculated using a bivariate, binomial generalized linear mixed model. Statistical significance for differences in sensitivity and specificity between serologic tests was defined by P values less than 0.05, where “false discovery rate” adjustments were made for multiple hypothesis testing. The bivariate regression analyses were performed using SAS version 9.2 (SAS Institute Inc.; Cary, NC, USA). Using the bivariate model parameters, summary receiver operating characteristic (sROC) curves were produced using Review Manager 5.0.22 (The Nordic Cochrane Centre, The Cochrane Collaboration, 2008). The area under the sROC curve (AUC) was estimated within a bivariate mixed-effects binary regression modeling framework; model specification, estimation, and prediction were carried out with xtmelogit in Stata release 10 (Statacorp, 2007). Statistical tests for the differences in AUC estimates could not be carried out.
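For readers unfamiliar with this approach, a standard formulation of the bivariate binomial model for diagnostic meta-analysis, consistent with the description above though not spelled out in the report, is sketched here; the notation is assumed. For each study $i$, the counts of true positives and true negatives are modeled as binomial draws whose logit-scale parameters share a bivariate normal distribution across studies:

$$ y_i^{TP} \sim \mathrm{Binomial}\!\left(n_i^{D},\ \mathrm{se}_i\right), \qquad y_i^{TN} \sim \mathrm{Binomial}\!\left(n_i^{\bar{D}},\ \mathrm{sp}_i\right), $$

$$ \begin{pmatrix} \mathrm{logit}\,\mathrm{se}_i \\ \mathrm{logit}\,\mathrm{sp}_i \end{pmatrix} \sim \mathcal{N}\!\left( \begin{pmatrix} \mu_{se} \\ \mu_{sp} \end{pmatrix}, \begin{pmatrix} \sigma_{se}^2 & \rho\,\sigma_{se}\sigma_{sp} \\ \rho\,\sigma_{se}\sigma_{sp} & \sigma_{sp}^2 \end{pmatrix} \right), $$

where $n_i^{D}$ and $n_i^{\bar{D}}$ are the numbers of diseased and non-diseased subjects in study $i$. The pooled sensitivity and specificity are the inverse logits of $\mu_{se}$ and $\mu_{sp}$, and the estimated between-study correlation $\rho$ determines the shape of the sROC curve.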
The study results were stratified according to patient or disease characteristics, such as age and severity of Marsh grade abnormalities, if reported in the studies. The literature indicates that the diagnostic accuracy of serologic tests for celiac disease may be affected in patients with chronic liver disease; therefore, the studies identified through the systematic literature review that evaluated the diagnostic accuracy of serologic tests for celiac disease in patients with chronic liver disease were summarized. The effect of the GFD in patients diagnosed with celiac disease was also summarized if reported in the studies eligible for the analysis.
Summary of Findings
Published Systematic Reviews
Five systematic reviews of studies that evaluated the diagnostic accuracy of serologic celiac disease tests were identified through our literature search. Seventeen individual studies in adults and children were eligible for this evaluation.
In general, the studies included evaluated the sensitivity and specificity of at least one serologic test in subjects with symptoms consistent with celiac disease. The gold standard used to confirm the celiac disease diagnosis was small bowel biopsy. Serologic tests evaluated included tTG, EMA, AGA, and DGP, using either IgA or IgG antibodies. Indirect immunofluorescence was used for the EMA serologic tests, whereas enzyme-linked immunosorbent assay (ELISA) was used for the other serologic tests.
Common symptoms described in the studies were chronic diarrhea, abdominal pain, bloating, unexplained weight loss, unexplained anemia, and dermatitis herpetiformis.
The main conclusions of the published systematic reviews are summarized below.
IgA tTG and/or IgA EMA have a high accuracy (pooled sensitivity: 90% to 98%, pooled specificity: 95% to 99% depending on the pooled analysis).
Most reviews found that AGA (IgA or IgG) are not as accurate as IgA tTG and/or EMA tests.
A 2009 systematic review concluded that DGP (IgA or IgG) seems to have accuracy similar to that of tTG; however, since only 2 of the studies identified evaluated its accuracy, the authors believe that additional data are required to draw firm conclusions.
Two systematic reviews also concluded that combining two serologic celiac disease tests contributes little to the accuracy of the diagnosis.
MAS Analysis
Sensitivity
The pooled analysis performed by MAS showed that IgA tTG has a sensitivity of 92.1% [95% confidence interval (CI) 88.0, 96.3], compared to 89.2% (83.3, 95.1; p=0.12) for IgA DGP, 85.1% (79.5, 94.4; p=0.07) for IgA EMA, and 74.9% (63.6, 86.2; p=0.0003) for IgA AGA. Among the IgG-based tests, the results suggest that IgG DGP has a sensitivity of 88.4% (95% CI: 82.1, 94.6), compared to 44.7% (30.3, 59.2) for IgG tTG and 69.1% (56.0, 82.2) for IgG AGA. The difference was significant when IgG DGP was compared to IgG tTG but not to IgG AGA. Combining serologic celiac disease tests yielded a slightly higher sensitivity compared to individual IgA-based serologic tests.
IgA deficiency
The prevalence of total or severe IgA deficiency was low in the studies identified, varying between 0 and 1.7%, as reported in 3 studies in which IgA deficiency was not used as a referral indication for celiac disease serologic testing. The results of IgG-based serologic tests were positive in all patients with IgA deficiency in whom celiac disease was confirmed by small bowel biopsy, as reported in four studies.
Specificity
The MAS pooled analysis indicates a high specificity across the different serologic tests, including the combination strategy; pooled estimates ranged from 90.1% to 98.7% depending on the test.
Likelihood Ratios
According to the likelihood ratio estimates, both IgA tTG and serologic test combinations were considered very useful tests (positive likelihood ratio above ten and negative likelihood ratio below 0.1).
Moderately useful tests included IgA EMA, IgA DGP, and IgG DGP (positive likelihood ratio between five and ten and the negative likelihood ratio between 0.1 and 0.2).
Somewhat useful tests: IgA AGA and IgG AGA, generating small but sometimes important changes from pre- to post-test probability (positive LR between 2 and 5 and negative LR between 0.2 and 0.5).
Not useful: IgG tTG, altering pre- to post-test probability to a small and rarely important degree (positive LR between 1 and 2 and negative LR between 0.5 and 1). A worked example of these likelihood ratio calculations follows below.
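To make these categories concrete, here is a worked example using the pooled IgA tTG sensitivity reported above (92.1%) together with an illustrative specificity of 97%, a value assumed for this sketch because it lies within the 90.1% to 98.7% range of pooled specificities reported above:

$$ \mathrm{LR}^{+} = \frac{\mathrm{se}}{1-\mathrm{sp}} = \frac{0.921}{1-0.97} \approx 30.7, \qquad \mathrm{LR}^{-} = \frac{1-\mathrm{se}}{\mathrm{sp}} = \frac{1-0.921}{0.97} \approx 0.081, $$

both within the “very useful” range (positive LR above ten, negative LR below 0.1). Applied to an assumed pre-test probability of 5%, a positive result shifts the odds from $0.05/0.95 \approx 0.053$ to $0.053 \times 30.7 \approx 1.62$, a post-test probability of about 62%, while a negative result lowers the probability to roughly 0.4%.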
Diagnostic Odds Ratios (DOR)
Among the individual serologic tests, IgA tTG had the highest DOR, 136.5 (95% CI: 51.9, 221.2). The statistical significance of the difference in DORs among tests was not calculated; however, considering the wide confidence intervals obtained, the differences may not be statistically significant.
Area Under the sROC Curve (AUC)
The sROC AUCs obtained ranged between 0.93 and 0.99 for most IgA-based tests with the exception of IgA AGA, with an AUC of 0.89.
Sensitivity and Specificity of Serologic Tests According to Age Groups
Serologic test accuracy did not seem to vary according to age (adults or children).
Sensitivity and Specificity of Serologic Tests According to Marsh Criteria
Four studies observed a trend towards a higher sensitivity of serologic celiac disease tests when Marsh 3c grade abnormalities were found in the small bowel biopsy compared to Marsh 3a or 3b. The sensitivity of serologic tests was much lower when Marsh 1 grade abnormalities were found in the small bowel biopsy compared to Marsh 3 grade abnormalities. The statistical significance of these findings was not reported in the studies.
Diagnostic Accuracy of Serologic Celiac Disease Tests in Subjects with Chronic Liver Disease
A total of 14 observational studies that evaluated the specificity of serologic celiac disease tests in subjects with chronic liver disease were identified. All studies evaluated the frequency of false positive results (1 − specificity) of IgA tTG; however, IgA tTG test kits using different substrates were used, i.e., human recombinant, human, and guinea pig substrates. The gold standard, small bowel biopsy, was used to confirm the result of the serologic tests in only 5 studies. The studies do not seem to have been designed or powered to compare diagnostic accuracy among different serologic celiac disease tests.
The results of the studies identified in the systematic literature review suggest that there is a trend towards a lower frequency of false positive results if the IgA tTG test using human recombinant substrate is used compared to the guinea pig substrate in subjects with chronic liver disease. However, the statistical significance of the difference was not reported in the studies. When IgA tTG with human recombinant substrate was used, the number of false positives seems to be similar to what was estimated in the MAS pooled analysis for IgA-based serologic tests in a general population of patients. These results should be interpreted with caution since most studies did not use the gold standard, small bowel biopsy, to confirm or exclude the diagnosis of celiac disease, and since the studies were not designed to compare the diagnostic accuracy among different serologic tests. The sensitivity of the different serologic tests in patients with chronic liver disease was not evaluated in the studies identified.
Effects of a Gluten-Free Diet (GFD) in Patients Diagnosed with Celiac Disease
Six studies identified evaluated the effects of GFD on clinical, histological, or serologic improvement in patients diagnosed with celiac disease. Improvement was observed in 51% to 95% of the patients included in the studies.
Grading of Evidence
Overall, the quality of the evidence ranged from moderate to very low depending on the serologic celiac disease test. Reasons to downgrade the quality of the evidence included the use of a surrogate endpoint (diagnostic accuracy), since none of the studies evaluated clinical outcomes, inconsistencies among study results, imprecise estimates, and sparse data. The quality of the evidence was considered moderate for IgA tTG and IgA EMA, low for IgA DGP and serologic test combinations, and very low for IgA AGA.
Clinical Validity and Clinical Utility of Serologic Testing in the Diagnosis of Celiac Disease
The clinical validity of serologic tests in the diagnosis of celiac disease was considered high in subjects with symptoms consistent with this disease due to:
The high accuracy of some serologic tests.
Serologic tests detect possible celiac disease cases and avoid unnecessary small bowel biopsy if the test result is negative, unless an endoscopy/small bowel biopsy is necessary due to the clinical presentation.
Serologic tests support the results of small bowel biopsy.
The clinical utility of serologic tests for the diagnosis of celiac disease, as defined by their impact on decision making, was also considered high in subjects with symptoms consistent with this disease, given the considerations listed above and since a celiac disease diagnosis leads to treatment with a gluten-free diet.
Economic Analysis
A decision analysis was constructed to compare costs and outcomes between the tests based on the sensitivity, specificity, and prevalence summary estimates from the MAS Evidence-Based Analysis (EBA). A budget impact was then calculated by multiplying the expected costs by the expected volumes in Ontario. The outcomes of the analysis were expected costs and false negatives (FN). Costs were reported in 2010 Canadian dollars. All analyses were performed using TreeAge Pro Suite 2009.
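The structure of such a decision analysis can be made concrete with a small sketch. The code below is a minimal illustration, not the MAS model: the prevalence, test cost, biopsy cost, and 97% specificity are hypothetical placeholders, and only the 92.1% IgA tTG sensitivity comes from the pooled analysis above. Each strategy reduces to an (expected cost, expected FN) pair, which is what the frontier comparison below ranks.

```python
# Minimal sketch of a test-then-biopsy decision analysis.
# All inputs except the 92.1% sensitivity are illustrative assumptions.

def serology_then_biopsy(se, sp, prev, c_test, c_biopsy):
    """Expected cost and false negatives per patient for a strategy
    that biopsies only patients with a positive serologic test."""
    p_pos = prev * se + (1 - prev) * (1 - sp)  # P(positive serology)
    fn = prev * (1 - se)                       # diseased patients missed
    cost = c_test + p_pos * c_biopsy           # all tested; positives biopsied
    return cost, fn

def biopsy_all(prev, c_biopsy):
    """Biopsy alone: gold standard, so no false negatives."""
    return c_biopsy, 0.0

if __name__ == "__main__":
    prev = 0.10  # assumed pre-test probability of celiac disease
    strategies = {
        "IgA tTG then biopsy": serology_then_biopsy(0.921, 0.97, prev, 30.0, 360.0),
        "biopsy alone": biopsy_all(prev, 360.0),
    }
    for name, (cost, fn) in strategies.items():
        print(f"{name}: expected cost ${cost:.2f}, expected FN {fn:.4f}")
```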
Four strategies made up the efficiency frontier: IgG tTG, IgA tTG, EMA, and small bowel biopsy. All other strategies were dominated. IgG tTG was the least costly and least effective strategy ($178.95, FN avoided = 0). Small bowel biopsy was the most costly and most effective strategy ($396.60, FN avoided = 0.1553). The costs per FN avoided were $293, $369, and $1,401 for EMA, IgA tTG, and small bowel biopsy, respectively. One-way sensitivity analyses did not change the ranking of strategies.
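As a check on these figures, the cost per FN avoided is an incremental cost-effectiveness ratio; computing it for small bowel biopsy against the least costly strategy (IgG tTG) reproduces the reported value:

$$ \mathrm{ICER} = \frac{\Delta\,\text{cost}}{\Delta\,\text{FN avoided}} = \frac{\$396.60 - \$178.95}{0.1553 - 0} \approx \$1{,}401 \text{ per FN avoided}. $$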
All testing strategies followed by small bowel biopsy are cheaper than biopsy alone; however, they also result in more FNs. The most cost-effective strategy will depend on decision makers’ willingness to pay. Findings suggest that IgA tTG was the most cost-effective and feasible strategy based on its Incremental Cost-Effectiveness Ratio (ICER) and the convenience of conducting the test. The potential impact of the IgA tTG test in the province of Ontario would be $10.4M, $11.0M, and $11.7M, respectively, in the following three years based on past volumes and trends in the province and base-case expected costs.
The panel of tests is the commonly used strategy in the province of Ontario; therefore, the impact on the system would be $13.6M, $14.5M, and $15.3M, respectively, in the next three years based on past volumes and trends in the province and base-case expected costs.
Conclusions
The clinical validity and clinical utility of serologic tests for celiac disease were considered high in subjects with symptoms consistent with this disease, as they aid in the diagnosis of celiac disease and some tests have high accuracy.
The study findings suggest that IgA tTG is the most accurate and the most cost-effective test.
The AGA test (IgA) has lower accuracy compared to other IgA-based tests.
Serologic test combinations appear to be more costly with little gain in accuracy. In addition, there may be problems with the generalizability of the results of the studies included in this review if different test combinations are used in clinical practice.
IgA deficiency seems to be uncommon in patients diagnosed with celiac disease.
The generalizability of study results is contingent on performing both the serologic test and small bowel biopsy in subjects on a gluten-containing diet as was the case in the studies identified, since the avoidance of gluten may affect test results.
PMCID: PMC3377499  PMID: 23074399
16.  Trends in the Prevalence, Awareness, Treatment and Control of High Low Density Lipoprotein-Cholesterol among US Adults from 1999–2000 through 2009–2010 
The American journal of cardiology  2013;112(5):664-670.
Marked increases in the awareness, treatment, and control of high LDL-cholesterol occurred among US adults between 1988–1994 and 1999–2004. An update to the ATP-III guidelines was published in 2004, and it is unknown if these improvements have continued following publication of these revised treatment recommendations. We determined trends in the awareness, treatment, and control of high LDL-cholesterol among US adults from 1999–2000 through 2009–2010 using nationally representative samples of US adults ≥ 20 years of age from six consecutive National Health and Nutrition Examination Surveys (NHANES) in 1999–2000 (n=1,659), 2001–2002 (n=1,897), 2003–2004 (n=1,698), 2005–2006 (n=1,692), 2007–2008 (n=2,044), and 2009–2010 (n=2,318). LDL-cholesterol was measured after an overnight fast, and high LDL-cholesterol and controlled LDL-cholesterol were defined using the 2004 updated ATP-III guidelines. Awareness and treatment of high cholesterol were defined using self-report. Among US adults, the prevalence of high LDL-cholesterol did not change from 1999–2000 (37.2%) through 2009–2010 (37.8%). Awareness of high LDL-cholesterol increased from 48.9% in 1999–2000 to 62.8% in 2003–2004 but did not increase further through 2009–2010 (61.5%). Among those aware of having high LDL-cholesterol, treatment increased from 41.3% in 1999–2000 to 72.6% in 2007–2008 and was 70.0% in 2009–2010. Among US adults receiving treatment for high LDL-cholesterol, the percentage with controlled LDL-cholesterol increased from 45.0% in 1999–2000 to 65.3% in 2005–2006 and decreased slightly by 2009–2010 (63.6%). High LDL-cholesterol remains common among US adults. Additional efforts are needed to prevent high LDL-cholesterol and to increase the awareness, treatment, and control of high LDL-cholesterol among US adults.
doi:10.1016/j.amjcard.2013.04.041
PMCID: PMC3769104  PMID: 23726177
LDL-cholesterol; statins; treatment; awareness; risk factors
17.  Impact of pay for performance on quality of chronic disease management by social class group in England 
Summary
Objective
To examine associations between social class and achievement of selected national audit targets for coronary heart disease (CHD), diabetes, and hypertension in England before and after the introduction of a major pay for performance programme in 2004.
Design
Secondary analysis of 2003 and 2006 national survey data for respondents with CHD, diabetes, and hypertension.
Setting
England.
Main outcome measure
Achievement of national audit targets for blood pressure, blood glucose and cholesterol control.
Results
There were no significant differences in achievement of blood pressure targets in individuals from manual and non-manual occupational groups with diabetes (2003: 65.9% v 60.3%, 2006: 67.6% v 69.7%) or hypertension (2003: 66.2% v 66.2%, 2006: 72.8% v 71.9%) before or after the introduction of pay for performance. Achievement of the cholesterol target was also similar in individuals from manual and non-manual groups with diabetes (2003: 52.5% v 46.6%, 2006: 68.7% v 70.5%) or CHD (2003: 54.3% v 53.3%, 2006: 68.6% v 71.3%). Differences in achievement of the blood pressure target in CHD [75.8% v 84.5%; AOR 0.44 (0.21-0.90)] were evident between manual and non-manual occupational groups after the introduction of pay for performance.
Conclusion
The quality of chronic disease management in England was broadly equitable between socioeconomic groups before this major pay for performance programme and remained so after its introduction.
doi:10.1258/jrsm.2009.080389
PMCID: PMC2746849  PMID: 19297651
18.  Endovascular Laser Therapy for Varicose Veins 
Executive Summary
Objective
The objective of the MAS evidence review was to conduct a systematic review of the available evidence on the safety, effectiveness, durability and cost–effectiveness of endovascular laser therapy (ELT) for the treatment of primary symptomatic varicose veins (VV).
Background
The Ontario Health Technology Advisory Committee (OHTAC) met on November 27, 2009 to review the safety, effectiveness, durability and cost-effectiveness of ELT for the treatment of primary VV based on an evidence-based review by the Medical Advisory Secretariat (MAS).
Clinical Condition
VV are tortuous, twisted, or elongated veins. This can be due to pre-existing (inherited) valve dysfunction or decreased vein elasticity (primary venous reflux), or to valve damage from prior thrombotic events (secondary venous reflux). The end result is pooling of blood in the veins, increased venous pressure, and subsequent vein enlargement. As a result of high venous pressure, branch vessels balloon out, leading to varicosities (varicose veins).
Symptoms typically affect the lower extremities and include (but are not limited to): aching, swelling, throbbing, night cramps, restless legs, leg fatigue, itching and burning. Left untreated, venous reflux tends to be progressive, often leading to chronic venous insufficiency (CVI).
A number of complications are associated with untreated venous reflux, including superficial thrombophlebitis as well as variceal rupture and haemorrhage. CVI often results in chronic skin changes referred to as stasis dermatitis. Stasis dermatitis comprises a spectrum of cutaneous abnormalities including edema, hyperpigmentation, eczema, lipodermatosclerosis, and stasis ulceration. Ulceration represents the disease end point for severe CVI.
CVI is associated with a reduced quality of life, particularly in relation to pain, physical function, and mobility. In severe cases (VV with ulcers), QOL has been rated as bad as or worse than that in other chronic diseases such as back pain and arthritis.
Lower limb VV is a common disease affecting adults and is estimated to be the seventh most common reason for physician referral in the US. There is a strong familial predisposition to VV, with the risk in offspring being 90% if both parents are affected, 20% when neither is affected, and 45% (25% for boys, 62% for girls) if one parent is affected. Globally, the prevalence of VV ranges from 5% to 15% among men and 3% to 29% among women, varying by the age, gender, and ethnicity of the study population, survey methods, and disease definition and measurement. The annual incidence of VV estimated from the Framingham Study was reported to be 2.6% among women and 1.9% among men and did not vary within the age range (40–89 years) studied.
Approximately 1% of the adult population has a stasis ulcer of venous origin at any one time, with 4% at risk. The majority of leg ulcer patients are elderly with simple superficial vein reflux. Stasis ulcers are often lengthy medical problems that can last for several years and, despite effective compression therapy and multilayer bandaging, are associated with high recurrence rates. Recent trials involving surgical treatment of superficial vein reflux have resulted in healing and significantly reduced recurrence rates.
Endovascular Laser Therapy for VV
ELT is an image-guided, minimally invasive treatment alternative to surgical stripping of superficial venous reflux. It does not require an operating room or general anesthesia and has been performed in outpatient settings by a variety of medical specialties including surgeons (vascular or general), interventional radiologists and phlebologists. Rather than surgically removing the vein, ELT works by destroying, cauterizing or ablating the refluxing vein segment using heat energy delivered via laser fibre.
Prior to ELT, colour-flow Doppler ultrasonography is used to confirm and map all areas of venous reflux to devise a safe and effective treatment plan. The ELT procedure involves the introduction of a guide wire into the target vein under ultrasound guidance, followed by the insertion of an introducer sheath through which an optical fibre carrying the laser energy is advanced. A tumescent anesthetic solution is injected into the soft tissue surrounding the target vein along its entire length. This serves to anaesthetize the vein so that the patient feels no discomfort during the procedure. It also insulates the heat, preventing damage to adjacent structures, including nerves and skin. Once satisfactory positioning has been confirmed with ultrasound, the laser is activated. Both the laser fibre and the sheath are simultaneously, slowly, and continuously pulled back along the length of the target vessel. At the end of the procedure, hemostasis is achieved by applying pressure to the entry point.
Adequate and proper compression stockings and bandages are applied after the procedure to reduce the risk of venous thromboembolism, and to reduce postoperative bruising and tenderness. Patients are encouraged to walk immediately after the procedure and most patients return to work or usual activity within a few days. Follow-up protocols vary, with most patients returning 1-3 weeks later for an initial follow-up visit. At this point, the initial clinical result is assessed and occlusion of the treated vessels is confirmed with ultrasound. Patients often have a second follow-up visit 1-3 months following ELT at which time clinical evaluation and ultrasound are repeated. If required, sclerotherapy may be performed during the ELT procedure or at any follow-up visits.
Regulatory Status
Endovascular laser for the treatment of VV was approved by Health Canada as a class 3 device in 2002. The treatment has been an insured service in Saskatchewan since 2007; it is the only province to insure ELT. Although the treatment is not an insured service in Ontario, it has been provided by various medical specialties since 2002 in over 20 private clinics.
Methods
Literature Search
The MAS evidence-based review was performed as an update to the 2007 health technology review performed by the Australian Medical Services Advisory Committee (MSAC) to support public financing decisions. The literature search was performed on August 18, 2009 using standard bibliographic databases for studies published from January 1, 2007 to August 15, 2009. Search alerts were generated and reviewed for additional relevant literature up until October 1, 2009.
Inclusion Criteria
English language full-reports and human studies
Original reports with defined study methodology
Reports including standardized measurements on outcome events such as technical success, safety, effectiveness, durability, quality of life or patient satisfaction
Reports involving ELT for VV (great or small saphenous veins)
Randomized controlled trials (RCTs), systematic reviews and meta-analyses
Cohort and controlled clinical studies involving > 1 month ultrasound imaging follow-up
Exclusion Criteria
Non systematic reviews, letters, comments and editorials
Reports not involving outcome events such as safety, effectiveness, durability, or patient satisfaction following an intervention with ELT
Reports not involving interventions with ELT for VV
Pilot studies or studies with small samples ( < 50 subjects)
Summary of Findings
The MAS evidence search identified 14 systematic reviews, 29 cohort studies on safety and effectiveness, four cost studies and 12 randomized controlled trials involving ELT, six of these comparing endovascular laser with surgical ligation and saphenous vein stripping.
Since 2007, 22 cohort studies involving 10,883 patients undergoing ELT of the great saphenous vein (GSV) have been published. Imaging-defined treatment effectiveness (mean vein closure rates) was reported to be greater than 90% (range 93%–99%) at short-term follow-up. Follow-up longer than one year was reported in five studies, with life-table analysis performed in four, but follow-up was still limited at three and four years. The overall pooled major adverse event rate, including DVT, PE, skin burn, or nerve damage events extracted from these studies, was 0.63% (69/10,883).
The overall level of evidence of randomized trials comparing ELT with surgical ligation and vein stripping (n=6) was graded as moderate to high. Recovery after treatment was significantly quicker after ELT (median number of days to return to work, 4 vs. 17; p=.005). Major adverse events occurring after surgery were higher [1.8% (n=4) vs. 0.4% (n=1)], but not significantly. Treatment effectiveness, as measured by imaging vein absence or closure, symptom relief, or quality of life, was similar in the two treatment groups, and both treatments resulted in statistically significant improvements in these outcomes. Recurrence at follow-up was low after both treatments, but neovascularization (growth of new vessels), a key predictor of long-term recurrence, was significantly more common after surgery (18% vs. 1%; p=.001). Although patient satisfaction was reported to be high (>80%) with both treatments, patient preferences evaluated through the recruitment process, physician reports, and consumer groups were strongly in favour of ELT. For patients, minimal complications, quick recovery, and dependability of outpatient scheduling were key considerations.
As the clinical effectiveness of the two treatments was similar, a cost analysis was performed to compare differences in resources and costs between the two procedures. A budget impact analysis for introducing ELT as an insured service was also performed. The average case cost (based on Ontario hospital costs and medical resources) for surgical vein stripping was estimated to be $1,799. Because of the uncertainties around the resources associated with ELT, in addition to the device-related costs, hospital costs were varied and assumed to be either the same as or 40% less than those for surgery, resulting in an average ELT case cost of $2,025 or $1,602, respectively.
Based on the historical pattern of surgical vein stripping for varices, a 5-year projection was made for annual volumes and costs. In Ontario in 2007/2008, 3,481 surgical vein stripping procedures were performed, 28% of which were repeat procedures. Annual volumes of ELT currently being performed in the province in over 20 private clinics were estimated to be approximately 840. If ELT were publicly reimbursed, it was assumed that it would capture 35% of the vein stripping market in the first year, increasing to 55% in subsequent years. Based on these assumptions, if ELT were not publicly reimbursed, the province would pay approximately $5.9 million; if ELT were reimbursed, the province would pay $8.2 million if the hospital costs for ELT were the same as for surgery, or $7.1 million if the hospital costs were 40% less than for surgery.
The conclusions on the comparative outcomes between laser ablation and surgical ligation and saphenous vein stripping are summarized in the table below (ES Table 1).
Outcome comparisons of ELT vs. surgery for VV
The outcomes of the evidence-based review on these treatments based on three different perspectives are summarized below:
Patient Outcomes – ELT vs. Surgery
ELT offers a quicker recovery, attributable to decreased pain, fewer minor complications, and the use of local anesthesia with immediate ambulation.
ELT is as effective as surgery in the short term as assessed by imaging anatomic outcomes, symptomatic relief and HRQOL outcomes.
Recurrence is similar but neovascularization, a key predictor of long term recurrence, is significantly higher with surgery.
Patient satisfaction is equally high after both treatments, but patient preference is much more strongly in favour of ELT. Surgeons performing ELT are satisfied with treatment outcomes and regularly offer ELT as a treatment alternative to surgery.
Clinical or Technical Advantages – ELT Over Surgery
An endovascular approach can more easily and more precisely treat multilevel disease and difficult-to-treat areas.
ELT is an effective and a less invasive treatment for the elderly with VV and those with venous leg ulcers.
System Outcomes – ELT Replacing Surgery
ELT may offer system advantages in that the treatment can be offered by several medical specialties in outpatient settings and because it does not require an operating theatre or general anesthesia.
The treatment may result in fewer pre-surgical investigations, decanting of patients from the OR, decreased demand on anesthetists’ time, shorter hospital stays, and decreased wait times for VV treatment, and may provide more reliable outpatient scheduling.
Depending on the reimbursement mechanism for the treatment, however, it may also result in the closure of outpatient clinics and increasing centralization of procedures in selected hospitals with large capital budgets, resulting in larger and longer waiting lists.
Procedure costs may be similar for the two treatments but the budget impact may be greater with insurance of ELT because of the transfer of the cases from the private market to the public payer system.
PMCID: PMC3377531  PMID: 23074409
19.  Information from Pharmaceutical Companies and the Quality, Quantity, and Cost of Physicians' Prescribing: A Systematic Review 
PLoS Medicine  2010;7(10):e1000352.
Geoff Spurling and colleagues report findings of a systematic review looking at the relationship between exposure to promotional material from pharmaceutical companies and the quality, quantity, and cost of prescribing. They fail to find evidence of improvements in prescribing after exposure, and find some evidence of an association with higher prescribing frequency, higher costs, or lower prescribing quality.
Background
Pharmaceutical companies spent $57.5 billion on pharmaceutical promotion in the United States in 2004. The industry claims that promotion provides scientific and educational information to physicians. While some evidence indicates that promotion may adversely influence prescribing, physicians hold a wide range of views about pharmaceutical promotion. The objective of this review is to examine the relationship between exposure to information from pharmaceutical companies and the quality, quantity, and cost of physicians' prescribing.
Methods and Findings
We searched for studies of physicians with prescribing rights who were exposed to information from pharmaceutical companies (promotional or otherwise). Exposures included pharmaceutical sales representative visits, journal advertisements, attendance at pharmaceutical sponsored meetings, mailed information, prescribing software, and participation in sponsored clinical trials. The outcomes measured were quality, quantity, and cost of physicians' prescribing. We searched Medline (1966 to February 2008), International Pharmaceutical Abstracts (1970 to February 2008), Embase (1997 to February 2008), Current Contents (2001 to 2008), and Central (The Cochrane Library Issue 3, 2007) using search terms developed with an expert librarian. Additionally, we reviewed reference lists and contacted experts and pharmaceutical companies for information. Randomized and observational studies evaluating information from pharmaceutical companies and measures of physicians' prescribing were independently appraised for methodological quality by two authors. Studies were excluded where insufficient study information precluded appraisal. Of the records identified in electronic databases (7,185 studies) and from other sources (138 studies), the full text of 255 articles was retrieved. Articles were then excluded because they did not fulfil inclusion criteria (179) or quality appraisal criteria (18), leaving 58 included studies with 87 distinct analyses. Data were extracted independently by two authors and a narrative synthesis performed following the MOOSE guidelines. Of the set of studies examining prescribing quality outcomes, five found associations between exposure to pharmaceutical company information and lower quality prescribing, four did not detect an association, and one found associations with both lower and higher quality prescribing. Thirty-eight included studies found associations between exposure and higher frequency of prescribing, and 13 did not detect an association. Five included studies found evidence of an association with higher costs, four found no association, and one found an association with lower costs. The narrative synthesis finding of variable results was supported by a meta-analysis of studies of prescribing frequency, which found significant heterogeneity. The observational nature of most included studies is the main limitation of this review.
Conclusions
With rare exceptions, studies of exposure to information provided directly by pharmaceutical companies have found associations with higher prescribing frequency, higher costs, or lower prescribing quality or have not found significant associations. We did not find evidence of net improvements in prescribing, but the available literature does not exclude the possibility that prescribing may sometimes be improved. Still, we recommend that practitioners follow the precautionary principle and thus avoid exposure to information from pharmaceutical companies.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
A prescription drug is a medication that can be supplied only with a written instruction (“prescription”) from a physician or other licensed healthcare professional. In 2009, 3.9 billion drug prescriptions were dispensed in the US alone and US pharmaceutical companies made US$300 billion in sales revenue. Every year, a large proportion of this revenue is spent on drug promotion. In 2004, for example, a quarter of US drug revenue was spent on pharmaceutical promotion. The pharmaceutical industry claims that drug promotion—visits from pharmaceutical sales representatives, advertisements in journals and prescribing software, sponsorship of meetings, mailed information—helps to inform and educate healthcare professionals about the risks and benefits of their products and thereby ensures that patients receive the best possible care. Physicians, however, hold a wide range of views about pharmaceutical promotion. Some see it as a useful and convenient source of information. Others deny that they are influenced by pharmaceutical company promotion but claim that it influences other physicians. Meanwhile, several professional organizations have called for tighter control of promotional activities because of fears that pharmaceutical promotion might encourage physicians to prescribe inappropriate or needlessly expensive drugs.
Why Was This Study Done?
But is there any evidence that pharmaceutical promotion adversely influences prescribing? Reviews of the research literature undertaken in 2000 and 2005 provide some evidence that drug promotion influences prescribing behavior. However, these reviews only partly assessed the relationship between information from pharmaceutical companies and prescribing costs and quality and are now out of date. In this study, therefore, the researchers undertake a systematic review (a study that uses predefined criteria to identify all the research on a given topic) to reexamine the relationship between exposure to information from pharmaceutical companies and the quality, quantity, and cost of physicians' prescribing.
What Did the Researchers Do and Find?
The researchers searched the literature for studies of licensed physicians who were exposed to promotional and other information from pharmaceutical companies. They identified 58 studies that included a measure of exposure to any type of information directly provided by pharmaceutical companies and a measure of physicians' prescribing behavior. They then undertook a “narrative synthesis,” a descriptive analysis of the data in these studies. Ten of the studies, they report, examined the relationship between exposure to pharmaceutical company information and prescribing quality (as judged, for example, by physician drug choices in response to clinical vignettes). All but one of these studies suggested that exposure to drug company information was associated with lower prescribing quality or no association was detected. In the 51 studies that examined the relationship between exposure to drug company information and prescribing frequency, exposure to information was associated with more frequent prescribing or no association was detected. Thus, for example, 17 out of 29 studies of the effect of pharmaceutical sales representatives' visits found an association between visits and increased prescribing; none found an association with less frequent prescribing. Finally, eight studies examined the relationship between exposure to pharmaceutical company information and prescribing costs. With one exception, these studies indicated that exposure to information was associated with a higher cost of prescribing or no association was detected. So, for example, one study found that physicians with low prescribing costs were more likely to have rarely or never read promotional mail or journal advertisements from pharmaceutical companies than physicians with high prescribing costs.
What Do These Findings Mean?
With rare exceptions, these findings suggest that exposure to pharmaceutical company information is associated with either no effect on physicians' prescribing behavior or with adverse effects (reduced quality, increased frequency, or increased costs). Because most of the studies included in the review were observational studies (the physicians in the studies were not randomly selected to receive or not receive drug company information), it is not possible to conclude that exposure to information actually causes any changes in physician behavior. Furthermore, although these findings provide no evidence for any net improvement in prescribing after exposure to pharmaceutical company information, the researchers note that it would be wrong to conclude that improvements do not sometimes happen. The findings support the case for reforms to reduce the negative influence of pharmaceutical promotion on prescribing.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000352.
Wikipedia has pages on prescription drugs and on pharmaceutical marketing (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The UK General Medical Council provides guidelines on good practice in prescribing medicines
The US Food and Drug Administration provides information on prescription drugs and on its Bad Ad Program
Healthy Skepticism is an international nonprofit membership association that aims to improve health by reducing harm from misleading health information
The Drug Promotion Database was developed by the World Health Organization Department of Essential Drugs & Medicines Policy and Health Action International Europe to address unethical and inappropriate drug promotion
doi:10.1371/journal.pmed.1000352
PMCID: PMC2957394  PMID: 20976098
20.  Extracorporeal Lung Support Technologies – Bridge to Recovery and Bridge to Lung Transplantation in Adult Patients 
Executive Summary
For cases of acute respiratory distress syndrome (ARDS) and progressive chronic respiratory failure, the first choice of treatment is mechanical ventilation. For decades, this method has been used to support critically ill patients in respiratory failure. Despite its life-saving potential, however, several experimental and clinical studies have suggested that ventilator-induced lung injury can adversely affect the lungs and patient outcomes. Current opinion is that by reducing the pressure and volume of gas delivered to the lungs during mechanical ventilation, the stress applied to the lungs is eased, enabling them to rest and recover. In addition, mechanical ventilation may fail to provide adequate gas exchange, such that patients may suffer from severe hypoxia and hypercapnia. For these reasons, extracorporeal lung support technologies may play an important role in the clinical management of patients with lung failure, allowing not only the transfer of oxygen and carbon dioxide (CO2) but also buying the lungs the time needed to rest and heal.
Objective
The objective of this analysis was to assess the effectiveness, safety, and cost-effectiveness of extracorporeal lung support technologies in the improvement of pulmonary gas exchange and the survival of adult patients with acute pulmonary failure and those with end-stage chronic progressive lung disease as a bridge to lung transplantation (LTx). The application of these technologies in primary graft dysfunction (PGD) after LTx is beyond the scope of this review and is not discussed.
Clinical Applications of Extracorporeal Lung Support
Extracorporeal lung support technologies [i.e., Interventional Lung Assist (ILA) and extracorporeal membrane oxygenation (ECMO)] have been advocated for use in the treatment of patients with respiratory failure. These techniques do not treat the underlying lung condition; rather, they improve gas exchange while enabling the implantation of a protective ventilation strategy to prevent further damage to the lung tissues imposed by the ventilator. As such, extracorporeal lung support technologies have been used in three major lung failure case types:
As a bridge to recovery in acute lung failure – for patients with injured or diseased lungs to give their lungs time to heal and regain normal physiologic function.
As a bridge to LTx – for patients with irreversible end stage lung disease requiring LTx.
As a bridge to recovery after LTx – used as lung support for patients with PGD or severe hypoxemia.
Ex-Vivo Lung Perfusion and Assessment
Recently, the evaluation and reconditioning of donor lungs ex-vivo has been introduced into clinical practice as a method of improving the rate of donor lung utilization. Generally, about 15% to 20% of donor lungs are suitable for LTx, but these figures may increase with the use of ex-vivo lung perfusion. The ex-vivo evaluation and reconditioning of donor lungs is currently performed at the Toronto General Hospital (TGH) and preliminary results have been encouraging (Personal communication, clinical expert, December 17, 2009). If its effectiveness is confirmed, the use of the technique could lead to further expansion of donor organ pools and improvements in post-LTx outcomes.
Extracorporeal Lung support Technologies
ECMO
The ECMO system consists of a centrifugal pump, a membrane oxygenator, inlet and outlet cannulas, and tubing. Venous blood is drawn from the patient by the pump and passed through the oxygenator, where the exchange of oxygen and CO2 takes place; the reoxygenated blood is then delivered back into one of the patient’s veins or arteries. Additional ports may be added for haemodialysis or ultrafiltration.
Two different techniques may be used to introduce ECMO: venoarterial and venovenous. In the venoarterial technique, cannulation is through either the femoral artery and the femoral vein, or through the carotid artery and the internal jugular vein. In the venovenous technique cannulation is through both femoral veins or a femoral vein and internal jugular vein; one cannula acts as inflow or arterial line, and the other as an outflow or venous line. Venovenous ECMO will not provide adequate support if a patient has pulmonary hypertension or right heart failure. Problems associated with cannulation during the procedure include bleeding around the cannulation site and limb ischemia distal to the cannulation site.
ILA
Interventional Lung Assist (ILA) is used to remove excess CO2 from the blood of patients in respiratory failure. The system is characterized by a novel, low-resistance gas exchange device with a diffusion membrane composed of polymethylpentene (PMP) fibres. These fibres are woven into a complex configuration that maximizes the exchange of oxygen and CO2 by simple diffusion. The system is also designed to operate without the help of an external pump, though one can be added if higher blood flow is required. The device is then applied across an arteriovenous shunt between the femoral artery and femoral vein. Depending on the size of the arterial cannula used and the mean systemic arterial pressure, a blood flow of up to 2.5 L/min can be achieved (up to 5.5 L/min with an external pump). The cannulation is performed after intravenous administration of heparin.
Recently, the first commercially available extracorporeal membrane ventilator (NovaLung GmbH, Hechingen, Germany) was approved for clinical use by Health Canada for patients in respiratory failure. The system has been used in more than 2,000 patients with various indications in Europe, and was used for the first time in North America at the Toronto General Hospital in 2006.
Evidence-Based Analysis
The research questions addressed in this report are:
Does ILA/ECMO facilitate gas exchange in the lungs of patients with severe respiratory failure?
Does ILA/ECMO improve the survival rate of patients with respiratory failure caused by a range of underlying conditions including patients awaiting LTx?
What are the possible serious adverse events associated with ILA/ECMO therapy?
To address these questions, a systematic literature search was performed on September 28, 2009 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Agency for Health Technology Assessment (INAHTA) for studies published from January 1, 2005 to September 28, 2008. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search. Articles with an unknown eligibility were reviewed with a second clinical epidemiologist and then a group of epidemiologists until consensus was established.
Inclusion Criteria
Studies in which ILA/ECMO was used as a bridge to recovery or bridge to LTx
Studies containing information relevant to the effectiveness and safety of the procedure
Studies including at least five patients
Exclusion Criteria
Studies reporting the use of ILA/ECMO for inter-hospital transfers of critically ill patients
Studies reporting the use of ILA/ECMO in patients during or after LTx
Animal or laboratory studies
Case reports
Outcomes of Interest
Reduction in partial pressure of CO2
Correction of respiratory acidosis
Improvement in partial pressure of oxygen
Improvement in patient survival
Frequency and severity of adverse events
The search yielded 107 citations in MEDLINE and 107 citations in EMBASE. After reviewing the information provided in the titles and abstracts, eight citations were found to meet the study inclusion criteria. One study was then excluded because of an overlap in the study population with a previous study. Reference checking did not produce any additional studies for inclusion. Seven case series studies, all conducted in Germany, were thus included in this review (see Table 1).
Also included is the recently published CESAR trial, a multicentre RCT conducted in the UK in which ECMO was compared with conventional intensive care management. The results of the CESAR trial were published as this review was being initiated. In the absence of any other recent RCT on ECMO, the results of this trial were considered for this assessment and no further searches were conducted. A separate literature search was then conducted for the application of ECMO as a bridge to LTx (January 1, 2005 to present). A total of 127 citations on this topic were identified and reviewed, but none were found to have examined the use of ECMO as a bridge to LTx.
Quality of Evidence
To grade the quality of evidence, the grading system formulated by the GRADE working group and adopted by MAS was applied. The GRADE system classifies the quality of a body of evidence as high, moderate, low, or very low according to four key elements: study design, study quality, consistency across studies, and directness.
Results
Trials on ILA
Of the seven studies identified, six involved patients with ARDS caused by a range of underlying conditions; the seventh included only patients awaiting LTx. All studies reported the rate of gas exchange and respiratory mechanics before ILA and for up to 7 days of ILA therapy. Four studies reported the means and standard deviations of blood gas transfer and arterial blood pH, which were used for meta-analysis.
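The report does not specify the pooling method used; the following minimal sketch assumes a standard fixed-effect (inverse-variance) pooling of mean differences, with placeholder effect sizes and standard errors rather than data from the included studies.

    # Illustrative inverse-variance (fixed-effect) pooling of mean differences.
    # Effect sizes (e.g., change in PaCO2) and standard errors are placeholders,
    # not values from the studies included in this review.
    effects = [(-18.0, 4.0), (-22.5, 5.5), (-15.0, 3.5), (-20.0, 6.0)]  # (mean diff, SE)
    weights = [1 / se**2 for _, se in effects]
    pooled = sum(w * e for (e, _), w in zip(effects, weights)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    print(f"Pooled mean difference: {pooled:.1f} (SE {pooled_se:.1f})")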
Fischer et al. reported their first experience with the use of ILA as a bridge to LTx. In their study, 12 patients at high urgency status for LTx, who also had severe ventilation-refractory hypercapnia and respiratory acidosis, were connected to ILA prior to LTx. Seven patients had a systemic infection or sepsis prior to ILA insertion. Six hours after initiation of ILA, the partial pressure of CO2 in arterial blood had significantly decreased (P < .05) and arterial blood pH had significantly improved (P < .05); both remained stable for one week (the last time point reported). The partial pressure of oxygen in arterial blood improved from 71 mmHg to 83 mmHg 6 hours after insertion of ILA. The ratio of PaO2/FiO2 improved from 135 at baseline to 168 at 24 hours after insertion of ILA, but returned to baseline values in the following week.
Trials on ECMO
The UK-based CESAR trial was conducted to assess the effectiveness and cost of ECMO therapy for severe, acute respiratory failure. The trial protocol was published in 2006 and details of the methods used for the economic evaluation were published in 2008. The study itself was a pragmatic trial (similar to a UK trial of neonatal ECMO), in which best standard practice was compared with an ECMO protocol. The trial involved 180 patients with acute but potentially reversible respiratory failure, each with a Murray score of ≥ 3.0 or uncompensated hypercapnia at a pH of < 7.2. Enrolled patients were randomized in a 1:1 ratio to receive either conventional ventilation treatment or ECMO while on a ventilator. Conventional management included intermittent positive pressure ventilation, high frequency oscillatory ventilation, or both. Because this was a pragmatic trial, no specific management protocol was imposed; rather, the treatment centres were advised to follow a low-volume, low-pressure ventilation strategy. A tidal volume of 4 to 8 mL/kg body weight and a plateau pressure of < 30 cm H2O were recommended.
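As a worked illustration of the recommended ventilation strategy (the 75 kg body weight is a hypothetical example, not a trial figure):

    # Tidal-volume range under the recommended low-volume strategy (4-8 mL/kg).
    # The body weight below is a hypothetical example.
    weight_kg = 75
    low, high = 4 * weight_kg, 8 * weight_kg
    print(f"Recommended tidal volume: {low}-{high} mL (plateau pressure < 30 cm H2O)")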
Conclusions
ILA
Bridge to recovery
No RCTs or observational studies compared ILA to other treatment modalities.
Case series have shown that ILA therapy results in significant CO2 removal from arterial blood and correction of respiratory acidosis, as well as an improvement in oxygen transfer.
ILA therapy enabled a lowering of ventilator settings to protect the lungs without adversely affecting arterial blood CO2 or arterial blood pH.
The impact of ILA on patient long-term survival cannot be determined through the studies reviewed.
In-hospital mortality across studies ranged from 20% to 65%.
Ischemic complications were the most frequent adverse events following ILA therapy.
Leg amputation is a rare but possible outcome of ILA therapy, having occurred in about 0.9% of patients in these case series. Newer techniques involving the insertion of an additional cannula into the femoral artery to perfuse the leg may lower this rate.
Bridge to LTx
The results of one case series (n=12) showed that ILA effectively removes CO2 from arterial blood and corrects respiratory acidosis in patients with ventilation-refractory hypercapnia awaiting LTx.
Eight of the 12 patients (67%) awaiting LTx were successfully transplanted, and one-year survival for those transplanted was 80%.
Since all studies are case series, the grade of the evidence for these observations is classified as “LOW”.
ECMO
Bridge to recovery
Based on the results of a pragmatic trial and an intention-to-treat analysis, referral of patients to an ECMO-based centre significantly improves survival without disability compared with conventional ventilation. The results of the CESAR trial showed that:
For patients with information about disability, survival without severe disability was significantly higher in the ECMO arm.
Assuming that the three patients in the conventional ventilation arm who did not have information about severe disability were all disabled, the results were also significant.
Assuming that none of these patients were disabled, the results were of borderline significance.
A greater, though not statistically significant, proportion of patients in the ECMO arm survived.
The rate of serious adverse events was higher among patients in the ECMO group.
The grade of evidence for the above observations is classified as “HIGH”.
Bridge to LTx
No studies fitting the inclusion criteria were identified.
There are no accurate data on the use of ECMO in patients awaiting LTx.
Economic Analysis
The objective of the economic analysis was to determine the costs associated with extracorporeal lung support technologies used as a bridge to LTx in adults. A literature search was conducted with a target population of adults eligible for extracorporeal lung support. The primary analytic perspective was that of the Ministry of Health and Long-Term Care (MOHLTC). Articles published in English and fitting the following inclusion criteria were reviewed:
Full economic evaluations including cost-effectiveness analyses (CEA), cost-utility analyses (CUA), and cost-benefit analyses (CBA);
Economic evaluations reporting incremental cost-effectiveness ratios (ICERs), i.e., cost per quality-adjusted life year (QALY), cost per life year gained (LYG), or cost per event avoided; and
Studies in patients eligible for lung support technologies used as a bridge to lung transplantation.
The search yielded no articles reporting comparative economic analyses.
Resource Use and Costs
Costs associated with both ILA and ECMO (outlined in Table ES-1) were obtained from the University Health Network (UHN) case costing initiative (personal communication, UHN, January 2010). Consultation with a clinical expert in the field, situated at the UHN in Toronto, was also conducted to verify resource utilization. The UHN has one ECMO machine, which cost approximately $100,000. The system is 18 years old and is used an average of 3 to 4 times a year, with 35 procedures performed over the last 9 years. The disposable cost per patient associated with ECMO is, on average, $2,200. There is a maintenance cost associated with the machine (not reported by the UHN), which is currently absorbed by the hospital’s biomedical engineering department.
The average capital cost of an ILA device is $7,100 per device, per patient, while the average cost of the reusable pump is $65,000. The UHN has performed 16 of these procedures over the last 2.5 years. Similarly, there is a maintenance cost that was not reported by the UHN but is absorbed by the hospital’s biomedical engineering department.
Resources Associated with Extracorporeal Lung Support Technologies
Hospital costs associated with ILA were based on the average cost incurred by the hospital for 11 cases performed in FY 07/08 (personal communication, UHN, January 2010). The resources used in this hospital procedure included:
Device and disposables
OR transplant
Surgical ICU
Laboratory work
Medical imaging
Pharmacy
Clinical nutrition
Physiotherapy
Occupational therapy
Speech and language pathology
Social work
The average length of stay in hospital was 61 days for ILA (range: 5 to 164 days) and the average direct cost was $186,000 per case (range: $19,000 to $552,000). This procedure has a high staffing requirement to monitor patients in hospital, driving up the average cost per case.
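Putting the reported figures together gives a rough sense of per-case costs; this is an illustration using only the numbers quoted above, not a formal costing analysis.

    # Rough per-case cost arithmetic from the figures reported above.
    # ECMO: capital cost spread over the 35 procedures performed, plus disposables.
    ecmo_capital_per_case = 100_000 / 35           # ~ $2,857 per case
    ecmo_per_case = ecmo_capital_per_case + 2_200  # ~ $5,057 before staffing costs
    # ILA: average daily cost implied by the mean direct cost and length of stay.
    ila_daily = 186_000 / 61                       # ~ $3,049 per day
    print(round(ecmo_per_case), round(ila_daily))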
PMCID: PMC3415698  PMID: 23074408
21.  Polysomnography in Patients With Obstructive Sleep Apnea 
Executive Summary
Objective
The objective of this health technology policy assessment was to evaluate the clinical utility and cost-effectiveness of sleep studies in Ontario.
Clinical Need: Target Population and Condition
Sleep disorders are common and obstructive sleep apnea (OSA) is the predominant type. Obstructive sleep apnea is the repetitive complete obstruction (apnea) or partial obstruction (hypopnea) of the collapsible part of the upper airway during sleep. The syndrome is associated with excessive daytime sleepiness or chronic fatigue. Several studies have shown that OSA is associated with hypertension, stroke, and other cardiovascular disorders; many researchers believe that these cardiovascular disorders are consequences of OSA. These associations have generated increasing interest in sleep studies in recent years.
The Technology Being Reviewed
There is no ‘gold standard’ for the diagnosis of OSA, which makes it difficult to calibrate any test for diagnosis. Traditionally, polysomnography (PSG) in an attended setting (sleep laboratory) has been used as a reference standard for the diagnosis of OSA. Polysomnography measures several sleep variables, one of which is the apnea-hypopnea index (AHI) or respiratory disturbance index (RDI). The AHI is defined as the sum of apneas and hypopneas per hour of sleep; apnea is defined as the absence of airflow for ≥ 10 seconds; and hypopnea is defined as reduction in respiratory effort with ≥ 4% oxygen desaturation. The RDI is defined as the sum of apneas, hypopneas, and abnormal respiratory events per hour of sleep. Often the two terms are used interchangeably. The AHI has been widely used to diagnose OSA, although with different cut-off levels, the bases for which are often unclear or arbitrarily determined. Generally, an AHI of more than five events per hour of sleep is considered abnormal and the patient is considered to have a sleep disorder. An abnormal AHI accompanied by excessive daytime sleepiness is the hallmark of an OSA diagnosis. For patients diagnosed with OSA, continuous positive airway pressure (CPAP) therapy is the treatment of choice. Polysomnography may also be used for titrating CPAP to individual needs.
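As a worked example of the index defined above (the event counts and sleep duration are hypothetical):

    # AHI = (apneas + hypopneas) / hours of sleep; counts below are hypothetical.
    def ahi(apneas: int, hypopneas: int, sleep_hours: float) -> float:
        return (apneas + hypopneas) / sleep_hours

    # 90 apneas and 60 hypopneas over 6 hours of sleep -> AHI of 25 events/hour,
    # well above the usual 5-event threshold for an abnormal result.
    print(ahi(90, 60, 6.0))  # 25.0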
In January 2005, the College of Physicians and Surgeons of Ontario published the second edition of Independent Health Facilities: Clinical Practice Parameters and Facility Standards: Sleep Medicine, commonly known as “The Sleep Book.” The Sleep Book states that OSA is the most common primary respiratory sleep disorder and a full overnight sleep study is considered the current standard test for individuals in whom OSA is suspected (based on clinical signs and symptoms), particularly if CPAP or surgical therapy is being considered.
Polysomnography in a sleep laboratory is time-consuming and expensive. With the evolution of technology, portable devices have emerged that measure, more or less, the same sleep variables at home as are measured in sleep laboratories. Newer CPAP devices also have auto-titration features and can record sleep variables, including AHI. These devices, if equally accurate, may reduce the dependency on sleep laboratories for the diagnosis of OSA and the titration of CPAP, and thus may be more cost-effective.
Difficulties arise, however, when trying to assess and compare the diagnostic efficacy of in-home versus in-lab PSG. The AHI measured by portable devices at home is the sum of apneas and hypopneas per hour of time in bed, rather than per hour of sleep, and the absolute diagnostic efficacy of in-lab PSG is unknown. To compare in-home PSG with in-lab PSG, several researchers have used correlation coefficients or sensitivity and specificity, while others have used Bland-Altman plots or receiver operating characteristic (ROC) curves. All these approaches, however, have potential pitfalls. Correlation coefficients do not measure agreement; sensitivity and specificity are not helpful when the true disease status is unknown; and Bland-Altman plots measure agreement but are helpful only when the range of clinical equivalence is known. Lastly, ROC curves are generated using logistic regression with the true disease status as the dependent variable and test values as the independent variable. Each value of the test is used as a cut-point to measure sensitivity and specificity, which are then plotted on an x-y plane. The cut-point that maximizes both sensitivity and specificity is chosen as the cut-off level to discriminate between disease and no-disease states. In the absence of a gold standard to determine the true disease status, ROC curves are of minimal value.
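To make this concrete, the following minimal sketch (with synthetic data, not study data) computes sensitivity and specificity at a single AHI cut-point; sweeping the cut-point across all observed test values is what traces out an ROC curve.

    # Sensitivity and specificity at one AHI cut-point; data are synthetic.
    # Each pair is (measured AHI, true disease status), the latter being
    # exactly what is unknown in practice without a gold standard.
    cases = [(32, True), (8, True), (12, True), (7, False), (14, False), (4, False)]
    cutoff = 10
    tp = sum(1 for a, d in cases if d and a >= cutoff)
    fn = sum(1 for a, d in cases if d and a < cutoff)
    tn = sum(1 for a, d in cases if not d and a < cutoff)
    fp = sum(1 for a, d in cases if not d and a >= cutoff)
    sensitivity = tp / (tp + fn)   # 2/3 with this synthetic data
    specificity = tn / (tn + fp)   # 2/3 with this synthetic data
    print(sensitivity, specificity)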
At the request of the Ontario Health Technology Advisory Committee (OHTAC), MAS has thus reviewed the literature on PSG published over the last two years to examine new developments.
Methods
Review Strategy
There is a large body of literature on sleep studies and several reviews have been conducted. Two large cohort studies, the Sleep Heart Health Study and the Wisconsin Sleep Cohort Study, are the main sources of evidence on sleep literature.
To examine new developments on PSG published in the past two years, MEDLINE, EMBASE, MEDLINE In-Process & Other Non-Indexed Citations, the Cochrane Database of Systematic Reviews and Cochrane CENTRAL, INAHTA, and websites of other health technology assessment agencies were searched. Any study that reported results of in-home or in-lab PSG was included. All articles that reported findings from the Sleep Heart Health Study and the Wisconsin Sleep Cohort Study were also reviewed.
Diffusion of Sleep Laboratories
To estimate the diffusion of sleep laboratories, a list of sleep laboratories licensed under the Independent Health Facility Act was obtained. The annual number of sleep studies per 100,000 individuals in Ontario from 2000 to 2004 was also estimated using administrative databases.
Summary of Findings
Literature Review
A total of 315 articles were identified that were published in the past two years; 227 were excluded after reviewing titles and abstracts. A total of 59 articles were identified that reported findings of the Sleep Heart Health Study and the Wisconsin Sleep Cohort Study.
Prevalence
Based on cross-sectional data from the Wisconsin Sleep Cohort Study of 602 men and women aged 30 to 60 years, it is estimated that the prevalence of sleep-disordered breathing is 9% in women and 24% in men, on the basis of more than five AHI events per hour of sleep. Among the women with sleep-disordered breathing, 22.6% had daytime sleepiness, and among the men, 15.5% had daytime sleepiness. Based on this, the prevalence of OSA in the middle-aged adult population is estimated to be 2% in women and 4% in men.
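The 2% and 4% estimates follow directly from multiplying the prevalence of sleep-disordered breathing by the proportion with daytime sleepiness:

    # OSA prevalence = prevalence of sleep-disordered breathing x proportion sleepy.
    women = 0.09 * 0.226   # ~ 0.020 -> 2%
    men = 0.24 * 0.155     # ~ 0.037 -> rounded to 4%
    print(f"{women:.3f}, {men:.3f}")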
Snoring is present in 94% of OSA patients, but not all snorers have OSA. Women report daytime sleepiness less often compared with their male counterparts (of similar age, body mass index [BMI], and AHI). Prevalence of OSA tends to be higher in older age groups compared with younger age groups.
Diagnostic Value of Polysomnography
It is believed that PSG in the sleep laboratory is more accurate than in-home PSG. In the absence of a gold standard, however, claims of accuracy cannot be substantiated. In general, there is poor correlation between PSG variables and clinical variables. A variety of cut-off points of AHI (> 5, > 10, and > 15) are arbitrarily used to diagnose and categorize severity of OSA, though the clinical importance of these cut-off points has not been determined.
Recently, a study of the use of a therapeutic trial of CPAP to diagnose OSA was reported. The authors studied habitual snorers with daytime sleepiness in the absence of other medical or psychiatric disorders. Using PSG as the reference standard, the authors calculated the sensitivity of this test to be 80% and its specificity to be 97%. Further, they concluded that PSG could be avoided in 46% of this population.
Obstructive Sleep Apnea and Obesity
Obstructive sleep apnea is strongly associated with obesity. Obese individuals (BMI >30 kg/m2) are at higher risk for OSA compared with non-obese individuals and up to 75% of OSA patients are obese. It is hypothesized that obese individuals have large deposits of fat in the neck that cause the upper airway to collapse in the supine position during sleep. The observations reported from several studies support the hypothesis that AHIs (or RDIs) are significantly reduced with weight loss in obese individuals.
Obstructive Sleep Apnea and Cardiovascular Diseases
Associations have been shown between OSA and comorbidities such as diabetes mellitus and hypertension, which are known risk factors for myocardial infarction and stroke. Patients with more severe forms of OSA (based on AHI) report poorer quality of life and increased health care utilization compared with patients with milder forms of OSA. From animal models, it is hypothesized that sleep fragmentation results in glucose intolerance and hypertension. There is, however, no evidence from prospective studies in humans to establish a causal link between OSA and hypertension or diabetes mellitus. It is also not clear that the associations between OSA and other diseases are independent of obesity; in most of these studies, patients with higher values of AHI had higher values of BMI compared with patients with lower AHI values.
A recent meta-analysis of bariatric surgery has shown that weight loss in obese individuals (mean BMI = 46.8 kg/m2; range = 32.30–68.80) significantly improved their health profile. Diabetes was resolved in 76.8% of patients, hypertension was resolved in 61.7% of patients, hyperlipidemia improved in 70% of patients, and OSA resolved in 85.7% of patients. This suggests that obesity leads to OSA, diabetes, and hypertension, rather than OSA independently causing diabetes and hypertension.
Health Technology Assessments, Guidelines, and Recommendations
In April 2005, the Centers for Medicare and Medicaid Services (CMS) in the United States published its decision and review regarding in-home and in-lab sleep studies for the diagnosis and treatment of OSA with CPAP. In order to cover CPAP, CMS requires that a diagnosis of OSA be established using PSG in a sleep laboratory. After reviewing the literature, CMS concluded that the evidence was not adequate to determine that unattended portable sleep study was reasonable and necessary in the diagnosis of OSA.
In May 2005, the Canadian Coordinating Office of Health Technology Assessment (CCOHTA) published a review of guidelines for referral of patients to sleep laboratories. The review included 37 guidelines and associated reviews that covered 18 applications of sleep laboratory studies. The CCOHTA reported that the level of evidence for many applications was of limited quality, that some cited studies were not relevant to the recommendations made, that many recommendations reflect consensus positions only, and that there was a need for more good quality studies of many sleep laboratory applications.
Diffusion
As of the time of writing, there are 97 licensed sleep laboratories in Ontario. In 2000, the number of sleep studies performed in Ontario was 376/100,000 people. There was a steady rise in sleep studies in the following years such that in 2004, 769 sleep studies per 100,000 people were performed, for a total of 96,134 sleep studies. Based on prevalence estimates of the Wisconsin Sleep Cohort Study, it was estimated that 927,105 people aged 30 to 60 years have sleep-disordered breathing. Thus, there may be a 10-fold rise in the rate of sleep tests in the next few years.
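The projected rise follows from comparing the 2004 testing volume with the estimated pool of people with sleep-disordered breathing:

    # 2004 testing volume vs. estimated demand, from the figures above.
    performed_2004 = 96_134
    estimated_sdb = 927_105
    print(estimated_sdb / performed_2004)  # ~ 9.6, i.e., roughly a 10-fold gap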
Economic Analysis
In 2004, approximately 96,000 sleep studies were conducted in Ontario at a total cost of ~$47 million (Cdn). Since obesity is associated with sleep-disordered breathing, MAS compared the costs of sleep studies to the cost of bariatric surgery. The cost of bariatric surgery is $17,350 per patient. In 2004, Ontario spent $4.7 million for 270 patients to undergo bariatric surgery in the province, and $8.2 million for 225 patients to seek out-of-country treatment. Using a Markov model, it was concluded that shifting costs from sleep studies to bariatric surgery would benefit more patients with OSA and may also prevent health consequences related to diabetes, hypertension, and hyperlipidemia. It is estimated that the annual cost of treating comorbid conditions in morbidly obese patients often exceeds $10,000 per patient. Thus, the downstream cost savings could be substantial.
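The per-patient arithmetic behind this comparison, using only the figures quoted above, is straightforward:

    # Cost comparison from the figures above (rough illustration only).
    sleep_study_cost = 47_000_000 / 96_000      # ~ $490 per study
    bariatric_in_province = 4_700_000 / 270     # ~ $17,400 per patient
    bariatric_out_of_country = 8_200_000 / 225  # ~ $36,400 per patient
    print(round(sleep_study_cost), round(bariatric_in_province),
          round(bariatric_out_of_country))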
Considerations for Policy Development
Weight loss is associated with a decrease in OSA severity. Treating and preventing obesity would also substantially reduce the economic burden associated with diabetes, hypertension, hyperlipidemia, and OSA. Promotion of healthy weights may be achieved by a multisectorial approach as recommended by the Chief Medical Officer of Health for Ontario. Bariatric surgery has the potential to help morbidly obese individuals (BMI > 35 kg/m2 with an accompanying comorbid condition, or BMI > 40 kg/m2) lose weight. In January 2005, MAS completed an assessment of bariatric surgery, based on which OHTAC recommended an improvement in access to these surgeries for morbidly obese patients in Ontario.
Habitual snorers with excessive daytime sleepiness have a high pretest probability of having OSA. These patients could be offered a therapeutic trial of CPAP to diagnose OSA, rather than a PSG. A majority of these patients are also obese and may benefit from weight loss. Individualized weight loss programs should, therefore, be offered and patients who are morbidly obese should be offered bariatric surgery.
That said, and in view of the still evolving understanding of the causes, consequences and optimal treatment of OSA, further research is warranted to identify which patients should be screened for OSA.
PMCID: PMC3379160  PMID: 23074483
22.  The endocannabinoid system links gut microbiota to adipogenesis 
We investigated several models of gut microbiota modulation: selective (prebiotics, probiotics, high-fat), drastic (antibiotics, germ-free mice) and mice bearing specific mutations of a key gene involved in the toll-like receptor (TLR) bacteria-host interaction (Myd88−/−). Here we report that gut microbiota modulates the intestinal endocannabinoid (eCB) system tone, which in turn regulates gut permeability and plasma lipopolysaccharide (LPS) levels. The activation of the intestinal endocannabinoid system increases gut permeability, which in turn enhances plasma LPS levels and inflammation in physiological and pathological conditions such as obesity and type 2 diabetes. The investigation of adipocyte differentiation and lipogenesis (both markers of adipogenesis) indicates that gut microbiota controls adipose tissue physiology through LPS-eCB system regulatory loops and may play a critical role in adipose tissue plasticity during obesity. In vivo, ex vivo and in vitro studies indicate that LPS acts as a master switch on adipose tissue metabolism by blocking cannabinoid-driven adipogenesis.
Obesity and type 2 diabetes have reached epidemic proportions and are associated with a massive expansion of the adipose tissue. Recent data have shown that these metabolic disorders are characterised by low-grade inflammation of unknown molecular origin (Hotamisligil and Erbay, 2008; Shoelson and Goldfine, 2009); therefore, it is of the utmost importance to identify the link between inflammation and adipose tissue metabolism and plasticity. Among the latest important discoveries published in the field, two new concepts have driven this study. First, emerging data have shown that gut microbiota is involved in the control of energy homeostasis (Ley et al, 2005; Turnbaugh et al, 2006; Claus et al, 2008). Obesity is characterised by the massive expansion of adipose tissues and is associated with inflammation (Weisberg et al, 2003). It is possible that both this expansion and the associated inflammation are controlled by microbiota and lipopolysaccharide (LPS) (Cani et al, 2007a, 2008), a cell wall component of Gram-negative bacteria that is among the most potent inducers of inflammation (Cani et al, 2007a, 2007b, 2008; Cani and Delzenne, 2009). Second, obesity is also characterised by greater endocannabinoid (eCB) system tone (increased eCB plasma levels, altered expression of the cannabinoid receptor 1 (CB1) mRNA and increased eCB levels in the adipose tissue) (Engeli et al, 2005; Bluher et al, 2006; Matias et al, 2006; Cote et al, 2007; D'Eon et al, 2008; Starowicz et al, 2008; Di Marzo et al, 2009; Izzo et al, 2009).
Several studies have suggested a close relationship between LPS, gut microbiota and the eCB system. Indeed, LPS controls the synthesis of eCBs in macrophages, and macrophage infiltration of the adipose tissue during obesity is an important factor in the development of metabolic disorders (Weisberg et al, 2003). We have shown that macrophage infiltration is not only dependent on the activation of the receptor CD14 by LPS, but is also dependent on the gut microbiota composition and the gut barrier function (gut permeability) (Cani et al, 2007a, 2008). Moreover, LPS controls the synthesis of eCBs both in vivo (Hoareau et al, 2009) and in vitro (Di Marzo et al, 1999; Maccarrone et al, 2001) through mechanisms dependent on the LPS receptor signalling pathway (Liu et al, 2003). Thus, obesity is now associated with changes in gut microbiota and a higher endocannabinoid system tone, both of which play a role in the disease's pathophysiology.
Given that the convergent molecular mechanisms that may affect these different supersystem activities and adiposity remain to be elucidated, we tested the hypothesis that the gut microbiota and the eCB system control gut permeability and adipogenesis, by a LPS-dependent mechanism, under both physiological and obesity-related conditions.
First, we found that high-fat diet-induced obese and diabetic animals exhibit threefold higher colonic CB1 mRNA, whereas no modification was observed in the small intestinal segment (jejunum). Moreover, selective modulation of gut microbiota using prebiotics (i.e. non-digestible compounds fermented by specific bacteria in the gut) (Gibson and Roberfroid, 1995) reduces this effect by about one half. Similarly, in genetically obese mice (ob/ob), prebiotic treatment decreases colonic CB1 mRNA and colonic eCB concentrations (AEA) (Figure 2A). In addition, we observed a modulation of FAAH and MGL mRNA (Figure 2A). Furthermore, we found that antibiotic treatment, which decreases the gut bacterial content, was associated with a strong reduction of CB1 receptor levels in the colon of healthy mice.
Second, we show that the endocannabinoid system controls gut barrier function (in vivo and in vitro) and endotoxaemia. More precisely, we designed two in vivo experiments in obese and lean mice (Figure 2). In the first experiment, we blocked the CB1 receptor in obese mice with a specific and selective antagonist (SR141716A) and found that the blockade of the CB1 receptor reduces plasma LPS levels by a mechanism linked to the improvement of the gut barrier function (Figure 2C), as shown by the lesser alteration of tight junction protein (zonula occludens-1 (ZO-1) and occludin) distribution and localisation, and independently of food intake behaviour (Figures 2D and 3). In a second set of experiments performed in lean wild-type mice, we mimicked the increased eCB system tone observed during obesity by chronic (4-week) infusion of a cannabinoid receptor agonist (HU-210) through mini-pumps implanted subcutaneously. We found that cannabinoid agonist administration significantly increased plasma LPS levels. Furthermore, increased plasma fluorescein isothiocyanate-dextran levels were observed after oral gavage (Figure 2F and G). These sets of in vivo experiments strongly suggest that an overactive eCB system increases gut permeability. Finally, in a cellular model of the intestinal epithelial barrier (Caco-2 cell monolayer), we found that a CB1 receptor antagonist normalised the epithelial barrier alterations induced by LPS and by the cannabinoid receptor agonist HU-210.
Third, we provide evidence that adipogenesis is under the control of the gut microbiota, through the modulation of the gut and adipose tissue endocannabinoid systems in both physiological and pathological conditions. We found that the higher eCB system tone (found in obesity or mimicked by an eCB agonist) participates in the regulation of adipogenesis by directly acting on the adipose tissue, but also indirectly by increasing plasma LPS levels, which consequently impair adipogenesis and promote inflammatory states. Here, we found that both the specific modulation of the gut microbiota and the blockade of the CB1 receptor decrease plasma LPS levels and are associated with higher rates of adipocyte differentiation and lipogenesis. One possible explanation for these surprising data could be as follows: plasma LPS levels might be under the control of CB1 in the intestine (gut barrier function); therefore, under particular pathophysiological conditions in vivo (e.g., obesity/type 2 diabetes), this could lead to higher circulating LPS levels. Furthermore, CB1 receptor blockade might paradoxically increase adipogenesis because of the ability of a CB1 antagonist to reduce gut permeability and counteract the LPS-induced inhibitory effect on adipocyte differentiation and lipogenesis (i.e., a disinhibition mechanism). In summary, given that these treatments reduce gut permeability and, hence, plasma LPS levels and inflammatory tone, we hypothesised that LPS could act as a regulator in this process. This hypothesis was further supported in vitro and in vivo by the observation that cannabinoid-induced adipocyte differentiation and lipogenesis were directly reduced in the presence of physiological levels of LPS. Altogether, our data provide evidence that the consequences of obesity and gut microbiota dysregulation on gut permeability and metabolic endotoxaemia are clearly mediated by the eCB system, whereas those observed on adiposity are likely the result of interactions between two systems: LPS-dependent pathway activity and eCB system tone dysregulation (Figure 9).
Our results indicate that the endocannabinoid system tone and the plasma LPS levels have a critical function in the regulation of the adipose tissue plasticity. As obesity is commonly characterised by increased eCB system tone, higher plasma LPS levels, altered gut microbiota and impaired adipose tissue metabolism, it is likely that the increased eCB system tone found in obesity is caused by a failure or a vicious cycle within the pathways controlling the eCB system.
These findings show that two novel therapeutic targets in the treatment of obesity, the gut microbiota and the endocannabinoid system, are closely interconnected. They also provide evidence for the presence of a new integrative physiological axis between gut and adipose tissue regulated by LPS and endocannabinoids. Finally, we propose that the increased endotoxaemia and endocannabinoid system tone found in obesity might explain the altered adipose tissue metabolism.
Obesity is characterised by altered gut microbiota, low-grade inflammation and increased endocannabinoid (eCB) system tone; however, a clear connection between gut microbiota and eCB signalling has yet to be confirmed. Here, we report that gut microbiota modulate the intestinal eCB system tone, which in turn regulates gut permeability and plasma lipopolysaccharide (LPS) levels. The impact of the increased plasma LPS levels and eCB system tone found in obesity on adipose tissue metabolism (e.g. differentiation and lipogenesis) remains unknown. By interfering with the eCB system using CB1 agonist and antagonist in lean and obese mouse models, we found that the eCB system controls gut permeability and adipogenesis. We also show that LPS acts as a master switch to control adipose tissue metabolism both in vivo and ex vivo by blocking cannabinoid-driven adipogenesis. These data indicate that gut microbiota determine adipose tissue physiology through LPS-eCB system regulatory loops and may have critical functions in adipose tissue plasticity during obesity.
doi:10.1038/msb.2010.46
PMCID: PMC2925525  PMID: 20664638
adipose tissue; endocannabinoids; gut microbiota; lipopolysaccharide (LPS); obesity
23.  The Effects of Pay for Performance on Disparities in Stroke, Hypertension, and Coronary Heart Disease Management: Interrupted Time Series Study 
PLoS ONE  2011;6(12):e27236.
Background
The Quality and Outcomes Framework (QOF), a major pay-for-performance programme, was introduced into United Kingdom primary care in April 2004. The impact of this programme on disparities in health care remains unclear. This study examines the following questions: has this pay for performance programme improved the quality of care for coronary heart disease, stroke and hypertension in white, black and south Asian patients? Has this programme reduced disparities in the quality of care between these ethnic groups? Did general practices with different baseline performance respond differently to this programme?
Methodology/Principal Findings
Retrospective cohort study of patients registered with family practices in Wandsworth, London during 2007. Segmented regression analysis of interrupted time series was used to take into account the previous time trend. Primary outcome measures were mean systolic and diastolic blood pressure, and cholesterol levels. Our findings suggest that the implementation of QOF resulted in significant short-term improvements in blood pressure control. The magnitude of benefit varied between ethnic groups, with a statistically significant short-term reduction in systolic BP in white and black patients, but not in south Asian patients, with hypertension. Disparities in risk factor control were attenuated on only a few measures and largely remained intact at the end of the study period.
Conclusions/Significance
Pay for performance programmes such as the QOF in the UK should set challenging but achievable targets. Specific targets aimed at reducing ethnic disparities in health care may also be needed.
doi:10.1371/journal.pone.0027236
PMCID: PMC3240616  PMID: 22194781
24.  Trends in utilization of lipid- and blood pressure-lowering agents and goal attainment among the U.S. diabetic population, 1999-2008 
Background
For patients with diabetes, clinical practice guidelines recommend treating to a low-density lipoprotein cholesterol (LDL-C) goal of <2.59 mmol/L (100 mg/dL) and a blood pressure (BP) target of <130/80 mmHg. This analysis assessed recent trends in the utilization of lipid-lowering and BP-lowering agents, as well as LDL-C and BP goal attainment, in the U.S. adult diabetic population.
Methods
9,167 men and nonpregnant women aged ≥20 years were identified from the fasting subsample of the 1999-2008 National Health and Nutrition Examination Survey. Diabetes was identified in 1,214 participants by self-report, self-reported use of insulin or oral medications for diabetes, or fasting glucose ≥6.99 mmol/L (126 mg/dL).
Results
The prevalence of diagnosed or undiagnosed diabetes increased significantly over the past decade, from 7.4% in 1999-2000 to 11.9% in 2007-2008 (P = 0.0007). During this period, the use of lipid-lowering agents by participants with diabetes increased from 19.5% to 42.2% (P < 0.0001), and the proportion at LDL-C goal increased from 29.7% to 54.4% (P < 0.0001). Although there was a significant increase in antihypertensive medication use (from 35.4% to 58.9%; P < 0.0001), there was no significant change in the proportion of participants at BP goal (from 47.6% to 55.1%; P = 0.1333) or prevalence of hypertension (from 66.6% to 74.2%; P = 0.3724).
Conclusions
The proportion of diabetic individuals taking lipid- and BP-lowering agents has increased significantly in recent years. However, while there has been a significant improvement in LDL-C goal attainment, nearly one-half of all U.S. adults with diabetes are not at recommended LDL-C or BP treatment goals.
doi:10.1186/1475-2840-10-31
PMCID: PMC3098774  PMID: 21496321
25.  Caregiver- and Patient-Directed Interventions for Dementia 
Executive Summary
In early August 2007, the Medical Advisory Secretariat began work on the Aging in the Community project, an evidence-based review of the literature surrounding healthy aging in the community. The Health System Strategy Division at the Ministry of Health and Long-Term Care subsequently asked the secretariat to provide an evidentiary platform for the ministry’s newly released Aging at Home Strategy.
After a broad literature review and consultation with experts, the secretariat identified 4 key areas that strongly predict an elderly person’s transition from independent community living to a long-term care home. Evidence-based analyses have been prepared for each of these 4 areas: falls and fall-related injuries, urinary incontinence, dementia, and social isolation. For the first area, falls and fall-related injuries, an economic model is described in a separate report.
Please visit the Medical Advisory Secretariat Web site, http://www.health.gov.on.ca/english/providers/program/mas/mas_about.html, to review these titles within the Aging in the Community series.
Aging in the Community: Summary of Evidence-Based Analyses
Prevention of Falls and Fall-Related Injuries in Community-Dwelling Seniors: An Evidence-Based Analysis
Behavioural Interventions for Urinary Incontinence in Community-Dwelling Seniors: An Evidence-Based Analysis
Caregiver- and Patient-Directed Interventions for Dementia: An Evidence-Based Analysis
Social Isolation in Community-Dwelling Seniors: An Evidence-Based Analysis
The Falls/Fractures Economic Model in Ontario Residents Aged 65 Years and Over (FEMOR)
This report features the evidence-based analysis on caregiver- and patient-directed interventions for dementia and is broken down into 4 sections:
Introduction
Caregiver-Directed Interventions for Dementia
Patient-Directed Interventions for Dementia
Economic Analysis of Caregiver- and Patient-Directed Interventions for Dementia
Caregiver-Directed Interventions for Dementia
Objective
To identify interventions that may be effective in supporting the well-being of unpaid caregivers of seniors with dementia living in the community.
Clinical Need: Target Population and Condition
Dementia is a progressive and largely irreversible syndrome that is characterized by a loss of cognitive function severe enough to impact social or occupational functioning. The components of cognitive function affected include memory and learning, attention, concentration and orientation, problem-solving, calculation, language, and geographic orientation. Dementia was identified as one of the key predictors in a senior’s transition from independent community living to admission to a long-term care (LTC) home, in that approximately 90% of individuals diagnosed with dementia will be institutionalized before death. In addition, cognitive decline linked to dementia is one of the most commonly cited reasons for institutionalization.
Prevalence estimates of dementia in the Ontario population have largely been extrapolated from the Canadian Study of Health and Aging conducted in 1991. Based on these estimates, it is projected that there will be approximately 165,000 dementia cases in Ontario in the year 2008, and by 2010 the number of cases will increase by nearly 17% over 2005 levels. By 2020 the number of cases is expected to increase by nearly 55%, due to a rise in the number of people in the age categories with the highest prevalence (85+). With the increase in the aging population, dementia will continue to have a significant economic impact on the Canadian health care system. In 1991, the total costs associated with dementia in Canada were $3.9 billion (Cdn) with $2.18 billion coming from LTC.
Caregivers play a crucial role in the management of individuals with dementia because of the high level of dependency and morbidity associated with the condition. It has been documented that a greater demand is faced by dementia caregivers compared with caregivers of persons with other chronic diseases. The increased burden of caregiving contributes to a host of chronic health problems seen among many informal caregivers of persons with dementia. Much of this burden results from managing the behavioural and psychological symptoms of dementia (BPSD), which have been established as a predictor of institutionalization for elderly patients with dementia.
It is recognized that for some patients with dementia, an LTC facility can provide the most appropriate care; however, many patients move into LTC unnecessarily. For individuals with dementia to remain in the community longer, caregivers require many types of formal and informal support services to alleviate the stress of caregiving. These include both respite care and psychosocial interventions. Psychosocial interventions encompass a broad range of interventions such as psychoeducational interventions, counseling, supportive therapy, and behavioural interventions.
Assuming that 50% of persons with dementia live in the community, a conservative estimate of the number of informal caregivers in Ontario is 82,500. Accounting for the fact that 29% of people with dementia live alone, this leaves a remaining estimate of 58,575 Ontarians providing care for a person with dementia with whom they reside.
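These estimates follow directly from the projected 165,000 Ontario cases:

    # Caregiver estimates from the assumptions stated above.
    cases = 165_000
    in_community = cases * 0.50                   # 50% live in the community -> 82,500
    co_resident = in_community * (1 - 0.29)       # 29% live alone -> 58,575 co-resident
    print(round(in_community), round(co_resident))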
Description of Interventions
The 2 main categories of caregiver-directed interventions examined in this review are respite care and psychosocial interventions. Respite care is defined as a break or relief for the caregiver. In most cases, respite is provided in the home, through day programs, or at institutions (usually 30 days or less). Depending on a caregiver’s needs, respite services will vary in delivery and duration. Respite care is carried out by a variety of individuals, including paid staff, volunteers, family, or friends.
Psychosocial interventions encompass a broad range of interventions and have been classified in various ways in the literature. This review will examine educational, behavioural, dementia-specific, supportive, and coping interventions. The analysis focuses on behavioural interventions, that is, those designed to help the caregiver manage BPSD. As described earlier, BPSD are one of the most challenging aspects of caring for a senior with dementia, causing an increase in caregiver burden. The analysis also examines multicomponent interventions, which include at least 2 of the above-mentioned interventions.
Methods of Evidence-Based Analysis
A comprehensive search strategy was used to identify systematic reviews and randomized controlled trials (RCTs) that examined the effectiveness of interventions for caregivers of dementia patients.
Questions
Section 2.1
Are respite care services effective in supporting the well-being of unpaid caregivers of seniors with dementia in the community?
Do respite care services impact on rates of institutionalization of these seniors?
Section 2.2
Which psychosocial interventions are effective in supporting the well-being of unpaid caregivers of seniors with dementia in the community?
Which interventions reduce the risk for institutionalization of seniors with dementia?
Outcomes of Interest
any quantitative measure of caregiver psychological health, including caregiver burden, depression, quality of life, well-being, strain, mastery (taking control of one’s situation), reactivity to behaviour problems, etc.;
rate of institutionalization; and
cost-effectiveness.
Assessment of Quality of Evidence
The quality of the evidence was assessed as High, Moderate, Low, or Very low according to the GRADE methodology developed by the GRADE Working Group.
Summary of Findings
Conclusions in Table 1 are drawn from Sections 2.1 and 2.2 of the report.
Summary of Conclusions on Caregiver-Directed Interventions
There is limited evidence from RCTs that respite care is effective in improving outcomes for those caring for seniors with dementia.
There is considerable qualitative evidence of the perceived benefits of respite care.
Respite care is known as one of the key formal support services for alleviating caregiver burden in those caring for dementia patients.
Respite care services need to be tailored to individual caregiver needs, as there are vast differences among caregivers and patients with dementia (severity, type of dementia, amount of informal/formal support available, housing situation, etc.).
There is moderate- to high-quality evidence that individual behavioural interventions (≥ 6 sessions), directed towards the caregiver (or combined with the patient) are effective in improving psychological health in dementia caregivers.
There is moderate- to high-quality evidence that multicomponent interventions improve caregiver psychosocial health and may affect rates of institutionalization of dementia patients.
RCT indicates randomized controlled trial.
Patient-Directed Interventions for Dementia
Objective
The section on patient-directed interventions for dementia is broken down into 4 subsections with the following questions:
3.1 Physical Exercise for Seniors with Dementia – Secondary Prevention
What is the effectiveness of physical exercise for the improvement or maintenance of basic activities of daily living (ADLs), such as eating, bathing, toileting, and functional ability, in seniors with mild to moderate dementia?
3.2 Nonpharmacologic and Nonexercise Interventions to Improve Cognitive Functioning in Seniors With Dementia – Secondary Prevention
What is the effectiveness of nonpharmacologic interventions to improve cognitive functioning in seniors with mild to moderate dementia?
3.3 Physical Exercise for Delaying the Onset of Dementia – Primary Prevention
Can exercise decrease the risk of subsequent cognitive decline/dementia?
3.4 Cognitive Interventions for Delaying the Onset of Dementia – Primary Prevention
Does cognitive training decrease the risk of cognitive impairment, deterioration in the performance of basic ADLs or instrumental activities of daily living (IADLs), or incidence of dementia in seniors with good cognitive and physical functioning?
Clinical Need: Target Population and Condition
Secondary Prevention
Exercise
Physical deterioration is linked to dementia: decreased activity levels lead to reduced muscle mass and muscle atrophy, increasing the potential for unsafe mobility while performing basic ADLs, such as eating, bathing, and toileting, and other functional activities.
Improved physical conditioning for seniors with dementia may extend their independent mobility and maintain their performance of ADLs.
Nonpharmacologic and Nonexercise Interventions
Cognitive impairments, including memory problems, are a defining feature of dementia. These impairments can lead to anxiety, depression, and withdrawal from activities. The impact of these cognitive problems on daily activities increases pressure on caregivers.
Cognitive interventions aim to improve these impairments in people with mild to moderate dementia.
Primary Prevention
Exercise
Various vascular risk factors have been found to contribute to the development of dementia (e.g., hypertension, hypercholesterolemia, diabetes, overweight).
Physical exercise is important in promoting overall and vascular health. However, it is unclear whether physical exercise can decrease the risk of cognitive decline/dementia.
Nonpharmacologic and Nonexercise Interventions
Having more years of education (i.e., a higher cognitive reserve) is associated with a lower prevalence of dementia in cross-sectional population-based studies and a lower incidence of dementia in cohorts followed longitudinally. However, it is unclear whether cognitive training can increase cognitive reserve or decrease the risk of cognitive impairment, prevent or delay deterioration in the performance of ADLs or IADLs, or reduce the incidence of dementia.
Description of Interventions
Physical exercise and nonpharmacologic/nonexercise interventions (e.g., cognitive training) for the primary and secondary prevention of dementia are assessed in this review.
Evidence-Based Analysis Methods
A comprehensive search strategy was used to identify systematic reviews and RCTs that examined the effectiveness, safety and cost effectiveness of exercise and cognitive interventions for the primary and secondary prevention of dementia.
Questions
Section 3.1: What is the effectiveness of physical exercise for the improvement or maintenance of ADLs in seniors with mild to moderate dementia?
Section 3.2: What is the effectiveness of nonpharmacologic/nonexercise interventions to improve cognitive functioning in seniors with mild to moderate dementia?
Section 3.3: Can exercise decrease the risk of subsequent cognitive decline/dementia?
Section 3.4: Does cognitive training decrease the risk of cognitive impairment, prevent or delay deterioration in the performance of ADLs or IADLs, or reduce the incidence of dementia in seniors with good cognitive and physical functioning?
Assessment of Quality of Evidence
The quality of the evidence was assessed as High, Moderate, Low, or Very low according to the GRADE methodology.
Summary of Findings
Table 2 summarizes the conclusions from Sections 3.1 through 3.4.
Summary of Conclusions on Patient-Directed Interventions*
A previous systematic review indicated that “cognitive training” is not effective in patients with dementia.
A recent RCT suggests that CST (up to 7 weeks) is effective for improving cognitive function and quality of life in patients with dementia.
Regular leisure time physical activity in midlife is associated with a reduced risk of dementia in later life (mean follow-up 21 years).
Regular physical activity in seniors is associated with a reduced risk of cognitive decline (mean follow-up 2 years).
Regular physical activity in seniors is associated with a reduced risk of dementia (mean follow-up 6–7 years).
There is evidence that cognitive training for specific functions (memory, reasoning, and speed of processing) produces improvements in these specific domains.
There is limited, inconclusive evidence that cognitive training can offset deterioration in the performance of self-reported IADL scores and performance assessments.
1° indicates primary; 2°, secondary; CST, cognitive stimulation therapy; IADL, instrumental activities of daily living; RCT, randomized controlled trial.
Benefit/Risk Analysis
As per the GRADE Working Group, the overall recommendations consider 4 main factors:
the trade-offs, taking into account the estimated size of the effect for the main outcome, the confidence limits around those estimates, and the relative value placed on the outcome;
the quality of the evidence;
translation of the evidence into practice in a specific setting, taking into consideration important factors that could be expected to modify the size of the expected effects such as proximity to a hospital or availability of necessary expertise; and
uncertainty about the baseline risk for the population of interest.
The GRADE Working Group also recommends that incremental costs of health care alternatives should be considered explicitly alongside the expected health benefits and harms. Recommendations rely on judgments about the value of the incremental health benefits in relation to the incremental costs. The last column in Table 3 reflects the overall trade-off between benefits and harms (adverse events) and incorporates any risk/uncertainty (cost-effectiveness).
Overall Summary Statement of the Benefit and Risk for Patient-Directed Interventions*
Economic Analysis
Budget Impact Analysis of Effective Interventions for Dementia
Caregiver-directed behavioural techniques and patient-directed exercise programs were found to be effective when assessing mild to moderate dementia outcomes in seniors living in the community. Therefore, an annual budget impact was calculated based on eligible seniors in the community with mild and moderate dementia and their respective caregivers who were willing to participate in interventional home sessions. Table 4 describes the annual budget impact for these interventions.
Annual Budget Impact (2008 Canadian Dollars)
Assumed 7% prevalence of dementia among seniors aged 65+ in Ontario.
Assumed 8 weekly sessions plus 4 monthly phone calls.
Assumed 12 weekly sessions plus biweekly sessions thereafter (total of 20).
Assumed 2 sessions per week for the first 5 weeks.
Assumed 90% of seniors in the community with dementia have mild to moderate disease.
Assumed 4.5% of seniors 65+ are in long-term care, and the remainder are in the community.
Assumed a rate of participation of 60% for both patients and caregivers, and of 41% for patient-directed exercise.
Assumed 100% compliance, since the intervention is administered at home.
Cost for trained staff from a Ministry of Health and Long-Term Care data source.
Assumed the cost of a personal support worker to be equivalent to in-home support.
Cost for a recreation therapist from the Alberta government website.
Note: This budget impact analysis was calculated for the first year after introducing the interventions from the Ministry of Health and Long-Term Care perspective using prevalence data only. Prevalence estimates are for seniors in the community with mild to moderate dementia and their respective caregivers who are willing to participate in an interventional session administered at the home setting. Incidence and mortality rates were not factored in. Current expenditures in the province are unknown and therefore were not included in the analysis. Numbers may change based on population trends, rate of intervention uptake, trends in current programs in place in the province, and assumptions on costs. The number of patients was based on patients likely to access these interventions in Ontario based on assumptions stated below from the literature. An expert panel confirmed resource consumption.
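A minimal sketch of the prevalence-based eligibility calculation described in this note follows; the seniors-population input is a placeholder, not a figure from the report, while the rates are the stated assumptions.

    # Eligible-population sketch for the budget impact analysis.
    # seniors_65_plus is a placeholder input, not a figure from the report.
    seniors_65_plus = 1_700_000
    with_dementia = seniors_65_plus * 0.07          # assumed 7% prevalence
    in_community = with_dementia * (1 - 0.045)      # assumed 4.5% in long-term care
    mild_moderate = in_community * 0.90             # assumed 90% mild to moderate
    caregiver_participants = mild_moderate * 0.60   # assumed 60% participation
    exercise_participants = mild_moderate * 0.41    # assumed 41% for exercise
    print(round(caregiver_participants), round(exercise_participants))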
PMCID: PMC3377513  PMID: 23074509
