1.  Associations between Stroke Mortality and Weekend Working by Stroke Specialist Physicians and Registered Nurses: Prospective Multicentre Cohort Study 
PLoS Medicine  2014;11(8):e1001705.
In a multicenter observational study, Benjamin Bray and colleagues evaluate whether weekend rounds by stroke specialist physicians, or the ratio of registered nurses to beds on weekends, is associated with patient mortality after stroke.
Please see later in the article for the Editors' Summary
Background
Observational studies have reported higher mortality for patients admitted on weekends. It is not known whether this “weekend effect” is modified by clinical staffing levels on weekends. We aimed to test the hypotheses that rounds by stroke specialist physicians 7 d per week and the ratio of registered nurses to beds on weekends are associated with mortality after stroke.
Methods and Findings
We conducted a prospective cohort study of 103 stroke units (SUs) in England. Data on 56,666 patients with stroke admitted between 1 June 2011 and 1 December 2012 were extracted from a national register of stroke care in England. SU characteristics and staffing levels were derived from a cross-sectional survey. Cox proportional hazards models were used to estimate hazard ratios (HRs) of 30-d post-admission mortality, adjusting for case mix, organisational, staffing, and care quality variables. After adjusting for confounders, there was no significant difference in mortality risk for patients admitted to a stroke service with stroke specialist physician rounds fewer than 7 d per week (adjusted HR [aHR] 1.04, 95% CI 0.91–1.18) compared to patients admitted to a service with rounds 7 d per week. There was a dose–response relationship between weekend nurse/bed ratios and mortality risk, with the highest risk of death observed in stroke services with the lowest nurse/bed ratios. In multivariable analysis, patients admitted on a weekend to a SU with 1.5 nurses/ten beds had an estimated adjusted 30-d mortality risk of 15.2% (aHR 1.18, 95% CI 1.07–1.29) compared to 11.2% for patients admitted to a unit with 3.0 nurses/ten beds (aHR 0.85, 95% CI 0.77–0.93), equivalent to one excess death per 25 admissions. The main limitation is the risk of confounding from unmeasured characteristics of stroke services.
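As a quick arithmetic check (not part of the published analysis), the figure of one excess death per 25 admissions follows directly from the two adjusted mortality risks quoted above; a minimal sketch in Python:
    # Back-of-envelope check of "one excess death per 25 admissions"
    risk_low_staffing = 0.152    # adjusted 30-d mortality, 1.5 nurses/ten beds
    risk_high_staffing = 0.112   # adjusted 30-d mortality, 3.0 nurses/ten beds
    risk_difference = risk_low_staffing - risk_high_staffing   # 0.040
    admissions_per_excess_death = 1 / risk_difference          # 25.0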
Conclusions
Mortality outcomes after stroke are associated with the intensity of weekend staffing by registered nurses but not 7-d/wk ward rounds by stroke specialist physicians. The findings have implications for quality improvement and resource allocation in stroke care.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
In a perfect world, a patient admitted to hospital on a weekend or during the night should have as good an outcome as a patient admitted during regular working hours. But several observational studies (investigations that record patient outcomes without intervening in any way; clinical trials, by contrast, test potential healthcare interventions by comparing the outcomes of patients who are deliberately given different treatments) have reported that admission on weekends is associated with a higher mortality (death) rate than admission on weekdays. This “weekend effect” has led to calls for increased medical and nursing staff to be available in hospitals during the weekend and overnight to ensure that the healthcare provided at these times is of equal quality to that provided during regular working hours. In the UK, for example, “seven-day working” has been identified as a policy and service improvement priority for the National Health Service.
Why Was This Study Done?
Few studies have actually tested the relationship between patient outcomes and weekend physician or nurse staffing levels. It could be that patients who are admitted to hospital on the weekend have poor outcomes because they are generally more ill than those admitted on weekdays. Before any health system introduces potentially expensive increases in weekend staffing levels, better evidence that this intervention will improve patient outcomes is needed. In this prospective cohort study (a study that compares the outcomes of groups of people with different baseline characteristics), the researchers ask whether mortality after stroke is associated with weekend working by stroke specialist physicians and registered nurses. Stroke occurs when the brain's blood supply is interrupted by a blood vessel in the brain bursting (hemorrhagic stroke) or being blocked by a blood clot (ischemic stroke). Swift treatment can limit the damage to the brain caused by stroke, but of the 15 million people who have a stroke every year, about 6 million die within a few hours and another 5 million are left disabled.
What Did the Researchers Do and Find?
The researchers extracted clinical data on 56,666 patients who were admitted to stroke units in England over an 18-month period from a national stroke register. They obtained information on the characteristics and staffing levels of the stroke units from a biennial survey of hospitals admitting patients with stroke, and information on deaths among patients with stroke from the national register of deaths. A quarter of the patients were admitted on a weekend, almost half the stroke units provided stroke specialist physician rounds seven days per week, and the remainder provided rounds five days per week. After adjustment for factors that might have affected outcomes (“confounders”) such as stroke severity and the level of acute stroke care available in each stroke unit, there was no significant difference in mortality risk between patients admitted to a stroke unit with rounds seven days/week and patients admitted to a unit with rounds fewer than seven days/week. However, patients admitted on a weekend to a stroke unit with 1.5 nurses/ten beds had a 30-day mortality risk of 15.2%, whereas patients admitted to a unit with 3.0 nurses/ten beds had a mortality risk of 11.2%, a mortality risk difference equivalent to one excess death per 25 admissions.
What Do These Findings Mean?
These findings show that the provision of stroke specialist physician rounds seven days/week in stroke units in England did not influence the (weak) association between weekend admission for stroke and death recorded in this study, but mortality outcomes after stroke were associated with the intensity of weekend staffing by registered nurses. The accuracy of these findings may be affected by the measure used to judge the level of acute care available in each stroke unit and by residual confounding. For example, patients admitted to units with lower nursing levels may have shared other unknown characteristics that increased their risk of dying after stroke. Moreover, this study considered the impact of staffing levels on mortality only and did not consider other relevant outcomes such as long-term disability. Despite these limitations, these findings support the provision of higher weekend ratios of registered nurses to beds in stroke units, but given the high costs of increasing weekend staffing levels, it is important that controlled trials of different models of physician and nursing staffing are undertaken as soon as possible.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001705.
This study is further discussed in a PLOS Medicine Perspective by Meeta Kerlin
Information about plans to introduce seven-day working into the National Health Service in England is available
The 2013 publication “NHS Services—Open Seven Days a Week: Every Day Counts” provides examples of how hospitals across England are working together to provide routine healthcare services seven days a week
A “Behind the Headlines” article on the UK National Health Service Choices website describes a recent observational study that investigated the association between admission to hospital on the weekend and death, and newspaper coverage of the study's results
The NHS Choices website also provides information about stroke for patients and their families, including personal stories
A US nurses' site includes information on the association of nurse staffing with patient safety
The US National Institute of Neurological Disorders and Stroke provides information about all aspects of stroke (in English and Spanish)
Its Know Stroke site provides educational materials about stroke prevention, treatment, and rehabilitation, including personal stories (in English and Spanish)
The US National Institutes of Health SeniorHealth website has additional information about stroke
The Internet Stroke Center provides detailed information about stroke for patients, families, and health professionals (in English and Spanish)
doi:10.1371/journal.pmed.1001705
PMCID: PMC4138029  PMID: 25137386
2.  Highly Reliable Procedural Teams: The Journey to Spread the Universal Protocol in Diagnostic Imaging 
The Permanente Journal  2014;18(1):33-37.
Context:
The Joint Commission’s Universal Protocol has been widely implemented in surgical settings since publication in 2003. The elements improved patient safety in operating rooms, and the same rigor is being applied to procedures occurring in other health care arenas, in particular, diagnostic imaging.
Objective:
In 2011, Kaiser Permanente West Los Angeles’s Diagnostic Imaging Department desired to adapt previous work on Universal Protocol implementation to improve patient safety in interventional radiology and mammography procedures.
Design:
The teams underwent human factors training and then adapted key interventions used in surgical suites to their workflows. Time-out posters, use of whiteboards, “glitch books,” and regular audits provided structure to overcome the risks that human factors present.
Main Outcome Measures:
Staff and physician perceptions of the teamwork and safety climates in their modalities were measured using the Safety Attitudes Questionnaire at baseline and at 18 months after training. Unusual Occurrence Reports were also reviewed to identify events and near misses that could be prevented. Implementation of key process changes was tracked as a process measure.
Results:
Perception of the safety climate improved 25% in interventional radiology and 4.5% in mammography. Perception of the teamwork climate decreased 5.4% in interventional radiology and 16.6% in mammography. Unusual occurrences were underreported at baseline, and there is ongoing reluctance to document near misses.
Conclusion:
This work provides important considerations of the impact of departmental cultures on implementation of the Universal Protocol in procedural areas. It also reveals unexpected challenges and shows that such implementation requires long-term effort and focus.
doi:10.7812/TPP/13-071
PMCID: PMC3951028  PMID: 24626070
3.  Patient-Safety-Related Hospital Deaths in England: Thematic Analysis of Incidents Reported to a National Database, 2010–2012 
PLoS Medicine  2014;11(6):e1001667.
Sukhmeet Panesar and colleagues classified reports of patient-safety-related hospital deaths in England to identify patterns of cases where improvements might be possible.
Please see later in the article for the Editors' Summary
Background
Hospital mortality is increasingly being regarded as a key indicator of patient safety, yet methodologies for assessing mortality are frequently contested and seldom point directly to areas of risk and solutions. The aim of our study was to classify reports of deaths due to unsafe care into broad areas of systemic failure capable of being addressed by stronger policies, procedures, and practices. The deaths were reported to a patient safety incident reporting system after mandatory reporting of such incidents was introduced.
Methods and Findings
The UK National Health Service database was searched for incidents resulting in a reported death of an adult over the period of the study. The study population comprised 2,010 incidents involving patients aged 16 y and over in acute hospital settings. Each incident report was reviewed by two of the authors, and, by scrutinising the structured information together with the free text, a main reason for the harm was identified and recorded as one of 18 incident types. These incident types were then aggregated into six areas of apparent systemic failure: mismanagement of deterioration (35%), failure of prevention (26%), deficient checking and oversight (11%), dysfunctional patient flow (10%), equipment-related errors (6%), and other (12%). The most common incident types were failure to act on or recognise deterioration (23%), inpatient falls (10%), healthcare-associated infections (10%), unexpected per-operative death (6%), and poor or inadequate handover (5%). Analysis of these 2,010 fatal incidents reveals patterns of issues that point to actionable areas for improvement.
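As an illustration of the aggregation step described above, incident types can be rolled up into the broad systemic-failure areas roughly as follows; the mapping and function below are hypothetical, not the authors' coding scheme:
    from collections import Counter

    # Hypothetical mapping from a few of the 18 incident types to the six
    # broad areas of systemic failure named in the abstract (illustrative only).
    AREA_OF_TYPE = {
        "failure to act on or recognise deterioration": "mismanagement of deterioration",
        "inpatient fall": "failure of prevention",
        "healthcare-associated infection": "failure of prevention",
        "poor or inadequate handover": "dysfunctional patient flow",
    }

    def area_shares(incident_types):
        """Aggregate incident-type labels and return percentage shares by area."""
        counts = Counter(AREA_OF_TYPE.get(t, "other") for t in incident_types)
        total = sum(counts.values())
        return {area: round(100 * n / total, 1) for area, n in counts.items()}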
Conclusions
Our approach demonstrates the potential utility of patient safety incident reports in identifying areas of service failure and highlights opportunities for corrective action to save lives.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Being admitted to the hospital is worrying for patients and for their relatives. Will the patient recover or die in the hospital? Some seriously ill patients will inevitably die, but in an ideal world, no one should die in the hospital because of inadequate or unsafe care (an avoidable death). No one should die, for example, because healthcare professionals fail to act on signs that indicate a decline in a patient's clinical condition. Hospital mortality (death) is often regarded as a key indicator of patient safety in hospitals, and death rate indicators such as the “hospital standardized mortality ratio” (the ratio of the actual number of acute in-hospital deaths to the expected number of in-hospital deaths) are widely used to monitor and improve hospital safety standards. In England, for example, a 2012 report that included this measure as an indicator of hospital performance led to headlines of “worryingly high” hospital death rates and to a review of the quality of care in the hospitals with the highest death rates.
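For readers unfamiliar with the measure, the hospital standardized mortality ratio defined above is simply observed deaths divided by the number expected from the hospital's case mix; a toy calculation with invented figures:
    # Toy hospital standardized mortality ratio (HSMR); figures are invented.
    observed_deaths = 120
    expected_deaths = 100.0   # predicted by a case-mix adjustment model
    hsmr = observed_deaths / expected_deaths   # 1.2, i.e. 20% more deaths than expected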
Why Was This Study Done?
Hospital standardized mortality ratios and other measures of in-patient mortality can be misleading because they can, for example, reflect the burden of disease near the hospital rather than the hospital's quality of care or safety levels. Moreover, comparative data on hospital mortality rates are of limited value in identifying areas of risk to patients or solutions to the problem of avoidable deaths. In this study, to identify areas of service failure amenable to improvement through strengthened clinical policies, procedures, and practices, the researchers undertake a thematic analysis of deaths in hospitals in England that were reported by healthcare staff to a mandatory patient-safety-related incident reporting system. Since 2004, staff in the UK National Health Service (the NHS comprises the publicly funded healthcare systems in England, Scotland, Wales, and Northern Ireland) have been encouraged to report any unintended or unexpected incident in which they believe a patient's safety was compromised. Since June 2010, it has been mandatory for staff in England and Wales to report deaths due to patient-safety-related incidents. A thematic analysis examines patterns (“themes”) within nonnumerical (qualitative) data.
What Did the Researchers Do and Find?
By searching the NHS database of patient-safety-related incidents, the researchers identified 2,010 incidents, occurring between 1 June 2010 and 31 October 2012, that resulted in the death of adult patients in acute hospital settings. By scrutinizing the structured information in each incident report and the associated free text in which the reporter described what happened and why they thought it happened, the researchers classified the reports into 18 incident categories. These categories fell into six broad areas of systemic failure—mismanagement of deterioration (35% of incidents), failure of prevention (26%), deficient checking and oversight (11%), dysfunctional patient flow (10%), equipment-related errors (6%), and other (12%, incidents where the problem underlying death was unclear). Mismanagement of deterioration, for example, included the incident categories “failure to act on or recognize deterioration” (23% of reported incidents), “failure to give ordered treatment/support in a timely manner,” and “failure to observe.” Failure of prevention included the incident categories “falls” (10% of reported incidents), “healthcare-associated infections” (also 10% of reported incidents), “pressure sores,” “suicides,” and “deep vein thrombosis/pulmonary embolism.”
What Do These Findings Mean?
Although the accuracy of these findings may be limited by data quality and by other aspects of the study design, they reveal patterns of patient-safety-related deaths in hospitals in England and highlight areas of healthcare that can be targeted for improvement. The finding that the mismanagement of deterioration of acutely ill patients is involved in a third of patient-safety-related deaths identifies an area of particular concern in the NHS and, potentially, in other healthcare systems. One way to reduce deaths associated with the mismanagement of deterioration, suggest the researchers, might be to introduce a standardized early warning score to ensure uniform identification of this population of patients. The researchers also suggest that more effort should be put into designing programs to prevent falls and other incidents and into ensuring that these programs are effectively implemented. More generally, the classification system developed here has the potential to help hospital boards and clinicians identify areas of patient care that require greater scrutiny and intervention and thereby save the lives of many hospital patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001667.
The NHS provides information about patient safety, including a definition of a patient safety incident and information on reporting patient safety incidents
The NHS Choices website includes several “Behind the Headlines” articles that discuss patient safety in hospitals, including an article that discusses the 2012 report of high hospital death rates in England, “Fit for the Future?” and another that discusses the Keogh review of the quality of care in the hospitals with highest death rates
The US Agency for Healthcare Research and Quality provides information on patient safety in the US
Wikipedia has pages on thematic analysis and on patient safety (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
doi:10.1371/journal.pmed.1001667
PMCID: PMC4068985  PMID: 24959751
4.  Impact of the World Health Organization's Surgical Safety Checklist on safety culture in the operating theatre: a controlled intervention study 
BJA: British Journal of Anaesthesia  2013;110(5):807-815.
Background
Positive changes in safety culture have been hypothesized to be one of the mechanisms behind the reduction in mortality and morbidity after the introduction of the World Health Organization's Surgical Safety Checklist (SSC). We aimed to study the checklist effects on safety culture perceptions in operating theatre personnel using a prospective controlled intervention design at a single Norwegian university hospital.
Methods
We conducted a study with pre- and post-intervention surveys in intervention and control groups. The primary outcome was the effect of the Norwegian version of the SSC on safety culture perceptions. Safety culture was measured using the validated Norwegian version of the Hospital Survey on Patient Safety Culture. Descriptive characteristics of operating theatre personnel and checklist compliance data were also recorded. A mixed linear regression model was used to assess changes in safety culture.
Results
The response rate was 61% (349/575) at baseline and 51% (292/569) post-intervention. Checklist compliance ranged from 77% to 85%. We found significant positive changes in the checklist intervention group for the culture factors ‘frequency of events reported’ and ‘adequate staffing’, with regression coefficients of −0.25 [95% confidence interval (CI) −0.47 to −0.07] and 0.21 (95% CI 0.07–0.35), respectively. Overall, the intervention group reported significantly more positive culture scores, including at baseline.
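A minimal sketch of the kind of mixed linear regression described in the Methods, assuming long-format survey data with one row per respondent per wave; the column names and grouping variable are assumptions, not taken from the paper:
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per respondent per survey wave,
    # with the culture-factor score, wave (pre/post) and group (SSC/control).
    df = pd.read_csv("safety_culture_long.csv")   # assumed file name
    model = smf.mixedlm(
        "factor_score ~ C(wave) * C(group)",      # interaction = change attributable to the SSC
        data=df,
        groups=df["department"],                  # random intercept per clustering unit (assumed)
    )
    print(model.fit().summary())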
Conclusions
Implementation of the SSC had rather limited impact on the safety culture within this hospital.
doi:10.1093/bja/aet005
PMCID: PMC3630285  PMID: 23404986
checklist; safety; safety climate; safety culture; surgery
5.  Assessing system failures in operating rooms and intensive care units 
Background
The current awareness of the potential safety risks in healthcare environments has led to the development of largely reactive methods of systems analysis. Proactive methods are able to objectively detect structural shortcomings before mishaps and have been widely used in other high‐risk industries.
Methods
The Leiden Operating Theatre and Intensive Care Safety (LOTICS) scale was developed and evaluated with respect to factor structure and reliability of the scales. The survey was administered to the staff of operating rooms at two university hospitals and to the intensive care units (ICUs) of one university hospital and one teaching hospital. The response rate varied between 40% and 47%. Data from 330 questionnaires were analysed. Safety aspects were compared between the different groups.
Results
Factor analyses and reliability tests yielded nine subscales; two further scales were added, giving a total of 11. The reliability of the scales varied from 0.75 to 0.88. The results clearly showed differences between units (OR1, OR2, ICU1, ICU2) and between staff groups.
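Reliability coefficients in the 0.75 to 0.88 range reported here are typically Cronbach's alpha; assuming that is the statistic used, a minimal sketch of the calculation for one subscale:
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_variances.sum() / total_variance)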
Conclusion
The results seem to justify the conclusion that the LOTICS scale can be used in both the operating room and the ICU to gain insight into system failures in a relatively quick and reliable manner. Furthermore, the LOTICS scale can be used to compare organisations with each other, to monitor changes in patient safety, and to monitor the effectiveness of changes made to improve patient safety.
doi:10.1136/qshc.2005.015446
PMCID: PMC2464926  PMID: 17301205
6.  Patient safety in surgical environments: Cross-countries comparison of psychometric properties and results of the Norwegian version of the Hospital Survey on Patient Safety 
Background
How hospital health care personnel perceive safety climate has been assessed in several countries by using the Hospital Survey on Patient Safety (HSOPS). Few studies have examined safety climate factors in surgical departments per se. This study examined the psychometric properties of a Norwegian translation of the HSOPS and also compared safety climate factors from a surgical setting to hospitals in the United States, the Netherlands and Norway.
Methods
This survey included 575 surgical personnel in Haukeland University Hospital in Bergen, an 1100-bed tertiary hospital in western Norway: surgeons, operating theatre nurses, anaesthesiologists, nurse anaesthetists and ancillary personnel. Of these, 358 returned the HSOPS, resulting in a 62% response rate. We used factor analysis to examine the applicability of the HSOPS factor structure in operating theatre settings. We also performed psychometric analysis for internal consistency and construct validity. In addition, we compared the average percentage of positive responses on the patient safety climate factors with the results of the US HSOPS 2010 comparative database report.
Results
The professions differed in their perception of patient safety climate, with anaesthesia personnel having the highest mean scores. Factor analysis using the original 12-factor model of the HSOPS resulted in low reliability scores (r = 0.6) for two factors: "adequate staffing" and "organizational learning and continuous improvement". For the remaining factors, reliability was ≥ 0.7. Reliability scores improved to r = 0.8 by combining the factors "organizational learning and continuous improvement" and "feedback and communication about error" into one six-item factor, supporting an 11-factor model. The inter-item correlations were found to be satisfactory.
Conclusions
The psychometric properties of the questionnaire need further investigation before it can be regarded as reliable in surgical environments. The operating theatre personnel perceived their hospital's patient safety climate far more negatively than did health care personnel in hospitals in the United States, with perceptions more comparable to those of health care personnel in hospitals in the Netherlands. In fact, the surgical personnel may perceive that patient safety receives less focus in our hospital, at least compared with the results from hospitals in the United States.
doi:10.1186/1472-6963-10-279
PMCID: PMC2955019  PMID: 20860787
7.  Tuberculosis among Health-Care Workers in Low- and Middle-Income Countries: A Systematic Review 
PLoS Medicine  2006;3(12):e494.
Background
The risk of transmission of Mycobacterium tuberculosis from patients to health-care workers (HCWs) is a neglected problem in many low- and middle-income countries (LMICs). Most health-care facilities in these countries lack resources to prevent nosocomial transmission of tuberculosis (TB).
Methods and Findings
We conducted a systematic review to summarize the evidence on the incidence and prevalence of latent TB infection (LTBI) and disease among HCWs in LMICs, and to evaluate the impact of various preventive strategies that have been attempted. To identify relevant studies, we searched electronic databases and journals, and contacted experts in the field. We identified 42 articles, consisting of 51 studies, and extracted data on incidence, prevalence, and risk factors for LTBI and disease among HCWs. The prevalence of LTBI among HCWs was, on average, 54% (range 33% to 79%). Estimates of the annual risk of LTBI ranged from 0.5% to 14.3%, and the annual incidence of TB disease in HCWs ranged from 69 to 5,780 per 100,000. The attributable risk for TB disease in HCWs, compared to the risk in the general population, ranged from 25 to 5,361 per 100,000 per year. A higher risk of acquiring TB disease was associated with certain work locations (inpatient TB facility, laboratory, internal medicine, and emergency facilities) and occupational categories (radiology technicians, patient attendants, nurses, ward attendants, paramedics, and clinical officers).
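The attributable risk quoted above is the excess incidence in health-care workers over that in the general population; a worked example with hypothetical figures chosen to fall inside the reported range:
    # Attributable risk = incidence in HCWs minus incidence in the general
    # population (both per 100,000 per year); the figures below are hypothetical.
    incidence_hcw = 1200
    incidence_general = 150
    attributable_risk = incidence_hcw - incidence_general   # 1,050 per 100,000 per year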
Conclusions
In summary, our review demonstrates that TB is a significant occupational problem among HCWs in LMICs. Available evidence reinforces the need to design and implement simple, effective, and affordable TB infection-control programs in health-care facilities in these countries.
A systematic review demonstrates that tuberculosis is an important occupational problem among health care workers in low- and middle-income countries.
Editors' Summary
Background.
One third of the world's population is infected with Mycobacterium tuberculosis, the bacterium that causes tuberculosis (TB). In many people, the bug causes no health problems—it remains latent. But about 10% of infected people develop active, potentially fatal TB, often in their lungs. People with active pulmonary TB readily spread the infection to other people, including health-care workers (HCWs), in small airborne droplets produced when they cough or sneeze. In high-income countries such as the US, guidelines are in place to minimize the transmission of TB in health-care facilities. Administrative controls (for example, standard treatment plans for people with suspected or confirmed TB) aim to reduce the exposure of HCWs to people with TB. Environmental controls (for example, the use of special isolation rooms) aim to prevent the spread and to reduce the concentration of infectious droplets in the air. Finally, respiratory-protection controls (for example, personal respirators for nursing staff) aim to reduce the risk of infection when exposure to M. tuberculosis is unavoidably high. Together, these three layers of control have reduced the incidence of TB in HCWs (the number who catch TB annually) in high-income countries.
Why Was This Study Done?
But what about low- and middle-income countries (LMICs) where more than 90% of the world's cases of TB occur? Here, there is little money available to implement even low-cost strategies to reduce TB transmission in health-care facilities—so how important an occupational disease is TB in HCWs in these countries? In this study, the researchers have systematically reviewed published papers to find out the incidence and prevalence (how many people in a population have a specific disease) of active TB and latent TB infections (LTBIs) in HCWs in LMICs. They have also investigated whether any of the preventative strategies used in high-income countries have been shown to reduce the TB burden in HCWs in poorer countries.
What Did the Researchers Do and Find?
To identify studies on TB transmission to HCWs in LMICs, the researchers searched electronic databases and journals, and also contacted experts on TB transmission. They then extracted and analyzed the relevant data on TB incidence, prevalence, risk factors, and control measures. Averaged out over the 51 identified studies, 54% of HCWs had LTBI. In most of the studies, increasing age and duration of employment in health-care facilities, indicating a longer cumulative exposure to infection, were associated with a higher prevalence of LTBI. The same trend was seen in a subgroup of medical and nursing students. After accounting for the incidence of TB in the relevant general population, the excess incidence of TB in the different studies that was attributable to being a HCW ranged from 25 to 5,361 cases per 100,000 people per year. In addition, a higher risk of acquiring TB was associated with working in specific locations (for example, inpatient TB facilities or diagnostic laboratories) and with specific occupations, including nurses and radiology attendants; most of the health-care facilities examined in the published studies had no specific TB infection-control programs in place.
What Do These Findings Mean?
As with all systematic reviews, the accuracy of these findings may be limited by some aspects of the original studies, such as how the incidence of LTBI was measured. In addition, the possibility that the researchers missed some relevant published studies, or that only studies where there was a high incidence of TB in HCWs were published, may also affect the findings of this study. Nevertheless, they suggest that TB is an important occupational disease in HCWs in LMICs and that the HCWs most at risk of TB are those exposed to the most patients with TB. Reduction of that risk should be a high priority because occupational TB leads to the loss of essential, skilled HCWs. Unfortunately, there are few data available to indicate how this should be done. Thus, the researchers conclude, well-designed field studies are urgently needed to evaluate whether the TB-control measures that have reduced TB transmission to HCWs in high-income countries will work and be affordable in LMICs.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030494.
• US National Institute of Allergy and Infectious Diseases patient fact sheet on tuberculosis
• US Centers for Disease Control and Prevention information for patients and professionals on tuberculosis
• MedlinePlus encyclopedia entry on tuberculosis
• NHS Direct Online, from the UK National Health Service, patient information on tuberculosis
• US National Institute for Occupational Health and Safety, information about tuberculosis for health-care workers
• American Lung Association information on tuberculosis and health-care workers
doi:10.1371/journal.pmed.0030494
PMCID: PMC1716189  PMID: 17194191
8.  Rates of Latent Tuberculosis in Health Care Staff in Russia 
PLoS Medicine  2007;4(2):e55.
Background
Russia is one of 22 high burden tuberculosis (TB) countries. Identifying individuals, particularly health care workers (HCWs) with latent tuberculosis infection (LTBI), and determining the rate of infection, can assist TB control through chemoprophylaxis and improving institutional cross-infection strategies. The objective of the study was to estimate the prevalence and determine the relative risks and risk factors for infection, within a vertically organised TB service in a country with universal bacille Calmette-Guérin (BCG) vaccination.
Methods and Findings
We conducted a cross-sectional study to assess the prevalence of and risk factors for LTBI among unexposed students, minimally exposed medical students, primary care health providers, and TB hospital health providers in Samara, Russian Federation. We used a novel in vitro assay of gamma-interferon [IFN-γ] release to establish LTBI and a questionnaire to address risk factors. LTBI was seen in 40.8% (107/262) of staff and was significantly higher in doctors and nurses (39.1% [90/230]) than in students (8.7% [32/368]) (relative risk [RR] 4.5; 95% confidence interval [CI] 3.1–6.5) and in TB service versus primary health doctors and nurses: respectively 46.9% (45/96) versus 29.3% (34/116) (RR 1.6; 95% CI 1.1–2.3). There was a gradient of LTBI, proportional to exposure, in medical students, primary health care providers, and TB doctors: respectively, 10.1% (24/238), 25.5% (14/55), and 55% (22/40). LTBI was also high in TB laboratory workers: 11/18 (61.1%).
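The relative risk for doctors and nurses versus students can be reproduced from the counts given in the abstract; a minimal sketch using the standard log-RR confidence interval:
    import math

    # LTBI in doctors and nurses (90 of 230) versus students (32 of 368)
    a, n1 = 90, 230
    b, n2 = 32, 368
    rr = (a / n1) / (b / n2)                             # ~4.5
    se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
    ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)   # ~3.1
    ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)  # ~6.5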
Conclusions
IFN-γ assays have a useful role in screening HCWs with a high risk of LTBI and who are BCG vaccinated. TB HCWs were at significantly higher risk of having LTBI. Larger cohort studies are needed to evaluate the individual risks of active TB development in positive individuals and the effectiveness of preventive therapy based on IFN-γ test results.
Gamma-interferon assays were used in a cross-sectional study of Russian health care workers and found high rates of latent tuberculosis infection.
Editors' Summary
Background.
Tuberculosis (TB) is a very common and life-threatening infection caused by a bacterium, Mycobacterium tuberculosis, which is carried by about a third of the world's population. Many people who are infected do not develop the symptoms of disease; this is called “latent infection.” However, it is important to detect latent infection among people in high-risk communities, in order to prevent infected people from developing active disease, and therefore also reduce the spread of TB within the community. Twenty-two countries account for 80% of the world's active TB, and Russia is one of these. Health care workers are particularly at risk for developing active TB disease in Russia, but the extent of latent infection is not known. In order to design appropriate measures for controlling TB in Russia, it is important to know how common latent infection is among health care workers, as well as other members of the community.
Why Was This Study Done?
The researchers here had been studying the spread of tuberculosis in Samara City in southeastern Russia, where the rate of TB disease among health care workers was very high; in 2004 the number of TB cases among health care workers on TB wards was over ten times that in the general population. There was also no information available on the rates of latent TB infection among health care workers in Samara City. The researchers therefore wanted to work out what proportion of health care workers in Samara City had latent TB infection, and particularly to compare groups whom they thought would be at different levels of risk (students, clinicians outside of TB wards, clinicians on TB wards, etc.). Finally, the researchers also wanted to use a new test for detecting latent TB infection. The traditional test for detecting TB infection (tuberculin skin test) is not very reliable among people who have received the Bacillus Calmette-Guérin (BCG) vaccination against TB earlier in life, as is the case in Russia. In this study a new test was therefore used, based on measuring the immune response to two proteins produced by M. tuberculosis, which are not present in the BCG vaccine strain.
What Did the Researchers Do and Find?
In this study the researchers tested health care workers from all the TB clinics in Samara City, as well as other clinical staff and students, for latent tuberculosis. In total, 630 people had blood samples taken for testing. A questionnaire was also used to collect information on possible risk factors for TB. As expected, the rate of latent TB infection was highest among clinical staff working in the TB clinics, 47% of whom were infected with M. tuberculosis. This compared to a 10% infection rate among medical students and 29% infection rate among primary care health workers. The differences in infection rate between medical students, primary care health workers, and TB clinic staff were statistically significant and reflected progressively increasing exposure to TB. Among primary care health workers, past exposure to TB was a risk factor for having latent TB infection.
What Do These Findings Mean?
This study showed that there was a high rate of latent TB infection among health care workers in Samara City and that infection is increasingly likely among people with either past or present exposure to TB. The results suggest that further research should be carried out to test whether mass screening for latent infection, followed by treatment, will reduce the rate of active TB disease among health care workers and also prevent further spread of TB. There are concerns that widespread treatment of latent infection may not be completely effective due to the relatively high prevalence of drug-resistant TB strains and any new initiatives would therefore need to be carefully evaluated.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040055.
The Stop TB Partnership has been set up to eliminate TB as a public health problem; its site provides data and resources about TB in each of the 22 most-affected countries, including Russia
Tuberculosis minisite from the World Health Organization, providing data on tuberculosis worldwide, details of the Stop TB strategy, as well as fact sheets and current guidelines
The US Centers for Disease Control has a tuberculosis minisite, including a fact sheet on latent TB
Information from the US Centers for Disease Control about the QuantiFERON-TB Gold test, used to test for latent TB infection in this study
doi:10.1371/journal.pmed.0040055
PMCID: PMC1796908  PMID: 17298167
9.  Patient safety in primary care: a survey of general practitioners in the Netherlands 
Background
Primary care encompasses many different clinical domains and patient groups, which means that patient safety in primary care may be equally broad. Previous research on safety in primary care has focused on medication safety and incident reporting. In this study, the views of general practitioners (GPs) on patient safety were examined.
Methods
A web-based survey of a sample of GPs was undertaken. The items were derived from aspects of patient safety issues identified in a prior interview study. The questionnaire used 10 clinical cases and 15 potential risk factors to explore GPs' views on patient safety.
Results
A total of 68 GPs responded (51.5% response rate). None of the clinical cases was uniformly judged as particularly safe or unsafe by the GPs. Cases judged to be unsafe by a majority of the GPs concerned either the maintenance of medical records or prescription and monitoring of medication. Cases which only a few GPs judged as unsafe concerned hygiene, the diagnostic process, prevention and communication. The risk factors most frequently judged to constitute a threat to patient safety were a poor doctor-patient relationship, insufficient continuing education on the part of the GP and a patient age over 75 years. Language barriers and polypharmacy also scored high. Deviation from evidence-based guidelines and patient privacy in the reception/waiting room were not perceived as risk factors by most of the GPs.
Conclusion
The views of GPs on safety and risk in primary care did not completely match those presented in published papers and policy documents. The GPs in the present study judged a broader range of factors than in previously published research on patient safety in primary care, including a poor doctor-patient relationship, to pose a potential threat to patient safety. Other risk factors such as infection prevention, deviation from guidelines and incident reporting were judged to be less relevant than by policy makers.
doi:10.1186/1472-6963-10-21
PMCID: PMC2823738  PMID: 20092616
10.  Large scale organisational intervention to improve patient safety in four UK hospitals: mixed method evaluation 
Objectives To conduct an independent evaluation of the first phase of the Health Foundation’s Safer Patients Initiative (SPI), and to identify the net additional effect of SPI and any differences in changes in participating and non-participating NHS hospitals.
Design Mixed method evaluation involving five substudies, before and after design.
Setting NHS hospitals in the United Kingdom.
Participants Four hospitals (one in each country in the UK) participating in the first phase of the SPI (SPI1); 18 control hospitals.
Intervention The SPI1 was a compound (multi-component) organisational intervention delivered over 18 months that focused on improving the reliability of specific frontline care processes in designated clinical specialties and promoting organisational and cultural change.
Results Senior staff members were knowledgeable and enthusiastic about SPI1. There was a small (0.08 points on a 5 point scale) but significant (P<0.01) effect in favour of the SPI1 hospitals in one of 11 dimensions of the staff questionnaire (organisational climate). Qualitative evidence showed only modest penetration of SPI1 at medical ward level. Although SPI1 was designed to engage staff from the bottom up, it did not usually feel like this to those working on the wards, and questions about legitimacy of some aspects of SPI1 were raised. Of the five components to identify patients at risk of deterioration—monitoring of vital signs (14 items); routine tests (three items); evidence based standards specific to certain diseases (three items); prescribing errors (multiple items from the British National Formulary); and medical history taking (11 items)—there was little net difference between control and SPI1 hospitals, except in relation to quality of monitoring of acute medical patients, which improved on average over time across all hospitals. Recording of respiratory rate increased to a greater degree in SPI1 than in control hospitals; in the second six hours after admission recording increased from 40% (93) to 69% (165) in control hospitals and from 37% (141) to 78% (296) in SPI1 hospitals (odds ratio for “difference in difference” 2.1, 99% confidence interval 1.0 to 4.3; P=0.008). Use of a formal scoring system for patients with pneumonia also increased over time (from 2% (102) to 23% (111) in control hospitals and from 2% (170) to 9% (189) in SPI1 hospitals), which favoured controls and was not significant (0.3, 0.02 to 3.4; P=0.173). There were no improvements in the proportion of prescription errors and no effects that could be attributed to SPI1 in non-targeted generic areas (such as enhanced safety culture). On some measures, the lack of effect could be because compliance was already high at baseline (such as use of steroids in over 85% of cases where indicated), but even when there was more room for improvement (such as in quality of medical history taking), there was no significant additional net effect of SPI1. There were no changes over time or between control and SPI1 hospitals in errors or rates of adverse events in patients in medical wards. Mortality increased from 11% (27) to 16% (39) among controls and decreased from 17% (63) to 13% (49) among SPI1 hospitals, but the risk adjusted difference was not significant (0.5, 0.2 to 1.4; P=0.085). Poor care was a contributing factor in four of the 178 deaths identified by review of case notes. The survey of patients showed no significant differences apart from an increase in perception of cleanliness in favour of SPI1 hospitals.
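As a rough illustration of the “difference in difference” odds ratio reported for respiratory-rate recording, a crude value can be computed from the four percentages quoted above; the paper's 2.1 came from an adjusted model, so this back-of-envelope figure differs somewhat:
    # Crude difference-in-difference odds ratio for respiratory-rate recording
    def odds(p):
        return p / (1 - p)

    spi1_change = odds(0.78) / odds(0.37)            # SPI1 hospitals, post vs pre
    control_change = odds(0.69) / odds(0.40)         # control hospitals, post vs pre
    did_odds_ratio = spi1_change / control_change    # ~1.8 (reported adjusted value: 2.1)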
Conclusions The introduction of SPI1 was associated with improvements in one of the types of clinical process studied (monitoring of vital signs) and one measure of staff perceptions of organisational climate. There was no additional effect of SPI1 on other targeted issues nor on other measures of generic organisational strengthening.
doi:10.1136/bmj.d195
PMCID: PMC3033440  PMID: 21292719
11.  Natural Ventilation for the Prevention of Airborne Contagion 
PLoS Medicine  2007;4(2):e68.
Background
Institutional transmission of airborne infections such as tuberculosis (TB) is an important public health problem, especially in resource-limited settings where protective measures such as negative-pressure isolation rooms are difficult to implement. Natural ventilation may offer a low-cost alternative. Our objective was to investigate the rates, determinants, and effects of natural ventilation in health care settings.
Methods and Findings
The study was carried out in eight hospitals in Lima, Peru; five were hospitals of “old-fashioned” design built pre-1950, and three of “modern” design, built 1970–1990. In these hospitals 70 naturally ventilated clinical rooms where infectious patients are likely to be encountered were studied. These included respiratory isolation rooms, TB wards, respiratory wards, general medical wards, outpatient consulting rooms, waiting rooms, and emergency departments. These rooms were compared with 12 mechanically ventilated negative-pressure respiratory isolation rooms built post-2000. Ventilation was measured using a carbon dioxide tracer gas technique in 368 experiments. Architectural and environmental variables were measured. For each experiment, infection risk was estimated for TB exposure using the Wells-Riley model of airborne infection. We found that opening windows and doors provided median ventilation of 28 air changes/hour (ACH), more than double that of mechanically ventilated negative-pressure rooms ventilated at the 12 ACH recommended for high-risk areas, and 18 times that with windows and doors closed (p < 0.001). Facilities built more than 50 years ago, characterised by large windows and high ceilings, had greater ventilation than modern naturally ventilated rooms (40 versus 17 ACH; p < 0.001). Even within the lowest quartile of wind speeds, natural ventilation exceeded mechanical (p < 0.001). The Wells-Riley airborne infection model predicted that in mechanically ventilated rooms 39% of susceptible individuals would become infected following 24 h of exposure to untreated TB patients of infectiousness characterised in a well-documented outbreak. This infection rate compared with 33% in modern and 11% in pre-1950 naturally ventilated facilities with windows and doors open.
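For reference, the Wells-Riley model used here to estimate infection risk has the form P = 1 - exp(-Iqpt/Q); a minimal sketch with placeholder parameter values (the authors' infectiousness and room parameters are not reproduced):
    import math

    def wells_riley(infectors, quanta_per_hour, breathing_m3_per_h, hours, ventilation_m3_per_h):
        """Probability that a susceptible occupant is infected (Wells-Riley model)."""
        exposure = infectors * quanta_per_hour * breathing_m3_per_h * hours / ventilation_m3_per_h
        return 1 - math.exp(-exposure)

    # Placeholder example: one infector, 13 quanta/h, 0.6 m3/h breathing rate,
    # 24 h exposure, in a 100 m3 room at 12 versus 28 air changes per hour.
    p_12_ach = wells_riley(1, 13, 0.6, 24, 12 * 100)   # ~0.14
    p_28_ach = wells_riley(1, 13, 0.6, 24, 28 * 100)   # ~0.06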
Conclusions
Opening windows and doors maximises natural ventilation so that the risk of airborne contagion is much lower than with costly, maintenance-requiring mechanical ventilation systems. Old-fashioned clinical areas with high ceilings and large windows provide greatest protection. Natural ventilation costs little and is maintenance free, and is particularly suited to limited-resource settings and tropical climates, where the burden of TB and institutional TB transmission is highest. In settings where respiratory isolation is difficult and climate permits, windows and doors should be opened to reduce the risk of airborne contagion.
In eight hospitals in Lima, opening windows and doors maximised natural ventilation and lowered the risk of airborne infection. Old-fashioned clinical areas with high ceilings and large windows provide greatest protection.
Editors' Summary
Background.
Tuberculosis (TB) is a major cause of ill health and death worldwide, with around one-third of the world's population infected with the bacterium that causes it (Mycobacterium tuberculosis). One person with active tuberculosis can go on to infect many others; the bacterium is passed in tiny liquid droplets that are produced when someone with active disease coughs, sneezes, spits, or speaks. The risk of tuberculosis being transmitted in hospital settings is particularly high, because people with tuberculosis are often in close contact with very many other people. Currently, most guidelines recommend that the risk of transmission be controlled in certain areas where TB is more likely by making sure that the air in rooms is changed with fresh air between six and 12 times an hour. Air changes can be achieved with simple measures such as opening windows and doors, or by installing mechanical equipment that forces air changes and also keeps the air pressure in an isolation room lower than that outside it. Such “negative pressure,” mechanically ventilated systems are often used on tuberculosis wards to prevent air flowing from isolation rooms to other rooms outside, and so to prevent people on the tuberculosis ward from infecting others.
Why Was This Study Done?
In many parts of the world, hospitals do not have equipment even for simple air conditioning, let alone the special equipment needed for forcing high air changes in isolation rooms and wards. Instead they rely on opening windows and doors in order to reduce the transmission of TB, and this is called natural ventilation. However, it is not clear whether these sorts of measures are adequate for controlling TB transmission. It is important to find out what sorts of systems work best at controlling TB in the real world, so that hospitals and wards can be designed appropriately, within available resources.
What Did the Researchers Do and Find?
This study was based in Lima, Peru's capital city. The researchers studied a variety of rooms, including tuberculosis wards and respiratory isolation rooms, in the city's hospitals. Rooms which had only natural measures for encouraging airflow were compared with mechanically ventilated, negative pressure rooms, which were built much more recently. A comparison was also done between rooms in old hospitals that were naturally ventilated with rooms in newer hospitals that were also naturally ventilated. The researchers used a particular method to measure the number of air changes per hour within each room, and based on this they estimated the risk of a person with TB infecting others using a method called the Wells-Riley equation. The results showed that natural ventilation provided surprisingly high rates of air exchange, with an average of 28 air changes per hour. Hospitals over 50 years old, which generally had large windows and high ceilings, had the highest ventilation, with an average of 40 air changes per hour. This rate compared with 17 air changes per hour in naturally ventilated rooms in modern hospitals, which tended to have lower ceilings and smaller windows. The rooms with modern mechanical ventilation were supposed to have 12 air changes per hour but in reality this was not achieved, as the systems were not maintained properly. The Wells-Riley equation predicted that if an untreated person with tuberculosis was exposed to other people, within 24 hours this person would infect 39% of the people in the mechanically ventilated room, 33% of people in the naturally ventilated new hospital rooms, and only 11% of the people in the naturally ventilated old hospital rooms.
What Do These Findings Mean?
These findings suggest that natural methods of encouraging airflow (e.g., opening doors and windows) work well and in theory could reduce the likelihood of TB being carried from one person to another. Some aspects of the design of wards in old hospitals (such as large windows and high ceilings) are also likely to achieve better airflow and reduce the risk of infection. In poor countries, where mechanical ventilation systems might be too expensive to install and maintain properly, rooms that are designed to naturally achieve good airflow might be the best choice. Another advantage of natural ventilation is that it is not restricted by cost to just high-risk areas, and can therefore be used in many different parts of the hospital, including emergency departments, outpatient departments, and waiting rooms, and it is here that many infectious patients are to be found.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040068.
Information from the World Health Organization on tuberculosis, detailing global efforts to prevent the spread of TB
The World Health Organization publishes guidelines for the prevention of TB in health care facilities in resource-limited settings
Tuberculosis infection control in the era of expanding HIV care and treatment is discussed in an addendum to the above booklet
The Centers for Disease Control have published guidelines for preventing the transmission of Mycobacterium tuberculosis in health care settings
Wikipedia has an entry on nosocomial infections (diseases that are spread in hospital). Wikipedia is an internet encyclopedia anyone can edit
A PLoS Medicine Perspective by Peter Wilson, “Is Natural Ventilation a Useful Tool to Prevent the Airborne Spread of TB?” discusses the implications of this study
doi:10.1371/journal.pmed.0040068
PMCID: PMC1808096  PMID: 17326709
12.  Evaluation of preoperative and perioperative operating room briefings at the Hospital for Sick Children 
Canadian Journal of Surgery  2009;52(4):309-315.
Background
Wrong-site, wrong-procedure and wrong-patient surgeries are catastrophic events for patients, medical caregivers and institutions. Operating room (OR) briefings are intended to reduce the risk of wrong-site surgeries and promote collaboration among OR personnel. The purpose of our study was to evaluate 2 OR briefing safety initiatives, “07:35 huddles” (preoperative OR briefing) and “surgical time-outs” (perioperative OR briefing), at the Hospital for Sick Children in Toronto, Ont.
Methods
First, we evaluated the completion and components of the 07:35 huddles and surgical time-outs briefings using direct observations. We then evaluated the attitudes of the OR staff regarding safety in the OR using the “Safety Attitudes Questionnaire, Operating Room version.” Finally, we conducted personal interviews with OR personnel.
Results
Based on direct observations, 102 of 159 (64.1%) 07:35 huddles and 230 of 232 (99.1%) surgical time-out briefings were completed. The perception of safety in the OR improved, but only among nurses. Regarding difficulty discussing errors in the OR, the nurses’ mean scores improved from 3.5 (95% confidence interval [CI] 3.2–3.8) prebriefing to 2.8 (95% CI 2.5–3.2) postbriefing on a 5-point Likert scale (p < 0.05). Personal interviews confirmed that, mainly among the nursing staff, pre- and perioperative briefing tools increase the perception of communication within the OR, such that discussions about errors within the OR are more openly encouraged.
Conclusion
Structured communication tools such as 07:35 huddles and surgical time-out briefings, especially among nursing personnel, change the notion of individual advocacy to one of teamwork and being proactive about patient safety.
PMCID: PMC2724800  PMID: 19680516
13.  Effect of an Educational Toolkit on Quality of Care: A Pragmatic Cluster Randomized Trial 
PLoS Medicine  2014;11(2):e1001588.
In a pragmatic cluster-randomized trial, Baiju Shah and colleagues evaluated the effectiveness of printed educational materials for clinician education focusing on cardiovascular disease screening and risk reduction in people with diabetes.
Please see later in the article for the Editors' Summary
Background
Printed educational materials for clinician education are one of the most commonly used approaches for quality improvement. The objective of this pragmatic cluster randomized trial was to evaluate the effectiveness of an educational toolkit focusing on cardiovascular disease screening and risk reduction in people with diabetes.
Methods and Findings
All 933,789 people aged ≥40 years with diagnosed diabetes in Ontario, Canada were studied using population-level administrative databases, with additional clinical outcome data collected from a random sample of 1,592 high risk patients. Family practices were randomly assigned to receive the educational toolkit in June 2009 (intervention group) or May 2010 (control group). The primary outcome in the administrative data study, death or non-fatal myocardial infarction, occurred in 11,736 (2.5%) patients in the intervention group and 11,536 (2.5%) in the control group (p = 0.77). The primary outcome in the clinical data study, use of a statin, occurred in 700 (88.1%) patients in the intervention group and 725 (90.1%) in the control group (p = 0.26). Pre-specified secondary outcomes, including other clinical events, processes of care, and measures of risk factor control, were also not improved by the intervention. A limitation is the high baseline rate of statin prescribing in this population.
Conclusions
The educational toolkit did not improve quality of care or cardiovascular outcomes in a population with diabetes. Despite being relatively easy and inexpensive to implement, printed educational materials were not effective. The study highlights the need for a rigorous and scientifically based approach to the development, dissemination, and evaluation of quality improvement interventions.
Trial Registration
http://www.ClinicalTrials.gov NCT01411865 and NCT01026688
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Clinical practice guidelines help health care providers deliver the best care to patients by combining all the evidence on disease management into specific recommendations for care. However, the implementation of evidence-based guidelines is often far from perfect. Take the example of diabetes. This common chronic disease, which is characterized by high levels of sugar (glucose) in the blood, impairs the quality of life of patients and shortens life expectancy by increasing the risk of cardiovascular diseases (conditions that affect the heart and circulation) and other life-threatening conditions. Patients need complex care to manage the multiple risk factors (high blood sugar, high blood pressure, high levels of fat in the blood) that are associated with the long-term complications of diabetes, and they need to be regularly screened and treated for these complications. Clinical practice guidelines for diabetes provide recommendations on screening and diagnosis, drug treatment, and cardiovascular disease risk reduction, and on helping patients self-manage their disease. Unfortunately, the care delivered to patients with diabetes frequently fails to meet the standards laid down in these guidelines.
Why Was This Study Done?
How can guideline adherence and the quality of care provided to patients be improved? A common approach is to send printed educational materials to clinicians. For example, when the Canadian Diabetes Association (CDA) updated its clinical practice guidelines in 2008, it mailed educational toolkits that contained brochures and other printed materials targeting key themes from the guidelines to family physicians. In this pragmatic cluster randomized trial, the researchers investigate the effect of the CDA educational toolkit that targeted cardiovascular disease screening and treatment on the quality of care of people with diabetes. A pragmatic trial asks whether an intervention works under real-life conditions and whether it works in terms that matter to the patient; a cluster randomized trial randomly assigns groups of people to receive alternative interventions and compares outcomes in the differently treated “clusters.”
What Did the Researchers Do and Find?
The researchers randomly assigned family practices in Ontario, Canada to receive the educational toolkit in June 2009 (intervention group) or in May 2010 (control group). They examined outcomes between July 2009 and April 2010 in all patients with diabetes in Ontario aged over 40 years (933,789 people) using population-level administrative data. In Canada, administrative databases record the personal details of people registered with provincial health plans, information on hospital visits and prescriptions, and physician service claims for consultations, assessments, and diagnostic and therapeutic procedures. They also examined clinical outcome data from a random sample of 1,592 patients at high risk of cardiovascular complications. In the administrative data study, death or non-fatal heart attack (the primary outcome) occurred in about 11,500 patients in both the intervention and control group. In the clinical data study, the primary outcome―use of a statin to lower blood fat levels―occurred in about 700 patients in both study groups. Secondary outcomes, including other clinical events, processes of care, and measures of risk factor control were also not improved by the intervention. Indeed, in the administrative data study, some processes of care outcomes related to screening for heart disease were statistically significantly worse in the intervention group than in the control group, and in the clinical data study, fewer patients in the intervention group reached blood pressure targets than in the control group.
What Do These Findings Mean?
These findings suggest that the CDA cardiovascular diseases educational toolkit did not improve quality of care or cardiovascular outcomes in a population with diabetes. Indeed, the toolkit may have led to worsening in some secondary outcomes although, because numerous secondary outcomes were examined, this may be a chance finding. Limitations of the study include its length, which may have been too short to see an effect of the intervention on clinical outcomes, and the possibility of a ceiling effect—the control group in the clinical data study generally had good care, which left little room for improvement of the quality of care in the intervention group. Overall, however, these findings suggest that printed educational materials may not be an effective way to improve the quality of care for patients with diabetes and other complex conditions and highlight the need for a rigorous, scientific approach to the development, dissemination, and evaluation of quality improvement interventions.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001588.
The US National Diabetes Information Clearinghouse provides information about diabetes for patients, health care professionals, and the general public (in English and Spanish)
The UK National Health Service Choices website provides information (including some personal stories) for patients and carers about type 2 diabetes, the commonest form of diabetes
The Canadian Diabetes Association also provides information about diabetes for patients (including some personal stories about living with diabetes) and health care professionals; its latest clinical practice guidelines are available on its website
The UK National Institute for Health and Care Excellence provides general information about clinical guidelines and about health care quality standards in the UK
The US Agency for Healthcare Research and Quality aims to improve the quality, safety, efficiency, and effectiveness of health care for all Americans (information in English and Spanish); the US National Guideline Clearinghouse is a searchable database of clinical practice guidelines
The International Diabetes Federation provides information about diabetes for patients and health care professionals, along with international statistics on the burden of diabetes
doi:10.1371/journal.pmed.1001588
PMCID: PMC3913553  PMID: 24505216
14.  Evaluating the PRASE patient safety intervention - a multi-centre, cluster trial with a qualitative process evaluation: study protocol for a randomised controlled trial 
Trials  2014;15(1):420.
Background
Estimates show that as many as one in 10 patients are harmed while receiving hospital care. Previous strategies to improve safety have focused on developing incident reporting systems and changing systems of care and professional behaviour, with little involvement of patients. The need to engage with patients about the quality and safety of their care has never been more evident: recent high-profile reviews of poor hospital care have all emphasised the need to develop and support better systems for capturing and responding to the patient perspective on their care. Over the past 3 years, our research team have developed, tested and refined the PRASE (Patient Reporting and Action for a Safe Environment) intervention, which gains patient feedback about quality and safety on hospital wards.
Methods/design
A multi-centre, cluster, wait-list design, randomised controlled trial with an embedded qualitative process evaluation. The aim is to assess the efficacy of the PRASE intervention in achieving patient safety improvements over a 12-month period.
The trial will take place across 32 hospital wards in three NHS Hospital Trusts in the North of England. The PRASE intervention comprises two tools: (1) a 44-item questionnaire which asks patients about safety concerns and issues; and (2) a proforma for patients to report (a) any specific patient safety incidents they have been involved in or witnessed and (b) any positive experiences. These two tools then provide data which are fed back to wards in a structured feedback report. Using this report, ward staff are asked to hold action planning meetings (APMs) in order to action plan, then implement their plans in line with the issues raised by patients in order to improve patient safety and the patient experience.
The trial will be subjected to a rigorous qualitative process evaluation, which will enable interpretation of the trial results. Methods: fieldworker diaries, ethnographic observation of APMs, structured interviews with APM leads, and collection of key data about intervention wards. Intervention fidelity will be assessed primarily by adherence to the intervention, scored using an adapted framework.
Discussion
This study will be one of the largest patient safety trials ever conducted, involving 32 hospital wards. The results will further understanding about how patient feedback on the safety of care can be used to improve safety at a ward level. Incorporating the ‘patient voice’ is critical if patient feedback is to be situated as an integral part of patient safety improvements.
Trial registration
ISRCTN07689702, 16 Aug 2013
Electronic supplementary material
The online version of this article (doi:10.1186/1745-6215-15-420) contains supplementary material, which is available to authorized users.
doi:10.1186/1745-6215-15-420
PMCID: PMC4229607  PMID: 25354689
Patient safety; Safety management; Medical error; Patient participation
15.  Developing an efficient scheduling template of a chemotherapy treatment unit 
The Australasian Medical Journal  2011;4(10):575-588.
This study was undertaken to improve the performance of a chemotherapy treatment unit by increasing throughput and reducing the average patient waiting time. To achieve this objective, a scheduling template was built. The scheduling template is a simple tool that can be used to schedule patients' arrivals at the clinic. A simulation model of the system was built, and several scenarios that match the arrival pattern of the patients to resource availability were designed and evaluated. After detailed analysis, one scenario provided the best system performance, and a scheduling template was developed based on it. After implementing the new scheduling template, 22.5% more patients can be served.
Introduction
CancerCare Manitoba is a provincially mandated cancer care agency dedicated to providing quality care to those who have been diagnosed with and are living with cancer. The MacCharles Chemotherapy unit is specially built to provide chemotherapy treatment to the cancer patients of Winnipeg. In order to maintain an excellent service, it tries to ensure that patients receive their treatment in a timely manner. Maintaining that goal is challenging because of the lack of a proper roster, uneven workload distribution and inefficient resource allotment. To keep both patients and healthcare providers satisfied by serving the maximum number of patients in a timely manner, it is necessary to develop an efficient scheduling template that matches the required demand with the availability of resources. This goal can be reached using simulation modelling. Simulation has proven to be an excellent modelling tool: it can be defined as building computer models that represent real world or hypothetical systems, and then experimenting with these models to study system behaviour under different scenarios.1, 2
A study was undertaken at the Children's Hospital of Eastern Ontario to identify the issues behind the long waiting times in its emergency room.3 A 20-day field observation revealed that staff physician availability and interactions affect patient wait times. Jyväskylä et al.4 used simulation to test different process scenarios, allocate resources and perform activity-based cost analysis in the Emergency Department (ED) at the Central Hospital. The simulation also supported the study of a new operational method, the "triage-team" method, without interrupting the main system. The proposed triage-team method categorises each patient according to the urgency of seeing the doctor and allows the patient to complete the necessary tests before being seen by the doctor for the first time. The simulation study showed that this method would decrease patient throughput time, reduce specialist utilisation and enable ordering all the tests the patient needs right after arrival, thus speeding the referral to treatment.
Santibáñez et al.5 developed a discrete event simulation model of the British Columbia Cancer Agency's ambulatory care unit, which was used to study the impact of scenarios considering different operational factors (delay in starting clinic), appointment schedules (appointment order, appointment adjustment, add-ons to the schedule) and resource allocation. It was found that the best outcomes were obtained when not one but multiple changes were implemented simultaneously. Sepúlveda et al.6 studied the M. D. Anderson Cancer Centre Orlando, a cancer treatment facility, and built a simulation model to analyse and improve the flow process and increase capacity in the main facility. Different scenarios were considered, such as transferring the laboratory and pharmacy areas, adding an extra blood draw room and applying different patient scheduling techniques. The study showed that increasing the number of short-term (four hours or less) patients in the morning could increase chair utilisation.
Discrete event simulation also helps improve a service where staff lack insight into the behaviour of the system as a whole, as is common in real professional systems. Niranjon et al.7 used simulation successfully where they had to face such constraints and a lack of accessible data. Carlos et al.8 used total quality management and simulation–animation to improve the quality of an emergency room. Simulation was used to model the key points of the emergency room, and animation was used to indicate the areas of opportunity. This study revealed that long waiting times, overloaded personnel and an increasing patient withdrawal rate were caused by a lack of capacity in the emergency room.
Baesler et al.9 developed a methodology for a cancer treatment facility to stochastically find a global optimum for the control variables. A simulation model generated the output using a goal programming framework for all the objectives involved in the analysis; a genetic algorithm was then responsible for searching for an improved solution. The control variables considered in this research were the number of treatment chairs, blood draw nurses, laboratory personnel and pharmacy personnel. Guo et al.10 presented a simulation framework considering appointment demand, patient flow logic, the distribution of resources and the scheduling rules followed by the scheduler. The objective of the study was to develop a scheduling rule that ensures 95% of all appointment requests are seen within one week of the request being made, to increase patient satisfaction, and to balance each doctor's schedule so as to maintain a fine harmony between "busy clinic" and "quiet clinic".
Huschka et al.11 studied a healthcare system that was about to change its facility layout. In this case, a simulation study helped them to design the new healthcare practice by evaluating the change in layout before implementation. Historical data, such as patient arrival rates, the number of patients seen each day and the patient flow logic, were used to build a model of the current system. Different scenarios were then designed to measure the effect of changes to the current layout on performance.
Wijewickrama et al.12 developed a simulation model to evaluate appointment schedules (AS) for second-time consultations and patient appointment sequences (PSEQ) in a multi-facility system. Five different appointment rules (ARULE) were considered: i) Bailey; ii) 3Bailey; iii) Individual (Ind); iv) two patients at a time (2AtaTime); v) Variable Interval (V-I) rule. PSEQ is based on the type of patient: appointment patients (APs) and new patients (NPs). The different PSEQ studied were: i) first-come first-served; ii) appointment patients at the beginning of the clinic (APBEG); iii) new patients at the beginning of the clinic (NPBEG); iv) assigning appointed and new patients in an alternating manner (ALTER); v) assigning a new patient after every five appointment patients. Patient no-shows (NOSHOW; 0% and 5%) and patient punctuality (PUNCT; on-time and 10 minutes early) were also considered. The study found that the ALTER-Ind. and ALTER5-Ind. combinations performed best in reducing waiting time (WT) and idle time (IT) per patient under the 0% NOSHOW, on-time PUNCT and 5% NOSHOW, on-time PUNCT conditions. As NOSHOW creates slack time for waiting patients, their WT tends to decrease while IT increases due to unexpected cancellations. Earliness increases congestion, which in turn increases waiting time.
Ramis et al.13 conducted a study of a Medical Imaging Center (MIC) and built a simulation model that was used to improve the patient journey through the imaging centre by reducing wait times and making better use of the resources. The simulation model used a graphical user interface (GUI) to provide the parameters of the centre, such as arrival rates, distances, processing times, resources and schedules. The simulation was used to measure the waiting time of patients under different scenarios. The study found that assigning a common function to the resource personnel could reduce patient waiting times.
The objective of this study is to develop an efficient scheduling template that maximises the number of served patients and minimises the average patient waiting time given the available resources. To accomplish this objective, we build a simulation model that mimics the working conditions of the clinic. We then suggest different scenarios for matching the arrival pattern of the patients with the availability of the resources. Full experiments are performed to evaluate these scenarios, and a simple and practical scheduling template is built based on the identified best scenario. The developed simulation model is described in section 2, which consists of a description of the treatment room, the types of patients and the treatment durations. In section 3, different improvement scenarios are described, and their analysis is presented in section 4. Section 5 illustrates a scheduling template based on one of the improvement scenarios. Finally, the conclusion and future directions of our work are presented in section 6.
Simulation Model
A simulation model represents the actual system and assists in visualising and evaluating the performance of the system under different scenarios without interrupting the actual system. Building a proper simulation model of a system consists of the following steps.
Observing the system to understand the flow of the entities, key players, availability of resources and overall generic framework.
Collecting the data on the number and type of entities, time consumed by the entities at each step of their journey, and availability of resources.
Validating the model after it is built, by confirming that each entity flows as intended and that the statistical data generated by the simulation model are similar to the collected data.
Figure 1 shows the patient flow process in the treatment room. At the patient's first appointment, the oncologist draws up the treatment plan. The treatment time varies according to the patient's condition and may range from 1 hour to 10 hours. Based on the type of treatment, the physician or the clinical clerk books an available treatment chair for that time period.
On the day of the appointment, the patient waits until the booked chair is free. When the chair is free, a nurse from that station comes to the patient, verifies the name and date of birth, and takes the patient to the treatment chair. The nurse then connects and flushes the chemotherapy drug line, which takes about five minutes, and sets up the treatment. The nurse then leaves to serve another patient. Chemotherapy treatment lengths vary from less than an hour to 10-hour infusions. At the end of the treatment, the nurse returns, removes the line and notifies the patient of the next appointment date and time, which also takes about five minutes. Most of the patients visit the clinic to have their PICC line (a peripherally inserted central catheter) cared for. A PICC is a line used to administer the chemotherapy drugs. The PICC line should be regularly cleaned and flushed to maintain patency, and the insertion site checked for signs of infection. It takes a nurse approximately 10–15 minutes to care for a PICC line.
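To make this flow concrete, the following is a minimal discrete-event sketch of the treatment room written in Python with the SimPy library (the study itself used ARENA). The chair and nurse counts, treatment-time mix and arrival rate are illustrative placeholders rather than the clinic's actual parameters, and nurse shift changes are not modelled.

```python
# Minimal sketch of the treatment-room flow described above, using SimPy.
# Resource counts and durations are illustrative placeholders.
import random
import simpy

SETUP_MIN = 5        # nurse connects and flushes the line (~5 minutes)
TEARDOWN_MIN = 5     # nurse removes the line at the end of treatment

def patient(env, chairs, nurses, treatment_min, waits):
    arrive = env.now
    with chairs.request() as chair:          # wait for a booked chair
        yield chair
        with nurses.request() as nurse:      # nurse sets up the infusion
            yield nurse
            waits.append(env.now - arrive)   # record waiting time
            yield env.timeout(SETUP_MIN)
        yield env.timeout(treatment_min)     # infusion runs without the nurse
        with nurses.request() as nurse:      # nurse disconnects the line
            yield nurse
            yield env.timeout(TEARDOWN_MIN)

def arrivals(env, chairs, nurses, waits):
    while True:
        treatment = random.choice([15, 60, 120, 240])    # hypothetical mix
        env.process(patient(env, chairs, nurses, treatment, waits))
        yield env.timeout(random.expovariate(1 / 10.0))  # ~1 arrival / 10 min

random.seed(1)
env = simpy.Environment()
chairs = simpy.Resource(env, capacity=30)   # placeholder: 5 stations x 6 chairs
nurses = simpy.Resource(env, capacity=7)    # placeholder: nurses on shift
waits = []
env.process(arrivals(env, chairs, nurses, waits))
env.run(until=12 * 60)                      # one 12-hour clinic day
print(f"served={len(waits)}, mean wait={sum(waits) / len(waits):.1f} min")
```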
CancerCare Manitoba provided access to the electronic scheduling system, known as "ARIA", a comprehensive information and image management system from Varian Medical Systems that aggregates patient data into a fully electronic medical chart. This system was used to find out how many patients are booked on each clinic day and which chair is used for how many hours. It was necessary to search each patient's history to find out how long the patient spent in which chair. Collecting this snapshot for each patient gives the complete picture of a one-day clinic schedule.
The treatment room consists of the following two main limited resources:
Treatment Chairs: Chairs that are used to seat the patients during the treatment.
Nurses: Nurses are required to inject the treatment line into the patient and remove it at the end of the treatment. They also take care of the patients when they feel uncomfortable.
The MacCharles Chemotherapy unit consists of 11 nurses and 5 stations, described as follows:
Station 1: Station 1 has six chairs (numbered 1 to 6) and two nurses. The two nurses work from 8:00 to 16:00.
Station 2: Station 2 has six chairs (7 to 12) and three nurses. Two nurses work from 8:00 to 16:00 and one nurse works from 12:00 to 20:00.
Station 3: Station 3 has six chairs (13 to 18) and two nurses. The two nurses work from 8:00 to 16:00.
Station 4: Station 4 has six chairs (19 to 24) and three nurses. One nurse works from 8:00 to 16:00. Another nurse works from 10:00 to 18:00.
Solarium Station: The Solarium Station has six chairs (Solarium Stretcher 1, Solarium Stretcher 2, Isolation, Isolation emergency, Fire Place 1, Fire Place 2). Only one nurse is assigned to this station, working from 12:00 to 20:00. The nurses from other stations can help when the need arises.
There is one more nurse known as the "float nurse" who works from 11:00 to 19:00. This nurse can work at any station. Table 1 summarises the working hours of chairs and nurses. All treatment stations start at 8:00 and continue until the assigned nurse for that station completes her shift.
Currently, the clinic uses a scheduling template to assign the patients' appointments, but due to the high demand for appointments it is no longer followed. We believe that this template can be improved based on the availability of nurses and chairs. Clinic workload data were collected from 21 days of field observation. The current scheduling template has 10 types of appointment time slot: 15-minute, 1-hour, 1.5-hour, 2-hour, 3-hour, 4-hour, 5-hour, 6-hour, 8-hour and 10-hour, and it is designed to serve 95 patients. When the scheduling template was compared with the 21 days of observations, it was found that the clinic is serving more patients than it is designed for. Therefore, the providers do not usually follow the scheduling template; indeed, they very often break the time slots to accommodate slots that do not exist in the template. Hence, we find that some of the stations are very busy (mostly station 2) and others are underused. If the scheduling template can be improved, it will be possible to bring more patients to the clinic and reduce their waiting time without adding more resources.
In order to build or develop a simulation model of the existing system, it is necessary to collect the following data:
Types of treatment durations.
Numbers of patients in each treatment type.
Arrival pattern of the patients.
Steps that the patients have to go through in their treatment journey and required time of each step.
Using the observations of 2,155 patients over 21 days of historical data, the types of treatment durations and the number of patients of each type were estimated. These data also assisted in determining the arrival rate and the frequency distribution of the patients. The patients were categorised into six types. The percentage of each type and its associated service time distribution were also determined.
ARENA Rockwell Simulation Software (v13) was used to build the simulation model. Entities of the model were tracked to verify that the patients move as intended. The model was run for 30 replications and statistical data were collected to validate the model. The total number of patients that go through the model was compared with the actual number of served patients during the 21 days of observations.
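As an illustration of this validation step, the short sketch below compares the mean daily throughput across replications with the mean of the observed daily counts; all numbers are stand-ins, not the study's data.

```python
# Sketch of the validation step: compare the daily throughput produced by the
# simulation replications against the observed daily counts from the field
# observation. All numbers below are illustrative placeholders.
import statistics as st

sim_daily_throughput = [101, 104, 99, 103, 105, 102] * 5   # stand-in for 30 replications
observed_daily_counts = [103, 98, 106, 101, 100]           # stand-in for observed days

sim_mean = st.mean(sim_daily_throughput)
sim_sd = st.stdev(sim_daily_throughput)
n = len(sim_daily_throughput)
half_width = 1.96 * sim_sd / n ** 0.5          # approximate 95% CI half-width

obs_mean = st.mean(observed_daily_counts)
print(f"simulated mean: {sim_mean:.1f} +/- {half_width:.1f} patients/day")
print(f"observed mean:  {obs_mean:.1f} patients/day")
# The model is considered reasonable if the observed mean falls inside
# (or close to) the replication confidence interval.
```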
Improvement Scenarios
After verifying and validating the simulation model, different scenarios were designed and analysed to identify the best scenario, that is, the one that can handle more patients and reduce the average patient waiting time. Based on the clinic observations and discussions with the healthcare providers, the following constraints were stated:
The stations are already filled with treatment chairs, so it is not physically possible to fit more chairs in the clinic. Moreover, the stakeholders are not interested in adding extra chairs.
The stakeholders and the caregivers are not interested in changing the layout of the treatment room.
Given these constraints the options that can be considered to design alternative scenarios are:
Changing the arrival pattern of the patients so that it fits the nurses' availability.
Changing the nurses' schedule.
Adding one full time nurse at different starting times of the day.
Figure 2 compares the available number of nurses and the number of patient arrivals during different hours of the day. It can be noticed that there is rapid growth in patient arrivals (from 13 to 17) between 8:00 and 10:00 even though the clinic has the same number of nurses during this period. At 12:00 there is a sudden drop in patient arrivals even though more nurses are available. It is clear that there is an imbalance between the number of available nurses and the number of patient arrivals over the hours of the day. Consequently, balancing the demand (arrival rate of patients) and the resources (available number of nurses) will reduce patient waiting times and increase the number of served patients. The alternative scenarios that satisfy the above constraints are listed in Table 2. These scenarios respect the following rules:
Long treatments (between 4 and 11 hours) have to be scheduled early in the morning to avoid working overtime.
Patients of type 1 (15-minute to 1-hour treatments) are the most common. They can be fitted in at any time of the day because their treatment times are short. Hence, it is recommended to bring these patients in during the middle of the day, when there are more nurses.
Nurses get tired at the end of the clinic day. Therefore, fewer patients should be scheduled at the late hours of the day.
In Scenario 1, the arrival pattern of the patients was changed so that it fits the nurses' schedule. This arrival pattern is shown in Table 3. Figure 3 shows the new patient arrival pattern compared with the current arrival pattern. Similar patterns can be developed for the remaining scenarios.
Analysis of Results
ARENA Rockwell Simulation software (v13) was used to develop the simulation model. There is no warm-up period because the model simulates day-to-day scenarios: the patients of any day are supposed to be served on the same day. The model was run for 30 replications and statistical data were collected to evaluate each scenario. Tables 4 and 5 show a detailed comparison of system performance between the current scenario and Scenario 1. The results are quite interesting: the average throughput of the system increased from 103 to 125 patients per day, and the maximum throughput can reach 135 patients. Although the average waiting time increased, the utilisation of the treatment stations increased by 15.6%. Similar analyses were performed for the remaining scenarios; due to space limitations the detailed results are not given. However, Table 6 summarises the results and compares the different scenarios. Scenario 1 was able to significantly increase the throughput of the system (by 21%) while still resulting in an acceptably low average waiting time (13.4 minutes). In addition, it is worth noting that adding a nurse (Scenarios 3, 4 and 5) does not significantly reduce the average wait time or increase the system's throughput. The reason is that when all the chairs are busy, the nurses have to wait until some patients finish their treatment, and as a consequence the other patients have to wait for the commencement of their treatment too. Therefore, hiring a nurse without adding more chairs will not reduce the waiting time or increase the throughput of the system. In this case, the only way to increase the throughput of the system is to adjust the arrival pattern of patients to the nurses' schedule.
Developing a Scheduling Template based on Scenario 1
Scenario 1 provides the best performance. However, a scheduling template is necessary for the care providers to book the patients. Therefore, a brief description is provided below of how the scheduling template is developed based on this scenario.
Table 3 gives the number of patients that arrive hourly under Scenario 1. The distribution of each type of patient is shown in Table 7. This distribution is based on the percentage of each type of patient in the collected data. For example, between 8:00 and 9:00, 12 patients will arrive, of whom 54.85% are of Type 1, 34.55% are of Type 2, 15.163% are of Type 3, 4.32% are of Type 4, 2.58% are of Type 5 and the rest are of Type 6. It is worth noting that we assume the patients of each type arrive as a group at the beginning of the hourly time slot; for example, all six Type 1 patients in the 8:00 to 9:00 time slot arrive at 8:00.
The number of patients of each type is distributed in such a way that it respects all the constraints described in Section 1.3. Most of the patients of the clinic are of types 1, 2 and 3, and they require less treatment time than patients of the other types; therefore, they are distributed throughout the day. Patients of types 4, 5 and 6 require longer treatment times and are scheduled at the beginning of the day to avoid overtime. Because patients of types 4, 5 and 6 come at the beginning of the day, most type 1 and 2 patients come at mid-day (12:00 to 16:00). Another reason to make the treatment room busier between 12:00 and 16:00 is that the clinic has the maximum number of nurses during this period. Nurses become tired towards the end of the clinic day, which is a reason not to schedule any patients after 19:00.
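As a purely illustrative sketch of this step, the snippet below splits an hourly arrival count across patient types using largest-remainder rounding; the type proportions are hypothetical placeholders, not the values in Table 7.

```python
# Sketch of how an hourly arrival count could be split across patient types.
# The type proportions are hypothetical placeholders; rounding is handled by
# the largest-remainder method so the counts always sum to the hourly total.
def split_by_type(arrivals, proportions):
    raw = {t: arrivals * p for t, p in proportions.items()}
    counts = {t: int(v) for t, v in raw.items()}
    leftover = arrivals - sum(counts.values())
    # hand out the remaining patients to the types with the largest remainders
    for t in sorted(raw, key=lambda t: raw[t] - counts[t], reverse=True)[:leftover]:
        counts[t] += 1
    return counts

hypothetical_mix = {1: 0.50, 2: 0.30, 3: 0.10, 4: 0.05, 5: 0.03, 6: 0.02}
print(split_by_type(12, hypothetical_mix))   # e.g. an 8:00-9:00 slot of 12 patients
```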
Based on the patient arrival schedule and nurse availability, a scheduling template is built and shown in Figure 4. To build the template, if a nurse is available and there are patients waiting for service, a priority list of these patients is developed. They are prioritised in descending order based on their estimated slack time and secondarily based on the shortest service time; the secondary rule is used to break a tie if two patients have the same slack. The slack time is calculated using the following equation:
Slack time = Due time - (Arrival time + Treatment time)
Due time is the clinic closing time. To explain how the process works, assume that at 8:00 (between 8:00 and 8:15) seven patients in total are scheduled: two patients in station 1 (one 8-hour and one 15-minute patient), two in station 2 (two 12-hour patients), two in station 3 (one 2-hour and one 15-minute patient) and one in station 4 (one 3-hour patient). According to Figure 2, seven nurses are available at 8:00 and it takes 15 minutes to set up a patient. Therefore, it is not possible to schedule more than seven patients between 8:00 and 8:15, and the current schedule also serves seven patients in this interval. The rest of the template can be justified similarly.
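The following sketch illustrates this priority rule in Python, assuming that the patient with the least slack is the most urgent and that ties are broken by the shortest treatment time; the waiting list is hypothetical.

```python
# Sketch of the priority rule used to place waiting patients when a nurse
# becomes free: slack time = due time - (arrival time + treatment time),
# with ties broken by the shortest treatment time. Times are in minutes from
# clinic opening at 8:00; the patient list is hypothetical. This sketch
# assumes the patient with the least slack is the most urgent and goes first.
CLINIC_CLOSE = 12 * 60   # due time: clinic closes 12 hours after opening

def slack(patient):
    return CLINIC_CLOSE - (patient["arrival"] + patient["treatment"])

waiting = [
    {"id": "A", "arrival": 0,  "treatment": 480},  # 8-hour infusion
    {"id": "B", "arrival": 0,  "treatment": 15},   # 15-minute PICC care
    {"id": "C", "arrival": 60, "treatment": 120},  # 2-hour infusion
]

# least slack first, then shortest treatment time to break ties
priority = sorted(waiting, key=lambda p: (slack(p), p["treatment"]))
for p in priority:
    print(p["id"], "slack =", slack(p), "min")
```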
doi:10.4066/AMJ.2011.837
PMCID: PMC3562880  PMID: 23386870
16.  Endovascular Radiofrequency Ablation for Varicose Veins 
Executive Summary
Objective
The objective of the MAS evidence review was to conduct a systematic review of the available evidence on the safety, effectiveness, durability and cost–effectiveness of endovascular radiofrequency ablation (RFA) for the treatment of primary symptomatic varicose veins.
Background
The Ontario Health Technology Advisory Committee (OHTAC) met on August 26th, 2010 to review the safety, effectiveness, durability, and cost-effectiveness of RFA for the treatment of primary symptomatic varicose veins based on an evidence-based review by the Medical Advisory Secretariat (MAS).
Clinical Condition
Varicose veins (VV) are tortuous, twisted, or elongated veins. This can be due to existing (inherited) valve dysfunction or decreased vein elasticity (primary venous reflux) or valve damage from prior thrombotic events (secondary venous reflux). The end result is pooling of blood in the veins, increased venous pressure and subsequent vein enlargement. As a result of high venous pressure, branch vessels balloon out leading to varicosities (varicose veins).
Symptoms typically affect the lower extremities and include (but are not limited to): aching, swelling, throbbing, night cramps, restless legs, leg fatigue, itching and burning. Left untreated, venous reflux tends to be progressive, often leading to chronic venous insufficiency (CVI). A number of complications are associated with untreated venous reflux, including superficial thrombophlebitis as well as variceal rupture and haemorrhage. CVI often results in chronic skin changes referred to as stasis dermatitis. Stasis dermatitis comprises a spectrum of cutaneous abnormalities including edema, hyperpigmentation, eczema, lipodermatosclerosis and stasis ulceration. Ulceration represents the disease end point for severe CVI. CVI is associated with a reduced quality of life, particularly in relation to pain, physical function and mobility. In severe cases (VV with ulcers), QOL has been rated as bad as or worse than that of other chronic diseases such as back pain and arthritis.
Lower limb VV is a very common disease affecting adults, estimated to be the 7th most common reason for physician referral in the US. There is a very strong familial predisposition to VV: the risk in offspring is 90% if both parents are affected, 20% when neither is affected, and 45% (25% for boys, 62% for girls) if one parent is affected. The prevalence of VV worldwide ranges from 5% to 15% among men and 3% to 29% among women, varying by the age, gender and ethnicity of the study population, survey methods, and disease definition and measurement. The annual incidence of VV estimated from the Framingham Study was 2.6% among women and 1.9% among men and did not vary within the age range (40–89 years) studied.
Approximately 1% of the adult population has a stasis ulcer of venous origin at any one time, with 4% at risk. The majority of leg ulcer patients are elderly with simple superficial vein reflux. Stasis ulcers are often lengthy medical problems that can last for several years and, despite effective compression therapy and multilayer bandaging, are associated with high recurrence rates. Recent trials involving surgical treatment of superficial vein reflux have resulted in healing and significantly reduced recurrence rates.
Endovascular Radiofrequency Ablation for Varicose Veins
RFA is an image-guided minimally invasive treatment alternative to surgical stripping of superficial venous reflux. RFA does not require an operating room or general anaesthesia and has been performed in an outpatient setting by a variety of medical specialties including surgeons and interventional radiologists. Rather than surgically removing the vein, RFA works by destroying or ablating the refluxing vein segment using thermal energy delivered through a radiofrequency catheter.
Prior to performing RFA, color-flow Doppler ultrasonography is used to confirm and map all areas of venous reflux in order to devise a safe and effective treatment plan. The RFA procedure involves the introduction of a guide wire into the target vein under ultrasound guidance, followed by the insertion of an introducer sheath through which the RFA catheter is advanced. Once satisfactory positioning has been confirmed with ultrasound, a tumescent anaesthetic solution is injected into the soft tissue surrounding the target vein along its entire length. This serves to anaesthetize the vein, insulate adjacent structures (including nerves and skin) from heat damage, and compress the vein, optimising contact of the vessel wall with the electrodes or expanded prongs of the RF device. The RF generator is then activated and the catheter is slowly pulled along the length of the vein. At the end of the procedure, hemostasis is achieved by applying pressure to the vein entry point.
Adequate and proper compression stockings and bandages are applied after the procedure to reduce the risk of venous thromboembolism and to reduce postoperative bruising and tenderness. Patients are encouraged to walk immediately after the procedure. Follow-up protocols vary, with most patients returning 1 to 3 weeks later for an initial follow-up visit. At this point, the initial clinical result is assessed and occlusion of the treated vessels is confirmed with ultrasound. Patients often have a second follow-up visit 1 to 3 months following RFA at which time clinical evaluation and ultrasound are repeated. If required, additional procedures such as phlebectomy or sclerotherapy may be performed during the RFA procedure or at any follow-up visits.
Regulatory Status
The Closure System® radiofrequency generator for endovascular thermal ablation of varicose veins was approved by Health Canada as a class 3 device in March 2005 and registered under medical device license 67865. The ClosureFast RFA intravascular catheter was approved by Health Canada in November 2007 and registered under medical device license 16574. The Closure System® also has regulatory approvals in Australia, Europe (CE Mark) and the United States (FDA clearance). In Ontario, RFA is not an insured service and is currently being introduced in private clinics.
Methods
Literature Search
The MAS evidence–based review was performed to support public financing decisions. The literature search was performed on March 9th, 2010 using standard bibliographic databases for studies published up until March, 2010.
Inclusion Criteria
English language full reports and human studies
Original reports with defined study methodology
Reports including standardized measurements of outcome events such as technical success, safety, effectiveness, durability, quality of life or patient satisfaction
Reports involving RFA for varicose veins (great or small saphenous veins)
Randomized controlled trials (RCTs), systematic reviews and meta-analyses
Cohort and controlled clinical studies involving ≥ 1 month ultrasound imaging follow-up
Exclusion Criteria
Non-systematic reviews, letters, comments and editorials
Reports not involving outcome events such as safety, effectiveness, durability, or patient satisfaction following an intervention with RFA
Reports not involving interventions with RFA for varicose veins
Pilot studies or studies with small samples (< 50 subjects)
Summary of Findings
The MAS evidence search on the safety and effectiveness of endovascular RFA ablation of VV identified the following evidence: three HTAs, nine systematic reviews, eight randomized controlled trials (five comparing RFA to surgery and three comparing RFA to ELT), five controlled clinical trials and fourteen cohort case series (four were multicenter registry studies).
The majority (12⁄14) of the cohort studies (3,664 patients) evaluating RFA for VV involved treatment with first generation RFA catheters, and the great saphenous vein (GSV) was the target vessel in all studies. Major adverse events were uncommonly reported, and the overall pooled major adverse event rate extracted from the cohort studies was 2.9% (105⁄3,664). Imaging-defined treatment effectiveness (vein closure rates) was variable, ranging from 68% to 96% at post-operative follow-up. The vein ablation rate at 6-month follow-up was reported in four studies, with rates close to 90%. Only one study reported vein closure rates at 2 years, and only for a minority of the eligible cases. The two studies reporting on RFA with the more efficient second generation catheters involved better follow-up and reported higher ablation rates, close to 100% at 6-month follow-up, with no major adverse events. A large prospective registry trial that recruited over 1,000 patients at thirty-four largely European centers reported treatment success in six overlapping reports on selected patient subgroups at various follow-up points up to 5 years. However, the follow-up of eligible recruited patients at all time points was low, resulting in inadequate estimates of longer-term treatment efficacy.
The overall level of evidence of the randomized trials comparing RFA with surgical ligation and vein stripping (n = 5) was graded as low to moderate. In all trials, RFA was performed with first generation catheters in the setting of the operating theatre under general anaesthesia, usually without tumescent anaesthesia. Procedure times were significantly longer for RFA than for surgery. Recovery after treatment was significantly quicker after RFA, both for return to usual activity and return to work, with on average one week less work loss. Major adverse events were more frequent after surgery (1.8%, n = 4) than after RFA (0.4%, n = 1), but not significantly so. Treatment effectiveness, measured by imaging-defined vein absence or vein closure, was comparable in the two treatment groups. Significant improvements in vein symptoms and quality of life over baseline were reported for both treatment groups. Improvements in these outcomes were significantly greater in the RFA group than in the surgery group in the peri-operative period but not in later follow-up. Follow-up in these trials was inadequate to evaluate longer-term recurrence for either treatment. Patient satisfaction was reported to be high for both treatments but was higher for RFA.
The studies comparing endovascular treatment approaches for VV (RFA and ELT) were more limited. Three RCTs compared RFA (two with the second generation catheter) with ELT but mainly focused on peri-procedural outcomes such as pain, complications and recovery. Vein ablation rates were not evaluated in the trials, except for one small trial involving bilateral VV. Pain responses in patients undergoing ablation were extremely variable: up to 2 weeks, mean pain levels were significantly lower with RFA than with ELT, but differences were not significant at one month. Recovery, evaluated as return to usual activity or return to work, was similar in the treatment groups. Vein symptoms and QOL improved in both groups, with significantly greater improvements in the RFA group than in the ELT group at 2 weeks but not at one month. Vein ablation rates were evaluated in several controlled clinical studies comparing the treatments between centers, or within centers between individuals or over time. Comparisons in these studies were inconsistent, with vein ablation rates for RFA reported to be similar to, higher than, and lower than those with ELT.
Economic Analysis
RFA and surgical vein stripping, the main comparator reimbursed by the public system, are comparable in clinical benefits. Hence, a cost analysis was conducted to identify the differences in resources and costs between the two procedures, and a budgetary impact analysis (BIA) was conducted to project costs over a 5-year period in the province of Ontario. The target population of this economic analysis was patients with symptomatic varicose veins, and the primary analytic perspective was that of the Ministry of Health and Long-Term Care.
The average case cost (based on Ontario hospital costs and medical resources) for surgical vein stripping was estimated to be $1,799. To calculate a procedural cost for RFA, it was assumed that the hospital cost and physician labour fees, excluding anaesthesia and surgical assistance, were the same as for vein stripping surgery. The manufacturer also provided details on the generator, with a capital cost of $27,500 and a lifespan of 5 years, and on the disposables (catheter, sheath, guidewire), with a cost of $673 per case. The average case cost for RFA was therefore estimated to be $1,356. One-way sensitivity analysis was also conducted with the hospital cost of RFA varied to 60% of that of vein stripping surgery (average cost per case = $627.08) to calculate the impact to the province.
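The sketch below shows one way the per-case RFA cost could be assembled from these figures; the annual case volume and the base hospital-and-labour component are hypothetical placeholders, since the review does not report them directly.

```python
# Sketch of how the per-case RFA cost could be assembled from the figures
# above. The annual case volume and the base hospital/labour component are
# hypothetical placeholders; only the generator price, lifespan and
# disposable cost come from the text.
GENERATOR_COST = 27_500       # capital cost of the RF generator ($)
GENERATOR_LIFESPAN_YEARS = 5
DISPOSABLES_PER_CASE = 673    # catheter, sheath, guidewire ($)

def rfa_cost_per_case(base_hospital_and_labour, cases_per_year):
    generator_per_case = GENERATOR_COST / (GENERATOR_LIFESPAN_YEARS * cases_per_year)
    return base_hospital_and_labour + DISPOSABLES_PER_CASE + generator_per_case

# With a hypothetical 100 cases/year and a hypothetical $650 base component,
# the estimate lands in the neighbourhood of the review's $1,356 average case cost.
print(round(rfa_cost_per_case(base_hospital_and_labour=650, cases_per_year=100)))
```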
Historical volumes of vein stripping surgeries in Ontario were used to project surgical volumes linearly up to five years into the future. Volumes for RFA and ELT were calculated from the expected capture of the surgical market share, informed by clinical expert opinion and by existing private data from discussions with the manufacturer. RFA is expected to compete with ELT and capture some of its market. If ELT is reimbursed by the public sector, its numbers will continue to increase from previous private volumes and from share capture of the conventional surgical treatment market; RFA cases will therefore also increase, since RFA will capture a share of the ELT market. The budget impact to the province was then calculated by multiplying volumes by the cost of the procedure.
RFA is comparable in clinical benefits to vein stripping surgery. It has the extra upfront cost of the generator and a per-case cost for disposables, but it does not require an operating theatre, anaesthetist or surgical assistant fees. The impact to the province is expected to be approximately $5 million by year 5 with the introduction of the new ELT and RFA image-guided endovascular technologies alongside existing surgery for varicose veins.
Conclusion
The conclusions on the comparative outcomes between endovascular RFA and surgical ligation and saphenous vein stripping and between endovascular RFA and laser ablation for VV treatment are summarized in the table below (ES Table 1).
Outcome comparisons of RFA vs. surgery and RFA vs ELT for varicose veins
ELT refers to endovascular laser ablation; RFA, radiofrequency ablation
The outcomes of the evidence-based review on these treatments for VV based on different perspectives are summarized below:
RFA First versus Second Generation Catheters and Segmental Ablation
Ablation with second generation catheters and segmental ablation offered technical advantages, with improved ease of use and significant decreases in procedure time. RFA with second generation catheters is also no longer restricted to smaller (< 12 mm diameter) saphenous veins. The safety profile with the new device and method of energy delivery is as good as or better than that of the first generation device. No major adverse events were reported in two multicenter prospective cohort studies with 6-month follow-up involving over 500 patients. Post-operative complications such as bruising and pain were significantly less frequent with second generation RFA catheters than with ELT in two RCTs. RFA treatment with second generation catheters has ablation rates that are higher than with first generation catheters and more comparable with the consistently high rates of ELT.
Endovascular RFA versus Surgery
RFA has a quicker recovery, attributable to decreased pain and fewer minor complications. In the short term, RFA was comparable to surgery in treatment effectiveness as assessed by imaging-defined anatomic outcomes such as vein closure, flow or reflux. Other treatment outcomes, such as symptomatic relief and HRQOL, were significantly improved in both groups, and between-group differences in the early peri-operative period were likely influenced by pain experiences. Longer-term follow-up was inadequate to evaluate recurrence after either treatment. Patient satisfaction was high after both treatments but was higher for RFA than for surgery.
Endovascular RFA versus ELT
RFA causes significantly less post-operative pain than ELT, but differences were not significant when pain was adjusted for analgesic use, and pain differences between groups did not persist at 1-month follow-up. Treatment effectiveness, measured as symptom relief and QOL improvement, was similar between the endovascular treatments in the short term (within 1 month). Treatment effectiveness measured as imaging-defined vein ablation was not assessed in any RCT (except for bilateral VV disease), and results were inconsistently reported in observational trials. Longer-term follow-up was not available to assess recurrence after either treatment.
System Outcomes – RFA Replacing Surgery or Competing with ELT
RFA may offer system advantages in that the treatment can be offered by several medical specialties in outpatient settings and does not require an operating theatre or general anaesthesia. The treatment may result in decanting of patients from the OR, with decreased pre-surgical investigations, demand on anaesthetists' time, hospital stays and wait times for VV treatment. It may also provide more reliable outpatient scheduling. Procedure costs may be lower for endovascular approaches than for surgery, but the budget impact may be greater with insurance of RFA because of the transfer of cases from the private market to the public payer system. Competition between the RFA and ELT endovascular approaches is likely to continue to stimulate innovation and technical changes that advance patient care and result in competitive pricing.
PMCID: PMC3377553  PMID: 23074413
17.  Gene-Lifestyle Interaction and Type 2 Diabetes: The EPIC InterAct Case-Cohort Study 
PLoS Medicine  2014;11(5):e1001647.
In this study, Wareham and colleagues quantified the combined effects of genetic and lifestyle factors on risk of T2D in order to inform strategies for prevention. The authors found that the relative effect of a type 2 diabetes genetic risk score is greater in younger and leaner participants, and the high absolute risk associated with obesity at any level of genetic risk highlights the importance of universal rather than targeted approaches to lifestyle intervention.
Please see later in the article for the Editors' Summary
Background
Understanding of the genetic basis of type 2 diabetes (T2D) has progressed rapidly, but the interactions between common genetic variants and lifestyle risk factors have not been systematically investigated in studies with adequate statistical power. Therefore, we aimed to quantify the combined effects of genetic and lifestyle factors on risk of T2D in order to inform strategies for prevention.
Methods and Findings
The InterAct study includes 12,403 incident T2D cases and a representative sub-cohort of 16,154 individuals from a cohort of 340,234 European participants with 3.99 million person-years of follow-up. We studied the combined effects of an additive genetic T2D risk score and modifiable and non-modifiable risk factors using Prentice-weighted Cox regression and random effects meta-analysis methods. The effect of the genetic score was significantly greater in younger individuals (p for interaction = 1.20 × 10⁻⁴). Relative genetic risk (per standard deviation [4.4 risk alleles]) was also larger in participants who were leaner, both in terms of body mass index (p for interaction = 1.50 × 10⁻³) and waist circumference (p for interaction = 7.49 × 10⁻⁹). Examination of absolute risks by strata showed the importance of obesity for T2D risk. The 10-y cumulative incidence of T2D rose from 0.25% to 0.89% across extreme quartiles of the genetic score in normal weight individuals, compared to 4.22% to 7.99% in obese individuals. We detected no significant interactions between the genetic score and sex, diabetes family history, physical activity, or dietary habits assessed by a Mediterranean diet score.
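As a rough illustration of this kind of analysis (not the InterAct pipeline), the sketch below fits a Cox model with a genetic risk score, age and their interaction on synthetic data using the Python lifelines package, with the Prentice case-cohort weighting reduced to a simple per-subject weight column.

```python
# Sketch of the kind of analysis described above: a Cox model with a genetic
# risk score (GRS), age and a GRS-by-age interaction, fitted on synthetic data
# with the lifelines package. The case-cohort weighting is reduced here to a
# plain weight column, and robust variance is requested.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 5000
age = rng.uniform(35, 70, n)
grs = rng.normal(0, 1, n)                          # standardised genetic risk score
# synthetic hazard: GRS effect shrinks with age (negative interaction)
lin = 0.5 * grs + 0.03 * (age - 50) - 0.02 * grs * (age - 50)
time = rng.exponential(scale=10 / np.exp(lin))
event = (time < 10).astype(int)                    # administrative censoring at 10 y
time = np.minimum(time, 10)

df = pd.DataFrame({
    "time": time, "event": event, "grs": grs,
    "age_c": age - 50, "grs_x_age": grs * (age - 50),
    "w": np.where(event == 1, 1.0, 2.0),           # stand-in for case-cohort weights
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event", weights_col="w", robust=True)
cph.print_summary()                                # interaction term: grs_x_age
```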
Conclusions
The relative effect of a T2D genetic risk score is greater in younger and leaner participants. However, this sub-group is at low absolute risk and would not be a logical target for preventive interventions. The high absolute risk associated with obesity at any level of genetic risk highlights the importance of universal rather than targeted approaches to lifestyle intervention.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Worldwide, more than 380 million people currently have diabetes, and the condition is becoming increasingly common. Diabetes is characterized by high levels of glucose (sugar) in the blood. Blood sugar levels are usually controlled by insulin, a hormone released by the pancreas after meals (digestion of food produces glucose). In people with type 2 diabetes (the commonest type of diabetes), blood sugar control fails because the fat and muscle cells that normally respond to insulin by removing excess sugar from the blood become less responsive to insulin. Type 2 diabetes can often initially be controlled with diet and exercise (lifestyle changes) and with antidiabetic drugs such as metformin and sulfonylureas, but patients may eventually need insulin injections to control their blood sugar levels. Long-term complications of diabetes, which include an increased risk of heart disease and stroke, reduce the life expectancy of people with diabetes by about ten years compared to people without diabetes.
Why Was This Study Done?
Type 2 diabetes is thought to originate from the interplay between genetic and lifestyle factors. But although rapid progress is being made in understanding the genetic basis of type 2 diabetes, it is not known whether the consequences of adverse lifestyles (for example, being overweight and/or physically inactive) differ according to an individual's underlying genetic risk of diabetes. It is important to investigate this question to inform strategies for prevention. If, for example, obese individuals with a high level of genetic risk have a higher risk of developing diabetes than obese individuals with a low level of genetic risk, then preventative strategies that target lifestyle interventions to obese individuals with a high genetic risk would be more effective than strategies that target all obese individuals. In this case-cohort study, researchers from the InterAct consortium quantify the combined effects of genetic and lifestyle factors on the risk of type 2 diabetes. A case-cohort study measures exposure to potential risk factors in a group (cohort) of people and compares the occurrence of these risk factors in people who later develop the disease with those who remain disease free.
What Did the Researchers Do and Find?
The InterAct study involves 12,403 middle-aged individuals who developed type 2 diabetes after enrollment (incident cases) into the European Prospective Investigation into Cancer and Nutrition (EPIC) and a sub-cohort of 16,154 EPIC participants. The researchers calculated a genetic type 2 diabetes risk score for most of these individuals by determining which of 49 gene variants associated with type 2 diabetes each person carried, and collected baseline information about exposure to lifestyle risk factors for type 2 diabetes. They then used various statistical approaches to examine the combined effects of the genetic risk score and lifestyle factors on diabetes development. The effect of the genetic score was greater in younger individuals than in older individuals and greater in leaner participants than in participants with larger amounts of body fat. The absolute risk of type 2 diabetes, expressed as the ten-year cumulative incidence of type 2 diabetes (the percentage of participants who developed diabetes over a ten-year period) increased with increasing genetic score in normal weight individuals from 0.25% in people with the lowest genetic risk scores to 0.89% in those with the highest scores; in obese people, the ten-year cumulative incidence rose from 4.22% to 7.99% with increasing genetic risk score.
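As an illustration of the additive risk score construction described above, the sketch below simply counts risk alleles across a handful of variants; the variant names and genotypes are hypothetical, and the InterAct analysis itself combined 49 T2D-associated variants and reported effects per standard deviation (about 4.4 alleles).

```python
# Hypothetical additive genetic risk score: the count of risk alleles carried.
# Variant names and genotypes are illustrative, not InterAct data.

def genetic_risk_score(genotypes):
    """genotypes: dict mapping variant id -> risk-allele count (0, 1 or 2)."""
    return sum(genotypes.values())

person = {"rs7903146": 2, "rs1801282": 1, "rs5219": 0}  # hypothetical genotypes
print(genetic_risk_score(person))  # -> 3 risk alleles
```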
What Do These Findings Mean?
These findings show that in this middle-aged cohort, the relative association with type 2 diabetes of a genetic risk score comprised of a large number of gene variants is greatest in individuals who are younger and leaner at baseline. This finding may in part reflect the methods used to originally identify gene variants associated with type 2 diabetes, and future investigations that include other genetic variants, other lifestyle factors, and individuals living in other settings should be undertaken to confirm this finding. Importantly, however, this study shows that young, lean individuals with a high genetic risk score have a low absolute risk of developing type 2 diabetes. Thus, this sub-group of individuals is not a logical target for preventative interventions. Rather, suggest the researchers, the high absolute risk of type 2 diabetes associated with obesity at any level of genetic risk highlights the importance of universal rather than targeted approaches to lifestyle intervention.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001647.
The US National Diabetes Information Clearinghouse provides information about diabetes for patients, health-care professionals and the general public, including detailed information on diabetes prevention (in English and Spanish)
The UK National Health Service Choices website provides information for patients and carers about type 2 diabetes and about living with diabetes; it also provides people's stories about diabetes
The charity Diabetes UK provides detailed information for patients and carers in several languages, including information on healthy lifestyles for people with diabetes
The UK-based non-profit organization Healthtalkonline has interviews with people about their experiences of diabetes
The Genetic Landscape of Diabetes is published by the US National Center for Biotechnology Information
More information on the InterAct study is available
MedlinePlus provides links to further resources and advice about diabetes and diabetes prevention (in English and Spanish)
doi:10.1371/journal.pmed.1001647
PMCID: PMC4028183  PMID: 24845081
18.  Endovascular Laser Therapy for Varicose Veins 
Executive Summary
Objective
The objective of the MAS evidence review was to conduct a systematic review of the available evidence on the safety, effectiveness, durability and cost–effectiveness of endovascular laser therapy (ELT) for the treatment of primary symptomatic varicose veins (VV).
Background
The Ontario Health Technology Advisory Committee (OHTAC) met on November 27, 2009 to review the safety, effectiveness, durability and cost-effectiveness of ELT for the treatment of primary VV based on an evidence-based review by the Medical Advisory Secretariat (MAS).
Clinical Condition
VV are tortuous, twisted, or elongated veins. This can be due to pre-existing (inherited) valve dysfunction or decreased vein elasticity (primary venous reflux), or to valve damage from prior thrombotic events (secondary venous reflux). The end result is pooling of blood in the veins, increased venous pressure, and subsequent vein enlargement. As a result of the high venous pressure, branch vessels balloon out, leading to varicosities (varicose veins).
Symptoms typically affect the lower extremities and include (but are not limited to): aching, swelling, throbbing, night cramps, restless legs, leg fatigue, itching and burning. Left untreated, venous reflux tends to be progressive, often leading to chronic venous insufficiency (CVI).
A number of complications are associated with untreated venous reflux, including superficial thrombophlebitis as well as variceal rupture and haemorrhage. CVI often results in chronic skin changes referred to as stasis dermatitis, which comprises a spectrum of cutaneous abnormalities including edema, hyperpigmentation, eczema, lipodermatosclerosis and stasis ulceration. Ulceration represents the disease end point for severe CVI.
CVI is associated with a reduced quality of life, particularly in relation to pain, physical function and mobility. In severe cases (VV with ulcers), quality of life has been rated as bad as, or worse than, that in other chronic diseases such as back pain and arthritis.
Lower limb VV is a common disease affecting adults and is estimated to be the seventh most common reason for physician referral in the US. There is a strong familial predisposition to VV, the risk in offspring being 90% if both parents are affected, 20% when neither is affected, and 45% (25% of boys, 62% of girls) if one parent is affected. Globally, the prevalence of VV ranges from 5% to 15% among men and 3% to 29% among women, varying with the age, gender and ethnicity of the study population and with survey methods and disease definition and measurement. The annual incidence of VV estimated from the Framingham Study was 2.6% among women and 1.9% among men and did not vary within the age range studied (40–89 years).
Approximately 1% of the adult population has a stasis ulcer of venous origin at any one time, with 4% at risk. The majority of leg ulcer patients are elderly with simple superficial vein reflux. Stasis ulcers are often lengthy medical problems that can last for several years and, despite effective compression therapy and multilayer bandaging, are associated with high recurrence rates. Recent trials involving surgical treatment of superficial vein reflux have resulted in healing and significantly reduced recurrence rates.
Endovascular Laser Therapy for VV
ELT is an image-guided, minimally invasive treatment alternative to surgical stripping of superficial venous reflux. It does not require an operating room or general anesthesia and has been performed in outpatient settings by a variety of medical specialties including surgeons (vascular or general), interventional radiologists and phlebologists. Rather than surgically removing the vein, ELT works by destroying, cauterizing or ablating the refluxing vein segment using heat energy delivered via laser fibre.
Prior to ELT, colour-flow Doppler ultrasonography is used to confirm and map all areas of venous reflux in order to devise a safe and effective treatment plan. The ELT procedure involves the introduction of a guide wire into the target vein under ultrasound guidance, followed by the insertion of an introducer sheath through which an optical fibre carrying the laser energy is advanced. A tumescent anesthetic solution is injected into the soft tissue surrounding the target vein along its entire length. This serves to anaesthetize the vein so that the patient feels no discomfort during the procedure; it also insulates adjacent structures, including nerves and skin, from heat damage. Once satisfactory positioning has been confirmed with ultrasound, the laser is activated and both the laser fibre and the sheath are slowly and continuously pulled back along the length of the target vessel. At the end of the procedure, haemostasis is achieved by applying pressure to the entry point.
Compression stockings and bandages are applied after the procedure to reduce the risk of venous thromboembolism and to reduce postoperative bruising and tenderness. Patients are encouraged to walk immediately after the procedure, and most return to work or usual activity within a few days. Follow-up protocols vary, with most patients returning 1–3 weeks later for an initial follow-up visit. At this point, the initial clinical result is assessed and occlusion of the treated vessels is confirmed with ultrasound. Patients often have a second follow-up visit 1–3 months after ELT, at which time clinical evaluation and ultrasound are repeated. If required, sclerotherapy may be performed during the ELT procedure or at any follow-up visit.
Regulatory Status
Endovascular laser for the treatment of VV was approved by Health Canada as a class 3 device in 2002. ELT has been an insured service in Saskatchewan since 2007, the only province to insure it. Although the treatment is not an insured service in Ontario, it has been provided by various medical specialties in over 20 private clinics since 2002.
Methods
Literature Search
The MAS evidence-based review was performed as an update to the 2007 health technology assessment conducted by the Australian Medical Services Advisory Committee (MSAC) to support public financing decisions. The literature search was performed on August 18, 2009 using standard bibliographic databases for studies published from January 1, 2007 to August 15, 2009. Search alerts were generated and reviewed for additional relevant literature until October 1, 2009.
Inclusion Criteria
English language full-reports and human studies
Original reports with defined study methodology
Reports including standardized measurements on outcome events such as technical success, safety, effectiveness, durability, quality of life or patient satisfaction
Reports involving ELT for VV (great or small saphenous veins)
Randomized controlled trials (RCTs), systematic reviews and meta-analyses
Cohort and controlled clinical studies involving > 1 month ultrasound imaging follow-up
Exclusion Criteria
Non-systematic reviews, letters, comments and editorials
Reports not involving outcome events such as safety, effectiveness, durability, or patient satisfaction following an intervention with ELT
Reports not involving interventions with ELT for VV
Pilot studies or studies with small samples (< 50 subjects)
Summary of Findings
The MAS evidence search identified 14 systematic reviews, 29 cohort studies on safety and effectiveness, four cost studies and 12 randomized controlled trials involving ELT, six of these comparing endovascular laser with surgical ligation and saphenous vein stripping.
Since 2007, 22 cohort studies involving 10,883 patients undergoing ELT of the great saphenous vein (GSV) have been published. Imaging-defined treatment effectiveness, measured as mean vein closure rates, was reported to be greater than 90% (range 93%–99%) at short-term follow-up. Follow-up longer than one year was reported in five studies, with life table analysis performed in four, but follow-up remained limited at three and four years. The overall pooled major adverse event rate, including DVT, PE, skin burns or nerve damage events extracted from these studies, was 0.63% (69/10,883).
The overall level of evidence of the randomized trials comparing ELT with surgical ligation and vein stripping (n = 6) was graded as moderate to high. Recovery after treatment was significantly quicker after ELT (median number of days to return to work, 4 vs. 17; p = .005). Major adverse events occurred more often after surgery (1.8% [n = 4] vs. 0.4% [n = 1]), but the difference was not statistically significant. Treatment effectiveness, as measured by imaging-confirmed vein absence or closure, symptom relief or quality of life, was similar in the two treatment groups, and both treatments resulted in statistically significant improvements in these outcomes. Recurrence at follow-up was low after both treatments, but neovascularization (growth of new vessels, a key predictor of long-term recurrence) was significantly more common after surgery (18% vs. 1%; p = .001). Although patient satisfaction was reported to be high (>80%) with both treatments, patient preferences evaluated through the recruitment process, physician reports and consumer groups were strongly in favour of ELT. For patients, minimal complications, quick recovery and dependability of outpatient scheduling were key considerations.
As the clinical effectiveness of the two treatments was similar, a cost analysis was performed to compare differences in resources and costs between the two procedures. A budget impact analysis for introducing ELT as an insured service was also performed. The average case cost (based on Ontario hospital costs and medical resources) for surgical vein stripping was estimated to be $1,799. Because of uncertainties about the resources associated with ELT, in addition to the device-related costs, hospital costs were varied and assumed to be either the same as, or 40% less than, those for surgery, resulting in an average ELT case cost of $2,025 or $1,602, respectively.
Based on the historical pattern of surgical vein stripping for varices, a 5-year projection of annual volumes and costs was made. In Ontario in 2007/2008, 3,481 surgical vein stripping procedures were performed, 28% of them repeat procedures. The annual volume of ELT currently performed in the province in over 20 private clinics was estimated to be approximately 840 cases. If ELT were publicly reimbursed, it was assumed that it would capture 35% of the vein stripping market in the first year, increasing to 55% in subsequent years. Under these assumptions, if ELT were not publicly reimbursed the province would pay approximately $5.9 million; if ELT were reimbursed, the province would pay $8.2 million if ELT hospital costs were the same as for surgery, or $7.1 million if they were 40% lower.
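The budget-impact reasoning above can be sketched as a simple calculation. The per-case costs, 2007/2008 volume, private-clinic volume, and market-share assumption are taken from the text, but the report's 5-year volume projections are not reproduced here, so the outputs only approximate its $5.9/$8.2/$7.1 million figures.

```python
# Rough budget-impact sketch using figures quoted above; volume projections are
# assumptions, so the totals only approximate the report's estimates.
surgery_cost = 1799          # average surgical stripping case cost
elt_cost_same = 2025         # ELT case cost if hospital costs equal surgery
elt_cost_lower = 1602        # ELT case cost if hospital costs are 40% lower

public_volume = 3481         # surgical stripping procedures, Ontario 2007/08
private_elt_volume = 840     # ELT cases currently done in private clinics
elt_share = 0.55             # assumed ELT share of the market after year 1

def annual_budget(elt_cost, reimburse_elt):
    if not reimburse_elt:
        return public_volume * surgery_cost
    # Private-clinic cases are assumed to shift to the public payer.
    total_volume = public_volume + private_elt_volume
    return total_volume * ((1 - elt_share) * surgery_cost + elt_share * elt_cost)

print(f"No ELT reimbursement:      ${annual_budget(elt_cost_same, False):,.0f}")
print(f"ELT at same hospital cost: ${annual_budget(elt_cost_same, True):,.0f}")
print(f"ELT at lower hospital cost:${annual_budget(elt_cost_lower, True):,.0f}")
```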
The conclusions on the comparative outcomes between laser ablation and surgical ligation and saphenous vein stripping are summarized in the table below (ES Table 1).
Outcome comparisons of ELT vs. surgery for VV
The outcomes of the evidence-based review on these treatments based on three different perspectives are summarized below:
Patient Outcomes – ELT vs. Surgery
ELT offers a quicker recovery, attributable to decreased pain, fewer minor complications, and the use of local anesthesia with immediate ambulation.
ELT is as effective as surgery in the short term as assessed by imaging anatomic outcomes, symptomatic relief and HRQOL outcomes.
Recurrence is similar but neovascularization, a key predictor of long term recurrence, is significantly higher with surgery.
Patient satisfaction is equally high after both treatments, but patient preference is strongly in favour of ELT. Surgeons performing ELT are satisfied with treatment outcomes and regularly offer ELT as a treatment alternative to surgery.
Clinical or Technical Advantages – ELT Over Surgery
An endovascular approach can more easily and more precisely treat multilevel disease and difficult-to-treat areas.
ELT is an effective and a less invasive treatment for the elderly with VV and those with venous leg ulcers.
System Outcomes – ELT Replacing Surgery
ELT may offer system advantages in that the treatment can be offered by several medical specialties in outpatient settings and because it does not require an operating theatre or general anesthesia.
The treatment may result in fewer pre-surgical investigations, decanting of patients from the operating room, decreased demand on anesthetists' time, shorter hospital stays, shorter wait times for VV treatment, and more reliable outpatient scheduling.
Depending on the reimbursement mechanism for the treatment, however, it may also result in the closure of outpatient clinics and increasing centralization of procedures in selected hospitals with large capital budgets, resulting in larger and longer waiting lists.
Procedure costs may be similar for the two treatments but the budget impact may be greater with insurance of ELT because of the transfer of the cases from the private market to the public payer system.
PMCID: PMC3377531  PMID: 23074409
19.  Formative evaluation of the telecare fall prevention project for older veterans 
Background
Fall prevention interventions for community-dwelling older adults have been found to reduce falls in some research studies. However, wider implementation of fall prevention activities in routine care has yielded mixed results. We implemented a theory-driven program to improve care for falls at our Veterans Affairs healthcare facility. The first project arising from this program used a nurse advice telephone line to identify patients' risk factors for falls and to triage patients to appropriate services. Here we report the formative evaluation of this project.
Methods
To evaluate the intervention, we (1) interviewed patient and employee stakeholders, (2) reviewed participating patients' electronic health record data, and (3) abstracted information from meeting minutes. We describe the implementation process, including whether the project was implemented according to plan; identify barriers and facilitators to implementation; and assess the incremental benefit to the quality of health care for fall prevention received by patients in the project. We also estimate the cost of developing the pilot project.
Results
The project underwent multiple changes over its life span, including the addition of an option to mail patients educational materials about falls. During the project's lifespan, 113 patients were considered for inclusion and 35 participated. Patient and employee interviews suggested support for the project, but revealed that transportation to medical care was a major barrier in following up on fall risks identified by nurse telephone triage. Medical record review showed that the project enhanced usual medical care with respect to home safety counseling. We discontinued the program after 18 months due to staffing limitations and competing priorities. We estimated a cost of $9194 for meeting time to develop the project.
Conclusions
The project appeared feasible at its outset but could not be sustained past the first cycle of evaluation due to insufficient resources and a waning of local leadership support due to competing national priorities. Future projects will need both front-level staff commitment and prolonged high-level leadership involvement to thrive.
doi:10.1186/1472-6963-11-119
PMCID: PMC3127979  PMID: 21605438
20.  Incident Learning and Failure-Mode-and-Effects-Analysis Guided Safety Initiatives in Radiation Medicine 
Frontiers in Oncology  2013;3:305.
By combining incident learning and process failure-mode-and-effects-analysis (FMEA) in a structure-process-outcome framework, we have created a risk profile for our radiation medicine practice and implemented evidence-based risk-mitigation initiatives focused on patient safety. Based on reactive reviews of incidents reported in our departmental incident-reporting system and on proactive FMEA, high safety-risk procedures in our paperless radiation medicine process and latent risk factors were identified. Six initiatives aimed at mitigating the associated severity, likelihood-of-occurrence, and detectability risks were implemented: standardization of care pathways and of toxicity grading, pre-treatment-planning peer review, a "no-fly" policy to prevent delayed cases from being rushed, an electronic whiteboard to enhance coordination, and the use of six sigma metrics to monitor operational efficiencies. The effectiveness of these initiatives over a 3-year period was assessed using process- and outcome-specific metrics within the framework of the department structure. There has been a 47% increase in incident reporting, with no increase in adverse events. Care pathways have been used with a greater than 97% clinical compliance rate. The implementation of peer review prior to treatment planning and the use of the whiteboard have provided opportunities for proactive detection and correction of errors. There has been a twofold drop in the occurrence of high-risk procedural delays. Patient treatment start delays are routinely enforced on cases that would historically have been rushed. Z-scores for high-risk procedures have steadily improved from 1.78 to 2.35. The initiatives resulted in sustained reductions of failure-mode risks, as measured by a set of evidence-based metrics, over a 3-year period. They augment or incorporate many of the published recommendations for patient safety in radiation medicine by translating them to clinical practice.
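For readers unfamiliar with FMEA scoring, the sketch below ranks hypothetical failure modes by the conventional risk priority number (severity × occurrence × detectability); the process steps and ratings are invented for illustration and are not the department's actual risk profile.

```python
# Illustrative FMEA ranking: risk priority number (RPN) = severity x occurrence x
# detectability, each rated on a 1-10 scale (higher = worse). All values are hypothetical.
failure_modes = [
    # (process step, severity, occurrence, detectability)
    ("treatment plan not peer reviewed", 8, 4, 6),
    ("wrong isocenter shift at setup",   9, 2, 3),
    ("planning rushed after a delay",    7, 5, 5),
]

ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for step, s, o, d in ranked:
    print(f"RPN {s * o * d:3d}  {step}")
```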
doi:10.3389/fonc.2013.00305
PMCID: PMC3863912  PMID: 24380074
incident learning; failure-mode-and-effects-analysis; root cause analysis; patient safety; no-fly-policy; six sigma; electronic whiteboard
21.  Safety measures to prevent workplace violence in emergency primary care centres–a cross-sectional study 
Background
Employees in emergency primary care centres (EPCCs) have raised personal safety as an issue. Despite a high risk of experiencing workplace violence at EPCCs in Norway, knowledge regarding the preventive measures applied is limited. Describing existing safety measures is an important prerequisite for evaluating, and making guidelines to improve, preventive practices at a national level. The objective of this study was to investigate to what extent general practitioners work alone in EPCCs in Norway, and to estimate the prevalence of other preventive measures against workplace violence.
Methods
A survey was sent to the managers of all 210 registered EPCCs in Norway. The questionnaire included 22 items on safety measures, including available staff, architecture and outfitting of the reception and consulting rooms, and the availability of electronic safety systems and training or monitoring systems. The data were analysed using descriptive statistics. Differences between EPCCs staffed by one general practitioner alone and EPCCs with more health personnel on duty were explored.
Results
Sixty-one (30%) of the 203 participating EPCCs had more than one person on duty round-the-clock. These EPCCs reported a significantly higher number of safety measures than EPCCs with only one general practitioner on duty during all or part of the 24 hours. Safety measures that were more common in highly staffed EPCCs included automatic door locks (p < 0.001), arrangement of furniture in the consulting room so that the patient is not seated between the clinician and the exit (p = 0.014), the possibility of bringing an extra person on emergency call-outs or home visits when needed for security reasons (p = 0.014), and organised training regarding violence (p < 0.001).
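The group comparisons reported above are of the kind shown in this hypothetical sketch, which applies a chi-square test to the prevalence of one safety measure in highly staffed versus single-GP EPCCs; the study reports only p-values, so the counts below are invented for illustration.

```python
# Hypothetical example of the comparisons above: prevalence of a safety measure
# (e.g., automatic door locks) in multi-staffed vs. single-GP EPCCs.
# Counts are invented; only the group sizes (61 and 142 of 203) come from the text.
from scipy.stats import chi2_contingency

#                 has measure, lacks measure
multi_staffed = [45, 16]    # 61 EPCCs with >1 person on duty round-the-clock
single_gp     = [55, 87]    # 142 EPCCs with one GP on duty at least part of the time

chi2, p, dof, expected = chi2_contingency([multi_staffed, single_gp])
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```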
Conclusion
This study shows considerable differences between Norwegian EPCCs regarding applied preventive measures, and a higher prevalence of such measures in EPCCs staffed with several health personnel around-the-clock. More research is needed to understand the reasons for, and the effects of, these differences.
doi:10.1186/1472-6963-13-384
PMCID: PMC3852061  PMID: 24090168
22.  SBAR improves communication and safety climate and decreases incident reports due to communication errors in an anaesthetic clinic: a prospective intervention study 
BMJ Open  2014;4(1):e004268.
Objectives
We aimed to examine staff members’ perceptions of communication within and between different professions, safety attitudes and psychological empowerment, prior to and after implementation of the communication tool Situation-Background-Assessment-Recommendation (SBAR) at an anaesthetic clinic. The aim was also to study whether there was any change in the proportion of incident reports caused by communication errors.
Design
A prospective intervention study with a comparison group, using pre- and post-assessments. Questionnaire data were collected from staff in an intervention group (n=100) and a comparison group (n=69) at the anaesthetic clinics of two hospitals before (2011) and after (2012) implementation of SBAR. The proportion of incident reports due to communication errors was calculated during a 1-year period before and after implementation.
Setting
Anaesthetic clinics at two hospitals in Sweden.
Participants
All licensed practical nurses, registered nurses and physicians working in the operating theatres, intensive care units and postanaesthesia care units at anaesthetic clinics in two hospitals were invited to participate.
Intervention
Implementation of SBAR in an anaesthetic clinic.
Primary and secondary outcomes
The primary outcomes were staff members' perceptions of communication within and between different professions, as well as their perceptions of safety attitudes. Secondary outcomes were psychological empowerment and incident reports due to communication errors.
Results
In the intervention group, there were statistically significant improvements in the factors ‘Between-group communication accuracy’ (p=0.039) and ‘Safety climate’ (p=0.011). The proportion of incident reports due to communication errors decreased significantly (p<0.0001) in the intervention group, from 31% to 11%.
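A minimal sketch of the pre/post comparison of communication-error incident reports (31% versus 11%) is shown below; the denominators are not given in this summary, so the totals are assumptions chosen only to reproduce the quoted percentages.

```python
# Illustrative two-proportion test for the change in communication-error incident
# reports (31% -> 11%). Denominators are assumed for demonstration only.
from statsmodels.stats.proportion import proportions_ztest

comm_error_reports = [62, 22]    # assumed counts before / after implementation
total_reports      = [200, 200]  # assumed total incident reports per 1-year period

stat, p = proportions_ztest(comm_error_reports, total_reports)
print(f"z = {stat:.2f}, p = {p:.4g}")
```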
Conclusions
Implementing the communication tool SBAR in anaesthetic clinics was associated with improvement in staff members’ perception of communication between professionals and their perception of the safety climate as well as with a decreased proportion of incident reports related to communication errors.
Trial registration
ISRCTN37251313.
doi:10.1136/bmjopen-2013-004268
PMCID: PMC3902348  PMID: 24448849
23.  Patterns of Obesity Development before the Diagnosis of Type 2 Diabetes: The Whitehall II Cohort Study 
PLoS Medicine  2014;11(2):e1001602.
Examining patterns of change in body mass index (BMI) and other cardiometabolic risk factors in individuals during the years before they were diagnosed with diabetes, Kristine Færch and colleagues report that few of them experienced dramatic BMI changes.
Please see later in the article for the Editors' Summary
Background
Patients with type 2 diabetes vary greatly with respect to degree of obesity at time of diagnosis. To address the heterogeneity of type 2 diabetes, we characterised patterns of change in body mass index (BMI) and other cardiometabolic risk factors before type 2 diabetes diagnosis.
Methods and Findings
We studied 6,705 participants from the Whitehall II study, an observational prospective cohort study of civil servants based in London. White men and women, initially free of diabetes, were followed with 5-yearly clinical examinations from 1991 to 2009 for a median of 14.1 years (interquartile range [IQR]: 8.7–16.2 years). Type 2 diabetes developed in 645 participants (1,209 person-examinations) and 6,060 remained free of diabetes during follow-up (14,060 person-examinations). Latent class trajectory analysis of incident diabetes cases was used to identify patterns of pre-disease BMI. Associated trajectories of cardiometabolic risk factors were studied using adjusted mixed-effects models. Three patterns of BMI changes were identified. Most participants belonged to the "stable overweight" group (n = 604, 94%) with a relatively constant BMI level within the overweight category throughout follow-up. They experienced slight worsening of beta cell function and insulin sensitivity from 5 years prior to diagnosis. A small group of "progressive weight gainers" (n = 15) exhibited a pattern of consistent weight gain before diagnosis. Linear increases in blood pressure and an exponential increase in insulin resistance a few years before diagnosis accompanied the weight gain. The "persistently obese" (n = 26) were severely obese throughout the whole 18 years before diabetes diagnosis. They experienced an initial beta cell compensation followed by loss of beta cell function, whereas insulin sensitivity was relatively stable. Since the generalizability of these findings is limited, the results need confirmation in other study populations.
Conclusions
Three patterns of obesity changes prior to diabetes diagnosis were accompanied by distinct trajectories of insulin resistance and other cardiometabolic risk factors in a white, British population. While these results should be verified independently, the great majority of patients had modest weight gain prior to diagnosis. These results suggest that strategies focusing on small weight reductions for the entire population may be more beneficial than predominantly focusing on weight loss for high-risk individuals.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Worldwide, more than 350 million people have diabetes, a metabolic disorder characterized by high amounts of glucose (sugar) in the blood. Blood sugar levels are normally controlled by insulin, a hormone released by the pancreas after meals (digestion of food produces glucose). In people with type 2 diabetes (the commonest form of diabetes) blood sugar control fails because the fat and muscle cells that normally respond to insulin by removing sugar from the blood become insulin resistant. Type 2 diabetes, which was previously called adult-onset diabetes, can be controlled with diet and exercise, and with drugs that help the pancreas make more insulin or that make cells more sensitive to insulin. Long-term complications, which include an increased risk of heart disease and stroke, reduce the life expectancy of people with diabetes by about 10 years compared to people without diabetes. The number of people with diabetes is expected to increase dramatically over the next decades, coinciding with rising obesity rates in many countries. To better understand diabetes development, to identify people at risk, and to find ways to prevent the disease are urgent public health goals.
Why Was This Study Done?
It is known that people who are overweight or obese have a higher risk of developing diabetes. Because of this association, a common assumption is that people who experienced recent weight gain are more likely to be diagnosed with diabetes. In this prospective cohort study (an investigation that records the baseline characteristics of a group of people and then follows them to see who develops specific conditions), the researchers tested the hypothesis that substantial weight gain precedes a diagnosis of diabetes and explored more generally the patterns of body weight and composition in the years before people develop diabetes. They then examined whether changes in body weight corresponded with changes in other risk factors for diabetes (such as insulin resistance), lipid profiles and blood pressure.
What Did the Researchers Do and Find?
The researchers studied participants from the Whitehall II study, a prospective cohort study initiated in 1985 to investigate the socioeconomic inequalities in disease. Whitehall II enrolled more than 10,000 London-based government employees. Participants underwent regular health checks during which their weight and height were measured, blood tests were done, and they filled out questionnaires for other relevant information. From 1991 onwards, participants were tested every five years for diabetes. The 6,705 participants included in this study were initially free of diabetes, and most of them were followed for at least 14 years. During the follow-up, 645 participants developed diabetes, while 6,060 remained free of the disease.
The researchers used a statistical tool called “latent class trajectory analysis” to study patterns of changes in body mass index (BMI) in the years before people developed diabetes. BMI is a measure of human obesity based on a person's weight and height. Latent class trajectory analysis is an unbiased way to subdivide a number of people into groups that differ based on specified parameters. In this case, the researchers wanted to identify several groups among all the people who eventually developed diabetes each with a distinct pattern of BMI development. Having identified such groups, they also examined how a variety of tests associated with diabetes risk, and risks for heart disease and stroke changed in the identified groups over time.
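The grouping idea behind latent class trajectory analysis can be illustrated with a much simpler stand-in: clustering per-person BMI series with k-means on synthetic data. This is not the mixture-model method the study used, only a sketch of grouping similar pre-diagnosis trajectories.

```python
# Simplified stand-in for trajectory grouping: k-means on synthetic per-person BMI
# series (BMI = weight in kg / height in m squared, e.g. 85 / 1.75**2 ~= 27.8).
# The study itself used latent class (mixture-model) trajectory analysis.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
years = np.arange(-15, 0, 5)                       # exams 15, 10 and 5 years pre-diagnosis

stable  = 27 + rng.normal(0, 0.5, (50, len(years)))                 # stable overweight
gainers = 26 + 0.4 * (years + 15) + rng.normal(0, 0.5, (5, len(years)))  # steady gainers
obese   = 36 + rng.normal(0, 0.5, (5, len(years)))                  # persistently obese
bmi = np.vstack([stable, gainers, obese])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(bmi)
print(np.bincount(labels))                         # sizes of the recovered groups
```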
They identified three different patterns of BMI changes in the 645 participants who developed diabetes. The vast majority (604 individuals, or 94%) belonged to a group they called "stable overweight." These people showed no dramatic change in their BMI in the years before they were diagnosed. They were overweight when they first entered the study and gained or lost little weight during the follow-up years. They showed only minor signs of insulin resistance, starting five years before they developed diabetes. A second, much smaller group of 15 people gained weight consistently in the years before diagnosis. As they were gaining weight, these people also experienced rises in blood pressure and substantial gains in insulin resistance. The 26 remaining participants, who formed the third group, were persistently obese for the entire time they participated in the study, in some cases up to 18 years before they were diagnosed with diabetes. They had some signs of insulin resistance in the years before diagnosis, but not the substantial increase often seen as the hallmark of "pre-diabetes."
What Do These Findings Mean?
These results suggest that diabetes development is a complicated process, and one that differs between individuals who end up with the disease. They call into question the common notion that most people who develop diabetes have recently gained a lot of weight or are obese. A substantial rise in insulin resistance, another established risk factor for diabetes, was only seen in the smallest of the groups, namely the people who gained weight consistently for years before they were diagnosed. When the scientists applied a commonly used predictor of diabetes called the “Framingham diabetes risk score” to their largest “stably overweight” group, they found that these people were not classified as having a particularly high risk, and that their risk scores actually declined in the last five years before their diabetes diagnosis. This suggests that predicting diabetes in this group might be difficult.
The researchers applied their methodology only to this one cohort of white civil servants in England. Before drawing more firm conclusions on the process of diabetes development, it will be important to test whether similar results are seen in other cohorts and among more diverse individuals. If the three groups identified here are found in other cohorts, another question is whether they are as unequal in size as in this example. And if they are, can the large group of stably overweight people be further subdivided in ways that suggest specific mechanisms of disease development? Even without knowing how generalizable the provocative findings of this study are, they should stimulate debate on how to identify people at risk for diabetes and how to prevent the disease or delay its onset.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001602.
The US National Diabetes Information Clearinghouse provides information about diabetes for patients, health-care professionals, and the general public, including information on diabetes prevention (in English and Spanish)
The UK National Health Service Choices website provides information for patients and carers about type 2 diabetes; it includes people's stories about diabetes
The charity Diabetes UK also provides detailed information about diabetes for patients and carers, including information on healthy lifestyles for people with diabetes, and has a further selection of stories from people with diabetes; the charity Healthtalkonline has interviews with people about their experiences of diabetes
MedlinePlus provides links to further resources and advice about diabetes (in English and Spanish)
More information about the Whitehall II study is available
doi:10.1371/journal.pmed.1001602
PMCID: PMC3921118  PMID: 24523667
24.  Primary Prevention of Gestational Diabetes Mellitus and Large-for-Gestational-Age Newborns by Lifestyle Counseling: A Cluster-Randomized Controlled Trial 
PLoS Medicine  2011;8(5):e1001036.
In a cluster-randomized trial, Riitta Luoto and colleagues find that counseling on diet and activity can reduce the birthweight of babies born to women at risk of developing gestational diabetes mellitus (GDM), but fail to find an effect on GDM.
Background
Our objective was to examine whether gestational diabetes mellitus (GDM) or newborns' high birthweight can be prevented by lifestyle counseling in pregnant women at high risk of GDM.
Method and Findings
We conducted a cluster-randomized trial, the NELLI study, in 14 municipalities in Finland, where 2,271 women were screened by oral glucose tolerance test (OGTT) at 8–12 wk gestation. Euglycemic women (n = 399) with at least one GDM risk factor (body mass index [BMI] ≥25 kg/m², glucose intolerance or newborn's macrosomia (≥4,500 g) in any earlier pregnancy, family history of diabetes, age ≥40 y) were included. The intervention included individual intensified counseling on physical activity, diet, and weight gain at five antenatal visits. Primary outcomes were incidence of GDM as assessed by OGTT (maternal outcome) and newborns' birthweight adjusted for gestational age (neonatal outcome). Secondary outcomes were maternal weight gain and the need for insulin treatment during pregnancy. Adherence to the intervention was evaluated on the basis of changes in physical activity (weekly metabolic equivalent task [MET] minutes) and diet (intake of total fat, saturated and polyunsaturated fatty acids, saccharose, and fiber). Multilevel analyses took into account cluster, maternity clinic, and nurse level influences in addition to age, education, parity, and prepregnancy BMI. 15.8% (34/216) of women in the intervention group and 12.4% (22/179) in the usual care group developed GDM (absolute effect size 1.36, 95% confidence interval [CI] 0.71–2.62, p = 0.36). Neonatal birthweight was lower in the intervention than in the usual care group (absolute effect size −133 g, 95% CI −231 to −35, p = 0.008), as was the proportion of large-for-gestational-age (LGA) newborns (26/216, 12.1% versus 34/179, 19.7%, p = 0.042). Women in the intervention group increased their intake of dietary fiber (adjusted coefficient 1.83, 95% CI 0.30–3.25, p = 0.023) and polyunsaturated fatty acids (adjusted coefficient 0.37, 95% CI 0.16–0.57, p < 0.001), decreased their intake of saturated fatty acids (adjusted coefficient −0.63, 95% CI −1.12 to −0.15, p = 0.01) and saccharose (adjusted coefficient −0.83, 95% CI −1.55 to −0.11, p = 0.023), and had a tendency to a smaller decrease in MET minutes/week for at least moderate intensity activity (adjusted coefficient 91, 95% CI −37 to 219, p = 0.17) than women in the usual care group. In subgroup analysis, adherent women in the intervention group (n = 55/229) had decreased risk of GDM (27.3% versus 33.0%, p = 0.43) and LGA newborns (7.3% versus 19.5%, p = 0.03) compared to women in the usual care group.
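Weekly MET minutes, the physical-activity adherence measure mentioned above, are conventionally computed as each activity's MET value multiplied by the minutes spent on it; the activities and MET values below are illustrative rather than the NELLI study's exact coding.

```python
# Illustrative computation of weekly MET minutes (MET value x minutes per week).
# Activities and MET values are examples, not the study's coding scheme.
activities = [
    # (activity, MET value, minutes per week)
    ("brisk walking", 4.0, 150),
    ("swimming",      6.0,  60),
    ("cycling",       5.5,  90),
]

met_minutes = sum(met * minutes for _, met, minutes in activities)
print(f"{met_minutes:.0f} MET min/week")  # 150*4 + 60*6 + 90*5.5 = 1455
```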
Conclusions
The intervention was effective in controlling birthweight of the newborns, but failed to have an effect on maternal GDM.
Trial registration
Current Controlled Trials ISRCTN33885819
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Gestational diabetes mellitus (GDM) is diabetes that is first diagnosed during pregnancy. Like other types of diabetes, it is characterized by high levels of sugar (glucose) in the blood. Blood-sugar levels are normally controlled by insulin, a hormone that the pancreas releases when blood-sugar levels rise after meals. Hormonal changes during pregnancy and the baby's growth demands increase a pregnant woman's insulin needs and, if her pancreas cannot make enough insulin, GDM develops. Risk factors for GDM, which occurs in 2%–14% of pregnant women, include a high body-mass index (a measure of body fat), excessive weight gain or low physical activity during pregnancy, high dietary intake of polyunsaturated fats, glucose intolerance (an indicator of diabetes) or the birth of a large baby in a previous pregnancy, and a family history of diabetes. GDM is associated with an increased rate of cesarean sections, induced deliveries, birth complications, and large-for-gestational-age (LGA) babies (gestation is the time during which the baby develops within the mother). GDM, which can often be controlled by diet and exercise, usually disappears after pregnancy but increases a woman's subsequent risk of developing diabetes.
Why Was This Study Done?
Although lifestyle changes can be used to control GDM, it is not known whether similar changes can prevent GDM developing (“primary prevention”). In this cluster-randomized controlled trial, the researchers investigate whether individual intensified counseling on physical activity, diet, and weight gain integrated into routine maternity care visits can prevent the development of GDM and the occurrence of LGA babies among newborns. In a cluster-randomized controlled trial, groups of patients rather than individual patients are randomly assigned to receive alternative interventions, and the outcomes in different “clusters” are compared. In this trial, each cluster is a municipality in the Pirkanmaa region of Finland.
What Did the Researchers Do and Find?
The researchers enrolled 399 women, each of whom had a normal blood glucose level at 8–12 weeks gestation but at least one risk factor for GDM. Women in the intervention municipalities received intensified counseling on physical activity at 8–12 weeks' gestation, dietary counseling at 16–18 weeks' gestation, and further physical activity and dietary counseling at each subsequent antenatal visit. Women in the control municipalities received some dietary but little physical activity counseling as part of their usual care. 23.3% and 20.2% of women in the intervention and usual care groups, respectively, developed GDM, a difference that was not statistically significant (that is, one that could have occurred by chance). However, the average birthweight and the proportion of LGA babies were both significantly lower in the intervention group than in the usual care group. Food frequency questionnaires completed by the women indicated that, on average, those in the intervention group increased their intake of dietary fiber and polyunsaturated fatty acids and decreased their intake of saturated fatty acids and sucrose as instructed during counseling. The amount of moderate physical activity also tended to decrease less as pregnancy proceeded in the intervention group than in the usual care group. Finally, compared to the usual care group, significantly fewer of the 24% of women in the intervention group who actually met dietary and physical activity targets ("adherent" women) developed GDM.
What Do These Findings Mean?
These findings indicate that intensified counseling on diet and physical activity is effective in controlling the birthweight of babies born to women at risk of developing GDM and encourages at least some of them to alter their lifestyle. However, the findings fail to show that the intervention reduces the risk of GDM because of the limited power of the study. The power of a study—the probability that it will achieve a statistically significant result—depends on the study's size and on the likely effect size of the intervention. Before starting this study, the researchers calculated that they would need 420 participants to see a statistically significant difference between the groups if their intervention reduced GDM incidence by 40%. This estimated effect size was probably optimistic and therefore the study lacked power. Nevertheless, the analyses performed among adherent women suggest that lifestyle changes might be a way to prevent GDM and so larger studies should now be undertaken to test this potential primary prevention intervention.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001036.
The US National Institute of Diabetes and Digestive and Kidney Diseases provides information for patients on diabetes and on gestational diabetes (in English and Spanish)
The UK National Health Service Choices website also provides information for patients on diabetes and on gestational diabetes, including links to other useful resources
The MedlinePlus Encyclopedia has pages on diabetes and on gestational diabetes; MedlinePlus provides links to additional resources on diabetes and on gestational diabetes (in English and Spanish)
More information on this trial of primary prevention of GDM is available
doi:10.1371/journal.pmed.1001036
PMCID: PMC3096610  PMID: 21610860
25.  Patient satisfaction in an acute medicine department in Morocco 
Background
Patient satisfaction is an important indicator of quality of care. Measuring healthcare quality and improving patient satisfaction have become increasingly prevalent, especially among healthcare providers and purchasers of healthcare, largely because consumers are becoming increasingly knowledgeable about healthcare. No studies of inpatients' satisfaction with hospital care have been conducted in Morocco. The first objective of the present study was to confirm the reliability and validity of the Arabic version of the EQS-H (Echelle de Qualité des Soins en Hospitalisation). The second objective was to evaluate patient satisfaction in an acute medicine department in Morocco using the EQS-H questionnaire, and to assess the influence of certain demographic, socioeconomic, and health characteristics on patient satisfaction.
Methods
This was a patient survey conducted in an acute medicine department of a Moroccan university hospital. We recorded patients' sociodemographic status and health characteristics at admission and performed structured face-to-face interviews with patients who were discharged from hospital. The core of the EQS-H questionnaire was translated into Arabic, adapted to the present setting, and then used to measure patient satisfaction with quality of care. The internal consistency of the EQS-H scale was assessed by Cronbach's coefficient alpha. Validity was assessed by factor analysis. Factors influencing inpatients' satisfaction were identified using multiple linear regression.
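Cronbach's alpha, used above to assess internal consistency, can be computed from the item and total-score variances; the sketch below uses synthetic questionnaire data purely for illustration.

```python
# Minimal sketch of Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance(total)).
# Rows are respondents, columns are questionnaire items; the data are synthetic.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

scores = np.array([[4, 5, 4, 5],
                   [2, 2, 3, 2],
                   [5, 4, 5, 5],
                   [3, 3, 2, 3],
                   [4, 4, 4, 5]])
print(round(cronbach_alpha(scores), 3))
```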
Results
The Arabic version of the EQS-H demonstrated excellent internal consistency for the two dimensions studied (0.889 for 'quality of medical information' (MI) and 0.906 for 'relationship with staff and daily routine' (RS)). Principal component analysis confirmed the bidimensional structure of the questionnaire and explained 60% of the total variance. In the univariate analysis, urban residence, higher income, better perceived health status compared to admission, better perceived health status compared to people of the same age, and satisfaction with life in general were related to the MI dimension, while male gender, urban residence, higher income, staying in a double room, better perceived health status compared to admission, and satisfaction with life in general were related to the RS dimension. Multiple linear regression showed that four independent variables were associated with higher satisfaction on the MI dimension: more than two prior hospitalizations, a longer length of stay (10–14 days) (P = 0.002), staying in a double room (P = 0.022), and better perceived health status compared to admission (P = 0.036). Three independent variables were associated with higher satisfaction on the RS dimension: a longer length of stay (10–14 days) (P = 0.017), better perceived health status compared to admission day (P = 0.013), and satisfaction with life in general (P = 0.006).
Conclusions
Our data assessing patient satisfaction with acute health care using the Arabic version of the EQS-H showed that satisfaction was average on the MI dimension and good on the RS dimension of the questionnaire. The majority of participants were satisfied with the overall care. Demographic, socioeconomic, and health characteristics may influence inpatient satisfaction in Morocco, a low/middle income country. An appreciation and understanding of these factors is essential for developing socioculturally appropriate interventions to improve patient satisfaction.
doi:10.1186/1472-6963-10-149
PMCID: PMC2900260  PMID: 20525170
