1.  Depersonalised doctors: a cross-sectional study of 564 doctors, 760 consultations and 1876 patient reports in UK general practice 
BMJ Open  2012;2(1):e000274.
Objectives
The objectives of this study were to assess burnout in a sample of general practitioners (GPs), to determine factors associated with depersonalisation and to investigate its impact on doctors' consultations with patients.
Design
Cross-sectional, postal survey of GPs using the Maslach Burnout Inventory (MBI). Patient survey and tape-recording of consultations for a subsample of respondents stratified by their MBI scores, gender and duration of General Medical Council registration.
Setting
UK general practice.
Participants
GPs within NHS Essex.
Primary and secondary outcome measures
Scores on MBI subscales (depersonalisation, emotional exhaustion, personal accomplishment); scores on Doctors' Interpersonal Skills Questionnaire and patient-centredness scores attributed to tape-recorded consultations by independent observers.
Results
In the postal survey, 564/789 (71%) GPs completed the MBI. High levels of emotional exhaustion (261/564 doctors, 46%) and depersonalisation (237 doctors, 42%) and low levels of personal accomplishment (190 doctors, 34%) were reported. Depersonalisation scores were related to characteristics of the doctor and the practice. Male doctors reported significantly higher (p<0.001) depersonalisation than female doctors. Doctors registered with the General Medical Council for less than 20 years had significantly higher (p=0.005) depersonalisation scores than those registered for longer. Doctors in group practices had significantly higher (p=0.001) depersonalisation scores than single-handed practitioners. Thirty-eight doctors agreed to complete the patient survey (n=1876 patients) and audio-record consultations (n=760 consultations). Depersonalised doctors were significantly more likely (p=0.03) to consult with patients who reported seeing their ‘usual doctor’. There were no significant associations between doctors' depersonalisation and their patient-rated interpersonal skills or observed patient-centredness.
Conclusions
This is the largest sample of doctors to have completed the MBI, and the levels of depersonalisation reported are the highest to date. Despite experiencing substantial depersonalisation, doctors' feelings of burnout were not detected by patients or independent observers. Such levels of burnout are, however, worrying and imply a need for action by doctors themselves, their medical colleagues, professional bodies, healthcare organisations and the Department of Health.
Article summary
Article focus
A cross-sectional survey was designed to assess levels of burnout in a census sample of GPs in Essex, UK, and to determine which doctor- or practice-related variables predicted higher levels of burnout.
In the substudy, patients rated the interpersonal skills of their doctor and independent observers assessed the degree of patient-centredness in a sample of the doctors' audio-taped consultations.
Key messages
High levels of burnout were reported in the census survey—46% of doctors reported emotional exhaustion, 42% reported depersonalisation and 34% reported low levels of personal accomplishment.
Doctors' depersonalisation scores could be predicted by a range of variables relating to the individual doctor and their practice, but higher depersonalisation scores were not associated with poorer patient ratings of the doctors' interpersonal skills or a reduction in the patient-centredness of their consultations.
While feelings of burnout did not affect the professional practice or the patient-centredness of the consultations of the GPs in this study, there is a need to offer help and support to doctors who are experiencing burnout.
Strengths and limitations of this study
A high response rate (71%) was achieved in the census sample of GPs completing the MBI and a subsample of 38 doctors who satisfied the predetermined sample stratification consented to further assessment (patient survey and audio-taping of consultations).
The study was, however, limited to one county in the UK, so its findings cannot be extrapolated to other parts of the UK.
There was a differential response rate by the gender of the participant. Male doctors who were registered with the General Medical Council for >20 years were less likely to respond to the survey than their female counterparts.
doi:10.1136/bmjopen-2011-000274
PMCID: PMC3274717  PMID: 22300669
2.  Task Shifting for Scale-up of HIV Care: Evaluation of Nurse-Centered Antiretroviral Treatment at Rural Health Centers in Rwanda 
PLoS Medicine  2009;6(10):e1000163.
Fabienne Shumbusho and colleagues evaluate a task-shifting model of nurse-centered antiretroviral treatment prescribing in rural primary health centers in Rwanda and find that nurses can effectively and safely prescribe ART when given adequate training, mentoring, and support.
Background
The shortage of human resources for health, and in particular physicians, is one of the major barriers to achieve universal access to HIV care and treatment. In September 2005, a pilot program of nurse-centered antiretroviral treatment (ART) prescription was launched in three rural primary health centers in Rwanda. We retrospectively evaluated the feasibility and effectiveness of this task-shifting model using descriptive data.
Methods and Findings
Medical records of 1,076 patients enrolled in HIV care and treatment services from September 2005 to March 2008 were reviewed to assess: (i) compliance with national guidelines for ART eligibility and prescription, and patient monitoring and (ii) key outcomes, such as retention, body weight, and CD4 cell count change at 6, 12, 18, and 24 mo after ART initiation. Of these, no ineligible patients were started on ART and only one patient received an inappropriate ART prescription. Of the 435 patients who initiated ART, the vast majority had adherence and side effects assessed at each clinic visit (89% and 84%, respectively). By March 2008, 390 (90%) patients were alive on ART, 29 (7%) had died, one (<1%) was lost to follow-up, and none had stopped treatment. Patient retention was about 92% by 12 mo and 91% by 24 mo. Depending on initial stage of disease, mean CD4 cell count increased between 97 and 128 cells/µl in the first 6 mo after treatment initiation and between 79 and 129 cells/µl from 6 to 24 mo of treatment. Mean weight increased significantly in the first 6 mo, between 1.8 and 4.3 kg, with no significant increases from 6 to 24 mo.
Conclusions
Patient outcomes in our pilot program compared favorably with other ART cohorts in sub-Saharan Africa and with those from a recent evaluation of the national ART program in Rwanda. These findings suggest that nurses can effectively and safely prescribe ART when given adequate training, mentoring, and support.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Infection with the human immunodeficiency virus (HIV) is a serious health problem in sub-Saharan Africa. The virus attacks white blood cells that protect against infection, most commonly a type of white blood cell called CD4. When a person has been infected with HIV for a long time, the number of CD4 cells they have goes down, resulting in acquired immune deficiency syndrome (AIDS), in which the person's immune system no longer functions effectively.
The World Health Organization (WHO) has divided the disease into four stages as it progresses, according to symptoms including weight loss and so-called opportunistic infections. These are known as clinical stage I, II, III, or IV but were revised and renamed 1, 2, 3, and 4 in September 2005. HIV infection and AIDS cannot be cured but they can be managed with antiretroviral treatment (ART). The WHO currently recommends that ART be started when the CD4 count falls below 350.
Rwanda is a country in central Africa with a population of around 9 million; over 3% of the rural population and 7% of the urban population are infected with HIV. In 2007, the WHO estimated that 220,000 Rwandan children had lost one or both parents to AIDS.
Why Was This Study Done?
The WHO estimates that 9.7 million people with HIV in low- to middle-income countries need ART, but at the end of 2007 only 30% of them, including those in Rwanda, had access to treatment. In many low-income countries a major factor in this is a lack of doctors. Rwanda, for example, has one doctor per 50,000 inhabitants and one nurse per 3,900 inhabitants.
This situation has led the WHO to recommend “task shifting,” i.e., that the task of prescribing ART should be shifted from doctors to nurses so that more patients can be treated. This type of reorganization is well studied in high-income countries, but the researchers wanted to help develop a system for treating AIDS that would be effective and timely in a predominantly rural, low-income setting such as Rwanda.
What Did the Researchers Do and Find?
In conjunction with the Rwandan Ministry of Health, the researchers developed and piloted a task-shifting program, in which one nurse in each of three rural Rwandan primary health centers (PHCs) was trained to examine HIV patients and prescribe ART in simple cases. Nurses had to complete more than 50 consultations observed by the doctor before being permitted to consult patients independently. More complex cases were referred to a doctor. The authors developed standard checklists, instructions, and evaluation forms to guide nurses and the doctors who supervised them once a week.
The authors evaluated the pilot program by reviewing the records of 1,076 patients who enrolled on it between September 2005 and March 2008. They looked to see whether the nurses had followed guidelines and monitored the patients correctly. They also considered health outcomes for the patients, such as their death rate, their body weight, their CD4 cell count, and whether they maintained contact with caregivers.
They found that by March 2008, 451 patients had been eligible for ART; 435 of them received treatment, and no patient was started on ART who should not have been. Only one prescription did not follow national guidelines.
At every visit, nurses were supposed to assess whether patients were taking their drugs and to monitor side effects. They did this and maintained records correctly for the vast majority of the 435 patients who were prescribed ART. Of these 435 patients, 390 (over 90%) continued to take ART and maintained contact with the pilot PHC program. 29 patients died, only one was lost to follow-up, and the remainder transferred to another ART site. The majority gained weight in the first six months and their CD4 cell counts rose. Outcomes, including death rate, were similar to those of patients treated in the (doctor-led) Rwandan national ART program and in other sub-Saharan African national (doctor-led) programs.
What Do These Findings Mean?
The study suggests that nurses are able to prescribe ART safely and effectively in a rural sub-Saharan setting, given sufficient training, mentoring, and support. Nurse-led prescribing of ART could mean that timely, appropriate treatment reaches many more HIV patients. It would reduce the burden of HIV care for doctors, freeing their time for other duties, and the study is already being used by the Rwandan Ministry of Health as a basis for plans to adopt a task-shifting strategy for the national ART program.
The study does have some limitations. The pilot program was funded and designed as a health project to deliver ART in rural areas, rather than as a research project to compare nurse-led and doctor-led ART programs. There was no group of equivalent patients treated by doctors rather than nurses for direct comparison, although the authors did compare outcomes with those achieved nationally for doctor-led ART. The most promising sites, nurses, and patients were selected for the pilot, and the careful monitoring may have been an additional motivation for the participating nurses and doctors; health professionals in a scaled-up program may not be as committed. In addition, the nature of the pilot, which lasted for under three years and recruited new patients throughout, meant that patients were followed up for relatively short periods.
The authors also warn that they did not consider in this study the changes task shifting will make to doctors' roles and the skills required of both doctors and nurses. They recommend that task shifting should be implemented as part of a wider investment in health systems, human resources, training, adapted medical records, tools, and protocols.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000163.
PLoS Medicine includes a page collecting together its recent articles on HIV infection and AIDS that includes research articles, perspectives, editorials, and policy forums
SciDev.net provides news, views, and information about science, technology, and the developing world, including a section specific to HIV/AIDS
The World Health Organization (WHO) has published a downloadable booklet Task Shifting to Tackle Health Worker Shortages
The WHO offers information on HIV and AIDS (in Arabic, Chinese, English, French, Russian, and Spanish) as well as health information and fact sheets on individual countries, including on Rwanda
The UNAIDS/WHO working group on HIV/AIDS and Sexually Transmitted Infections (STI) Surveillance gathers and publishes data on the prevalence of HIV and AIDS in individual countries, including on Rwanda
AIDS.ORG provides information to help prevent HIV infections and to improve the lives of those affected by HIV and AIDS. Factsheets on many aspects of HIV and AIDS are available. It is the official online publisher of AIDS Treatment News
doi:10.1371/journal.pmed.1000163
PMCID: PMC2752160  PMID: 19823569
3.  Balloon Kyphoplasty 
Executive Summary
Objective
To review the evidence on the effectiveness and cost-effectiveness of balloon kyphoplasty for the treatment of vertebral compression fractures (VCFs).
Clinical Need
Vertebral compression fractures are one of the most common types of osteoporotic fractures. They can lead to chronic pain and spinal deformity. They are caused when the vertebral body (the thick block of bone at the front of each vertebra) is too weak to support the loads of activities of daily living. Spinal deformity due to a collapsed vertebral body can substantially affect the quality of life of elderly people, who are especially at risk for osteoporotic fractures due to decreasing bone mass with age. A population-based study across 12 European centres recently found that VCFs have a negative impact on health-related quality of life. Complications associated with VCFs are pulmonary dysfunction, eating disorders, loss of independence, and mental status change due to pain and the use of medications. Osteoporotic VCFs also are associated with a higher rate of death.
VCFs affect an estimated 25% of women over age 50 years and 40% of women over age 80 years. Only about 30% of these fractures are diagnosed in clinical practice. A Canadian multicentre osteoporosis study reported on the prevalence of vertebral deformity in Canada in people over 50 years of age. To define the limit of normality, they plotted a normal distribution, including mean and standard deviations (SDs) derived from a reference population without any deformity. They reported a prevalence rate of 23.5% in women and a rate of 21.5% in men, using 3 SDs from the mean as the limit of normality. When they used 4 SDs, the prevalence was 9.3% and 7.3%, respectively. They also found the prevalence of vertebral deformity increased with age. For people older than 80 years of age, the prevalence for women and men was 45% and 36%, respectively, using 3 SDs as the limit of normality.
About 85% of VCFs are due to primary osteoporosis. Secondary osteoporosis and neoplasms account for the remaining 15%. A VCF is operationally defined as a reduction in vertebral body height of at least 20% from the initial measurement. It is considered mild if the reduction in height is between 20% and 25%; moderate, if it is between 25% and 40%; and severe, if it is more than 40%. The most frequently fractured locations are the lower third of the thoracic spine and the upper lumbar levels. The cervical vertebrae and the upper third of the thoracic spine are rarely involved.
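As a rough illustration of how the operational definition above might be applied, the following sketch (Python, not part of the source report) grades a fracture from the measured loss of vertebral body height; the function name and the handling of the 25% and 40% boundaries are illustrative assumptions.

# Illustrative sketch of the VCF grading described above; the thresholds come from the text,
# everything else (names, boundary handling) is assumed for the example.
def classify_vcf(initial_height_mm, current_height_mm):
    reduction = (initial_height_mm - current_height_mm) / initial_height_mm
    if reduction < 0.20:
        return "below the 20% threshold for a VCF"
    if reduction <= 0.25:
        return "mild"
    if reduction <= 0.40:
        return "moderate"
    return "severe"

# Example: a vertebral body collapsing from 25 mm to 17 mm is a 32% reduction, i.e. moderate.
print(classify_vcf(25.0, 17.0))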
Traditionally, bed rest, medication, and bracing are used to treat painful VCFs. However, anti-inflammatory and narcotic medications are often poorly tolerated by the elderly and may harm the gastrointestinal tract. Bed rest and inactivity may accelerate bone loss, and bracing may restrict diaphragmatic movement. Furthermore, medical treatment does not treat the fracture in a way that ameliorates the pain and spinal deformity.
Over the past decade, the injection of bone cement through the skin into a fractured vertebral body has been used to treat VCFs. The goal of cement injection is to reduce pain by stabilizing the fracture. The secondary indication of these procedures is management of painful vertebral fractures caused by benign or malignant neoplasms (e.g., hemangioma, multiple myeloma, and metastatic cancer).
The Technology
Balloon kyphoplasty is a modified vertebroplasty technique. It is a minimally invasive procedure that aims to relieve pain, restore vertebral height, and correct kyphosis. During this procedure, an inflatable bone tamp is inserted into the collapsed vertebral body. Once inflated, the balloon elevates the end plates and thereby restores the height of the vertebral body. The balloon is deflated and removed, and the space is filled with bone cement. Creating a space in the vertebral body enables the application of more viscous cement and at a much lower pressure than is needed for vertebroplasty. This may result in less cement leakage and fewer complications. Balloons typically are inserted bilaterally, into each fractured vertebral body. Kyphoplasty usually is done under general anesthesia in about 1.5 hours. Patients typically are observed for only a few hours after the surgery, but some may require an overnight hospital stay.
Health Canada has licensed KyphX Xpander Inflatable Bone Tamp (Kyphon Inc., Sunnyvale, CA), for kyphoplasty in patients with VCFs. KyphX is the only commercially available device for percutaneous kyphoplasty. The KyphX kit uses a series of bone filler device tubes. Each bone filler device must be loaded manually with cement. The cement is injected into the cavity by pressing an inner stylet.
In the United States, the Food and Drug Administration cleared the KyphX Inflatable Bone Tamp for marketing in July 1998. CE (Conformité Européenne) marking was obtained in February 2000 for the reduction of fracture and/or creation of a void in cancellous bone.
Review Strategy
The aim of this literature review was to evaluate the safety and effectiveness of balloon kyphoplasty in the treatment of painful VCFs.
INAHTA, Cochrane CCTR (formerly Cochrane Controlled Trials Register), and DSR were searched for health technology assessment reports. In addition, MEDLINE, EMBASE, and MEDLINE In-Process & Other Non-Indexed Citations were searched from January 1, 2000 to September 21, 2004. The search was limited to English-language articles and human studies.
The positive end points selected for this assessment were as follows:
Reduction in pain scores
Reduction in vertebral height loss
Reduction in kyphotic (Cobb) angle
Improvement in quality of life scores
The search did not yield any health technology assessments on balloon kyphoplasty. The search yielded 152 citations, including those for review articles. No randomized controlled trials (RCTs) on balloon kyphoplasty were identified. All of the published studies were either prospective cohort studies or retrospective studies with no controls. Eleven studies (all case series) met the inclusion criteria. There was also a comparative study published in German that had been translated into English.
Summary of Findings
The results of the 1 comparative study (level 3a evidence) that was included in this review showed that, compared with conservative medical care, balloon kyphoplasty significantly improved patient outcomes.
Patients who had balloon kyphoplasty reported a significant reduction in pain that was maintained throughout follow-up (6 months), whereas pain scores did not change in the control group. Patients in the balloon kyphoplasty group did not need pain medication after 3 days. In the control group, about one-half of the patients needed more pain medication in the first 4 weeks after the procedure. After 6 weeks, 82% of the patients in the control group were still taking pain medication regularly.
Adjacent fractures were more frequent in the control group than in the balloon kyphoplasty group.
The case series reported on several important clinical outcomes.
Pain: Four studies on osteoporosis patients and 1 study on patients with multiple myeloma/primary cancers used the Visual Analogue Scale (VAS) to measure pain before and after balloon kyphoplasty. All of these studies reported that patients had significantly less pain after the procedure. This was maintained during follow-up. Two other studies on patients with osteoporosis also used the VAS to measure pain and found a significant improvement in pain scores; however, they did not provide follow-up data.
Vertebral body height: All 5 studies that assessed vertebral body height in patients with osteoporosis reported a significant improvement in vertebral body height after balloon kyphoplasty. One study had 1-year follow-up data for 26 patients. Vertebral body height was significantly better at 6 months and 1 year for both the anterior and midline measurements.
Two studies reported that vertebral body height was restored significantly after balloon kyphoplasty for patients with multiple myeloma or metastatic disease. In another study, the researchers reported complete height restoration in 9% of patients, a mean 56% height restoration in 60% of patients, and no appreciable height restoration in 31% of the patients who received balloon kyphoplasty.
Kyphosis correction: Four studies that assessed Cobb angle before and after balloon kyphoplasty in patients with osteoporosis found a significant reduction in degree of kyphosis after the procedure. In these studies, the differences between preoperative and postoperative Cobb angles were 3.4°, 7°, 8.8°, and 9.9°.
Only 1 study investigated kyphosis correction in patients with multiple myeloma or metastatic disease. The authors reported a significant improvement (5.2°) in local kyphosis.
Quality of life: Four studies used the Short Form 36 (SF-36) Health Survey Questionnaire to measure the quality of life in patients with osteoporosis after they had balloon kyphoplasty. A significant improvement in most of the domains of the SF-36 (bodily pain, social functioning, vitality, physical functioning, mental health, and role functioning) was observed in 2 studies. One study found that general health declined, although not significantly, and another found that role emotional declined.
Both studies that used the Oswestry Disability Index found that patients had a better quality of life after balloon kyphoplasty. In one study, this improvement was statistically significant. In another study, researchers found that quality of life after kyphoplasty improved significantly, as measured with the Roland-Morris Disability Questionnaire. Yet another study used a quality of life questionnaire and found that 62% of the patients who had balloon kyphoplasty had returned to normal activities, whereas 2 patients had reduced mobility.
To measure quality of life in patients with multiple myeloma or metastatic disease, one group of researchers used the SF-36 and found significantly better scores on bodily pain, physical functioning, vitality, and social functioning after kyphoplasty. However, the scores for general health, mental health, role physical, and role emotional had not improved. A study that used the Oswestry Disability Index reported that patients’ scores were better postoperatively and at 3 months follow-up.
These were the main findings on complications in patients with osteoporosis:
The bone cement leaked in 37 (6%) of 620 treated fractures.
There were no reports of neurological deficits.
There were no reports of pulmonary embolism due to cement leakage.
There were 6 cases of cardiovascular events in 362 patients:
3 (0.8%) patients had myocardial infarction.
3 (0.8%) patients had cardiac arrhythmias.
There was 1 (0.27%) case of pulmonary embolism due to deep venous thrombosis.
There were 20 (8.4%) cases of new fractures in 238 patients.
For patients with multiple myeloma or metastatic disease, these were the main findings:
The bone cement leaked in 12 (9.6%) of 125 procedures.
There were no reports of neurological deficits.
Economic Analysis
Balloon kyphoplasty requires anesthesia. Standard vertebroplasty requires sedation and an analgesic. Based on these considerations, the professional fees (Cdn) for each procedure are shown in Table 1.
Table 1. Professional Fees for Standard Vertebroplasty and Balloon Kyphoplasty
Balloon kyphoplasty has a sizable device cost add-on of $3,578 (the device cost per case) that standard vertebroplasty does not have. Therefore, the up-front cost (i.e., physician’s fees and device costs) is $187 for standard vertebroplasty and $3,812 for balloon kyphoplasty. (All costs are in Canadian currency.)
There are also “downstream costs” of the procedures, based on the different adverse outcomes associated with each. This includes the risk of developing new fractures (21% for vertebroplasty vs. 8.4% for balloon kyphoplasty), neurological complications (3.9% for vertebroplasty vs. 0% for balloon kyphoplasty), pulmonary embolism (0.1% for vertebroplasty vs. 0% for balloon kyphoplasty), and cement leakage (26.5% for vertebroplasty vs. 6.0% for balloon kyphoplasty). Accounting for these risks, and the base costs to treat each of these complications, the expected downstream costs are estimated at less than $500 per case. Therefore, the expected total direct medical cost per patient is about $700 for standard vertebroplasty and $4,300 for balloon kyphoplasty.
Kyphon, the manufacturer of the inflatable bone tamps, has stated that the predicted Canadian incidence of osteoporosis-related vertebral fractures in 2005 is about 29,000. The predicted incidence of cancer-related vertebral fractures in 2005 is 6,731. Based on Ontario having about 38% of the Canadian population, the incidence in the province is likely to be about 11,000 for osteoporosis-related and 2,500 for cancer-related vertebral fractures. This means there could be as many as 13,500 procedures per year in Ontario; however, this is highly unlikely because most of the cancer-related fractures would likely be treated with medication. Given a $3,600 incremental direct medical cost associated with balloon kyphoplasty, the budget impact of adopting this technology could be as high as $48.6 million per year. However, based on data from the Provider Services Branch, about 120 standard vertebroplasties are done in Ontario annually; given these current utilization patterns, the budget impact is more likely to be in the range of $430,000 per year, driven largely by the $3,578 device cost add-on per balloon kyphoplasty case that standard vertebroplasty does not have.
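The budget figures above follow from simple multiplication, reproduced in the hedged sketch below (Python, not part of the source report); the variable names and the rounding to the report's quoted values are assumptions made for the example.

# Reproduces the arithmetic in the two preceding paragraphs using the figures quoted there.
incremental_cost_per_case = 4300 - 700        # ~$3,600 extra direct medical cost for kyphoplasty
osteoporotic_vcf_ontario = 11000              # ~38% of the predicted 29,000 Canadian cases
cancer_vcf_ontario = 2500                     # ~38% of the predicted 6,731 Canadian cases
max_procedures = osteoporotic_vcf_ontario + cancer_vcf_ontario   # 13,500 upper bound

print(max_procedures * incremental_cost_per_case)   # 48,600,000: upper-bound annual budget impact
print(120 * incremental_cost_per_case)              # 432,000: impact at current volumes (~120 cases/yr)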
Policy Considerations
Other treatments for osteoporotic VCFs are medical management and open surgery. In cases without neurological involvement, the medical treatment of osteoporotic VCFs comprises bed rest, orthotic management, and pain medication. However, these treatments are not free of side effects. Bed rest over time can result in more bone and muscle loss, and can speed the deterioration of the underlying condition. Medication can lead to altered mood or mental status. Surgery in these patients has been limited because of its inherent risks and invasiveness, and the poor quality of osteoporotic bones. However, it may be indicated in patients with neurological deficits.
Neither of these vertebral augmentation procedures eliminates the need for aggressive treatment of osteoporosis. Osteoporotic VCFs are often under-diagnosed and under-treated. A survey of physicians in Ontario (1) who treated elderly patients living in long-term care homes found that although these physicians were aware of the rates of osteoporosis in these patients, 45% did not routinely assess them for osteoporosis, and 26% did not routinely treat them for osteoporosis.
Management of the underlying condition that weakens the vertebral bodies should be part of the treatment plan. All patients with osteoporosis should be in a medical therapy program to treat the underlying condition, and the referring health care provider should monitor the clinical progress of the patient.
The main complication associated with vertebroplasty and balloon kyphoplasty is cement leakage (extravertebral or vascular). This may result in more patient morbidity, longer hospitalizations, the need for open surgery, and the use of pain medications, all of which have related costs. Extravertebral cement leakage can cause neurological complications, like spinal cord compression, nerve root compression, and radiculopathy. In some cases, surgery is required to remove the cement and release the nerve. The rate of cement leakage is much lower after balloon kyphoplasty than after vertebroplasty. Furthermore, the neurological complications seen with vertebroplasty have not been seen in the studies of balloon kyphoplasty. Rarely, cement leakage into the venous system will cause a pulmonary embolism. Finally, compared with vertebroplasty, the rate of new fractures is lower after balloon kyphoplasty.
Diffusion – International, National, Provincial
In Canada, balloon kyphoplasty has not yet been funded in any of the provinces. The first balloon kyphoplasty performed in Canada was in July 2004 in Ontario.
In the United States, the technology is considered by some states as medically reasonable and necessary for the treatment of painful vertebral body compression fractures.
Conclusion
There is level 4 evidence that balloon kyphoplasty to treat pain associated with VCFs due to osteoporosis is as effective as vertebroplasty at relieving pain. Furthermore, the evidence suggests that it restores the height of the affected vertebra. It also results in lower fracture rates in other vertebrae compared with vertebroplasty, and in fewer neurological complications due to cement leakage compared with vertebroplasty. Balloon kyphoplasty is a reasonable alternative to vertebroplasty, although it must be reiterated that this conclusion is based on evidence from level 4 studies.
Balloon kyphoplasty should be restricted to facilities that have sufficient volumes to develop and maintain the expertise required to maximize good quality outcomes. Therefore, consideration should be given to limiting the number of facilities in the province that can do balloon kyphoplasty.
PMCID: PMC3387743  PMID: 23074451
4.  Coil Embolization for Intracranial Aneurysms 
Executive Summary
Objective
To determine the effectiveness and cost-effectiveness of coil embolization compared with surgical clipping to treat intracranial aneurysms.
The Technology
Endovascular coil embolization is a percutaneous approach to treat an intracranial aneurysm from within the blood vessel without the need of a craniotomy. In this procedure, a microcatheter is inserted into the femoral artery near the groin and navigated to the site of the aneurysm. Small helical platinum coils are deployed through the microcatheter to fill the aneurysm, and prevent it from further expansion and rupture. Health Canada has approved numerous types of coils and coil delivery systems to treat intracranial aneurysms. The most favoured are controlled detachable coils. Coil embolization may be used with other adjunct endovascular devices such as stents and balloons.
Background
Intracranial Aneurysms
Intracranial aneurysms are dilations or ballooning of part of a blood vessel in the brain. They range in size from small (<12 mm in diameter) to large (12–25 mm) and giant (>25 mm). There are 3 main types of aneurysms. Fusiform aneurysms involve the entire circumference of the artery; saccular aneurysms have outpouchings; and dissecting aneurysms have tears in the arterial wall. Berry aneurysms are saccular aneurysms with well-defined necks.
Intracranial aneurysms may occur in any blood vessel of the brain; however, they are most commonly found at the branch points of large arteries that form the circle of Willis at the base of the brain. In 85% to 95% of patients, they are found in the anterior circulation. Aneurysms in the posterior circulation are less frequent, and are more difficult to treat surgically due to inaccessibility.
Most intracranial aneurysms are small and asymptomatic. Large aneurysms may have a mass effect, causing compression on the brain and cranial nerves and neurological deficits. When an intracranial aneurysm ruptures and bleeds, resulting in a subarachnoid hemorrhage (SAH), the mortality rate can be 40% to 50%, with severe morbidity of 10% to 20%. The reported overall risk of rupture is 1.9% per year and is higher for women, cigarette smokers, and cocaine users, and in aneurysms that are symptomatic, greater than 10 mm in diameter, or located in the posterior circulation. If left untreated, there is a considerable risk of repeat hemorrhage in a ruptured aneurysm that results in increased mortality.
In Ontario, intracranial aneurysms occur in about 1% to 4% of the population, and the annual incidence of SAH is about 10 cases per 100,000 people. In 2004-2005, about 660 intracranial aneurysm repairs were performed in Ontario.
Treatment of Intracranial Aneurysms
Treatment of an unruptured aneurysm attempts to prevent the aneurysm from rupturing. The treatment of a ruptured intracranial aneurysm aims to prevent further hemorrhage. There are 3 approaches to treating an intracranial aneurysm.
Small, asymptomatic aneurysms less than 10 mm in diameter may be monitored without any intervention other than treatment for underlying risk factors such as hypertension.
Open surgical clipping involves craniotomy, brain retraction, and placement of a silver clip across the neck of the aneurysm while the patient is under general anesthesia. This procedure is associated with surgical risks and neurological deficits.
Endovascular coil embolization, introduced in the 1990s, is the health technology under review.
Literature Review
Methods
The Medical Advisory Secretariat searched the International Network of Agencies for Health Technology Assessment (INAHTA) database and the Cochrane Database of Systematic Reviews to identify relevant systematic reviews. OVID Medline, Medline In-Process and Other Non-Indexed Citations, and Embase were searched for English-language journal articles that reported primary data on the effectiveness or cost-effectiveness of treatments for intracranial aneurysms, obtained in a clinical setting or from analyses of primary data maintained in registers or institutional databases. Internet searches of Medscape and manufacturers’ databases were conducted to identify product information and recent reports on trials that were unpublished but had been presented at international conferences. Four systematic reviews, 3 reports on 2 randomized controlled trials comparing coil embolization with surgical clipping of ruptured aneurysms, 30 observational studies, and 3 economic analysis reports were included in this review.
Results
Safety and Effectiveness
Coil embolization appears to be a safe procedure. Complications associated with coil embolization ranged from 8.6% to 18.6% with a median of about 10.6%. Observational studies showed that coil embolization is associated with lower complication rates than surgical clipping (permanent complication 3-7% versus 10.9%; overall 23% versus 46% respectively, p=0.009). Common complications of coil embolization are thrombo-embolic events (2.5%–14.5%), perforation of aneurysm (2.3%–4.7%), parent artery obstruction (2%–3%), collapsed coils (8%), coil malposition (14.6%), and coil migration (0.5%–3%).
Randomized controlled trials showed that for ruptured intracranial aneurysms with SAH that were suitable for both coil embolization and surgical clipping (mostly saccular aneurysms <10 mm in diameter located in the anterior circulation) in people in good clinical condition:
Coil embolization resulted in a statistically significant 23.9% relative risk reduction and a 7% absolute risk reduction in the composite rate of death and dependency (modified Rankin score 3–6) at 1 year compared to surgical clipping.
The advantage of coil embolization over surgical clipping varies widely with aneurysm location, but endovascular treatment seems beneficial for all sites.
There were fewer deaths in the first 7 years following coil embolization than following surgical clipping (10.8% vs 13.7%). This survival benefit seemed to be consistent over time and was statistically significant (log-rank p=0.03).
Coil embolization is associated with less frequent MRI-detected superficial brain deficits and ischemic lesions at 1 year.
The 1-year rebleeding rate was 2.4% after coil embolization and 1% after surgical clipping. Confirmed rebleeding from the repaired aneurysm after the first year and up to year eight was low and not significantly different between coil embolization and surgical clipping (7 patients for coil embolization vs 2 patients for surgical clipping, log-rank p=0.22).
Observational studies showed that patients with SAH and good clinical grade had better 6-month outcomes and lower risk of symptomatic cerebral vasospasm after coil embolization compared to surgical clipping.
For unruptured intracranial aneurysms, there were no randomized controlled trials that compared coil embolization to surgical clipping. Large observational studies showed that:
The risk of rupture in unruptured aneurysms less than 10 mm in diameter is about 0.05% per year for patients with no previous history of SAH from another aneurysm. The risk of rupture increases with a history of SAH and as the diameter of the aneurysm reaches 10 mm or more.
Coil embolization reduced the composite rate of in-hospital deaths and discharges to long-term or short-term care facilities compared to surgical clipping (odds ratio 2.2, 95% CI 1.6–3.1, p<0.001). The improvement in discharge disposition was greatest in people older than 65 years.
In-hospital mortality rate following treatment of intracranial aneurysm ranged from 0.5% to 1.7% for coil embolization and from 2.1% to 3.5% for surgical clipping. The overall 1-year mortality rate was 3.1% for coil embolization and 2.3% for surgical clipping. One-year morbidity rate was 6.4% for coil embolization and 9.8% for surgical clipping. It is not clear whether these differences were statistically significant.
Coil embolization is associated with shorter hospital stay compared to surgical clipping.
For both ruptured and unruptured aneurysms, the outcome of coil embolization does not appear to be dependent on age, whereas surgical clipping has been shown to yield worse outcome for patients older than 64 years.
Angiographic Efficiency and Recurrences
The main drawback of coil embolization is its low angiographic efficiency. The percentage of complete aneurysm occlusion after coil embolization (27%–79%, median 55%) remains lower than that achieved with surgical clipping (82%–100%). However, about 90% of coiled aneurysms achieve near total occlusion or better. Incompletely coiled aneurysms have been shown to have higher aneurysm recurrence rates ranging from 7% to 39% for coil embolization compared to 2.9% for surgical clipping. Recurrence is defined as refilling of the neck, sac, or dome of a successfully treated aneurysm as shown on an angiogram. The long-term clinical significance of incomplete occlusion following coil embolization is unknown, but in one case series, 20% of patients had major recurrences, and 50% of these required further treatment.
Long-Term Outcomes
A large international randomized trial reported that the survival benefit from coil embolization was sustained for at least 7 years. The rebleeding rate between year 2 and year 8 following coil embolization was low and not significantly different from that of surgical clipping. However, high quality long-term angiographic evidence is lacking. Accordingly, there is uncertainty about long-term occlusion status, coil durability, and recurrence rates. While surgical clipping is associated with higher immediate procedural risks, its long-term effectiveness has been established.
Indications and Contraindications
Coil embolization offers treatment for people at increased risk for craniotomy, such as those over 65 years of age, with poor clinical status, or with comorbid conditions. The technology also makes it possible to treat surgical high-risk aneurysms.
Not all aneurysms are suitable for coil embolization. Suitability depends on the size, anatomy, and location of the aneurysm. Aneurysms more than 10 mm in diameter or with an aneurysm neck greater than or equal to 4 mm are less likely to achieve total occlusion. They are also more prone to aneurysm recurrences and to complications such as coil compaction or parent vessel occlusion. Aneurysms with a dome to neck ratio of less than 1 have been shown to have lower obliteration rates and poorer outcome following coil embolization. Furthermore, aneurysms in the middle cerebral artery bifurcation are less suitable for coil embolization. For some aneurysms, treatment may require the use of both coil embolization and surgical clipping or adjunctive technologies, such as stents and balloons, to obtain optimal results.
Diffusion
Information from 3 countries indicates that coil embolization is a rapidly diffusing technology. For example, it accounted for about 40% of aneurysm treatments in the United Kingdom.
In Ontario, coil embolization is an insured health service, with the same fee code and fee schedule as open surgical repair requiring craniotomy. Other costs associated with coil embolization are covered under hospitals’ global budgets. Utilization data showed that in 2004-2005, coil embolization accounted for about 38% (251 cases) of all intracranial aneurysm repairs in the province. With the 2005 publication of the positive long-term survival data from the International Subarachnoid Aneurysm Trial, the pressure for diffusion will likely increase.
Economic Analysis
Recent economic studies show that treatment of unruptured intracranial aneurysms smaller than 10 mm in diameter in people with no previous history of SAH, either by coil embolization or surgical clipping, would not be effective or cost-effective. However, in patients with aneurysms that are greater than or equal to 10 mm or symptomatic, or in patients with a history of SAH, treatment appears to be cost-effective.
In Ontario, the average device cost of coil embolization per case was estimated to be about $7,500 higher than surgical clipping. Assuming that the total number of intracranial aneurysm repairs in Ontario increases to 750 in the 2007 fiscal year, and that up to 60% (450 cases) of these will be repaired by coil embolization, the difference in device costs for the 450 cases (including a 15% recurrence rate) would be approximately $3.8 million. This figure does not include capital costs (e.g., $3 million for an angiosuite), additional human resources required, or costs of follow-up. The increase in expenditures associated with coil embolization may be partially offset by shorter operating room times and hospital stays for endovascular repair of unruptured aneurysms; however, these cost savings are not likely to exceed 25% of the total outlay, since the majority of cases involve ruptured aneurysms. Furthermore, the recent growth in aneurysm repair has predominantly been in coil embolization, presumably for patients for whom surgical clipping would not be advised; no offset of surgical clipping costs can therefore be applied in such cases. For ruptured aneurysms, downstream cost savings from endovascular repair are likely to be minimal overall, even though the savings for individual cases may be substantial because of lower perioperative complications for endovascular aneurysm repair.
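A hedged sketch of the device-cost projection above (Python, not part of the source report); it assumes, as the text implies, that the 15% recurrence rate translates into 15% additional repeat procedures, which yields the roughly $3.8 million quoted.

# Device-cost difference for projected coil embolization volumes in Ontario (figures from the text).
projected_cases = 450              # 60% of the assumed 750 aneurysm repairs in fiscal 2007
recurrence_rate = 0.15             # assumed here to mean 15% of cases need a repeat procedure
extra_device_cost = 7500           # average added device cost per coiling vs. clipping (Cdn$)

total_extra_cost = projected_cases * (1 + recurrence_rate) * extra_device_cost
print(total_extra_cost)            # 3,881,250, in line with the ~$3.8 million cited above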
Guidelines
Two guidance documents issued by the National Institute for Clinical Excellence (UK) in 2005 support the use of coil embolization for both unruptured and ruptured (SAH) intracranial aneurysms, provided that procedures are in place for informed consent, audit, and clinical governance, and that the procedure is performed in specialist units with expertise in the endovascular treatment of intracranial aneurysms.
Conclusion
For people in good clinical condition following subarachnoid hemorrhage from an acutely ruptured intracranial aneurysm suitable for either surgical clipping or endovascular repair, coil embolization results in improved independent survival in the first year and improved survival for up to seven years compared to surgical clipping. The rebleeding rate is low and not significantly different between the two procedures after the first year. However, there is uncertainty regarding long-term occlusion status, the durability of the coils, and long-term complications.
For people with unruptured aneurysms, level 4 evidence suggests that coil embolization may be associated with comparable or less mortality and morbidity, shorter hospital stay, and less need for discharge to short-term rehabilitation facilities. The greatest benefit was observed in people over 65 years of age. In these patients, the decision regarding treatment needs to be based on the assessment of the risk of rupture against the risk of the procedure, as well as the morphology of the aneurysm.
In people who require treatment for intracranial aneurysm, but for whom surgical clipping is too risky or not feasible, coil embolization provides survival benefits over surgical clipping, even though the outcomes may not be as favourable as in people in good clinical condition and with small aneurysms. The procedure may be considered under the following circumstances provided that the aneurysm is suitable for coil embolization:
Patients in poor/unstable clinical or neurological state
Patients at high risk for surgical repair (e.g., people older than 65 years or with comorbidities), or
Aneurysm(s) with poor accessibility or visibility for surgical treatment due to their location (e.g. ophthalmic or basilar tip aneurysms)
Compared to small aneurysms with a narrow neck in the anterior circulation, large aneurysms (>10 mm in diameter), aneurysms with a wide neck (>4 mm), and aneurysms in the posterior circulation have lower occlusion rates and higher rates of hemorrhage when treated with coil embolization.
The extent of aneurysm obliteration after coil embolization remains lower than that achieved with surgical clipping. Aneurysm recurrences after successful coiling may require repeat treatment with endovascular or surgical procedures. Experts caution that long-term angiographic outcomes of coil embolization are unknown at this time. Informed consent for and long-term follow-up after coil embolization are recommended.
The decision to treat an intracranial aneurysm with surgical clipping or coil embolization needs to be made jointly by the neurosurgeon and neuro-intervention specialist, based on the clinical status of the patient, the size and morphology of the aneurysm, and the preference of the patient.
The performance of endovascular coil embolization should take place in centres with expertise in both neurosurgery and endovascular neuro-interventions, with adequate treatment volumes to maintain good outcomes. Distribution of the technology should also take into account that patients with SAH should be treated as soon as possible with minimal disruption.
PMCID: PMC3379525  PMID: 23074479
5.  Health-related quality of life and utility scores in short-term survivors of pediatric acute lymphoblastic leukemia 
Quality of Life Research  2012;22(3):677-681.
Purpose
Increased survival in pediatric acute lymphoblastic leukemia (ALL) has made outcomes such as health-related quality of life (HRQL) and economic burden more important. To make informed decisions on the use of healthcare resources, costs as well as utilities need to be taken into account. Among preference-based HRQL instruments, the Health Utilities Index (HUI) is the most widely used in pediatric cancer. Information on utility scores during ALL treatment and in long-term survivors is available, but utility scores in short-term survivors are lacking. This study assesses utility scores, health state, and HRQL in short-term (6 months to 4 years) ALL survivors.
Methods
Cross-sectional single-center cohort study of short-term ALL survivors using HUI3 proxy assessments.
Results
Thirty-three survivors (median 1.5 years off treatment) reported 14 unique health states. The majority of survivors (61%) enjoyed perfect health, but 21% had three affected attributes. Overall, HRQL was lower than the norm, although not significantly so; the difference was large and may be clinically relevant. Cognition was significantly impaired (p = 0.03).
Conclusion
Although 61% of short-term survivors of ALL report no impairment, the health status of the other patients leads to a clinically important impairment in HRQL compared to norms. Prospective studies assessing utility scores associated with pediatric ALL should be performed, enabling valid and reliable cost-utility analyses for policy makers to make informed decisions.
doi:10.1007/s11136-012-0183-x
PMCID: PMC3607731  PMID: 22547048
Quality of life; Acute lymphoblastic leukemia; Health Utilities Index; Survivor; Childhood cancer; Pediatric
6.  Utilization of DXA Bone Mineral Densitometry in Ontario 
Executive Summary
Issue
Systematic reviews and analyses of administrative data were performed to determine the appropriate use of bone mineral density (BMD) assessments using dual energy x-ray absorptiometry (DXA), and the associated trends in wrist and hip fractures in Ontario.
Background
Dual Energy X-ray Absorptiometry Bone Mineral Density Assessment
Dual energy x-ray absorptiometry bone densitometers measure bone density based on the differential absorption of 2 x-ray beams by bone and soft tissues. DXA is the gold standard for detecting and diagnosing osteoporosis, a systemic disease characterized by low bone density and altered bone structure, resulting in low bone strength and increased risk of fractures. The test is fast (approximately 10 minutes) and accurate (accuracy exceeds 90% at the hip), with low radiation exposure (1/3 to 1/5 of that from a chest x-ray). DXA densitometers are licensed as Class 3 medical devices in Canada. The World Health Organization has established criteria for osteoporosis and osteopenia based on DXA BMD measurements: osteoporosis is defined as a BMD more than 2.5 standard deviations below the mean BMD for normal young adults (i.e., T-score < –2.5), while osteopenia is defined as a BMD more than 1 but less than 2.5 standard deviations below the mean for normal young adults (i.e., T-score < –1 and ≥ –2.5). DXA densitometry is presently an insured health service in Ontario.
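For concreteness, the WHO thresholds quoted above can be expressed as a small classifier; the sketch below (Python, not part of the source report) uses the cut-points exactly as stated in the text, and the function name and the handling of a T-score exactly at –2.5 are assumptions.

# WHO DXA criteria as quoted above: osteoporosis T-score < -2.5; osteopenia -2.5 <= T-score < -1.
def classify_t_score(t_score):
    if t_score < -2.5:
        return "osteoporosis"
    if t_score < -1.0:
        return "osteopenia"
    return "normal"

print(classify_t_score(-2.8))   # osteoporosis
print(classify_t_score(-1.7))   # osteopenia
print(classify_t_score(0.3))    # normal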
Clinical Need
 
Burden of Disease
The Canadian Multicentre Osteoporosis Study (CaMos) found that 16% of Canadian women and 6.6% of Canadian men have osteoporosis based on the WHO criteria, with prevalence increasing with age. Osteopenia was found in 49.6% of Canadian women and 39% of Canadian men. In Ontario, it is estimated that nearly 530,000 Ontarians have some degree of osteoporosis. Osteoporosis-related fragility fractures occur most often in the wrist, femur and pelvis. These fractures, particularly those in the hip, are associated with increased mortality and decreased functional capacity and quality of life. A Canadian study showed that at 1 year after a hip fracture, the mortality rate was 20%. Another 20% of patients required institutional care, 40% were unable to walk independently, and health-related quality of life was lower because of attributes such as pain, decreased mobility and decreased ability for self-care. The cost of osteoporosis and osteoporotic fractures in Canada was estimated to be $1.3 billion in 1993.
Guidelines for Bone Mineral Density Testing
With 2 exceptions, almost all guidelines address only women. None of the guidelines recommend blanket population-based BMD testing. Instead, all guidelines recommend BMD testing in people at risk of osteoporosis, predominantly women aged 65 years or older. For women under 65 years of age, BMD testing is recommended only if one major or two minor risk factors for osteoporosis exist. Osteoporosis Canada did not restrict its recommendations to women, and thus its guidelines apply to both sexes. Major risk factors are age greater than or equal to 65 years, a history of previous fractures, family history (especially parental history) of fracture, and medications or disease conditions that affect bone metabolism (such as long-term glucocorticoid therapy). Minor risk factors include low body mass index, low calcium intake, alcohol consumption, and smoking.
Current Funding for Bone Mineral Density Testing
The Ontario Health Insurance Program (OHIP) Schedule presently reimburses DXA BMD at the hip and spine. Measurements at both sites are required if feasible. Patients at low risk of accelerated bone loss are limited to one BMD test within any 24-month period, but there are no restrictions on people at high risk. The total fee including the professional and technical components for a test involving 2 or more sites is $106.00 (Cdn).
Method of Review
This review consisted of 2 parts. The first part was an analysis of Ontario administrative data relating to DXA BMD, wrist and hip fractures, and use of antiresorptive drugs in people aged 65 years and older. The Institute for Clinical Evaluative Sciences extracted data from the OHIP claims database, the Canadian Institute for Health Information hospital discharge abstract database, the National Ambulatory Care Reporting System, and the Ontario Drug Benefit database using OHIP and ICD-10 codes. The data was analyzed to examine the trends in DXA BMD use from 1992 to 2005, and to identify areas requiring improvement.
The second part included systematic reviews and analyses of evidence relating to issues identified in the analyses of utilization data. Altogether, 8 reviews and qualitative syntheses were performed, consisting of 28 published systematic reviews and/or meta-analyses, 34 randomized controlled trials, and 63 observational studies.
Findings of Utilization Analysis
Analysis of administrative data showed a 10-fold increase in the number of BMD tests in Ontario between 1993 and 2005.
OHIP claims for BMD tests are presently increasing at a rate of 6 to 7% per year. Approximately 500,000 tests were performed in 2005/06 with an age-adjusted rate of 8,600 tests per 100,000 population.
Women accounted for 90% of all BMD tests performed in the province.
In 2005/06, there was a 2-fold variation in the rate of DXA BMD tests across Local Health Integration Networks, but a 10-fold variation between the county with the highest rate (Toronto) and that with the lowest rate (Kenora). The analysis also showed that:
With the increased use of BMD, there was a concomitant increase in the use of antiresorptive drugs (as shown in people 65 years and older) and a decrease in the rate of hip fractures in people age 50 years and older.
Repeat BMD tests made up approximately 41% of all tests. Most of the people (>90%) who had annual BMD tests over a 2-year or 3-year period were coded as being at high risk for osteoporosis.
18% (20,865) of the people who had a repeat BMD test within a 24-month period, and 34% (98,058) of the people who had one BMD test in a 3-year period, were under 65 years of age, had no fracture in that year, and were coded as low risk.
In the year following a fracture, only 19% of people older than 65 years underwent BMD testing and only 41% received osteoporosis treatment.
Men accounted for 24% of all hip fractures and 21% of all wrist fractures, but only 10% of BMD tests. The rates of BMD testing and treatment in men after a fracture were only half of those in women.
In both men and women, the rate of hip and wrist fractures mainly increased after age 65 with the sharpest increase occurring after age 80 years.
Findings of Systematic Review and Analysis
Serial Bone Mineral Density Testing for People Not Receiving Osteoporosis Treatment
A systematic review showed that the mean rate of bone loss in people not receiving osteoporosis treatment (including postmenopausal women) is generally less than 1% per year. Higher rates of bone loss were reported for people with disease conditions or on medications that affect bone metabolism. To be considered a genuine biological change, the change in BMD between serial measurements must exceed the least significant change (variability) of the testing, which ranges from 2.77% to 8% for precisions of 1% to 3%, respectively. Progression of BMD loss was analyzed using different baseline BMD values, rates of bone loss, precisions, and BMD thresholds for initiating treatment. The analyses showed that serial BMD measurements every 24 months (as per OHIP policy for low-risk individuals) are not necessary for people with no major risk factors for osteoporosis, provided that the baseline BMD is normal (T-score ≥ –1) and the rate of bone loss is less than or equal to 1% per year. For someone with a normal baseline BMD and a rate of bone loss of less than 1% per year, the change in BMD is not likely to exceed the least significant change (even for a 1% precision) in less than 3 years after the baseline test, and BMD is not likely to drop to a level that requires initiation of treatment in less than 16 years after the baseline test.
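To make the timing arithmetic concrete, the following minimal Python sketch reproduces the scale of these estimates; the linear rate of loss, the treatment threshold of T-score –2.5, and the conversion of a 1% BMD change to roughly 0.1 T-score units are illustrative assumptions rather than figures taken from the review.

# Hedged sketch of the serial-BMD timing arithmetic described above.
# Assumed: linear bone loss, treatment threshold at T-score -2.5, and
# 1% BMD change ~ 0.1 T-score units; helper names are hypothetical.
def least_significant_change(precision_pct):
    """LSC (%) = 2.77 x precision, as cited in the review."""
    return 2.77 * precision_pct

def years_to_exceed_lsc(precision_pct, annual_loss_pct):
    """Years before a true change in BMD exceeds measurement variability."""
    return least_significant_change(precision_pct) / annual_loss_pct

def years_to_treatment_threshold(baseline_t, annual_loss_pct,
                                 treatment_t=-2.5, t_units_per_pct=0.1):
    """Years to drift from the baseline T-score down to the treatment threshold."""
    return (baseline_t - treatment_t) / (annual_loss_pct * t_units_per_pct)

# Normal baseline (T-score -1), 1% bone loss per year, 1% precision:
print(years_to_exceed_lsc(1.0, 1.0))            # ~2.8 years
print(years_to_treatment_threshold(-1.0, 1.0))  # ~15 years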
Serial Bone Mineral Density Testing in People Receiving Osteoporosis Therapy
Seven published meta-analyses of randomized controlled trials (RCTs) and 2 recent RCTs on BMD monitoring during osteoporosis therapy showed that although larger increases in BMD were generally associated with reduced risk of fracture, the change in BMD explained only a small percentage of the fracture risk reduction.
Studies showed that some people with small or no increase in BMD during treatment experienced significant fracture risk reduction, indicating that other factors such as improved bone microarchitecture might have contributed to fracture risk reduction.
There is conflicting evidence relating to the role of BMD testing in improving patient compliance with osteoporosis therapy.
Even though BMD may not be a perfect surrogate for reduction in fracture risk when monitoring responses to osteoporosis therapy, experts advised that it is still the only reliable test available for this purpose.
A systematic review conducted by the Medical Advisory Secretariat showed that the magnitude of increases in BMD during osteoporosis drug therapy varied among medications. Although most of the studies yielded mean percentage increases in BMD from baseline that did not exceed the least significant change for a 2% precision after 1 year of treatment, there were some exceptions.
Bone Mineral Density Testing and Treatment After a Fragility Fracture
A review of 3 published pooled analyses of observational studies and 12 prospective population-based observational studies showed that the presence of any prevalent fracture increases the relative risk for future fractures by approximately 2-fold or more. A review of 10 systematic reviews of RCTs and 3 additional RCTs showed that therapy with antiresorptive drugs significantly reduced the risk of vertebral fractures by 40% to 50% in postmenopausal osteoporotic women and osteoporotic men, and 2 antiresorptive drugs also reduced the risk of nonvertebral fractures by 30% to 50%. Evidence from observational studies in Canada and other jurisdictions suggests that patients who underwent BMD measurement, particularly if a diagnosis of osteoporosis was made, were more likely to be given pharmacologic bone-sparing therapy. Despite these findings, the rate of BMD investigation and osteoporosis treatment after a fracture remained low (<20%) in Ontario as well as in other jurisdictions.
Bone Mineral Density Testing in Men
There are presently no specific Canadian guidelines for BMD screening in men. A review of the literature suggests that risk factors for fracture and the rate of vertebral deformity are similar in men and women, but the mortality rate after a hip fracture is higher in men than in women. Two bisphosphonates have been shown to reduce the risk of vertebral and hip fractures in men. However, rates of BMD testing and osteoporosis treatment were disproportionately low in Ontario men in general, and particularly after a fracture, even though men accounted for 25% of hip and wrist fractures. The Ontario data also showed that the rates of wrist fracture and hip fracture in men rose sharply in the 75- to 80-year age group.
Ontario-Based Economic Analysis
The economic analysis focused on the economic impact of decreasing future hip fractures by increasing the rate of BMD testing in men and women aged 65 years or older following a hip or wrist fracture. A decision analysis showed this strategy, especially when enhanced by improved reporting of BMD tests, to be cost-effective, with a cost-effectiveness ratio ranging from $2,285 (Cdn) per fracture avoided (worst-case scenario) to $1,981 (Cdn) per fracture avoided (best-case scenario). A budget impact analysis estimated that shifting utilization of BMD testing from the low-risk population to high-risk populations within Ontario would result in a saving of $0.85 million to $1.5 million (Cdn) to the health system. The potential net saving was estimated at $1.2 million to $5 million (Cdn) when the downstream cost avoidance due to prevention of future hip fractures was factored into the analysis.
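As a schematic of the two quantities reported above (this is not the Secretariat's model; the function names are hypothetical and all inputs are placeholders to be filled with model-derived estimates), the calculations reduce to:

# Hedged schematic of the reported economic measures; inputs are placeholders,
# not study data.
def cost_per_fracture_avoided(incremental_testing_cost, hip_fractures_avoided):
    """Cost-effectiveness ratio: extra testing spend per hip fracture avoided."""
    return incremental_testing_cost / hip_fractures_avoided

def net_budget_impact(saving_from_fewer_low_risk_tests,
                      cost_of_added_high_risk_tests,
                      hip_fractures_avoided, cost_per_hip_fracture):
    """Direct saving from shifting tests plus downstream cost avoidance."""
    direct_saving = saving_from_fewer_low_risk_tests - cost_of_added_high_risk_tests
    return direct_saving + hip_fractures_avoided * cost_per_hip_fracture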
Other Factors for Consideration
There is a lack of standardization for BMD testing in Ontario. Two different standards are presently being used, and experts suggest that variability in results from different facilities may lead to unnecessary testing. There is also no requirement for standardized equipment, procedures, or reporting formats. The current reimbursement policy for BMD testing encourages serial testing every 2 years in people at low risk of accelerated bone loss; this review showed that such testing is not necessary in all cases. The lack of a database to collect clinical data on BMD testing makes it difficult to evaluate the clinical profiles of patients tested and the outcomes of the BMD tests. There are ministry initiatives in progress under the Osteoporosis Program to develop a mandatory standardized requisition form for BMD tests to facilitate data collection and clinical decision-making. Work is also underway to develop guidelines for BMD testing in men and in perimenopausal women.
Conclusion
Increased use of BMD in Ontario since 1996 appears to be associated with increased use of antiresorptive medication and a decrease in hip and wrist fractures.
Data suggest that as many as 20% (98,000) of the DXA BMD tests in Ontario in 2005/06 were performed in people aged less than 65 years, with no fracture in the current year, and coded as being at low risk for accelerated bone loss; this is not consistent with current guidelines. Even though some of these people might have been incorrectly coded as low-risk, the number of tests in people truly at low risk could still be substantial.
Approximately 4% (21,000) of the DXA BMD tests in 2005/06 were repeat BMDs in low-risk individuals within a 24-month period. Even though this is in compliance with current OHIP reimbursement policies, evidence showed that serial BMD testing every 2 years is not necessary in individuals without major risk factors for fractures, provided that the baseline BMD is normal (T-score ≥ –1). In this population, BMD measurement may be repeated 3 to 5 years after the baseline test to establish the rate of bone loss, and further serial BMD tests may not be necessary for another 7 to 10 years if the rate of bone loss is no more than 1% per year. The precision of the test needs to be considered when interpreting serial BMD results.
Although changes in BMD may not be the perfect surrogate for reduction in fracture risk as a measure of response to osteoporosis treatment, experts advised that it is presently the only reliable test for monitoring response to treatment and to help motivate patients to continue treatment. Patients should not discontinue treatment if there is no increase in BMD after the first year of treatment. Lack of response or bone loss during treatment should prompt the physician to examine whether the patient is taking the medication appropriately.
Men and women who have had a fragility fracture of the hip, spine, wrist or shoulder are at increased risk of a future fracture, but this population is presently underinvestigated and undertreated. Additional efforts must be made to communicate to physicians (particularly orthopaedic surgeons and family physicians) and the public the need for a BMD test after a fracture, and for initiating treatment if low BMD is found.
Men had a disproportionately low rate of BMD tests and osteoporosis treatment, especially after a fracture. Evidence and fracture data showed that the risk of hip and wrist fractures in men rises sharply at age 70 years.
Some counties had BMD utilization rates that were only 10% of that of the county with the highest utilization. The reasons for low utilization need to be explored and addressed.
Initiatives such as aligning reimbursement policy with current guidelines, developing specific guidelines for BMD testing in men and perimenopausal women, improving BMD reports to assist in clinical decision making, developing a registry to track BMD tests, improving access to BMD tests in remote/rural counties, establishing mechanisms to alert family physicians of fractures, and educating physicians and the public, will improve the appropriate utilization of BMD tests, and further decrease the rate of fractures in Ontario. Some of these initiatives such as developing guidelines for perimenopausal women and men, and developing a standardized requisition form for BMD testing, are currently in progress under the Ontario Osteoporosis Strategy.
PMCID: PMC3379167  PMID: 23074491
7.  Patient-reported outcome 2 years after lung transplantation: does the underlying diagnosis matter? 
Purpose
Transplantation has the potential to produce profound effects on survival and health-related quality of life (HRQL). The inclusion of the patient’s perspective may play an important role in the assessment of the effectiveness of lung transplantation. Patient perspectives are assessed by patient-reported outcome measures, including HRQL measures. We describe how patients’ HRQL among different diagnosis groups can be used by clinicians to monitor and evaluate the outcomes associated with transplantation.
Methods
Consecutive lung transplant recipients attending the lung transplant outpatient clinic in a tertiary institution completed the 15-item Health Utilities Index (HUI) questionnaire on a touchscreen computer. The results were available to clinicians at every patient visit. The HUI3 covers a range of severity and comorbidities in eight dimensions of health status. Overall HUI3 scores are on a scale in which dead = 0.00 and perfect health = 1.00; disability categories range from no disability = 1 to severe disability <0.70. Single-attribute and overall HUI3 scores were used to compare patients’ HRQL among different diagnosis groups. Random-effect models with time since transplant as a random variable and age, gender, underlying diagnoses, infections, and bronchiolitis obliterans syndrome as fixed variables were built to identify determinants of health status at 2 years posttransplantation.
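As a hedged illustration of how such a model might be specified (this is not the authors' code: the variable names, the synthetic data, and the simplification to a random intercept per patient with time as a covariate are assumptions), a mixed-effects fit in Python could look like this:

# Hedged sketch: mixed-effects model of overall HUI3 utility, loosely following
# the analysis described above. Synthetic data and names are assumptions; a random
# intercept per patient stands in for the full random-effects structure.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_patients, n_visits = 60, 4
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n_patients), n_visits),
    "years_post_tx": np.tile(np.arange(1, n_visits + 1) * 0.5, n_patients),
    "age": np.repeat(rng.integers(19, 76, n_patients), n_visits),
    "male": np.repeat(rng.integers(0, 2, n_patients), n_visits),
    "diagnosis": np.repeat(rng.choice(["COPD", "CF", "IPF", "PAH"], n_patients), n_visits),
    "cmv": np.repeat(rng.integers(0, 2, n_patients), n_visits),
    "bos": np.repeat(rng.integers(0, 2, n_patients), n_visits),
})
# Simulated overall HUI3 utility (dead = 0.00, perfect health = 1.00).
df["hui3"] = np.clip(
    0.85 - 0.15 * df["bos"] - 0.002 * (df["age"] - 52) + rng.normal(0, 0.1, len(df)),
    0, 1)

model = smf.mixedlm("hui3 ~ age + male + C(diagnosis) + cmv + bos + years_post_tx",
                    data=df, groups=df["patient"])
print(model.fit().summary())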
Results
Two hundred fourteen lung transplant recipients (61% male; mean age 52 years, range 19–75) were included in the study. Chronic obstructive pulmonary disease and cystic fibrosis patients displayed moderate disability, while pulmonary fibrosis and pulmonary arterial hypertension patients displayed severe disability. Patients with chronic obstructive pulmonary disease had the worst pain level, whereas patients with pulmonary fibrosis had the worst emotion and cognition levels. A random-effect model confirmed that development of bronchiolitis obliterans syndrome was the most important determinant of health status (P = 0.03) compared with other variables, such as cytomegalovirus infections and underlying diagnoses.
Conclusion
Descriptions of patients’ HRQL among different diagnosis groups could be used by clinicians to assist individualized patient care.
doi:10.2147/PROM.S32399
PMCID: PMC3508652  PMID: 23204877
patient-reported outcomes; health-related quality of life measures; underlying diagnoses in lung transplant recipients; health utilities index
8.  Why Reassurance Fails in Patients with Unexplained Symptoms—An Experimental Investigation of Remembered Probabilities 
PLoS Medicine  2006;3(8):e269.
Background
Providing reassurance is one of physicians' most frequently used verbal interventions. However, medical reassurance can fail or even have negative effects. This is frequently the case in patients with medically unexplained symptoms. It is hypothesized that these patients are more likely than patients from other groups to incorrectly recall the likelihoods of medical explanations provided by doctors.
Methods and Findings
Thirty-three patients with medically unexplained symptoms, 22 patients with major depression, and 30 healthy controls listened to an audiotaped medical report, as well as to two control reports. After listening to the reports, participants were asked to rate what the doctor thinks the likelihood is that the complaints are caused by a specific medical condition.
Although the doctor rejected most of the medical explanations for the symptoms in his verbal report, the patients with medically unexplained complaints remembered a higher likelihood of medical explanations for their symptoms. No differences were found between the other groups or for the control conditions. When asked to imagine that the reports were applicable to themselves, patients with multiple medical complaints reported more concerns about their health state than individuals in the other groups.
Conclusions
Physicians should be aware that patients with medically unexplained symptoms recall the likelihood of medical causes for their complaints incorrectly. Therefore, physicians should verify correct understanding by using check-back questions and asking for summaries, to improve the effect of reassurance.
Those patients for whom there is no medical explanation for their symptoms are likely to have more difficulty than other patients in remembering information intended to reassure them about their condition.
Editors' Summary
Background.
Being told by the doctor that that niggling headache or persistent stomach ache is not caused by a medical condition reassures most patients. But for some—those with a history of medically unexplained complaints—being told that tests have revealed no underlying cause for their symptoms provides little or no reassurance. Such patients have what is sometimes called “somatization syndrome.” In somatization, mental factors such as stress manifest themselves as physical symptoms. Patients with somatization syndrome start to report multiple medically unexplained symptoms as young adults. These symptoms, which change over time, include pain at different sites in the body and digestive, reproductive, and nervous system problems. What causes this syndrome is unknown and there is no treatment other than helping patients to control their symptoms.
Why Was This Study Done?
Patients with medically unexplained complaints make up a substantial and expensive part of the workload of general medical staff. Part of this expense is because patients with somatization syndrome are not reassured by their medical practitioners telling them there is no physical cause for their symptoms, which leads to requests for further tests. It is unclear why medical reassurance fails in these patients, but if this puzzle could be solved, it might help doctors to deal better with them. In this study, the researchers tested the idea that these patients do not accept medical reassurance because they incorrectly remember what their doctors have told them about the likelihood that specific medical conditions could explain their symptoms.
What Did the Researchers Do and Find?
The researchers recruited patients with medically unexplained symptoms and, for comparison, patients with depression and healthy individuals. All the participants were assessed for somatization syndrome and their general memory tested. They then listened to three audiotapes. In one, a doctor gave test results to a patient with abdominal pain (a medical situation). The other two tapes dealt with a social situation (the lack of an invitation to a barbecue) and a neutral situation (a car breakdown). Each tape contained ten messages, including four that addressed possible explanations for the problem. Two were unambiguous and negative—for example, “the reason for your complaints is definitely not stomach flu.” Two were ambiguous but highly unlikely—“we don't think that you have bowel cancer; this is very unlikely.” The researchers then assessed how well the participants remembered the likelihood that any given explanation was responsible for the patient's symptoms, the missing invitation, or the broken-down car. The patients with somatization syndrome overestimated the likelihood of medical causes for symptoms, particularly (and somewhat surprisingly) when the doctor's assessment had been unambiguous. By contrast, the other participants correctly remembered the doctor's estimates as low. The three study groups were similar in their recall of the likelihood estimates from the social or neutral situation. Finally, when asked to imagine that the medical situation was personally applicable, the patients with unexplained symptoms reacted more emotionally than the other study participants by reporting more concerns with their health.
What Do These Findings Mean?
These results support the researchers' hypothesis that people with somatization syndrome remember the chance that a given symptom has a specific medical cause incorrectly. This is not because of a general memory deficit or an inability to commit health-related facts to memory. The results also indicate that these patients react emotionally to medical situations, so they may find it hard to cope when a doctor fails to explain all their symptoms. Some of these characteristics could, of course, reflect the patients' previous experiences with medical professionals, and the experiment will need to be repeated with additional taped situations and more patients before firm recommendations can be made to help people with somatization syndrome. Nevertheless, given that medical reassurance and the presentation of negative results led to overestimates of the likelihood of medical explanations for symptoms in patients with somatization syndrome, the researchers recommend that doctors bear this bias in mind. To reduce it, they suggest, doctors could ask patients for summaries about what they have been told. This would make it possible to detect when patients have misremembered the likelihood of various medical explanations, and provide an opportunity to correct the situation.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030269.
• MedlinePlus encyclopedia entry on somatization disorder
• Wikipedia page on somatization disorder (note that Wikipedia is a free online encyclopedia that anyone can edit)
• Prodigy Knowledge's information for patients on somatization and somatoform disorders
doi:10.1371/journal.pmed.0030269
PMCID: PMC1523375  PMID: 16866576
9.  The impact of disease progression on perceived health status and quality of life of long-term cancer survivors 
Journal of Cancer Survivorship  2009;3(3):164-173.
Introduction
The number of cancer survivors experiencing disease progression (DP) is increasing with the growing number of cancer survivors. However, little is known about whether DP affects the health-related quality of life (HRQL) of long-term cancer survivors. We therefore aimed to compare the health status (HS) and HRQL of DP and disease-free (DF) survivors up to 15 years after initial diagnosis.
Methods
232 cancer survivors with DP identified through the Eindhoven Cancer Registry were matched with 232 DF survivors of similar demographic and clinical characteristics. Patients completed generic HS (SF-36) and cancer-specific HRQL (QOL-CS) questionnaires 5–15 years after diagnosis.
Results
Compared with DF survivors, DP survivors exhibited significantly lower scores on all SF-36 and QOL-CS (except spiritual well-being) dimensions. DF survivors had better scores than the normative population on all SF-36 dimensions. Among survivors with DP, those with short survival (<5 years) had significantly poorer HS scores on all dimensions except bodily pain compared with the normative population. Comparatively, the long survival (≥5 years) DP group had better HRQL than the short DP group but poorer HRQL than the normative population. In multivariate analyses, DP and DF survival time were independently associated with aspects of HS and HRQL in cancer survivors.
Discussions/Conclusions
DP cancer survivors have poorer long-term HS and HRQL than DF survivors. However, there is a suggestion that HS and HRQL do improve over time following DP.
Implication for Cancer Survivors
Although DP survivors report poorer long-term HRQL compared with DF cancer survivors, results suggest that time can attenuate the distress of DP on HRQL. Psycho-educational programs could help to increase patients’ sense of empowerment and personal control should DP occur.
doi:10.1007/s11764-009-0094-1
PMCID: PMC2714447  PMID: 19557519
Cancer; Disease progression; Health status; Long-term survivors; Quality of life; Recurrence
10.  Patient-Physician Communication About Health-Related Quality-of-Life Problems: Are Non-Hodgkin Lymphoma Survivors Willing to Talk? 
Journal of Clinical Oncology  2013;31(31):3964-3970.
Purpose
To investigate non-Hodgkin lymphoma (NHL) survivors' willingness to discuss health-related quality-of-life (HRQOL) problems with their follow-up care physician.
Patients and Methods
Willingness to discuss HRQOL problems (physical, daily, emotional, social, and sexual functioning) was examined among 374 NHL survivors, 2 to 5 years postdiagnosis. Survivors were asked if they would bring up HRQOL problems with their physician and indicate reasons why not. Logistic regression models examined the association of patient sociodemographics, clinical characteristics, follow-up care variables, and current HRQOL scores with willingness to discuss HRQOL problems.
Results
Overall, 94%, 82%, 76%, 43%, and 49% of survivors would initiate discussions of physical, daily, emotional, social, and sexual functioning, respectively. Survivors who indicated their physician “always” spent enough time with them or rated their care as “excellent” were more willing to discuss HRQOL problems (P < .05). Survivors reporting poorer physical health were less willing to discuss their daily functioning problems (P < .001). Men were more willing to discuss sexual problems than women (P < .001). One in three survivors cited “nothing can be done” as a reason for not discussing daily functioning problems, and at least one in four cited “this was not their doctor's job” and a preference to “talk to another clinician” as reasons for not discussing emotional, social, and sexual functioning.
Conclusion
NHL survivors' willingness to raise HRQOL problems with their physician varied by HRQOL domain. For some domains, even when survivors were experiencing problems, they may not discuss them. To deliver cancer care for the whole patient, interventions that facilitate survivor-clinician communication about survivors' HRQOL are needed.
doi:10.1200/JCO.2012.47.6705
PMCID: PMC3805931  PMID: 24062408
11.  Packaging Health Services When Resources Are Limited: The Example of a Cervical Cancer Screening Visit 
PLoS Medicine  2006;3(11):e434.
Background
Increasing evidence supporting the value of screening women for cervical cancer once in their lifetime, coupled with mounting interest in scaling up successful screening demonstration projects, presents challenges to public health decision makers seeking to take full advantage of the single-visit opportunity to provide additional services. We present an analytic framework for packaging multiple interventions during a single point of contact, explicitly taking into account a budget and scarce human resources, constraints acknowledged as significant obstacles to the provision of health services in poor countries.
Methods and Findings
We developed a binary integer programming (IP) model capable of identifying an optimal package of health services to be provided during a single visit for a particular target population. Inputs to the IP model are derived using state-transition models, which compute lifetime costs and health benefits associated with each intervention. In a simplified example of a single lifetime cervical cancer screening visit, we identified packages of interventions among six diseases that maximized disability-adjusted life years (DALYs) averted subject to budget and human resource constraints in four resource-poor regions. Data were obtained from regional reports and surveys from the World Health Organization, international databases, the published literature, and expert opinion. With only a budget constraint, interventions for depression and iron deficiency anemia were packaged with cervical cancer screening, while the more costly breast cancer and cardiovascular disease interventions were not. Including personnel constraints resulted in shifting of interventions included in the package, not only across diseases but also between low- and high-intensity intervention options within diseases.
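As a hedged, minimal sketch of the selection logic (this is not the authors' model: the disease and option names and all numbers are illustrative assumptions, and exhaustive enumeration stands in for a true integer-programming solver), the package choice amounts to picking at most one intervention option per condition to maximize DALYs averted within the budget and personnel limits:

# Hedged sketch of package selection under budget and personnel constraints.
# All figures are illustrative assumptions, not study inputs; a real analysis
# would use an IP solver with costs and benefits derived from state-transition models.
from itertools import product

# disease: {option: (dalys_averted, cost, staff_minutes)}  -- hypothetical values
options = {
    "cervical_cancer": {"HPV_test": (4.0, 12.0, 20), "simple_screen": (3.0, 4.0, 10)},
    "depression": {"treat": (2.0, 6.0, 15)},
    "iron_deficiency_anemia": {"screen_and_treat": (1.5, 2.0, 5)},
    "breast_cancer": {"clinical_exam": (1.0, 8.0, 15)},
}
BUDGET = 20.0        # per-visit budget (hypothetical units)
STAFF_MINUTES = 40   # available personnel time per visit (hypothetical)

best_dalys, best_package = 0.0, None
diseases = list(options)
# Enumerate "skip" or one option per disease and keep the best feasible package.
for choice in product(*[[None] + list(options[d]) for d in diseases]):
    dalys = cost = minutes = 0.0
    for disease, opt in zip(diseases, choice):
        if opt is not None:
            d, c, m = options[disease][opt]
            dalys, cost, minutes = dalys + d, cost + c, minutes + m
    if cost <= BUDGET and minutes <= STAFF_MINUTES and dalys > best_dalys:
        best_dalys, best_package = dalys, dict(zip(diseases, choice))

print(best_dalys, best_package)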
Conclusions
The results of our example suggest several key themes: Packaging other interventions during a one-time visit has the potential to increase health gains; the shortage of personnel represents a real-world constraint that can impact the optimal package of services; and the shortage of different types of personnel may influence the contents of the package of services. Our methods provide a general framework to enhance a decision maker's ability to simultaneously consider costs, benefits, and important nonmonetary constraints. We encourage analysts working on real-world problems to shift from considering costs and benefits of interventions for a single disease to exploring what synergies might be achievable by thinking across disease burdens.
Jane Kim and colleagues analyzed the possible ways that multiple health interventions might be packaged together during a single visit, taking into account scarce financial and human resources.
Editors' Summary
Background.
Public health decision makers in developed and developing countries are exploring the idea of providing packages of health checks at specific times during a person's lifetime to detect and/or prevent life-threatening diseases such as diabetes, heart problems, and some cancers. Bundling together tests for different diseases has advantages for both health-care systems and patients. It can save time and money for both parties and, by associating health checks with life events such as childbirth, it can take advantage of a valuable opportunity to check on the overall health of individuals who may otherwise rarely visit a doctor. But money and other resources (for example, nurses to measure blood pressure) are always limited, even in wealthy countries, so decision makers have to assess the likely costs and benefits of packages of interventions before putting them into action.
Why Was This Study Done?
Recent evidence suggests that women in developing countries would benefit from a once-in-a-lifetime screen for cervical cancer, a leading cause of cancer death for this population. If such a screening strategy for cervical cancer were introduced, it might provide a good opportunity to offer women other health checks, but it is unclear which interventions should be packaged together. In this study, the researchers have developed an analytic framework to identify an optimal package of health services to offer to women attending a clinic for their lifetime cervical cancer screen. Their model takes into account monetary limitations and possible shortages in trained personnel to do the health checks, and balances these constraints against the likely health benefits for the women.
What Did the Researchers Do and Find?
The researchers developed a “mathematical programming” model to identify an optimal package of health services to be provided during a single visit. They then used their model to estimate the average costs and health outcomes per woman of various combinations of health interventions for 35- to 40-year-old women living in four regions of the world with high adult death rates. The researchers chose breast cancer, cardiovascular disease, depression, anemia caused by iron deficiency, and sexually transmitted diseases as health conditions to be checked in addition to cervical cancer during the single visit. They considered two ways—one cheap in terms of money and people; the other more expensive but often more effective—of checking for or dealing with each potential health problem. When they set a realistic budgetary constraint (based on the annual health budget of the poorest countries and a single health check per woman in the two decades following her reproductive years), the optimal health package generated by the model for all four regions included cervical cancer screening done by testing for human papillomavirus (an effective but complex test), treatment for depression, and screening or treatment for anemia. When a 50% shortage in general (for example, nurses) and specialized (for example, doctors) personnel time was also included, the health benefits of the package were maximized by using a simpler test for cervical cancer and by treating anemia but not depression; this freed up resources in some regions to screen for breast cancer or cardiovascular disease.
What Do These Findings Mean?
The model described by the researchers provides a way to explore the potential advantages of delivering a package of health interventions to individuals in a single visit. Like all mathematical models, its conclusions rely heavily on the data used in its construction. Indeed, the researchers stress that, because they did not have full data on the effectiveness of each intervention and made many other assumptions, their results on their own cannot be used to make policy decisions. Nevertheless, their results clearly show that the packaging of multiple health services during a single visit has great potential to maximize health gains, provided the right interventions are chosen. Most importantly, their analysis shows that in the real world the shortage of personnel, which has been ignored in previous analyses even though it is a major problem in many developing countries, will affect which health conditions and specific interventions should be bundled together to provide the greatest impact on public health.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030434.g001.
The World Health Organization has information on choosing cost-effective health interventions and on human resources for health
The American Cancer Society offers patient information on cervical cancer
The Alliance for Cervical Cancer Prevention includes information about cervical cancer prevention programs in developing countries
doi:10.1371/journal.pmed.0030434
PMCID: PMC1635742  PMID: 17105337
12.  Health-related quality of life of child and adolescent retinoblastoma survivors in the Netherlands 
Background
To assess health-related quality of life (HRQoL) in children (8–11 years) and adolescents (12–18 years) who survived retinoblastoma (RB), by means of the KIDSCREEN self-report questionnaire and the proxy-report version.
Methods
This population-based cross-sectional study (participation rate 70%) involved 65 RB survivors (8–18 years) and their parents. Child/adolescents' and parents' perception of their youth's HRQoL was assessed using the KIDSCREEN, and the results were compared with Dutch reference data. Relations with gender, age, marital status of the parents, and visual acuity were analyzed.
Results
RB survivors reported better HRQoL than did the Dutch reference group on the dimensions "moods and emotions" and "autonomy". Increased ratings of HRQoL in RB survivors were seen mainly in the perceptions of the younger children and adolescent girls. RB survivors with normal visual acuity scored higher on "physical well-being" than visually impaired survivors. Age was negatively associated with the dimensions "psychological well-being", "self-perception" (according to the child and parent reports) and "parent relations and home life" (according to the child). "Self-perception" was also negatively associated with visual acuity (according to the child). Only parents of young boys surviving RB reported lower scores on "autonomy" than the reference group, and parents of RB survivors with low visual acuity or blindness reported higher scores on "autonomy" than parents of visually unimpaired survivors. Survivors' and parents' perceptions correlated poorly on all HRQoL dimensions.
Conclusion
RB survivors reported a very good HRQoL compared with the Dutch reference group. The perceptions related to HRQoL differ substantially between parents and their children, i.e. parents judge the HRQoL of their child to be relatively poorer. Although the results are reassuring, additional factors of HRQoL that may have more specific relevance, such as psychological factors or coping skills, should be explored.
doi:10.1186/1477-7525-5-65
PMCID: PMC2219958  PMID: 18053178
13.  The course of fatigue and its correlates in colorectal cancer survivors: a prospective cohort study of the PROFILES registry 
Supportive Care in Cancer  2015;23(11):3361-3371.
Purpose
Colorectal cancer (CRC) survivors who remain fatigued during long-term follow-up are at risk for worse health outcomes and need relevant interventions most. The aim of this study is to prospectively assess cancer-related fatigue (CRF) and four categories of CRF correlates (clinical characteristics, demographic characteristics, behavior/well-being, functional status).
Methods
CRC survivors diagnosed between 2000 and 2009, as registered in the population-based Eindhoven Cancer Registry, completed the Fatigue Assessment Scale at three annual time points. Linear mixed models were used to assess the course of CRF and identify its correlates.
Results
CRF levels were relatively stable over time. Being female, young (≤65 years of age), and single; having a low educational level; treatment with chemotherapy; and having one or more comorbid conditions were associated with higher CRF scores. Years since diagnosis, radiotherapy, and disease stage were not related to CRF over time.
Significant between- and within-subject effects were found for all well-being factors (social, emotional, and cognitive functioning, and global quality of life), symptoms (anxiety, depression, pain, and insomnia), and functional status (physical and role functioning, physical activity levels) in relation to CRF.
The differences in CRF levels could, for a large part, be attributed to differences in behavior/well-being (59%), functional status (37%), and, to a lesser extent, to sociodemographic (4%) and clinical characteristics (8%).
Conclusion
This study showed that sociodemographic and clinical factors were associated with CRF levels over time among CRC survivors; however, behavior/well-being and functional status explained a larger part of the variance in levels of CRF.
doi:10.1007/s00520-015-2802-x
PMCID: PMC4584107  PMID: 26123601
Behavior; Cancer; Fatigue; Functional status; Survivorship; Well-being
14.  Neurocognitive Status in Long-Term Survivors of Childhood CNS Malignancies: A Report from the Childhood Cancer Survivor Study 
Neuropsychology  2009;23(6):705-717.
Background
Among survivors of childhood cancer, those with Central Nervous System (CNS) malignancies have been found to be at greatest risk for neuropsychological dysfunction in the first few years following diagnosis and treatment. This study follows survivors to adulthood to assess the long term impact of childhood CNS malignancy and its treatment on neurocognitive functioning.
Participants & Methods
As part of the Childhood Cancer Survivor Study (CCSS), 802 survivors of childhood CNS malignancy, 5937 survivors of non-CNS malignancy and 382 siblings without cancer completed a 25 item Neurocognitive Questionnaire (CCSS-NCQ) at least 16 years post cancer diagnosis assessing task efficiency, emotional regulation, organizational skills and memory. Neurocognitive functioning in survivors of CNS malignancy was compared to that of non-CNS malignancy survivors and a sibling cohort. Within the group of CNS malignancy survivors, multiple linear regression was used to assess the contribution of demographic, illness and treatment variables to reported neurocognitive functioning and the relationship of reported neurocognitive functioning to educational, employment and income status.
Results
Survivors of CNS malignancy reported significantly greater neurocognitive impairment on all factors assessed by the CCSS-NCQ than non-CNS cancer survivors or siblings (p<.01), with mean T scores of CNS malignancy survivors substantially more impaired than those of the sibling cohort (p<.001), with a large effect size for Task Efficiency (1.16) and a medium effect size for Memory (.68). Within the CNS malignancy group, medical complications, including hearing deficits, paralysis and cerebrovascular incidents, resulted in a greater likelihood of reported deficits on all of the CCSS-NCQ factors, with generally small effect sizes (.22-.50). Total brain irradiation predicted greater impairment on Task Efficiency and Memory (effect sizes: .65 and .63, respectively), as did partial brain irradiation, with smaller effect sizes (.49 and .43, respectively). Ventriculoperitoneal (VP) shunt placement was associated with small deficits on the same scales (effect sizes: Task Efficiency .26, Memory .32). Female gender predicted a greater likelihood of impaired scores on 2 scales, with small effect sizes (Task Efficiency .38, Emotional Regulation .45), while diagnosis before age 2 years resulted in a lower likelihood of reported impairment on the Memory factor, with a moderate effect size (.64). CNS malignancy survivors with more impaired CCSS-NCQ scores demonstrated significantly lower educational attainment (p<.01), less household income (p<.001) and less full time employment (p<.001).
Conclusions
Survivors of childhood CNS malignancy are at significant risk for impairment in neurocognitive functioning in adulthood, particularly if they have received cranial radiation, had a VP shunt placed, suffered a cerebrovascular incident or are left with hearing or motor impairments. Reported neurocognitive impairment adversely affected important adult outcomes, including education, employment, income and marital status.
doi:10.1037/a0016674
PMCID: PMC2796110  PMID: 19899829
Neurocognitive functioning; brain tumors; CNS malignancies; Childhood Cancer Survivor Study
15.  Treatment Outcomes and Cost-Effectiveness of Shifting Management of Stable ART Patients to Nurses in South Africa: An Observational Cohort 
PLoS Medicine  2011;8(7):e1001055.
Lawrence Long and colleagues report that “down-referring” stable HIV patients from a doctor-managed, hospital-based ART clinic to a nurse-managed primary health facility provides good health outcomes and cost-effective treatment for patients.
Background
To address human resource and infrastructure shortages, resource-constrained countries are being encouraged to shift HIV care to lesser trained care providers and lower level health care facilities. This study evaluated the cost-effectiveness of down-referring stable antiretroviral therapy (ART) patients from a doctor-managed, hospital-based ART clinic to a nurse-managed primary health care facility in Johannesburg, South Africa.
Methods and Findings
Criteria for down-referral were stable ART (≥11 mo), undetectable viral load within the previous 10 mo, CD4>200 cells/mm3, <5% weight loss over the last three visits, and no opportunistic infections. All patients down-referred from the treatment-initiation site to the down-referral site between 1 February 2008 and 1 January 2009 were compared to a matched sample of patients eligible for down-referral but not down-referred. Outcomes were assigned based on vital and health status 12 mo after down-referral eligibility and the average cost per outcome estimated from patient medical record data.
The down-referral site (n = 712) experienced less death and loss to follow up than the treatment-initiation site (n = 2,136) (1.7% versus 6.2%, relative risk = 0.27, 95% CI 0.15–0.49). The average cost per patient-year for those in care and responding at 12 mo was US$492 for down-referred patients and US$551 for patients remaining at the treatment-initiation site (p<0.0001), a savings of 11%. Down-referral was the cost-effective strategy for eligible patients.
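As a small worked check of the headline comparison (the event counts below are back-calculated from the percentages and sample sizes reported above, so they are approximations rather than the authors' exact figures), the relative risk and its Wald confidence interval can be reproduced as follows:

# Hedged sketch: relative risk of death or loss to follow-up, down-referral site
# versus treatment-initiation site, from approximate counts (assumed, not exact).
import math

a, n1 = 12, 712      # ~1.7% at the down-referral site
b, n2 = 132, 2136    # ~6.2% at the treatment-initiation site

rr = (a / n1) / (b / n2)
se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)   # Wald SE of log(RR)
lo, hi = (math.exp(math.log(rr) + z * se_log_rr) for z in (-1.96, 1.96))
print(f"RR = {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # roughly 0.27 (0.15-0.49)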
Conclusions
Twelve-month outcomes of stable ART patients who are down-referred to a primary health clinic are as good as, or better than, the outcomes of similar patients who are maintained at a hospital-based ART clinic. The cost of treatment with down-referral is lower across all outcomes and would save 11% for patients who remain in care and respond to treatment. These results suggest that this strategy would increase treatment capacity and conserve resources without compromising patient outcomes.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
AIDS has killed more than 25 million people since 1981, and about 33 million people are now infected with HIV, the virus that causes AIDS. Because HIV destroys immune system cells, which leaves infected individuals susceptible to other infections, early in the AIDS epidemic, most HIV-infected people died within ten years of infection. Then, in 1996, antiretroviral therapy (ART), which can keep HIV in check for many years, became available. For people living in developed countries, HIV infection became a chronic condition, but people in developing countries were not so lucky—ART was prohibitively expensive and so a diagnosis of HIV infection remained a death sentence in many regions of the world. In 2003, this situation was declared a global health emergency, and governments, international agencies, and funding bodies began to implement plans to increase ART coverage in developing countries. As a result, nowadays, more than a third of people in low- and middle-income countries who need ART are receiving it.
Why Was This Study Done?
Unfortunately, shortages of human resources in developing countries are impeding progress toward universal ART coverage. In sub-Saharan Africa, for example, where two-thirds of all HIV-positive people live, there are too few doctors to supervise all the ART that is required. Various organizations are therefore encouraging a shift of clinical care responsibilities for people receiving ART from doctors to less highly trained, less expensive, and more numerous members of the clinical workforce. Thus, in South Africa, plans are underway to reduce the role of hospital doctors in ART and to increase the role of primary health clinic nurses. One specific strategy involves “down-referring” patients whose HIV infection is under control (“stable ART patients”) from a doctor-managed, hospital-based ART clinic to a nurse-managed primary health care facility. In this observational study, the researchers investigate the effect of this strategy on treatment outcomes and costs by retrospectively analyzing data collected from a cohort (group) of adult patients initially treated by doctors at the Themba Lethu Clinic in Johannesburg and then down-referred to a nearby primary health clinic where nurses supervised their treatment.
What Did the Researchers Do and Find?
Patients attending the hospital-based ART clinic were invited to transfer to the down-referral site if they had been on ART for at least 11 months and met criteria that indicated that ART was controlling their HIV infection. Each of the 712 stable ART patients who agreed to be down-referred to the primary health clinic was matched to three patients eligible for down-referral but not down-referred (2,136 patients), and clinical outcomes and costs in the patient groups were compared 12 months after down-referral eligibility. At this time point, 1.7% of the down-referred patients had died or had been lost to follow up compared to 6.2% of the patients who continued to receive hospital-based ART. The average cost per patient-year for those in care and responding at 12 months was US$492 for down-referred patients but US$551 for patients remaining at the hospital. Finally, the down-referral site spent US$509 to produce a patient who was in care and responding one year after down-referral on average, whereas the hospital spent US$602 for each responding patient. Thus, the down-referral strategy (nurse-managed care) was more cost-effective than continued hospital treatment (doctor-managed care).
What Do These Findings Mean?
These findings indicate that, at least for this pair of study sites, the 12-month outcomes of stable ART patients who were down-referred to a primary health clinic were as good as or better than the outcomes of similar patients who remained at a hospital-based ART clinic. Moreover, the down-referral strategy saved 11% of costs for patients who remained in care and responded to treatment and appeared to be cost-effective, although additional studies are needed to confirm this last finding. Because this is an observational study (that is, patients eligible for down-referral were not randomly assigned to hospital or primary care facility treatment), it is possible that some unknown factor was responsible for the difference in outcomes between the two patient groups. Nevertheless, these results suggest that the down-referral strategy tested in this study could increase ART capacity and conserve resources without compromising patient outcomes in South Africa and possibly in other resource-limited settings.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001055.
This study is further discussed in a PLoS Medicine Perspective by Ford and Mills
The US National Institute of Allergy and Infectious Diseases provides information on HIV infection and AIDS
HIV InSite has comprehensive information on HIV/AIDS
Information is available from Avert, an international AIDS charity, on many aspects of HIV/AIDS, including information on HIV and AIDS in Africa and on universal access to AIDS treatment (in English and Spanish)
The World Health Organization provides information about universal access to AIDS treatment, including its 2010 progress report (in English, French and Spanish)
Right to Care, a non-profit organization that aims to deliver and support quality clinical services in Southern Africa for the prevention, treatment, and management of HIV, provides information on down-referral
doi:10.1371/journal.pmed.1001055
PMCID: PMC3139666  PMID: 21811402
16.  Internet-Based Device-Assisted Remote Monitoring of Cardiovascular Implantable Electronic Devices 
Executive Summary
Objective
The objective of this Medical Advisory Secretariat (MAS) report was to conduct a systematic review of the available published evidence on the safety, effectiveness, and cost-effectiveness of Internet-based device-assisted remote monitoring systems (RMSs) for therapeutic cardiac implantable electronic devices (CIEDs) such as pacemakers (PMs), implantable cardioverter-defibrillators (ICDs), and cardiac resynchronization therapy (CRT) devices. The MAS evidence-based review was performed to support public financing decisions.
Clinical Need: Condition and Target Population
Sudden cardiac death (SCD) is a major cause of fatalities in developed countries. In the United States almost half a million people die of SCD annually, resulting in more deaths than stroke, lung cancer, breast cancer, and AIDS combined. In Canada each year more than 40,000 people die from a cardiovascular related cause; approximately half of these deaths are attributable to SCD.
Most cases of SCD occur in the general population typically in those without a known history of heart disease. Most SCDs are caused by cardiac arrhythmia, an abnormal heart rhythm caused by malfunctions of the heart’s electrical system. Up to half of patients with significant heart failure (HF) also have advanced conduction abnormalities.
Cardiac arrhythmias are managed by a variety of drugs, ablative procedures, and therapeutic CIEDs. The range of CIEDs includes pacemakers (PMs), implantable cardioverter-defibrillators (ICDs), and cardiac resynchronization therapy (CRT) devices. Bradycardia is the main indication for PMs and individuals at high risk for SCD are often treated by ICDs.
Heart failure (HF) is also a significant health problem and is the most frequent cause of hospitalization in those over 65 years of age. Patients with moderate to severe HF may also have cardiac arrhythmias, although the cause may be related more to heart pump or haemodynamic failure. The presence of HF, however, increases the risk of SCD five-fold, regardless of aetiology. Patients with HF who remain highly symptomatic despite optimal drug therapy are sometimes also treated with CRT devices.
With an increasing prevalence of age-related conditions such as chronic HF and the expanding indications for ICD therapy, the rate of ICD placement has been dramatically increasing. The appropriate indications for ICD placement, as well as the rate of ICD placement, are increasingly an issue. In the United States, after the introduction of expanded coverage of ICDs, a national ICD registry was created in 2005 to track these devices. A recent survey based on this national ICD registry reported that 22.5% (25,145) of patients had received a non-evidence based ICD and that these patients experienced significantly higher in-hospital mortality and post-procedural complications.
In addition to the increased ICD device placement and the upfront device costs, there is the need for lifelong follow-up or surveillance, placing a significant burden on patients and device clinics. In 2007, over 1.6 million CIEDs were implanted in Europe and the United States, which translates to over 5.5 million patient encounters per year if the recommended follow-up practices are considered. A safe and effective RMS could potentially improve the efficiency of long-term follow-up of patients and their CIEDs.
Technology
In addition to being therapeutic devices, CIEDs have extensive diagnostic abilities. All CIEDs can be interrogated and reprogrammed during an in-clinic visit using an inductive programming wand. Remote monitoring would allow patients to transmit information recorded in their devices from the comfort of their own homes. Currently most ICD devices also have the potential to be remotely monitored. Remote monitoring (RM) can be used to check system integrity, to alert on arrhythmic episodes, and potentially to replace in-clinic follow-ups and manage disease remotely. Devices cannot currently be reprogrammed remotely, although this feature is being tested in pilot settings.
Every RMS is specifically designed by a manufacturer for their cardiac implant devices. For Internet-based device-assisted RMSs, this customization includes details such as web application, multiplatform sensors, custom algorithms, programming information, and types and methods of alerting patients and/or physicians. The addition of peripherals for monitoring weight and pressure or communicating with patients through the onsite communicators also varies by manufacturer. Internet-based device-assisted RMSs for CIEDs are intended to function as a surveillance system rather than an emergency system.
Health care providers therefore need to learn each application, and as more than one application may be used at one site, multiple applications may need to be reviewed for alarms. All RMSs deliver system integrity alerting; however, some systems seem to be better geared to fast arrhythmic alerting, whereas other systems appear to be more intended for remote follow-up or supplemental remote disease management. The different RMSs may therefore have different impacts on workflow organization because of their varying frequency of interrogation and methods of alerts. The integration of these proprietary RM web-based registry systems with hospital-based electronic health record systems has so far not been commonly implemented.
Currently there are 2 general types of RMSs: those that transmit device diagnostic information automatically and without patient assistance to secure Internet-based registry systems, and those that require patient assistance to transmit information. Both systems employ the use of preprogrammed alerts that are either transmitted automatically or at regular scheduled intervals to patients and/or physicians.
The current web applications, programming, and registry systems differ greatly between the manufacturers of transmitting cardiac devices. In Canada there are currently 4 manufacturers—Medtronic Inc., Biotronik, Boston Scientific Corp., and St Jude Medical Inc.—which have regulatory approval for remote transmitting CIEDs. Remote monitoring systems are proprietary to the manufacturer of the implant device. An RMS for one device will not work with another device, and the RMS may not work with all versions of the manufacturer’s devices.
All Internet-based device-assisted RMSs have common components. The implanted device is equipped with a micro-antenna that communicates with a small external device (at bedside or wearable) commonly known as the transmitter. Transmitters are able to interrogate programmed parameters and diagnostic data stored in the patients’ implant device. The information transfer to the communicator can occur at preset time intervals with the participation of the patient (waving a wand over the device) or it can be sent automatically (wirelessly) without their participation. The encrypted data are then uploaded to an Internet-based database on a secure central server. The data processing facilities at the central database, depending on the clinical urgency, can trigger an alert for the physician(s) that can be sent via email, fax, text message, or phone. The details are also posted on the secure website for viewing by the physician (or their delegate) at their convenience.
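A vendor-neutral, purely illustrative sketch of the urgency-based alert routing described above (the urgency tiers, channel choices, and names are assumptions, not any manufacturer's API):

# Hedged sketch: routing a remote-monitoring transmission to notification
# channels by clinical urgency. Tiers and channels are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Transmission:
    patient_id: str
    device_type: str   # "PM", "ICD", or "CRT"
    finding: str       # e.g. "lead impedance out of range", "AF episode"
    urgency: str       # assumed tiers: "red", "yellow", "routine"

def route_alert(tx):
    """Return the notification channels for a transmission, by urgency."""
    if tx.urgency == "red":       # e.g. possible system-integrity failure
        return ["phone", "text message", "email"]
    if tx.urgency == "yellow":    # e.g. clinically relevant arrhythmia episode
        return ["email", "fax"]
    return ["secure web portal"]  # routine data reviewed at the next scheduled check

print(route_alert(Transmission("pt-001", "ICD", "lead impedance out of range", "red")))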
Research Questions
The research directions and specific research questions for this evidence review were as follows:
To identify the Internet-based device-assisted RMSs available for follow-up of patients with therapeutic CIEDs such as PMs, ICDs, and CRT devices.
To identify the potential risks, operational issues, or organizational issues related to Internet-based device-assisted RM for CIEDs.
To evaluate the safety, acceptability, and effectiveness of Internet-based device-assisted RMSs for CIEDs such as PMs, ICDs, and CRT devices.
To evaluate the safety, effectiveness, and cost-effectiveness of Internet-based device-assisted RMSs for CIEDs compared to usual outpatient in-office monitoring strategies.
To evaluate the resource implications or budget impact of RMSs for CIEDs in Ontario, Canada.
Research Methods
Literature Search
The review included a systematic review of published scientific literature and consultations with experts and manufacturers of all 4 approved RMSs for CIEDs in Canada. Information on CIED cardiac implant clinics was also obtained from Provincial Programs, a division within the Ministry of Health and Long-Term Care with a mandate for cardiac implant specialty care. Various administrative databases and registries were used to outline the current clinical follow-up burden of CIEDs in Ontario. The provincial population-based ICD database developed and maintained by the Institute for Clinical Evaluative Sciences (ICES) was used to review the current follow-up practices with Ontario patients implanted with ICD devices.
Search Strategy
A literature search was performed on September 21, 2010 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Network of Agencies for Health Technology Assessment (INAHTA) for studies published from 1950 to September 2010. Search alerts were generated and reviewed for additional relevant literature until December 31, 2010. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search.
Inclusion Criteria
published between 1950 and September 2010;
English-language full reports and human studies;
original reports including clinical evaluations of Internet-based device-assisted RMSs for CIEDs in clinical settings;
reports including standardized measurements on outcome events such as technical success, safety, effectiveness, cost, measures of health care utilization, morbidity, mortality, quality of life or patient satisfaction;
randomized controlled trials (RCTs), systematic reviews and meta-analyses, cohort and controlled clinical studies.
Exclusion Criteria
non-systematic reviews, letters, comments and editorials;
reports not involving standardized outcome events;
clinical reports not involving Internet-based device-assisted RM systems for CIEDs in clinical settings;
reports involving studies testing or validating algorithms without RM;
studies with small samples (<10 subjects).
Outcomes of Interest
The outcomes of interest included: technical outcomes, emergency department visits, complications, major adverse events, symptoms, hospital admissions, clinic visits (scheduled and/or unscheduled), survival, morbidity (disease progression, stroke, etc.), patient satisfaction, and quality of life.
Summary of Findings
The MAS evidence review was performed to review available evidence on Internet-based device-assisted RMSs for CIEDs published until September 2010. The search identified 6 systematic reviews, 7 randomized controlled trials, and 19 reports for 16 cohort studies—3 of these being registry-based and 4 being multi-centered. The evidence is summarized in the 3 sections that follow.
1. Effectiveness of Remote Monitoring Systems of CIEDs for Cardiac Arrhythmia and Device Functioning
In total, 15 reports on 13 cohort studies involving investigations with 4 different RMSs for CIEDs in cardiology implant clinic groups were identified in the review. The 4 RMSs were: Care Link Network® (Medtronic Inc., Minneapolis, MN, USA); Home Monitoring® (Biotronik, Berlin, Germany); House Call II® (St Jude Medical Inc., St Paul, MN, USA); and a manufacturer-independent RMS. Eight of these reports were with the Home Monitoring® RMS (12,949 patients), 3 were with the Care Link® RMS (167 patients), 1 was with the House Call II® RMS (124 patients), and 1 was with a manufacturer-independent RMS (44 patients). All of the studies, except for 2 in the United States (1 with Home Monitoring® and 1 with House Call II®), were performed in European countries.
The RMSs in the studies were evaluated with different cardiac implant device populations: ICDs only (6 studies), ICD and CRT devices (3 studies), PM and ICD and CRT devices (4 studies), and PMs only (2 studies). The patient populations were predominantly male (range, 52%–87%) in all studies, with mean ages ranging from 58 to 76 years. One study population was unique in that RMSs were evaluated for ICDs implanted solely for primary prevention in young patients (mean age, 44 years) with Brugada syndrome, an inherited condition that carries an increased risk of sudden cardiac death in young adults.
Most of the cohort studies reported on the feasibility of RMSs in clinical settings with limited follow-up. In the short follow-up periods of the studies, the majority of the events were related to detection of medical events rather than system configuration or device abnormalities. The results of the studies are summarized below:
The interrogation of devices on the web platform, for both continuous and scheduled transmissions, was significantly quicker with remote follow-up for both nurses and physicians.
In a case-control study based on a Brugada population registry, with patients followed up remotely, there were significantly fewer outpatient visits and greater detection of inappropriate shocks. One death occurred in the control group not followed remotely, and post-mortem analysis indicated early signs of lead failure prior to the event.
Two studies examined the role of RMSs in following ICD leads under regulatory advisory in a European clinical setting and noted:
– Fewer inappropriate shocks were administered in the RM group.
– Urgent in-office interrogations and surgical revisions were performed within 12 days of remote alerts.
– No signs of lead fracture were detected at in-office follow-up; all were detected at remote follow-up.
Only 1 study reported evaluating quality of life in patients followed up remotely at 3 and 6 months; no values were reported.
Patient satisfaction was evaluated in 5 cohort studies, all with short-term follow-up: 1 for the Home Monitoring® RMS, 3 for the Care Link® RMS, and 1 for the House Call II® RMS.
– Patients reported receiving a sense of security from the transmitter, a good relationship with nurses and physicians, positive implications for their health, and satisfaction with RM and organization of services.
– Although patients reported that the system was easy to implement and required less than 10 minutes to transmit information, a variable proportion of patients (range, 9%–39%) reported that they needed the assistance of a caregiver for their transmission.
– The majority of patients would recommend RM to other ICD patients.
– Patients with hearing or other physical or mental conditions hindering the use of the system were excluded from studies, but the frequency of this was not reported.
Physician satisfaction was evaluated in 3 studies, all with the Care Link® RMS:
– Physicians reported ease of use and high satisfaction with generally short-term use of the RMS.
– Physicians reported being able to address problems identified in unscheduled patient transmissions or physician-initiated transmissions remotely, and were able to handle the majority of troubleshooting calls remotely.
– Both nurses and physicians reported a high level of satisfaction with the web registry system.
2. Effectiveness of Remote Monitoring Systems in Heart Failure Patients for Cardiac Arrhythmia and Heart Failure Episodes
Remote follow-up of HF patients implanted with ICD or CRT devices, generally managed in specialized HF clinics, was evaluated in 3 cohort studies: 1 involved the Home Monitoring® RMS and 2 involved the Care Link® RMS. In these RMSs, in addition to the standard diagnostic features, the cardiac devices continuously assess other variables such as patient activity, mean heart rate, and heart rate variability. Intra-thoracic impedance, a proxy measure for lung fluid overload, was also measured in the Care Link® studies. The overall diagnostic performance of these measures could not be evaluated, as the information was not reported for patients who did not experience intra-thoracic impedance threshold crossings or did not undergo interventions. The studies provided descriptive information on transmissions and alerts in patients experiencing high morbidity and hospitalization over the short study periods.
3. Comparative Effectiveness of Remote Monitoring Systems for CIEDs
Seven RCTs were identified evaluating RMSs for CIEDs: 2 were for PMs (1276 patients) and 5 were for ICD/CRT devices (3733 patients). Studies performed in the clinical setting in the United States involved both the Care Link® RMS and the Home Monitoring® RMS, whereas all studies performed in European countries involved only the Home Monitoring® RMS.
3A. Randomized Controlled Trials of Remote Monitoring Systems for Pacemakers
Two trials, both multicenter RCTs, were conducted in different countries with different RMSs and study objectives. The PREFER trial was a large trial (897 patients) performed in the United States examining the ability of Care Link®, an Internet-based remote PM interrogation system, to detect clinically actionable events (CAEs) sooner than the current in-office follow-up supplemented with transtelephonic monitoring transmissions, a limited form of remote device interrogation. The trial results are summarized below:
In the 375-day mean follow-up, 382 patients were identified with at least 1 CAE—111 patients in the control arm and 271 in the remote arm.
The event rate detected per patient for every type of CAE, except for loss of atrial capture, was higher in the remote arm than the control arm.
The median time to first detection of CAEs (4.9 vs. 6.3 months) was significantly shorter in the RMS group compared to the control group (P < 0.0001).
Additionally, only 2% (3/190) of the CAEs in the control arm were detected during a transtelephonic monitoring transmission (the rest were detected at in-office follow-ups), whereas 66% (446/676) of the CAEs in the remote arm were detected during remote interrogation.
The second study, the OEDIPE trial, was a smaller trial (379 patients) performed in France evaluating the ability of the Home Monitoring® RMS to shorten PM post-operative hospitalization while maintaining the safety achieved with conventional management involving longer hospital stays.
Implementation and operationalization of the RMS was reported to be successful in 91% (346/379) of the patients and represented 8144 transmissions.
In the RM group 6.5% of patients failed to send messages (10 due to improper use of the transmitter, 2 with unmanageable stress). Of the 172 patients transmitting, 108 patients sent a total of 167 warnings during the trial, with a greater proportion of warnings being attributed to medical rather than technical causes.
Forty percent of patients had no warning message transmission, and among these, 6 patients experienced a major adverse event and 1 patient experienced a non-major adverse event. Of the 6 patients having a major adverse event, 5 contacted their physician.
The mean medical reaction time was faster in the RM group (6.5 ± 7.6 days vs. 11.4 ± 11.6 days).
The mean duration of hospitalization was significantly shorter (P < 0.001) for the RM group than the control group (3.2 ± 3.2 days vs. 4.8 ± 3.7 days).
Quality of life estimates by the SF-36 questionnaire were similar for the 2 groups at 1-month follow-up.
3B. Randomized Controlled Trials Evaluating Remote Monitoring Systems for ICD or CRT Devices
The 5 studies evaluating the impact of RMSs with ICD/CRT devices were conducted in the United States and in European countries and involved 2 RMSs—Care Link® and Home Monitoring®. The objectives of the trials varied, and 3 of the trials were smaller pilot investigations.
The first of the smaller studies (151 patients) evaluated patient satisfaction, achievement of patient outcomes, and the cost-effectiveness of the Care Link® RMS compared to quarterly in-office device interrogations with 1-year follow-up.
Individual outcomes such as hospitalizations, emergency department visits, and unscheduled clinic visits were not significantly different between the study groups.
Except for a significantly higher detection of atrial fibrillation in the RM group, data on ICD detection and therapy were similar in the study groups.
Health-related quality of life evaluated by the EuroQoL at 6-month or 12-month follow-up was not different between study groups.
Patients were more satisfied with their ICD care in the clinic follow-up group than in the remote follow-up group at 6-month follow-up, but were equally satisfied at 12-month follow-up.
The second small pilot trial (20 patients) examined the impact of RM follow-up with the House Call II® system on work schedules and cost savings in patients randomized to 2 study arms varying in the degree of remote follow-up.
The total time including device interrogation, transmission time, data analysis, and physician time required was significantly shorter for the RM follow-up group.
The in-clinic waiting time was eliminated for patients in the RM follow-up group.
The physician talk time was significantly reduced in the RM follow-up group (P < 0.05).
The time for the actual device interrogation did not differ in the study groups.
The third small trial (115 patients) examined the impact of RM with the Home Monitoring® system compared to scheduled trimonthly in-clinic visits on the number of unplanned visits, total costs, health-related quality of life (SF-36), and overall mortality.
There was a 63.2% reduction in in-office visits in the RM group.
Hospitalizations or overall mortality (values not stated) were not significantly different between the study groups.
Patient-induced visits were higher in the RM group than the in-clinic follow-up group.
The TRUST Trial
The TRUST trial was a large multicenter RCT conducted at 102 centers in the United States involving the Home Monitoring® RMS for ICD devices for 1450 patients. The primary objectives of the trial were to determine if remote follow-up could be safely substituted for in-office clinic follow-up (3 in-office visits replaced) and still enable earlier physician detection of clinically actionable events.
Adherence to the protocol follow-up schedule was significantly higher in the RM group than the in-office follow-up group (93.5% vs. 88.7%, P < 0.001).
Actionability of trimonthly scheduled checks was low (6.6%) in both study groups. Overall, actionable causes were reprogramming (76.2%), medication changes (24.8%), and lead/system revisions (4%), and these were not different between the 2 study groups.
The overall mean number of in-clinic and hospital visits was significantly lower in the RM group than the in-office follow-up group (2.1 per patient-year vs. 3.8 per patient-year, P < 0.001), representing a 45% visit reduction at 12 months.
The median time from onset of first arrhythmia to physician evaluation was significantly shorter (P < 0.001) in the RM group than in the in-office follow-up group for all arrhythmias (1 day vs. 35.5 days).
The median time to detect clinically asymptomatic arrhythmia events—atrial fibrillation (AF), ventricular fibrillation (VF), ventricular tachycardia (VT), and supra-ventricular tachycardia (SVT)—was also significantly shorter (P < 0.001) in the RM group compared to the in-office follow-up group (1 day vs. 41.5 days) and was significantly quicker for each of the clinical arrhythmia events—AF (5.5 days vs. 40 days), VT (1 day vs. 28 days), VF (1 day vs. 36 days), and SVT (2 days vs. 39 days).
System-related problems occurred infrequently in both groups—in 1.5% of patients (14/908) in the RM group and in 0.7% of patients (3/432) in the in-office follow-up group.
The overall adverse event rate over 12 months was not significantly different between the 2 groups and individual adverse events were also not significantly different between the RM group and the in-office follow-up group: death (3.4% vs. 4.9%), stroke (0.3% vs. 1.2%), and surgical intervention (6.6% vs. 4.9%), respectively.
The 12-month cumulative survival was 96.4% (95% confidence interval [CI], 95.5%–97.6%) in the RM group and 94.2% (95% CI, 91.8%–96.6%) in the in-office follow-up group, and was not significantly different between the 2 groups (P = 0.174).
The CONNECT Trial
The CONNECT trial, another major multicenter RCT, involved the Care Link® RMS for ICD/CRT devices in a 15-month follow-up study of 1,997 patients at 133 sites in the United States. The primary objective of the trial was to determine whether automatically transmitted physician alerts decreased the time from the occurrence of clinically relevant events to medical decisions. The trial results are summarized below:
Of the 575 clinical alert conditions that arose in the study, 246 did not trigger an automatic physician alert. Transmission failures were related to technical issues, such as the alert not being programmed or not being reset, and/or a variety of patient factors, such as the patient not being at home or the monitor not being plugged in or set up.
The overall mean time from the clinically relevant event to the clinical decision was significantly shorter (P < 0.001) by 17.4 days in the remote follow-up group (4.6 days for 172 patients) than the in-office follow-up group (22 days for 145 patients).
– The median time to a clinical decision was shorter in the remote follow-up group than in the in-office follow-up group for an AT/AF burden greater than or equal to 12 hours (3 days vs. 24 days) and a fast ventricular rate during AT/AF greater than or equal to 120 beats per minute (4 days vs. 23 days).
Although infrequent, similar low numbers of events involving low battery and VF detection/therapy turned off were noted in both groups. More alerts, however, were noted for out-of-range lead impedance in the RM group (18 vs. 6 patients), and the time to detect these critical events was significantly shorter in the RM group (same day vs. 17 days).
Total in-office clinic visits were reduced by 38% from 6.27 visits per patient-year in the in-office follow-up group to 3.29 visits per patient-year in the remote follow-up group.
Health care utilization visits (N = 6,227) that included cardiovascular-related hospitalization, emergency department visits, and unscheduled clinic visits were not significantly higher in the remote follow-up group.
The overall mean length of hospitalization was significantly shorter (P = 0.002) for those in the remote follow-up group (3.3 days vs. 4.0 days) and was shorter both for patients with ICD (3.0 days vs. 3.6 days) and CRT (3.8 days vs. 4.7 days) implants.
The mortality rate was not significantly different between the follow-up groups for the ICDs (P = 0.31) or the CRT devices with defibrillator (P = 0.46).
Conclusions
There is limited clinical trial information on the effectiveness of RMSs for PMs. However, for RMSs for ICD devices, multiple cohort studies and 2 large multicenter RCTs demonstrated feasibility and significant reductions in in-office clinic follow-ups with RMSs in the first year after implantation. The detection rates of clinically significant events (and asymptomatic events) were higher, and the time to a clinical decision for these events was significantly shorter, in the remote follow-up groups than in the in-office follow-up groups. The earlier detection of clinical events in the remote follow-up groups, however, was not associated with lower morbidity or mortality rates over the 1-year follow-up. The substitution of almost all first-year in-office clinic follow-ups with RM was also not associated with increased health care utilization, such as emergency department visits or hospitalizations.
The follow-up in the trials was generally short-term, up to 1 year, and provided only a limited assessment of potential longer term device/lead integrity complications or issues. None of the studies compared the different RMSs, in particular RMSs involving patient-scheduled transmissions versus those with automatic transmissions. Patients’ acceptance of and satisfaction with RM were reported to be high, but the impact of RM on patients’ health-related quality of life, particularly the psychological aspects, was not evaluated thoroughly. Patients who are not technologically competent or who have hearing or other physical/mental impairments were identified as potentially disadvantaged by remote surveillance. Cohort studies consistently identified subgroups of patients who preferred in-office follow-up. Costs and the workflow impact on the health care system were evaluated only in a limited way, and only in European or American clinical settings.
Internet-based device-assisted RMSs represent a new approach to monitoring patients, their disease progression, and their CIEDs. Remote monitoring also has the potential to improve the current postmarket surveillance systems for evolving CIEDs and their ongoing hardware and software modifications. At this point, however, there is insufficient information to evaluate the overall impact on the health care system, although the time savings and convenience to patients and physicians associated with substituting RM for in-office follow-up are more certain. The broader issues surrounding infrastructure, impacts on existing clinical care systems, and regulatory concerns need to be considered for the implementation of Internet-based RMSs in jurisdictions with different clinical practices.
PMCID: PMC3377571  PMID: 23074419
17.  Initial validation of the Argentinean Spanish version of the PedsQL™ 4.0 Generic Core Scales in children and adolescents with chronic diseases: acceptability and comprehensibility in low-income settings 
Background
To validate the Argentinean Spanish version of the PedsQL™ 4.0 Generic Core Scales in Argentinean children and adolescents with chronic conditions and to assess the impact of socio-demographic characteristics on the instrument's comprehensibility and acceptability. Reliability, known-groups validity, and convergent validity were tested.
Methods
Consecutive sample of 287 children with chronic conditions and 105 healthy children, ages 2–18, and their parents. Chronically ill children (1) were attending outpatient clinics and (2) had one of the following diagnoses: stem cell transplant, chronic obstructive pulmonary disease, HIV/AIDS, cancer, end-stage renal disease, or complex congenital cardiopathy. Patients and adult proxies completed the PedsQL™ 4.0 and an overall health status assessment. Physicians were asked to rate the degree of health status impairment.
Results
The PedsQL™ 4.0 was feasible (only 9 children, all 5 to 7 years old, could not complete the instrument), easy to administer, completed without, or with minimal, help by most children and parents, and required a brief administration time (average 5–6 minutes). People living below the poverty line and/or with low literacy needed more help to complete the instrument. Cronbach's alpha internal consistency values for the total and subscale scores exceeded 0.70 for self-reports of children over 8 years old and parent reports of children over 5 years of age. Reliability of proxy reports for 2–4 year-olds was low but improved when school items were excluded. Internal consistency for 5–7 year-olds was low (α range = 0.28–0.76). Construct validity was good. Child self-report and parent proxy-report PedsQL™ 4.0 scores were moderately but significantly correlated (ρ = 0.39, p < 0.0001), and both significantly correlated with physicians' assessments of health impairment and with child self-reported overall health status. The PedsQL™ 4.0 discriminated between healthy and chronically ill children (72.72 vs. 66.87 for healthy and ill children, respectively, p = 0.01), between different chronic health conditions, and between children of higher and lower socioeconomic status.
Conclusion
Results suggest that the Argentinean Spanish PedsQL™ 4.0 is suitable for research purposes in the public health setting for children over 8 years old and parents of children over 5 years old. People with low income and low literacy need help to complete the instrument. Steps to expand the use of the Argentinean Spanish PedsQL™ 4.0 include an alternative approach to scoring for the 2–4 year-olds, further understanding of how to increase reliability for the 5–7 year-olds self-report, and confirmation of other aspects of validity.
doi:10.1186/1477-7525-6-59
PMCID: PMC2533649  PMID: 18687134
18.  ‘Will I be able to have a baby?’ Results from online focus group discussions with childhood cancer survivors in Sweden 
Human Reproduction (Oxford, England)  2014;29(12):2704-2711.
STUDY QUESTION
What do adolescent and young adult survivors of childhood cancer think about the risk of being infertile?
SUMMARY ANSWER
The potential infertility, as well as the experience of having had cancer, affects well-being, intimate relationships and the desire to have children in the future.
WHAT IS KNOWN ALREADY
Many childhood cancer survivors want to have children and worry about possible infertility.
STUDY DESIGN, SIZE, DURATION
For this qualitative study with a cross-sectional design, data were collected through 39 online focus group discussions during 2013.
PARTICIPANTS/MATERIALS, SETTING, METHODS
Cancer survivors previously treated for selected diagnoses were identified from The Swedish Childhood Cancer Register (16–24 years old at inclusion, ≥5 years after diagnosis) and approached regarding study participation. Online focus group discussions with participants of both sexes (n = 133) were conducted on a chat platform in real time. Texts from the group discussions were analysed using qualitative content analysis.
MAIN RESULTS AND THE ROLE OF CHANCE
The analysis resulted in the main category Is it possible to have a baby? including five generic categories: Risk of infertility affects well-being, Dealing with possible infertility, Disclosure of possible infertility is a challenge, Issues related to heredity and Parenthood may be affected. The risk of infertility was described as having a negative impact on well-being and intimate relationships. Furthermore, the participants described hesitation about becoming a parent due to perceived or anticipated physical and psychological consequences of having had cancer.
LIMITATIONS, REASONS FOR CAUTION
Given the sensitive topic of the study, the response rate (36%) is considered acceptable. The sample included participants who varied with regard to received fertility-related information, current fertility status and concerns related to the risk of being infertile.
WIDER IMPLICATIONS OF THE FINDINGS
The results may be transferred to similar contexts with other groups of patients of childbearing age and a risk of impaired fertility due to disease. The findings imply that achieving parenthood, whether or not with biological children, is an area that needs to be addressed by health care services.
STUDY FUNDING/COMPETING INTEREST(S)
The study was financially supported by The Cancer Research Foundations of Radiumhemmet, The Swedish Childhood Cancer Foundation and the Doctoral School in Health Care Science, Karolinska Institutet. The authors report no conflicts of interest.
doi:10.1093/humrep/deu280
PMCID: PMC4227581  PMID: 25344069
childhood cancer; infertility; focus group discussions; adolescents and young adults; qualitative research
19.  Neurocognitive Functioning in Adult Survivors of Childhood Non-Central Nervous System Cancers 
Background
We sought to measure self-reported neurocognitive functioning among survivors of non-central nervous system (CNS) childhood cancers, overall and compared with a sibling cohort, and to identify factors associated with worse functioning.
Methods
In a retrospective cohort study, 5937 adult survivors of non-CNS cancers and 382 siblings completed a validated neuropsychological instrument with subscales in task efficiency, emotional regulation, organization, and memory. Scores were converted to T scores; scores in the worst 10% of siblings’ scores (ie, T score ≥63) were defined as impaired. Non-CNS cancer survivors and siblings were compared with multivariable linear regression and log-binomial regression. Among survivors, log-binomial models assessed the association of patient and treatment factors with neurocognitive dysfunction. All statistical tests were two-sided.
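For readers unfamiliar with the impairment rule used here, the short Python sketch below illustrates the idea: raw subscale scores are converted to T scores, and a score of 63 or above (the worst 10% of sibling scores) is flagged as impaired. The T-score convention (mean 50, SD 10) and the example numbers are assumptions for illustration only, not the study's actual norming data.

# Illustrative only: the T-score reference values and example score are assumed,
# not taken from the study's norming data.
def to_t_score(raw, reference_mean, reference_sd):
    # Standard T-score convention: mean 50, standard deviation 10.
    return 50 + 10 * (raw - reference_mean) / reference_sd

def is_impaired(t_score, cutoff=63):
    # Scores in the worst 10% of the sibling distribution (T >= 63) are flagged as impaired.
    return t_score >= cutoff

raw_task_efficiency = 32.0  # hypothetical raw score; higher means more reported problems
t = to_t_score(raw_task_efficiency, reference_mean=25.0, reference_sd=5.0)
print(round(t, 1), is_impaired(t))  # 64.0 True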
Results
Non-CNS cancer survivors had mean test scores that were similar to or slightly worse (<0.5 standard deviation) than siblings' scores for all four subscales. However, the frequencies of impaired survivors were approximately 50% higher than among siblings for task efficiency (13.0% of survivors vs 7.3% of siblings), memory (12.5% vs 7.6%), and emotional regulation (21.2% vs 14.4%). Impaired task efficiency was most often identified in patients with acute lymphoblastic leukemia who received cranial radiation therapy (18.1% with impairment), myeloid leukemia who received cranial radiation therapy (21.2%), and non-Hodgkin lymphoma (13.9%). In adjusted analysis, diagnosis age of younger than 6 years, female sex, cranial radiation therapy, and hearing impairment were associated with impairment.
Conclusion
A statistically and clinically significantly higher percentage of self-reported neurocognitive impairment was found among survivors of non-CNS cancers than among siblings.
doi:10.1093/jnci/djq156
PMCID: PMC2886093  PMID: 20458059
20.  The role of the general health questionnaire in general practice consultations. 
The British Journal of General Practice  1998;48(434):1565-1569.
BACKGROUND: The patient self-rating questionnaire is commonly used as a research tool to identify patients with 'unrecognized' depression. There is no evidence to support its use as a clinical tool in general practice. AIM: To determine whether use of the 30-item general health questionnaire (GHQ) is a practical means of increasing identification of 'new' episodes of emotional distress among patients consulting their general practitioner (GP). METHOD: A randomized controlled trial was carried out in a Scottish new town practice with eight partners. In the waiting room, 1912 patients aged over 14 years and consulting over a 10-month period attempted to complete the GHQ. The 'clinical judgement' group posted the questionnaire into a box then attended the doctor as normal. The 'screened' group presented the questionnaire to the doctor. After the consultation, the doctor completed an assessment questionnaire. The main outcome measures were GHQ scores and doctors' assessments of mental health. RESULTS: In total, 1589 patients were eligible to participate. However, 207 patients in the screened group were excluded because the doctor did not look at the questionnaire. The clinical judgement group (59.7% patients) and the screened group (40.3%) were compared. Although the doctors' diagnoses of distress were low in the clinical judgement group (8.1%), they were significantly greater in the screened group (13.9%) where the diagnosis of depression was doubled. The percentage of patients scoring greater than or equal to 9 (GHQ+) was 21.5% and 21.0% respectively. The level of agreement between the doctors' diagnoses of distress and the questionnaires scoring GHQ+ rose from 19% in the clinical judgement group to 35% in the screened group. CONCLUSIONS: The general health questionnaire used in a practice setting increases the identification of patients with emotional distress. However, the use made of the questionnaires in the screened group raises questions of doctor and patient acceptability.
PMCID: PMC1313218  PMID: 9830180
21.  Estimates of Outcomes Up to Ten Years after Stroke: Analysis from the Prospective South London Stroke Register 
PLoS Medicine  2011;8(5):e1001033.
Charles Wolfe and colleagues collected data from the South London Stroke Register on 3,373 first strokes registered between 1995 and 2006 and showed that between 20% and 30% of survivors have poor outcomes up to 10 years after stroke.
Background
Although stroke is acknowledged as a long-term condition, population estimates of outcomes longer term are lacking. Such estimates would be useful for planning health services and developing research that might ultimately improve outcomes. This burden of disease study provides population-based estimates of outcomes with a focus on disability, cognition, and psychological outcomes up to 10 y after initial stroke event in a multi-ethnic European population.
Methods and Findings
Data were collected from the population-based South London Stroke Register, a prospective population-based register documenting all first in a lifetime strokes since 1 January 1995 in a multi-ethnic inner city population. The outcomes assessed are reported as estimates of need and included disability (Barthel Index <15), inactivity (Frenchay Activities Index <15), cognitive impairment (Abbreviated Mental Test < 8 or Mini-Mental State Exam <24), anxiety and depression (Hospital Anxiety and Depression Scale >10), and mental and physical domain scores of the Medical Outcomes Study 12-item short form (SF-12) health survey. Estimates were stratified by age, gender, and ethnicity, and age-adjusted using the standard European population. Plots of outcome estimates over time were constructed to examine temporal trends and sociodemographic differences. Between 1995 and 2006, 3,373 first-ever strokes were registered: 20%–30% of survivors had a poor outcome over 10 y of follow-up. The highest rate of disability was observed 7 d after stroke and remained at around 110 per 1,000 stroke survivors from 3 mo to 10 y. Rates of inactivity and cognitive impairment both declined up to 1 y (280/1,000 and 180/1,000 survivors, respectively); thereafter rates of inactivity remained stable till year eight, then increased, whereas rates of cognitive impairment fluctuated till year eight, then increased. Anxiety and depression showed some fluctuation over time, with a rate of 350 and 310 per 1,000 stroke survivors, respectively. SF-12 scores showed little variation from 3 mo to 10 y after stroke. Inactivity was higher in males at all time points, and in white compared to black stroke survivors, although black survivors reported better outcomes in the SF-12 physical domain. No other major differences were observed by gender or ethnicity. Increased age was associated with higher rates of disability, inactivity, and cognitive impairment.
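The preset cutoffs listed above (Barthel Index <15, Frenchay Activities Index <15, Abbreviated Mental Test <8 or Mini-Mental State Exam <24, Hospital Anxiety and Depression Scale >10) lend themselves to a simple classification rule; the Python sketch below applies them to a single hypothetical assessment. The field names and example values are assumptions for illustration, not drawn from the register.

# Illustrative sketch of the preset outcome cutoffs; the example assessment is made up.
def poor_outcomes(a):
    return {
        "disability": a["barthel"] < 15,
        "inactivity": a["frenchay"] < 15,
        "cognitive_impairment": a["amt"] < 8 or a["mmse"] < 24,
        "anxiety": a["hads_anxiety"] > 10,
        "depression": a["hads_depression"] > 10,
    }

example = {"barthel": 12, "frenchay": 20, "amt": 9, "mmse": 22,
           "hads_anxiety": 8, "hads_depression": 11}
print(poor_outcomes(example))
# {'disability': True, 'inactivity': False, 'cognitive_impairment': True,
#  'anxiety': False, 'depression': True}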
Conclusions
Between 20% and 30% of stroke survivors have a poor range of outcomes up to 10 y after stroke. Such epidemiological data demonstrate the sociodemographic groups that are most affected longer term and should be used to develop longer term management strategies that reduce the significant poor outcomes of this group, for whom effective interventions are currently elusive.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Every year, 15 million people have a stroke. About 5 million of these people die within a few days, and another 5 million are left disabled. Stroke occurs when the brain's blood supply is suddenly interrupted by a blood clot blocking a blood vessel in the brain (ischemic stroke, the commonest type of stroke) or by a blood vessel in the brain bursting (hemorrhagic stroke). Deprived of the oxygen normally carried to them by the blood, the brain cells near the blockage die. The symptoms of stroke depend on which part of the brain is damaged but include sudden weakness or paralysis along one side of the body, vision loss in one or both eyes, and confusion or trouble speaking or understanding speech. Anyone experiencing these symptoms should seek immediate medical attention because prompt treatment can limit the damage to the brain. Risk factors for stroke include age (three-quarters of strokes occur in people over 65 years old), high blood pressure, and heart disease.
Why Was This Study Done?
Post-stroke rehabilitation can help individuals overcome the physical disabilities caused by stroke, and drugs and behavioral counseling can reduce the risk of a second stroke. However, people can also have problems with cognition (thinking, awareness, attention, learning, judgment, and memory) after a stroke, and they can become depressed or anxious. These “outcomes” can persist for many years, but although stroke is acknowledged as a long-term condition, most existing data on stroke outcomes are limited to a year after the stroke and often focus on disability alone. Longer term, more extensive information is needed to help plan services and to help develop research to improve outcomes. In this burden of disease analysis, the researchers use follow-up data collected by the prospective South London Stroke Register (SLSR) to provide long-term population-based estimates of disability, cognition, and psychological outcomes after a first stroke. The SLSR has recorded and followed all patients of all ages in an inner area of South London after their first-ever stroke since 1995.
What Did the Researchers Do and Find?
Between 1995 and 2006, the SLSR recorded 3,373 first-ever strokes. Patients were examined within 48 hours of referral to SLSR, their stroke diagnosis was verified, and their sociodemographic characteristics (including age, gender, and ethnic origin) were recorded. Study nurses and fieldworkers then assessed the patients at three months and annually after the stroke for disability (using the Barthel Index, which measures the ability to, for example, eat unaided), inactivity (using the Frenchay Activities Index, which measures participation in social activities), and cognitive impairment (using the Abbreviated Mental Test or the Mini-Mental State Exam). Anxiety and depression and the patients' perceptions of their mental and physical capabilities were also assessed. Using preset cut-offs for each outcome, 20%–30% of stroke survivors had a poor outcome over ten years of follow-up. So, for example, 110 individuals per 1,000 population were judged disabled from three months to ten years, rates of inactivity remained constant from year one to year eight, at 280 affected individuals per 1,000 survivors, and rates of anxiety and depression fluctuated over time but affected about a third of the population. Notably, levels of inactivity were higher among men than women at all time points and were higher in white than in black stroke survivors. Finally, increased age was associated with higher rates of disability, inactivity, and cognitive impairment.
What Do These Findings Mean?
Although the accuracy of these findings may be affected by the loss of some patients to follow-up, these population-based estimates of outcome measures for survivors of a first-ever stroke for up to ten years after the event provide concrete evidence that stroke is a lifelong condition with ongoing poor outcomes. They also identify the sociodemographic groups of patients that are most affected in the longer term. Importantly, most of the measured outcomes remain relatively constant (and worse than outcomes in an age-matched non-stroke-affected population) after 3–12 months, a result that needs to be considered when planning services for stroke survivors. In other words, these findings highlight the need for health and social services to provide long-term, ongoing assessment and rehabilitation for patients for many years after a stroke.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001033.
The US National Institute of Neurological Disorders and Stroke provides information about all aspects of stroke (in English and Spanish); the US National Institutes of Health SeniorHealth Web site has additional information about stroke
The Internet Stroke Center provides detailed information about stroke for patients, families, and health professionals (in English and Spanish)
The UK National Health Service Choices Web site also provides information about stroke for patients and their families
MedlinePlus has links to additional resources about stroke (in English and Spanish)
More information about the South London Stroke Register is available
doi:10.1371/journal.pmed.1001033
PMCID: PMC3096613  PMID: 21610863
22.  Epidermal Growth Factor Receptor Mutation (EGFR) Testing for Prediction of Response to EGFR-Targeting Tyrosine Kinase Inhibitor (TKI) Drugs in Patients with Advanced Non-Small-Cell Lung Cancer 
Executive Summary
In February 2010, the Medical Advisory Secretariat (MAS) began work on evidence-based reviews of the literature surrounding three pharmacogenomic tests. This project came about when Cancer Care Ontario (CCO) asked MAS to provide evidence-based analyses on the effectiveness and cost-effectiveness of three oncology pharmacogenomic tests currently in use in Ontario.
Evidence-based analyses have been prepared for each of these technologies. These have been completed in conjunction with internal and external stakeholders, including a Provincial Expert Panel on Pharmacogenetics (PEPP). Within the PEPP, subgroup committees were developed for each disease area. For each technology, an economic analysis was also completed by the Toronto Health Economics and Technology Assessment Collaborative (THETA) and is summarized within the reports.
The following reports can be publicly accessed at the MAS website at: http://www.health.gov.on.ca/mas or at www.health.gov.on.ca/english/providers/program/mas/mas_about.html
Gene Expression Profiling for Guiding Adjuvant Chemotherapy Decisions in Women with Early Breast Cancer: An Evidence-Based Analysis
Epidermal Growth Factor Receptor Mutation (EGFR) Testing for Prediction of Response to EGFR-Targeting Tyrosine Kinase Inhibitor (TKI) Drugs in Patients with Advanced Non-Small-Cell Lung Cancer: an Evidence-Based Analysis
K-RAS testing in Treatment Decisions for Advanced Colorectal Cancer: an Evidence-Based Analysis
Objective
The Medical Advisory Secretariat undertook a systematic review of the evidence on the clinical effectiveness and cost-effectiveness of epidermal growth factor receptor (EGFR) mutation testing compared with no EGFR mutation testing to predict response to tyrosine kinase inhibitors (TKIs), gefitinib (Iressa®) or erlotinib (Tarceva®) in patients with advanced non-small cell lung cancer (NSCLC).
Clinical Need: Target Population and Condition
With an estimated 7,800 new cases and 7,000 deaths last year, lung cancer is the leading cause of cancer deaths in Ontario. Those with unresectable or advanced disease are commonly treated with concurrent chemoradiation or platinum-based combination chemotherapy. Although response rates to cytotoxic chemotherapy for advanced NSCLC are approximately 30 to 40%, all patients eventually develop resistance and have a median survival of only 8 to 10 months. Treatment for refractory or relapsed disease includes single-agent treatment with docetaxel, pemetrexed or EGFR-targeting TKIs (gefitinib, erlotinib). TKIs disrupt EGFR signaling by competing with adenosine triphosphate (ATP) for the binding sites at the tyrosine kinase (TK) domain, thus inhibiting the phosphorylation and activation of EGFRs and the downstream signaling network. Gefitinib and erlotinib have been shown to be either non-inferior or superior to chemotherapy in the first- or second-line setting (gefitinib), or superior to placebo in the second- or third-line setting (erlotinib).
Certain patient characteristics (adenocarcinoma, non-smoking history, Asian ethnicity, female gender) predict for better survival benefit and response to therapy with TKIs. In addition, the current body of evidence shows that somatic mutations in the EGFR gene are the most robust biomarkers for EGFR-targeting therapy selection. Drugs used in this therapy, however, can be costly, up to C$ 2000 to C$ 3000 per month, and they have only approximately a 10% chance of benefiting unselected patients. For these reasons, the predictive value of EGFR mutation testing for TKIs in patients with advanced NSCLC needs to be determined.
The Technology: EGFR mutation testing
Sequencing of the EGFR gene using polymerase chain reaction (PCR) assays is the most widely used method for EGFR mutation testing. PCR assays can be performed at pathology laboratories across Ontario. According to experts in the province, sequencing is not currently done in Ontario due to a lack of adequate measurement sensitivity. A variety of new methods have been introduced to increase the measurement sensitivity of the mutation assay. Some technologies, such as single-stranded conformational polymorphism, denaturing high-performance liquid chromatography, and high-resolution melting analysis, have the advantage of facilitating rapid mutation screening of large numbers of samples with high measurement sensitivity, but they require direct sequencing to confirm the identity of the detected mutations. Other techniques have been developed for the simple but highly sensitive detection of specific EGFR mutations, such as the amplification refractory mutation system (ARMS) and peptide nucleic acid-locked PCR clamping. Others selectively digest wild-type DNA templates with restriction endonucleases to enrich mutant alleles by PCR. Experts in the province of Ontario have commented that PCR fragment analysis for deletions and point mutations is currently conducted in Ontario, with a measurement sensitivity of 1% to 5%.
Research Questions
In patients with locally-advanced or metastatic NSCLC, what is the clinical effectiveness of EGFR mutation testing for prediction of response to treatment with TKIs (gefitinib, erlotinib) in terms of progression-free survival (PFS), objective response rates (ORR), overall survival (OS), and quality of life (QoL)?
What is the impact of EGFR mutation testing on overall clinical decision-making for patients with advanced or metastatic NSCLC?
What is the cost-effectiveness of EGFR mutation testing in selecting patients with advanced NSCLC for treatment with gefitinib or erlotinib in the first-line setting?
What is the budget impact of EGFR mutation testing in selecting patients with advanced NSCLC for treatment with gefitinib or erlotinib in the second- or third-line setting?
Methods
A literature search was performed on March 9, 2010 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, OVID EMBASE, Wiley Cochrane, CINAHL, Centre for Reviews and Dissemination/International Agency for Health Technology Assessment for studies published from January 1, 2004 until February 28, 2010 using the following terms:
Non-Small-Cell Lung Carcinoma
Epidermal Growth Factor Receptor
An automatic literature update program also extracted all papers published from February 2010 until August 2010. Abstracts were reviewed by a single reviewer and for those studies meeting the eligibility criteria full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search. Articles with unknown eligibility were reviewed with a second clinical epidemiologist, and then a group of epidemiologists, until consensus was established. The quality of evidence was assessed as high, moderate, low or very low according to GRADE methodology.
The inclusion criteria were as follows:
Population: patients with locally advanced or metastatic NSCLC (stage IIIB or IV)
Procedure: EGFR mutation testing before treatment with gefitinib or erlotinib
Language: publication in English
Published health technology assessments, guidelines, and peer-reviewed literature (abstracts, full text, conference abstract)
Outcomes: progression-free survival (PFS), objective response rate (ORR), overall survival (OS), quality of life (QoL).
The exclusion criteria were as follows:
Studies lacking outcomes specific to those of interest
Studies focused on erlotinib maintenance therapy
Studies focused on gefitinib or erlotinib use in combination with cytotoxic agents or any other drug
Grey literature, where relevant, was also reviewed.
Outcomes of Interest
PFS
ORR determined by means of the Response Evaluation Criteria in Solid Tumours (RECIST)
OS
QoL
Quality of Evidence
The quality of the Phase II trials and observational studies was based on the method of subject recruitment and sampling, possibility of selection bias, and generalizability to the source population. The overall quality of evidence was assessed as high, moderate, low or very low according to the GRADE Working Group criteria.
Summary of Findings
Since the last published health technology assessment by the Blue Cross Blue Shield Association in 2007, there have been a number of phase III trials that provide evidence of the predictive value of EGFR mutation testing in patients who were treated with gefitinib compared to chemotherapy in the first- or second-line setting. The Iressa Pan-Asia Study (IPASS) trial showed the superiority of gefitinib in terms of PFS in patients with EGFR mutations versus patients with wild-type EGFR (hazard ratio [HR], 0.48; 95% CI, 0.36–0.64 versus HR, 2.85; 95% CI, 2.05–3.98). Moreover, there was a statistically significant increase in ORR in patients who received gefitinib and had EGFR mutations compared to patients with wild-type EGFR (71% versus 1%). The First-SIGNAL trial, in patients with clinical characteristics similar to those in IPASS, as well as the NEJ002 and WJTOG3405 trials, which included only patients with EGFR mutations, provide confirmation that gefitinib is superior to chemotherapy in terms of improved PFS or higher ORR in patients with EGFR mutations. The INTEREST trial further indicated that patients with EGFR mutations had prolonged PFS and higher ORR when treated with gefitinib compared with docetaxel.
In contrast, there is still a paucity of strong evidence regarding the predictive value of EGFR mutation testing for response to erlotinib in the second- or third-line setting. The BR.21 trial randomized 731 patients with NSCLC who were refractory or intolerant to prior first- or second-line chemotherapy to receive erlotinib or placebo. While the HR of 0.61 (95% CI, 0.51–0.74) favored erlotinib in the overall population, the benefit was not significant in the subsequent retrospective subgroup analysis. A retrospective evaluation of 116 of the BR.21 tumor samples demonstrated that patients with EGFR mutations had significantly higher ORRs when treated with erlotinib compared with placebo (27% versus 7%; P = 0.03). However, erlotinib did not confer a significant survival benefit compared with placebo in patients with EGFR mutations (HR, 0.55; 95% CI, 0.25–1.19) or with wild-type EGFR (HR, 0.74; 95% CI, 0.52–1.05). The interaction between EGFR mutation status and erlotinib use was not significant (P = 0.47). The lack of significance could be attributable to a type II error, since only a small sample was available for the subgroup analysis.
A series of phase II studies have examined the clinical effectiveness of erlotinib in patients known to have EGFR mutations. Evidence from these studies has consistently shown that erlotinib yields a very high ORR (typically 70% vs. 4%) and a prolonged PFS (9 months vs. 2 months) in patients with EGFR mutations compared with patients with wild-type EGFR. However, the prolonged PFS and higher response in patients with EGFR mutations might be due to a better prognostic profile regardless of the treatment received. In the absence of a comparative treatment or placebo control group, it is difficult to determine whether the observed differences in survival benefit in patients with EGFR mutations are attributable to the prognostic or the predictive value of EGFR mutation status.
Conclusions
Based on moderate quality of evidence, patients with locally advanced or metastatic NSCLC with adenocarcinoma histology being treated with gefitinib in the first-line setting are highly likely to benefit from gefitinib if they have EGFR mutations compared to those with wild-type EGFR. This advantage is reflected in improved PFS, ORR and QoL in patients with EGFR mutation who are being treated with gefitinib relative to patients treated with chemotherapy.
Based on low quality of evidence, in patients with locally advanced or metastatic NSCLC who are being treated with erlotinib, the identification of EGFR mutation status selects those who are most likely to benefit from erlotinib relative to patients treated with placebo in the second or third-line setting.
PMCID: PMC3377519  PMID: 23074402
23.  Use of Psychotropic Medications by U.S. Cancer Survivors 
Psycho-oncology  2011;21(11):1237-1243.
Objectives
To describe national utilization of psychotropic medications by adult cancer survivors in the U.S. and to estimate the extra use of psychotropic medications attributable to cancer survivorship.
Methods
Prescription data for 2001 to 2006 from the Medical Expenditure Panel Survey (MEPS) were linked to data identifying cancer survivors from the National Health Interview Survey (NHIS), the MEPS sampling frame. The sample was limited to adults 25 years of age and older. Propensity score matching was used to estimate the effects of cancer survivorship on utilization of psychotropic medications, by comparing cancer survivors and other adults in MEPS. Utilization was measured as any use during a calendar year and the number of prescriptions purchased (including refills). Analyses were stratified by gender and age, distinguishing adults younger than 65 from those 65 and older.
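As a rough sketch of the propensity score matching approach described above, the Python snippet below estimates each adult's probability of being a cancer survivor from covariates, matches each survivor to the nearest-scoring non-survivor, and compares psychotropic use between the matched groups. The simulated data, covariate names, and one-to-one nearest-neighbour matching are simplifying assumptions for illustration; the MEPS/NHIS analysis itself was more elaborate (e.g., stratified by gender and age).

# Hypothetical illustration of propensity score matching; data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))                      # hypothetical covariates (e.g., age, income, comorbidity)
survivor = rng.integers(0, 2, size=n)            # 1 = cancer survivor, 0 = comparison adult
used_psychotropic = rng.integers(0, 2, size=n)   # 1 = any psychotropic use in the calendar year

# Estimate propensity scores: probability of being a survivor given covariates.
ps = LogisticRegression().fit(X, survivor).predict_proba(X)[:, 1]

# Match each survivor to the comparison adult with the closest propensity score.
treated = np.flatnonzero(survivor == 1)
controls = np.flatnonzero(survivor == 0)
matched = [controls[np.argmin(np.abs(ps[controls] - ps[i]))] for i in treated]

# Difference in any-use rates between survivors and their matched comparators.
effect = used_psychotropic[treated].mean() - used_psychotropic[matched].mean()
print(f"difference in any-use rate attributable to survivorship: {effect:.3f}")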
Results
Nineteen percent of cancer survivors under age 65 and 16% of survivors 65 and older used psychotropic medications. Sixteen percent of younger survivors used antidepressants; 7% used anti-anxiety medications. For older survivors, utilization rates for these two drug types were 11% and 7% respectively. The increase in any use attributable to cancer amounted to 4-5 percentage points for younger survivors (p<.05) and 2-3 percentage points for older survivors (p<.05), depending on gender.
Conclusion
Increased use of psychotropic medications by cancer survivors, compared to other adults, suggests that survivorship presents ongoing psychological challenges.
doi:10.1002/pon.2039
PMCID: PMC4079257  PMID: 21905155
cancer; survivorship; psychotropic medicines; oncology; utilization
24.  Oral health-related quality of life and related factors among residents in a disaster area of the Great East Japan Earthquake and giant tsunami 
Background
Oral health is one of the most important issues for disaster survivors. The aim of this study was to determine post-disaster distribution of oral health-related quality of life (OHRQoL) and related factors in survivors of the Great East Japan Earthquake and Tsunami.
Methods
Questionnaires to assess OHRQoL, psychological distress, disaster-related experiences, and current systemic-health and economic conditions were sent to survivors over 18 years of age living in Otsuchi, one of the most severely damaged municipalities. OHRQoL and psychological distress were assessed using the General Oral Health Assessment Index (GOHAI) and the Kessler Psychological Distress Scale (K6), Japanese version, respectively. Among 11,411 residents, 1,987 returned the questionnaire (response rate, 17.4%) and received an oral examination to determine number of present teeth, dental caries status, and tooth-mobility grade, and to assess periodontal health using the Community Periodontal Index. Relationships between GOHAI and related factors were examined by nonparametric bivariate and multinomial logistic regression analyses using GOHAI cutoff points at the 25th and 50th national standard percentiles.
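As an illustration of how the GOHAI cutoffs feed the multinomial model, the Python sketch below categorizes scores at the 25th and 50th national-standard percentiles into three outcome levels. The numeric cutoffs and example scores are placeholders for illustration, not the actual Japanese national-standard values.

# Hypothetical sketch: three-level GOHAI outcome used in the multinomial logistic regression.
# The cutoff values below are placeholders, not the actual national-standard percentiles.
def gohai_category(score, p25, p50):
    if score < p25:
        return "low (< 25th percentile)"
    elif score < p50:
        return "middle (25th-50th percentile)"
    return "high (>= 50th percentile)"

P25, P50 = 50.0, 54.0          # placeholder cutoffs
for s in (46, 52, 58):         # hypothetical GOHAI scores
    print(s, gohai_category(s, P25, P50))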
Results
GOHAI scores were significantly lower in the 50–69-age group compared with other age groups in this study and compared with the national standard score. In bivariate analyses, all factors assessed in this study (i.e., sex, age, evacuation from home, interruption of dental treatment, lost or fractured dentures, self-rated systemic health, serious psychological distress (SPD), economic status, number of teeth, having decayed teeth, CPI code, and tooth mobility) were significantly associated with OHRQoL. Subsequent multinomial logistic regression analyses revealed that participants of upper-middle age, who had received dental treatment before the disaster, who had lost or fractured dentures, and who had clinical oral health problems were likely to show low levels of OHRQoL. In addition, perceived systemic health and SPD were also related with OHRQoL.
Conclusions
OHRQoL of disaster survivors was associated with oral problems stemming from the disaster in addition to factors related to OHRQoL in ordinary times such as clinical oral status and perceived systemic health. Furthermore, SPD was also associated with OHRQoL, which suggests the disaster’s great negative impact on both oral and mental health conditions.
doi:10.1186/s12955-015-0339-9
PMCID: PMC4570176  PMID: 26369321
25.  Understanding the mental health of youth living with perinatal HIV infection: lessons learned and current challenges 
Introduction
Across the globe, children born with perinatal HIV infection (PHIV) are reaching adolescence and young adulthood in large numbers. The majority of research has focused on biomedical outcomes yet there is increasing awareness that long-term survivors with PHIV are at high risk for mental health problems, given genetic, biomedical, familial and environmental risk. This article presents a review of the literature on the mental health functioning of perinatally HIV-infected (PHIV+) adolescents, corresponding risk and protective factors, treatment modalities and critical needs for future interventions and research.
Methods
An extensive review of online databases was conducted. Articles including: (1) PHIV+ youth; (2) age 10 and older; (3) mental health outcomes; and (4) mental health treatment were reviewed. Of 93 articles identified, 38 met inclusion criteria, the vast majority from the United States and Europe.
Results
These studies suggest that PHIV+ youth experience emotional and behavioural problems, including psychiatric disorders, at higher than expected rates, often exceeding those of the general population and other high-risk groups. Yet, the specific role of HIV per se remains unclear, as uninfected youth with HIV exposure or those living in HIV-affected households displayed similar prevalence rates in some studies, higher rates in others and lower rates in still others. Although studies are limited with mixed findings, this review indicates that child-health status, cognitive function, parental health and mental health, stressful life events and neighbourhood disorder have been associated with worse mental health outcomes, while parent–child involvement and communication, and peer, parent and teacher social support have been associated with better function. Few evidence-based interventions exist; CHAMP+, a mental health programme for PHIV+ youth, shows promise across cultures.
Conclusions
This review highlights research limitations that preclude both conclusions and full understanding of aetiology. Conversely, these limitations present opportunities for future research. Many PHIV+ youth experience adequate mental health despite vulnerabilities. However, the focus of research to date highlights the identification of risks rather than positive attributes, which could inform preventive interventions. Development and evaluation of mental health interventions and preventions are urgently needed to optimize mental health, particularly for PHIV+ youth growing up in low-and-middle income countries.
doi:10.7448/IAS.16.1.18593
PMCID: PMC3687078  PMID: 23782478
mental health; psychiatric disorders; emotional and behavioural problems; perinatal HIV infection; adolescence; paediatric HIV
