Objective: To determine whether biennial eye evaluation or telemedicine screening is a cost-effective alternative to current recommendations for the estimated 10 million people aged 30–84 with diabetes but no or minimal diabetic retinopathy.
Data Sources: United Kingdom Prospective Diabetes Study, National Health and Nutrition Examination Survey, American Academy of Ophthalmology Preferred Practice Patterns, Medicare Payment Schedule.
Study Design: Cost-effectiveness Monte Carlo simulation.
Data Collection Methods: Literature review and analysis of existing surveys.
Principal Findings: Biennial eye evaluation was the most cost-effective option when the ability to detect other eye conditions was included in the model. Telemedicine was most cost-effective when other eye conditions were not considered or when telemedicine was assumed to detect refractive error. The current annual eye evaluation recommendation was costly compared with either alternative. Self-referral was most cost-effective up to a willingness to pay (WTP) of U.S.$37,600, with either biennial or annual evaluation most cost-effective at higher WTP levels.
Conclusions: Annual eye evaluations are costly and add little benefit compared with either plausible alternative. More research on the ability of telemedicine to detect other eye conditions is needed to determine whether it is more cost-effective than biennial eye evaluation.
Diabetic retinopathy (DR)—a complication involving the retinal microvasculature resulting in damage to the retina from ischemia, neovascularization, hemorrhage, and edema—develops in virtually all people with diabetes mellitus and can progress to more advanced stages threatening permanent visual impairment and blindness (American Academy of Ophthalmology Retina Panel 2008). In 2004, DR was the estimated cause of 195,000 prevalent cases of visual impairment and blindness in the United States (The Eye Diseases Prevalence Research Group 2004). Screening to increase detection of DR progression and timely treatment can reduce the risk of permanent visual impairment. Based on clinical and cost-effectiveness evidence from the 1990s, the American Academy of Ophthalmology (AAO), the American Optometric Association, and the American Diabetes Association currently all recommend annual dilated evaluations by an eye-care professional (annual eye evaluations) for all patients with diabetes (Dasbach et al. 1991; Javitt et al. 1994; Javitt 1995; Javitt and Aiello 1996; Fodera 1999; Fong et al. 2003; American Academy of Ophthalmology Retina Panel 2008). This recommendation is likely most cost-effective for patients with advanced DR or elevated hemoglobin A1c (HbA1c) levels, but annual evaluation may be costly for those with no DR or with microaneurysms (MA) only.
For lower-risk patients with diabetes, Vijan, Hofer, and Hayward (2000) found that less frequent evaluations provide benefits nearly equivalent to annual ones at lower cost. In 2006, nearly 9.1 million Americans had type 2 diabetes with limited or no signs of DR (National Center for Health Statistics 2005–2006). After accounting for the rate of compliance with current screening recommendations and the costs of evaluation by an eye-care professional, a change from annual to biennial evaluations could save approximately U.S.$200 million in health expenditures annually with limited risks to health.
However, Vijan, Hofer, and Hayward (2000) did not account for noncompliance with recommendations or for the benefits of detecting other ocular disorders. These limitations may have influenced their results because only 64 percent of people with diabetes aged 30 or older comply with current recommendations and because people with diabetes are at higher risk for incident glaucoma, cataract, and possibly vision-threatening age-related macular degeneration (AMD) than people without diabetes (Klein et al. 1995; Bonovas, Peponis, and Filioussi 2004; Clemons et al. 2005; National Center for Health Statistics 2005–2006). Furthermore, advocates for annual evaluation argue that the annual recommendation is easier to communicate, creating less risk of noncompliance.
Over the past decade, retinal digital photography (telemedicine) has emerged as a lower cost alternative to annual evaluation by an eye-care professional. Telemedicine uses digital retinal photography to enable screening in nonophthalmologic settings. Images are electronically transferred to a grading center for evaluation, and patients with evidence of mild to severe DR are referred to an eye-care professional for a full evaluation. Telemedicine has shown better sensitivity (98 percent) and specificity (86 percent) in detecting DR than ophthalmoscopy (Moss et al. 1985; Ahmed et al. 2006) and usually costs less from both the health care and societal perspectives than dilated eye examinations because of lower provider reimbursements and lower patient productivity losses from time lost to treatment. Because 82 percent of people with diabetes visit a primary care provider annually, telemedicine could also potentially increase the annual probability of screening for DR compared with clinical eye evaluations (National Center for Health Statistics 2005–2006). Unfortunately, telemedicine currently has a limited ability to detect prevalent eye conditions other than AMD, such as cataract, glaucoma, or uncorrected refractive error (URE, a presenting acuity of 20/40 or worse that can be easily corrected with glasses or contact lenses). The much greater ability of clinical eye evaluation to detect these conditions may result in either annual or biennial evaluation being more cost-effective than telemedicine.
We designed this study to provide additional information regarding the most cost-effective screening alternatives for people with diabetes who are at low risk of progression when accounting for imperfect compliance with screening recommendations and the ability of eye evaluation to detect other common visual disorders. We estimated the cost-effectiveness of four possible methods of managing this patient population: patient self-referral following visual symptoms, annual eye evaluation, biennial eye evaluation, and annual telemedicine screening in primary care settings. Our results can be used to evaluate the management practice that is most likely to be cost-effective at different societal valuations of the gains from medical therapy and to inform what additional research may be required to update recommendations.
We simulated a mixed-age cohort of people with diabetes who were at low risk of progression beginning in the year 2006 and continuing until death or age 90. We defined low-risk patients as those aged 30 or older with diagnosed type 2 diabetes and no DR or retinal MA only, who had visited a primary care physician at least once in the past year. We assigned age, race/ethnicity, gender, and diabetes duration based on 2005/2006 National Health and Nutrition Examination Survey (NHANES) data (National Center for Health Statistics 2005–2006). We assumed a starting HbA1c of 6.8 percent for all patients, the estimated mean value at diabetes onset (Dong, Orians, and Manninen 1997) and approximately 15 percent lower than the average HbA1c value for all people with diabetes (National Center for Health Statistics 2005–2006), and increased HbA1c by 0.2 points annually (Table 1) (Dong, Orians, and Manninen 1997).
Patients died at census mortality rates multiplied by 2.0, the relative risk of mortality of a diabetic patient (Leibson et al. 2005; Arias 2007; National Center for Chronic Disease Prevention and Health Promotion 2009). Patients who developed a visual acuity of 20/200 or worse experienced an additional mortality relative risk of 1.20 (Clemons, Kurinij, and Sperduto 2004; Freeman et al. 2005; Thiagarajan et al. 2005; Knudtson, Klein, and Klein 2006).
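The annual survival step described above can be sketched as follows. The 2.0 relative risk for diabetes and the additional 1.20 relative risk for severe vision loss come from the text; the base mortality rate and cohort size below are hypothetical placeholders, since the model itself draws on published census life tables.

```python
import random

def annual_death(base_rate, severe_vision_loss, rng):
    """One year of mortality risk for a simulated diabetic patient.

    base_rate: census all-cause mortality rate for the patient's age
    (a placeholder here; the model uses published life tables).
    """
    rate = base_rate * 2.0            # relative risk of death with diabetes
    if severe_vision_loss:            # acuity of 20/200 or worse
        rate *= 1.20                  # additional mortality relative risk
    return rng.random() < min(rate, 1.0)

rng = random.Random(0)
# Hypothetical example: base rate of 2 percent/year with severe vision
# loss implies an effective annual mortality rate of 4.8 percent
deaths = sum(annual_death(0.02, True, rng) for _ in range(100_000))
```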
We modeled DR progression (Figure 1) based on the Early Treatment of Diabetic Retinopathy Study (ETDRS) worse eye/better eye severity scale as modified and abridged for use by the United Kingdom Prospective Diabetes Study (UKPDS) (Klein et al. 1984; Stratton et al. 2001). We categorized not immediately vision-threatening DR as follows (ETDRS numerical level in parentheses): DR absent (10); MA only with other lesions absent (20); mild nonproliferative DR (NPDR) defined as MA plus retinal hemorrhages, and/or hard exudates, and/or cotton wool spots (35); or moderate NPDR defined as mild NPDR plus either extensive or severe hemorrhages and MA or intraretinal microvascular abnormalities present in the absence of clinically significant macular edema (CSME) (43). We classified vision-threatening stages of disease (VTDR) as non-high-risk proliferative DR (Non-HR-PDR), CSME, Non-HR-PDR concurrent with CSME, high-risk proliferative DR (HR-PDR), and HR-PDR concurrent with CSME. To simplify the model, we assumed no vision loss from nonclinically significant macular edema.
We governed transitions between prevision-threatening states using a matrix of probabilities published by the UKPDS (Stratton et al. 2001). We estimated the transition probability from mild or moderate NPDR (35, 43) to vision-threatening states using UKPDS functions of disease severity, diabetes duration, and 14-year average of patient HbA1c (Stevens, Stratton, and Holman 2002). When eyes progressed to VTDR, we assigned 12.8 percent to Non-HR-PDR, 82.1 percent to CSME, and 5.1 percent to Non-HR-PDR with CSME (Table 2) (Stevens, Stratton, and Holman 2002). When the first eye progressed, we assigned the fellow eye to moderate NPDR (43). Eyes in each vision-threatening state experienced annual probabilities of progressing to other vision-threatening states (Early Treatment Diabetic Retinopathy Study Research Group 1991; Klein et al. 1995) and of losing vision at the rate of the control group observed in treatment studies (Early Treatment Diabetic Retinopathy Study Research Group 1985; Diabetic Retinopathy Study Research Group 1987).
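The pre-vision-threatening transitions amount to a discrete Markov chain over the four UKPDS-derived states. A minimal sketch follows; the transition probabilities are illustrative placeholders, not the published UKPDS matrix.

```python
import random

# DR states before vision-threatening disease (ETDRS levels 10/20/35/43)
STATES = ["absent", "MA_only", "mild_NPDR", "moderate_NPDR"]

# Illustrative annual transition probabilities (each row sums to 1);
# the model itself uses the published UKPDS matrix, not these values.
P = {
    "absent":        {"absent": 0.90, "MA_only": 0.10},
    "MA_only":       {"absent": 0.05, "MA_only": 0.80, "mild_NPDR": 0.15},
    "mild_NPDR":     {"mild_NPDR": 0.85, "moderate_NPDR": 0.15},
    "moderate_NPDR": {"moderate_NPDR": 1.00},
}

def step(state, rng):
    """Sample next year's DR state from the current state's row."""
    r, cum = rng.random(), 0.0
    for nxt, p in P[state].items():
        cum += p
        if r < cum:
            return nxt
    return state

rng = random.Random(42)
state = "absent"
for _ in range(20):                   # 20 simulated years
    state = step(state, rng)
```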
Patients diagnosed with mild NPDR or more severe states received the treatment recommended by AAO's Preferred Practice Patterns (American Academy of Ophthalmology Retina Panel 2008): one to four annual ophthalmoscopic evaluations, fluorescein angiography with focal/grid laser photocoagulation treatment for CSME, and scatter (panretinal) laser photocoagulation therapy for Non-HR or HR-PDR and no CSME. Treatment reduced the annual probability of visual loss from these conditions between 55 and 62 percent (Early Treatment Diabetic Retinopathy Study Research Group 1985; Diabetic Retinopathy Study Research Group 1987). Previously treated eyes experienced an annual probability of requiring additional treatment (Early Treatment Diabetic Retinopathy Study Research Group 1995).
Using NHANES data, we estimated the prevalence of URE for whites, African Americans, and Hispanics for the age groups 30–59, 60–79, and 80+ years by subtracting the proportion of patients with a visual acuity of 20/40 or worse after refraction from the proportion with a visual acuity of 20/40 or worse before refraction. We applied the age 30–59 year prevalence rate equally within that age group with no incident URE over these years. We calculated annual incidence of URE in patients aged 60 or older by converting prevalence rate differences between age groups into annual incidence rates. URE treatment consisted of one eye evaluation, glasses, contact lenses, or both as weighted by utilization proportions with replacement every 3.4 years for glasses and annually for contact lenses (Rein et al. 2006). We assumed that URE treatment was 100 percent effective.
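The text does not spell out how prevalence differences were converted into annual incidence; one common approach, shown here with hypothetical prevalence values, assumes a constant annual rate among those not yet affected.

```python
def annual_incidence(prev_lo, prev_hi, years):
    """Constant annual incidence implied by a prevalence increase
    between two age bands, among those not yet affected."""
    # Cumulative incidence over the span among the initially unaffected
    cumulative = (prev_hi - prev_lo) / (1.0 - prev_lo)
    # Constant annual rate that yields that cumulative incidence
    return 1.0 - (1.0 - cumulative) ** (1.0 / years)

# Hypothetical: URE prevalence of 4 percent at ages 30-59 versus
# 10 percent at ages 60-79, a span of roughly 20 years
rate = annual_incidence(0.04, 0.10, 20)   # about 0.32 percent per year
```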
The AMD and glaucoma modules have been described and validated elsewhere (Rein et al. 2007a; Rein et al. 2009). Briefly, our AMD module categorized patients into prevision-threatening and vision-threatening states of geographic atrophy and choroidal neovascularization (CNV) with transition probabilities based on data from the Age-Related Eye Disease Study and clinical trial data (Macular Photocoagulation Study Group 1991; Age-Related Eye Disease Study Research Group 2001; Verteporfin in Photodynamic Therapy Study Group 2001; Rein et al. 2007a). Our glaucoma module used a three-step function of incidence, the annual probability of visual field loss given incident glaucoma, and the quantity of visual field lost in years any loss occurred (Heijl et al. 2002; Leske et al. 2003). Compared with persons without diabetes, those with diabetes experienced 1.5 times the annual risk of incident glaucoma (Bonovas, Peponis, and Filioussi 2004). Patients with diabetes, AMD, and CNV in one eye experienced 1.8 times the risk of developing CNV in the fellow eye than patients without diabetes with AMD and CNV (Clemons et al. 2005). We excluded cataracts because of the lack of consensus regarding when cataracts should be removed following early detection.
We summed acuity losses from DR and AMD and separately tracked glaucomatous visual field losses. We assigned published QALY values based on the lower utility derived from either acuity or field impairment in the better-seeing eye (Brown et al. 2003b; Rein et al. 2007b). We assumed that patients with URE experience QALY losses analogous to a best-corrected acuity of 20/30. We assigned a minimum 0.03 utility loss for patients with at least moderate vision loss in one eye. Patient QALYs were calculated annually by multiplying the value of 1 minus patients' visual impairment-related health utility decrement (if any) by a uniform background utility of 0.87 (the average health utility of otherwise healthy people at ages 45–54 years) (Gold et al. 1998).
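The annual QALY accrual described above reduces to a one-line formula; the 0.87 background utility and the 0.03 minimum decrement are from the text.

```python
def annual_qalys(utility_decrement):
    """Annual QALYs: (1 - visual-impairment utility decrement) scaled
    by the 0.87 background utility of otherwise healthy 45-54-year-olds."""
    return (1.0 - utility_decrement) * 0.87

# A patient with no visual impairment accrues 0.87 QALYs per year; one
# with the minimum 0.03 decrement accrues 0.97 * 0.87 = 0.8439.
```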
We converted AAO recommendations into Current Procedural Terminology codes, assigned each code its Medicare reimbursement value, multiplied that value by the annual frequency of each procedure, and summed costs within each disease stage. We assigned 1 hour of productivity losses to account for office visits, an additional 3.25 hours for time lost to dilation at all eye evaluations, and 8 hours for focal/grid or scatter laser surgery. Hourly productivity was based on average wage data by patient age from the Bureau of Labor Statistics. We assigned no productivity losses to telemedicine when dilation was not required for a gradable photo and 3.25 hours when it was. Productivity losses from visual impairment are embedded in QALY estimates (Gold et al. 1996; Brown et al. 2001).
We compared the costs and benefits of three screening scenarios to each other and to a counterfactual of self-referral. In the self-referral scenario, patients visited an eye-care provider in the year after they experienced permanent visual loss in at least one eye. Patients who were diagnosed with any eye condition then received care according to AAO guidelines.
Our first screening scenario simulated annual evaluation using a 0.63 annual compliance rate for patients aged 30–64 and a 0.74 annual compliance rate for patients aged 65 or older (Sloan et al. 2004; National Center for Health Statistics 2005–2006). Evaluation included a dilated ophthalmoscopic exam to detect DR, early and advanced AMD, and optic nerve cupping due to glaucoma, and autorefraction to detect URE, at published sensitivities and specificities (Moss et al. 1985; Harper and Reeves 2000; Tikellis et al. 2000; Williams et al. 2000; Tong et al. 2004; Luo et al. 2006). We assigned patients to treatment following the detection of any eye condition except URE; URE was treated and the patient returned to screening. Patients with false-positive test results were assigned the additional costs of confirmatory tests and remained in screening. Our second intervention, biennial eye evaluation, was identical to the first except that patients experienced the compliance probabilities only once every 2 years and not at all in the intervening year.
Our third intervention simulated annual telemedicine screening with 100 percent compliance. We chose a compliance rate of 100 percent because we assumed that telemedicine would be performed in a primary care office and because our universe of eligible patients comprised those with at least one primary care visit in the past year. Telemedicine resulted in an initial gradable fundus photograph for 88.6 percent of patients aged 30–59, 76.8 percent of patients aged 60–69, and 65.0 percent of patients older than age 69 (Cavallerano et al. 2003; Ahmed et al. 2006; Chow et al. 2006). Patients with ungradable photographs had their pupils pharmacologically dilated, and 96 percent of these photos were then gradable (Deb-Joardar et al. 2007). Gradable photographs had a sensitivity and specificity of 0.98 and 0.86 for detecting DR progression and 0.50 and 1.00 for detecting early AMD (by assumption), and were assumed not to detect glaucoma or URE. Eye evaluations were reimbursed at U.S.$71 for new patients and U.S.$65 for established patients. Telemedicine cost estimates varied widely, from U.S.$15 per screening reported by one screening center to U.S.$69, the Medicare reimbursement for color fundus photography with interpretation by an ophthalmologist. We reimbursed telemedicine at the same Medicare rate currently allowed for biennial glaucoma screening (U.S.$46.25), assuming that a Medicare telemedicine reimbursement policy would resemble reimbursement for other visual screening.
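The telemedicine pathway above (photograph, dilation fallback, grading) can be sketched as a single screening step. The gradability proportions, the 96 percent post-dilation gradability, and the 0.98/0.86 sensitivity and specificity come from the text; the function itself is an illustrative simplification of the model's logic.

```python
import random

# Probability of an initially gradable fundus photo, by age (from the text)
GRADABLE_BY_AGE = [(59, 0.886), (69, 0.768), (999, 0.650)]

def telemedicine_screen(age, has_dr, rng):
    """One annual telemedicine screening: photograph, pharmacologic
    dilation as a fallback, then grading for DR progression at
    sensitivity 0.98 and specificity 0.86."""
    p_gradable = next(p for cutoff, p in GRADABLE_BY_AGE if age <= cutoff)
    gradable = rng.random() < p_gradable
    dilated = False
    if not gradable:
        dilated = True                     # dilate and re-photograph
        gradable = rng.random() < 0.96     # 96 percent then gradable
    if not gradable:
        return None, dilated               # no usable image this year
    if has_dr:
        return rng.random() < 0.98, dilated    # sensitivity
    return rng.random() > 0.86, dilated        # 1 - specificity

rng = random.Random(0)
# Among 10,000 screened 50-year-olds with DR, nearly all gradable
# screens should come back positive
positives = sum(telemedicine_screen(50, True, rng)[0] is True
                for _ in range(10_000))
```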
We took a societal perspective based on recommendations of the Panel on Cost-Effectiveness in Health and Medicine (Weinstein et al. 1996). Future costs and QALYs were discounted to 2010 values using a 3 percent rate. Discounted average lifetime costs and QALYs were used to calculate the incremental net benefit (INB) of each scenario compared with self-referral and compared with the next most costly scenario. The INB, which results in the same rank order of policy preferences as the incremental cost-effectiveness ratio (ICER) but has superior statistical attributes (Willan and Lin 2001), is defined as

INB = λ × ΔQ − ΔC,
where λ is equal to a willingness-to-pay (WTP) value, ΔQ is the mean incremental difference in QALYs between two scenarios, and ΔC is the mean incremental difference in costs. Our baseline results present an INB assuming a WTP value of U.S.$50,000 per QALY. We considered a scenario cost-effective when its INB evaluated at the mean values of costs and QALYs was >0, and we considered the scenarios with the higher total net benefits more cost-effective than scenarios with lower ones. For comparability to other studies, we also report the ICER of each intervention when compared with self-referral and when compared with the next most costly alternative.
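The INB and ICER computations are direct; the incremental values in the example below are illustrative, not the study's estimates.

```python
def inb(delta_q, delta_c, wtp=50_000.0):
    """Incremental net benefit: INB = wtp * dQ - dC."""
    return wtp * delta_q - delta_c

def icer(delta_q, delta_c):
    """Incremental cost-effectiveness ratio: dC / dQ."""
    return delta_c / delta_q

# Illustrative: an extra 0.10 QALYs at U.S.$3,800 additional cost gives
# an ICER of $38,000 per QALY and, at a $50,000 WTP, an INB of $1,200,
# so the scenario would be considered cost-effective (INB > 0).
```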
We report the proportion of societal costs that we expected to be paid by Medicare and experienced as productivity losses. For patients older than 65, we attributed 80 percent of outpatient costs (i.e., deducting the typical Part B deductible payment) plus all telemedicine screening costs to Medicare.
We used probabilistic sensitivity analysis (PSA) to estimate the mean value of each outcome and its credible interval given uncertainty regarding key model parameters. Credible intervals refer to ranges generated from simulation results not from a sample of observed data (O'Hagan and Luce 2003). The PSA simulated scenarios in batches, with each batch using a parameter set drawn from their distribution of possible values. We specified the variance and distribution of parameters based on published guidelines (Doubilet et al. 1985).
We selected the number of model replications (6,000 per scenario) and the number of patients simulated per replication (30,000) to allow us to detect a difference in mean QALYs of 0.01 between any two scenarios after adjusting our standard error estimates for stochastic patient-level error (O'Hagan, Stevenson, and Madan 2007). We defined the credible interval for each mean as the mean ± 1.96 times its adjusted standard error, a range that would contain 95 percent of simulated means if they were normally distributed.
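The credible-interval construction is a one-liner; the mean and adjusted standard error below are hypothetical values for illustration.

```python
def credible_interval(mean, adjusted_se):
    """95 percent credible interval under normality:
    mean +/- 1.96 * adjusted standard error."""
    half = 1.96 * adjusted_se
    return (mean - half, mean + half)

# Hypothetical: mean incremental QALYs of 0.10 with adjusted SE of 0.005
lo, hi = credible_interval(0.10, 0.005)   # (0.0902, 0.1098)
```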
Using the sample results, we estimated cost-effectiveness acceptability curves (CEACs) using the expected values and the adjusted standard error of each simulated scenario's costs and QALYs assuming the expected means were normally distributed. The CEACs graph the probability (y-axis) that a scenario (counterfactual or screening) represented the most cost-effective choice (i.e., resulted in the greatest net benefit) at each WTP value per QALY gained (x-axis). We tested the impact of the normality assumption on our cost-effectiveness determination by estimating the cost-effectiveness of each scenario at a WTP of U.S.$50,000 per QALY gained when assuming a uniform and then a γ distribution of the means.
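One way to estimate a single CEAC point under the normality assumption is to redraw costs and QALYs from their (mean, adjusted standard error) distributions and tally how often each scenario maximizes net monetary benefit. The inputs below are illustrative, not the study's estimates.

```python
import random

def ceac_point(scenarios, wtp, draws=20_000, seed=0):
    """Probability that each scenario yields the greatest net monetary
    benefit at one WTP value, assuming normally distributed means.

    scenarios: name -> (qaly_mean, qaly_se, cost_mean, cost_se)
    """
    rng = random.Random(seed)
    wins = dict.fromkeys(scenarios, 0)
    for _ in range(draws):
        nb = {name: wtp * rng.gauss(q, q_se) - rng.gauss(c, c_se)
              for name, (q, q_se, c, c_se) in scenarios.items()}
        wins[max(nb, key=nb.get)] += 1
    return {name: n / draws for name, n in wins.items()}

# Illustrative two-scenario comparison at a $50,000 WTP; the scenario
# with the higher expected net benefit should win almost every draw
scenarios = {
    "self_referral": (10.00, 0.005, 7_368.0, 50.0),
    "biennial":      (10.10, 0.005, 11_004.0, 50.0),
}
probs = ceac_point(scenarios, wtp=50_000)
```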
Our estimated mean and credible intervals provide the best estimate of the expected value and credible interval of costs and QALYs given joint uncertainty across the models. We also used OLS regression to identify important parameters, and we used subsets of our simulated results to calculate the mean INB of each scenario when restricting the value of this parameter. We did not develop credible intervals for these univariate sensitivity results because of their computational intensity. We also calculated the total expected value of perfect information (EVPI) for parameters included in the PSA. The EVPI shows the expected cost of forgone benefits if uncertainty in the model were to result in choosing a suboptimal policy.
Following guidelines (Weinstein et al. 2003), we tested the model's internal validity and its ability to reproduce externally published estimates of (a) DR prevalence rates by disease stage and visual impairment from (b) untreated and (c) treated DR. The model was internally valid, strongly reproduced (a) and (c), and adequately reproduced (b) within the bounds of reported estimates. The extremely wide variance of published estimates of (b) led us to vary these parameters in our sensitivity analysis.
With self-referral, the average person experienced 696 days of visual acuity of 20/30 or worse in the better-seeing eye from DR, AMD, glaucoma, or URE between 2010 and their death. This number fell to 588 days with annual telemedicine screening, 574 days with biennial eye evaluation, and 572 days with annual eye evaluation. Self-referral offered the lowest costs and QALYs, followed by telemedicine, biennial evaluation, and annual evaluation (Table 3).
Self-referral resulted in average per person ophthalmologic-related costs of U.S.$7,368, of which 28 percent were paid by Medicare and 3 percent were productivity losses, with the remainder paid for by other insurers or out of pocket. Compared with self-referral, telemedicine increased costs by U.S.$3,343, biennial evaluation by U.S.$3,636, and annual evaluation by U.S.$4,809. Medicare paid 35 percent of the incremental costs of telemedicine, 24 percent of biennial evaluation, and 21 percent of annual evaluation.
Compared with self-referral, the ICER for annual telemedicine assessments was U.S.$55,000 per QALY gained, with an INB (assuming a WTP of U.S.$50,000 per QALY gained) of −U.S.$303 (Table 3); the ICER of biennial evaluation was U.S.$38,000 per QALY gained with an INB of U.S.$1,208; and the ICER of annual evaluation was U.S.$46,000 per QALY gained with an INB of U.S.$466. Biennial examination compared with annual telemedicine cost U.S.$8,107 per QALY gained with an INB of U.S.$1,511 assuming a WTP of U.S.$50,000 per QALY. Annual evaluation compared with biennial examination cost U.S.$136,170 per QALY gained with an INB of −U.S.$742.
Self-referral was most likely to be cost-effective at a WTP between U.S.$0 and U.S.$37,500 per QALY gained. Biennial evaluation was most likely to be cost-effective at a WTP between U.S.$37,500 and U.S.$150,000 per QALY gained, and annual evaluation was most likely to be cost-effective at WTP values >U.S.$150,000 per QALY gained (Figure 2).
The discounted lifetime EVPI was U.S.$78 per person. The cumulative EVPI over the study population of 9.1 million persons was U.S.$709 million.
Our model was most sensitive to the assumption that telemedicine could not detect URE. When we assumed that telemedicine could detect between 25 and 75 percent of URE (γ distributed), telemedicine dominated biennial evaluation, producing greater QALYs at lower cost and reversing our earlier result. In that scenario, at a WTP of U.S.$50,000 per QALY, the INB of telemedicine compared with self-referral was U.S.$1,708, and the INB of biennial evaluation compared with telemedicine was −U.S.$500. When we assumed that telemedicine could detect 50 percent of URE, telemedicine was most likely to be cost-effective at a WTP between U.S.$33,000 and U.S.$400,000 per QALY gained. Self-referral was most likely to be cost-effective at WTP values below U.S.$33,000 per QALY gained, and annual evaluation was most likely to be cost-effective at WTP values above U.S.$400,000 per QALY gained. We found that varying the discount rate from 0 to 5 percent affected only the choice between biennial and annual evaluations and that annual evaluation was more likely to be cost-effective only at discount rates lower than our baseline.
No other parameter altered the rank order of scenario preferences. Compared with self-referral, all scenarios were more cost-effective when the starting mean HbA1c and its rate of change were higher and less cost-effective when they were lower. The model was insensitive to univariate variations in the relative risk of death among people with diabetes, the screening compliance rates obtained by annual and biennial evaluations, telemedicine costs, and the rate of impairment after progression to vision-threatening disease. There was no plausible parameter value at which annual evaluation was cost-effective compared with biennial evaluation at a WTP below U.S.$100,000 per QALY gained. At a WTP between U.S.$100,000 and U.S.$130,000, annual evaluation was preferred to biennial evaluation when the discount rate was 0 percent and when one assumed faster disease progression rates.
We evaluated the relative costs and benefits of three screening strategies for reducing visual morbidity in persons with diabetes at low risk of DR progression. When directly compared, biennial eye evaluation was more cost-effective than telemedicine or annual eye evaluation. However, the preference for biennial eye evaluation over telemedicine was sensitive to the assumption that telemedicine could not detect URE. When we assumed that telemedicine could detect URE, its benefits were greater than biennial evaluation at lower cost. Telemedicine programs that use simple acuity eye charts or screening questions such as inquiries about blurry vision to screen for URE are highly likely to be a cost-effective alternative to eye evaluation. Given the EVPI of U.S.$709 million, additional research to study the ability of telemedicine to detect other eye conditions is warranted.
We found annual evaluation to be costly per incremental QALY gained compared with biennial evaluation, a result that is consistent with Vijan, Hofer, and Hayward (2000). When last published, this conclusion was criticized because the study failed to account for imperfect compliance with guideline recommendations; because Vijan, Hofer, and Hayward (2000) used what some argued was an arbitrarily high utility value for vision loss; and because it used UKPDS retinopathy progression data, which some thought underestimated retinopathy progression rates experienced in U.S. populations (Brown, Brown, and Sharma 2000; Hoskins 2000; Javitt 2000). Our study does not assume perfect compliance with visit recommendations, and it uses standard and widely used and accepted utility values associated with visual loss (Brown et al. 2003a,b), yet we still reach the same conclusion (Vijan, Hofer, and Hayward 2000).
Like Vijan, Hofer, and Hayward (2000), our study used UKPDS data to govern disease progression. However, because of medical advances in the management of diabetes, the UKPDS findings may now overestimate rather than underestimate the rate of disease progression, mitigating the importance of this earlier concern (Hovind et al. 2003; Klein and Klein 2010).
Our results are limited by uncertainty in the rate at which DR progresses to vision-threatening states, in the rate of vision loss if left untreated with panretinal and/or focal/grid photocoagulation, and in the cost of telemedicine. However, our probabilistic and univariate sensitivity analyses indicate that using different values of these parameters would not lead to different conclusions.
Our results are also limited by rapid changes in the price, implementation, and benefits of telemedicine. The benefits of telemedicine modeled here are roughly equivalent to those attainable in 2008. Changes in technology may have already enhanced telemedicine's ability to detect AMD, glaucoma, and URE. Telemedicine does detect glaucomatous optic nerve damage with a high degree of sensitivity (94 percent), but its current poor specificity (70 percent) limits its usefulness for screening (Pasquale et al. 2007; Asefzadeh and Pasquale 2008). This is likely to improve, and the benefits of telemedicine are likely to increase in the near future.
Similarly, we modeled the average impact of screening modalities given the service utilization rates observed in the NHANES data. These average rates are not applicable to certain underserved populations for whom telemedicine may represent the only feasible means of achieving periodic screening for DR progression. Our telemedicine results assume that low-risk patients see their primary care physician every year. Patients who are less compliant would experience lower benefits and screening costs with uncertain impacts on cost-effectiveness.
Our results also excluded cataracts because of the lack of consensus regarding when they should be extracted. If all cataracts caused QALY decrements requiring immediate extraction upon detection, their inclusion would increase the favorability of biennial evaluation compared with telemedicine and decrease the cost of annual evaluation compared with biennial evaluation. Our results also depend on traditionally used measurements of QALYs from visual impairment, which, for technical reasons, may overstate QALY losses, although the extent to which they do so is unknown.
Finally, advances in treatment since the UKPDS may have resulted in slower progression and lower rates of vision loss than the values used in our model (Hovind et al. 2003; Klein and Lee 2009; Klein and Klein 2010). However, the effect of this limitation in our model is to bias our results in favor of annual eye evaluation over biennial evaluation because more rapidly developing disease provides a stronger rationale for more frequent screening. Our model results show a preference for biennial over annual evaluation despite this limitation.
Biennial evaluation by an eye-care professional with expertise in assessment of DR was preferable to telemedicine assessment, but this conclusion was highly sensitive to the benefits of detecting URE. Schedules or interventions that combine the low costs and high screening compliance achieved with telemedicine with the more comprehensive diagnostic ability of eye-care evaluations could potentially lead to greater benefits at lower costs. Our analysis suggests that annual eye evaluation is not cost-effective for low-risk patients.
Joint Acknowledgment/Disclosure Statement: This study was funded by the Division of Diabetes Translation, Centers for Disease Control and Prevention, Atlanta, GA (contract no. 200-2002-00776), and the National Eye Institute (award no. R21 EY019173). The content is solely the responsibility of the authors and does not necessarily represent the official position of the Centers for Disease Control and Prevention or the official views of the National Eye Institute or the National Institutes of Health. Dr. Klein has previously worked as a consultant regarding design and/or analyses of resultant data on DR as an outcome for Astra-Zeneca, Lilly, Novartis, Merck, GlaxoSmithKline, and Pfizer. The authors have no other conflicts of interest to disclose. We thank Susan Murchie for her editorial assistance.
Additional supporting information may be found in the online version of this article:
Appendix SA1: Author Matrix.
Appendix SA2: CDC/RTI Multiple Eye Disease Simulation Model (CR-MEDS). Diabetic Retinopathy Module.
Please note: Wiley-Blackwell is not responsible for the content or functionality of any supporting materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.