Using questions derived from clinical practice guidelines and a survey of glaucoma specialists, we generated a ranked list of clinical questions for comparative effectiveness research related to the management of POAG. The questions to which clinicians assigned the highest importance rankings related to the effectiveness of medical interventions, filtering surgery, and adjustment of therapy; clinical questions on laser trabeculoplasty and cyclodestructive surgery were ranked as less important. The questions that respondents ranked as most important involved either common clinical scenarios that practitioners face several times daily, such as the decision to use eye drops to lower intraocular pressure, or less common scenarios, such as interventions after filtration surgery, about whose utility practitioners may lack knowledge.
The top five questions receiving a high importance ranking under both coding schemas were also marked as “research has already answered this question” by more than 50% of respondents. This may indicate that existing evidence, for example from one or two trials, has convinced many clinicians but not others. It is particularly interesting that the proportion responding that “research has already answered the question” increased after respondents had the opportunity to view the responses of others. We do not know whether those changing their responses were prompted to check the evidence or were simply influenced by their peers.
The clinician survey rankings agreed with the AAO importance rankings in cases where the latter had been assigned. We frequently derived more than one clinical question from a single AAO PPP recommendation, however. For example, from the PPP statement that medical interventions are generally effective for POAG, we derived seven questions, specifying each type of medical intervention (e.g., beta-blockers) as a unique question. This may explain why over half of the survey questions did not have an AAO importance ranking.
Two main steps are involved in any priority-setting effort: identifying important, answerable clinical questions, and prioritizing the list of questions using a specific methodology. Our approach to identifying important questions used direct clinician input. Because practice guidelines typically are developed by professional societies aiming to assist healthcare practitioners with decision-making,11 the clinical questions derived from them reflect key issues and dilemmas facing clinicians at the time of guideline development.
Our study also used the Delphi survey method, a formal consensus technique that incorporates individual value judgments into group decision-making. This approach contrasts with nomination-based methods, in which topics are first suggested by curious investigators, by payers concerned about cost, or by members of the public concerned about contradictory claims of a treatment’s efficacy, and explicit, predetermined criteria are then applied to develop rankings.7, 8, 20–22
In certain cases, these other methods have not demonstrated satisfactory validity.21
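To make the Delphi feedback step concrete, the following is a minimal sketch, assuming hypothetical question labels and an illustrative 1–5 importance scale (our actual survey instrument and scale are not reproduced here): round-one ratings are summarized into the group statistics shown to respondents before they re-rate in round two.

```python
from statistics import median, quantiles

# Hypothetical round-one importance ratings (1 = least, 5 = most important),
# one per responding clinician; the question labels are invented.
round_one = {
    "Topical medical therapy vs. no treatment": [5, 4, 5, 3, 4, 5],
    "Laser trabeculoplasty vs. medical therapy": [3, 2, 4, 3, 2, 3],
}

# Delphi feedback: summarize the group's round-one responses so that each
# respondent can weigh his or her own rating against the group before
# re-rating in round two.
for question, ratings in round_one.items():
    q1, _, q3 = quantiles(ratings, n=4)  # quartile cut points
    print(f"{question}: median = {median(ratings)}, IQR = {q1:.1f}-{q3:.1f}")
```

Presenting the median and interquartile range, rather than individual responses, is one common way Delphi studies anonymize group feedback.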
Our method has several unique strengths. First, by surveying AGS members, we queried a large group of stakeholders with highly specialized knowledge. This group of experts allowed us to examine a broad range of interventions for the same condition, thus addressing the urgent need of practitioners for answers to myriad questions within the subspecialty. In contrast, most priority-setting methods in use focus on just a few questions in a specialty area. This is especially true for vision research: since their start in 1997, the Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice Centers (EPCs), the key U.S. producers of systematic reviews, have released only three eye- and vision-related evidence reports out of a total of 185 evidence reports completed.23
The recent Institute of Medicine report “Initial National Priorities for Comparative Effectiveness Research,”6 recommending research priorities for $400 million allocated to the Department of Health and Human Services under the American Recovery and Reinvestment Act, includes only two topics related to vision health. While dealing with important topics, these reviews and nominated priority topics do not begin to address the important clinical questions of vision subspecialists.
Second, our method promises to lessen the gap between evidence generation and the translation of evidence into care when used in partnership with guideline developers, research funders, evidence producers, and consumers. We propose the following approach to filling the evidence gaps in a subspecialty area: 1) choose a topic area and work with guideline developers to derive answerable clinical questions from existing guidelines; 2) survey members of one or more professional associations to assess individual and consensus rankings of the clinical questions; 3) determine evidence needs and research priorities by matching the ranked questions against existing evidence; 4) partner with funders, evidence producers, and evidence synthesizers (e.g., groups within The Cochrane Collaboration such as the Cochrane Eyes and Vision Group) to fill the information gaps.
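As a minimal sketch of step 3, assuming an invented question list and evidence index (not real survey results or an actual review database), the matching step could be as simple as:

```python
# Hypothetical ranked questions (highest priority first) and an index of
# questions already covered by a systematic review; neither reflects real
# survey results or an actual review database.
ranked_questions = [
    "Topical prostaglandin analogs vs. beta-blockers",
    "Antifibrotic agents after trabeculectomy",
    "Laser trabeculoplasty as first-line therapy",
]
covered_by_review = {
    "Laser trabeculoplasty as first-line therapy",
}

# Step 3: flag high-priority questions lacking an existing synthesis;
# these evidence gaps become the agenda for the partnerships in step 4.
evidence_gaps = [q for q in ranked_questions if q not in covered_by_review]
print(evidence_gaps)
```

The flagged gaps are then the candidates to bring to the funders and evidence synthesizers in step 4.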
Our approach can also be used to reassess research priorities as novel medicines and technologies emerge, new evidence gaps develop, and healthcare resources need to be reallocated to meet immediate needs.
Given our goal of developing a framework for prioritizing comparative effectiveness research, we faced several challenges. First, when translating the AAO guidelines into answerable clinical questions, we relied on interventions and outcome measures stated explicitly in the guidelines. One consequence was that intraocular pressure was over-emphasized as an outcome in the restated clinical questions. In addition, by their nature, practice guidelines may not cover all important clinical questions; the fact that respondents nominated new questions is evidence of this limitation.
A second challenge was that some clinicians failed to grasp that our purpose was research prioritization. At both the pilot-testing and survey stages of the study, clinicians sometimes responded to the questions as if we were asking about their knowledge of the subject. Questions related to the delivery of care (e.g., behavioral interventions, pre- and post-operative care) appeared to be the most difficult in this regard.
We faced a particular challenge in analyzing clinical questions where a meaningful proportion of respondents selected “research has already answered the question,” a response option suggested by a clinician co-investigator. Although we performed sensitivity analyses to test how priority ratings would change under two different assumptions, neither approach provides information about whether research indeed has answered the questions and how existing evidence influenced a respondent’s interpretation of the question. To address these issues would require matching existing systematic reviews and RCTs to the 45 clinical questions. In future applications of our method, it may be useful to separate response options into two parts: 1) “Please rate the importance of having the answer to each of the following questions for providing effective patient care,” and 2) “Do you believe research has already answered each question?” so that responses can be analyzed separately. One could also ask respondents to cite the evidence if they say “research has already answered this question.”
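Without reproducing our actual analyses, a generic version of this kind of sensitivity analysis might look like the sketch below; the ratings, the 1–5 scale, and the two recoding assumptions (dropping the “already answered” responses vs. substituting a fixed rating for them) are illustrative only and are not necessarily the assumptions we used.

```python
# Hypothetical responses per question: numeric importance ratings (1-5)
# mixed with the categorical option "answered" ("research has already
# answered the question"); all values are invented.
responses = {
    "Q1": [5, 5, "answered", 4, "answered"],
    "Q2": [3, 4, 2, 3, 3],
}

def mean_importance(ratings, answered_value=None):
    """Recode 'answered' responses under one assumption, then average.

    answered_value=None drops those responses; a number substitutes
    that rating in their place.
    """
    recoded = [answered_value if r == "answered" else r for r in ratings]
    kept = [r for r in recoded if r is not None]
    return sum(kept) / len(kept)

# Re-rank the questions under each assumption and compare the orders.
for label, value in [("drop 'answered'", None), ("recode 'answered' as 1", 1)]:
    order = sorted(responses, key=lambda q: -mean_importance(responses[q], value))
    print(f"{label}: {order}")
```

If the priority order is stable across such recodings, as it largely was in our analyses, the ambiguity of the response option matters less for the final rankings.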
High response variability could serve as an alternative criterion for prioritizing clinical questions for additional research. Questions with high response variability (which would result in a wide credible interval) might reflect greater clinical uncertainty and may be more suitable for additional research. In our study, however, response variability was small for all questions. Regardless, it is critical to search for and synthesize existing evidence for clinical questions identified as priorities by preparing, maintaining, and disseminating systematic reviews, a necessary step prior to the investment of research funds.24
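As one illustration of this variability criterion, the sketch below ranks hypothetical questions by the spread of their ratings; a simple normal-approximation interval stands in for a formal Bayesian credible interval, and all numbers are invented.

```python
from statistics import mean, stdev

# Hypothetical importance ratings (1-5) per question; all values invented.
ratings = {
    "Q1": [5, 5, 4, 5, 4, 5],  # high agreement    -> narrow interval
    "Q2": [1, 5, 2, 5, 3, 1],  # high disagreement -> wide interval
}

# Approximate 95% interval for the mean rating; a wider interval suggests
# greater clinical uncertainty, i.e., a stronger candidate for new research.
for q, r in sorted(ratings.items(), key=lambda kv: -stdev(kv[1])):
    half_width = 1.96 * stdev(r) / len(r) ** 0.5
    print(f"{q}: mean = {mean(r):.2f} +/- {half_width:.2f}")
```

Under this criterion, the question with the wider interval (greater disagreement) would move up the research agenda even if its mean importance were lower.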
Although our Delphi survey response rate was comparable to those of other web-based surveys of medical specialists,25 the opinions and rankings of our respondents may not represent those of American glaucoma specialists generally, since fewer than one third of AGS members responded to the survey. In addition, those who responded to both Round One and Round Two were less likely to report expertise in clinical trials and systematic reviews, and less likely to identify themselves as self-employed/in private practice, than those responding to Round One only, which may have influenced the final rankings.
If providers and patients are to make well-informed decisions about health care, comparative effectiveness research must be prioritized and conducted with all due speed. Prioritization of the generation and synthesis of research evidence to prevent, diagnose, treat, and monitor clinical conditions should reflect the most urgent needs. Those who set priorities report experience with, but little empirical evidence about, effective methods of prioritization.8, 22
Our study provides evidence supporting the practicality of a systematic method to identify priority questions utilizing stakeholder input.
In conclusion, we tested the feasibility of a framework for prioritizing answerable clinical questions for new comparative effectiveness research by using practice guidelines and a survey of clinicians. Our approach is systematic, transparent, and participatory, and it produces a ranked list of questions in a subspecialty area. We have demonstrated that our theoretical model for setting priorities among comparative effectiveness research questions is a pragmatic approach that merits testing in other medical settings.