J Am Med Inform Assoc. 2012 Jan-Feb; 19(1): 94–101.
Published online Aug 16, 2011. doi: 10.1136/amiajnl-2011-000172
PMCID: PMC3240753
Building better guidelines with BRIDGE-Wiz: development and evaluation of a software assistant to promote clarity, transparency, and implementability
Richard N Shiffman,corresponding author1 George Michel,1 Richard M Rosenfeld,2 and Caryn Davidson3
1Yale Center for Medical Informatics, New Haven, Connecticut, USA
2Department of Otolaryngology, SUNY Downstate Medical Center, Brooklyn, New York, USA
3American Academy of Pediatrics, Elk Grove Village, Illinois, USA
Corresponding author.
Correspondence to Richard N Shiffman, Yale Center for Medical Informatics, 300 George Street, Suite 501, New Haven, CT 06511, USA; richard.shiffman@yale.edu
Received February 9, 2011; Accepted July 12, 2011.
Objective
To demonstrate the feasibility of capturing the knowledge required to create guideline recommendations in a systematic, structured manner using a software assistant. Practice guidelines constitute an important modality that can reduce the delivery of inappropriate care and support the introduction of new knowledge into clinical practice. However, many guideline recommendations are vague and underspecified, lack any linkage to supporting evidence or documentation of how they were developed, and prove to be difficult to transform into systems that influence the behavior of care providers.
Methods
The BRIDGE-Wiz application (Building Recommendations In a Developer's Guideline Editor) uses a wizard approach to address the questions: (1) under what circumstances? (2) who? (3) ought (with what level of obligation?) (4) to do what? (5) to whom? (6) how and why? Controlled natural language was applied to create and populate a template for recommendation statements.
Results
The application was used by five national panels to develop guidelines. In general, panelists agreed that the software helped to formalize a process for authoring guideline recommendations and deemed the application usable and useful.
Discussion
Use of BRIDGE-Wiz promotes clarity of recommendations by limiting verb choices, building active voice recommendations, incorporating decidability and executability checks, and limiting Boolean connectors. It enhances transparency by incorporating systematic appraisal of evidence quality, benefits, and harms. BRIDGE-Wiz promotes implementability by providing a pseudocode rule, suggesting deontic modals, and limiting the use of ‘consider’.
Conclusion
Users found that BRIDGE-Wiz facilitates the development of clear, transparent, and implementable guideline recommendations.
Keywords: Controlled natural language, controlled terminologies and vocabularies, developing/using clinical decision support (other than diagnostic) and guideline systems, developing/using wireless and in-the-field applications (mHealth), guideline development, guidelines, implementation, knowledge bases, knowledge representations, ontologies, portable, practice guidelines
Over the past two decades, a major global initiative has been undertaken to develop, disseminate, and implement clinical practice guidelines. The worthy goals of the initiative have been to diminish inappropriate practice, to improve health outcomes, to control the rising costs of health care, and to speed the translation of research into practice.1 Major resource investments—both intellectual and financial—have been dedicated to creating a scientifically based approach to define and describe what constitutes appropriate practice. The initiative has spawned a plethora of guidelines, protocols, algorithms, decision support tools, care paths, and utilization and performance review criteria, and has contributed mightily to the development of evidence-based medicine.2 At the same time, many practice guidelines have become de facto repositories of the best knowledge about ‘ideal’ clinical practice.
A longstanding informatics challenge has been to develop efficient mechanisms whereby valid medical knowledge can be translated accurately and transparently into recommendations about appropriate care. However, the gulf between raw knowledge and clear, transparent, and implementable recommendation statements is broad.
To date, much of the effort in knowledge translation has been focused on extracting knowledge from guidelines that have been finalized and published.3–5 This paper describes a novel software application intended to support and facilitate the development of clinical practice guidelines. BRIDGE-Wiz (Building Recommendations In a Developer's Guideline Editor) captures the knowledge required to create guideline recommendations in a systematic, structured manner using a software wizard. In this paper, we will (1) describe the need for such an application and the environment in which its development and testing occurred, (2) present key design objectives, (3) describe the function of the BRIDGE-Wiz application, (4) report the evaluation of the program's usefulness and usability, and (5) discuss lessons learned from the deployment of the application.
Current guideline development
According to the National Guidelines Clearinghouse, more than 280 different organizations are currently involved in the development of evidence-based, English-language guidelines (http://www.guidelines.gov/, accessed 6 May 2011). The 2003 survey by Burger et al6 indicated that the average cost of the development of a guideline in the USA was US$200 000.
To create evidence-based guidelines, knowledge must be distilled from the scientific literature and combined with expert judgment. Authors typically create evidence tables, meta-analyses, and systematic reviews to summarize the facts that are known about a topic. Combining these facts with expertise and judgment to create clear, actionable recommendations requires a skill set unfamiliar to most domain experts.7 Although some professional organizations maintain standing panels of guideline developers,8–10 most US guideline development efforts are undertaken by ad-hoc teams of volunteer domain experts convened to address a single topic, who must learn the process of guideline authoring concomitantly with producing a useful product.
Not surprisingly, in a process as complex and resource intensive as the translation of medical knowledge into recommendations about appropriate care, a number of shortcomings have been identified. Guideline recommendations are often vague and underspecified, lack any linkage to supporting evidence or documentation of how they were developed, and prove to be difficult to transform into systems that actually influence the behavior of care providers.11–14
We use the term ‘implementability’ to refer to a set of characteristics that predict the relative ease of implementation of guideline recommendations.15 While measures of successful implementation include improved adherence to guideline-prescribed processes of care and, ultimately, improved patient outcomes, indicators of implementability focus on the ease and accuracy of the translation of guideline advice into systems that influence care. The most critical dimensions of implementability are decidability—precisely under what conditions (such as age, gender, clinical findings, laboratory results) to perform a recommended activity—and executability—a specification of exactly what to do under those circumstances. A recommendation that lacks decidability or executability will not be implementable until that issue is resolved. The guideline implementability appraisal (GLIA) was developed to identify these and other obstacles to successful implementation that are intrinsic to the guideline itself.15
Once guidelines have been implemented in systems of care, factors that favor guideline acceptance include clarity of recommendations,16 confidence in the guideline's source and the reasons for development,17–19 and recommendations based on evidence.16 In addition, guidelines are more positively perceived when they have the imprimatur of a professional organization20 and when they are promoted as improving quality of care.21
Because guideline authoring is complex, a number of organizations have published guidelines on how to develop guidelines12 22 23 and the Institute of Medicine has recently published standards for the development of trustworthy practice guidelines.24 Several authors have created tools to support the implementation of guideline recommendations, but none has achieved widespread use.4 25–29 The GRADE Collaboration has developed GRADEpro, a computer program that creates a summary of findings table that is useful during the process of systematic review.30 Although these systems provide frameworks for guideline development, we are not aware of any that takes the user through a step-by-step process for creating recommendations.
Controlled natural language for guideline recommendations
Natural language is highly expressive, easily understandable, requires no extra learning effort, and has been called ‘the ultimate knowledge representation language’.31 However, representation of technical, legal, and health concepts in natural language often leads to variable interpretation. Guideline recommendations are often semantically complex and regularly incorporate logical gaps and even contradictions that promote ambiguity and frustration on the part of implementers.32 In addition, there is a mismatch between the unstructured narrative form in which guidelines are usually authored and the formal structured representation that is necessary for operationalizing guideline knowledge.33
A controlled natural language is ‘a precisely defined subset of natural language obtained by restricting the grammar and vocabulary in order to reduce or eliminate ambiguity and complexity’.34 These restrictions result in increased terminological consistency and standardization, generally simplified sentence structure, and standardized document format and layout. Controlled languages are ‘specifically designed to serve as documentation, specification, or knowledge representation languages’.35 Since the Caterpillar Tractor Company created Caterpillar Fundamental English in the 1960s, controlled languages have been widely used in many industries—particularly aerospace—to facilitate the development and use of maintenance manuals.36 37 Despite this widespread penetration in industry, controlled natural languages have not been widely applied in healthcare.
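As a toy illustration of the idea (not Caterpillar Fundamental English and not the grammar BRIDGE-Wiz actually uses), a controlled language can be enforced mechanically by restricting sentences to a closed lexicon; the sketch below, with an invented lexicon, uses Python purely for brevity:

```python
# Toy controlled-language filter: every word must come from a closed lexicon.
# A real controlled natural language also restricts grammar, not just vocabulary.
LEXICON = {
    "clinicians", "should", "prescribe", "amoxicillin",
    "for", "children", "with", "acute", "otitis", "media",
}

def is_controlled(sentence: str) -> bool:
    """Accept only sentences whose every word is in the restricted vocabulary."""
    return all(word in LEXICON for word in sentence.lower().split())
```

For example, ‘Clinicians should prescribe amoxicillin’ passes the filter, while a free-text paraphrase containing out-of-lexicon words does not.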
We hypothesized that the expression of guideline recommendations using a controlled natural language approach would lead to recommendations that are clearer and more easily implemented.38
Guideline development process
The development of guideline recommendations in a typical professional society setting occurs at one or more face-to-face committee meetings.39 Under the direction of a chairperson, the panel—comprising 10–15 domain experts, an epidemiologist/methodologist, society staff, and sometimes an informatician—convenes to devise key action statements and to link these statements transparently to the evidence base. Before the meeting, the methodologist will have performed a systematic review of the literature pertinent to the guideline topic and will have summarized it and made the summary available to the committee. Committee members will have reviewed the applicable evidence and will have considered individually (and often in a teleconference) the types of recommendations that are to be made. The committee will have received training that covers elements of critical literature review and the importance of transparency in guideline development. The panel assembles in a meeting room and statements are projected on a screen as they are proposed and iteratively amended.
A clear, transparent, and implementable key action statement (and its accompanying text) indicates:38
  • When (ie, precisely under what circumstances)
  • Who
  • Ought (with what level of implied obligation)
  • To do what
  • To whom
  • How and why.
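The framework above can be pictured as a simple structured record. The sketch below is illustrative only: the field names and the sample statement are invented for this example and do not reflect BRIDGE-Wiz's internal (Java/Swing) representation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class KeyActionStatement:
    """Illustrative container for the when/who/ought/what/to-whom/how-why framework."""
    conditions: List[str]  # when: circumstances under which the action applies
    actor: str             # who: the subset of the intended audience that acts
    obligation: str        # ought: deontic term ('must', 'should', 'may')
    actions: List[str]     # to do what: verb-object clauses
    target: str            # to whom: the target population
    rationale: str         # how and why: supporting evidence and reasoning

# A hypothetical statement, loosely modeled on the paper's metformin example:
kas = KeyActionStatement(
    conditions=["patient is newly diagnosed with type 2 diabetes"],
    actor="clinicians with prescribing authority",
    obligation="should",
    actions=["start metformin as first-line treatment"],
    target="children and adolescents",
    rationale="preponderance of benefit over harm",
)
```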
Our primary design objective was to demonstrate that recommendations could be developed by assembling the information required to populate this framework in a systematic and replicable manner. BRIDGE-Wiz collects this information from a variety of sources shown in table 1 and described below in the System description section. A second objective was for the program to be considered usable and useful by guideline development teams. The results of an evaluation of usefulness and usability are described in the Evaluation section.
Table 1
A framework for representation of critical information in key action statements is mapped to the steps in BRIDGE-Wiz in which the relevant information is collected
BRIDGE-Wiz is a standalone desktop application written using Java 1.6 and the Swing API. The application is built to run on both Windows and Macintosh platforms. It is designed to operate as a software wizard, ie, a program that leads a user through a clearly defined sequence of activities. Wizards are most useful for tasks that are complex, infrequently performed, or unfamiliar. The visual layout is specifically designed for the program to be run with the use of a projector screen so that development group members can contribute during the recommendation building process. ‘Chunking’ the major tasks (breaking them down into small groups of related operations) and sequencing them provides a path through the multiple activities that are required to develop a recommendation.38 BRIDGE-Wiz produces two files as output in Microsoft .doc format: (1) a Conference on Guideline Standardization (COGS) checklist40 populated with responses to the checklist questions, and (2) a recommendation profile (described below in step 15).
BRIDGE-Wiz was designed for use at meetings of guideline developer panels at which recommendations for appropriate care are authored. The program prompts for progression through the development process and documents and displays the progressive development of key action statements. Evolving recommendations are projected on a screen to focus the panel discussion and to help ensure clarity, transparency, and implementability of the guideline. The panel chairperson, a staff member of the sponsoring organization, or another member of the panel may operate the program. The products of the panel discussion are summarized in an evidence profile39 associated with each key action statement. (Examples of the effective use of an evidence profile can be seen in Schwartz et al.41)
At the outset, BRIDGE-Wiz provides an editable template to support documentation of the 19 criteria for a valid and usable guideline defined by the Conference on Guideline Standardization.40 This text includes explicit statements regarding guideline focus, goals, intended users and setting, target population, developer, funding and potential conflicts, the method of evidence collection, recommendation grading criteria, methods for synthesizing evidence, pre-release review, update plan, and the role of patient preferences. Definition of the guideline's intended audience and the target population for the recommendations is particularly critical. This material can be entered with group input or can be completed by the committee chair and methodologist at another time.
Next, the program focuses on the development of key action statements, ie, recommendations for actions to be performed by the guideline's intended audience. BRIDGE-Wiz supplies a sequence of prompts and editing windows in which one or more key action statements and supporting text are created.
In deconstructing the complex cognitive task of developing a key action statement, we hypothesized that the initial goal should be to clearly define the intended action followed by an examination of the circumstances under which the action would be appropriate. This emphasis on actions focuses attention on verbs and on the deontic terminology that defines the intended level of obligation. Integrated into this process are checks on the executability of the action and the decidability of the conditions under which it is to be performed, derived from the GLIA instrument.15
Applying these principles, the BRIDGE-Wiz program sequentially prompts the user to:
  • Step 1. Choose an action type from a dropdown list. Analysis has demonstrated that guideline recommendations call for a limited number of action types: test, monitor, enquire, examine, conclude, prescribe, perform procedure, refer/consult, educate/counsel, prevent, document, prepare, advocate, and dispose (figure 1).42 In BRIDGE-Wiz, definitions of each action type are dynamically displayed. Users can also choose the negation of an action type (‘Do not…’) by selecting a checkbox. Throughout the process, a free text ‘Notes’ field allows the recording of ideas and key phrases that emerge during the panel discussion that will be used to amplify the key action statements as the guideline document is finalized.
    Figure 1
    Empirically derived classification of action types. In a review of more than 700 randomly selected recommendations, the recommended actions for the vast majority could be classified into these categories.
  • Step 2. Choose a verb based on that action type from a dropdown list (figure 2). In preparatory work, we classified more than 700 recommendations from the Yale guideline recommendation corpus43 as to action type and extracted the verbs associated with each. From this list we identified transitive verbs, which take a direct object and thus link the action taken with the object upon which it is taken. A total of 279 verbs pertinent to the 14 action types were categorized and incorporated. BRIDGE-Wiz users are permitted to add verbs to the list when necessary. Use of the verb ‘consider’ is prohibited (unless the selected action type is ‘conclude’) because it is difficult to measure when an action has been contemplated, and measurability is a critical factor in successful implementation.44
    Figure 2
    Users choose an action type (‘PRESCRIBE’) and select a transitive, active-voice verb (‘start’). Next they define the object of the verb (‘metformin as first-line treatment’).
  • Step 3. Define the object for the verb (figure 2). The system prompts: {verb} ‘what?’45 and the authors complete the action clause.
  • Step 4. Add action(s) if the key action statement calls for multiple activities. Users may enter additional actions and their objects and link them using either AND or OR conjunctions. To avoid potential ambiguity associated with a mixture of Boolean conjunctions, once an AND or OR is selected, all additional action clauses may only be linked with the same Boolean connector.
  • Step 5. Check executability. BRIDGE-Wiz displays each action clause in a separate cell and asks the user whether every action is stated specifically and unambiguously. If not, users are encouraged to clarify the proposed action.
  • Step 6. Define precisely the conditions under which the action is to be performed. Users are given wide latitude to describe each applicable circumstance, but complex sets of conditions again may be linked only with a single Boolean conjunction.
  • Step 7. Check decidability. BRIDGE-Wiz displays each condition clause in a separate cell and asks the user whether members of the guideline's intended audience would consistently determine if each condition has been satisfied. If not, users are encouraged to clarify the conditions.
  • Step 8. Describe benefits followed by risks, harms, and costs that may be anticipated if the key action statement is carried out. Members of the panel are encouraged to contribute expected outcomes—major and minor. Probabilities of these outcomes may also be reported. Some organizations may elect not to consider economic costs.
  • Step 9. Judge benefit–harms balance (figure 3). The list of benefits and the list of risks, harms, and costs are each displayed against a background of a balance scale. BRIDGE-Wiz prompts for a judgment of whether there is a preponderance of benefit over risk-harm (or vice versa in the case of a recommendation against) or an equilibrium between benefit and harm.
    Figure 3
    After constructing individual tables that reflect benefits versus risks, harms, and costs, users are prompted to judge whether there is a balance or a preponderance.
  • Step 10. Select aggregate evidence quality that supports this recommendation. Evidence quality designations are organization-specific and define a level of confidence in the validity of the evidence on which the key action statement is based. In 2002, the Agency for Healthcare Research and Quality found 40 different systems that addressed grading the strength of a body of evidence.46 BRIDGE-Wiz currently supports the systems in use by the American Academy of Pediatrics (AAP)/American Academy of Otolaryngology–Head and Neck Surgery (AAO–HNS), the US Preventive Services Task Force, and the GRADE Collaboration.
  • Step 11. Review proposed strength of recommendation and term for level of obligation (figure 4). Based on Lomotan et al,47 and using the AAP/AAO–HNS system for grading recommendation strength (strong recommendation, recommendation, option, no recommendation), BRIDGE-Wiz proposes a strength of recommendation (level of intended obligation to adhere) and deontic language (‘must’, ‘should’, ‘may’) for the developing key action statement.47
    Figure 4
    BRIDGE-Wiz summarizes the judgments about balance or preponderance of benefits versus risks and proposes a strength of recommendation and an appropriate deontic term.
  • Step 12. Define the actor. BRIDGE-Wiz prompts for ‘who’ is to carry out the key action statement. Individual key action statements may not apply to the globally defined intended audience of the guideline. In many cases, the actor is a subset of the audience (eg, not all clinicians have prescribing authority) and in others it may be a completely different group (eg, prepare actions may apply to administrators, educate/counsel actions may apply to patients).
  • Step 13. Choose recommendation style. BRIDGE-Wiz formats the evolving key action statement four ways from which the panel can select a preferred style:
    • a. IF {Conditions} THEN {Verb-Object}
    • b. {Verb-Object} IF/WHEN/WHENEVER {Conditions}
    • c. The {Developer} {strongly recommends/recommends/suggests} IF/WHEN/WHENEVER {Conditions} THEN {Verb-Object}
    • d. The {Developer} {strongly recommends/recommends/suggests} {Verb-Object} IF/WHEN/WHENEVER {Conditions}.
  • Step 14. The key action statement is displayed in a window for final editing.
  • Step 15. Output a recommendation and partly populated evidence profile in a .doc-formatted editable document. The evidence profile recommended in Rosenfeld and Shiffman39 is partly populated. The document includes the key action statement, the aggregate evidence quality, the benefits, risks, harms, and costs identified by the panel, and the developers' assessment of the balance or imbalance between benefits and harms. It also incorporates fields for value judgments, reasons for intentional vagueness, a role for patient preferences, and any exclusions specific to the recommendation at hand. Finally, the accumulated notes section is appended. The skeleton recommendation can be further edited if necessary. The panel reflects on the values applied in judging the balance of benefit and harm and the role of patient preference, and records this information in the profile while the discussion is fresh in their minds.
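Several of the checks in the steps above lend themselves to simple mechanical enforcement. The following sketch illustrates four of them: the ban on ‘consider’ (step 2), the single-Boolean-connector rule (steps 4 and 6), a strength-to-modal proposal (step 11), and the IF–THEN rule format (step 13, style a). Everything here is a simplification for illustration; the function names, verb handling, and strength-to-modal mapping are not BRIDGE-Wiz's actual Java/Swing implementation.

```python
def check_verb(action_type: str, verb: str) -> None:
    """'consider' is rejected unless the action type is 'conclude' (measurability)."""
    if verb.lower() == "consider" and action_type != "conclude":
        raise ValueError("'consider' is not measurable; choose an observable verb")

def join_clauses(clauses: list, connector: str) -> str:
    """All clauses must share one Boolean connector; mixing AND with OR is ambiguous."""
    if connector not in ("AND", "OR"):
        raise ValueError("connector must be AND or OR")
    return f" {connector} ".join(clauses)

def propose_deontic(strength: str) -> str:
    """Illustrative, simplified mapping from recommendation strength to a deontic modal."""
    return {"strong recommendation": "must",
            "recommendation": "should",
            "option": "may"}[strength]

def format_rule(conditions: str, action: str) -> str:
    """Pseudocode rule in IF-THEN format (recommendation style a)."""
    return f"IF {conditions} THEN {action}"

# Hypothetical usage with invented clinical content:
check_verb("prescribe", "start")
rule = format_rule(
    join_clauses(["age >= 10 years", "HbA1c >= 9%"], "AND"),
    "start insulin therapy",
)
```

Calling `check_verb("prescribe", "consider")` would raise an error, mirroring the wizard's refusal to accept an unmeasurable action.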
Methods
We examined the perceived usefulness and usability of BRIDGE-Wiz in five guideline development efforts. In our first evaluation to determine the feasibility of using BRIDGE-Wiz in the setting of a guideline panel that was actively developing recommendations, one of the authors (RNS) operated the program at a meeting of the panel developing a guideline for the management of type 2 diabetes in children and adolescents in October 2009. The panel was a partnership of the AAP, the Lawson Wilkins Pediatric Endocrine Society, the American Academy of Family Physicians, the American Diabetes Association, and the American Dietetic Association. Using a beta version of the program, the panel developed six key action statements. Following the meeting, key informant interviews were conducted with the panel chair, the AAP staff member, and a panel member. They indicated that the program was appreciated as useful and fit for purpose.
Subsequently, the program was used with three panels sponsored by the AAP—acute otitis media in June 2010 (11 panelists), acute sinusitis in August 2010 (11 panelists), and obstructive sleep apnea in November 2010 (10 panelists)—and one panel sponsored by the AAO–HNS—sudden hearing loss in January 2011 (18 panelists). In each case, a brief slide presentation introduced the software. At each of the subsequent AAP panels, RNS operated the software for the first part of the meeting and a panelist replaced him for the latter half. At the AAO–HNS meeting, a staff member operated the program independently for the entire meeting. BRIDGE-Wiz was used to develop all key action statements at these meetings.
Panelists at these four meetings anonymously completed two-page surveys at the end of each 2-day meeting. One of the authors (RNS) distributed the survey after the panel session; academy staff collected the surveys and no incentives were offered. Panelists were free to opt out.
The surveys were developed to address hypothesized capabilities and deficiencies of the BRIDGE-Wiz application and its approach to recommendation development. Surveys asked for the level of panelists' agreement or disagreement with 16 statements. The scale was strongly disagree, disagree, neutral, agree, and strongly agree. Statements were worded so that in each case agreement supported the usefulness and/or usability of BRIDGE-Wiz. Median Likert scores (with 25th and 75th percentiles) were tallied for each survey item, assigning values of +2 (‘strongly agree’), +1 (‘agree’), 0 (‘neutral’), −1 (‘disagree’), and −2 (‘strongly disagree’).
In addition, panel members rated the ‘overall usefulness’ of BRIDGE-Wiz on a scale that ranged from 1 (not at all useful, impedes activity) to 10 (essential, should be used in all guideline development efforts). Panelists were also asked to record the most negative and the most positive aspects of the program in free text. These responses were analyzed to identify recurring themes and patterns. The survey was piloted among the authors to ensure clarity and comprehensiveness. The Yale Human Investigation Committee provided a waiver.
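The tallying described above can be sketched in a few lines. The scoring map follows the paper's scheme (+2 for ‘strongly agree’ down to −2 for ‘strongly disagree’); the sample responses are hypothetical, not the study's actual data.

```python
import statistics

# Scoring scheme from the survey analysis: +2 'strongly agree' ... -2 'strongly disagree'.
SCORES = {"strongly disagree": -2, "disagree": -1, "neutral": 0,
          "agree": 1, "strongly agree": 2}

def summarize(responses):
    """Return (25th percentile, median, 75th percentile) of Likert scores for one item."""
    vals = sorted(SCORES[r] for r in responses)
    q1, _, q3 = statistics.quantiles(vals, n=4)  # 3 cut points; middle one is the median
    return q1, statistics.median(vals), q3

# Hypothetical responses for one survey item:
example = ["agree", "agree", "strongly agree", "neutral", "agree"]
```

Here `summarize(example)` yields a median of 1, ie, ‘agree’.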
Fifty guideline developers representing four different panels completed the formal survey (a 100% response rate); 47 of the 50 panelists rated overall usefulness. The median overall usefulness rating awarded by the panelists was 8 (IQR 7.5–9). There was considerable homogeneity of response among the four panels surveyed, as shown in table 2. Panelists expressed a high level of agreement with each of the 16 statements, with no ‘strong disagreements’ with any of the statements (see figure 5). The median level of agreement was ‘agree’ for 13 of 16 statements and ‘strongly agree’ for the remainder, with little dispersion.
Table 2
Survey responses by guideline panel
Figure 5
Panelists' responses to survey questions. Numbers on the bars represent the number of respondents choosing each level of agreement. The number of panelists responding to the item is displayed in the column entitled ‘Total responses’.
Several themes emerged from comments about positive and negative aspects of the program. On the negative side, the most common theme was that using BRIDGE-Wiz was time-consuming (‘takes more time’, ‘makes multistep recommendations cumbersome to create’). This was balanced by positive comments (‘time saving for controversial statements’, ‘enhanced efficiency in guideline development process’) and by arguments that using the software ‘focuses the discussion’, ‘a key value comes from how it takes the focus of the group members from their multiple chairs to the single screen’. Another positive theme related to standardization of the process of development of key action statements: ‘standardizes, improves clarity’, ‘forced to think about specifics of benefits and harms’, ‘forces principles of strength of recommendation’.
Individuals commented on ‘software clunkiness’ and an ‘inability to conveniently alter statements in progress’. These comments resulted in changes to the program's user interface. Others praised the software's ‘forc(ing) specificity’, ‘seems to reduce ambiguities’, ‘keeps wording consistent’, and ‘forces group to confront ambiguities’.
Among five guideline development panels sponsored by two different professional organizations there was substantial agreement that use of BRIDGE-Wiz could promote quality, clarity, transparency, and implementability. In addition, BRIDGE-Wiz supports a process that was considered to be useful and usable. Employing a wizard design, the BRIDGE-Wiz program formalizes a process for propounding guideline recommendations in a systematic manner. Using a controlled natural language approach, the program creates and populates a template for recommendation statements. The use of BRIDGE-Wiz promotes overall guideline quality by incorporating the COGS checklist of criteria.40 By prompting the developers to address each of the COGS parameters, a more comprehensive, usable, and valid guideline document is created.
The use of BRIDGE-Wiz enhances clarity of recommendations by promoting the use of transitive verbs and active voice. The performer of the action is clearly designated, rather than hidden as is the case with passive voice statements. In addition, the developer panel is asked to examine explicitly the decidability of each condition and the executability of each proposed action. The program restricts the use of Boolean connectors that link conditions and actions because ambiguity may be introduced when clauses that are ANDed are combined with clauses that are ORed.
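The ambiguity that motivates the connector restriction is easy to demonstrate: a clause such as ‘A AND B OR C’ has two defensible readings that disagree on some inputs. A short, purely illustrative Python check enumerates the disagreements:

```python
from itertools import product

# Two readings of the mixed clause "A AND B OR C":
def reading1(a, b, c):
    return (a and b) or c   # (A AND B) OR C

def reading2(a, b, c):
    return a and (b or c)   # A AND (B OR C)

# Enumerate the truth assignments on which the two readings disagree.
divergent = [(a, b, c)
             for a, b, c in product([True, False], repeat=3)
             if reading1(a, b, c) != reading2(a, b, c)]
```

Because `divergent` is non-empty (the readings differ whenever C is true but A is false), a recommendation mixing AND with OR leaves implementers to guess which reading the panel intended; restricting each statement to a single connector removes the choice.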
BRIDGE-Wiz enhances the transparency of each recommendation by requiring and documenting a systematic appraisal of evidence quality and weighing of anticipated benefits, risks, harms, and costs that contribute to recommendation strength. In addition, the evidence profile produced as output by the program incorporates slots to describe the values applied in judging the balance of benefits and harms, the role of patient preference in how the recommendation should be implemented, and the reason any conditions or actions might be deliberately vague or underspecified (eg, incomplete or controversial evidence, inability to reach consensus, unwillingness to set a legal standard of care).
BRIDGE-Wiz promotes the implementability of recommendation statements in a number of ways. The program suggests an appropriate standardized deontic operator and defines a strength of recommendation for each statement. This helps to communicate to implementers the level of obligation that the developers intend. This deontic determination is particularly important for developers of computer-mediated decision support who can use this information to design interfaces for particular rules that range from full-stop to simply advisory. Also, limitation on the use of the verb ‘consider’ increases the likelihood that adherence to a recommendation will be measurable. Finally, the output of the program includes a pseudocode ‘rule’ in IF–THEN format.
The panelists agreed that use of BRIDGE-Wiz had a salutary effect on the process of guideline development. Displaying the sequence of prompts provided by the program on a projection screen focuses the attention of the developer panel and diminishes distraction. On several occasions when the discussion became tangential, we observed that a panelist pointed to the screen and directed the group to address the issue at hand.
Limitations
  • Use of a regimented system for guideline development will be resisted by some guideline authors. Support for the BRIDGE-Wiz approach from the sponsoring organization and the panel chairperson, together with a demonstration of how the software works in developing a typical recommendation, is critical to overcoming these concerns.
  • BRIDGE-Wiz has only been tested at two professional organizations. Although these organizations represented the ends of a spectrum from primary care to subspecialty surgery, the wide variety of extant development methodologies and organizational cultures may limit acceptance of the program by some organizations.
  • Although BRIDGE-Wiz incorporates three different systems for grading evidence quality and recommendation strength, the program has only been evaluated with a single grading system.
  • Complex recommendation statements—in which a single recommendation statement is associated with more than one evidence quality indicator or recommendation strength indicator—are not supported.
  • Because BRIDGE-Wiz defines knowledge in a declarative manner, procedural details—as might be displayed best in a flowchart—are not well supported.48
  • The output of BRIDGE-Wiz is a structured, natural language recommendation in IF–THEN format. Before implementation in a clinical decision support system is possible, the recommendation will need to undergo coding of its concepts in standardized vocabularies, selection of an appropriate decision support modality, user interface design, and integration with clinical workflow.4
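One of the pre-implementation steps noted above, binding a rule's free-text concepts to standardized vocabulary codes, can be sketched as follows; the bindings and helper function are hypothetical, and the codes are placeholders, not real vocabulary identifiers:

```python
# Sketch of concept binding prior to decision support implementation.
# The codes below are placeholders, NOT real SNOMED CT identifiers.
concept_bindings = {
    "acute otitis media": ("SNOMED CT", "<placeholder-code>"),
    "assess pain": ("SNOMED CT", "<placeholder-code>"),
}

def unbound_concepts(rule_concepts, bindings):
    """Return the rule's concepts that still lack a vocabulary binding."""
    return [c for c in rule_concepts if c not in bindings]

# "hearing test" remains to be bound before the rule can be computed.
print(unbound_concepts(["acute otitis media", "hearing test"],
                       concept_bindings))
```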
Conclusions
BRIDGE-Wiz was developed to facilitate the authoring of clear, transparent, and implementable guideline recommendation statements. The program was found to be useful and usable when applied for the development of five guidelines.
We plan to continue to evaluate and improve the software with additional guideline development panels and to make it available to other guideline developer organizations. The software will be distributed by means of the GEM website at http://gem.med.yale.edu/BRIDGE-Wiz.
Footnotes
Funding: This work was supported by grant 2R01LM007199 funded by the National Library of Medicine and the Agency for Healthcare Research and Quality.
Competing interests: None.
Ethics approval: Ethics approval was obtained from Yale University Human Investigation Committee (HIC).
Provenance and peer review: Not commissioned; externally peer reviewed.
References
1. Merritt TA, Palmer D, Bergman DA, et al. Clinical practice guidelines in pediatric and newborn medicine: implications for their use in practice. Pediatrics 1997;99:100–14. [PubMed]
2. Chassin MR, Loeb JM, Schmaltz SP, et al. Accountability measures—using measurement to promote quality improvement. N Engl J Med 2010;363:683–8. [PubMed]
3. Tu SW, Musen MA, Shankar R. Modeling guidelines for integration into clinical workflow. Stud Health Technol Inform 2004;107:174–8. [PubMed]
4. Peleg M, Patel VL, Snow V, et al. Support for guideline development through error classification and constraint checking. Proc AMIA Symp 2002:607–11. [PMC free article] [PubMed]
5. Shiffman RN, Michel G, Essaihi A, et al. Bridging the guideline implementation gap: a systematic approach to document-centered guideline implementation. J Am Med Inform Assoc 2004;11:418–26. [PMC free article] [PubMed]
6. Burgers JS, Grol R, Klazinga NS, et al. Towards evidence-based clinical practice: an international survey of 18 clinical guideline programs. Int J Qual Health Care 2003;15:31–45. [PubMed]
7. Sebring RH, Herrerias CT. The political anatomy of a guideline: a collaborative effort to develop the AHCPR-sponsored practice guideline on otitis media with effusion. Jt Comm J Qual Improv 1996;22:403–11. [PubMed]
8. National Institute for Health and Clinical Excellence The Guidelines Manual. London: National Institute for Health and Clinical Excellence, 2009.
9. Qaseem A, Snow V, Douglas DK, et al. The development of clinical practice guidelines and guidance statements of the American College of Physicians: summary of methods. Ann Intern Med 2010;153:194–9. [PubMed]
10. Woolf SH, DiGuiseppi CG, Atkins D, et al. Developing evidence-based clinical practice guidelines: lessons learned by the US Preventive Services Task Force. Annu Rev Public Health 1996;17:511–38. [PubMed]
11. Balser M, Coltell O, van Croonenborg J, et al. Protocure: supporting the development of medical protocols through formal methods. Stud Health Technol Inform 2004;101:103–7. [PubMed]
12. Zielstorff RD. Online practice guidelines: issues, obstacles and future prospects. J Am Med Inform Assoc 1998;5:227–36. [PMC free article] [PubMed]
13. Fletcher RH, Fletcher SW. Clinical practice guidelines. Ann Intern Med 1990;113:645–6. [PubMed]
14. Marcos M, Roomans H, ten Teije A, et al. Improving medical protocols through formalisation: a case study. In: Proceedings of the Sixth World Conference on Integrated Design and Process Technology. Pasadena, CA, USA, 2002.
15. Shiffman RN, Dixon J, Brandt C, et al. The Guideline Implementability Appraisal (GLIA): development of an instrument to identify obstacles to guideline implementation. BMC Med Inform Decis Mak 2005;5:23. [PMC free article] [PubMed]
16. Grol R, Dalhuijsen J, Thomas S, et al. Attributes of clinical guidelines that influence use of guidelines in general practice: observational study. BMJ 1998;317:858–61. [PMC free article] [PubMed]
17. James PA, Cowan TM, Graham RP, et al. Family physicians' attitudes about and use of clinical practice guidelines. J Fam Pract 1997;45:341–7. [PubMed]
18. Inouye J, Kristopatis R, Stone E, et al. Physicians' changing attitudes toward guidelines. J Gen Intern Med 1998;13:324–6. [PMC free article] [PubMed]
19. Hayward RS, Guyatt GH, Moore KA, et al. Canadian physicians' attitudes about and preferences regarding clinical practice guidelines. Can Med Assoc J 1997;156:1715–23. [PMC free article] [PubMed]
20. Tunis SR, Hayward RS, Wilson MC, et al. Internists' attitudes about clinical practice guidelines. Ann Intern Med 1994;120:956–63. [PubMed]
21. Puech M, Ward J, Hirst G, et al. Local implementation of national guidelines on lower urinary tract symptoms: what do general practitioners in Sydney, Australia suggest will work? Int J Qual Health Care 1998;10:339–43. [PubMed]
22. American College of Cardiology Foundation Practice Guidelines and Quality Standards. http://www.cardiosource.org/Science-And-Quality/Practice-Guidelines-and-Quality-Standards.aspx (accessed 20 Jun 2011).
23. American Academy of Neurology AAN Guideline Development Process. http://www.aan.com/go/practice/guidelines/development (accessed 20 Jun 2011).
24. Institute of Medicine Clinical Practice Guidelines We Can Trust. Washington, DC: National Academies Press, 2011.
25. Lobach DF. A model for adapting clinical guidelines for electronic implementation in primary care. Proc Annu Symp Comput Appl Med Care 1995:581–5. [PMC free article] [PubMed]
26. Zielstorff RD, Teich JM, Paterno MD, et al. P-CAPE: a high-level tool for entering and processing clinical practice guidelines. Partners Computerized Algorithm and Editor. Proc AMIA Symp 1998:478–82. [PMC free article] [PubMed]
27. Humber M, Butterworth H, Fox J, et al. Medical decision support via the internet: PROforma and Solo. Stud Health Technol Inform 2001;84:464–8. [PubMed]
28. De Clercq PA, Blom JA, Hasman A, et al. GASTON: an architecture for the acquisition and execution of clinical guideline-application tasks. Med Inform Internet Med 2000;25:247–63. [PubMed]
29. Maviglia SM, Zielstorff RD, Paterno M, et al. Automating complex guidelines for chronic disease: lessons learned. J Am Med Inform Assoc 2003;10:154–65. [PMC free article] [PubMed]
30. Brozek J, Oxman A, Schünemann H. GRADEpro Version 3.2 for Windows. 2008.
31. Sowa JF. Knowledge Representation: Logical, Philosophical, and Computational Foundations. Pacific Grove, CA: Brooks Cole Publishing Co, 2000.
32. Patel VL, Kaufman DR. Cognitive science and biomedical informatics. In: Shortliffe EH, editor. , ed. Biomedical Informatics: Computer Applications in Health Care and Biomedicine. New York, NY: Springer, 2006:176.
33. Greenes RA, Peleg M, Boxwala A, et al. Sharable computer-based clinical practice guidelines: rationale, obstacles, approaches, and prospects. Stud Health Technol Inform 2001;84:201–5. [PubMed]
34. Controlled Natural Language. 2010. http://www.en.wikipedia.org/wiki/Controlled_natural_language (accessed 13 Aug 2010).
35. Fuchs NE, Kaljurand K, Schneider G. Attempto Controlled English meets the challenges of knowledge representation, reasoning, interoperability, and user interfaces. 19th International FLAIRS Conference 2006. http://www.attempto.ifi.uzh.ch/site/pubs/ (accessed 21 Jun 2011).
36. AECMA AECMA Simplified English: A Guide for Preparation of Aircraft Maintenance Documentation in the International Aerospace Maintenance Language. Brussels, Belgium: European Association of Aerospace Industries, 1995.
37. Douglas S, Hurst M. Controlled language support for Perkins Approved Clear English (PACE). In: Proceedings of the First International Workshop on Controlled language Applications. Leuven, Belgium, 1996:93–105.
38. Shiffman RN, Michel G, Krauthammer M, et al. Writing clinical practice guidelines in controlled natural language. In: Fuchs NE, editor. , ed. Proceedings Controlled Natural Language 2009. New York, NY: Springer, 2010:265–80.
39. Rosenfeld RM, Shiffman RN. Clinical practice guideline development manual: a quality-driven approach for translating evidence into action. Otolaryngol Head Neck Surg 2009;140:S1–43. [PMC free article] [PubMed]
40. Shiffman RN, Shekelle P, Overhage JM, et al. Standardized reporting of clinical practice guidelines: a proposal from the Conference on Guideline Standardization. Ann Intern Med 2003;139:493–8. [PubMed]
41. Schwartz SR, Cohen SM, Dailey SH, et al. Clinical practice guideline: hoarseness (dysphonia). Otolaryngol Head Neck Surg 2009;141:S1–31. [PubMed]
42. Essaihi A, Michel G, Shiffman RN. Comprehensive categorization of guideline recommendations: creating an action palette for implementers. AMIA Annu Symp Proc 2003:220–4. [PMC free article] [PubMed]
43. Hussain T, Michel G, Shiffman RN. The Yale guideline recommendation corpus: a representative sample of the knowledge content of guidelines. Int J Med Inform 2009;78:354–63. [PubMed]
44. Rogers EM. Diffusion of Innovations. New York, NY: Free Press, 1962.
45. Verberne AA. Managing symptoms and exacerbations in pediatric asthma. Pediatr Pulmonol Suppl 1997;15:46–50. [PubMed]
46. West S, King V, Carey TS, et al. Systems to Rate the Strength of Scientific Evidence. Evidence Report/Technology Assessment No. 47. Rockville, MD: Agency for Healthcare Research and Quality, 2002.
47. Lomotan EA, Michel G, Lin Z, et al. How “should” we write guideline recommendations? Interpretation of deontic terminology in clinical practice guidelines: survey of the health services community. Qual Saf Health Care 2010;19:503–13. [PMC free article] [PubMed]
48. Peleg M, Garber JR. Extending the GuideLine Implementability Appraisal (GLIA) instrument to identify problems in control flow. AMIA Annu Symp Proc 2010:627–31. [PMC free article] [PubMed]