The Brown Institutional Review Board approved the study protocol.
We identified states with the highest electronic prescribing activity on the SureScripts network in the fall of 2005. For reasons of logistics and feasibility, we selected a convenience sample of the 6 states with the highest volume of e-prescribing transactions on the SureScripts network. Within these 6 states, SureScripts identified physician software systems willing to participate in the testing of the e-prescribing standards pilot project. The physician software systems participating in the study and their geographic representation (in brackets) included: OnCallData, InstantDX, LLC, Gaithersburg, MD (Rhode Island); PocketScript, Zix Corporation, Dallas, TX (Massachusetts, New Jersey); Rcopia, DrFirst, Inc., Rockville, MD (Massachusetts); Care360, Medplus, Inc., Mason, OH (New Jersey, Florida); eMPOWERx, GoldStandard Multimedia, Inc., Tampa, FL (Florida); and Touchworks, AllScripts, LLC, Chicago, IL (Nevada, Tennessee). Thus, 6 different e-prescribing systems were included in the study, but geographic location was highly correlated with software system. We required that each system vendor identify and enroll medical practices with a patient mix of at least 25% Medicare-eligible patients. Physicians participating in the study received a $500 incentive. We cannot report participation rates with any level of certainty because our first contact with practices was when we received signed participation agreements.
The protocol of the larger e-prescribing standards study included an evaluation of medical practices, the physician software systems, and personnel in community pharmacies. The multi-component medical practice protocol consisted of surveys of prescribers, patients and staff; focus groups and semi-structured interviews with prescribers and staff conducted on-site; and at least one half day of site observation (including observation of patient–physician interactions related to medication use).
For the current study, we used a mixed methods approach that included analysis of quantitative and qualitative data. Specifically, we analyzed the portion of the clinician survey relating to drug alerts, as well as the focus groups relating to the drug alerting features of the software. The data for this study are derived from 64 practices, all experienced in electronic prescribing, that participated in on-site visits. All the data were collected before the new standards (and any changes to the electronic prescribing software to accommodate them) were implemented.
The survey was designed to capture relevant information regarding prescriber perceptions of e-prescribing on efficiency, workflow, and quality, as well as their perceptions about patient communication relevant to medication issues. Content of the survey was informed by input from a multidisciplinary advisory team including practicing prescribers, pharmacists, and researchers. Prescribers completed the survey in either paper or web-based format as part of the medical practice protocol.
We evaluated responses to a series of questions regarding the frequency with which overrides of drug alerts occurred. Separate questions evaluated overriding of drug–drug interactions, allergies, and drug alerts regarding dosing. Responses included “always”, “most of the time”, “sometimes”, and “never”. Given concerns regarding sparse data, we collapsed the “always” and “most of the time” responses into one category. Prescribers (n = 157) completed surveys available via the web (68%) or paper (32%) in advance of or during the site visit.
The analysis of the survey included descriptive statistics of the respondents (gender and job title) as well as descriptive statistics of the practices included in the study. Cross-tabulations of clinician responses to the drug alerting questions by physician software system were conducted. Fisher’s exact test was used to calculate more conservative p-values owing to the small sample size.9
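The cross-tabulation analysis described above can be sketched as follows. The counts and the `fisher_exact_2x2` helper are illustrative assumptions, not the study’s actual data or software; a stdlib-only 2×2 version is shown, whereas the study’s full tables (four response levels by six systems) would require an R×C generalization such as the Freeman–Halton extension, typically run in a statistics package.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins whose probability does not exceed the observed table's.
    """
    r1, r2 = a + b, c + d          # row totals
    c1, n = a + c, a + b + c + d   # first-column total, grand total

    def prob(k):
        # P(first cell = k) under fixed margins (hypergeometric)
        return comb(r1, k) * comb(r2, c1 - k) / comb(n, c1)

    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    # Small tolerance guards against float ties with the observed probability
    return sum(prob(k) for k in range(lo, hi + 1)
               if prob(k) <= p_obs * (1 + 1e-9))

# Hypothetical collapsed counts: rows = two software systems,
# columns = "always/most of the time" vs. "sometimes/never" overrides
p = fisher_exact_2x2(12, 8, 5, 15)
```

The exact test is preferred over chi-square here because several cells in the observed cross-tabulations would have small expected counts.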
We also analyzed information collected during on-site focus groups with 276 prescribers and their staff. Two highly trained research assistants held focus groups (with a meal provided) before hours, at lunch, or after hours at the discretion of the practice between April and August 2006. Consent forms and demographic surveys were collected, and a sign listing the main topics for discussion was placed on the table for participants to view. An open-ended approach was used to elicit information about the benefits and drawbacks of e-prescribing. Focus group participants were asked to describe their experiences with e-prescribing software as well as their suggestions for improving e-prescribing. Participants spontaneously addressed patient safety issues and interaction alerts in the context of these discussions. Probes included questions about what aspects of e-prescribing are valuable, what participants found difficult, suggested improvements in office procedures and software functionality, and other resources that might be valuable. Other general probing was conducted using facilitative questions (“Can you tell me more about that?”, “Any other opinions?”) and clarification (summarizing and checking for accuracy: “When you say…, what do you mean by that?”).
Focus groups were recorded using 2 digital recorders with PZM microphones. Once all digital recordings were transcribed, research assistants double-checked every transcript for potential errors and corrected them as needed.
An extensive hierarchical coding structure was initially developed to handle the large volume of qualitative data. One of the authors (CD) designed the initial structure, based on the focus group protocol and a review of initial transcripts; it was revised and/or expanded during active coding. Codes were identified and defined so that diverse comments from participants could be collected in logical groupings for review and analysis. Using NVivo qualitative analysis software (version 7), 15 different parent nodes were defined. We focused our attention on 2 parent nodes, impact on clinical practice and software features, which were selected because our codebook instructed coders to place quotes related to alerting systems in these nodes. Within these parent nodes, we focused on the nodes entitled patient safety, patient care, and drug alerts, again because the codebook explicitly directed coders to place quotes relating to alerts there. Another node, quality of care under impact on clinical practice, was also examined to evaluate quotes related to how the alerts affected practice and quality. Coders were trained in coding definitions and the overall coding structure. A code book defined all codes and their relationships. All quotes for the current analysis were derived from the focus groups.
Consistency in the coding across team members was assured by extensive training, coding meetings, a coding handbook that provided the coding structure and definitions, group exercises, and by having 19% of the transcripts independently coded a second time by a different member of the coding team. Reports comparing the coding were generated and reviewed. These reports were used to identify any areas of coding that were not consistently applied by coders and for which additional training was required. Finally, a qualitative data review was conducted on the double-coded transcripts. Passages in the double-coded transcripts were generally captured by both coders, indicating consistent coding of those transcripts by the research staff.
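A double-coding comparison like the one described above can also be quantified with a chance-corrected agreement statistic such as Cohen’s kappa. The helper and the example codes below are illustrative assumptions; the study reports a qualitative review of coding-comparison reports, not a kappa statistic.

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Cohen's kappa: chance-corrected agreement between two coders
    who each assigned one code to the same sequence of passages."""
    if len(coder1) != len(coder2) or not coder1:
        raise ValueError("coders must label the same non-empty passages")
    n = len(coder1)
    p_observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    f1, f2 = Counter(coder1), Counter(coder2)
    # Expected agreement if each coder assigned codes independently,
    # in proportion to their own marginal code frequencies
    p_expected = sum(f1[c] * f2[c] for c in f1) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical codes assigned to six passages by two coders
a = ["drug_alerts", "patient_safety", "drug_alerts",
     "quality", "quality", "drug_alerts"]
b = ["drug_alerts", "patient_safety", "patient_safety",
     "quality", "quality", "drug_alerts"]
kappa = cohens_kappa(a, b)  # 5/6 raw agreement, corrected downward for chance
```

Percent agreement alone overstates reliability when a few codes dominate, which is why chance correction is conventional for double-coded qualitative data.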