The study was a randomized controlled trial in 2 large university-affiliated primary care clinics located in the Midwest from August 2000 through June 2001. Participants were 24 staff physicians and 74 resident physicians in internal medicine and family practice. Three staff physicians and 2 residents practicing during the study period declined to participate in the study. Staff physicians averaged 35 scheduled visits per week (range 12 to 50 visits), while resident physicians averaged 6 scheduled visits per week.
Physicians were randomized into 2 groups, intervention (N = 50) and control (N = 48). The patients of the intervention physicians were encouraged to use a new triage-based e-mail system—the Electronic Messaging and Information Link (E-MAIL)—to communicate with their physicians and clinic staff about scheduling, billing, health issues, prescription renewals, and referrals. All e-mails were automatically routed to a central resource account managed by a nurse “navigator” who routed messages within the account to appropriate staff. Physicians received copies of their messages but replied to only those requiring physician input, such as patient-specific health questions. Clinic staff entered the central account to receive and respond to messages not requiring physician input.
The e-mail system was promoted to patients of intervention physicians in several ways. Because there was no valid primary care patient roster, we promoted the system to patients who were likely to be those of intervention physicians. First, intervention physicians were encouraged to give their patients, during clinic visits, a card with a study-specific e-mail address and a description of the triage system and how to use it. Second, we mailed flyers to a random sample of 5,000 patients who had visited an intervention doctor in the 6 months preceding the study period or were scheduled to visit an intervention doctor during the study period. The flyers encouraged patients to e-mail their physician using the special e-mail addresses and educated patients about appropriate content, response times, and message handling by the clinics. Patients who used the e-mail system were asked to follow specific guidelines in e-mailing their physicians, including: 1) do not use e-mail for emergencies or for urgent messages; 2) do not use e-mail to communicate about sensitive topics, such as HIV; 3) use e-mail to communicate with your physician and health care team about the following: appointment scheduling, billing questions, health questions, prescription renewals, referrals, and test results; and 4) send separate e-mails for each type of request and include specific information (e.g., for referrals, include information about whether the referral was requested by a physician, previous visits, and preferred specialist). Additionally, all patients who used the e-mail system received automatic responses to each new e-mail message they sent, reinforcing the educational messages covered in the flyers. Finally, intervention physicians were also encouraged to forward patient e-mails from their personal e-mail accounts to the triage account and to encourage patients to use their study-specific addresses in future correspondence.
Control physicians and their patients did not have access to this account. However, independent of our study, patients of both intervention and control physicians could e-mail their physicians by using the physician's personal e-mail account available through physicians' personal cards (some of which had personal e-mail addresses on them) or by searching the medical center directory.
We collected information on e-mail and phone call volume and the visit distribution of all physicians in the study during 5 two-week periods spread evenly over the course of the study. We also performed patient and physician surveys at the conclusion of the study.
E-mail communication between patients and providers occurred directly through a physician's personal e-mail account or, in the case of physicians in the intervention arm of the study, through the e-mail triage system. Because personal e-mail accounts could not be monitored by study personnel, we measured e-mail volume based on physician recall of the number of e-mail messages received directly from patients during the prior 2 weeks. Because approximately 20% of physicians did not report patient e-mail volume during various waves of data collection (mostly resident physicians equally divided between the intervention and control groups), we imputed these missing estimates to 0. We collected detailed information on the number of e-mails sent directly by patients to each intervention physician through the triage system. Information on phone call volume and type of call was collected periodically using staff phone logs. Information about the number and type of visits was obtained through the medical center information system. Visit types included arrived new visit, arrived return visit, and no-show, which indicated that a scheduled patient neither arrived nor canceled the visit.
We performed a physician survey at the conclusion of the study that assessed physicians' use of e-mail with patients, attitudes toward the benefits of e-mail, how bothered they were by different types of patient e-mail messages, and satisfaction with patient and staff communication. The self-administered survey took about 5 minutes to complete and was administered to physicians during clinic sessions.
We also performed a patient survey at the conclusion of the study that assessed use of e-mail with physicians, perceived barriers and benefits of using e-mail with providers, preferred modes of communication for different health-related issues, general satisfaction with communication with physicians and staff, and demographic information. We selected a random sample of 900 patients (450 patients who had seen an intervention physician 1 or more times and a control physician no more than 1 time during the study period; and 450 patients who had seen a control doctor 1 or more times during the study period and an intervention physician no more than 1 time during the study period) to complete a mailed survey. The Dillman method was used to maximize response rates.12
Survey data were entered and a 10% sample was checked for errors (which yielded only 2.5 errors per 1,000 entered variables).
Variables and Analysis
We created a number of physician-level variables for the utilization analysis. Because the volume of communication was highly correlated with the level of clinical activity of individual physicians (e.g., average number of scheduled visits per week) and the level of clinical activity varied markedly across physicians, we incorporated it into all utilization variables. We constructed 3 utilization variables: weekly patient e-mails per 100 scheduled visits (number of reported patient e-mails per week divided by average number of scheduled visits per week during the study period × 100); weekly phone calls per 100 scheduled visits (number of phone calls per week divided by average number of scheduled visits per week × 100); and “no-show rate” (number of patient no-shows per month per 100 scheduled visits).
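These rate definitions reduce to simple arithmetic; a minimal sketch (all names and numbers below are illustrative, not study data) is:

```python
def per_100_visits(weekly_count, weekly_scheduled_visits):
    """Weekly events per 100 scheduled visits, as defined above."""
    return weekly_count / weekly_scheduled_visits * 100

# e.g., a hypothetical physician reporting 7 patient e-mails against
# 35 scheduled visits per week contributes 20 e-mails per 100 visits:
email_rate = per_100_visits(7, 35)

# Missing e-mail reports were imputed to 0, as described earlier:
reported_emails = None  # physician did not report this wave
email_count = reported_emails if reported_emails is not None else 0
```

The monthly no-show rate follows the same form, with monthly no-shows in the numerator.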
Physician survey variables included: 1) an “e-mail benefits” scale, indicating attitudes toward the benefits of using e-mail with patients (7 items; α = 0.87); item responses (5-point Likert scale from strongly disagree to strongly agree) were assigned scores of −2 to +2 and summed (range, −14 to 14), with higher scores indicating more perceived benefits; 2) an “e-mail bother” scale, indicating how bothered physicians were by different types of e-mail messaging from patients (8 items; α = 0.87); item responses (3-point scale from not at all a problem to a big problem) were assigned values of 1 to 3 and summed (range, 8 to 24), with higher scores indicating more bother; and 3) a general communication scale, indicating attitudes toward communication with patients and staff (8 items; α = 0.95); item responses (5-point Likert scale from strongly disagree to strongly agree) were assigned scores of −2 to +2 and summed (range, −16 to 16), with higher scores indicating more favorable attitudes.
We constructed a number of variables from the patient survey, including: 1) an “e-mail barriers” scale, the sum of positive responses to a checklist of 7 potential barriers to using e-mail (7 items; α = 0.76), which was compared across groups; 2) an “e-mail benefits” scale, indicating attitudes toward the benefits of using e-mail with health care providers (4 items; α = 0.84); item responses (5-point Likert scale from strongly disagree to strongly agree) were assigned scores of −2 to +2 and summed, with higher scores indicating more favorable attitudes; and 3) a general communication scale, indicating attitudes toward communication involving physicians and staff (7 items; α = 0.84), based on a previously validated scale of communication in primary care13; item responses (5-point Likert scale from strongly disagree to strongly agree) were assigned scores of −2 to +2 and summed, with higher scores indicating more favorable attitudes.
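The Likert scoring shared by the physician and patient scales is a simple mapping and sum; a hypothetical sketch (response labels and the example respondent are illustrative, not survey data):

```python
# 5-point Likert responses mapped to -2..+2 and summed across items,
# as described for the e-mail benefits and communication scales.
LIKERT_SCORES = {
    "strongly disagree": -2,
    "disagree": -1,
    "neutral": 0,
    "agree": 1,
    "strongly agree": 2,
}

def scale_score(responses):
    """Sum of scored item responses for one respondent."""
    return sum(LIKERT_SCORES[r] for r in responses)

# A 4-item scale (like the patient e-mail benefits scale) can range
# from -8 to +8; this hypothetical respondent scores 1 + 2 + 0 + 1 = 4:
example = scale_score(["agree", "strongly agree", "neutral", "agree"])
```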
Bivariate comparisons between intervention and control groups were made using nonparametric statistics (Kruskal-Wallis, Fisher exact, or χ2). We tested for between-study group differences in trends of the utilization of e-mail, phone, and visit volume over time using generalized estimating equations techniques appropriate for repeated measures analysis. We ran separate models for each utilization variable. For example, in 1 model, the dependent variable was phone volume and the independent variables included group (intervention vs control), the discrete time periods (1 through 5), and physician status (resident vs faculty physician). We ran this model using Poisson and binomial regression techniques for each of the utilization variables and tested for differences between trends using z tests. We tested for differences in physician and patient attitude scale scores using ordinary least-squares regression to control for other factors, such as physician type (resident vs faculty physician), and adjusted standard errors for clinic clustering.
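For a single count outcome, each Poisson model described above can be sketched roughly as follows (the notation is ours, not the authors'; i indexes physicians and t the 5 data-collection periods):

```latex
\log \mathrm{E}[Y_{it}] = \beta_0 + \beta_1\,\mathrm{group}_i + \gamma_t + \beta_2\,\mathrm{resident}_i
```

where Y_it is, for example, the phone call volume of physician i in period t, group_i indicates intervention vs control, γ_t are effects for the discrete periods, and resident_i indicates physician status; the generalized estimating equations account for the correlation of a physician's repeated measurements over the 5 periods.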
The study protocol and all materials were approved by the University of Michigan Institutional Review Board. The study sponsor was not involved with any aspect of study design, data collection, or analyses. Only the authors had access to the data in the study, and we accept full responsibility for the integrity of the data and the accuracy of the data analysis.