J Gen Intern Med. 2009 June; 24(6): 710–715.
Published online 2009 April 15. doi:  10.1007/s11606-009-0971-3
PMCID: PMC2686771

A Web-based Generalist–Specialist System to Improve Scheduling of Outpatient Specialty Consultations in an Academic Center



BACKGROUND

Failed referrals for specialty care are common and often represent medical errors. Technological structures and processes account for many failures. Scheduling appointments for subspecialty evaluation is a first step in outpatient referral and consultation.


OBJECTIVE

We determined whether moving from paper-based referrals to a Web-based system with automated tracking features was associated with greater scheduling of appointments among referred patients.


DESIGN

Staggered implementation of a quality-improvement project, with comparison of intervention and control groups.


PARTICIPANTS

Patients 21 or more years of age referred from any of 11 primary-care clinics to any of 25 specialty clinics.


INTERVENTION

Faxed referrals were replaced by a Web-based application shared by generalists and specialists, with enhanced communications and automated notification to the specialty office.


MEASUREMENTS

We compared scheduling before and after implementation and time from referral to appointment. A logistic regression analysis adjusted for demographics.


RESULTS

Among 40,487 referrals, 54% led to scheduled specialty visits before intervention, compared to 83% with intervention. The median time to appointment was 168 days without intervention and 78 days with intervention. Scheduling increased more when duplicate referrals were not generated (54% for single orders, 24% for multiple orders). After adjustment, referrals with the intervention were more than twice as likely to have scheduled visits.


CONCLUSIONS

With a new Web-based referrals system, referrals were more than twice as likely to lead to a scheduled visit. This system improves access to specialty medical services.

KEY WORDS: referral and consultation, ambulatory care, information systems


As medicine undergoes organizational changes, the relationships, interfaces, and flow of patients between primary care and subspecialists are becoming more complex1. Up to 20% of general medical patients are referred to specialists2,3, and many referrals are inappropriate, are never completed, or lead to unintended events4. Failure to complete a requested consultation is a serious problem, with about half of consultants’ recommendations going uncompleted5–8. Failures to schedule outpatient specialty consultations are often caused by technological structures or processes, such as lost, duplicated, incomplete, or mishandled papers, facsimiles, or other communications. One study of coordination of referrals and physician satisfaction showed that information was sent to specialists in only about half of referrals9. When routine scheduling of outpatient consultation fails, even for technical reasons, the consultant has no opportunity to evaluate the patient as planned, and quality of care is diminished. Such failures represent medical errors, “failure of a planned action to be completed as intended or use of a wrong plan to achieve an aim”10.

With gradually increasing adoption of electronic medical record systems, electronic referral systems have emerged as strategic networks for communication between sites of primary and secondary care, and with them comes the potential to improve access to care, continuity of care, and clinical management11. To increase the fraction of consultation orders leading to appointments, we designed a Web-based collaborative generalist-specialist system in an academic medical center. The system stores and tracks information about referrals, allows generalists and specialists to access the system, and provides notifications or reminders to both ends when follow-up or intervention is needed. We hypothesized that this system would increase, by at least 50%, the fraction of referrals leading to a scheduled visit for consultation.


METHODS

Setting The home institution is a tax-supported, urban healthcare system providing outpatient, inpatient, and community-based health services to residents of a Midwestern county in the U.S. The hospital and a core of outpatient clinics are located on the campus of an academic medical center. Additional community health centers providing primary care are located around the metropolitan area. In 2006, the institution provided 1,166,988 outpatient visits, including 194,546 visits to community health centers. The payer mix is 28% Medicaid, 21% Medicare, 9.5% commercial, and 36% uninsured. A special program of services is provided for many low-income patients who are not eligible for Medicaid benefits. The institution has more than 1,000 physicians on its medical staff. Adult patients receiving primary care at the institution tend to receive all major medical care within the system, and most referrals for patients originate within the system and target specialty clinics or sites that are also in the system. All scheduling of routine and urgent visits is ultimately recorded in an electronic registration and scheduling system. The registration system also electronically stores contact information about patients, including gender, race, and date of birth.

Usual Care Under usual care, primary-care physicians requested consultation by creating an order in the electronic medical records system. The order was reviewed by primary-care office personnel, who printed and faxed it to the specialty office, where it was reviewed again for completeness and appropriateness. The specialty office personnel could then schedule an appointment for consultation or deny the request and return it to the primary-care office. Requests for consultation were sometimes denied if clinical or demographic information was missing or if the patient’s condition did not meet inclusion or exclusion criteria specified by policies of the institution or consulting service.

To schedule an appointment with the patient, the specialty clinic would attempt to contact the patient by telephone, up to three times. Patients who could not be reached would be mailed an appointment card. Once this was complete, the referral was closed. The specialty-care physicians would review cases when patients failed to appear for appointments, to determine whether the patients needed to be seen immediately or could wait for the next available appointment. The patient would be called by telephone; the referral coordinator would then reschedule the visit. Patients who failed to appear three times were required to see their primary-care provider to obtain a new referral.

Intervention To evaluate and improve the process and outcomes of referrals and consultations, a team of generalists and specialists met for one hour every two weeks, using a plan-do-study-act approach to quality improvement. The team, representing medicine, nursing, administration, informatics, and research, examined barriers to completion of scheduling and devised a new approach. Under the new methods developed, primary-care office personnel recorded clinicians’ requests for consultation electronically, via a new Web-based interface. Faxing was replaced by encrypted, Internet-based delivery of requests to specialty offices, and all incidents of non-scheduling, whether intended or unintended, led to delivery of electronic mail to referring office staff. Unintended non-scheduling—that is, requests not receiving attention from specialty office personnel within 48 hours for an urgent request, seven days for a semi-urgent request, or fourteen days for a routine request—also led to automated reminders being sent to the specialty office personnel, including supervisors. Data were stored on a secure server. The intervention was dubbed “Referral Order Management Portal” (ROMP). ROMP was driven by a Web-based application adapted from commercial software12 used to support a help desk for technical support in the institution. A summary of key features is provided in Table 1. Since the original vendor of the help-desk software performed the core modifications to create ROMP, the local institution does not own the application and is not licensed to distribute it.
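The escalation rule described above (attention within 48 hours for an urgent request, seven days for semi-urgent, fourteen days for routine) can be sketched as a small function. This is an illustrative sketch only; the function and field names are hypothetical and are not drawn from ROMP itself.

```python
from datetime import datetime, timedelta

# Reminder thresholds as described in the text; names are illustrative.
DEADLINES = {
    "urgent": timedelta(hours=48),
    "semi-urgent": timedelta(days=7),
    "routine": timedelta(days=14),
}

def needs_reminder(ordered_at: datetime, urgency: str, now: datetime) -> bool:
    """True if the specialty office has not acted within the allowed window,
    triggering an automated reminder to specialty office personnel."""
    return now - ordered_at > DEADLINES[urgency]
```

For example, an urgent request ordered three days ago would trigger a reminder, while a routine request ordered ten days ago would not.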

Table 1
Characteristics of Referral Before and After Intervention

Study Design and Implementation This study emerged from a quality-improvement project. Following implementation at the on-campus primary-care clinic, we used computer-generated, randomly ordered staggering of the implementation at all ten remaining referring primary-care sites (community health centers) and 25 specialty consulting sites or clinics. This allowed a small team of implementers to introduce the system at each site and provide proper training, while allowing the investigators to assess referrals and scheduling before (control) and after (intervention) implementation at each site. These sites, before or after intervention, were used for the main analysis. Ten additional consulting sites without ROMP served as an additional comparison group. Throughout 2005, we activated the intervention at the 11 referring and 25 consulting sites, one at a time. Since, through this staggered implementation, a referring site with access to the intervention could use usual care to refer to consulting sites without the intervention, ROMP was programmed to accept referrals only from authorized referring sites and only to authorized consulting sites. Specialty clinic staff using ROMP were also instructed to return to the referring site any referral made through usual care, for referring sites with access to ROMP. Eligible patients were all patients 21 or more years of age referred from a primary-care clinic to a specialty clinic. Eligible physicians were all referring and consulting physicians who generated a referral or evaluated at least one participating patient at a participating site.
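The computer-generated, randomly ordered staggering of implementation could be produced with something as simple as the following sketch; the site names and the fixed seed are hypothetical placeholders, not the study's actual schedule.

```python
import random

# Ten remaining referring primary-care sites (names are placeholders).
sites = [f"site_{i}" for i in range(1, 11)]

rng = random.Random(42)  # fixed seed so the activation order is reproducible
order = sites.copy()
rng.shuffle(order)
# The intervention is then activated one site at a time, in this random order.
```

A fixed seed makes the generated schedule auditable, which matters when the order of activation defines the control and intervention periods.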

Analysis For analysis, data from the electronic registration and scheduling system were extracted. Clinics in the main analysis (the 11 referring and 25 consulting sites) served as control sites before receiving the intervention and as intervention sites afterward. In other words, the control clinics were the same clinics as the intervention clinics, observed while they still used faxed referrals. An observation was coded as interventional when both referring and consulting sites had access to the intervention, and as pre-interventional when either site did not. To provide additional control data, we included six months of referrals data from before the intervention was ever implemented; during these six months, all clinics and referrals were categorized as controls.

For specialty clinics not in the main analysis (the 10 additional consulting sites without ROMP), an observation was called “pre-interventional” if the referral was ordered prior to the last date on which the intervention was implemented for clinics in the main analysis. We assessed whether referrals led to scheduling of specialty appointments corresponding to the referrals. Scheduled appointments were defined as appointments that matched the referral, such as a cardiology appointment following a cardiology referral, and were scheduled to occur within 180 days of the matching referral.

Because improperly generated referrals, or referrals to especially busy specialty clinics, may have been generated multiple times before scheduling occurred, we assessed whether each referral was a first or a duplicate referral to a specialty; referrals with duplicates may be associated with different levels of scheduling than referrals without duplicates. If an order was repeated for a patient over time, we created a variable documenting the number of repeated orders.

We compared intervention and control groups using chi-square testing. To determine the independent relationship between the intervention and scheduling of the specialty appointment, we conducted logistic regression, adjusting for demographics and for strata consisting of combinations of referring and specialty sites. The significance threshold was P < 0.05 for all statistical analyses.

The analysis described above is based on the stepwise implementation of the intervention over several months, and unmeasured, time-related factors may have affected the results. Therefore, to test a narrower study interval, we conducted an additional logistic regression analysis without the initial six months of control data. As in the main analysis, controls represented referrals made before the intervention was available at both referring and consulting sites. We also manually examined a sample of denied referrals to compile information about why they were denied. The study was approved by the Institutional Review Board.
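The coding rule for intervention versus control observations can be expressed as a small function. The site names and activation dates below are hypothetical examples for illustration, not the study's actual schedule.

```python
from datetime import date

# Hypothetical activation dates for a referring and a consulting site.
activation = {
    "clinic_A": date(2005, 3, 1),
    "cardiology": date(2005, 5, 1),
}

def code_observation(referring: str, consulting: str, ordered: date) -> str:
    """An observation is interventional only when BOTH the referring and the
    consulting site had access to the intervention at the time of the order;
    otherwise it is pre-interventional (control)."""
    def live(site: str) -> bool:
        return site in activation and ordered >= activation[site]
    return "intervention" if live(referring) and live(consulting) else "control"
```

For instance, a referral from clinic_A to cardiology ordered in June 2005 would be coded as intervention, while the same referral ordered in April 2005 (before cardiology was activated) would be a control.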


RESULTS

We identified 56,549 referrals among 26,456 patients. The mean age was 49 years; 15% of patients were 65 or more years of age, 67% were women, 44% were African–American, 43% were white, and 11% were Hispanic. The main analysis comprised 40,487 referrals (Table 2). Among these, 54% led to scheduled specialty visits before ROMP, compared to 83% with ROMP. The median time to appointment was 168 days without ROMP and 78 days with ROMP. More variation was found among clinical sites than across gender, race, and age. Scheduling increased more when duplicate orders for referral were not generated (54% for single orders vs. 24% for multiple orders).

Table 2
Referrals and Specialty Appointments, by Patients’ Characteristics and Clinical Sites

Odds ratios from logistic regression models are presented in Table 3. In the models shown, each combination of referring site and specialty site (11 × 25 = 275 combinations) is treated as a separate stratum; five combinations had no observations, leaving 270 strata. After adjustment, referrals with ROMP were 4.3 times more likely to have a scheduled visit than those without. Single orders and age were also independently statistically significant, with older patients slightly more likely to have scheduled visits. When the interval was narrowed by excluding the initial six months of control data, the odds ratio decreased to 2.8 but remained statistically significant (n = 11,998 referrals).
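As a point of interpretation, the adjusted odds ratio of 4.3 is close to the crude odds ratio implied by the unadjusted proportions (83% scheduled with ROMP vs. 54% without), and an odds ratio overstates the relative risk when the outcome is common. The following back-of-envelope check is illustrative arithmetic only; it does not reproduce the study's adjusted model.

```python
# Unadjusted proportions of referrals leading to a scheduled visit,
# taken from the Results above.
p_with, p_without = 0.83, 0.54

# Crude odds ratio: ratio of the odds of scheduling with vs. without ROMP.
odds_ratio = (p_with / (1 - p_with)) / (p_without / (1 - p_without))

# Relative risk: ratio of the probabilities themselves.
risk_ratio = p_with / p_without

print(round(odds_ratio, 1))  # ~4.2, near the adjusted 4.3 in Table 3
print(round(risk_ratio, 2))  # ~1.54, far smaller than the odds ratio
```

This is why the same data can be described both as an odds ratio above 4 and as referrals being "more than twice as likely" to be scheduled: with a common outcome, the two measures diverge substantially.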

Table 3
Logistic Regression Results for Scheduling of Specialty Appointments, Stratified by Referring Site and Specialty

Scheduling of appointments among non-ROMP specialties outside the main analysis is shown in Table 4. Approximately 34% of pre-ROMP referrals had scheduled appointments, compared to 33% after ROMP was fully implemented for clinics in the main analysis. Logistic regression comparing the two periods revealed an odds ratio of 0.97 (P = 0.324). Using a logistic regression model adjusting for the same factors as in Table 3, the odds ratio decreased to 0.92 (P = 0.020).

Table 4
Referrals and Appointments, by Specialty, for Specialty Sites not in the Main Analysis

Five percent of referrals were denied. Table 5 shows reasons for denial, among a sample of 1,255 denied referrals. Among these, duplicate orders occurred in 26%, patients did not meet clinical criteria in 26%, and the incorrect clinical site was targeted in 20%.

Table 5
Reasons for 1,255 Denied Referrals


DISCUSSION

Specialty medical care is vital for our primary-care patients, and, except for self-referrals, outpatient specialty care does not typically occur unless referrals are processed and completed without error. Although lack of timeliness is a major problem with current referral systems13, and delays in completion of consultation are associated with adverse clinical outcomes14, hardly any studies have systematically evaluated the impact of scheduling systems on outcomes in a healthcare setting. In this study, we designed and implemented a Web-based system, shared by generalists and specialists, for scheduling outpatient consultation. Independent of other measured factors, including temporal changes in referrals, referrals with ROMP were at least twice as likely to lead to a scheduled visit as those without ROMP. Without a systematic, partly automated approach, referrals, which often travel by fax, can be lost through mishandling or even a fax machine that runs out of paper, and such errors can be difficult to track or even identify quickly. Our system improves care by adding a shared information and communications system, as well as automated alerts about referrals that have not been addressed promptly.

Better coordination of referral could improve efficiency, costs, and quality of care. Various approaches have been taken to address the problems of referral. The Department of Veterans Affairs initiated a Quality Improvement Technical Assistance Project specifically to address the completion of consultations15. Some have reported electronic messaging systems between primary and secondary care16. The Walter Reed Army Medical Center implemented an “Ask a Doc” referral system based on electronic mail, with an average response time to consultation of less than one day17. Additional work in this field could determine which features of such systems are most important and whether combining features improves outcomes.

Benefits of a computer-based referrals system such as the one described here could well be seen in other settings. The local system used the Web and a standard Web browser, without directly requiring an electronic medical records system. The system created the methods and capacity to record and deliver clinical history and the reason for the referral. Although we did not formally study usability, we observed that the system was highly usable and valued by both staff and providers. It fit well into workflow, since referrals staff had desktop computers and could spend blocks of time using the computer interface to process referrals in batches. The time required to process a referral was not measured but was estimated to have decreased, because the intervention eliminated lost faxes and the need to interrupt workflow with duplicate fax transmissions and telephone calls to track paperwork. A more detailed assessment of usability would require further study.

With attention to design, systems of this type could more effectively address common reasons for denied referrals, such as duplicates, unmet clinical criteria, or an incorrectly targeted clinic site. This can ultimately reduce waiting times and improve the efficiency and accuracy of billing18. One radiology study comparing online scheduling to traditional telephone-based scheduling showed that 78% of physicians felt ready for online scheduling, and 75% of physicians who tried the system stated that it was easier to schedule patients online19. Another radiology study showed that patients’ waiting times decreased from two or three weeks with telephone-based scheduling to two or three days with electronic scheduling20.

In the analysis with the longer interval, single orders and older ages were independently more likely to lead to scheduled visits. Coordinating referrals for patients who are hospitalized, institutionalized elsewhere, or require frequent medical care may be difficult and require multiple referrals, and this may explain the lower scheduling rates for those with multiple referral orders. With the methods of this study, we cannot ascertain why older patients were somewhat more likely to have scheduled appointments. Their greater comorbidity may translate to more urgent clinical needs, or their insurance coverage—more universal with Medicare—may have played a role.

Some referrals did not seem to benefit from the intervention. Referrals for continence or rehabilitation saw a decrease or only a modest increase in scheduling. Although we did not investigate these specific cases, we suspect special circumstances unrelated to the intervention but that could not be overcome by the intervention. The intervention, for example, cannot address the problems of full clinics or other backlogs or idiosyncrasies in the workflow of processing referrals. The intervention seemed to address only the most common causes of lost or delayed referrals.

The study has limitations. Since staff at consulting sites managed both faxed and Web-based referrals, contamination may have occurred if experience with the intervention led to closer follow-up of controls, but this would provide us with a conservative estimate of the intervention’s effect. This analysis also did not examine kept visits, which would ultimately be important in assessing completion of consultation, but the intervention was meant to target scheduling, rather than aspects of consultation more related to patients’ own behaviors. Attenuation of effect may be seen with a longer follow-up period.

In conclusion, with a Web-based referrals and scheduling system, referrals were nearly three times more likely to lead to a scheduled visit, after adjustment for other factors. Especially in a country with such a large supply of specialists, access to specialty care should not be hindered to such a large degree by the type of preventable error studied here. Although health information technology cannot solve all issues in our clinics, it can nearly eradicate the problem of failed scheduling. Nevertheless, even with computer-based approaches, scheduling systems can be better. An ideal system will have clinicians providing the key clinical pieces while other office staff handle the key administrative elements. Further integrating referrals with a more robust clinical decision support system could lead to more appropriate referrals and fewer denials. Thus, to optimize clinical care, we will need systems that better bring together not only generalists and specialists but also clinicians and office staff, so that each role is served effectively and efficiently by those with the matching skills and expertise.


ACKNOWLEDGMENTS

Dr. Weiner was supported on this work by grant number 5K23AG020088 from the National Institute on Aging. Aspects of this work were presented at a meeting of the Midwest Region Society of General Internal Medicine, Chicago, IL, 30 September 2005. This study was supported by Wishard Health Services, Indianapolis, Indiana.

Conflicts of Interest None of the authors have any conflicts of interest to declare.





REFERENCES

1. Linzer M, Myerburg RJ, Kutner JS, et al. Exploring the generalist-subspecialist interface in internal medicine. Am J Med. 2006;119(6):528–37. [PubMed]
2. Wilkin D, Smith A. Explaining variation in general practitioner referrals to hospital. Fam Pract. 1987;4(3):160–9. [PubMed]
3. Calman NS, Hyman RB, Licht W. Variability in consultation rates and practitioner level of diagnostic certainty. J Fam Pract. 1992;35(1):31–8. [PubMed]
4. Rosemann T, Wensing M, Rueter G, et al. Referrals from general practice to consultants in Germany: if the GP is the initiator, patients’ experiences are more positive. BMC Health Serv Res. 2006;6:5. [PMC free article] [PubMed]
5. Cefalu CA. Adhering to inpatient geriatric consultation recommendations. J Fam Pract. 1996;42(3):259–63. [PubMed]
6. Ballard WP, Gold JP, Charlson ME. Compliance with the recommendations of medical consultants. J Gen Intern Med. 1986;1(4):220–4. [PubMed]
7. Allen CM, Becker PM, McVey LJ, et al. A randomized, controlled clinical trial of a geriatric consultation team. Compliance with recommendations. JAMA. 1986;255(19):2617–21. [PubMed]
8. Shah PN, Maly RC, Frank JC, et al. Managing geriatric syndromes: what geriatric assessment teams recommend, what primary care physicians implement, what patients adhere to. J Am Geriatr Soc. 1997;45(4):413–9. [PubMed]
9. Forrest CB, Glade GB, Baker AE, et al. Coordination of specialty referrals and physician satisfaction with referral care. Arch Pediatr Adolesc Med. 2000;154(5):499–506. [PubMed]
10. Committee on Quality of Health Care in America, Institute of Medicine. To Err is human: building a safer health system. In: Kohn LT, Corrigan JM, Donaldson M, eds. Washington, DC: National Academy of Sciences; 1999.
11. Wootton R, Harno K, Reponen J. Organizational aspects of e-referrals. J Telemed Telecare. 2003;9(Suppl 2):S76–9. [PubMed]
12. MindGent. MindGent Service Center. Available at: Accessed 03 June 2008.
13. Gandhi TK, Sittig DF, Franklin M, et al. Communication breakdown in the outpatient referral process. J Gen Intern Med. 2000;15(9):626–31. [PMC free article] [PubMed]
14. Olivotto IA, Gomi A, Bancej C, et al. Influence of delay to diagnosis on prognostic indicators of screen-detected breast carcinoma. Cancer. 2002;94(8):2143–50. [PubMed]
15. Ordin D. QUERI II and VA performance improvement. Forum (VA Office of Research & Development). August 2006:3.
16. Moorman PW, Branger PJ, van der Kam WJ, et al. Electronic messaging between primary and secondary care: a four-year case report. J Am Med Inform Assoc. 2001;8(4):372–8. [PMC free article] [PubMed]
17. Abbott KC, Mann S, DeWitt D, et al. Physician-to-physician consultation via electronic mail: the Walter Reed Army Medical Center Ask a Doc system. Mil Med. 2002;167(3):200–4. [PubMed]
18. Arenson R. Why bother with a computerized scheduling system? J Digit Imaging. 1988;1(1):24–7. [PubMed]
19. Mozumdar BC, Hornsby DN, Gogate AS, et al. Radiology scheduling: preferences of users of radiologic services and impact on referral base and extension. Acad Radiol. 2003;10(8):908–13. [PubMed]
20. Woods L. What works: scheduling. Picture perfect solution. The right technology and an ASP solution bring scheduling efficiency and added revenue to a community hospital's radiology department. Health Manag Technol. 2001;22(8):48–50. [PubMed]
