Background: The inability to obtain timely appointments impedes access to primary health care. Open access scheduling (also known as advanced or same-day access) is an increasingly popular quality improvement tool to improve access to appointments.
Objective: To assess the impact of open access implementation and to examine barriers to implementing this model.
Setting: 6 primary care practices in the Boston metropolitan area from October 2003 through June 2006.
Intervention: Implementation of open access scheduling.
Measurements: Availability of 3rd available appointments, no-show rates, and patient and staff satisfaction with appointment availability.
Results: Of the 6 intervention practices, 5 were able to implement open access. In the first 4 months after implementation, these 5 practices substantially reduced their mean wait for 3rd available appointments (short visits, 21 days to 8 days; long visits, 39 days to 14 days; both P<0.001). However, none of the 5 practices attained the goal of same-day access, and waits for 3rd available appointments subsequently increased over 2 years of follow-up. There was no significant improvement in no-show rates (14% pre-intervention vs. 14% post-intervention, P=0.18) or in the proportion of patients (48% pre-intervention vs. 51% post-intervention, P=0.75) or staff (38% pre-intervention vs. 59% post-intervention, P=0.13) rating appointment availability highly. Fluctuations in appointment availability due to provider leaves were a major barrier to implementation.
Limitations: Lack of concurrent control groups; small sample of practices and providers.
Conclusions: In this setting, implementation of open access improved appointment access but not no-show rates or patient and staff satisfaction with appointment availability. These findings underscore the need for broader evaluations of open access scheduling in primary care.
Many primary care practices in the U.S. appear to be overwhelmed by patient demand (1). Routine appointments with physicians are booked far in advance and urgent appointments are often added to already full schedules. Precious staff time is devoted to triaging patients. Patients often receive urgent care from physicians other than their own primary physicians, which may disrupt continuity of care (2, 3). Some patients present to hospital emergency departments for non-emergent problems because their primary care providers are not accessible (4).
Frustrated by these shortcomings in the current system, many practices are considering open access scheduling as a means of providing patients better access to care. In open access scheduling, patients call the practice and are offered a prompt appointment, ideally on the same or next day, no matter what the reason for the visit. After being seen, a patient is given a timeframe for follow-up (e.g., in two months), and whenever the patient calls she can be seen the same day or soon thereafter. Proponents argue that open access eases work pressures on physicians, improves practice efficiency, increases patient satisfaction, and fulfills the Institute of Medicine’s call for timely access and more patient-centered care (5-7).
Some health care organizations have described successful implementation of open access. A Kaiser Permanente practice reduced the waiting time for routine appointments from 55 days to 1 day in less than one year (5). Other reports of success in open access implementation (8-10) have prompted some large health care systems, including the U.S. Veterans Administration system and the United Kingdom’s National Health Service, to implement open access scheduling (11, 12).
Despite the interest in open access, most published studies have lacked detailed evaluations and relatively few studies have assessed the impact of open access implementation on outcomes beyond appointment availability (6, 8, 10, 13-20). In this study, we evaluated the impact of open access implementation on appointment availability, no-show rates, and patient and staff satisfaction with appointment availability in a case-series of six practices. We also examined potential barriers to implementation to guide others who may consider this increasingly popular approach to primary care scheduling.
In 2003 Partners Community HealthCare Inc. (PCHI) initiated an internal study to assess the impact of open access in six practices recruited from the approximately 100 practices in the PCHI network in Eastern Massachusetts. The study protocol was approved by the Partners Human Subjects Committee.
Each practice selected a leadership team, typically including the medical director, nursing director, and practice manager. This team created an implementation and communication plan in coordination with PCHI support staff. These support staff included a full-time open access project manager and two assistants who together provided coaching, training, and data analysis. Outside national experts in open access implementation provided further support as consultants. General steps taken to implement the model are listed in Table 1, adapted from prior descriptions of the conceptual framework and keys to implementation of the open access model (2, 3). After practices had developed an implementation plan, an official “go live” or implementation date was set. The implementation of the model was gradual and many steps were often introduced before this official date.
Each practice had a monthly team meeting that was facilitated by the open access implementation team. The open access implementation team also conducted multiple on-site visits which included coaching and training of all practice staff. Real-time feedback was provided to the practices in these meetings and visits, enabling the practices to adapt the implementation plan.
The practices simplified a variety of appointment types into two types. Short appointments were scheduled for 15 minutes and included both urgent and routine follow-up appointments. Long appointments were scheduled for 30 minutes and included preventive health examinations (i.e. physicals) and new patient appointments. We aggregated pre-implementation appointment types into these short or long types to allow us to track trends before and after the introduction of open access scheduling.
The primary study outcome was 3rd available appointment times. The waiting time for a 3rd available appointment is considered a more accurate and stable reflection of true appointment availability, because the 1st and 2nd available appointments can often be chance openings due to patient cancellations (2, 21). We calculated the 3rd available appointment measure for each practice based upon each provider’s 3rd available time and weighted by the number of practice sessions the provider worked each week. In calculating the 3rd available appointment, we counted calendar days (e.g., including weekends) and days off. Although part of open access implementation includes eliminating carve-out appointments (appointments which are closed to scheduling until they are made available for urgent visits the day of or the day prior to those visits), one practice retained these appointments, and we did not count them unless they had been released for booking. We also did not count appointments for providers on maternity leave or other extended absences, and those for temporary and urgent care providers. If providers were on vacation, they were included. All 3rd available measurements reported were collected at a consistent day and time (i.e. Mondays before the practices opened) and were collected by three members of the open access implementation team either manually or via automated scheduling systems when available. Data checks were done to ensure the three study staff were collecting data in a comparable manner. During the implementation period we collected data on 3rd available appointment at least monthly. We trained practice staff to measure their own 3rd available times, and when practices began their own data collection or when no changes were being made within the practice, we collected data less frequently. We did not include 3rd available times collected by practices in our analyses.
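As a concrete illustration, the session-weighted practice-level measure described above can be sketched in a few lines of Python; the provider waits and session counts below are hypothetical values, not data from the study.

```python
# Sketch of the practice-level 3rd available appointment measure: each
# provider's wait (in calendar days) to their 3rd open appointment,
# weighted by the number of sessions that provider works per week.
# All values are hypothetical.

def weighted_third_available(providers):
    """providers: list of (third_available_days, sessions_per_week) tuples."""
    total_sessions = sum(sessions for _, sessions in providers)
    return sum(days * sessions for days, sessions in providers) / total_sessions

# Example: three providers with different schedules
providers = [
    (12, 8),   # 12-day wait, 8 sessions/week
    (5, 4),    # 5-day wait, part-time at 4 sessions/week
    (20, 10),  # 20-day wait, 10 sessions/week
]
print(round(weighted_third_available(providers), 1))  # → 14.4
```

Weighting by sessions keeps a part-time provider's unusually short (or long) wait from dominating the practice-level figure.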
Practices were followed pre-implementation and post-implementation for different durations because the start of the planning process was staggered and the duration of the planning process varied.
Our secondary outcome measures were patient satisfaction, staff satisfaction, and no-show rates. We measured patient satisfaction using two different instruments. At the first three practices, patient satisfaction surveys were given to all patients arriving for appointments during a one-month period. Completed surveys were placed in a drop-box by the patients, so we were unable to track response rates. The question of interest asked respondents to evaluate their satisfaction with “the length of time you waited to get your appointment today” on a five-point Likert scale. At the other three practices we used the Ambulatory Care Experiences Survey (22). In this survey respondents were asked, “In the last 6 months, when you scheduled an appointment to see your personal doctor, how often did you get an appointment as soon as you needed it?” At each of the latter three practices, 200 randomly selected patients seen in a two-week period were mailed a survey in both Spanish and English. Up to two reminder letters were sent to non-respondents.
We measured staff satisfaction using a short written survey at staff meetings attended by physicians, nurses, medical assistants, front-office staff, and practice managers, which asked respondents to evaluate “access to appointments” at the practice on a five-point Likert scale. Staff who missed the meetings were given the survey to complete confidentially afterwards. Staff satisfaction surveys were administered at essentially the same time as the patient satisfaction surveys, at the times noted by vertical arrows in Figure 1.
No-show rates were calculated via scheduling system reports and included all patient visits, both pre- and post-implementation, through at least the timing of the post-implementation satisfaction surveys.
We calculated 3rd available time in three time periods: up to one month before official implementation date, one month before implementation to four months post-implementation, and four to 24 months after implementation. We used Loess smoothing techniques in SAS Version 9.1 (SAS Institute, Cary, NC) (23) to chart trends in 3rd available times at each practice. For the other outcome variables, we compared pre-implementation vs. post-implementation periods. We did not perform formal statistical testing because of the heterogeneity of the practices and observations.
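The Loess smoothing itself was performed in SAS; purely as an illustration, a minimal Loess-style smoother (a tricube-weighted local linear fit, the core of the Loess procedure) can be sketched as follows. The monthly 3rd available measurements and bandwidth below are hypothetical.

```python
# Minimal Loess-style smoother: a tricube-weighted local linear fit at a
# single point, a pure-Python stand-in for the SAS Loess procedure
# mentioned above. Data are hypothetical monthly 3rd available waits.

def loess_point(x, y, x0, bandwidth):
    """Smoothed value at x0 from a tricube-weighted local linear fit."""
    # Tricube weights: points beyond the bandwidth get zero weight.
    w = [(1 - (abs(xi - x0) / bandwidth) ** 3) ** 3
         if abs(xi - x0) < bandwidth else 0.0
         for xi in x]
    # Weighted least-squares line through the nearby points.
    sw = sum(w)
    sx = sum(wi * xi for wi, xi in zip(w, x))
    sy = sum(wi * yi for wi, yi in zip(w, y))
    sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    slope = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
    intercept = (sy - slope * sx) / sw
    return intercept + slope * x0

months = list(range(12))
waits = [21, 19, 16, 10, 8, 7, 8, 9, 10, 11, 12, 11]  # hypothetical days
smoothed = [loess_point(months, waits, m, bandwidth=4) for m in months]
```

Fitting a short weighted line at each point, rather than connecting raw monthly values, is what lets the trend charts absorb month-to-month noise in the 3rd available measurements.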
The study was supported by internal funding from Partners HealthCare, and Dr Mehrotra was supported by a National Research Service Award from the Health Resources and Services Administration (#5 T32 HP11001-15) and by the National Center for Research Resources, a component of the National Institutes of Health (KL2-RR024154-01). Partners HealthCare leadership reviewed the manuscript’s content but had no role in the analyses or decision to submit the manuscript for publication.
We attempted to implement open access scheduling at six primary care practices: three family medicine practices, two community health centers, and one internal medicine practice. Details on the practices are provided in Table 2. One practice implemented some changes but did not fully implement the open access model; an extended physician leave of absence undermined the practice’s enthusiasm for the model and its ability to make the necessary changes. Data describing the impact of the intervention are from the five practices that fully implemented the model. These practices implemented each of the steps listed in Table 1, with the exception of one practice that did not completely eliminate carve-out appointments. Four practices temporarily stopped seeing new patients during the implementation period as a way to decrease patient demand while working down the backlog of demand from existing patients.
The first practice began the implementation in October 2003 and the last began in August 2005. Prior to implementation, each practice devoted one to twelve months to planning. In the five practices that implemented open access, the average 3rd available appointment time prior to implementation was 21 days for a short visit and 39 days for a long visit.
Substantial improvements in access as measured by 3rd available appointment time were achieved in these five practices during the first 4 months after implementation. The average 3rd available time for short visits decreased from 21 to 8 days and for long visits from 39 to 14 days. However, from four months to two years after implementation there was a worsening of 3rd available time at four of the five practices (Figure 1). The average 3rd available time for short visits increased from 8 to 11 days and for long visits it increased from 14 days to 29 days, with varied patterns of change. In two of the five practices, the 3rd available times returned to baseline or even worsened (Figure 1).
The overall response rate to the mailed patient satisfaction survey was 46%. One practice chose to implement the open access model immediately without creating a full implementation plan; its pre-implementation sampling of patient and staff satisfaction was therefore done two months after the official start date, and the practice chose to collect follow-up surveys seven months later. At another practice that implemented the model, practice leaders chose not to measure post-intervention patient and staff satisfaction because access had worsened (Figure 1, Panel 5).
In the four remaining practices, there was no improvement in the overall percentage of patients who rated access highly: 53% pre-implementation to 51% post-implementation (Table 3). The proportion of staff at the four practices who felt access to appointments was “very good” or “excellent” increased from 35% to 59% after implementation. Across the five practices the average no-show rate was unchanged (14% to 14%).
Although the practices implemented almost all of the measures in Table 1, sometimes the level of acceptance by physicians was low and follow-through varied. We also observed fluctuations in appointment supply and demand.
Changes in appointment supply stemmed from provider leaves of absence due to illness (three of six practices), departure from practice (one of six practices), and maternity leave (five of six practices). Each of the six practices faced one or more of these extended absences during our two-year intervention. In one practice, two providers left at the same time, resulting in an increase in 3rd available times for short visits from under 5 days to 25 days (Figure 1, Panel 5), erasing all gains made in access. Similarly, worsening in 3rd available time was observed in another practice (Figure 1, Panel 2) due to a physician’s unexpected medical leave of absence.
We also saw changes in appointment demand. Despite the high density of physicians in Massachusetts, a substantial shortage of primary care physicians has been reported (24). In the implementation phase, four of the five practices closed to new patients. After the initial implementation and the reopening to new patients, these practices faced great demand from new patients because their appointment wait times were much shorter than those of other area practices. In one practice with the equivalent of 6.3 full-time providers, there were 109 new patients in one month; in another practice with the equivalent of 4.9 full-time providers, there were 186 new patients in a month. Absorbing these new patients was difficult because providers already had large panels, and even larger panels resulted in worsening of access (Panels 2 and 4 of Figure 1).
In this evaluation of a case-series of six practices, we found mixed results on the impact of open access scheduling. Substantial improvements in appointment access were achieved in some practices, but none of the practices was able to sustain same-day access and no clear improvements in patient or staff satisfaction and no-show rates were demonstrated.
Our findings contrast with a number of prior articles that discuss potential benefits of open access scheduling, including improved patient experience, provider work satisfaction, provider continuity, and decreased no-show rates (2, 9, 21, 25, 26). In a systematic search of PubMed for studies of open access scheduling from 1998-2008, using the search terms (“open access” AND (schedule OR schedules OR scheduling OR appointment OR appointments)) OR “advanced access” OR “access time”, we identified 124 articles, of which 29 studied the impact of open access scheduling (5, 6, 8-20, 26-39). Nearly all of these 29 articles have important methodological limitations (many of which our study shares), including no statistical testing, outcomes limited to access-to-care measures, lack of concurrent control groups, small sample size, and inconsistent methods. Among the few studies that assessed outcomes beyond access to care, there were mixed effects of open access on patient satisfaction (2 of 5 studies report improvement) (6, 13, 14, 16, 17), staff satisfaction (1 of 2) (14, 16), and no-show rates (3 of 6) (10, 13-15, 34, 38). Our results add to this literature and raise the question of whether open access scheduling truly leads to the ancillary benefits that advocates have proposed.
Quality improvement projects that have face validity or preliminary evidence of effectiveness sometimes fail to demonstrate clear benefits when evaluated more rigorously (40, 41). Thus, a large-scale cluster randomized trial of open access scheduling using both intervention and control sites would provide a more definitive assessment of this promising approach. Such a study should assess other secondary outcomes such as patient and staff satisfaction, quality of care, and continuity. Furthermore, it should assess costs of implementation. Although open access makes intuitive sense, practice leaders, physicians, health plans, and policymakers need to understand its potential benefits more thoroughly.
Our own belief is that the open access model may have the positive impacts described by proponents and that the mixed results we and others report stem from unexpected barriers that prevented the practices from fully implementing the model. We describe these barriers to inform other groups that are considering open access scheduling.
Unexpected fluctuations in appointment supply arose from extended provider leaves at each of the six practices. Some of these leaves, such as maternity leaves, were anticipated well in advance, and attempts were made to arrange adequate coverage. However, these arrangements could not fully offset the loss of physician supply. When unexpected barriers led to a worsening of patient appointment availability, it was difficult to convince busy providers to re-do the hard work of improving access. Sustaining the model might be more feasible in an environment with external incentives to improve access to appointments, for example through pay-for-performance initiatives.
Another barrier to implementation was the difficulty of assessing appointment demand. Ideally, each practice would have a roster of patients assigned to each physician, but in reality none had these data available. We were forced to estimate physician panel size and patient demand using the number of unique patients seen in the last 3 years and the number of appointments requested over a sample period. Because these were estimates, practice leaders remained concerned about whether supply and demand were truly matched for their practices. As a result, planning periods were prolonged, and there was less enthusiasm and fewer resources devoted to the implementation. Our experience highlights the need for more sophisticated means of measuring accurate panel sizes (42, 43).
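The kind of rough supply-demand check described above can be sketched as follows; the panel size, visit rate, and session counts are hypothetical illustrations, not figures from the study.

```python
# Rough check of whether a provider's appointment supply matches estimated
# demand, in the spirit of the estimates described above. All numbers are
# hypothetical.

def weekly_supply(sessions_per_week, slots_per_session):
    """Appointment slots a provider offers per week."""
    return sessions_per_week * slots_per_session

def weekly_demand(panel_size, visits_per_patient_per_year):
    """Expected appointment requests per week from an estimated panel."""
    return panel_size * visits_per_patient_per_year / 52

panel = 1800  # proxy: unique patients seen in the last 3 years
supply = weekly_supply(sessions_per_week=8, slots_per_session=12)  # 96 slots
demand = weekly_demand(panel, visits_per_patient_per_year=2.5)     # ~86.5 requests
print(supply >= demand)  # → True: supply slightly exceeds estimated demand
```

Because both the panel proxy and the visit rate are estimates, a check like this can only suggest, not guarantee, that a provider's schedule will absorb same-day demand.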
Though staff and providers at the practices agreed that improvements in access were important, there was sometimes disagreement on whether same day access was an appropriate goal. For example, many physicians believed a busy parent would prefer to schedule an appointment for themselves or their child far in advance, rather than on the same or next day. The implementation team emphasized the flexibility of the model and that patients were able to make an appointment in advance if they desired. Nonetheless, this disagreement made it difficult for the implementation team to convince the practices to devote more resources to improving access beyond the improvements seen initially.
Interestingly, despite the limited benefits seen in our outcome measures, most practice leaders felt the open access initiative was beneficial. In their view the process of implementing open access forced a re-evaluation of the practice systems and a change of the mindset of the physicians and staff towards access to care. The implementation exposed longstanding issues such as problems in handling patients’ phone calls, scheduling, job descriptions, and patient flow. These issues had to be addressed, which led to improvements in practice processes but may not have been captured in the outcome variables we measured. For example, one practice noted there was more nursing time available for direct patient care as the nurses now spent less time on triage.
Our study had several important limitations including a small sample size, a lack of control practices, variable follow-up time, infrequent measurement of data, and being limited to Eastern Massachusetts. We also did not assess the impact of open access on other important outcomes such as continuity of care or quality of chronic disease management. Finally, as noted above, greater improvement in the outcome measures might have been observed if all physicians had fully embraced the model. Our study should therefore be considered exploratory in nature. Nonetheless, our findings may assist other practices and health care systems considering implementation of open access scheduling. As pay-for-performance incentives and other health plan initiatives focus on improving access to care, more practices will be considering this model. Our findings also underscore the need for rigorous large-scale evaluations of open access scheduling to assess more fully its impact (30, 37).
We are grateful to the staff of the participating practices and the project steering committee for their time and effort. We thank Lisa Noke, Jessica Desrosiers, and other staff at Partners Community HealthCare Inc. for their invaluable assistance with data collection and analysis.
Grant Support: The study was supported by internal funding from Partners HealthCare, and Dr Mehrotra was supported by a National Research Service Award from the Health Resources and Services Administration (#5 T32 HP11001-15) and by the National Center for Research Resources, a component of the National Institutes of Health (KL2-RR024154-01).
Publisher's Disclaimer: This is the pre-publication, author-produced version of a manuscript accepted for publication in Annals of Internal Medicine. This version does not include post-acceptance editing and formatting. The American College of Physicians, the publisher of Annals of Internal Medicine, is not responsible for the content or presentation of the author-produced accepted version of the manuscript or any version that a third party derives from it. Readers who wish to access the definitive published version of this manuscript and any ancillary material related to it (correspondence, corrections, editorials, linked articles, etc.) should go to www.annals.org or to the print issue in which the article appears. Those who cite this manuscript should cite the published version, as it is the official version of record.