Objective. To develop a structured protocol for observing patient navigators at work, describing and characterizing specific activities related to their goals.
Data Sources. Fourteen extended observations of navigators at three programs within a national trial of patient navigation.
Study Design. Preliminary observations were guided by a conceptual model derived from the literature and expert consensus, then coded to develop and refine observation categories. These findings were then used to develop the protocol.
Data Collection/Extraction Methods. Observation fieldnotes were coded, using both a priori codes and new codes based on emergent themes. Using these codes, the team refined the model and constructed an observation tool that enables consistent categorization of the observed range of navigator actions.
Principal Findings. Navigator actions across a wide variety of settings can be categorized in a matrix with two dimensions. One dimension categorizes the individuals and organizational entities with whom the navigator interacts; the other characterizes the types of tasks carried out by the navigators in support of their patients.
Conclusions. Use of this protocol will enable researchers to systematically characterize and compare navigator activities within and across programs.
Challenges arise when people contemplate cancer screening. They multiply when tests suggest a threatening disease and indicate the need for follow-up investigations. And the challenges expand dramatically when such investigations reveal cancer, as people become “patients” in the complex realm of referrals, consultants, examinations, decisions, and often arduous treatment regimens.
Individuals who are socially and economically disadvantaged may find cancer care all the more problematic (Baquet et al. 2005). These people are at substantial risk of receiving inadequate care at each step of the cancer care continuum: screening, diagnostic follow-up of suspicious results, treatment when cancer is diagnosed, and survivorship surveillance. Moreover, the systems of care available to them, such as “safety net” institutions, are often beset with inefficiencies.
Patient navigation has emerged in the past decade in response to these widely recognized disparities in cancer care. Health care advocates, policy makers, and innovative health care organizations have called for the adoption of patient navigation to assist patients and remedy inefficiencies in the provision of timely care (Freeman 2006; Jandorf et al. 2006). As a result, patient navigation services have proliferated rapidly in recent years.
Research to date supports the promise of patient navigation for reducing cancer disparities (Gabram et al. 2008). Currently, a large, multisite cooperative study of patient navigation is being conducted. The National Cancer Institute's Patient Navigation Research Program (PNRP) is designed to evaluate the effectiveness of navigation in improving timeliness of care (i.e., time to follow-up of abnormal screening results and to completion of treatment when cancer is diagnosed) and patient satisfaction (Freund et al. 2008). The nine cooperating studies of the PNRP provide a laboratory for characterizing the actual work of patient navigators and linking variation in their activities to patient outcomes. The first step in such an effort is to design procedures for systematically observing navigators' activities. This paper reports on the development of a protocol for observing what navigators actually do.
Despite continued advances across the spectrum of cancer care (Brenner, Gondos, and Arndt 2007), the distribution of these advances remains uneven. They are less likely to be enjoyed by those segments of our society defined by minority racial and ethnic status, low income, and limited health insurance (Weir et al. 2003; Shavers, Fagan, and McDonald 2007). Disparities in cancer care are persistent and may in some instances actually be widening (Ries et al. 2007). Inequitable outcomes may result from, among other factors, well-documented delays in accessing diagnostic and treatment services by the most at-risk populations (Chang et al. 1996; Peterson, Han, and Freund 2003; Battaglia et al. 2007).
Several seminal reports (Smedley et al. 2003; Weir et al. 2003) have highlighted the barriers to cancer care inherent to socioeconomic disadvantage. Patient navigation is a community-based approach to reducing these barriers (Dohan and Schrag 2005; Battaglia et al. 2007; Ell et al. 2007; Ferrante, Chen, and Kim 2008). Guided both by principles of disease management and by cultural sensitivity, navigators are responsible for identifying individuals most at risk for delays in cancer care and mitigating barriers to their receipt of that care (Vargas et al. 2008). Navigation programs also seek to remedy systemic barriers to care within organizations delivering care. Patient navigation services address barriers by assigning trained supportive staff who track patients and assist them in completing their diagnostic and treatment care, while also advocating for solutions to systemic causes of those barriers.
Navigation programs are usually funded through local resources or foundation support because insurers do not reimburse this care. Local innovation results in tremendous variability in program structures and activities. Trailing behind these developments is a small, but growing body of research documenting the efficacy of navigation (Battaglia et al. 2007; Ell et al. 2007; Ferrante, Chen, and Kim 2008; Gabram et al. 2008). While these studies provide evidence that navigation is effective, the key components of a successful navigation program are not well understood.
Problems often arise in the evaluation of complex innovations such as patient navigation because the interventions have not been fully defined and developed (Campbell et al. 2000). Early evaluations of innovative programs often simply assume either that the intervention is in place as planned or that some variation is acceptable (Eccles et al. 2003). However, such assumptions lead evaluators to overlook the effects of variation. And in the case of patient navigation, a lack of information about variation in definition, style, and scope could lead to inaccurate conclusions about its effectiveness.
There is no generally accepted definition of patient navigation. Reviewing 56 articles published before early 2004, Dohan and Schrag (2005) identify two types of definitions: “service focused” and “barrier focused.” Service-focused definitions attend to activities such as connecting individuals to resources and assisting patients in completing courses of care. Dohan and Schrag (2005) criticize these definitions as nonspecific: such activities could be, and often were, performed by other providers as part of their duties. Barrier-focused definitions, they argue, attend to activities that identify and remove impediments preventing patients from moving through screening, diagnostic follow-up, and treatment. Furthermore, these responsibilities were distinguishable from those usually assigned to social workers, case managers, community outreach workers, and health advocates. The latter roles, they argue, were typically proactive, providing education and counseling services, while navigation was essentially reactive to emergent impediments to care.
However, defining patient navigation in terms of resolving barriers for individual patients may be too constricting. While service-oriented definitions could blur distinctions between navigators and other service providers, focusing only on what navigators do in relation to barriers facing individual patients may obscure a variety of related activities they perform. They may tweak organizational practices to expedite patient care, develop local resources for multiple patients, and build cooperative relationships with and among clinic staff that facilitate more efficient movement of patients through systems. Thus, while staff other than navigators may facilitate patient access to services, to define patient navigation work solely in terms of barrier reduction risks artificially excluding other patient navigation functions.
We sought to avoid such exclusions. Informed by research to date, we noted that navigators work with health care organizations—sometimes within, sometimes externally—to facilitate patients' receipt of care from providers. This framing gives attention to the fact that navigators often must involve others in their work, which, in turn, suggests that navigators' networks of relationships with these “others” might be essential to achieving their objectives. Drawing on research in various care settings, we then defined navigation in terms of tasks and networks: navigators do things for patients by working with patients and other actors in both the social network of the organization itself and the community in which the organization resides.
Social networks have been described as “patterns of relations joining actors” (Marsden 1990). Keating et al. (2007) use the social network approach to understand patterns of advice-giving and -following within a primary care practice. Earlier navigation research emphasized the pivotal role of the navigator in helping patients access necessary services, which suggests that part of navigation is knowing to whom to go for specific support. Thus, to understand what navigators do, we must understand their patterns of relations with others who provide services that facilitate screening, diagnosis, and/or treatment. Underscoring the importance of understanding these networks is a recent analysis of early patient navigation programs that defines patient navigation as a system, rather than a person (Vargas et al. 2008). The network framework bridges the service-focused/barrier-focused dichotomy by attending to the question of how navigators accomplish their efforts on behalf of patients. Whether obtaining services proactively or in response to a specific barrier, navigators engage others in their networks to find, arrange, and seek reimbursement for those services.
The social network concept illuminates part, but not all, of the scope of patient navigation. We need also to describe the activities of navigators. Task analysis, with its emphasis on interaction of persons and environment, offers a useful complement. Task analysis is concerned with identifying the goal of a task, the criteria for reaching that goal, and the relevant resources and constraints. It also emphasizes that task-directed actions are determined by both the person carrying them out and the relevant environment (which presumably includes the person's social network), and that people develop personal expertise in how to accomplish their tasks (Norros and Nuutinen 2002). Navigators need to build a working knowledge of the tasks they must perform and a network of contacts to support their actions.
Development of the observation protocol was grounded in a qualitative study of PNRP navigators at work. We began by surveying the nine PNRP sites to characterize structural attributes of each program that would define the contexts in which navigators worked. Attributes included each program's physical site (facilities, geographic location, and populations served), its size, and the spectrum of navigation services offered. Based on these findings, we selected a convenience sample of three pilot programs that represented the diversity of settings and approaches to navigation implemented at the nine independent sites.
We also developed a preliminary guide for observing navigators at work. This guide was designed to enable the collection of comparable field observations of what navigators do (tasks) and the people and entities with whom they interact (networks) in accomplishing those tasks. The resulting data were analyzed to develop a comprehensive yet simple protocol for observing navigators across all PNRP sites and elsewhere.
The nine programs in the PNRP were selected through a competitive national process and designed according to general program criteria set by the NCI (Freund et al. 2008). The NCI sought to maximize diversity within this group to assess the usefulness of patient navigation across a range of settings. Thus, the programs differed in many respects, but they met common requirements regarding navigator training, patient population, patient criteria for inclusion, and collection of data. The programs address different combinations of four cancers—breast, cervical, prostate, and colorectal—where navigation would likely have a detectable effect in facilitating follow-up of suspicious screening results and completion of treatment (Freund et al. 2008). Each of these cancers is associated with a distinct patient population and pattern of care.
The locations where navigators work and interact with patients, medical providers, and others vary: hospital evaluation clinics, inpatient wards, and treatment units; primary care clinics; and community health centers. The scope of navigator involvement relative to the cancer care continuum also varies; some navigate in all phases, while others navigate only from diagnosis through treatment. At some sites, navigators focus on case finding, while at others, the focus is on supporting patients through treatment. The number of navigators employed varies, as does navigators' involvement in competing clinical or administrative responsibilities. Some navigators were hired and are supervised directly by the research program; others were hired by the clinical care sites themselves as a subcontract to the research program. Finally, the programs vary with respect to the professional background and training of navigators: some use navigators with clinical training and credentials, while others use “lay” navigators selected for congruence with the target patient population.
In selecting the pilot sites, we excluded one program due to restrictive local access requirements. Site 2 was selected because it is the research team's home site and had a small, longstanding navigation program that predates the PNRP. Sites 4 and 7 were selected to provide informative, qualitative contrasts along the dimensions outlined above. Information about all eight programs is presented in Table 1.
We enlisted other PNRP investigators to collaborate at their respective sites. Multisite, multi-investigator qualitative research required developing a common protocol. We designed a semistructured observation guide to support collection of comparable data across all sites. During a 1-day training, observers from each site were directed to describe specific actions of navigators and note the following: (1) the approximate duration of actions; (2) the parties with whom navigators interacted; (3) whether interactions were in person or via phone/email; (4) the relevance of the action to navigation.
Observers also were directed to query navigators about their actions at moments that would not interrupt observed activities. Navigators were asked to explain the relevance of observed activities to particular navigation issues and challenges, including (1) actions navigators took to develop a relationship with the patient and others relevant to the patient's case; (2) the role of others who were consulted for advice, direction, or assistance; (3) the initiation and extent of an interaction with a person; (4) the nature of the problems being addressed. Thus, while we focused on directly observable behaviors, we also explored navigators' reflections on the scope of and rationale for their actions.
Each navigator at each site was observed at least twice. Observations were scheduled in consultation with navigators to best capture the variation in their workflow, so the lengths of the sessions were defined by the ways navigators scheduled their own activities. For example, if a navigator worked with a specific provider seeing patients for screening follow-up, the observation ran the length of the provider's clinic. Most observations lasted about 4 hours.
At the end of each observation, navigators were asked further questions to characterize the representativeness of the actions just observed.
The observation guide was refined in a 1-day meeting of the observers, followed by regular conference calls. Issues of reliability were addressed by comparing observations within and between the three sites. Apparent discrepancies in the quality of observations were discussed and resolved by developing a consensus among all observers.
Using the observation guide, investigators conducted a total of 18 comparable observations of nine navigators working in three program sites of the PNRP. The analysis focused on refining the observation protocol. Data from all three sites were compiled by the project PIs. Fieldnotes from each observation were imported into software supporting text-based analysis, HyperRESEARCH (ResearchWare Inc. 2008).
The analysis was informed by the general approach of grounded theory methodology (Glaser and Strauss 1967). The first three authors each reviewed separately one set of fieldnotes from each of the three sites, identifying themes that characterized the activities reported. The team then met to compare descriptive codes and reach consensus on code definitions before coding the rest of the fieldnotes. The primary aim of this analysis was to define a comprehensive, yet parsimonious set of categories that would enable other observers to reliably categorize navigator behavior.
Once these categories were developed, we also drafted observation instructions to guide subsequent field observations of the work of navigators. Drafts of the analytic categories were developed by the Site 2 researchers and shared with the Sites 4 and 7 observers for substantive critique and revision in a day-long research team meeting, which yielded the final protocol reported herein.
The three sites provided a wide array of contexts for observing navigation. They differed with respect to the scope of the navigation program, the phases of cancer care addressed (i.e., screening, diagnosis, or treatment), the history/longevity of the program, the emphasis placed on various navigator responsibilities, and the background (e.g., clinical, survivor, cultural/ethnic) of the navigators, as well as their physical and organizational location (e.g., community health centers, large medical centers, outpatient primary care, or treatment clinics). These contextual differences appeared to influence, to an undetermined extent, what navigators do. Thus, we sought an observation protocol that would reliably capture activities in these diverse settings.
Guided by the concepts of task and network, we defined five categories each of navigator tasks and social networks. The task categories include navigating with a patient, facilitating for a patient, maintaining systems for all patients, documenting/reviewing actions, and other tasks. The five network categories include patient(s), clinical provider(s), nonclinical staff, formal and informal support, and medical record systems. Each of these categories is defined, described, and illustrated below.
Navigating tasks consist of identifying and mitigating barriers with patients. They include telling (explaining when and where biopsy will be done, describing what it will be like); inquiring (asking about barriers to attending the appointment, exploring the patient's concerns); supporting (listening to fears about treatment); and coaching (discussing questions that need to be asked at next appointment and how to ask them).
Facilitating tasks are performed for a specific patient. They include finding (locating current patients and ensuring that they will come to appointments); coordinating team communication (ensuring the entire care team is aware of the next steps); integrating information (ensuring that different types of patient data are documented and shared as needed); and seeking collaboration (enlisting other providers in addressing the patient's fears).
Maintaining systems tasks support all patients. They include identifying potential patients (reviewing lab results to note patients who need follow-up); building networks and referral routines (meeting with clinicians to explain navigator role and discuss referral criteria); and reviewing cases (checking on ticklers and open issues).
Documenting activities and reviewing information constitute another major navigator task. They include recording navigator actions (recording steps taken with or on behalf of the patient in the patient's medical record or a separate navigation file); handling test results (retrieving and entering patient data from labs, radiology, or other sources); and processing other necessary information (recording information or activities relevant to navigator role).
Other activities are those apparently unrelated to navigation. It was important to capture all network interactions, even when their relevance to navigation was not apparent. For example, many navigators have other distinct roles unrelated to navigation; documenting these other activities will help in understanding how the navigator role fits in with other roles, both formally and informally. This category includes research-related activities (such as consenting patients), providing clinical back-up, activities unrelated to navigation (such as interpreting for nonnavigated patients), and socializing (having informal conversation with co-workers).
Navigators may interact with a specific patient, such as when phoning the patient with information about an upcoming diagnostic procedure.
Navigators may also interact with providers, both within and outside their immediate location. For example, a navigator might speak with the physician to confirm the meaning of a test result before discussing it with the patient.
Nonclinical staff, such as receptionists or administrators coordinating insurance, represent another group with whom the navigator may interact.
People who provide supportive services, either formally (social workers, translators, transportation staff) or informally (friends, family) within or outside the facility are another group with whom navigators interact.
The final category—paper or electronic medical record systems—could be perceived as merely a means to communicate with members of the other four network categories, and it does function in that way. However, our preliminary observations indicated that, in the eyes of the navigator, the medical record itself takes on some of the qualities of a person, in that it needs to be informed and/or consulted before other actions are taken. This observation is consistent with those of many studies of human–computer interaction (Turkle 2003).
The current observation protocol incorporates solutions to problems encountered in the field using the preliminary observation guide. Initially, observers were required to take continuous notes, recording the duration and mode of the navigator's activity, the person with whom s/he spoke, the activity, and the patient on whose behalf the activity was taken, plus descriptive narrative. This recording burden proved too onerous in the field: recording all observed activities not only interfered with the primary goal of noting tasks and social networks used by the navigators, but it did not produce more useful data.
Based on this early finding, two important changes were made to the observation protocol: activities were observed in 15-minute intervals, and coding focused on the primary activity of each interval. This time sampling methodology facilitates detailed reporting of navigator activities without attempting to capture everything that occurs during an observation. Observers start a new form every 15 minutes, focusing notes and coding on the navigator's primary activity during that period. Thus, each hour of observation time yields four distinct chunks of description and activity coding. This sampling interval provides some sense of the relative proportion of a navigator's time spent on different activities, while allowing observers to record more detailed notes about the main activity. This approach necessarily involves some observer judgment: sometimes a navigator tackles multiple short tasks during a single 15-minute interval. In such instances, observers were instructed either to group like tasks into a single entry (making appointment reminder calls to a list of patients could be meaningfully described as one task) or to focus on the first activity during the time period.
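The time-sampling scheme can be sketched in code; this is a minimal illustration only, and the record fields and function names are our own invention, not part of the published protocol.

```python
from dataclasses import dataclass

# Illustrative sketch of the 15-minute time-sampling scheme described above.
# Field and function names are hypothetical, not drawn from the protocol.

INTERVAL_MINUTES = 15

@dataclass
class IntervalRecord:
    start_minute: int       # offset from the start of the observation session
    primary_activity: str   # the single primary activity coded for the interval
    notes: str = ""         # narrative fieldnotes written during the interval

def n_intervals(observation_minutes: int) -> int:
    """Each hour of observation yields four 15-minute coding intervals."""
    return observation_minutes // INTERVAL_MINUTES
```

Under this scheme, a typical 4-hour observation produces sixteen interval records, each with one primary activity code plus free-text notes.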
Through discussion, a five-by-five matrix emerged, with tasks on the vertical axis and social networks on the horizontal axis. The observation form itself was redesigned to incorporate on a single page both this simple matrix and an open area for handwritten fieldnotes (see Figure 1).1
After field testing, several additional refinements were made. Certain combinations of tasks and networks cannot occur. For example, the task of navigating can be performed only with a patient, while reviewing a patient's file can be done only with the medical record. To further simplify the form, matrix cells representing combinations that cannot occur are blacked out.
Also, observers at some sites reported that a significant amount of navigation is carried out by telephone, leaving and returning voicemail messages. Therefore, for each observation of a navigator action that involves contact with a patient, the observer also notes whether the interaction is synchronous (happening in real time) or asynchronous (delayed, as when leaving a voicemail) by recording either “S” or “A” in the appropriate cell. For all other cells in the matrix, a checkmark is used.
Finally, because some observed activities may involve more than one person or task, observers are encouraged to mark more than one cell if that best reflects what they are seeing. For example, if the navigator accompanies a patient to a physician visit, the observer puts an “S” in the cell representing “navigate/patient” and a checkmark in the box representing “facilitate/provider.”
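The cell-marking rules above can be sketched as follows. The category labels follow the text, but note an assumption: the validity check encodes only the one exclusion the text names explicitly (navigating occurs only with a patient); the actual form blacks out additional combinations.

```python
# Illustrative sketch of cell marking in the five-by-five task/network matrix.
# Only the one blacked-out combination named in the text is encoded here;
# the real observation form excludes further cells.

TASKS = ["navigate", "facilitate", "maintain systems", "document/review", "other"]
NETWORKS = ["patient", "clinical provider", "nonclinical staff",
            "formal/informal support", "medical record"]

def cell_is_valid(task: str, network: str) -> bool:
    """Navigating tasks are performed only with a patient (per the text)."""
    if task == "navigate":
        return network == "patient"
    return True  # assumption: other exclusions exist but are not listed

def mark(task: str, network: str, synchronous: bool = True) -> str:
    """Return the mark for one cell: 'S'/'A' for patient contact, a check otherwise."""
    if not cell_is_valid(task, network):
        raise ValueError(f"blacked-out cell: {task}/{network}")
    if network == "patient":
        return "S" if synchronous else "A"
    return "\u2713"  # checkmark
```

For instance, accompanying a patient to a physician visit would yield an "S" from `mark("navigate", "patient")` and a checkmark from `mark("facilitate", "clinical provider")`, matching the example in the text.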
The matrix supports coding of real-time activities as they occur, but we realized the need for simultaneous, structured, narrative fieldnotes, as well. Hence, we developed observation guidelines directing the observer to note relevant contextual factors, such as the location of navigation activity, the language used in navigation, the racial/ethnic backgrounds of both patient and navigator if known, and the navigator's other roles (if any) in the organization.
This matrix facilitates rapid categorization of tasks and networks, allowing the observer to concentrate on writing narrative description that will document important information about context and content that cannot be fully captured by the matrix. The observer is encouraged to ask the navigator questions to develop a better understanding of what the navigator is doing and why.2 Observers also are asked to record their impressions about interactions, clearly identifying these notes as their perceptions. For example, the observer might write “navigator and patient embraced warmly and seem to know each other well.” While these impressions are particular to specific observers, they nevertheless add richness to the description.
Patient navigation represents an emerging innovation in care adapted to a variety of specific local contexts. Capturing local adaptations is crucial to meaningfully assess the efficacy of navigation across different sites. The protocol we have developed reflects a plausible, generic definition of navigation that has been found thus far to be applicable in multiple contexts. We expect the protocol to be useful in capturing the existing variation in navigation programs, and we will use it for this purpose as our research program goes forward. This type of data is essential to inform both research and practice.
While this protocol represents an important step, it certainly does not capture every detail of navigators' actions. For example, while interactions with other providers are categorized and noted, the protocol does not note how extensive or collaborative they might appear to be. Data from narrative fieldnotes will compensate for this limitation, while also providing the potential for further revisions to the protocol based on emerging patterns in these data.
This protocol is grounded in a qualitative study of navigation in three sites of the PNRP. As such, it has a measure of validity, yet its validity requires further examination through application. The current definitions of the task/network categories may reflect the particular realities of the sites we observed in developing the protocol; they may evolve as new sites are studied. Likewise, variation in the organizational, political, and community contexts in which operational navigation programs are developed may require modifications to this protocol. Moreover, while this protocol enables the systematic observation of what navigators do, it is only part of a comprehensive method for evaluating the processes and outcomes of patient navigation. The mix of patients served, with their resources and ability to access care; the types of health problems for which access is needed; and the specific array of health services and providers for which navigation is needed must all be taken into account in evaluating the effectiveness of what navigators do, as captured by this or any protocol.
To further investigate the protocol's validity, we are implementing it on a wider scale. It is being used at eight sites to produce a dataset of approximately 130 observations (four half-day observations of each navigator at each site). As of this writing, 89 observations have been completed, and none has presented activities that fall outside the task/network categories described above. Quantitative and qualitative analyses of the data collected will enable the research team to characterize variation, both within and across sites, in navigator tasks, networks, and emphasis.
Protocol development thus far has illuminated important dimensions along which navigation programs may vary. By accurately characterizing this variation, researchers should be better able to interpret variation in patient outcomes associated with different navigation programs. While this protocol does not provide information on program effectiveness, it provides important process information that may help explain the connections between navigation context and outcomes. Of course, outcomes also may be affected by the actions of other health care providers and advocates, which may overlap with the actions of navigators. It is important not to over-attribute outcomes to the actions of specific navigators, but rather to keep analysis at the programmatic level. In other words, while patient navigator actions may often be directed at specific patients, navigation is properly conceptualized as a systemic intervention that changes how care is delivered. Thus, any change in outcomes observed may be the result of the actions of multiple individuals, including the navigator, whose actions have been influenced by the presence of the navigation program. The information obtained from further use of this protocol to study the work of navigators may also inform the processes of selecting, training, supervising, and supporting navigators, as it will illuminate different practices that might optimize desired navigation program outcomes.
Joint Acknowledgment/Disclosure Statement: The research reported in this paper was supported by a grant from the Avon Foundation to Boston Medical Center (BMC), which subcontracted with the research teams at University of Chicago (Chicago) and University of Rochester Medical Center (Rochester). Battaglia is principal investigator of the project. Parker is co-principal investigator; Clark, co-investigator; Freund, clinical consultant; and Leyson, research assistant. All authors except Parker are also principal investigators, co-principal investigators, or research staff at their respective participating sites in the National Cancer Institute's Patient Navigation Research Project (PNRP).
1The observation form and matrix are reproduced here with the permission of the Trustees of Boston University.
2Please contact authors for observer training instructions.
Additional supporting information may be found in the online version of this article:
Appendix SA1: Author Matrix.