identifies the key steps in RAP for clinical informatics evaluations. We assume that three to five RAP team members would be working together, with a full-time project manager and other team members working on the project from 10–20 hours per week (except during site visits which require a full-time commitment).
Key steps of RAP for clinical informatics:
Our goal in this paper is to present key issues, challenges, and methodological decisions to help other research and evaluation teams who wish to use RAP. First, we outline issues relating to setting up a RAP evaluation and assembling the research team. Second, we describe preparatory work that is essential for conducting rapid data collection during brief site visits. Third, we outline data collection procedures. Finally, we discuss data analysis and iterative data collection for evaluations that involve visits to multiple sites.
Assembling a RAP team
Ethnographers assume that there is not one objective reality underlying a given situation and that people from different backgrounds, who have different roles and different levels of power and autonomy, will perceive situations differently. Understanding the many perspectives of people who play a range of roles in health care delivery is crucial for assessing clinical IT interventions. Likewise, incorporating multiple perspectives within the research team itself enhances the likelihood of accurately describing the setting. Multidisciplinary teams will be less likely to overlook issues or constituencies, leave tacit assumptions unquestioned, or misinterpret findings.(34)
At a minimum, we have found it beneficial for a RAP team to consist of an informatician, a trained qualitative researcher, and a clinician. It is essential to have at least one informatician on the team who understands the technical aspects of the systems under study as well as the context surrounding implementations. Equally important is a team member who is trained in qualitative research and ethnography. This person can lead the construction of interview guides, train team members to conduct naturalistic observation (for example, shadowing clinicians or observing interactions in a charting room or other common setting), train team members in informal and formal interviewing, and guide the data analysis. In our experience, a team member with clinical expertise (a physician, nurse, or pharmacist) also greatly enhances the team's ability to ask good questions and interpret findings. It is not always feasible to have a separate person in each of these roles, but we strongly advise it. At least two people should work together when collecting data at each site, and when analyzing data the team should meet frequently enough to keep the research process intensive (at least two hours per week).
Doing RAP well requires asking numerous questions of research participants and of fellow team members. Individuals who are intimidated or intimidating are not the best RAP researchers. Rather, one should seek individuals who are comfortable interacting in clinical settings, good at listening and asking questions, and good at brainstorming. RAP team meetings will involve active questioning, testing preliminary findings, and proposing competing models and explanations for the phenomena under study. Picking a team of people who can participate in such work is as important as asking the right interview questions or constructing a sample for data collection.
The specific duties each RAP team member performs can vary. Each team member is considered a fieldworker, meaning each person will record observations and take detailed field notes. Some team members will also serve as interviewers. Ideally, one person will serve as the team leader and data manager. Depending on the situation, one team member can be designated to collect field survey data. Alternatively, multiple fieldworkers can gather field survey data during the course of their observations.
The creativity and flexibility of a multidisciplinary RAP team need to be fostered by a data management process that allows for rapid data collection and analysis. Human subjects protections reviews also need to be carefully designed to enable this. Some RAP projects may be considered exempt by Institutional Review Boards (IRBs); these include studies conducted strictly for process improvement purposes. A project may also be considered minimal risk if efforts are taken to ensure that no HIPAA-covered Protected Health Information (PHI) or medical data is collected, which may allow more flexibility in the informed consent process. Often, the research team must obtain IRB approval from both the researchers' home institution(s) and the site(s) to be studied. When multiple IRBs are involved, researchers should ask that one IRB assume overall responsibility for review and that the others "cede" their oversight role to that primary institution. Unlike most research protocols reviewed by an IRB, the data collection materials and recruitment strategies of this method are fluid, and RAP can be derailed by a month-long interval between IRB modifications and approval. Researchers should therefore consult with their local IRBs on how to submit the initial IRB application and structure the research so that subsequent minor changes in interview questions or recruitment letters do not necessitate full board review. Many changes can be approved by expedited review, and some IRBs will be comfortable approving interview and observation topics rather than detailed questions and data collection templates. Finally, researchers should always ask for permission to engage in as many different recruitment or data collection approaches as they might need.
For example, if you plan to recruit clinicians by email, impromptu introductions, or meetings arranged through a division head, prepare recruitment materials for review and ask for permission to use all of these strategies.
Having the right equipment on hand during site visits will also greatly enable flexibility. We have learned that each fieldworker should have a digital audio recorder handy to capture impromptu informal interviews that can happen when shadowing clinicians. Although most observational data are easily recorded on a notepad, fieldworkers should have the ability to switch to audio recording for lengthy explanations or conversations about key research questions. Ideally, one team member should function as the data coordinator or project manager and should bring a laptop during site visits. This person can modify data collection tools to include local terminology, upload audio or text files that researchers create during a site visit, and track the team's work. For formal interview situations that result in transcripts, using a microphone designed to capture multiple people in a group (a high-quality omnidirectional microphone) will ensure the best quality audio recording. Access to a printer is also very important, so that data collection instruments can be modified during the course of a site visit. Finally, each RAP team member must have a cell phone to coordinate schedules and report whereabouts, and a laptop for typing full field notes at the end of each day.
Preparing for fieldwork
Planning an effective site visit can take several months of careful preparation prior to entering the field. We have developed a form, the site visit preparation schedule, to help prepare for site visits (see Appendix A, page 3).
During preparation for fieldwork, the RAP team needs to gain access to the organization, gather preliminary information about the site, schedule interviews with key people in the organization who can inform the team about the phenomenon or system being evaluated, and prepare a fieldwork manual that will be used to gather data during the site visit.
Health care organizations tend to be hierarchical, and approval for the site visit must come from an organizational leader. Thinking carefully about whom to approach and how to portray the RAP activity will help to ensure access to the organization you are trying to study. Once an organizational leader approves, he or she should designate a "shepherd," or liaison, who can facilitate interview scheduling, provide information about the organization's structure, point the RAP team to the right places for observations, provide in-person introductions, and generally orient the team. This person may be a trainer who knows both informatics and clinical staff or someone from within the administration who has broad knowledge of the organization. Whoever takes on the shepherd role should be invested in the RAP team as well as the research question. The shepherd can expect to spend several days before the site visit and several hours during each site visit day facilitating the RAP researchers' activities.
Initial conversations with organizational leaders and the designated shepherd can help to identify candidates for in-person, formal interviews. We ask each person we contact for recommendations about who should be formally interviewed (a technique called 'snowball' or 'chain' sampling).(36) We have conducted up to fifteen formal interviews per healthcare organization, but fewer may be appropriate at a small hospital or clinic network. Interview participants should be selected according to their role and relevant knowledge about the subject of inquiry, such as CDS, CPOE, or health care quality and safety. For example, we have interviewed chief medical information officers, clinician users (including physicians, nurses, and pharmacists), quality assurance staff, information technology (IT) staff, and in-house health IT vendor staff.
The research team can develop an observation plan by asking itself, “Where does the entity or functionality I am evaluating get used, developed, and maintained?” We have purposely targeted people who are both expert users and new or reluctant users of a technology, as well as those who are either at the organization’s center or at its periphery (for example, outlying clinics or clinics owned by a subsidiary). With four or five researchers, the RAP team can distribute itself across sites to collect data that will represent as much variability as possible. Shepherds and organizational leaders tend to assume that you will want to watch and interview experts, super-users, or leaders. Make sure to communicate your desire to see the “average” users, the “outliers,” the non-users, and the “curmudgeons” so that interview and observation data will reflect the spectrum of perspectives likely found in the organization.
Gathering information about the site before the visit will help to ensure that the RAP team collects the right data. This should include a demonstration of the informatics applications being studied and a telephone interview or written questionnaire with an organizational leader who can describe the institution and its clinical information systems. We have found it useful to send a "site profile" form (see Appendix A, page 4) to a knowledgeable leader to gather information about the organization and its clinical informatics capacities. This profile should be developed based on the literature describing the specific facet of clinical informatics to be studied, the knowledge and experience of the research team, and input from the site liaison or shepherd. The profile is formatted as a questionnaire that is easy to complete. The focus of the site profile should be on factual information, such as the number of beds in the hospital, the type of EHR system, and the number of physicians using CPOE, rather than matters of opinion; the latter should be gathered through field visits.
The site profile should be customized to help answer basic questions about the organization and about informatics features that are the focus of the research. When we have not been able to collect comprehensive profile data beforehand, we have assigned a team member the task of filling in the missing data during our site visits. Websites and online searches can also yield a wealth of information about an organization before the visit.
Making the most of intensive site visits
For some RAP teams, research sites are nearby and easily accessible. In such situations, RAP teams have the luxury of returning several times to a site in a data collection-analysis-collection-analysis cycle. Questions uncovered during an initial visit can be examined during data analysis and further investigated at a subsequent visit. However, in other cases a RAP team has only one chance to collect most of its data.
We have found that a team of 4–5 researchers can gather comprehensive information about a hospital or outpatient care network in a period of three days (see , which outlines the process, and Appendix A, page 7, which shows a schedule). Success rests on identifying a shepherd who works inside the organization, supports the research endeavor, and has both good credibility and the availability to help the RAP team gain access to research participants (clinicians, nurses, administrators, and IT staff).
Having a multidisciplinary team means that observers will note different features of how clinical informatics tools are designed, implemented, used, and revised. To make the most efficient use of limited resources, individuals should interview and observe people and activities that are in their own areas of expertise—for example, if you have a pharmacist on your team, make sure that pharmacist observes local pharmacists and interviews the person tasked with managing an e-prescribing system.
Before the site visit, prepare a preliminary schedule of interviews and observations together with the local shepherd. This schedule may change based on early observations, so one member of the team should coordinate the logistics and keep track of researchers' assignments. Gathering the research team to debrief over lunch and at the beginning and end of each day is essential to maximize rapid learning about local terminology, preliminary findings, and topics for subsequent observations and interviews. These debriefings can be held in person or by conference call when the team is dispersed geographically. A sample agenda for debriefing sessions is included in the fieldwork guide in Appendix A, page 14.
Make sure to record debriefings so that they can become part of the dataset for each site visit. Interviews, both formal and informal, should be audio recorded and transcribed as quickly as possible. Interviewers should also compile interview notes so that the research process is not hindered by a transcription delay or a failed recording.
Preparing a fieldwork guide
A fieldwork guide, such as the one shown in Appendix A, that outlines the site visit activities and data collection tools has been invaluable for our research team. This is a frequently revised document that facilitates the coordination of team-based research. The guide should include everything a RAP team member will need to collect data during a site visit: for example, the site profile (including its results if available), the site visit schedule, the field notes form, informal and formal interview guides, brief field surveys to be administered opportunistically, and team meeting agendas.
Each of these items requires considerable time to prepare and should be customized for each research or evaluation project. When developing the guide, think about what background information you will want during analysis. We have made data analysis much easier by incorporating a template or header in data collection documents that includes critical information about observations and participants. Participant characteristics could include gender, role, or years using the system. Observation characteristics could include time and location of observation, number of individuals observed, or focus of observation.
There are many data collection tools that RAP teams can use, and teams should select the ones that will work best for their field setting and focus. Each element of the fieldwork guide is based on established ethnographic methods. These include familiar data collection strategies such as in-depth interviewing, naturalistic observation with opportunistic interviews, and survey research. Less familiar data analysis methods include creating charts, maps, or "rich pictures" that model the system being studied; these can include depictions of physical spaces, relationships among system components, or specific situations.(20) A working understanding of the research principles behind each element of the fieldwork guide will enhance the validity and reliability of RAP findings. A detailed discussion of each of these methods is beyond the scope of this manuscript, but can be found in other sources.(16) Any RAP team is likely to use formal semi-structured interviews, naturalistic observations, and brief field surveys (short surveys that include fixed-choice and a few open-ended questions and that can be administered in person by a member of the RAP team or as a respondent-completed paper survey). The fieldwork guide should include the actual data collection instruments that will be used, as well as procedures for administering them. Because RAP data collection procedures are fluid, it is important for each team member to have a working version of the field manual so that procedures are followed consistently.
Prior to each interview or observation period, the subject is given a fact sheet (see Appendix A, page 8) describing the study. Our formal interview guide (see Appendix A, page 9) contains a comprehensive list of open-ended questions that are subsequently carefully tailored to fit the expertise and perspective of each interviewee. Some questions make sense to all participants, but in a highly specialized setting such as a hospital, tailored questions ensure that valuable interview time is focused on the areas in which the participant can speak most informatively. We have adopted an interviewing technique in which two interviewers with different training and perspectives are present at each interview to ensure the most productive follow-up questions and subsequent understanding of the interview data. One interviewer always takes the primary interviewer role. The second interviewer writes notes about the interview, flags terms or statements that may be difficult to transcribe, and follows up on areas that have not been adequately covered. The secondary interviewer's notes make rapid data analysis possible at team meetings at the end of each day, so that interview data do not have to be transcribed before being incorporated in preliminary analysis.
In addition to formal interviews, we utilize a field survey, or informal interviewing guide (a sample field survey instrument is shown in Appendix A, page 12). The field survey is a short interviewing tool that is administered to as many participants as possible (up to 30–40 in a hospital site visit). We have tailored our field survey to each site depending on the CDS modules available and on the local names given to different features. Questions for different field sites have covered usage, perceptions of CDS, awareness of a CDS committee, clinicians' involvement with developing CDS, communication about new CDS, and training and support. This short interview instrument is intended to help us gather information from a wider range of users than those formally interviewed or observed. It is also useful for collecting quantitative data from multiple perspectives (especially when observing a hospital system, where it is relatively easy to collect 20–40 field surveys in one day). We use it whenever and wherever appropriate: in charting rooms, in the cafeteria, and during our observations. All members of the RAP team need a clear understanding of how to collect both fixed-choice and open-ended data in an interview for the field survey data to be useful.
Our fieldwork guide also includes a field notes form (see Appendix A, page 11), a template that helps to organize naturalistic observations. The field notes form includes fields in which the RAP team member documents who is being observed and when observations occur, a list of topics to investigate, and questions researchers are reminded to ask. For example, the field notes form may remind a team member to ask about a specific CDS module such as a drug-allergy alert. Team members who are new to fieldwork will need substantial practice and training in naturalistic observation so that their field notes are useful for subsequent analysis. Patton's chapter on fieldwork strategies and observation methods provides an excellent introduction to this method of data collection.(36)
Data analysis can be greatly facilitated by carefully organizing the dataset. First, we assemble a database of all data elements (interviews, field notes, surveys, collected documentation, or artifacts). We name and format the items consistently so that they are easily identified. These data can be cataloged using an Excel spreadsheet or qualitative analysis software such as NVivo (QSR International, Doncaster, Victoria, Australia) or ATLAS.ti (ATLAS.ti Scientific Software Development GmbH, Berlin, Germany). The key is to assemble the database quickly and to ensure that all data files and documents have been collected from the team members within several days of the site visit. Electronic copies of all files are stored in a password-protected centralized repository. Team members should attempt to write detailed field notes of their observations within 24 hours; if this is not possible, they should complete all field notes within several days.
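As a minimal sketch of how such a catalog might be built automatically, the script below assumes a hypothetical file-naming convention (site, data type, and sequence number separated by underscores, e.g. `siteA_interview_03.txt`) and writes an index to a CSV file that could later be opened in Excel; the naming scheme and function names are illustrative, not part of the method described above.

```python
import csv
from pathlib import Path

def catalog_dataset(data_dir: str, catalog_path: str) -> int:
    """Scan data_dir for files named <site>_<datatype>_<seq>.<ext>
    (a hypothetical convention) and write an index to catalog_path.
    Returns the number of cataloged items."""
    rows = []
    for f in sorted(Path(data_dir).glob("*_*_*.*")):
        # Split the stem into its three metadata parts.
        site, datatype, seq = f.stem.split("_", 2)
        rows.append({"file": f.name, "site": site,
                     "type": datatype, "sequence": seq})
    with open(catalog_path, "w", newline="") as out:
        writer = csv.DictWriter(
            out, fieldnames=["file", "site", "type", "sequence"])
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)
```

A consistent convention like this makes it trivial to verify, within days of a site visit, that every expected interview, field note, and survey file has been collected from each team member.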
We start analyzing data almost as soon as we collect them. We conduct extensive on-site debriefing sessions both at mid-day and at the end of the day. These debriefing sessions are recorded with an audio recording device, with pen and paper, or both. After the site visit, initial tasks include submitting audio recordings of interviews to a transcriptionist with the expectation that transcripts will be returned within one to two weeks. While we wait for transcripts, we meet for at least several hours as a team to synthesize what we have learned and generate questions for subsequent analysis. We also summarize any quantitative data garnered from surveys or structured questions and construct a basic description of the field site from the site profile and other collected documentation.
Based on research questions and topics identified before data collection and during debriefings, as well as the sections we envision for the reports we will write, we develop a list of key issues and "sensitizing concepts" that are the focus of data analysis efforts.(36) This list can form an initial template for data analysis and reporting results. We have conducted rapid data analysis by splitting the dataset among the team members and identifying sections of interviews, field notes, or other data elements that relate to each part of our template. This can be greatly facilitated through the use of qualitative analysis software, but can be accomplished in simple word processing or spreadsheet programs as well. Next, we compile all data elements that fit under a given template category and summarize the themes, issues, agreements, and disagreements in the content of each. We are sensitive to, and try to understand, differences observed and reported by different researchers and informants.
For example, if a team has identified "training" as a key issue, the team will divide up the dataset (interviews, field notes, surveys, etc.) and label all sections that relate to training. These sections will be compiled by one team member into a single file and then summarized to describe the range of issues relating to training. This summary needs to document variability in respondents' and researchers' statements about training. What do doctors say about training that is different from what nurses say about training? What does the informatician on the research team say that is different from what the ethnographer says? It is important to note consensus as well, along with statements that directly confirm or disconfirm your interpretations.
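The compilation step described above is usually done by hand or in qualitative analysis software, but a first machine pass can speed it up. The sketch below is a simplified illustration only: the template categories and keyword lists are hypothetical, and a real analysis would rely on human coding rather than keyword matching.

```python
from collections import defaultdict

# Hypothetical template categories, each with keywords that suggest
# a text segment may belong under that category.
TEMPLATE = {
    "training": ["training", "trainer", "orientation"],
    "alerts": ["alert", "warning", "override"],
}

def compile_by_category(segments):
    """Group text segments (e.g., paragraphs from interviews or field
    notes) under every template category whose keywords they mention.
    A segment can appear under more than one category."""
    compiled = defaultdict(list)
    for seg in segments:
        text = seg.lower()
        for category, keywords in TEMPLATE.items():
            if any(k in text for k in keywords):
                compiled[category].append(seg)
    return dict(compiled)
```

Each compiled category file would then be summarized by a team member, with keyword hits serving only as candidates for the human reviewer to confirm or discard.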
Another analytic tool we often use is a modified form of cultural domain analysis, also called "pile sorting" or taxonomic analysis. This analytic process allows researchers to cluster assembled lists of themes and other low-level phenomena (such as lists of CDS tools or types of unintended consequences) and then assign categories and create a taxonomy or set of themes from these clusters.(40)
Our site reports contain basic descriptive information as well as the results of topic summaries and taxonomic/theme analyses. Once the sections of a report are drafted, the whole team should meet to review the draft, discuss findings that emerge from that review, and revise the report. Our reports have varied in length from 8–18 pages. Given the rapid data collection and analysis process, it is prudent to send the report back to a local participant, such as the shepherd or an organizational leader, so that descriptions of the site can be verified. By following this procedure, a RAP team should be able to produce a preliminary report about the site visit within one or two months after data are collected.
Conducting RAP across multiple sites
If assessments include multiple health systems or a sequence of visits to one site over time, the team will conduct interim analysis between observation periods. By writing a site report before the team visits a new site, the team will have the opportunity to revise the fieldwork guide based on what worked and did not work methodologically, include new sensitizing concepts in the subsequent data collection, and prevent a backlog of unexamined data.