

Methods Inf Med. Author manuscript; available in PMC 2013 August 19.
Published online 2010 December 20. doi: 10.3414/ME10-01-0042
PMCID: PMC3746487

Rapid Assessment of Clinical Information Systems in the Healthcare Setting: An Efficient Method for Time-Pressed Evaluation

CK McMullen, PhD,1 JS Ash, PhD,2 DF Sittig, PhD,3 A Bunce, MA,2 K Guappone, MD, PhD,4 R Dykstra, MD, MS,2,* J Carpenter, RPh, MS,4 J Richardson, MLIS, MS,2 and A Wright, PhD5



Recent legislation in the United States provides strong incentives for implementation of electronic health records (EHRs). The ensuing transformation in U.S. health care will increase demand for new methods to evaluate clinical informatics interventions. Time constraints and a rapidly changing environment will make traditional evaluation techniques burdensome. This paper describes an anthropological approach that provides a fast and flexible way to evaluate clinical information systems.


Adapting mixed-method evaluation approaches from anthropology, we describe a rapid assessment process (RAP) for assessing clinical informatics interventions in health care that we developed and used during seven site visits to diverse community hospitals and primary care settings in the United States.


Our multi-disciplinary team used RAP to evaluate factors that either encouraged people to use clinical decision support (CDS) systems, or interfered with use of these systems in settings ranging from large urban hospitals to single-practitioner, private family practices in small towns.


Critical elements of the method include: 1) developing a fieldwork guide; 2) carefully selecting observation sites and participants; 3) thoroughly preparing for site visits; 4) partnering with local collaborators; 5) collecting robust data by using multiple researchers and methods; and 6) analyzing and reporting data in a structured manner helpful to the organizations being evaluated.


RAP, iteratively developed over the course of visits to seven clinical sites across the U.S., has succeeded in allowing a multidisciplinary team of informatics researchers to plan, gather and analyze data, and report results in a maximally efficient manner.

Keywords: Ethnography, Qualitative Methods, Clinical Informatics, Healthcare Evaluation Mechanisms, Computerized Medical Records Systems


The U.S. federal government’s American Recovery and Reinvestment Act (ARRA) incentivizes health care providers to implement and use electronic health records (EHRs).(1) Given the unprecedented changes that will likely ensue from the widespread implementation of EHR-related tools such as computerized physician order entry (CPOE) and clinical decision support (CDS), workplace evaluations in diverse healthcare settings will be in high demand. However, evaluating clinical informatics in the workplace entails assessment of an evolving entity and defies traditional evaluation methods for several reasons.(2) Clinical information tools are often modified as software and content are updated.(3) Moreover, these tools are used in complex healthcare settings that challenge simple assumptions about the way informatics affects clinical practice (2;4). Outcomes-oriented evaluation studies such as randomized controlled trials can formally evaluate the way informatics interventions affect clinician performance and patient outcomes (5–7). They focus on numbers and outcomes, but they do little to describe the processes that create successful implementations and interventions (8;9). Outcomes-oriented evaluations are also costly and time-consuming, thereby making them ill-suited to guide iterative system improvements or to provide “actionable information” in rapidly changing organizations.(10) In contrast, process-oriented evaluation methods using naturalistic designs can best discover how and why systems are successful or not (8;9;11). The term “naturalistic” refers to studying real-world situations as they unfold as opposed to experimental designs where conditions are tightly controlled (and less realistic).(12) Process-oriented evaluations will become increasingly important in the current health care policy environment, which is driving rapid dissemination of health IT interventions throughout many sectors of health care.(13)

Ethnography—the methodological hallmark of cultural anthropology—gives us a powerful approach for understanding change processes, rooted in developing a deep understanding of interactions among components of complex systems.(14) This understanding is gained through interviews and observations, as well as by analyzing texts and other aspects of an environment such as physical layout and design, and the use of artifacts (such as computers, paper, or monitors). Traditional ethnographic methods are well suited to investigating how and why things happen, yet they often involve lengthy periods of fieldwork at one site. The researchers’ time and effort are spent developing cultural competence—an understanding of the people and culture they are observing—as they build rapport and trust among subjects. Cultural competence, rapport and trust are important in ethnographic research because they give ethnographers the ability to see the world the way another group sees it.(15;16)

Clinical IT decision makers often need answers to evaluation questions quickly and across multiple sites while opportunities remain to take action and modify informatics implementations. The ability to respond appropriately and in a timely way is especially important as patient safety can be threatened by unintended consequences associated with the introduction of informatics technologies in clinical settings. (17;18) A generalizable method of naturalistic inquiry that can help to rapidly identify and assess a situation is desirable for both research and application purposes. A rapid ethnographic assessment process therefore seems highly applicable to informatics.(19)

The rapid assessment process (RAP) provides a way of gathering, analyzing, and interpreting high quality ethnographic data expeditiously so that action can be taken as quickly as possible.(20;21) RAP uses a mix of qualitative and quantitative methods that have been used effectively in the public health arena, and allows reporting of findings according to STARE-HI principles.(22) In the past, RAP has been used to develop intervention programs for nutrition, primary health care, and HIV/AIDS.(23–25) The Rapid Assessment, Response, and Evaluation Project (RARE) has been especially well documented, with manuals available to guide investigators.(26;27)

Our approach to RAP draws heavily from James Beebe’s book, Rapid Assessment Process: An Introduction.(20) The theoretical orientation of our approach to RAP brings together concepts from evaluation anthropology and systems theory,(28) whereby a clinical informatics intervention is considered as “a sociocultural entity, an inter-related set of activities conducted by a social organization to achieve one or more specific aims.”(14) Central to our approach is a commitment to “build evidence relative to the programs from the perspectives of multiple individuals or stakeholders.”(14)

RAP relies on a team approach to conducting a focused, problem-oriented investigation. Compared to traditional ethnography, RAP streamlines the data collection, analysis, and interpretation processes, involves less time in the field, and provides timely feedback to internal stakeholders. RAP depends heavily on triangulation, which examines an issue by using multiple perspectives and data points to enhance the validity and reliability of both qualitative and quantitative data.(29) Triangulation can take the form of asking similar questions of different people in different settings. Alternatively, it can entail collecting different types of data (such as data from surveys, observations, and interviews) about a single topic from the same group of individuals.(30;31) While fieldwork or direct interaction with study subjects is required, data can be collected both before and after fieldwork to facilitate timely and cost-efficient research. In adapting Beebe’s approach to assessing clinical informatics, we argue that RAP’s success hinges on the research team’s efforts to understand the “insider perspective” of many constituents: healthcare providers, informatics staff, administrative and ancillary staff, as well as patients. In this paper, we describe how to successfully design and manage RAP evaluations for health informatics.


The Provider Order Entry Team (POET) at Oregon Health & Science University in Portland, OR was funded to adapt RAP to study clinical decision support (CDS) systems in community hospitals. Subsequently, POET received further funding to use these methods to study CDS systems in outpatient clinics. We have defined computerized provider order entry (CPOE) as a system that allows a provider, such as a doctor or nurse practitioner, to directly enter medical orders via computer. We have defined clinical decision support as “passive and active referential information as well as reminders, alerts, ordersets, and guidelines.”(32)

We generated a fieldwork guide and process for conducting rapid assessments that can be applied to a range of workplace studies in clinical informatics. Please see Appendix A for a sample fieldwork guide. The guide and process have been refined over the course of two years (2007–2009), as we first visited two community hospitals and subsequently visited five ambulatory settings. Starting with existing examples of protocols and fieldwork guides from public health settings as templates,(23;24;26;27) we have been able to successfully and reliably create a fieldwork guide for conducting RAP across a variety of organizations ranging from primary to tertiary care settings.

Our adaptation of RAP for clinical informatics has been informed by guidelines for designing and reporting such evaluations, such as Utarini, Winkvist and Pelto’s “11 critical criteria” for conducting RAP(33) and the STARE-HI statement on reporting clinical informatics evaluations.(22) Scrimshaw and Hurtado (23) provide numerous examples of data collection guides to be used by RAP team members, including observation guides for studying health-care providers, suggestions for documenting specific health-care processes, and focused interview questions for specific types of health-care personnel. We used these as starting points for creating data collection tools that related to clinical decision support in community hospital and outpatient settings. At the end of each site visit, we discussed changes to our protocol, and we met frequently before each site visit in order to tailor our protocol for site-specific conditions.

To date, our fieldwork guide includes the following: 1) a site visit preparation schedule, 2) a pre-visit site profile, 3) a site visit schedule, 4) a fact sheet to be given to subjects, 5) a typical interview guide, 6) a form for field notes, 7) a brief field survey instrument, and 8) an agenda for team debriefings. Data analysis procedures evolved over time to promote reflexivity (awareness of how each team member’s perspective may influence the research process), documentation, and triangulation. Within a few months of a site visit, we conduct our data analysis and write a report of our findings. As we visit multiple sites, we compare themes and findings across sites in order to produce research reports that examine focused topics across various sites. Previously published rapid assessment protocols have emphasized the importance of a fieldwork guide for rigorously documenting evaluation activities, gaining a clear understanding of what team members are expected to do, and ensuring replicability. (23;24;26;27) In developing and adapting our procedures over time, we have found that lesson extremely valuable.

Results: How to use RAP for clinical informatics evaluations

Table I identifies the key steps in RAP for clinical informatics evaluations. We assume that three to five RAP team members would be working together, with a full-time project manager and other team members working on the project from 10–20 hours per week (except during site visits which require a full-time commitment).

Table I
Key steps of RAP for clinical informatics:

Our goal in this paper is to present key issues, challenges, and methodological decisions to help other research and evaluation teams who wish to use RAP. First, we outline issues relating to setting up a RAP evaluation and assembling the research team. Second, we describe preparatory work that is essential for conducting rapid data collection during brief site visits. Third, we outline data collection procedures. Finally, we discuss data analysis and iterative data collection for evaluations that involve visits to multiple sites.

Assembling a RAP team

Ethnographers assume that there is not one objective reality underlying a given situation and that people from different backgrounds, who have different roles and different levels of power and autonomy, will perceive situations differently. Understanding the many perspectives of people who play a range of roles in health care delivery is crucial for assessing clinical IT interventions. Likewise, incorporating multiple perspectives within the research team itself enhances the likelihood of accurately describing the setting. Multidisciplinary teams will be less likely to overlook issues or constituencies, leave tacit assumptions unquestioned, or misinterpret findings.(34;35)

We have found that at minimum it is beneficial for a RAP team to consist of an informatician, a trained qualitative researcher, and a clinician. We have found it essential to have at least one informatician on the team who understands the technical aspects of the systems under study as well as the context surrounding implementations. Equally important is a team member who is trained in qualitative research and ethnography. This person can lead the construction of interview guides, train team members to conduct naturalistic observation (for example, shadowing clinicians or observing interactions in a charting room or other common setting), train team members in informal and formal interviews, and guide the data analysis. In our experience, having a team member with clinical expertise (a physician, nurse, or pharmacist) also greatly enhances the team’s ability to ask good questions and interpret findings. It is not always feasible to have one person in each of these roles, but we strongly advise doing so. At least two people should be working together when collecting data at each site. When analyzing data, it is important to meet together frequently enough so that the team research process is intensive—at least two hours per week.

Doing RAP well requires asking numerous questions of research participants and of fellow team members. Individuals who are intimidated or intimidating are not the best RAP researchers. Rather, one should seek individuals who are comfortable interacting in clinical settings, good at listening and asking questions, and good at brainstorming. RAP team meetings will involve active questioning, testing preliminary findings, and proposing competing models and explanations for the phenomena under study. Picking a team of people who can participate in such work is as important as asking the right interview questions or constructing a sample for data collection.

The specific duties each RAP team member performs can vary. Each team member is considered a fieldworker, meaning each person will record observations and take detailed field notes. Some team members will also serve as interviewers. Ideally, one person will serve as the team leader and data manager. Depending on the situation, one team member can be designated to collect field survey data. Alternatively, multiple fieldworkers can gather field survey data during the course of their observations.

Enabling flexibility

The creativity and flexibility of a multi-disciplinary RAP team needs to be fostered by a data management process that allows for rapid data collection and analysis. Reviews of protections for human subjects also need to be carefully designed in order to enable this. Some RAP projects may be considered exempt by Institutional Review Boards (IRBs). These include studies conducted strictly for process improvement purposes. A project may also be considered minimal risk if efforts are taken to ensure that no HIPAA-covered Protected Health Information (PHI) or medical data is collected. This may allow more flexibility in the informed consent process. Often, the research team must obtain IRB approval from both the researchers’ home institution(s) as well as the site(s) to be studied. When multiple IRBs are involved, researchers should ask that one IRB assume overall responsibility for review and that the others “cede” their oversight role to that primary institution. Moreover, unlike most research protocols that are reviewed by an IRB, the data collection materials and recruitment strategies of this method are fluid. RAP can be derailed by a month-long interval between IRB modifications and approval. Therefore, researchers should consult with their local IRBs on how to submit the initial IRB application and structure the research to ensure that subsequent minor changes in interview questions or recruitment letters do not necessitate full board review. Many changes can be approved by expedited review, and some IRBs will be comfortable approving interview and observation topics rather than detailed questions and data collection templates. Finally, researchers should always ask for permission to engage in as many different recruitment or data collection approaches as they might need. For example, if you plan to recruit clinicians by email, impromptu introductions, or meetings arranged through a division head, prepare recruitment materials for review and ask for permission to use all of these strategies.

Having the right equipment on hand during site visits will also greatly enable flexibility. We have learned that each fieldworker should have a digital audio-recorder handy to capture impromptu informal interviews that can happen when shadowing clinicians. Although most observational data are easily recorded on a notepad, fieldworkers should have the ability to switch to audio recording of lengthy explanations or conversations about key research questions. Ideally, one team member should function as the data coordinator or project manager, and should bring a laptop during site visits. This person can modify data collection tools to include local terminology, upload audio or text files that researchers create during a site visit, and track the team’s work. For formal interview situations that result in transcripts, using a microphone designed to capture multiple people in a group (a high quality omnidirectional microphone) will ensure the best quality audio recording. Access to a printer is also very important, so that data collection instruments can be modified during the course of a site visit. Finally, each RAP team member must have a cell phone to coordinate schedules and report whereabouts, and a laptop for typing full field notes at the end of each day.

Preparing for fieldwork

Planning an effective site visit can take several months of careful preparation prior to entering the field. We have developed a form, the site visit preparation schedule, to help prepare for site visits (See Appendix A, page 3).

During preparation for fieldwork, the RAP team needs to gain access to the organization, gather preliminary information about the site, schedule interviews with key people in the organization who can inform the team about the phenomenon or system being evaluated, and prepare a fieldwork manual that will be used to gather data during the site visit.

Health care organizations tend to be hierarchical, and approval for the site visit must come from an organizational leader. Thinking carefully about whom to approach and how to portray the RAP activity will help to ensure access to the organization you are trying to study. Once an organizational leader approves, he or she should designate a “shepherd,” or liaison, who can facilitate interview scheduling, provide information about the organization’s structure, point the RAP team to the right places for observations, provide in-person introductions, and generally orient the team. This person may be a trainer who knows both informatics and clinical staff or someone from within the administration who has broad knowledge of the organization. Whoever takes on the shepherd role should be invested in the RAP team as well as the research question. The shepherd can expect to spend several days before the site visit and several hours during each site visit day to facilitate the RAP researchers’ activities.

Initial conversations with organizational leaders and the designated shepherd can help to identify candidates for in-person, formal interviews. We ask each person we contact for recommendations about who should be formally interviewed (a technique called ‘snowball’ or ‘chain’ sampling).(36) We have conducted up to fifteen formal interviews per healthcare organization, but fewer may be appropriate at a small hospital or clinic network. Interview participants should be selected according to their role and relevant knowledge about the subject of inquiry such as CDS, CPOE, or health care quality and safety. For example, we have interviewed chief medical information officers, clinician users (including physicians, nurses, and pharmacists), quality assurance staff, information technology (IT) staff, and in-house health IT vendor staff.

The research team can develop an observation plan by asking itself, “Where does the entity or functionality I am evaluating get used, developed, and maintained?” We have purposely targeted people who are both expert users and new or reluctant users of a technology, as well as those who are either at the organization’s center or at its periphery (for example, outlying clinics or clinics owned by a subsidiary). With four or five researchers, the RAP team can distribute itself across sites to collect data that will represent as much variability as possible. Shepherds and organizational leaders tend to assume that you will want to watch and interview experts, super-users, or leaders. Make sure to communicate your desire to see the “average” users, the “outliers,” the non-users, and the “curmudgeons” so that interview and observation data will reflect the spectrum of perspectives likely found in the organization.

Gathering information about the site before the visit will help to ensure that the RAP team collects the right data. This should include a demonstration of the informatics applications being studied and a telephone interview or written questionnaire with an organizational leader who can describe the institution and its clinical information systems. We have found it useful to send a “site profile” form (see Appendix A, page 4) to a knowledgeable leader to gather information about the organization and its clinical informatics capacities. This profile should be developed based on the literature describing the specific facet of clinical informatics to be studied, the knowledge and experience of the research team, and input from the site liaison or shepherd. The profile is formatted as a questionnaire that is easy to complete. The focus of the site profile should be on factual information, such as the number of beds in the hospital, the type of EHR system, and the number of physicians using CPOE, rather than matters of opinion. The latter should be gathered through field visits.

The site profile should be customized to help answer basic questions about the organization and about informatics features that are the focus of the research. When we have not been able to collect comprehensive profile data beforehand, we have assigned a team member the task of filling in the missing data during our site visits. Websites and online searches can also yield a wealth of information about an organization before the visit.

Making the most of intensive site visits

For some RAP teams, research sites are nearby and easily accessible. In such situations, RAP teams have the luxury of returning several times to a site in a data collection-analysis-collection-analysis cycle. Questions uncovered during an initial visit can be examined during data analysis and further investigated at a subsequent visit. However, in other cases a RAP team has only one chance to collect most of its data.

We have found that a team of 4–5 researchers can gather comprehensive information about a hospital or outpatient care network in a period of three days (see Figure 1, which outlines the process, and Appendix A, page 7, which shows a schedule). Success rests on identifying a shepherd who works inside the organization, supports the research endeavor, and has both good credibility and the availability to help the RAP team gain access to research participants (clinicians, nurses, administrators, and IT staff).

Figure 1
Process for RAP site visits

Having a multidisciplinary team means that observers will note different features of how clinical informatics tools are designed, implemented, used, and revised. To make the most efficient use of limited resources, individuals should interview and observe people and activities that are in their own areas of expertise—for example, if you have a pharmacist on your team, make sure that pharmacist observes local pharmacists and interviews the person tasked with managing an e-prescribing system.

Before the site visit, prepare a preliminary schedule of interviews and observations together with the local shepherd. This schedule may change based on early observations, so one member of the team should coordinate the logistics and keep track of researchers’ assignments. Gathering the research team to debrief over lunch, and at the end and beginning of each day, is essential to maximize rapid learning about local terminology, preliminary findings, and topics for subsequent observations and interviews. These debriefings can be held in person or by conference call when the team is dispersed geographically. A sample agenda for debriefing sessions is included in the fieldwork guide in Appendix A, page 14.

Make sure to record debriefings so that they can become part of the dataset for each site visit. Interviews, both formal and informal, should be tape recorded and transcribed as quickly as possible. Interviewers should also compile interview notes so that the research process is not hindered by a transcription delay or a failed recording.

Preparing a fieldwork guide

A fieldwork guide, such as shown in Appendix A, outlining the site visit activities and data collection tools has been invaluable for our research team. This is a frequently revised document that facilitates the coordination of team-based research. The guide should include everything a RAP team member will need to collect data during a site visit. For example, it should include the site profile (including its results if available), the site visit schedule, the field notes form, informal and formal interview guides, brief field surveys to be administered opportunistically, and team meeting agendas.

Each of these items requires considerable time to prepare and should be customized for each research or evaluation project. When developing the guide, think about what background information you will want during analysis. We have made data analysis much easier by incorporating a template or header in data collection documents that includes critical information about observations and participants. Participant characteristics could include gender, role, or years using the system. Observation characteristics could include time and location of observation, number of individuals observed, or focus of observation.
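To illustrate, the template header described above can be represented as a small structured record so that every field note or interview file carries the same metadata. This is only a sketch of the idea; the specific field names (site_id, years_using_system, and so on) are invented for illustration and are not the authors' actual form.

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class CollectionHeader:
    """Hypothetical metadata header attached to each data collection document."""
    site_id: str                               # e.g. "site03"
    document_type: str                         # "field_notes", "formal_interview", "field_survey"
    collector: str                             # initials of the RAP team member
    date: str                                  # ISO date of the observation or interview
    location: str                              # e.g. "charting room", "pharmacy"
    participant_role: Optional[str] = None     # e.g. "nurse", "CMIO"
    participant_gender: Optional[str] = None
    years_using_system: Optional[int] = None
    n_observed: Optional[int] = None           # for observations of groups
    focus: Optional[str] = None                # e.g. "drug-allergy alerts"

# Example header for a single observation session
header = CollectionHeader(
    site_id="site03",
    document_type="field_notes",
    collector="AB",
    date="2008-05-12",
    location="charting room",
    participant_role="nurse",
    n_observed=4,
    focus="order sets",
)

# asdict() flattens the header for export to a spreadsheet or analysis package
record = asdict(header)
```

Keeping the header in one canonical structure makes it trivial to verify, at the end of a site visit, that no document is missing the context needed for analysis.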

There are many data collection tools that RAP teams can use, and teams should select the ones that will work the best for their field setting and focus. Each element of the fieldwork guide is based on established ethnographic methods. These include familiar data collection strategies such as in-depth interviewing, naturalistic observation with opportunistic interviews, and survey research. Less familiar data analysis methods include creating charts, maps, or “rich pictures” that model the system being studied. These can include depictions of physical spaces, relationships among system components, or specific situations.(20) A working understanding of the research principles behind each element of the fieldwork guide will enhance the validity and reliability of RAP findings. A detailed discussion of each of these methods is beyond the scope of this manuscript, but can be found in other sources.(16;20;37;38) Any RAP team is likely to use formal semi-structured interviews, naturalistic observations, and brief field surveys (short surveys that include fixed-choice and a few open-ended questions that can be administered in person by a member of the RAP team or as a respondent-completed paper survey). The fieldwork guide should include the actual data collection instruments that will be used, as well as procedures for administering them. Because RAP data collection procedures are fluid, it is important for each team member to have a working version of the field manual so that procedures are followed consistently.

Formal Interviews

Prior to each interview or observation period, the subject is given a fact sheet (see Appendix A, page 8) describing the study. Our formal interview guide (see Appendix A, page 9) contains a comprehensive list of open-ended questions that are subsequently carefully tailored to fit the expertise and perspective of each interviewee. Some questions make sense to all participants, but in a highly specialized setting such as a hospital, tailored questions ensure that valuable interview time is focused on the areas in which the participant can speak most informatively. We have adopted a unique interviewing technique in which two interviewers with different training and perspectives are present at each interview to ensure the most productive follow-up questions and subsequent understanding of the interview data. One interviewer always takes a primary interviewer role. The second interviewer writes notes about the interview, makes notes about terms or statements that may be difficult to transcribe, and follows up on areas that have not been adequately covered. The secondary interviewer’s notes make rapid data analysis possible at team meetings at the end of each day so that interview data do not have to be transcribed in order to be incorporated in preliminary analysis.

Informal Interviews

In addition to formal interviews we utilize a field survey or informal interviewing guide. Appendix A, page 12, shows a sample field survey instrument. The field survey is a short interviewing tool that is administered to as many participants as possible (up to 30–40 in a hospital site visit). We have tailored our field survey to each site depending on CDS modules available and on the local names given to different features. Questions for different field sites have covered usage, perceptions of CDS, awareness of a CDS committee, clinicians’ involvement with developing CDS, communication about new CDS, and training and support. This short interview instrument is intended to help us gather information from a wider range of users than those formally interviewed or observed. It is also useful for collecting quantitative data from multiple perspectives (especially when observing a hospital system, where it is relatively easy to collect 20–40 field surveys in one day). We use it whenever and wherever appropriate: in charting rooms, the cafeteria, and during our observations. All members of the RAP team need to have a clear understanding of how to collect both fixed-choice and open-ended data in an interview in order for the field survey data to be useful.
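The fixed-choice portion of such a field survey lends itself to a quick quantitative summary at end-of-day debriefings. A minimal sketch, with an invented question and invented response values rather than the authors' actual instrument:

```python
from collections import Counter

# Hypothetical fixed-choice responses to "How often do you use CPOE order sets?"
responses = [
    "daily", "daily", "weekly", "never", "daily",
    "weekly", "daily", "never", "daily", "weekly",
]

tally = Counter(responses)
total = len(responses)

# Print a one-line frequency summary per answer, most common first
for answer, n in tally.most_common():
    print(f"{answer}: {n}/{total} ({n / total:.0%})")
```

Even this crude tally, updated daily, lets the team spot patterns (for example, a cluster of non-users in one unit) early enough to redirect observations during the same site visit.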

Our fieldwork guide also includes a field notes form (see Appendix A, page 11), a template which helps to organize naturalistic observations. The field notes form includes fields in which the RAP team member documents who is being observed, when observations occur, a list of topics to investigate, and questions researchers are reminded to ask. For example, the field notes form may remind a team member to ask about a specific CDS module such as a drug-allergy alert. Team members who are new to fieldwork will need a lot of practice and training in naturalistic observation so that their field notes are useful for subsequent analysis. Patton’s chapter on fieldwork strategies and observation methods provides an excellent introduction to this method of data collection.(36)

Data analysis

Data analysis can be greatly facilitated by carefully organizing the dataset. First, we assemble a database of all data elements (interviews, field notes, surveys, collected documentation, or artifacts). We name and format the items consistently so that they are easily identified. These data can be cataloged using an Excel spreadsheet or qualitative analysis software such as NVivo (QSR International, Doncaster, Victoria, Australia) or ATLAS.ti (ATLAS.ti Scientific Software Development GmbH, Berlin, Germany). The key is to assemble the database quickly and to ensure that all data files and documents have been collected from the team members within several days of the site visit. Electronic copies of all files are stored in a password-protected centralized repository. Team members should attempt to write detailed field notes of their observations within 24 hours; if this is not possible, they should complete all field notes within several days.
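As a minimal illustration, a catalog of this kind can also be generated with a short script rather than by hand. The sketch below assumes a hypothetical naming convention (site_type_sequence, e.g. "siteA_interview_03.docx") and a hypothetical folder name; neither is prescribed by the method itself.

```python
import csv
from pathlib import Path

def build_catalog(data_dir: Path) -> list:
    """Catalog data elements whose file names follow <site>_<type>_<seq>.<ext>.

    The naming convention here is a hypothetical example of the consistent
    naming the text recommends; adapt the parsing to your own convention.
    """
    rows = []
    for f in sorted(data_dir.glob("*_*_*")):
        site, dtype, seq = f.stem.split("_", 2)
        rows.append({"file": f.name, "site": site, "type": dtype, "sequence": seq})
    return rows

if __name__ == "__main__":
    # "rap_site_visit" is an illustrative folder of collected data elements.
    rows = build_catalog(Path("rap_site_visit"))
    with open("data_catalog.csv", "w", newline="") as out:
        writer = csv.DictWriter(out, fieldnames=["file", "site", "type", "sequence"])
        writer.writeheader()
        writer.writerows(rows)
```

The resulting CSV can be opened in Excel or imported into qualitative analysis software, giving the team a single index of the dataset within days of the site visit.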

We start analyzing data almost as soon as we collect them. We conduct extensive on-site debriefing sessions at mid-day and at the end of the day; these sessions are captured with an audio recorder, with pen and paper, or both. After the site visit, initial tasks include submitting audio recordings of interviews to a transcriptionist, with the expectation that transcripts will be returned within one to two weeks. While we wait for transcripts, we meet for at least several hours as a team to synthesize what we have learned and generate questions for subsequent analysis. We also summarize any quantitative data garnered from surveys or structured questions and construct a basic description of the field site from the site profile and other collected documentation.

Based on research questions and topics identified before data collection and during debriefings, as well as the sections we envision for the reports we will write, we develop a list of key issues and “sensitizing concepts” that become the focus of data analysis efforts (36;39). This list can form an initial template for data analysis and for reporting results. We have conducted rapid data analysis by splitting the dataset among the team members and identifying sections of interviews, field notes, or other data elements that relate to each part of our template. This can be greatly facilitated by qualitative analysis software, but can be accomplished in simple word processing or spreadsheet programs as well. Next, we compile all data elements that fit under a given template category and summarize the themes, issues, agreements, and disagreements in the content of each. We are sensitive to, and try to understand, differences observed and reported by different researchers and informants.

For example, if a team has identified “training” as a key issue, the team will divide up the dataset (interviews, field notes, surveys, etc.) and label all sections that relate to training. One team member then compiles these sections into a single file and summarizes them to describe the range of issues relating to training. This summary needs to document variability in respondents’ and researchers’ statements about training: What do doctors say about training that differs from what nurses say? What does the informatician on the research team say that differs from what the ethnographer says? It is important to note consensus as well, and statements that directly confirm or discredit your interpretations.
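The compile-and-compare step above can be sketched in a few lines. This is only an illustration, not part of the method: the excerpts, codes, and roles below are invented examples of what a team's rapid coding pass might produce.

```python
from collections import defaultdict

# Hypothetical coded excerpts, each tagged with a template category ("code")
# and the respondent's role, as a team's rapid coding pass might produce.
excerpts = [
    {"code": "training", "role": "physician", "text": "The classes were too short."},
    {"code": "training", "role": "nurse", "text": "Super-users helped us at the bedside."},
    {"code": "governance", "role": "physician", "text": "The CDS committee meets monthly."},
    {"code": "training", "role": "physician", "text": "I learned mostly by trial and error."},
]

# Compile everything filed under one template category ("training"), grouped
# by role so that differences between respondent groups are easy to see.
by_role = defaultdict(list)
for e in excerpts:
    if e["code"] == "training":
        by_role[e["role"]].append(e["text"])

for role, texts in sorted(by_role.items()):
    print(f"{role} ({len(texts)} excerpts):")
    for t in texts:
        print(f"  - {t}")
```

Grouping by role makes the variability the text calls for (doctors versus nurses, informatician versus ethnographer) visible at a glance before the summary is written.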

Another analytic tool we often use is a modified form of cultural domain analysis, also called “pile sorting” or taxonomic analysis. This process allows researchers to cluster assembled lists of themes and other low-level phenomena (such as lists of CDS tools or types of unintended consequences) and then assign categories, creating a taxonomy or set of themes from these clusters (40;41).
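A minimal sketch of the pile-sorting outcome: low-level themes from a site visit are assigned to analyst-chosen piles, yielding a two-level taxonomy. The themes and pile names below are hypothetical examples, not findings from our sites.

```python
# Each low-level theme (key) is assigned to a pile (value) by the analysts
# during sorting; both columns here are illustrative.
themes = {
    "drug-allergy alerts fire too often": "alert fatigue",
    "duplicate-order warnings ignored": "alert fatigue",
    "order sets speed up admissions": "workflow support",
    "templates reduce documentation time": "workflow support",
    "no one knows who maintains rules": "governance",
}

# Invert the assignments to obtain the taxonomy: pile -> member themes.
taxonomy = {}
for theme, pile in themes.items():
    taxonomy.setdefault(pile, []).append(theme)

for pile, members in taxonomy.items():
    print(pile, "->", members)
```

In practice the sorting itself is done by the team (on cards or in a spreadsheet); the point of the sketch is simply that the end product is a grouping of many low-level observations under a small number of higher-level categories.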

Our site reports contain basic descriptive information as well as the results of topic summaries and taxonomic/theme analyses. Once the sections of a report are drafted, the whole team should meet to review the findings, discuss any new insights that emerge from that review, and revise the report. Our reports have varied in length from 8 to 18 pages. Given the rapid data collection and analysis process, it is prudent to send the report back to a local participant, such as the shepherd or an organizational leader, so that descriptions of the site can be verified. By following this procedure, a RAP team should be able to produce a preliminary report about the site visit within one to two months after data are collected.

Conducting RAP across multiple sites

If assessments include multiple health systems or a sequence of visits to one site over time, the team will conduct interim analysis between observation periods. By writing a site report before the team visits a new site, the team will have the opportunity to revise the fieldwork guide based on what worked and did not work methodologically, include new sensitizing concepts in the subsequent data collection, and prevent a backlog of unexamined data.


Discussion

There are many aspects of clinical informatics that could benefit from RAP, such as evaluating training efforts that support a new functionality or assessing the impact of an intervention on clinical workflow. Although most clinical informatics evaluations are quantitative, mixed-methods and qualitative evaluations are gaining importance in the field (2;42;43). Similarly, the field of evaluation studies has expanded beyond holding quasi-experimental designs and quantifiable outcomes as the “gold standard” (44;45). RAP, however, is a new methodology for health informatics. It contrasts with other ethnographic research in health informatics in the same way it differs from traditional ethnography: less in-depth engagement with participants, rapid data-collection and analysis cycles, and an evaluative focus. It is perhaps too early to assess the method’s value. Nevertheless, our RAP evaluation of CDS implementations in the United States has generated findings that are relevant for clinicians, health system administrators, clinical content vendors, and electronic medical record (EMR) vendors. We have learned that clinical decision support is more than the clinical logic; it requires accurate, reliable data, good governance, a team of highly skilled and committed staff who act as a “bridge” between medicine and informatics, and extensive communication among all involved parties (46). The method also has limitations: it is not appropriate for generating quantitative results (20), it does not produce results equivalent to traditional ethnography, and it can be expensive (33;47).

To conclude this overview, we wish to highlight several “take away” messages to assure appropriate and successful RAP evaluations.

RAP requires a cohesive team that can work under stressful conditions. Because the time in the field is short, researchers tend to feel stress from trying to gather data so intensively. RAP is extremely iterative and nonlinear and may seem chaotic to researchers who are more accustomed to formal, quantitative research approaches. Data collection tools are modified to adjust to preliminary findings, even during the course of one day’s fieldwork. Data analysis begins with extensive team debriefing sessions to identify emerging themes, pose outstanding questions, and decide what the data analysis should focus on. Data are reviewed and summarized in a way that maximizes each team member’s exposure to the dataset and encourages disputes about tentative conclusions, so that the team develops a robust understanding of the site.

Although the process we have described was followed by a team of outside researchers receiving federal research funds, it could easily be adapted for internal use. The same basic principles apply: team members should be interdisciplinary, should have qualitative research skills, and should feel comfortable disagreeing with each other. However, RAP is not equivalent to community-based participatory research (CBPR) or action research. These methods, currently gaining in popularity, also focus on collaboration, cultural competency, and practical results (48–50). RAP has some methodological overlap with these traditions, including the emphasis on real-time useful outcomes and the inclusion of people from the culture/community on the research team. However, CBPR is extremely time-intensive and does not lend itself to quick turnaround.

A potential pitfall of speeding up the ethnographic data collection and analysis process is that researchers do not have the time to gain an in-depth understanding of the research setting or to build rapport with participants that will allow negative perceptions, embarrassing situations, or failures to be disclosed. We have outlined techniques to avoid this bias, such as assembling a RAP team that already has extensive knowledge about the phenomena or cultures being studied, seeking out the outliers and “curmudgeons” who will share information about failures and problems, sampling for diversity of perspectives, and triangulating data from multiple respondents and data collection techniques. Even when these techniques are diligently followed, RAP cannot substitute for the in-depth understanding that is gained by traditional ethnography.


Refinement of RAP has allowed our team of informatics researchers to plan site visits, collect and analyze data, and report results efficiently and on short timelines. Using the materials and techniques presented here, teams of researchers or program evaluators can apply RAP as an efficient and flexible tool for evaluating clinical IT interventions.

Supplementary Material



Acknowledgments

We thank James Beebe for conducting the enlightening RAP workshop that got us started. This work was supported by AHRQ contract #HHSA290200810010, NLM Research Grant R56-LM006942, and Training Grant 2-T15-LM007088. AHRQ and NLM had no role in the design or execution of this study, nor in the decision to publish. Special thanks go to Emily Campbell, RN, PhD, Kenneth Guappone, MD, PhD, and James Carpenter, RPh, MS, for their many contributions to POET, and to Jill Pope for her editorial assistance.

Reference List

1. Blumenthal D. Stimulating the adoption of health information technology. N Engl J Med. 2009 Apr 9;360(15):1477–9. [PubMed]
2. Kaplan B, Shaw NT. Future directions in evaluation research: people, organizational, and social issues. Methods Inf Med. 2004;43(3):215–31. [PubMed]
3. Ash J, Stavri P, Kuperman G. A consensus statement on considerations for a successful CPOE implementation. J Am Med Inform Assoc. 2003 May;10(3):229–34. [PMC free article] [PubMed]
4. Kaplan B, Harris-Salamone KD. Health IT success and failure: recommendations from literature and an AMIA workshop. J Am Med Inform Assoc. 2009 May;16(3):291–9. [PMC free article] [PubMed]
5. Hunt DL, Haynes RB, Hanna SE, Smith K. Effects of computer-based clinical decision support systems on physician performance and patient outcomes: a systematic review. JAMA. 1998 Oct 21;280(15):1339–46. [PubMed]
6. Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ. 2005 Apr 2;330(7494):765. [PMC free article] [PubMed]
7. Garg AX, Adhikari NK, McDonald H, Rosas-Arellano MP, Devereaux PJ, Beyene J, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA. 2005 Mar 9;293(10):1223–38. [PubMed]
8. Kaplan B. Evaluating informatics applications--some alternative approaches: theory, social interactionism, and call for methodological pluralism. Int J Med Inform. 2001 Nov;64(1):39–56. [PubMed]
9. Heathfield H, Pitty D, Hanka R. Evaluating information technology in health care: barriers and challenges. BMJ. 1998 Jun 27;316(7149):1959–61. [PMC free article] [PubMed]
10. McNall M, Foster-Fishman PG. Methods of Rapid Evaluation, Assessment and Appraisal. American Journal of Evaluation. 2007;28(2):151–68.
11. Patton MQ. Utilization-Focused Evaluation. 3. Thousand Oaks CA: Sage; 1997.
12. Guba EG, Lincoln YS. Fourth generation evaluation. Newbury Park, CA: Sage; 1989.
13. Blumenthal D. Launching HITECH. N Engl J Med. 2010 Feb 4;362(5):382–5. [PubMed]
14. Butler MO. Translating Evaluation Anthropology. NAPA Bulletin. 2005;(24):17–30.
15. DeWalt KM, DeWalt BR, Wayland CB. Participant observation. In: Bernard HR, editor. Handbook of Methods in Cultural Anthropology. Walnut Creek, CA: AltaMira Press; 1998. pp. 259–300.
16. Creswell JW. Qualitative Inquiry & Research Design: Choosing among Five Approaches. Thousand Oaks CA: Sage Publications; 2007.
17. Han YY, Carcillo JA, Venkataraman ST, Clark RS, Watson RS, Nguyen TC, et al. Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system. Pediatrics. 2005 Dec;116(6):1506–12. [PubMed]
18. Sittig DF, Ash JS. Clinical information systems: Overcoming adverse consequences. Sudbury MA: 2010.
19. Ash J, Sittig D, McMullen C, Guappone K, Dykstra R, Carpenter J. A rapid assessment process for clinical informatics interventions. AMIA Annu Symp Proc. 2008:26–30. [PMC free article] [PubMed]
20. Beebe J. Rapid Assessment Process: An Introduction. Walnut Creek, CA: AltaMira; 2001.
21. Handwerker WP. Quick Ethnography. Walnut Creek CA: AltaMira Press; 2001.
22. Talmon J, Ammenwerth E, Brender J, de Keizer N, Nykanen P, Rigby M. STARE-HI--Statement on reporting of evaluation studies in Health Informatics. Int J Med Inform. 2009 Jan;78(1):1–9. [PubMed]
23. Scrimshaw SC, Hurtado E. Rapid Assessment Procedures for Nutrition and Primary Health Care: Anthropological Approaches to Improving Programme Effectiveness. Los Angeles, CA: 1987.
24. Needle RH, Tsukamoto T, Goosby E, Bates C, von Zinkernagel D. Crisis Response Teams and Communities Combat HIV/AIDS in Racial and Ethnic Minority Populations: A guide for conducting community-based Rapid Assessment, Rapid Response and Evaluation. Washington DC: 2000.
25. Manderson L, Aaby P. Rapid Anthropological Procedures (RAP) and their applicability to tropical diseases. Health Policy Planning. 1992;(7):46–55.
26. Trotter RT, Goosby E, Needle RH, Bates C, Singer M. A methodological model for rapid assessment, response, and evaluation: The RARE Program in public health. Field Methods. 2001;13(2):137–59.
27. Trotter RT, Needle RH. RARE Field Team Principal Investigator Guide. Washington DC: 2000.
28. Butler MO, Linstone HA. Decision making for technology executives: Using multiple perspectives to improve performance. Boston: Artech House; 1999.
29. Manderson L, Aaby P. An epidemic in the field? Rapid assessment procedures and health assessment. Soc Sci Med. 1992;35(7):839–50. [PubMed]
30. O’Cathain A, Murphy E, Nicholl J. Three techniques for integrating data in mixed methods studies. BMJ. 2010;341:c4587. [PubMed]
31. Greene J, McClintock CM. Triangulation in evaluation: design and analysis issues. Evaluation Review. 1985;9(5):523–45.
32. Bates DW, Kuperman GJ, Wang S, Gandhi T, Kittler A, Volk L, et al. Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. J Am Med Inform Assoc. 2003 Nov;10(6):523–30. [PMC free article] [PubMed]
33. Utarini A, Winkvist A, Pelto GH. Appraising Studies in Health Using Rapid Assessment Procedures (RAP): Eleven Critical Criteria. Human Organization. 2001;60(4):390–400.
34. Thomas MD, Blacksmith J, Reno J. Utilizing insider-outsider research teams in qualitative research. Qual Health Res. 2000 Nov;10(6):819–28. [PubMed]
35. Fernald DH, Duclos CW. Enhance your team-based qualitative research. Ann Fam Med. 2005 Jul;3(4):360–4. [PubMed]
36. Patton MQ. Qualitative Evaluation and Research Methods. 3. Thousand Oaks, CA: Sage; 2002.
37. Bernard HR, Ryan GW. Text analysis: qualitative and quantitative methods. In: Bernard HR, editor. Handbook of Methods in Cultural Anthropology. Walnut Creek, CA: AltaMira Press; 1998.
38. Miles MB, Huberman AM. Qualitative Data Analysis. 2. Thousand Oaks, CA: Sage; 1994.
39. van den Hoonaard WC. Analytical Field Research: Working with Sensitizing Concepts. Thousand Oaks, CA: Sage; 1997.
40. Bernard HR, Ryan GW. Analyzing Qualitative Data: Systematic Approaches. Thousand Oaks CA: Sage Publications; 2010.
41. Spradley JP. The Ethnographic Interview. Fort Worth TX: Harcourt Brace Jovanovich; 1979.
42. Khajouei R, Jaspers MW. The impact of CPOE medication systems’ design aspects on usability, workflow and medication orders: a systematic review. Methods Inf Med. 2010;49(1):3–19. [PubMed]
43. Ammenwerth E, de Keizer N. An inventory of evaluation studies of information technology in health care: trends in evaluation research 1982–2002. Methods Inf Med. 2005;44(1):44–56. [PubMed]
44. Kaplan B. Evaluating informatics applications--clinical decision support systems literature review. Int J Med Inform. 2001 Nov;64(1):15–37. [PubMed]
45. Patton MQ. The View from Evaluation. NAPA Bulletin. 2005;24:31–40.
46. Ash JS, Sittig DF, Dykstra R, Wright A, McMullen C, Richardson J, et al. Identifying best practices for clinical decision support and knowledge management in the field. Stud Health Technol Inform. 2010;160:806–10. [PubMed]
47. Schensul JJ, Schensul SL. Ethnographic evaluation of AIDS prevention programs: better data for better programs. New Directions for Program Evaluation. 1990;(46):51–62.
48. Checkland P, Holwell S. Action Research: Its Nature and Validity. Systemic Practice and Action Research. 1998;11(1):9–21.
49. Reason P, Bradbury H. Handbook of Action Research. London UK: Sage Publications; 2001.
50. Israel BA, Schulz AJ, Parker EA, Becker AB. Review of community-based research: assessing partnership approaches to improve public health. Annu Rev Public Health. 1998;19:173–202. [PubMed]