We conducted eight facilitated focus groups (six with consumers and two with professionals; total N = 82) from November 2006 to January 2007. The overarching purpose was to gain insights from group inquiry into how future technologies might provide the information and functionality that diverse consumers and health professionals will need to improve the management of health and illness. In that context, we asked participants to address issues ranging from health-related decisions made during a normal day to help in interacting with the myriad components of the US health-care system.
A professional facilitator experienced in health care and in working with disadvantaged populations led five of the eight groups, and one of the authors (TD) led the remaining three. The research team developed a discussion guide with input from the facilitator. We audiotaped and transcribed each group session. The Institutional Review Board (IRB) of Beth Israel Deaconess Medical Center approved a protocol for the protection of human subjects; we presented all participants with the approved informed consent document and encouraged them to ask questions about the study before signing. At the end of each group discussion, we paid participants $75–$200, depending on local norms. Some professionals waived the honorarium.
Screening and Recruiting

We selected four cities (Boston, MA; Portland, ME; Tampa, FL; and Denver, CO) because they were geographically dispersed, offered the investigators access to individuals from urban and rural locations, had considerable ethnic and cultural diversity, and were accessible to the primary focus group contractor.
Drawing from a diverse population of healthy and chronically ill adults, we established eight focus groups. We sought relative homogeneity within groups (e.g., individuals with chronic illness, healthy individuals, technologically savvy college students, and caregivers), establishing each group essentially as an “n of one,” while seeking heterogeneity across groups. The six consumer focus groups collectively were constituted to include diversity of income, ethnicity, race, age, geography, and urban/rural/suburban settings.
A telephone questionnaire (available from the first author) collected demographic information and assessed consumers’ suitability for a group. We targeted consumers who were concerned about health matters, were less than completely satisfied with services and information currently available to manage their health, and used the Internet at least once a week for at least four different transaction types (e.g., banking, e-mail, and travel reservations). The interview included an open-ended question designed to help identify participants who would engage actively in conversation (“If you could invite anyone to dinner, whether living, dead, or imaginary, who would it be, and what would you talk about?”). To ensure diversity, each group had additional criteria related to age, geography, health status, ethnicity, and/or education (Table ). In four groups, a criterion was the presence of chronic illness, and participants self-identified one or more conditions on a list of 31 chronic diseases compiled by the investigators.
Criteria for Recruiting Consumers into Six Focus Groups
Consumer candidates were identified through local phone listings, responses to IRB-approved recruitment brochures, and contractors’ databases containing names of people known to the company from previous recruitment efforts. Our goal was to recruit equal numbers of men and women. All consumer groups were held at focus group facilities.
We assembled two groups of health-care professionals from different areas of the country (Boston and Denver) to elicit their perspectives on the role of health information technology (HIT) and to compare those opinions with the consumers’ perspectives. We believed the views of the professionals could provide a useful reference for unanticipated consumer ideas. These participants were drawn from the investigators’ professional networks; we targeted general internists, nurses, social scientists, and entrepreneurs with strong interest in HIT. These groups were held at a Boston medical center conference room and in a Denver hotel meeting room convenient for the participants.
We developed a discussion guide (available from the first author) with three parts, addressing: how participants currently organize (for clinicians, how they believe their patients organize) the information they need to manage their health and care; how they would ideally manage and use such information; and how technologies could address the gaps between the two. We asked our groups to identify generalizable principles not limited to disease-specific needs and used concept phrases to stimulate discussion, such as “being able to offer corrections to your medical record” and “being able to chat online with a doctor or nurse whenever you need to.” We used well-tested querying techniques (e.g., “how to…” and “I wish…”) to tap into participants’ creativity. We used a modified nominal group technique to encourage everyone to contribute, but we did not ask group members to vote or reach consensus. In the consumer groups, we discussed technology in lay functional terms rather than using technical names (e.g., “your own electronic medical record with information from your records at doctors’ offices and hospitals, plus information added by you,” rather than “personal health record”). For the professional groups, we did not define the different technology resources; however, we generally adhered to the definitions presented in the taxonomy recommended by the National Alliance for Health Information Technology.15
We performed member checks in the course of discussion, but did not return conclusions to the groups for verification afterwards. At least one of the investigators observed each group.
We employed an iterative process based on grounded theory to guide the overall evaluation and interpretation of the qualitative data. At the end of each focus group, the facilitator and investigators noted their impressions of significant messages emerging from the group. Next, the audiotapes were transcribed by a well-established transcription company and checked for accuracy by one of the investigators observing the group. We analyzed the ethnographic data with the help of NVivo software, in conjunction with iterative rounds of analysis using immersion-crystallization techniques.16,17
Immersion-crystallization is a process requiring “prolonged immersion into and experience of the text and then emerging, after reflection, with an intuitive crystallization of the text.”18
Next, the four investigators independently reviewed printed transcripts of each focus group’s audio recording for overall comprehension and to identify coding categories. Each investigator formed categories separately. We then met on several occasions to construct an overall structure of coding categories. In those meetings, we listened to selected portions of the audiotaped recordings to clarify questions and resolve differences of opinion among the authors about key points and themes. To document support for key themes, as a group we used keyword searches and text review to identify specific quotes and references in the transcripts.
Collectively, the team then developed a codebook to represent significant categories of data (e.g., access vs. privacy of personal health records). One of the investigators (LL) then used the codebook to review the transcripts in their entirety. In this study, coding was not the end point of the analysis. Rather, using an anthropological approach, we treated the codes as categorical place-finders, facilitating the next step in data analysis.
After text review and coding decisions, the team incorporated the coded texts into immersion-crystallization data analysis sessions. Over the course of five group meetings, the team read and discussed the transcripts and coded reports to identify significant categories, themes, and meta-themes arising from the data. We then moved toward developing interpretations of the data, which led to searches through the data set for alternative interpretations before we made final decisions about how to report and discuss the findings.