Patient. Author manuscript; available in PMC 2010 December 1.
Published in final edited form as:
Patient. 2009 December 1; 2(1): 269–282.
PMCID: PMC2874905
NIHMSID: NIHMS157286

Using Qualitative Research to Inform the Development of a Comprehensive Outcomes Assessment for Asthma

Abstract

Background

Qualitative research can inform the development of asthma patient-reported outcome (PRO) measures and user-friendly technologies through defining measurement constructs, identifying potential limitations in measurement and sources of response error, and evaluating usability.

Objective

The goal of the current study was to inform the development of a comprehensive asthma PRO assessment with input from patients and clinical experts.

Method

Adults with self-reported asthma, recruited from a 3,000-member New England-area research panel, participated in either one of three focus groups (N=21) or individual cognitive item debriefing interviews (N=20) to discuss how asthma impacts their health-related quality of life (HRQOL) and to provide feedback on a preliminary set of asthma impact survey items and a prototype patient report. Focus groups and cognitive interviews were conducted using traditional qualitative research techniques (e.g., semi-structured interview guide, probing, and think-aloud techniques). An Expert Advisory Panel (N=12) including asthma clinical specialists and measurement professionals was convened to review results from the focus group and cognitive interview studies and make recommendations for final survey and report development.

Results

Domains of health impacted by asthma included physical (recreation, play, competitive sports, and exercise), social (activities, family relationships), emotional (anger, upset, frustration, anxiety, worry), sleep, role (recreational/leisure activities; work), and sexual functioning. Most items in the impact survey were easily understood, covered important content, and included relevant response options. Items with contradictory examples and multiple concepts were difficult to comprehend. Suggestions were made to expand survey content by including additional items on physical and sexual functioning, sleep, self-consciousness, stigma, and finances. Reports were considered useful and participants saw value in sharing the results with their doctor. Graphic presentation of scores was not always understood; participants preferred tabular presentation of score levels with associated interpretative text. Display of inverse scores for different measures (higher scores equaling better health on one scale and worse health on another) shown on a single page was confusing. The score history section of the report was seen as helpful for monitoring progress over time, particularly for those recently diagnosed with asthma.

Expert panelists agreed that displaying inverse scores in a single summary report may be confusing to patients and providers. They also stressed the importance of comprehensive interpretation guidelines for patients, with an emphasis on what they should do next based on scores. Panelists made recommendations for provider and aggregate-level reports (e.g., “red flags” to indicate significant score changes or cut-points of significance; identification of subgroups that have scored poorly or recently gotten worse).

Conclusion

Incorporating input from patients, clinicians, and measurement experts in the early stages of product development should improve the construct validity of this PRO measure and enhance its practical application in healthcare.

INTRODUCTION

Increasingly, the success of healthcare is being defined in terms of its ability to improve patients’ functional status and well-being, constructs that are often measured using patient self-report.[1–3] Use of self-report measures may enhance the clinical encounter, since patient perceptions of health and illness, treatment satisfaction, and self-management skills have an influence on treatment adherence and preventative health behaviors.[4–6]

In recent years, there have been innovations in technology designed to support patients and their health care providers in achieving positive health outcomes, including the development of electronic systems for data capture, information management, health record integration, and reporting.[7–11] Additionally, advancements in psychometric methods, such as the use of Item Response Theory (IRT) in test development, offer substantive practical advantages over classical methods of test construction.[12] PRO measures can be improved using IRT, and the competing objectives of practicality and precision over a wide range of severity levels can be achieved using computerized adaptive testing (CAT).[13–19] During the last decade, CAT applications have been increasingly used in the assessment of health outcomes.[15–30] These applications require a large set of items (a bank) in each functional area; items that consistently scale along a dimension of low to high functional proficiency; and rules guiding starting, stopping, and scoring procedures.
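
To make the mechanics of such an application concrete, the sketch below outlines a minimal CAT loop under a two-parameter logistic (2PL) IRT model: the most informative unused item is administered, the trait estimate is updated, and testing stops once a precision target or item budget is reached. This is an illustrative simplification, not the DYNHA engine; the item parameters, stopping threshold, and function names are hypothetical.

import math
import random

# Hypothetical 2PL item bank: each item has a discrimination (a) and a difficulty (b).
ITEM_BANK = [{"a": random.uniform(0.8, 2.0), "b": random.uniform(-2.0, 2.0)} for _ in range(37)]

def prob_endorse(theta, item):
    # 2PL probability of endorsing the item at trait level theta.
    return 1.0 / (1.0 + math.exp(-item["a"] * (theta - item["b"])))

def information(theta, item):
    # Fisher information of a 2PL item at theta.
    p = prob_endorse(theta, item)
    return item["a"] ** 2 * p * (1.0 - p)

def estimate_theta(responses):
    # Crude maximum-likelihood estimate of theta over a coarse grid.
    grid = [g / 10.0 for g in range(-40, 41)]
    def loglik(theta):
        total = 0.0
        for item, u in responses:
            p = prob_endorse(theta, item)
            total += math.log(p) if u == 1 else math.log(1.0 - p)
        return total
    return max(grid, key=loglik)

def run_cat(answer_fn, start_theta=0.0, se_stop=0.32, max_items=10):
    # Administer items until the score is precise enough or the item budget is spent.
    theta, remaining, responses = start_theta, list(ITEM_BANK), []
    while remaining and len(responses) < max_items:
        # Selection rule: pick the unused item that is most informative at the current estimate.
        item = max(remaining, key=lambda it: information(theta, it))
        remaining.remove(item)
        responses.append((item, answer_fn(item)))
        theta = estimate_theta(responses)  # scoring rule
        test_info = sum(information(theta, it) for it, _ in responses)
        if 1.0 / math.sqrt(test_info) < se_stop:  # stopping rule
            break
    return theta, len(responses)

# Example: a simulated respondent whose true trait level is 1.0.
true_theta = 1.0
answer = lambda item: 1 if random.random() < prob_endorse(true_theta, item) else 0
estimate, n_administered = run_cat(answer)
print(f"Estimated score {estimate:.1f} after {n_administered} items")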

Qualitative research can inform the development of PRO measures and user-friendly technologies.[31–36] In 2006, the US Food and Drug Administration published draft guidance for the development of PRO measures, recognizing that survey development “is incomplete without patient input.”[37] For example, focus groups can help researchers identify and define constructs that are relevant from the patient point-of-view, detect possible limitations in a survey, and generate ideas for hypothesis testing. Cognitive item testing can evaluate sources of response error in the survey questionnaire that may affect results (e.g., problems that will reduce response reliability or change the meaning of responses). Usability studies can evaluate the survey and the technology platform upon which it is delivered (interface design, navigational elements, and user preferences). Such research enables the developers of PRO measures to evaluate the application of basic design principles.[38–40]

Numerous studies have used qualitative research to understand risk factors associated with asthma, the impact of asthma on quality of life, beliefs about asthma and perceptions of treatment, and how health technologies and treatment interventions may improve health outcomes for asthma patients.[41–53]

Objective

Qualitative research was used to inform the development of a comprehensive set of PRO measures, collectively referred to as the ASTHMA-CAT. The ASTHMA-CAT assessment combines asthma impact, asthma control, and generic health-related quality of life (HRQOL) measures in a single administration, yielding patient, provider, and aggregate feedback reports.

The DYNHA Asthma Impact Survey (DYNHA® AIS) is the CAT component of the ASTHMA-CAT assessment.[54,55] Its development was modeled after the DYNHA Headache Impact Test (DYNHA HIT).[15] DYNHA AIS draws on a 37-item impact bank constructed to develop disease-specific CATs.[23]

A prototype DYNHA AIS was piloted in a disease management population and was found to reduce response burden while providing scores as precise as those from the full AIS item bank. Static (full bank) and dynamic AIS versions discriminated between respondents with differing levels of asthma severity, and mean static and dynamic AIS scores were equivalent within severity categories. However, findings also indicated the need for wider coverage of the functional impact of asthma, including items with greater specificity in content, in order to precisely assess asthma patients at varying levels of severity and to capture changes in impact due to treatment.[56–58]

The goal of the current study was to construct and evaluate an expanded AIS item bank by: (1) conducting focus groups with asthma patients to confirm the importance of hypothesized domains and gaps in content coverage, and to collect feedback on the important components necessary for inclusion in a patient feedback report; (2) conducting cognitive interviews with asthma patients to evaluate sources of response error in the survey that may affect results (e.g., problems that will reduce response reliability or change the meaning of responses); and (3) convening an Expert Advisory Panel of measurement professionals and clinical experts in asthma to advise survey and report development efforts.

METHOD

Focus Groups

Participants

Participants (N=21) were sampled from a research panel database including approximately 3,000 members from the southern New England area. The database was developed by American Institutes for Research (AIR) over a period of several years and continues to be updated through active recruitment on websites, listservs, and discussion boards; local newspapers; postings at businesses, community centers, libraries, hospitals, and other public venues; and referrals from research partners and past study participants. Sampling criteria included adults (ages 18 years and older) with self-reported asthma who were able to read and write in English. Current smokers and those reporting current depression, congestive heart failure, chronic bronchitis, chronic obstructive pulmonary disease, emphysema, pneumonia, or respiratory conditions other than asthma were excluded from the study. AIR emailed potential panelists who met at least one of the study criteria and informed them of the study. Individuals interested in study participation contacted the study coordinator for screening. To ensure adequate representation across control levels, the Asthma Control Test (ACT) was fielded in the screening survey. Seventy individuals were screened before target enrollment was met.

Instruments

The following measures and materials were used in the focus groups:

ASTHMA-CAT Measures

Asthma Impact Survey (AIS) is a 37-item survey measuring the impact of asthma on health and serves as the item bank for the DYNHA AIS survey, a computerized adaptive test (CAT).[23,24]

Asthma Control Test (ACT) is a 5-item survey measuring asthma control, administered solely as a means to explain origination of ACT scores in the patient feedback report.[59,60]

SF-8 Health Survey (SF-8) is an 8-item generic survey measuring functional health and well-being, administered solely as a means to explain origination of SF-8 scores in the patient feedback report.[61–63]

Background Information Survey is a 13-item module assessing participant characteristics.

Chronic Conditions Checklist is a 2-item module assessing presence of co-existing conditions: “Has a doctor ever told you that you had any of the following conditions?”, and “Do you now have any of the following conditions?”

Materials

Patient Feedback Reports

Five different patient feedback score report options were shown. Figure 1 presents a sample summary report for patients, reporting asthma-specific and generic scores, interpretation, progress, and resource information.

Figure 1
Sample Patient Feedback Report

Procedure

Three focus groups, held at a Boston-area research facility, were led by the same moderator, a PhD-level researcher with nearly 20 years of experience in measurement and evaluation. Participants signed informed consent documents prior to the start of their focus group. The moderator used a semi-structured interview guide, and each session included structured discussion to define health-related quality of life (HRQOL); identify ways asthma impacts HRQOL; rank order HRQOL domains most impacted by asthma; review AIS items for content, meaning, clarity, and layout; identify content areas not covered in the AIS; suggest improvements to the survey and report content; and recommend possible uses for the survey and report. Through discussion, the group identified ways asthma impacts quality of life, and participants then independently rank ordered their top five impacts. A standard list of probes was developed for the discussion, but used only when needed. Each 2-hour session was videotaped and transcribed. Participants received a $100 incentive for taking part in the study.

Analyses

Recordings from each focus group session were transcribed, and two trained researchers independently reviewed the transcripts in their entirety. Data from the activity to identify the top ways asthma impacts HRQOL were tallied. Content analyses were performed manually. Unique and repetitive responses to all questions were noted. Following a grounded theory approach,[64] themes emerged from the data (emic approach). After reading and reviewing the data multiple times to look for similarities and differences, the two researchers categorized themes using open coding. Conflicting opinions were resolved through discussion until consensus was reached.

Cognitive Interviews

Participants

A sample of participants (N=20, non-overlapping with the focus group sample) was drawn from a research panel database including approximately 3,000 members from the southern New England area. Recruitment and sampling criteria were identical to those in the focus group study. Sixty individuals were screened before meeting target enrollment.

Instruments

ASTHMA-CAT Measures

Measures included the AIS, Chronic Conditions Checklist, and 36 experimental asthma impact bank items constructed based on focus group results (covering general health, sleep, finances, stigma, and physical, mental, role and social functioning).

The remaining ASTHMA-CAT measures were administered to participants solely to explain the origin of scores in the patient feedback report.

Materials

Patient Feedback Reports

Score reports were edited based on focus group results, and a revised version was shared with cognitive test participants for their input.

Procedure

Individual semi-structured cognitive interviews were conducted at a Boston-area research facility by one of two AIR researchers (one PhD-level, one an MS candidate). Participants signed informed consent documents prior to the start of each session, which was recorded in its entirety on audiotape. The interviewer also recorded verbatim notes and responses to all questions in an electronic database.

Participants were asked to carefully review each item and to think aloud as they formulated their responses. This allowed data on the cognitive processes respondents used to comprehend information and formulate their responses to be recorded.[65,66]

Participants were then asked to review the report options, and provide feedback on the clarity, layout, organization, and perceived usefulness of the report.

Participants received a $100 incentive for taking part in the study.

Analyses

Recorded feedback from each cognitive interview was used to evaluate cognitive processes, item interpretation and comprehension, memory recall of relevant information, decision processes, and response processes.[65,66] Each item was evaluated for potential sources of response error (e.g., problems that may reduce response reliability or change the meaning of responses). The analytic focus was on identifying possible problems and frequency with which they occurred.

Expert Advisory Panel

An Expert Advisory Panel of asthma clinical specialists (n=7, including 4 MD, 1 MD-PhD, 2 PhD-level) and measurement professionals (n=5, including 4 PhD and 1 MA-level) was convened to review and advise survey and report development efforts. The panelists were selected from health management organizations, academic medical centers, and private practice to represent diverse interests and expertise in asthma clinical practice and research, disease management, patient advocacy, and HRQOL measurement (including three authors on this paper). Panelists were paid for their time and travel, and the meeting was facilitated by an expert moderator from the American Institutes for Research.

Following the focus group and cognitive debriefing studies, a full-day meeting of the panel was held to solicit feedback on content areas most impacted by asthma, content additions to the AIS bank, and feedback reports.

RESULTS

Focus Group

Twenty-one adults from Massachusetts with self-reported asthma, who varied in terms of age (range 24–59 years), gender (62% female), race/ethnicity (76% White, 19% African American, 5% Asian), and asthma control (38% controlled, 29% somewhat controlled, 33% uncontrolled), participated in one of three focus group sessions. Table I summarizes sample characteristics. Five individuals participated in group 1, and eight participated in each of groups 2 and 3.

Table I
Focus Group and Cognitive Test Sample Characteristics

Defining Quality of Life

Figure 2 summarizes how participants defined and described quality of life. In particular, they associated good quality of life with the ability to engage in and enjoy exercise, physical activity, and recreational, travel and leisure activities. Good quality of life was also associated with happiness, independence, and health. Others emphasized the importance of personal time, and time spent with family and loved ones.

Figure 2
Patient-defined Quality of Life

Impact of Asthma on Quality of Life

Participants identified several areas impacted by asthma, and primarily focused on physical, social, and emotional aspects of health (see Table II). Participants described limitations in various physical activities, including recreation, play, competitive sports, and exercise. Several noted physiological effects, associating asthma with increased susceptibility to infection and as a factor in escalating illness. Most indicated that exposure to environmental triggers resulted in negative physical, emotional, financial, and, most notably, social consequences. Respondents described negative effects on social activities and family relationships. Interestingly, one participant noted a positive effect on family life when other family members benefited from a household no-smoking rule.

Table II
Sample Comments on Domains of Asthma Impact

When describing emotional impact from asthma, participants cited feelings of anger, upset, frustration, anxiety, and worry. These feelings were often mediated by other functional limitations (e.g., social). The anticipation of a future asthma attack and how this anticipation affects subsequent behavior was a cause for concern for many participants. Those with poorly controlled or uncontrolled asthma were particularly vocal about these issues. Additionally, several respondents discussed the impact of asthma on sleep and resulting fatigue; the effects of asthma treatment and associated burden; limitations on travel and other recreational/leisure activities; financial burden; impact on work and productivity; and other personal consequences.

In rank order, HRQOL domains most impacted by asthma were physical, social, emotional, financial, sleep, and work.

AIS Content

Participants identified favorable characteristics of the AIS survey, instructions, items, and response options, reporting that most items were clear and easy to understand, that underlining emphasized key text, that content covered the important areas, and that response options were appropriate. Participants also noted some problem areas. Due to the seasonal effects of asthma, some preferred a 3-month recall period to the 4-week recall used in the existing bank. Some participants noted difficulty attributing health problems solely to asthma. Items with contradictory examples (e.g., “reading a book or exercising”) were difficult to answer, and participants noted that some items contained repetitive content. Participants suggested that content coverage be expanded to include more items on anxiety, worry, and fear, and that hierarchical probing items be added for greater depth of coverage. Participants favored response options presented in the same direction.

Reports

Participants were interested in feedback reports that showed their progress over time, with sufficient score interpretation, and saw value in sharing results with their doctor. Specifically, participants considered asthma-specific scores (ACT and AIS) to be very useful. Tabular presentation of score ranges and interpretive text was familiar and recognizable, and easier to understand than bar graphs. Participants said the report could be used to track asthma outcomes over time, and this was seen as a particularly useful feature for those newly diagnosed with the condition. Participants indicated that they would share this report with their doctor as a way to communicate about their health, and thought the information provided would add value to the overall clinical encounter.

Participants also identified problems with the report and suggested possible improvements. Bar graphs displaying norm-based scores (T-scores) were difficult for most to interpret. Scores, surveys, and normative comparisons could be improved with better labeling and more thorough explanation. Participants preferred to view their own progress over time, rather than detailed information on normative comparisons. Bold, red font was interpreted by some as bad or severe, while others thought it helped bring attention to important information. It was difficult for participants to match score output with the corresponding survey that had been administered. Participants preferred a focus on asthma-specific rather than generic outcomes, and some were uncomfortable with the emotional health score output. A few participants indicated that the mental health scores and associated interpretation text may make them feel depressed or may set them back.

Cognitive Item Testing

Twenty adults from the Boston area with self-reported asthma, who varied in terms of age (range 18–64 years), gender (65% female), race/ethnicity (80% White, 10% African American, 5% Asian, 5% Middle Eastern or Indian), and asthma control (30% controlled, 35% somewhat controlled, 35% uncontrolled), participated in the one-on-one interview sessions (see Table I).

AIS Content

Participants indicated that most items and instructions were worded clearly, easily understood, and included a relevant recall period. Instructions and items were “easy” or “very easy” to understand. However, comprehension was reduced when items contained multiple concepts (e.g., participants reported that they would respond differently depending on whether asked about work, school, or daily activities), vague terms, words with dual meaning, or perceived irrelevant content (e.g., asthmatics might be asked “How often did you sit down and rest”, but not “…lie down and rest” since lying down often increases distress). Interpretation improved with simplified language and when specific examples were provided in the item stem. Participants suggested it may be important to include a “not applicable” response option, particularly for items with activities not common to all adults (e.g., “…participate in competitive sports?”, “…run a short distance?”, “…playing with children?”). Some participants expected the survey to cover the impact of asthma medications and the impact of specific triggers and/or environments on asthma.

Generally, participants reported no difficulty recalling events over the past 4 weeks, and as they answered items, considered “average asthma impact” over this timeframe. However, for some items, a 4-week recall period seemed a mismatch (e.g., in the case of assessing impact of “cost of asthma treatment”, participants reported that past 4 weeks seemed too short a recall timeframe).

Reports

Similar to the focus group sample, cognitive testing participants were interested in feedback reports that showed their progress over time, with sufficient score interpretation, and saw value in sharing results with their doctor. Most preferred to see a summary report showing scores for all survey modules, with options to learn more (e.g., a hyperlink to additional interpretation). Participants who viewed the feedback summary before the individual survey report modules seemed better able to interpret scores and interpretative information than those who viewed the single reports first.

Replicating prior results, participants had difficulty correctly interpreting scores shown in a graph, and preferred a table of possible score levels with associated interpretative text. Participants said that the survey results could help them communicate better with their doctors, and those newly diagnosed with asthma may find the information particularly useful for self-management. Further, participants reported that the survey would be more useful if they could access it from home on a periodic basis. These respondents also liked the “Your Progress” section of the report (see Figure 1).

Expert Advisory Panel

Panelists provided feedback on content areas most impacted by asthma (see Figure 3), and then reviewed focus group and cognitive testing results. Although several content areas were discussed, the panelists seemed to focus on physical, role, and sleep domains. Panelists also reviewed each item in the developmental AIS item bank, recommending specific revisions to item stem and response options and additional content coverage (e.g., sexual functioning domain).

Figure 3
Impact of Asthma on Quality of Life: The Clinician’s View

Panelists reviewed and provided input on patient, provider, and aggregate feedback reports. One of the main discussion points centered on score/scale direction: the generic survey is scored so that higher scores indicate better health, whereas higher scores on the asthma-specific impact survey indicate worse health. Panelists recommended that all scales shown in a combined report be scored in the same direction. For patient reports, the focus was on ensuring efficient yet comprehensive interpretation guidelines, with more emphasis on the “What You Should Do” section (including actionable next steps for self-management). Panelists agreed that progress over time should be shown for as many time points as possible (e.g., a hyperlink to the full score history). They also agreed that the report should inform patients when they should next take the survey, and requested that the report take patient reading level into account.

Panelists suggested that the provider report display item-level responses, so clinicians can understand which items are driving the resulting score. Panelists also suggested that the reports display a “red flag” when scores change substantially or exceed a cut-point of significance. For instance, this system could be used to inform a clinician that a patient screens positive for depression, or to instruct a patient to contact his or her doctor immediately for treatment follow-up.
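
The sketch below illustrates the kind of reporting logic the panel described: re-orienting scales so that higher scores always indicate better health, expressing results as norm-based T-scores (mean 50, SD 10 in the reference population), and raising a “red flag” when a score crosses a cut-point or worsens by a meaningful amount. All score ranges, cut-points, and change thresholds shown are hypothetical placeholders, not published ASTHMA-CAT values.

def reverse_score(score, scale_min=0, scale_max=100):
    # Re-orient a "higher = worse" scale so that higher always means better health.
    return scale_min + scale_max - score

def to_t_score(raw, norm_mean, norm_sd):
    # Norm-based T-score: mean 50, SD 10 in the reference population.
    return 50 + 10 * (raw - norm_mean) / norm_sd

def red_flags(current, previous=None, cut_point=40, min_important_change=5):
    # Flag scores that cross a clinically meaningful cut-point or worsen substantially.
    flags = []
    if current < cut_point:
        flags.append("Score below cut-point: recommend clinical follow-up.")
    if previous is not None and (previous - current) >= min_important_change:
        flags.append("Score dropped since the last assessment: contact your provider.")
    return flags

# Example: a hypothetical impact score reported on a "higher = worse" metric,
# harmonized with a generic HRQOL scale that is already "higher = better".
impact_worse = 72
impact_better = reverse_score(impact_worse)                     # 28 on the common direction
impact_t = to_t_score(impact_better, norm_mean=50, norm_sd=20)  # 39.0
print(red_flags(current=impact_t, previous=45))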

They also suggested that the aggregate report should display distribution of scores, and flag sub-groups (e.g., clinic groups, regional groups, etc.) that have scored poorly, as well as those that have recently gotten worse. Finally, they recommended that the aggregate report highlight which groups are doing better or worse than normative averages.

CONCLUSION

The goal of this study was to incorporate patient and provider perspectives into the development of an asthma outcomes assessment and feedback reports.

Similar to previous work exploring how asthma impacts HRQOL,[67,68] several areas were identified, with a focus on physical (recreation, play, competitive sports, and exercise), social (activities, family relationships), and emotional (anger, upset, frustration, anxiety, worry) functioning; sleep and resulting fatigue; the effects of asthma treatment and associated burden; limitations on travel and other recreational/leisure activities; financial burden; impact on work and productivity; and sexual functioning.

Most items in the AIS survey were clear and easy to understand, covered many of the important content areas, and included relevant response options. Some participants suggested that a 3-month recall period may change how they report asthma impact, due to the seasonal effects of the condition. Since the overall ASTHMA-CAT assessment is intended to be administered at regular intervals to monitor changes in asthma impact and control over time, and since respondents had no difficulties recalling events over the past 4 weeks, the shorter recall period will be retained for future work.

Additionally, participants favored response options presented in the same direction, and some requested an “N/A” (not applicable) option for content areas they were not likely to experience. Future iterations of item bank development will test the inclusion of an N/A response option, and other ways to tailor the assessment (i.e., content selection procedures) will be considered.

Items with contradictory examples and those containing multiple concepts were difficult to comprehend. For example, a single question asked respondents how much asthma limits their performance in housework, work, school, or social activities. Revised AIS bank items have been developed based on detailed findings from the current study. The new bank includes the original items, plus a set of experimental items designed to be more concise and focused in their content coverage (limiting each item to one idea).

Also, suggestions were made to expand AIS content and depth of coverage. In addition to cognition, fatigue, general social support, mental health, and role and social functioning, the revised item bank provides additional coverage for finance, physical functioning, self-consciousness, sleep, stigma, and sexual functioning. This bank, covering new content areas and providing shorter and more specific items, will be evaluated in a forthcoming large-scale item calibration study.

Participants considered the patient feedback reports useful, saw value in sharing the results with their doctor, and thought the information would add value to the clinical encounter. Score interpretation was facilitated by tabular presentation of score levels with associated interpretative text. Some participants reported that the ability to track progress over time would be particularly useful for those recently diagnosed with asthma. In addition, participants reported that the assessment would be especially useful if they could access it from home on a periodic basis, with the goal of monitoring their progress over time.

Expert panelists agreed that displaying scales with varied score direction (i.e., high scores indicating better health on one scale and worse health on another) in a single summary report may be confusing to patients and providers. They also stressed the importance of comprehensive interpretation guidelines for patients with more emphasis on the “What You Should Do” section of the report. Future work will evaluate reporting options to enhance patient self-management.

Panelists requested item-level responses for provider reports and “red flags” to indicate significant score changes or cut-points of significance (i.e., likelihood for depression), information that may be useful for clinical decision-making. Panelists also recommended red flags in aggregate reports to identify subgroups that have scored poorly or recently gotten worse, and indicators to highlight those groups doing better or worse than normative comparisons.

Participants in this study were generally well educated, a potential limitation that should be considered prior to finalizing the tool. Possible ways to address this issue might include application of readability statistics, heuristic evaluation, and/or validation studies with asthma sub-groups.

Although qualitative research often relies on small participant samples, it enables researchers to gather detailed information useful for product conceptualization, design, and implementation.[69–71] In this study, patient-centered research helped to identify gaps in measurement, possible sources of survey response error, areas for improvement in survey and report development, and new hypotheses for future testing that may not have been discovered through traditional quantitative methods.

Incorporating input from patients, clinicians, and measurement experts in the early stages of product development should improve the construct validity of this PRO measure and enhance its practical application in healthcare.

Acknowledgments

This research was supported in part by an NIH-sponsored grant (National Heart, Lung, and Blood Institute, #2R44HL078252-02) and QualityMetric Incorporated from its own research funds. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Heart, Lung, and Blood Institute or the National Institutes of Health. Dr. Turner-Bowker, Dr. Saris-Baglama, and Mr. DeRosa are employed by QualityMetric Incorporated. The focus groups and cognitive interviews were conducted by Dr. Paulsen and Mr. Bransfield while they were employed at American Institutes for Research under a subcontract from QualityMetric Incorporated.

References

1. DeWalt DA, Revicki DA. National Quality Measures Clearinghouse. Expert commentary: Importance of patient-reported outcomes for quality improvement [Accessed 2008 Nov 24]. [online]. Available from URL: http://www.qualitymeasures.ahrq.gov/resources/commentary.aspx?file=PROMIS.inc/
2. Agency for Healthcare Research and Quality (AHRQ). Mission & budget: Annual highlights 2007 [Accessed 2008 Nov 24]. [online]. Available from URL: http://www.ahrq.gov/about/highlt07.htm/
3. U.S. Food and Drug Administration. The importance of patient-reported outcomes: It’s all about the patients [Accessed 2008 Nov 24]. [online]. Available from URL: http://www.fda.gov/fdac/features/2006/606_patients.html/
4. Schmittdiel J, Mosen DM, Glasgow RE, Hibbard J, Remmers C, Bellows J. Patient assessment of chronic illness care (PACIC) and improved patient-centered outcomes for chronic conditions. J Gen Intern Med. 2008 Jan;23(1):77–80.
5. Mosen DM, Schmittdiel J, Hibbard J, Sobel D, Remmers C, Bellows J. Is patient activation associated with outcomes of care for adults with chronic conditions? J Ambul Care Manage. 2007 Jan-Mar;30(1):21–9.
6. Baker DW, Asch SM, Keesey JW, Brown JA, Chan KS, Joyce G, Keeler EB. Differences in education, knowledge, self-management activities, and health outcomes for patients with heart failure cared for under the chronic disease model: The improving chronic illness care evaluation. J Card Fail. 2005 Aug;11(6):405–13.
7. Uslu AM, Stausberg J. Value of the electronic patient record: An analysis of the literature. J Biomed Inform. 2008 Aug;41(4):675–82.
8. Chaudhry B, Wang J, Wu S, Maglione M, Mojica W, Roth E, Morton SC, Shekelle PG. Systematic review: Impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006 May 16;144(10):742–52.
9. Bussey-Smith KL, Rossen RD. A systematic review of randomized control trials evaluating the effectiveness of interactive computerized asthma patient education programs. Ann Allergy Asthma Immunol. 2007 Jun;98(6):507–16.
10. Fiscella K, Geiger HJ. Health information technology and quality improvement for community health centers. Health Aff (Millwood). 2006 Mar-Apr;25(2):405–12.
11. Bu D, Pan E, Walker J, Adler-Milstein J, Kendrick D, Hook JM, Cusack CM, Bates DW, Middleton B. Benefits of information technology-enabled diabetes management. Diabetes Care. 2007 May;30(5):1137–42.
12. Embretson SE, Reise SP. Item response theory for psychologists. Mahwah (NJ): Lawrence Erlbaum Associates; 2000.
13. Chang CH. Patient-reported outcomes measurement and management with innovative methodologies and technologies. Qual Life Res. 2007;16(Suppl 1):157–66.
14. Wainer H, Dorans NJ, Eignor D, et al. Computer-adaptive testing: A primer. Mahwah (NJ): Lawrence Erlbaum Associates; 2000.
15. Ware JE Jr, Bayliss MS. The practical assessment of headache impact using item response theory and computerized adaptive testing. Qual Life Res. 2003;12:887–1012.
16. Bjorner JB, Chang CH, Thissen D, Reeve BB. Developing tailored instruments: Item banking and computerized adaptive assessment. Qual Life Res. 2007;16(Suppl 1):95–108.
17. Chakravarty EF, Bjorner JB, Fries JF. Improving patient reported outcomes using item response theory and computerized adaptive testing. J Rheumatol. 2007 Jun;34(6):1426–31.
18. Cella D, Gershon R, Lai S, Choi S. The future of outcomes measurement: Item banking, tailored short forms, and computerized adaptive assessment. Qual Life Res. 2007;16:133–141.
19. Cook KF, O’Malley KJ, Roddey TS. Dynamic assessment of health outcomes: Time to let the CAT out of the bag? Health Serv Res. 2005 Oct;40(5):1694–1711.
20. Fliege H, Becker J, Walter OB, Bjorner JB, Klapp BF, Rose M. Development of a computer-adaptive test for depression (D-CAT). Qual Life Res. 2005;14:2277–2291.
21. Haley SM, Raczek AE, Coster WJ, Dumas HM, Fragala-Pinkham MA. Assessing mobility in children using a computer adaptive testing version of the pediatric evaluation of disability inventory. Arch Phys Med Rehabil. 2005;86:932–939.
22. Turner-Bowker DM, Raczek AE, Bjorner JB, Saris-Baglama RN, DeRosa M, Anatchkova M, Becker J, Ware JE Jr. Real world applications of dynamic health assessment: Lessons from the development and clinical field testing of five new prototype tools. Proceedings of the 13th Annual Meeting of the International Society for Quality of Life Research; 2006 Oct 14; Lisbon (Portugal).
23. Schwartz C, Welch G, Santiago-Kelley P, Bode R, Sun X. Computerized adaptive testing of diabetes impact: A feasibility study of Hispanics and non-Hispanics in an active clinic population. Qual Life Res. 2006;15(9):1503–1518.
24. Kosinski M, Bjorner JB, Ware JE Jr, Sullivan E, Straus WL. An evaluation of patient-reported outcomes found computerized adaptive testing was efficient in assessing osteoarthritis impact. J Clin Epidemiol. 2006;59:715–723.
25. Petersen MA, Groenvold M, Aaronson N, Fayers P, Sprangers M, Bjorner JB. Multidimensional computerized adaptive testing of the EORTC QLQ-C30: Basic developments and evaluations. Qual Life Res. 2006;15:315–329.
26. Thissen D, Reeve BB, Bjorner JB, Chang CH. Methodological issues for building item banks and computerized adaptive scales. Qual Life Res. 2007;16:109–119.
27. Rose M, Bjorner JB, Becker J, Fries JF, Ware JE. Evaluation of a preliminary physical function item bank supported the expected advantages of the Patient-Reported Outcomes Measurement Information System (PROMIS). J Clin Epidemiol. 2008;61:17–33.
28. Walter OB, Becker J, Bjorner JB, Herbert F, Klapp BF, Rose M. Development and evaluation of a computer adaptive test for ‘Anxiety’ (Anxiety-CAT). Qual Life Res. 2007;16:143–155.
29. Kocalevent RD, Rose M, Becker J, Walter OB, Fliege H, Bjorner JB, Kleiber D, Klapp BF. An evaluation of patient-reported outcomes found computerized adaptive testing was efficient in assessing stress perception. J Clin Epidemiol. 2008 Jul 16.
30. Kopec JA, Badii M, McKenna M, Lima VD, Sayre EC, Dvorak M. Computerized adaptive testing in back pain: Validation of the CAT-5D-QOL. Spine. 2008 May 20;33(12):1384–90.
31. Reeve BB, Burke LB, Chiang YP, et al. Enhancing measurement in health outcomes research supported by agencies within the U.S. Department of Health and Human Services. Qual Life Res. 2007;16(Suppl 1):175–86.
32. Perez L, Huang J, Jansky L, Nowinski C, Victorson D, Peterman A, Cella D. Using focus groups to inform the Neuro-QOL measurement tool: Exploring patient-centered, health-related quality of life concepts across neurological conditions. J Neurosci Nurs. 2007 Dec;39(6):342–53.
33. Carbone ET. Use of cognitive interview techniques in the development of nutrition surveys and interactive nutrition messages for low-income populations. J Am Diet Assoc. 2002;102(5):690–696.
34. Collins D. Pretesting survey instruments: An overview of cognitive methods. Medicine. 2003 May;12(3):229–238.
35. Dumas JF, Redish JC. A practical guide to usability testing. Westport (CT): Greenwood Publishing Group Inc; 1993.
36. George M, Apter AJ. Gaining insight into patients’ beliefs using qualitative research methodologies. Curr Opin Allergy Clin Immunol. 2004 Jun;4(3):185–9.
37. Guidance for industry: patient-reported outcome measures: use in medical product development to support marketing claims. Rockville (MD): US Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research, Center for Biologics Evaluation and Research, Center for Devices and Radiological Health; 2006 Feb. Report no.: HFA-305.
38. Dillman DA, Smyth JD, Christian LM. Internet, mail, and mixed-mode surveys: The tailored design method. Hoboken (NJ): John Wiley & Sons, Inc; 2009.
39. Dillman DA, Gertseva A, Mahon-Haft T. Achieving usability in establishment surveys through the application of visual design principles. Journal of Official Statistics. 2005;21(2):183–214.
40. Mullin PA, Lohr KH, Bresnahan BW, McNulty P. Applying cognitive design principles to formatting HRQOL instruments. Medicine. 2000;9(1):13–27.
41. Donald KJ, McBurney H, Browning C. Self management beliefs, attitudes and behaviour of adults with severe life threatening asthma requiring an admission to hospital. Aust Fam Physician. 2005 Mar;34(3):197–200.
42. Goeman DP, Douglass JA. Understanding asthma in older Australians: A qualitative approach. Med J Aust. 2005 Jul 4;183(1 Suppl):S26–S27.
43. Haugbolle LS, Sorensen EW. Drug-related problems in patients with angina pectoris, type 2 diabetes and asthma—interviewing patients at home. Pharm World Sci. 2006 Aug;28(4):239–47.
44. Oregon Department of Human Services. People with asthma and caregivers of children with asthma: Focus group report [Accessed 2008 Oct 20]. [online]. Available from URL: http://www.oregon.gov/DHS/ph/asthma/docs/asthmafocusgroupreport.pdf/
45. Raynor DK, Savage I, Knapp P, Henley J. We are the experts: People with asthma talk about their medicine information needs. Patient Educ Couns. 2004 May;53(2):167–74.
46. Vincent SD, Toelle BG, Aroni RA, Jenkins CR, Reddel HK. Exasperations of asthma: A qualitative study of patient language about worsening asthma. Med J Aust. 2006 May 1;184(9):451–4.
47. Anhoj J, Nielsen L. Quantitative and qualitative usage data of an Internet-based asthma monitoring tool. J Med Internet Res. 2004 Sep 3;6(3):e23.
48. Cleland J, Caldow J, Ryan D. A qualitative study of the attitudes of patients and staff to the use of mobile phone technology for recording and gathering asthma data. J Telemed Telecare. 2007;13(2):85–9.
49. Douglass JA, Goeman DP, Yu EA, Abramson MJ. Asthma 3+ Visit Plan: A qualitative evaluation. Intern Med J. 2005 Aug;35(8):457–62.
50. Hartmann CW, Sciamanna CN, Blanch DC, et al. A website to improve asthma care by suggesting patient questions for physicians: Qualitative analysis of user experiences. J Med Internet Res. 2007;9(1):e3.
51. Pinnock H, Slack R, Pagliari C, Price D, Sheikh A. Understanding the potential role of mobile phone-based monitoring on asthma self-management: Qualitative study. Clin Exp Allergy. 2007 May;37(5):794–802.
52. Smith JR, Mugford M, Holland R, Noble MJ, Harrison BD. Psycho-educational interventions for adults with severe or difficult asthma: A systematic review. J Asthma. 2007 Apr;44(3):219–41.
53. van Baar JD, Joosten H, Car J, et al. Understanding reasons for asthma outpatient (non)-attendance and exploring the role of telephone and e-consulting in facilitating access to care: Exploratory qualitative study. Qual Saf Health Care. 2006 Jun;15(3):191–5.
54. Kosinski M, Turner-Bowker DM, Bayliss MS, et al. Asthma Impact Survey: A user’s guide. Lincoln (RI): QualityMetric Incorporated; 2003.
55. Martin M, Fortin E, Maruish ME. The DYNHA® Asthma Impact Survey: A user’s guide. Lincoln (RI): QualityMetric Incorporated; 2004.
56. Turner-Bowker DM. Computerized adaptive assessment of asthma impact. Phase I final progress report, SBIR grant #1 R43 HL078252-01. Lincoln (RI): QualityMetric Incorporated; 2005.
57. Turner-Bowker DM, Saris-Baglama RN, Anatchkova MD, Mosen DM. Development and preliminary evaluation of a computerized adaptive assessment for asthma (ASTHMA-CAT™). Poster presented at the annual meeting of Academy Health; 2005 Jun 26–28; Boston (MA).
58. Anatchkova M, Turner-Bowker DM, Mosen D, Bartley P, Ware J. Computerized adaptive assessment of asthma impact. Poster presented at the annual meeting of the Society of Behavioral Medicine; 2005 Apr 13–16; Boston (MA).
59. Bayliss MS, Kosinski M, Turner-Bowker DM, et al. Asthma Control Test: A user’s guide. Lincoln (RI): QualityMetric Incorporated; 2003.
60. Nathan RA, Sorkness CA, Kosinski M, et al. Development of the asthma control test: A survey for assessing asthma control. J Allergy Clin Immunol. 2004;113(1):59–65.
61. Ware JE Jr, Kosinski M, Gandek B, et al. Development and testing of the SF-8 Health Survey. Qual Life Res. 2000;9(30):307.
62. Ware JE Jr, Kosinski M, Dewey JE, et al. How to score and interpret single-item health status measures: A manual for users of the SF-8 Health Survey. Lincoln (RI): QualityMetric Incorporated; 2001.
63. Turner-Bowker D, Bayliss MS, Kosinski M, et al. Usefulness of the SF-8 Health Survey for comparing the impact of migraine and other conditions. Qual Life Res. 2003;12(8):1003–1012.
64. Strauss A, Corbin J. Basics of qualitative research: Grounded theory procedures and techniques. Sage Publications; 1990.
65. Ericsson KA, Simon HA. Protocol analysis: Verbal reports as data. Revised ed. Cambridge (MA): The MIT Press; 1993.
66. Sudman S, Bradburn N, Schwarz N. Thinking about answers: The application of cognitive processes to survey methodology. San Francisco (CA): Jossey-Bass; 1996.
67. Juniper EF, Guyatt GH, Epstein RS, Ferrie PJ, Jaeschke R, Hiller TK. Evaluation of impairment of health related quality of life in asthma: Development of a questionnaire for use in clinical trials. Thorax. 1992;47(2):76–83.
68. Marks GB, Dunn SM, Woolcock AJ. A scale for the measurement of quality of life in adults with asthma. J Clin Epidemiol. 1992;45(5):461–72.
69. Virzi RA. Refining the test phase of usability evaluation: How many subjects is enough? Human Factors. 1992;34:457–468.
70. Nielsen J, Landauer TK. A mathematical model of the finding of usability problems. Proceedings of ACM INTERCHI’93 Conference; 1993; Amsterdam (Netherlands). ACM Press; 1993.
71. Morgan D. Focus groups as qualitative research. 2nd ed. Thousand Oaks (CA): Sage Publications; 1997.