Pain and emotional distress are primary symptoms in conditions that affect children with disabilities such as cerebral palsy (CP), spina bifida (SB), neuromuscular disease (NMD), spinal cord injury (SCI), limb deficiency (LD), and amputation (AMP) (Engel, Jensen, & Schwartz, 2006; Engel, Kartin, & Jaffe, 2005; Jan & Wilson, 2004; Oddson, Clancey, & McGrath, 2006; Wilkins, McGrath, Finley, & Katz, 2004). Despite advances in understanding the consequences of these diseases and conditions, pain and emotional distress in children are often poorly managed (Eccleston, Crombez, Scotford, Clinch, & Connell, 2004; Finley, McGrath, & Chambers, 2006; Sakolsky & Birmaher, 2008).
Little is known about pain, fatigue, depression, and related symptoms in children with disabilities. Although a number of strategies exist to evaluate pain in children, there is little evidence to support their use with children with disabilities (Engel & Kartin, 2004). Fatigue has been found to be a significant problem in children with cancer (Hinds et al., 1999; Hockenberry-Eaton et al., 1998; Varni, Burwinkle, Katz, Meeske, & Dickinson, 2002), but fatigue in children with disabilities has not been well studied (Eddy & Cruz, 2007). Little is known about the prevalence of depression in children with the particular disabilities we studied, but given the relationship among pain, depression, and fatigue described in adults with similar disabilities (Benony, Daloz, Bungener, Chahraoui, Frenay, & Auvin, 2002), there is reason to believe that these children and adolescents are at high risk. Measurement of participation among children with disabilities has been limited to the Community Activities Questionnaire (CAQ) (Ehrmann, Aeschleman, & Svanum, 1995). Advances in management of these symptoms and research on effective treatments depend on the existence of measures that function well in pediatric populations.
Although there is increasing evidence that, given age-appropriate measures, children as young as five years of age can reliably and validly self-report aspects of their health status, challenges in assessment of outcomes in children remain, including inadequate construct definition and lack of consistent terminology (Jan & Wilson, 2004; Varni, Limbers, & Burwinkle, 2007; von Baeyer, 2006). In addition, many of the scales that target pediatric populations were developed for use in adult populations and later adapted for use with children. For example, the Brief Pain Inventory (BPI) was used to assess pain in children with cerebral palsy (Engel, Petrina, Dudgeon, & McKearnan, 2005) and Srisaeng (2003) used the Multidimensional Scale of Perceived Social Support (MSPSS) to assess social support in adolescent mothers in Thailand. The Community Activities Questionnaire (Ehrmann, Aeschleman, & Svanum, 1995), the primary measure of participation in children with disabilities, was derived from an adult measure, the Activity Pattern Indicator (Brown & Gordon, 1987). There is little research investigating the validity of these adapted measures in pediatric populations. Many studies claim that selected measures function reliably in pediatric populations because children provide responses. However, for the most part these same studies have not investigated children’s understanding of these measures (Canty-Mitchell & Zimet, 2000; Imms, 2008). In a comprehensive review of quality of life measures in childhood (Eiser & Morse, 2001), the authors found that instruments based primarily or entirely on adult measures imposed considerable response burden for children in terms of length, reading skills, and response scale. These authors stressed that item and scale wording and format need to be modified for children’s cognitive and language skills and that more research was needed to determine the level of response burden that children of different ages can manage.
The purpose of this study was to learn how items from self-report measures used with adults and children were interpreted by children with disabilities. Words and concepts presented in the Brief Pain Inventory (BPI), the Short Form Health Survey (SF-8), and the Multidimensional Scale of Perceived Social Support (MSPSS) were analyzed by participants during cognitive interviews (CIs).
Cognitive interviewing has emerged as a promising method for obtaining respondent feedback on potential items for inclusion in self-report questionnaires (Christodoulou, Junghaenel, DeWalt, Rothrock, & Stone, 2008) and for correcting problems with existing self-report items (Beatty & Willis, 2007). Cognitive interviewing is defined as “administering draft survey questions while collecting additional verbal information about the survey responses that is used to help determine whether the question is generating the information that its author intends” (Beatty & Willis, 2007, p. 287). Its purpose is to identify and correct problems with survey questions in an effort to minimize response error (Drennan, 2003). CIs have been shown to contribute to both the reliability and validity of measures by providing data on the clarity and meaning of questionnaire items to participants (Beatty & Willis, 2007; Clarke, 2004; Knafl et al., 2007).
Although the literature on cognitive interviewing with children is sparse (de Leeuw, Borgers, & Smits, 2004), researchers stress the importance of this step in the development of child self-report instruments because of differences between children and adults in social, emotional, and cognitive skills (Drennan, 2003; Levine, Huberman, Allen, & Dubois, 2001; Woolley, Bowen, & Bowen, 2004). For example, in a study of the validity of child-report data from the Elementary School Success Profile, Bowen (2008) found that children had issues with word recognition, misunderstanding content, response option incongruence, and misapplying response options to content. The author suggested strategies to address these problems, including deletion of problem words or items, item simplification, reordering of scale items, and adding content as needed.
Irwin and colleagues used cognitive interviews successfully with children and adolescents from the general pediatric population to gain feedback on items from patient reported outcome measures (Irwin, Varni, Yeatts, & DeWalt, 2009), and Stewart, Lynn, and Mishel (2005) were able to demonstrate the validity of using children with cancer as content experts in a self-report instrument content validity assessment. This study extends that research to include children with a variety of physical disabilities.
Cognitive interviews were conducted as part of a larger study (N=119) on pain and fatigue in children with disabilities. The two most frequently used CI techniques are “think-aloud,” in which respondents articulate what they are thinking as they answer the question, and “retrospective probing,” in which the researcher asks specific, direct questions to illuminate how the respondent went about answering the question (Bell, 2010). The retrospective probing method was used in this study. After training in CI methodology, research staff conducted the cognitive interviews using a list of guiding questions to encourage participants to talk about any words or concepts they found troublesome. For this study, researchers were most interested in two of Bowen’s (2008) categories of issues with self-report items: misunderstanding content and misapplying response options to content. Therefore, CI questions were developed based on those categories (Irwin, Varni, Yeatts, & DeWalt, 2009). Terms used in the BPI, SF-8, and MSPSS, as well as questions designed to elicit understanding of time frames, were presented during cognitive interviews. Only children who responded to items from the BPI, SF-8, and MSPSS during the larger study were included in this part of the study.
Parents of children who were seen at two participating Pacific Northwest children’s hospitals with outpatient rehabilitation programs within the last three years were sent letters inviting them and their child to participate in a study on pain and fatigue in children with disabilities. Families interested in the study were called by research staff and screened based on the following eligibility criteria: (a) child diagnosed with a chronic condition associated with pain and/or fatigue such as spina bifida (SB), cerebral palsy (CP), neuromuscular disease (NMD), spinal cord injury (SCI), or limb deficiency (LD) (congenital or amputation), (b) child between the ages of 8 and 20 years old, (c) child able to understand and read English, (d) child able to accurately respond to questions about pain and fatigue such as, “In the past week how many days have you had pain?” and (e) child reported at least some pain and/or fatigue. Children were also recruited from three local camps for children with disabilities and screened using the same eligibility criteria.
Parents and participants aged 18 and over were asked to sign a consent form that described the study and the risks and benefits of participation. Children younger than 18 were asked to sign an assent form that described the study in age appropriate language (one form for children 8–13 and a second for children 14–17). The study protocol was approved by the University of Washington Institutional Review Board as well as the Washington State University Institutional Review Board and the institutional review boards of the participating children’s hospitals.
Interviews were conducted in person or over the phone by trained research staff. A parent or camp counselor was present if desired by the child or adolescent participant, and for some interviews a second trained research staff member was present to take notes. To measure literacy, children completed the Wide Range Achievement Test 3—Reading Subtest (WRAT III). Cognitive interviews were conducted directly following the larger survey study, in which children completed a number of questionnaires developed to measure pain, fatigue, participation, and social and emotional health. Trained interviewers were provided with a list of guiding questions, such as “please tell me what ‘average pain’ means to you,” and were advised to follow up with probes in order to gain a deeper understanding of issues relevant to participants (Appendix A). Cognitive interview questions were designed to correspond to items from measures that children completed as part of the survey study and were developed to evaluate participants’ understanding of these measures.
Parents were asked to complete a parent survey that included questions about their child’s experience with pain and fatigue as well as the impact of disability on the family. However, parent participation was auxiliary and not required for child participation.
Interviewers used a standard set of cognitive interview questions and took notes during the interviews. In addition, as part of data quality management for the larger study, each interviewer audiotaped his or her first two or three interviews. The CI portions of the tapes were examined for consistency between interviewers, and procedures were standardized for the remainder of the study.
The notes containing the cognitive interview responses were transcribed verbatim and entered into a database. Responses were reviewed independently by six researchers, and then as a group (Knafl et al., 2007). Researchers sorted each participant response into one of three categories: (a) response demonstrated understanding of intended meaning of item, (b) response suggested lack of understanding of item content, or (c) participants acknowledged that they did not know the meaning of the items or concepts.
Cognitive interviews were completed with 32 children (23 female and 9 male). Participants ranged in age from 8 to 20 years with an average age of 13 years and 2 months. Of the participants, 15 had neuromuscular disease, seven had cerebral palsy, five had spina bifida, four had a limb deficiency or amputation, and one had a spinal cord injury. Participant demographics are summarized in Table 1. Standardized scores on the WRAT III were calculated for all participants. Based on national averages, 59% of participants read at or above average for their grade reading level and 41% read below their grade reading level. This differs somewhat from national averages that show 34% reading below grade level (U.S. Department of Education, 2010). This may be related to disability or to lack of participation in regular school activities by some study participants.
Participant responses revealed several areas of misunderstanding of item content and context. Participants demonstrated difficulty using the assigned time frame when responding to items. As described below, they had substantial difficulty with constructs related to pain intensity and severity, pain interference, and emotional health constructs such as distress, anxiety and support.
All measures in the survey study used the time frame “in the past seven days.” Participants were asked what time frame they thought of when responding to these items. Of those who indicated using a time frame, 52% (n=13) reported using the past week, 40% (n=10) reported using a time frame other than the past week, and 8% (n=2) said they did not know what time frame they used, even though they had indicated using one in response to the previous CI question. Participants who indicated using a time frame other than seven days said they thought about “[the] whole time, last couple [of] years” or “[the] past week doesn’t include now.”
Participants were asked to define “average” after responding to a survey item asking about their average pain in the past week. Forty-five percent (n=13) of participants provided answers that suggested understanding of the concept of “average” such as: “Not the highest, not the lowest, just in between.” Another 45% (n=13) provided inaccurate definitions such as: “Average means the total amount.” The final 10% (n=3) of participants responding to this question simply said, “I don’t know.”
When asked how they came up with their answer for their average pain in the past week, 67% (n=10) of respondents provided answers that showed they understood the question and were able to retrace their thought pattern when selecting a response option. For example, one reported having “thought back to the last time I had pain and it was not in the last week.” One participant answered in a way that indicated difficulty understanding the question, saying it “just feels like that,” and 26% (n=4) reported not knowing how they came up with their average pain.
When asked what they thought of when they reported their worst pain in the past week, 81% (n=17) of participant responses showed understanding of the question, while 19% (n=4) gave answers that indicated a lack of understanding. Some participants responded to this question with specific examples in the past week such as: “I used crutches at camp last week[…]” and others gave examples that were outside of the one week time frame such as: “thought about when I fell out of my [wheel]chair in 2004.” All respondents reported knowing how they came up with their worst pain.
When asked for their definition of “intense,” participants had a variety of responses ranging from “dramatic and shocking” to “unbearable, don’t know what to do.” While there was substantial variation in how participants responded, 65% (n=17) of participants provided definitions that indicated understanding adequate for answering survey questions about intense pain. However, 20% (n=5) of participants defined “intense” in a way that demonstrated inadequate understanding. For example, one participant defined “intense” as “hyper;” another defined it as “[…] ecstatic or happy.” The other 15% (n=4) reported being unable to define “intense.”
When asked to define “severe,” 88% (n=23) of participants provided answers that indicated understanding of the term. Responses included: “the most pain you can have” and “hurts so bad that I can’t stand up or sit down or do anything fun.” Four participants said that “severe” means the same as intense. Three participants (12%) gave inaccurate definitions of “severe” including “not really painful, only a bit” and “[pain that hurts] hardly never.”
Next, children were asked if they thought severe pain was the same as intense pain. Sixty-seven percent (n=14) of participants believed that severe was the same as intense, 29% (n=6) believed the two words meant different things, and 4% (n=1) said they did not know. Participants who described a difference between severe and intense pain gave responses such as, “[severe is] a different way of saying it, more than intense.”
Participants were asked how much pain interfered with general activity, a question in the Brief Pain Inventory. Answers that demonstrated good understanding of the term “general activity” were provided by 92% (n=23) of the sample. General activities cited included “when I’m playing with my friends”, “throwing a football” or “chores.” Two participants (8%) were unable to specify any of their “general activities,” providing answers such as, “can’t think [of any].”
When asked about pain interfering with enjoyment of life, 95% (n=18) of participants were able to define “enjoyment of life” in a way that indicated understanding. Responses included, “all the happy things” and “being able to do the stuff you want to do, follow your dreams.” One participant demonstrated some confusion with the phrase, defining enjoyment of life as, “interfering with family.”
When respondents were asked the meaning of “recreational activities,” definitions ranged from “activities you use your body with” to “smoking with friends.” Of those who responded, 70% (n=19) provided definitions indicating accurate understanding. While only 4% (n=1) of participants gave inaccurate definitions, 26% (n=7) reported not knowing what was meant by “recreational activities.”
When researchers queried respondents about the term “social activities,” 80% (n=20) provided accurate definitions. Responses included, “Talking on the phone, being with friends, physically,” “being with each other, playing with friends, swimming,” and “smoking with friends.” Twelve percent (n=3) of respondents gave definitions of social activities that indicated misunderstanding. These responses included: “problems with family” and “every day activities.” Eight percent (n=2) of respondents said that they did not know what social activities were.
When participants described what they thought of when asked if their pain interfered with “concentrating,” 81% (n=17) gave accurate examples such as interference with “classes, teachers,” or “staying on track, not distracting other people, no dilly dallying.” Two participants showed lack of understanding, providing examples such as: “family problems, emotional pain.” Two others said they did not know what they thought of.
Participants were asked to describe what “emotional help and support” meant to them. Of those that responded, 81% (n=22) gave meaningful definitions such as: “friends and family help me,” “Mom, maybe my dog,” and “counseling.” One participant used the original term, help, in the definition, so this response was categorized as inaccurate. Another 15% (n=4) of participants said they did not know what emotional help and support meant.
When asked about “emotional distress,” participant responses ranged from: “feeling alone and don’t want to talk to people about it” to “eat a bunch of chocolate, stay in bed, and cry.” Such responses represented 65% of the group (n=15). The other 35% (n=8) of the group said they did not know what emotional distress was.
Based on the SF-8 item that asks respondents if they have had emotional problems such as feeling anxious, depressed, or irritable in the past week, participants were asked what the word “anxiety” meant. Thirty-two percent of participants (n=8) gave accurate responses that included: “feeling uneasy, nervous, worrying” and “fear before something happens.” Nine participants (36%) gave responses indicating misunderstanding of the word. Their responses included “really wanting to do something and getting in trouble at school for talking about it” and “over excited, thinking about tons of stuff at once.” Additionally, 32% (n=8) said they did not know what anxiety meant.
Results indicated substantial misunderstanding of terms used in the BPI, SF-8, and MSPSS. Younger study participants (8–13 years) were twice as likely to answer “I don’t know” to questions about words and constructs as were older participants (14–20 years), who were more likely to offer some response. Despite instructions that there were no right or wrong answers, older participants may have been more likely to view the study as a test and may have wanted to be perceived as knowledgeable.
Both younger and older participants had substantial difficulty with the idea of “average” symptoms. Although clinicians and researchers commonly ask people to describe symptoms such as pain and fatigue in terms of averages, adults as well as children find this a difficult task (Broderick, Stone, Arthur, et al., 2005).
Similar to findings from other studies (Mulcahey, Calhoun, & Haley, 2009), concrete questions, such as naming a recreational or social activity, were easier for all participants than trying to explain abstract concepts. However, even when responses indicated understanding, the particular activities referenced varied broadly (e.g., definitions of social activities included “board games” and “smoking with friends”). Such a wide range of responses calls into question whether the same latent trait is being measured in different persons.
It is important for clinicians and researchers to have a solid understanding of child and adolescent development when they develop and administer self-report scales for this population. For example, since adolescents in this study tended to “guess” when they did not know the meaning of a word or an entire item, practitioners can reassure them that these measures are not tests, that there is no right or wrong answer to any question, and that verbalizing lack of understanding can help us develop better questions.
Question stems should be structured to avoid asking about “average” levels of symptoms. Instead, participants can be asked to remember when the “worst” level of a symptom occurred, as well as times when they did not experience that symptom at all. Instrument developers strive for consistency in time frame. However, age-related differences in understanding time may make recalling a specific time frame difficult for children. Although Rebok and colleagues (2001) found that school-aged children could accurately respond to standardized time frames in self-report questionnaires, our results did not support that finding. In research about self-report of pain intensity, von Baeyer (2006) found that pre-training children in use of self-report measures increased their accuracy. Therefore, if time frames are necessary, they should be kept short, and child respondents should be clearly briefed on their use and questioned about their understanding of the time frame prior to survey administration.
Item wording should be kept as concrete as possible, and probes offered as appropriate. For example, when asking about recreational activities, clinicians or interviewers might say “such as playing sports, dancing, or riding a bicycle.” Use of words with similar meanings, such as intense and severe, should be avoided in all self-report instruments for adults as well as for children and adolescents.
Although prior studies document the ability of children to read and respond to self-report items (Mulcahey, Calhoun, & Haley, 2009; Varni, Limbers, & Burwinkle, 2007), researchers and clinicians must feel confident about response accuracy. Children can be tested on their understanding of concepts to be presented in a clinical or research data collection instrument prior to its administration. For example, simple drawings of items that children would be likely to nominate as liking ‘least’ or ‘best’ can be offered and the child can be instructed to order the drawings in that way. Similarly, a child can be asked to name something that happened within the last week and something that happened more than a week ago. Parents can be asked for feedback about accuracy of their child’s responses. Use of these and other developmentally-sensitive screens can increase accuracy of self-report measures in children.
Guidelines for interpreting cognitive interview results are not yet standardized in the research literature. One concern is that even with the standardized interviewer orientation provided to research staff, differences in interview styles and personalities could have influenced response accuracy. A potential limitation of this study was the small sample size. In addition, age-related differences may exist in a sample with an age span this broad. Larger studies of cognitive interviewing could examine the effects of both chronological and developmental age on understanding of self-report items. Another potential limitation was the inclusion of a variety of physical disabilities in the sample. For example, children with disabilities such as cerebral palsy, traumatic brain injury, and spina bifida often have some degree of cognitive deficit. All participants, however, were screened for ability to participate in interviews such as these.
As researchers develop items for scales that may be used in pediatric populations, it is important to know how children are interpreting those items. We found that many children answered questions even when they did not understand them. Since poorly constructed outcome measures hinder evaluation of nursing and other health care interventions, we suggest that children be directly involved in instrument development. This can be accomplished through interviews, focus groups, or further cognitive interviewing of currently available items and scales with revisions to the items made as needed. Our team used results from this study to revise items and scales used in later time points in the larger study of pain and fatigue in children with disabilities.
A version of this paper was presented at the Western Institute of Nursing Communicating Nursing Research Conference on April 18, 2008 in Garden Grove, CA. This study was funded by the National Institutes of Health (Grant #5U01AR052171) to the University of Washington Center on Outcomes Research in Rehabilitation and by a National Institutes of Health Re-entry to Biomedical and Behavioral Health administrative supplement to the Washington State University College of Nursing.
Linda Eddy, College of Nursing Washington State University.
Leyla Khastou, School of Nursing, University of Washington.
Karon F. Cook, Department of Rehabilitation Medicine, University of Washington.
Dagmar Amtmann, Department of Rehabilitation Medicine, University of Washington.