HHS Public Access; Author Manuscript; Accepted for publication in peer reviewed journal.
Med Care. Author manuscript; available in PMC 2010 February 16.
Published in final edited form as:
PMCID: PMC2822706

Considerations for Developing Interfaces for Collecting Patient-Reported Outcomes That Allow the Inclusion of Individuals With Disabilities

Mark Harniss, PhD,* Dagmar Amtmann, PhD,* Debbie Cook, BS, and Kurt Johnson, PhD*


PROMIS (Patient-Reported Outcome Measurement Information System) is developing a set of tools for collecting patient-reported outcomes, including computerized adaptive testing that can be administered using different modes, such as computers or phones. The user interfaces for these tools will be designed using the principles of universal design to ensure that they are accessible to all users, including those with disabilities. We review the rationale for making health assessment instruments accessible to users with disabilities, briefly review the standards and guidelines that exist to support developers in creating user interfaces with accessibility in mind, and describe the usability and accessibility testing PROMIS will conduct with content experts and users with and without disabilities. Finally, we discuss threats to validity and reliability presented by universal design principles. We argue that the social and practical benefits of interfaces designed to include a broad range of potential users, including those with disabilities, seem to outweigh the need for standardization. Suggestions for future research are also included.

Keywords: computer-adapted testing, patient reported outcomes, accessibility, disability

Using computers for data collection has become routine in health care research. Research and clinical instruments are increasingly administered using computers, personal digital assistants, cell phones, voice response systems, and other information technologies.1,2

Computer-adaptive testing systems will be developed by the PROMIS (Patient-Reported Outcome Measurement Information System) network. Computer-adaptive tests (CAT) administer individually tailored questionnaires, collect and analyze responses, and provide instant results to patients, doctors, and researchers. CAT can be programmed to be used on computers with or without Internet access, personal digital assistants (PDAs), cell phones that include the necessary features, and voice response systems (VRS).
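The core adaptive loop can be illustrated with a toy item-selection sketch in Python. The item bank, item names, and parameters below are entirely hypothetical, and the 2-parameter logistic model with maximum-information selection is only one common CAT approach; PROMIS's actual item banks, IRT models, and stopping rules are more sophisticated.

```python
import math

# Hypothetical 2-parameter logistic (2PL) item bank:
# item name -> (discrimination a, difficulty b)
ITEM_BANK = {
    "fatigue_q1": (1.8, -2.0),
    "fatigue_q2": (1.2, 0.0),
    "fatigue_q3": (2.0, 1.5),
}

def prob_endorse(theta, a, b):
    """2PL probability of endorsing an item at trait level theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at theta; higher means more precision."""
    p = prob_endorse(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, administered):
    """Pick the unadministered item that is most informative at the current estimate."""
    candidates = {k: v for k, v in ITEM_BANK.items() if k not in administered}
    return max(candidates, key=lambda k: item_information(theta, *candidates[k]))

# A respondent estimated near the middle of the scale (theta = 0)
# is given the mid-difficulty item first.
print(next_item(0.0, administered=set()))  # fatigue_q2
```

After each response, the trait estimate would be updated and the loop repeated, which is what allows a CAT to measure precisely with fewer items than a fixed-length questionnaire.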

Compared with traditional paper-and-pencil questionnaires, the use of information technology (IT) to administer instruments offers significantly more flexibility to researchers and clinicians, and to patients and research participants. Information technology allows remote data collection from home or on the go, in addition to clinics and health care facilities. Using CAT reduces participant burden and provides immediate scoring. The advantages IT brings are substantial, but without attention to the potential functional limitations of users, these technologies can easily be programmed in ways that exclude some people with disabilities. Inclusiveness and representation of research participants and researchers from all backgrounds are a cornerstone of research funded by the National Institutes of Health.3 When health-related IT is designed with accessibility in mind, a broad range of users, including those with disabilities, can participate in all aspects of medical research and clinical care.

Ensuring that health-related IT is accessible to individuals with disabilities is essential. When disability is defined as an activity limitation, an estimated 49.7 million Americans have disabilities4; these numbers will increase as the general population ages.5 In addition, life expectancy for adults has been increasing during the past century, and the likelihood of having a disability increases with age. For instance, among those 80 years of age and older, 73.6% are estimated to have some form of disability, with 57.6% expected to have a severe disability.6,7

Access to health care is already limited for people with disabilities and chronic illnesses.8 They confront a variety of barriers, including lack of health insurance coverage, difficulty finding a physician familiar with disability issues, reduced access to preventive health care such as Pap smears, and reduced access to mental health services.9–12 People with disabilities have often been excluded from participation in clinical trials and other health-related research, either intentionally because they did not meet the inclusion criteria, or unintentionally because they were not able to travel to the research site or because no accommodations were available for responding to paper-and-pencil questionnaires. People with disabilities represent a significant and growing constituency, and it is important to ensure they are included in health care research and have equal access to health care assessment and intervention. Building CAT to be fully accessible to people with disabilities is an important step.

CAT applications rely on IT (eg, computers, telephones, the Internet) to present information to the user. In general, IT may be inaccessible to people with disabilities if it relies on only a single way for users to access or manipulate information. For example, people who have visual impairments cannot access pictures unless alternative text descriptions are provided; people who are deaf or hard of hearing cannot understand content that is presented only aurally; people who are colorblind cannot discriminate among color-coded options; and people who have limited use of their hands or arms may not be able to use a mouse, keyboard, or touch screen. Graphics are often used in health-related scales, as in the visual analog scales (VAS) and faces pain scales used to assess pain. The VAS requires that respondents be able to see the 10-cm line and use a mouse to rate their pain. Many of these barriers can be reduced or eliminated when technology environments are developed using universal design.13
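The missing-alternative-text barrier is also the easiest to check mechanically. As a minimal illustration (using Python's standard html.parser, with made-up file names; a real audit would cover far more than alt attributes), a checker can flag images that give a screen reader nothing to announce:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects <img> tags that are missing a non-empty alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):  # absent or empty alt text
                self.missing_alt.append(attrs.get("src", "<no src>"))

# Hypothetical questionnaire markup: one accessible image, one inaccessible.
html = """
<img src="pain_scale.png" alt="Faces pain scale, six faces from no pain to worst pain">
<img src="logo.png">
"""

checker = AltTextChecker()
checker.feed(html)
print(checker.missing_alt)  # ['logo.png']
```

Automated checks of this kind catch only a subset of accessibility problems, which is why the expert review and user testing described later in this article remain necessary.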

Universal design encompasses the development of products that are usable by all people, to the greatest extent possible, without adaptation or specialized design. These products accommodate a wide range of individual preferences and abilities; communicate necessary information effectively (regardless of ambient conditions or the user's sensory abilities); and can be approached, reached, manipulated and used regardless of the individual's body size, posture or mobility. Universal design has been defined as “… the process of creating products (devices, environments, systems, and processes) which are usable by people with the widest possible range of abilities, operating within the widest possible range of situations (environments, conditions, and circumstances), as is commercially practical.”14 In the universal design process, there are 2 goals. The first is to ensure ease of use for all participants and the second is to design the system so that users have a variety of options for how to accomplish the same task.15

In an application that is designed using a process for universal design, features may be “always on,” such as auditory feedback for an information kiosk or a high contrast option in computer operating software, or they may be available on demand, like closed captioning or audio description for video. These features are necessary to provide access for some individuals with disabilities, but they may be used and appreciated by many users without disabilities as well.16,17

Even when IT systems are designed using universal design processes, some people with disabilities still need to use assistive technology (AT) or even personal assistance to access the information. People with disabilities may use AT to access computers, software, the Internet, telephones, and other IT. People with blindness or significant vision impairments may use screen readers to gain access to the content on the screen. Screen readers are complex software programs that include features important for full access to text, not just the feature of reading text out loud. People with low vision often enlarge the fonts on web browsers, use screen-enlargement software, or use the accessibility features built into the operating system to enhance contrast. People with reading disabilities may use specialized software that reads the text on the screen. People who cannot use a keyboard may use voice recognition software to dictate text, or word prediction systems that allow them to enter a keystroke or 2 and accept the word predicted by the software. Others may use pointing or scanning systems with on-screen keyboards. People of all ages with disabilities ranging from stroke to traumatic brain injuries to spinal cord injuries to multiple sclerosis may require flexible options for using IT and/or require that IT is compatible with their assistive technology devices. It is critical that this be considered from the beginning in the development of CAT. Retrofitting is difficult and expensive.18,19

The PROMIS Administration System

PROMIS will develop several delivery platforms to be used in a variety of clinical and research settings and with a wide range of chronic disease populations. PROMIS is still in the process of selecting the delivery platforms that will be used. Currently under consideration are a web-based application that can run on a computer that is not connected to the Internet, a web-based application delivered over the Internet, a cell-phone application, and an interactive voice response system. Researchers or health care providers may choose to install the PROMIS CAT as a web-based application on a stand-alone computer not connected to the Internet, because the setting, such as a clinic, may not have access to an Internet connection or may have security policies limiting access to the Internet. In clinical settings, when patients or research participants arrive, they could respond to computer-administered questionnaires in the waiting room to measure relevant symptoms (such as pain or fatigue) before seeing their physician. The results would be immediately available and could be discussed by the patient and health care provider during the visit and could be saved to a database for analysis by researchers. In the health care provider's office, most users will interact with a computer using a standard interface (ie, a keyboard, monitor, and mouse). In some cases, patients or research participants with disabilities such as low vision may use adaptations that are part of a computer operating system (like Windows). For example, in Windows, the respondent might use system software to enlarge the text on the screen.

However, some computers in clinical settings may have security settings enabled that do not allow users to modify accessibility settings. In this situation, an individual who could not access the computer because of limitations associated with disability could use a telephone to respond using the PROMIS Interactive Voice Response (IVR) system. This system would allow the user to listen as the items are read and then select his or her responses using the telephone keypad, much like a telephone-based banking system.
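The keypad interaction described above can be sketched as a simple digit-to-response mapping. The item wording and 5-point response scale here are hypothetical, and a production IVR platform would additionally handle telephony, timeouts, and repeat prompts:

```python
# Hypothetical 5-point response scale read aloud to the caller,
# answered by pressing a single telephone keypad digit.
RESPONSE_SCALE = {
    "1": "Never",
    "2": "Rarely",
    "3": "Sometimes",
    "4": "Often",
    "5": "Always",
}

def prompt_text(item):
    """Build the spoken prompt: the item text followed by the keypad options."""
    options = ", ".join(f"press {d} for {label}" for d, label in RESPONSE_SCALE.items())
    return f"{item} {options}."

def handle_keypress(digit):
    """Map a keypad digit to a response; invalid digits lead to a re-prompt."""
    if digit in RESPONSE_SCALE:
        return RESPONSE_SCALE[digit]
    return None  # in a real system, the caller would hear the prompt again

print(prompt_text("In the past 7 days, how often did pain interfere with your sleep?"))
print(handle_keypress("4"))  # Often
```

Because the entire exchange is auditory and keypad-driven, this mode requires no vision and no pointing device, which is what makes it a useful fallback when a clinic computer cannot be adapted.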

In another scenario, a research participant may be asked to fill out study-related questionnaires at home using a computer connected to the Internet. In this case, the CAT will be implemented as a web-based application located on a server. Many people with disabilities use AT to access computers and have the necessary software and hardware installed on their home and work computers. For instance, individuals with low vision or who are blind can use screen enlargement software or text-to-speech software called a screen reader. When the interfaces are designed to be compatible with AT, users of AT can fully participate in research studies and clinical trials.

Guidelines and Standards for Developing Accessible IT

A variety of standards and models inform efforts to maximize accessibility.20 Currently, the widest reaching standards are derived from Section 508 of the Rehabilitation Act Amendments of 1998.21 Section 508 requires that when federal departments and agencies procure, develop, use, maintain, or upgrade electronic and information technology they must ensure that it complies with standards developed by the Architectural and Transportation Barriers Compliance Board. Federal agencies must provide “…access to and use of information and data to federal employees with disabilities and members of the public with disabilities that is comparable to the access to and use of the information and data by [those] who are not individuals with disabilities.”22

The intent of Section 508 was to ensure that IT be accessible in the same way that physical environments must be accessible. Section 508 standards are mandated only for IT procured and developed by the federal government but provide good guidance for all IT developers. Individual states also have passed laws regarding accessible IT, many of them based on Section 508, that apply within their boundaries.20

Although Section 508 broadly covers accessible IT, there are additional guidelines developed by the World Wide Web Consortium that apply specifically to web development. Their Web Content Accessibility Guidelines (WCAG) delineate 3 levels of accessibility: Priority 1—a web developer must meet; Priority 2—a web developer should meet; and Priority 3—a web developer may want to meet.23 Levels 1 and 2 of WCAG overlap with the Section 508 standards. Thatcher (2001) has written a comparison of the 508 standards to the WCAG guidelines.24

Considerations for Making Web-Based Applications Accessible

In the following sections, we provide an overview of the issues that apply to implementations of CAT. We also present tables that give an overview of the standards from Section 508 and provide a rationale for each. We do not cover all Section 508 criteria, but rather only those relevant to the PROMIS systems that use IT to collect patient reported outcomes. We have simplified some of the language found in the standards to make it more accessible for less technical audiences. Readers should consider this a broad overview of a complex topic and are encouraged to read more about accessibility and consult accessibility experts regarding specific applications. We conclude with an overview of how PROMIS will test for accessibility and briefly discuss additional relevant issues such as administration effects.

Accessibility of Web-Based Applications

Web-based applications have the potential to be highly accessible to individuals with disabilities. However, without the understanding and expertise required to assure accessibility, they can be inadvertently created in ways that are difficult or impossible to use by participants or patients with disabilities. Table 1 shows the relevant Section 508 standards that apply to computer administered health related questionnaires. These standards cover an array of issues ranging from appropriate use of color to appropriate use of client-side scripting.

TABLE 1. Accessibility of Web-Based Applications

Accessibility of Multimedia

Developers of computer-administered applications may use multimedia (audio/video) to make the process more engaging and allow participants with emerging literacy (such as children younger than 8 years) to respond to health questionnaires. Unless accessibility is considered during development, the use of multimedia can create barriers for people with disabilities because content is provided both visually and audibly, with each method dependent upon the other for a complete presentation. Table 2 shows the relevant Section 508 standards. The primary concerns addressed by these standards relate to building in alternate presentation of content through captioning and audio description.

TABLE 2. Accessibility of Multimedia

Accessibility of Interactive Voice Response Systems

Interactive Voice Response Systems are telephone-operated systems like those used by banks and credit card companies. Table 3 shows the relevant Section 508 standards. The primary concerns addressed by these standards have to do with the use of TTYs (telecommunication devices for the deaf) and the timing of responses.

TABLE 3. Accessibility of Interactive Voice-Response Systems

Testing of Accessibility

Section 508 does not provide a step-by-step process for evaluating whether a product meets the standards; determining whether a product meets accessibility standards is a complex process. PROMIS will engage in 2 types of accessibility testing. The first step involves expert review by individuals with detailed knowledge of computer systems and accessibility issues. To make accurate judgments about accessibility, evaluators must have good working knowledge, experience and expertise in several areas. These areas of expertise include technical skills in the area of IT (eg, a thorough knowledge of all components and functions of the Windows operating system, including accessibility and display options), expertise in assistive technology (in particular, evaluators need to be skilled users of screen readers, screen enlargement and other relevant AT applications), good working knowledge of the application being tested, and previous experience in the process of evaluating applications/products (eg, familiarity with the process, training in use of accessibility checklists, and understanding of how the results will be used). A single individual rarely has all the skills required; creating teams that collectively have the expertise needed is the best practice. As part of its work, PROMIS has funded a network project to ensure that all applications are designed with accessibility in mind. Working with the Center for Technology and Disability Studies at the University of Washington, the network project has access to teams of experts who are skilled in using all types of AT and in approaches that maximize accessibility of electronic and information technology.

Second, PROMIS will implement methodologies for studying usability and accessibility.25 These approaches involve testing of the CAT by potential users with and without disabilities who may use assistive technologies. This testing includes: (1) self-report by users in open-ended question response format, (2) observation and video recording by usability and accessibility experts to examine how users interact with the interface, and (3) cognitive interviewing to improve the CAT's functionality.26

During this testing, researchers will ask questions to better understand users' expectations and actions. Examples of such questions include: "What were you thinking when you followed that link?", "What did you expect to see?", and "Did you find the information provided by the 'HELP' feature useful?" After the session, researchers will engage in a "cognitive interviewing" process in which they ask participants more about the specific choices they made, with questions like: "What did you find yourself thinking about when you selected X?", "Can you think of a way to change the screen that would make it easier for you to use?", "How intuitive or user-friendly did you find the navigation system?", "Would you have preferred your choices to be displayed vertically rather than horizontally?", "Were the buttons large enough for you to select them?", "Was the help button positioned where you expected it?", and "Were the pictures at the top of the screen distracting?" The final design will be modified based on the findings from these accessibility and usability studies.

Threats of Different Modes of Administration to Validity and Reliability

Testing accommodations for examinees with disabilities have been intensely discussed in educational testing, especially with respect to large-scale, high-stakes testing such as the SAT and GRE. To a lesser extent, these issues have been examined in licensure testing. In the following section, we review the topics most relevant to health-related assessment, leaving out the issues of cheating and guessing because they present a much smaller threat in measuring patient-reported outcomes. For a review of legal cases and published studies related to testing accommodations in educational and licensure testing, see Pitoniak and Royer.27

In health-related testing, the primary concern is that error is introduced if users are allowed to respond in non-standard ways by choosing the mode of administration (ie, telephone or computer) and using assistive technology (eg, screen readers, screen enlargement). Standardization is the degree to which the assessment is administered and scored under uniform conditions.28 Uniformity in the instructions, time limits, item presentation and recording of responses, and the type of equipment used helps to ensure that any differences found among the respondents are due to different levels of the construct measured, rather than because of differences in assessment conditions.27 Ideally, all respondents would be assessed under identical conditions; however, that is often impossible or inappropriate for individuals with disabilities. The choice is to either exclude individuals with disabilities because they are unable to respond to the standardized assessment tool designed for the general public or to relax standardization requirements.29 By administering the assessment under nonstandard conditions, difficulties that are not relevant to the construct measured can be removed. When making a decision, it is important to understand that the potential threats to validity are also introduced when individuals with disabilities are excluded from clinical trials, clinical practice and research. While designing assessment systems that allow for inclusion of individuals with disabilities is clearly the choice we strongly advocate, we recognize that it raises psychometric issues that need to be examined and balanced with practical, legal, and social policy implications.

Bennett et al30 outlined the implications of increased measurement error. When a group of respondents is measured with less precision, score comparability may be affected and any decisions based on the score are more prone to error. In educational settings, this may mean that an inappropriate college admission decision is made. In clinical settings, it may mean that a less than optimal treatment is selected. Research suggests that a change in mode of administration may result in both measurement and nonmeasurement error.31 Nonmeasurement error is produced when individuals cannot or do not want to respond to an instrument because of the mode in which it is administered. For example, individuals with low literacy, low vision, or blindness may not be able to respond to a paper-and-pencil questionnaire, and individuals without access to computers cannot respond to an online questionnaire. Mode-of-administration effects for this type of error include inadequate sampling and differences in total and item response rates. Approaches that use universal design may actually reduce error due to response rate because they allow individuals with disabilities to participate in a survey. For instance, a blind individual with back pain can participate in a clinical trial on the efficacy of pain medication that requires keeping diaries and responding to questionnaires on a computer, as long as both modes of administration are designed with accessibility in mind.

Measurement error also can be introduced as a result of mode of administration. Potential sources of error in the PROMIS-developed assessment that relate to mode of administration include response choice order (ie, when using paper, people consider the first item; when using the telephone, people consider the last item and must recall the rest), recall effects (ie, the extent to which the individual can recall relevant information alone vs. with an interviewer to prompt), and respondent preferences (ie, the individual's preference for face-to-face, computer, or paper administration).31 In addition, in the PROMIS assessment system, all respondents will be able to choose whether they use the mouse or the keyboard to select their responses. Respondents who are unable to use the mouse (or to use it well) may choose to use the keyboard. This built-in accessibility feature is not expected to affect the level of the construct measured (eg, pain or physical function) because it is equally available to all participants. There is no empirical evidence suggesting that people with disabilities would respond to the listed sources of measurement error differently from the general public, and strategies for reducing these types of error in the general public should be equally effective in measuring the self-reported health of individuals with disabilities.

Before we review the types of measurement error that may be specific to individuals with disabilities, it may be useful to understand the distinction between "accommodations" and "modifications." Accommodations provide "unique and differential access" to allow respondents to participate in an assessment that they would otherwise be unable to complete. In other words, accommodations remove confounding influences of the assessment format, administration, or ways of responding.32 Modifications change the nature of the construct being assessed, the way in which the assessment is given, or how it is completed. For instance, when using the "faces" scales for pain assessment,33,34 respondents unable to see, or unable to see well enough to recognize the features of the drawings that show faces expressing increasing levels of distress, would have to be administered a different scale (eg, a numeric rating scale) or provided with a description of each face. This modification would likely change the construct being assessed and the way in which the item is administered. By designing assessment systems with accessibility in mind, the need for modifications for individuals with disabilities is reduced, keeping the construct being measured constant across all respondents. In the next section, we review the published research on how accommodations affect measurement error.

A series of studies was conducted by the Educational Testing Service, the College Board, the Graduate Record Examination Board, and other researchers in educational settings. Accommodations for individuals with visual impairments, physical impairments, hearing impairments, and learning disabilities were typically considered; these accommodations included large type, Braille, and audio recordings of the text on cassette. The studies found no statistically significant differences in standard error of measurement between examinees with and without disabilities,30 generally the same factor structure was supported,35 and item-test correlations were found to be very similar for examinees with and without disability, although some items were differentially easier or harder for individuals with disabilities who used accommodations.36,37 The main concerns reported in educational research relate to mathematics items administered to blind students using Braille and the extended time provided to students with learning disabilities. These concerns are not highly relevant in self-reported health assessment. In contrast, the effects of low English literacy are very relevant and were found consistently with prelingually deaf examinees,38 suggesting that attention should be paid to this issue and that more research is needed in this area of health assessment.

Most of the published research on effects of disability accommodations comes from educational settings because the consequences of the potential biases are considerable. There are several reasons why less attention has been paid to studying these issues in health assessment. Pitoniak and Royer identify 3 main research challenges in analyzing the effects of accommodations on validity: small sample sizes, variability of disabilities, and variability in accommodations.27 In the development of the PROMIS assessment system we decided that the error introduced by not providing individuals with disabilities access to the PROMIS measurement tools is a greater concern than potentially introducing more error by building in accessibility features and accommodations. Individuals with disabilities cannot participate unless the system is designed with accessibility in mind, leaving potentially large and to date mostly untapped segments of the population out of participation in clinical trials and research opportunities.


Implementing standards and guidelines for accessible IT can lead to the development of user interfaces that are accessible to individuals with disabilities, including those who use AT. Testing for accessibility and usability in the development of interfaces facilitates the inclusion of individuals with disabilities in future research and clinical assessment of important patient-reported outcomes such as pain and fatigue, conditions that people with disabilities commonly experience. Historically, individuals with disabilities have been largely excluded from participation in clinical trials and, more broadly, from equal access to health care. By utilizing information technologies programmed with accessibility in mind, most people with disabilities can fully participate in PROMIS-developed health assessment systems without requiring that additional accommodations be provided by researchers or clinicians.


Supported in part by the National Center on Accessible Information Technology in Education (AccessIT), funded by the National Institute on Disability and Rehabilitation Research (NIDRR), Grant #H133D010306.

Funded by the National Institutes of Health through the NIH Roadmap for Medical Research, Grant 5U01AR052171-02. Information on RFA-RM-04-011, Dynamic Assessment of Patient-Reported Chronic Disease Outcomes, can be found at


1. Stone AA, Schwartz JE, Broderick JE, et al. Variability of momentary pain predicts recall of weekly pain: a consequence of the peak (or salience) memory heuristic. PSPB. 2005;31:1340–1346. [PubMed]
2. Sorbi MJ, Peters ML, Kruise DA, et al. Electronic momentary assessment in chronic pain I: psychological pain responses as predictors of pain intensity. Clin J Pain. 2006;22:55–66. [PubMed]
3. NIH Grants Policy Statement (12/03) Part II: Terms and Conditions of NIH Grant Awards Subpart A: Requirements for Inclusiveness in Research Design (NIH web site) Dec2003. [November 15, 2006]. Available at:
4. Okoro CA, Balluz LS, Campbell VA, et al. State and metropolitan-area estimates of disability in the United States, 2001. Am J Public Health. 2005;95:1964–1969. [PubMed]
5. McNeil JM, Binette J. Prevalence of disabilities and associated health in the United States, 1999. MMWR Weekly. 2001;50:120–125.
6. Ansello EF, Eustis NN. A common stake? Investigating the emerging ‘intersection’ of aging and disability. Generations. 1992;16:5–8.
7. McNeil JM. Americans with Disabilities: 1997. Current Population Reports P70-73. Washington, DC: US Census Bureau; 2002.
8. Hagglund K, Clark M, Conforti K, et al. Access to health care services among people with disabilities receiving Medicaid. Mo Med. 1999;96:447–453. [PubMed]
9. Gold M, Nelson L, Brown R, et al. Disabled Medicare beneficiaries in HMOs. Health Aff (Millwood) 1997;16:149–162. [PubMed]
10. Nelson L, Brown R, Gold M. Access to care in Medicare HMOs, 1996. Health Aff (Millwood) 1997;16:148–156. [PubMed]
11. Iezzoni L, McCarthy E, Davis R. Use of screening and preventive services among women with disabilities. Am J Med Qual. 2001;16:135–144. [PubMed]
12. Hanson K, Neuman P, Dutwin D, et al. Uncovering the health challenges facing people with disabilities: the role of health insurance. Health Aff (Millwood). 2003;(Suppl Web Exclusives):W3-552–565. [PubMed]
13. Story M. Maximizing usability: the principles of universal design. Assist Technol. 1998;10:4–12. [PubMed]
14. Vanderheiden G, Tobias J. Universal Design of Consumer Products: Current Industry Practice and Perceptions. [November 15, 2006]. Available at:
15. Tobias J. Universal design: Is it really about design? Information Technol Disabilities. 2003;9:2.
16. Amtmann D, Johnson K. Internet and information technologies: consumer empowerment. Technol Disability. 1998;8:107–113.
17. Johnson K, Amtmann D, Brown S. Universal design and access to information in higher education. World Conference on Educational Multimedia, Hypermedia and Telecommunications; 2003. pp. 3298–3302.
18. Johnson KL, Dudgeon B, Amtmann D. Assistive technology in rehabilitation. In: Johnson KL, Haselkorn J, editors. Physical Medicine and Rehabilitation Clinics of North America: Vocational Rehabilitation. Philadelphia, PA: Saunders; 1997.
19. Johnson K, Puckett F. IT Corner: accessibility issues for electronic and information technology. Rehabil Ed. 2003;16:374–379.
20. Johnson K, Brown S, Amtmann D, et al. Web accessibility in post-secondary education: legal and policy considerations. Information Technol Disabilities. 2003;9:2–24.
21. Summary of Section 508 Standards. [November 15, 2006]. Available at:
22. Rehabilitation Act Amendments. 29 USC §794. 1998.
23. Chisholm W, Vanderheiden G, Jacobs I, editors. Web Content Accessibility Guidelines 1.0. [November 15, 2006]. Available at:
24. Thatcher J. Side by Side WCAG vs 508. [November 15, 2006]. Available at:
25. Russell K, Dudgeon B, Deitz J, et al. Accessibility and usability of communication and organizational systems in distance learning. Rehabil Ed. 2003;17:81–93.
26. Willis G. Cognitive Interviewing: A Tool for Improving Questionnaire Design. Thousand Oaks, CA: Sage Publications, Inc.; 2005.
27. Pitoniak M, Royer J. Testing accommodations for examinees with disabilities: a review of psychometric, legal, and social policy issues. Rev Ed Res. 2001;71:53–104.
28. Sax G. Principles of Educational and Psychological Measurement and Evaluation. Belmont, CA: Wadsworth; 1997.
29. American Educational Research Association. Standards for Educational and Psychological Testing. Washington, DC: Author; 1999.
30. Bennett R, Rock D, Kaplan B. Psychometric characteristics. In: Willingham W, Ragosta M, Bennett R, et al., editors. Testing Handicapped People. Boston, MA: Allyn & Bacon; 1988. pp. 83–97.
31. Bowling A. Modes of questionnaire administration can have serious effects on data quality. J Public Health. 2005;27:281–291. [PubMed]
32. Tindal G, Fuchs L. A Summary of Research on Test Changes: An Empirical Basis for Defining Accommodations. Lexington, KY: Mid-South Regional Resource Center; 2000.
33. Bieri D, Reeve RA, Champion GD, et al. The Faces Pain Scale for the self-assessment of the severity of pain experienced by children: development, initial validation, and preliminary investigation for ratio scale properties. Pain. 1990;41:139–150. [PubMed]
34. Hicks C, VonBaeyer C, Spafford P, et al. The Faces Pain Scale— revised: toward a common metric in pediatric pain measurement. Pain. 2001;93:173–183. [PubMed]
35. Rock D, Bennett R, Kaplan B. Construct validity. In: Willingham W, Ragosta M, Bennett R, et al., editors. Testing Handicapped People. Boston, MA: Allyn & Bacon; 1988. pp. 83–97.
36. Koretz D. The Assessment of Students With Disabilities in Kentucky (CSE Technical Report No 431) Los Angeles, CA: University of California, National Center for Research on Evaluation, Standards, and Student Testing; 1997.
37. Koretz D, Hamilton L. Assessing Students With Disabilities in Kentucky: The Effects of Accommodations, Format, and Subject (CSE Technical Report No 498) Los Angeles, CA: University of California, National Center for Research on Evaluation, Standards, and Student Testing; 1999.
38. Nester M. Employment testing for handicapped persons. Public Personnel Manage J. 1984;13:417–434.