J Off Stat. Author manuscript; available in PMC 2010 November 24.
Published in final edited form as:
J Off Stat. 2010; 26(3): 507–533.
PMCID: PMC2990982

Using Audio Computer-Assisted Self-Interviewing and Interactive Voice Response to Measure Elder Mistreatment in Older Adults: Feasibility and Effects on Prevalence Estimates


Demographic trends indicate an aging population, highlighting the importance of collecting valid survey data from older adults. One potential issue when surveying older adults is the use of technology to collect data on sensitive topics. Survey technologies like A-CASI and IVR have not been used with older adults to measure elder mistreatment. We surveyed 903 adults age 60 and older in Allegheny County, Pennsylvania (U.S.), with random assignment to one of four survey modes: (1) CAPI, (2) A-CASI, (3) CATI, and (4) IVR. We assessed financial, psychological, and physical mistreatment, and examined the feasibility of A-CASI and IVR and their effects on prevalence estimates relative to CAPI and CATI. Approximately 83% of elders randomized to the A-CASI and IVR conditions used the respective technology, although 28% of respondents in the A-CASI condition refused to use headphones and read the questions instead. A-CASI produced higher six month prevalence rates of financial and psychological mistreatment than CAPI. IVR produced higher six month prevalence rates of psychological mistreatment than CATI. We conclude that, while IVR may be useful, A-CASI offers a more promising approach to the measurement of elder mistreatment.

Keywords: mode effects, sensitive topics, older adults, A-CASI, IVR

1. Introduction

Current worldwide demographic trends clearly indicate that the population is aging. The National Center for Health Statistics (2005) estimates that by 2030, people over 65 will represent about 22 percent of the population in the United States (versus approximately 13% in 2008), with the fastest growing cohort being those 75 years and older. Similar trends can be seen worldwide, with an estimated 24% of Europeans and about 12% of Latin Americans age 65 and older by 2030. Given these trends, the importance of collecting valid and reliable survey data from older adults is clear.

One issue when conducting surveys of the elderly is the use of technology such as computer-assisted self-interviewing (CASI), audio-CASI, or Interactive Voice Response (IVR) telephone surveys (Couper, 2008). These methods are often employed to measure sensitive topics in surveys, with the idea that they combine the benefits of self-administration with the control of computerization to result in more valid estimates (see review below). However, older adults may be less comfortable interacting with technology than their younger counterparts. For example, the latest data available from the April, 2009 Pew Research Center Internet and American Life Project shows that 42% of those age 65 and older use the internet, compared to 79% of those 50 – 64; 87% of those 30 – 49; and 92% of 18 – 29 year olds (Pew Research Center, 2009). Older adults may also have sensory (hearing, vision) or cognitive deficits that make it more difficult to use certain technology. Thus, a key question is whether survey technologies that have been shown to be effective for measuring sensitive topics in younger adults will also be feasible and effective for the older adult population. One sensitive topic for older adults that is receiving increased attention from researchers and policy makers is elder mistreatment.

Elder mistreatment has been recognized as a significant social problem for several decades. However, given the nature of the phenomenon, it has been difficult to derive scientifically sound, population-based estimates of the prevalence and incidence of elder mistreatment. The National Research Council (2003) issued a report summarizing the state of scientific knowledge in the area, noting a variety of fundamental deficits. One of the most pressing problems in elder mistreatment research is the widely acknowledged fact that cases that make it into the Adult Protective Services (APS) system – those severe enough to come to the attention of service providers or public officials – represent just the “tip of the iceberg.” The official APS statistics are known to be significant underestimates of the problem.

1.1. Population-based surveys with elder mistreatment victims

In order to capture this “hidden” or “submerged” portion of elder mistreatment, direct population-based surveys of victims are one of the most promising avenues (National Research Council, 2003). While many in the field are skeptical that elders would tell a stranger about mistreatment occurring at the hands of a family member, previous surveys have indeed found this to be the case. The first population-based survey effort in the U.S. was conducted by Pillemer and Finkelhor (1988). These authors conducted a survey of 2,020 non-institutionalized elderly (65 and older) living in the Boston (U.S.) metropolitan area. Interviews were conducted primarily by telephone (some were done in person) using structured questionnaires – a modified version of the Conflict Tactics Scale (CTS; Straus, 1979) – to measure three domains of elder mistreatment: physical abuse, psychological abuse, and neglect. The survey produced an overall prevalence rate of 3.2%. A more recent national elder mistreatment survey of 3,005 U.S. adults age 57 to 85 by Laumann, Leitsch, and Waite (2008) reported one year prevalence rates of 9% (psychological), 3.5% (financial), and 0.2% (physical) using in-person interviews. A national telephone survey of 2,008 non-institutionalized elderly conducted in Canada by Podnieks (1992), using methods similar to Pillemer and Finkelhor (1988), found a 4.0% prevalence rate, but this survey also included a measure of material/financial abuse, which was the most prevalent sub-type (2.5%). A population-based, in-person survey of 1,797 community-dwelling 69 – 89 year olds in Amsterdam, the Netherlands, conducted by Comijs et al. (1998) found an overall prevalence rate of 5.6%, with chronic verbal aggression (3.2%) most common (the study also included physical abuse, neglect, and financial mistreatment).

In sum, there have been very few attempts at population-based surveys of victims of elder mistreatment. The surveys differed in the age criteria used to define “elder,” the types of elder mistreatment examined, the definitions of what constituted a “case” of elder mistreatment, and the mode of data collection (telephone versus in-person); results also varied as to which sub-type of elder mistreatment was most prevalent. Questions also remain as to whether even these prevalence estimates are themselves too low, given the sensitive nature of elder mistreatment and the potential reluctance of victims to admit it to an interviewer using standard telephone or in-person methods.

1.2. Survey Methodology and Sensitive Questions

The survey methodological literature on asking sensitive questions is relevant to any attempt to measure elder mistreatment directly from victims (Tourangeau and Yan, 2007). Survey methodologists are well aware that reports of sensitive behaviors are potentially subject to underreporting as a result of social desirability concerns, perceived invasion of privacy, and fear of disclosure to third parties (see Tourangeau, Rips, and Rasinski, 2000, ch. 9 for a review). Direct questions to older adults about mistreatment at the hands of family members or other trusted persons are likely subject to these concerns. These questions could potentially be seen as embarrassing or nobody's business (e.g., airing dirty laundry about intimate family matters). Potentially more important, questions about elder mistreatment may be seen as threatening, potentially resulting in family consequences (e.g., retaliation by the perpetrator if survey reports are revealed), or even legal consequences (removal or prosecution of the family perpetrator).

One of the most consistent findings in the survey literature on sensitive behaviors is that self-administered modes of data collection result in higher levels of reporting than interviewer-administered modes (e.g., Aquilino and LoSciuto, 1990; Aquilino, 1994; Tourangeau et al., 1997; Tourangeau and Yan, 2007). Although most studies do not involve a “gold standard,” or “objective” record to verify individual reports of sensitive behaviors, researchers generally assume that, due to social desirability and other concerns, respondents tend to under-report behaviors that reflect poorly on them (e.g., drug use; illegal activity; abortions; same-sex relationships), and over-report behaviors that make them look good (e.g., voting; church attendance; charitable donations). Thus, higher prevalence of “bad” things and lower prevalence of “good” things are seen as more valid or accurate. Self-administered methods of data collection are generally thought to increase validity in reporting sensitive behaviors by increasing perceived privacy through eliminating the need to disclose such information directly to an interviewer. In the case of elder mistreatment, self-administration may reduce at least some of the concerns mentioned above, including less embarrassment at having to report mistreatment to another person, as well as less threat of a potential perpetrator over-hearing survey responses to an interviewer. Thus we would expect self-administration to result in increased reports of elder mistreatment relative to interviewer-administered modes.

Another aspect of survey data collection that has important implications for gathering sensitive data is the use of computers. The most common examples are computer-assisted telephone interviews (CATI), and computer-assisted personal interviews (CAPI), which have both been widely adopted (Weeks, 1992; Couper, 2008). Subsequent research has shown that reports of sensitive behaviors can sometimes be increased with computers versus traditional paper and pencil methods (Baker, Bradburn, and Johnson, 1995; Tourangeau et al., 1997; Wright, Aquilino, and Supple, 1998), although the effects are not as dramatic as for self- versus interviewer-administered surveys (see above). More recently, computerized telephone (CATI) and personal (CAPI) interviewing have been supplemented with new applications of technology that attempt to enhance the perceived privacy of the interview (see Couper, 2008 for an overview of technology trends in surveys).

1.3. Audio Computer-Assisted Self Interviewing (A-CASI)

In the personal interview setting, audio computer-assisted self interviewing (A-CASI) has been developed, in which the computer plays a recorded version of questions and answer choices over headphones, and the respondent answers via keyboard, mouse, or touch-screen. This technology has been used in conjunction with face-to-face interviews, in which sensitive questions are administered via A-CASI, with the remainder of the questions being administered by an interviewer using CAPI. Research has shown that A-CASI is both generally acceptable to respondents (O'Reilly et al., 1994), and results in higher levels of reporting of sensitive behaviors than in-person interviewer-administered methods (e.g., Des Jarlais et al., 1999; Epstein, Barker, and Kroutil, 2001; Lessler and O'Reilly, 1998; Metzger et al., 2000; Perlis et al., 2004; Tourangeau and Smith, 1996; Turner et al., 1998).

However, A-CASI has generally been used only with younger, more computer literate populations, and has not been tested specifically with older respondents who may be less comfortable with technology. In fact, in one exception found in the literature testing CASI (but not A-CASI), Couper and Rowe (1996) reported that only 60% of the 682 respondents 65 years and older did CASI themselves, with 14% requiring the interviewer to read the questions, which they keyed in, and 26% requiring the interviewer to continue with a CAPI interview. In contrast, a usability study testing A-CASI that included two elderly females did show that they were able to use the system without difficulty (Schneider and Edwards, 2000), suggesting that the Couper and Rowe (1996) findings may have other explanations besides poor usability. Thus, despite its potential to further increase reporting of sensitive behaviors, there are still questions about the feasibility of incorporating A-CASI into personal interviews with the elderly.

One issue is whether or not the “audio” portion of A-CASI is actually adding anything over and above simpler computer-assisted self interviews (CASI), where respondents read questions from the screen and respond using a laptop computer without hearing questions via headphones. A recent study by Couper, Tourangeau, and Marvin (2009) found that many respondents did not even use the audio features of A-CASI, and that very few substantive differences were found between A-CASI and CASI for sensitive topics like drug use and sexual behavior. The authors thus call into question the need for A-CASI (the “audio” portion – use of headphones) relative to the simpler (text-only) CASI approach. We decided in this study to test the full-blown A-CASI version, including displaying the questions on the screen, for two main reasons. First, given that this is the first large-scale test of the method with older adults, we reasoned that using a full-scale A-CASI method would allow more detailed feasibility testing of whether or not audio features were actually used, and, if so, whether they had effects on prevalence of elder mistreatment over and above those choosing to use the simpler CASI. Second, on a more practical level, we thought that due to possible vision, hearing, or cognitive problems among the elderly, giving respondents the option to both hear and see the questions would limit potential difficulties in responding.

1.4. Interactive Voice Response (IVR)

Traditional CATI telephone interviews have been supplemented with Interactive Voice Response (IVR; also called Telephone A-CASI or T-ACASI), in which a recorded voice system administers questions and the respondent replies by using the keys on a touchtone telephone (see Miller Steiger and Conroy, 2008 for an overview). Early versions of IVR were completely automated, with respondents dialing into or being “called” by the computer, and have been used in a variety of commercial, medical and clinical contexts (see Corkrey and Parkinson, 2002a for a review). More relevant to surveys involving sensitive topics are IVR applications in which a live interviewer initiates the call and administers non-sensitive items, while transferring the respondent to the IVR system for the sensitive questions, with the option of returning to the interviewer to complete the survey (Cooley et al., 2000; Mingay, 2001; Turner et al., 1996). Studies comparing IVR to traditional CATI telephone surveys have generally shown increased reporting of sensitive behaviors with IVR (Corkrey and Parkinson, 2002b; Currivan et al., 2004; Gribble et al., 2000; Mingay, 2001; Moskowitz, 2004; Turner et al., 1996; 2005; Villarroel et al., 2006; 2008).

However, IVR interviews are not without problems. The primary issue is higher rates of break-offs (hang-ups) once the respondent has been transferred to the IVR system (Cooley et al., 2000; Couper, Singer, and Tourangeau, 2004; Gribble et al., 2000; Mingay, 2001; Moskowitz, 2004; Tourangeau et al., 2002; Turner et al., 1996; Villarroel et al., 2006). Additional issues with IVR are lack of touchtone telephones, and cordless phones with keys on the handset, which may be awkward to use for responding. These issues may be particularly likely to occur for older adults, who may still be using older rotary phones or have less manual dexterity to respond to IVR using a cordless phone with handset keys. Thus, similar to A-CASI (see above), despite the potential to further increase reporting of sensitive behaviors, there are questions about the feasibility of incorporating IVR into telephone interviews with the elderly. We are aware of new and improving voice recognition technology that allows IVR responses to be spoken rather than responding by keying numbers on the handset (Bloom, 2008). However, we decided to use the more common keypad approach, partially due to concerns about reliability and cost. We also reasoned that keypad responding would reduce the likelihood of family members over-hearing the respondents' end of the conversation about elder mistreatment, which could inhibit responding.

1.5. Study Overview and Research Questions

This study tested the feasibility, acceptability, and validity of survey methodologies for collecting self-report data from elder mistreatment victims in Allegheny County (Pittsburgh) Pennsylvania, U.S.A. We conducted population-based household surveys of the elderly, with random assignment to one of four survey modes: (1) standard CAPI in-person interview; (2) “privacy enhanced” CAPI in-person interview with a switch to an A-CASI system for the elder mistreatment items; (3) standard CATI telephone interview; and (4) “privacy enhanced” CATI telephone interview with a switch to an IVR system for the elder mistreatment items. Interviews were conducted with 903 adults age 60 and older, with approximately equal numbers of interviews in each condition. We focused on self-reports of elders regarding financial, psychological, and physical mistreatment by family members or other trusted persons.

This research was conducted to address two major themes:

1. Feasibility / acceptability of new survey technologies among older adults

Can elders use newly developed survey technologies to report data on sensitive topics like elder mistreatment? Will they accept the technology, or will they prefer simply responding to an interviewer?

2. Effects of new survey technologies on prevalence estimates of elder mistreatment (i.e., validity)

If feasible and acceptable, do these new methods actually produce more valid reporting of elder mistreatment? Given the likelihood that elders have a tendency to under-report mistreatment due to embarrassment, shame, or fear of reprisal, do the new technologies increase reporting relative to traditional CAPI and CATI approaches?

An additional issue that can be addressed with this design is that of standard mode effects in reports of elder mistreatment, which have received very little attention in the literature (National Research Council, 2003). Do in-person methods result in different prevalence rates than telephone interviews? This question can be addressed in general, within the more traditional CAPI versus CATI framework, or in the context of the new survey technologies (i.e., A-CASI vs. IVR). The literature on telephone versus in-person methods for sensitive topics is somewhat equivocal in this area (see De Leeuw, 2008 for an overview).

2. Data and Methods

2.1. Sample Design

The target population was adults 60 years and older residing in households with landline telephones in Allegheny County (Pittsburgh) Pennsylvania, U.S.A. Additional eligibility criteria included English-speaking and no severe cognitive impairment – defined as a score of 9 or higher on the Telephone Interview for Cognitive Status (Herzog and Wallace, 1997), which was administered early in the interview.

Random digit dialing (RDD) telephone sampling with screening was used to obtain samples for all four conditions of the experimental design (see below). More specifically, 1+ bank list-assisted RDD using the Genesys Sampling System was used. One goal of the study (though not of particular interest here) was to explore potential racial differences in the experience of elder mistreatment, and thus African Americans were over-sampled. To accomplish this, telephone exchanges in Allegheny County were placed into two strata prior to sampling: (1) those containing an estimated 25% or higher African American population (approximately 16% of exchanges), and (2) those containing an estimated less than 25% African American population. Approximately equal numbers of telephone numbers were randomly sampled from the two strata, with the goal of achieving approximately 25% African Americans (about twice their proportion in the population). Non-African Americans sampled through the high density stratum, and African Americans sampled through the low density stratum, remained eligible for interview. In the telephone conditions, once a household was determined to be eligible and the respondent agreed to the interview, it was conducted at that time. For the in-person conditions, once eligibility was determined and the respondent agreed to the interview, an appointment was scheduled, and an address was obtained for interviews to be conducted in the home. If the household contained more than one eligible adult (i.e., 60 or older), the target respondent was selected using the last birthday method (Gaziano, 2005).
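The two-stage selection logic described above can be made concrete with a short sketch. The following Python fragment is illustrative only: the exchange records, field names, and helper functions are hypothetical stand-ins, and a production list-assisted RDD system such as Genesys involves considerably more machinery.

```python
import random
from datetime import date

random.seed(2007)  # fixed seed so draws are reproducible

def stratify_exchanges(exchanges):
    """Split county telephone exchanges into high- and low-density strata
    at the estimated 25% African American population cutoff."""
    high = [e for e in exchanges if e["pct_african_american"] >= 25.0]
    low = [e for e in exchanges if e["pct_african_american"] < 25.0]
    return high, low

def draw_numbers(stratum, n):
    """Sample n candidate telephone numbers at random from one stratum.
    (List-assisted 1+ bank RDD actually generates numbers within 100-banks
    containing at least one listed number; a flat pool stands in here.)"""
    pool = [num for e in stratum for num in e["candidate_numbers"]]
    return random.sample(pool, n)

def select_respondent(eligible_adults, today=date(2007, 6, 1)):
    """Last-birthday method: choose the eligible adult (60 or older) whose
    birthday most recently preceded the contact date (leap-day birthdays
    ignored for brevity)."""
    def days_since_last_birthday(person):
        m, d = person["birth_month"], person["birth_day"]
        year = today.year if (m, d) <= (today.month, today.day) else today.year - 1
        return (today - date(year, m, d)).days
    return min(eligible_adults, key=days_since_last_birthday)
```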

2.2. Randomization to Interview Mode

After drawing the initial RDD sample telephone numbers, they were randomly assigned to one of the following conditions prior to release to the interviewers:

a. CAPI / Standard in-person

This condition involved a traditional face-to-face CAPI survey, entirely interviewer-administered, and served as a control condition for the “privacy enhanced” in-person condition described next.

b. A-CASI / “Privacy enhanced” in-person

This condition involved a hybrid approach combining interviewer-administered CAPI for introductory and less sensitive sections, with a switch to A-CASI for the elder mistreatment items, and a switch back to CAPI for demographic items (see Appendix A for the switch protocol). All questions were displayed on the screen (one question per screen), and respondents could enter responses using either the mouse (clicking radio buttons) or the number keypad (pressing the key corresponding to the response).

c. CATI / Standard telephone

This involved a traditional CATI telephone survey, entirely interviewer-administered, and served as a control condition for the “privacy enhanced” telephone condition described next.

d. IVR / “Privacy enhanced” telephone

This condition involved a hybrid approach combining interviewer-administered CATI for introductory and less sensitive sections, with a switch to IVR for the elder mistreatment items, and a switch back to CATI to complete demographic items. The IVR questions were delivered by a recorded female voice, and the respondent used the key pad of a touchtone telephone for responses. (See Appendix A for more detail.)

No mention was made to respondents in the A-CASI or IVR conditions in the introduction or recruitment procedures that they would be using special technologies to answer certain questions. The technologies were introduced during the interview, prior to the elder mistreatment items. This was done to minimize possible effects of the technology on unit non-response rates across conditions.
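A minimal sketch of the pre-release randomization, assuming nothing more than a list of sampled numbers (the actual sample management software is not described at this level of detail):

```python
import random

MODES = ("CAPI", "A-CASI", "CATI", "IVR")

def assign_modes(sample_numbers, seed=123):
    """Randomly assign each sampled telephone number to one of the four
    experimental conditions before release to interviewers. Cycling
    through the mode list before shuffling keeps group sizes near-equal."""
    rng = random.Random(seed)
    modes = [MODES[i % len(MODES)] for i in range(len(sample_numbers))]
    rng.shuffle(modes)
    return dict(zip(sample_numbers, modes))
```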

2.3. General Procedures

Interviews were conducted by 16 female interviewers at the University Center for Social and Urban Research (UCSUR) at the University of Pittsburgh between May, 2007 and January, 2008. Interviewers conducted interviews in all four conditions to avoid confounding of interviewer with survey technology. The assignment of cases to interviewers was blocked by condition with the order of conditions randomized across interviewers. Supplemental Chi-Square analyses showed no differences by interviewer on any of the prevalence estimates reported in the paper. Across experimental conditions, up to 10 calls were made to each number on different days of the week at different times of the day to attempt a screening interview. Once an elderly household and respondent had been identified, as many calls as necessary were made to complete the interview (telephone conditions), schedule an in-home interview (in-person conditions), or obtain an outright refusal to participate. In-person interviews were primarily conducted in the respondent's home (83.5%), but others were conducted at the UCSUR offices (13.2%) or a neutral location (e.g., a restaurant; 3.3%).1 In most cases, in-person interviews were conducted in private.2 Participants in the telephone interview conditions were offered a $10 supermarket gift card as an incentive, while those in the in-person conditions were offered $20 gift cards. We felt that asking respondents to either allow us into their homes or to travel to our offices or a neutral location justified the larger incentive for the in-person conditions. The interviews took an average of 45 minutes to complete, although the IVR and A-CASI interviews took an average of about 10 minutes longer.

In an attempt to maximize comfort level and rapport, race of interviewer (White / African-American) was matched to race of respondent (collected at telephone screening) for the in-person survey conditions (CAPI and A-CASI), which were conducted primarily in respondents' homes. We were unable to match in the telephone conditions for practical reasons – we attempted to conduct the phone surveys at the time of recruitment. Supplemental Chi-Square analyses of matched versus unmatched cases in the telephone conditions revealed no differences in prevalence of psychological or physical mistreatment, but showed that the unmatched cases actually produced significantly higher prevalence of financial mistreatment, despite our assumption that matching the race of the interviewer and respondent helps create rapport. These results are hard to interpret, given that we did not ask telephone respondents the perceived race of the interviewer, but they nonetheless argue against race matching as a clear alternative explanation for or confounding factor in any treatment effects.

2.4. Sample Outcomes and Response Rates

For the in-person conditions, 20,448 telephone numbers were processed following elimination of business and non-working numbers by Genesys pre-screening. Among these numbers, 9,126 (44.6%) were determined to be non-working or non-households. Among the remaining 11,322 numbers, the following outcomes were achieved: 3,784 were screened not eligible (i.e., no one 60 or over in household); 448 completed interviews (224 CAPI; 224 A-CASI); 976 eligible refused to participate; 1,184 households refused to be screened; and we were unable to complete screening interviews at 4,930 numbers due to multiple no answers / answering devices / busy signals. Using the proportion of households found among phone numbers where household status was determined as the “e” multiplier for the 4,930 unknown household status numbers, we calculated an AAPOR #3 screening rate of 62.0% for the in-person conditions. The interview completion rate (448 / (448 + 976)) was 31.5%, for an overall AAPOR RR3 of 19.5% (.620 × .315) for the in-person conditions.3

For the telephone conditions, 14,714 telephone numbers were processed following Genesys pre-screening. Among these numbers, 6,665 (45.3%) were determined to be non-working or non-households. Among the remaining 8,049 numbers, the following outcomes were achieved: 2,590 were screened not eligible; 455 completed interviews (228 CATI; 227 IVR); 516 eligible refused to participate; 856 households refused to be screened; and we were unable to complete screening interviews at 3,632 numbers due to non-contact. The AAPOR #3 screening rate was 60.8%, and the interview completion rate (455 / (455 + 516)) was 46.9%, for an overall AAPOR RR3 of 28.5% for the telephone conditions.4
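The response rate arithmetic in the two preceding paragraphs can be reconstructed from the reported counts. The sketch below reflects our reading of those counts rather than AAPOR's full disposition-code definitions, so small rounding differences from the published figures remain:

```python
def aapor_rr3(screened_out, completes, eligible_refusals,
              screen_refusals, unknown, nonworking, processed):
    """Screening rate x completion rate, with AAPOR's 'e' multiplier
    (the share of status-resolved numbers that proved to be households)
    applied to the unknown-status numbers."""
    resolved = processed - unknown
    households = resolved - nonworking
    e = households / resolved
    screeners_done = screened_out + completes + eligible_refusals
    screen_rate = screeners_done / (screeners_done + screen_refusals + e * unknown)
    completion_rate = completes / (completes + eligible_refusals)
    return screen_rate, completion_rate, screen_rate * completion_rate

# In-person conditions (counts from the text):
print(aapor_rr3(3784, 448, 976, 1184, 4930, 9126, 20448))
# -> roughly (0.62, 0.31, 0.19), i.e., the reported 62.0%, 31.5%, 19.5%

# Telephone conditions:
print(aapor_rr3(2590, 455, 516, 856, 3632, 6665, 14714))
# -> roughly (0.61, 0.47, 0.28), i.e., the reported 60.8%, 46.9%, 28.5%
```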

In sum, the final sample consisted of 903 adults age 60 or older residing in Allegheny County (Pittsburgh) Pennsylvania, U.S.A., including 224 in the CAPI condition; 224 in the A-CASI condition; 228 in the CATI condition; and 227 in the IVR condition.

2.5. Survey Measures

Rather than asking whether or not the elder thinks he or she has been subject to “mistreatment or abuse” at the hands of a spouse or other family member, we asked directly, via simple yes or no items, whether a series of behaviors or events had occurred, and if so, who did it, how frequently, and how upsetting it was for them. (The follow-up contextual items are not discussed further here.5) Each behavior was assessed for its occurrence since the respondent turned 60, and if so, during the six months prior to the interview. Appendix B contains exact wording for the elder mistreatment items, along with the definition of a “case” for prevalence estimates.
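To illustrate the item structure (exact wording and caseness rules are in Appendix B, and all names below are hypothetical), one behaviorally specific item and a caseness check might be coded as follows:

```python
# One behaviorally specific yes/no item with its two reference periods
# (actual question wording elided; see Appendix B).
example_response = {
    "domain": "financial",
    "since_60": 1,        # 1 = yes, 0 = no
    "last_6_months": 0,   # asked only if since_60 == 1
}

def is_case(responses, domain, period="last_6_months"):
    """Count a respondent as a prevalence 'case' for a domain if any item
    in that domain is endorsed for the reference period. This is our
    reading of the definition; Appendix B gives the exact rules."""
    return any(r.get(period) == 1 for r in responses if r["domain"] == domain)
```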

2.6. Statistical Analysis

Our analyses focused on two major questions: 1) Feasibility: Can elders use newly developed survey technologies to report data on sensitive topics like elder mistreatment?; and 2) Validity of prevalence estimates: Do the new technologies increase reporting relative to traditional CAPI and CATI approaches? More traditional mode effects (In-person vs. telephone) were also examined.

Feasibility was addressed by having the interviewers code the general outcome of the attempt to have the older adult use the A-CASI or IVR technology (e.g., completed with no problems; refused to use, etc.). Within each technology, the overall distribution of feasibility outcomes was calculated, and Chi-square tests of whether the distributions varied by gender, race, and age were conducted. Logistic regressions (outcome: used A-CASI / IVR vs. not) were also conducted with age, sex, race, and other demographic variables entered simultaneously. For tests of effects on prevalence, respondents who did not complete A-CASI were shifted to the CAPI condition, and those who did not complete IVR to the CATI condition, to preserve statistical power. Sensitivity analyses comparing the reported results to those obtained when the non-completion cases were simply dropped produced very similar findings (see below).
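A sketch of the feasibility regression, assuming a respondent-level data frame with hypothetical column names (statsmodels stands in here for the SPSS procedures actually used):

```python
import numpy as np
import statsmodels.formula.api as smf

def feasibility_model(df):
    """Unweighted logistic regression of technology use (1 = completed the
    mistreatment items via A-CASI or IVR, 0 = did not) on demographics,
    with all predictors entered simultaneously."""
    return smf.logit(
        "used_technology ~ age + C(sex) + C(race) + C(education)"
        " + C(marital_status) + C(household_comp) + C(uses_email)",
        data=df,
    ).fit()

# result = feasibility_model(df)
# print(result.summary())        # coefficients on the log-odds scale
# print(np.exp(result.params))   # the same effects as odds ratios
```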

Even though we employed an unequal probability design (i.e., African Americans over-sampled), the feasibility analyses were conducted on un-weighted data, as they are meant to be primarily descriptive in nature. Feasibility analyses were conducted using SPSS version 15.0.

Effects of the experimental conditions on the reported prevalence of financial, psychological, and physical mistreatment were tested first with Chi-square statistics within the in-person (A-CASI vs. CAPI) and telephone (IVR vs. CATI) comparisons. The primary analyses consisted of odds ratios (ORs) of the prevalence in the privacy-enhanced condition versus the traditional interviewer condition within the in-person and telephone cells (i.e., A-CASI vs. CAPI; IVR vs. CATI), adjusting for age, sex, race, education, marital status, number of children, and household composition. A secondary interest was in general mode effects – in-person versus telephone modes – and these effects were also tested using Chi-square statistics and adjusted odds ratios.

Since these analyses focused on estimating prevalence in the population of adults 60 and older residing in Allegheny County (Pittsburgh) Pennsylvania, U.S.A., weights were applied. The weight contained two components: 1) an adjustment for the probability of selection of the phone number for the two sampling strata (i.e., a base design weight); and 2) a post-stratification adjustment based on six gender × age cells (60 – 64; 65 – 74; 75 and older) using the most recently available American Community Survey (ACS) estimates for the county. Relative weights ranged from 0.22 to 2.81 (sd = 0.75). All weighted prevalence analyses were performed using STATA version 10.0.
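A sketch of the two-component weight, again with hypothetical column names; the selection probabilities and ACS shares are assumed to be supplied externally, and the published analyses were run in STATA rather than Python:

```python
import numpy as np
import pandas as pd

def build_weights(df, selection_prob, acs_share):
    """selection_prob: dict mapping sampling stratum -> probability that a
    phone number in that stratum was sampled. acs_share: dict mapping
    gender x age cell -> population share from the American Community
    Survey. df needs 'stratum' and 'cell' columns."""
    # 1) Base design weight: inverse probability of selection by stratum.
    df["w_base"] = df["stratum"].map(lambda s: 1.0 / selection_prob[s])

    # 2) Post-stratification: scale each gender x age cell so its weighted
    #    share matches the ACS population share.
    cell_share = df.groupby("cell")["w_base"].sum() / df["w_base"].sum()
    df["w"] = df["w_base"] * df["cell"].map(lambda c: acs_share[c] / cell_share[c])

    # Relative weights with mean 1 (the text reports a range of 0.22-2.81).
    df["w_rel"] = df["w"] / df["w"].mean()
    return df

def weighted_prevalence(df, item):
    """Weighted prevalence of a 0/1 mistreatment indicator."""
    return np.average(df[item], weights=df["w_rel"])

# Adjusted odds ratios could then come from a weighted logistic regression,
# e.g. (sketch; survey-design-correct standard errors need more care):
# import statsmodels.api as sm, statsmodels.formula.api as smf
# fit = smf.glm("mistreat_6mo ~ C(condition) + age + C(sex) + C(race)",
#               data=df, family=sm.families.Binomial(),
#               freq_weights=df["w_rel"]).fit()
# print(np.exp(fit.params))  # odds ratios
```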

3. Results

3.1 Sample Characteristics and Equivalence Across Experimental Conditions

Table 1 shows demographic characteristics for the sample as a whole and within each originally assigned experimental condition. The sample was heavily weighted towards females, which was adjusted for with post-stratification weights (along with age) for the prevalence analyses. Approximately 23% of the sample was African American, and the majority was between 65 and 84 years old. The sample was fairly well educated, and showed significant variability in terms of marital status and household composition. In general, the samples from the four original experimental conditions were similar in terms of socio-demographic variables, except for mean age, which varied significantly across the four groups.

Table 1
Demographic characteristics for total sample and by original experimental condition. (Entries are percentages unless otherwise noted.)

3.2 A-CASI Feasibility Outcomes

Overall feasibility outcomes for A-CASI, based on un-weighted data, are summarized in Table 2. Among the 224 respondents initially assigned to this condition, about 83% used the computer to answer the elder mistreatment items, although only about 55% agreed to use the headphones to listen to the questions. Thus, about 28% completed what is traditionally termed CASI (computer-assisted self interviewing) – they read the questions silently and entered their own responses. Supplemental Chi-Square analyses comparing the prevalence of mistreatment reports between the CASI and “pure” A-CASI respondents (those using headphones) showed slightly higher levels of mistreatment among A-CASI respondents, but these differences were not significant, and CASI cases were included in the A-CASI group for prevalence comparisons. The key point is that these interviews were self-administered and not interviewer-administered. Thirty-five respondents (16%) in the A-CASI condition refused to use the computer for any responses and continued with a CAPI interview (and were included in the CAPI group for prevalence comparisons; sensitivity analysis reported below). Three respondents started with A-CASI, refused to finish, but answered the remaining questions via CAPI; and only one respondent refused to answer any elder mistreatment items (these four cases were dropped from the prevalence analyses). The majority of the problems or questions respondents had involved either general navigation – for example, not knowing to press “enter” to get to the next screen – or use of the mouse; 23 respondents (10.3%) opted for the number keys instead of the mouse. The majority of those refusing to use headphones simply said they “didn't need them,” although a few mentioned issues with hearing aids.

Table 2
A-CASI Feasibility Outcomes by Gender, Race and Age.

Demographic analyses revealed overall significant differences in A-CASI outcomes for race [χ2 (6) = 14.7; p = .023] and age [χ2 (18) = 31.7; p = .024]. While African American respondents were more likely to simply refuse to use A-CASI, White respondents were much more likely to refuse to use headphones. The age difference was primarily caused by a linear trend in which older respondents were more likely to refuse A-CASI altogether. In addition, the youngest respondents (age 60-64) were the most likely to refuse headphones. Finally, even though the gender effect was not significant, females tended to refuse headphones more often than males. To help clarify these findings, a logistic regression analysis was conducted with completing A-CASI (or CASI) as the outcome, and gender, race, age, education, marital status, and household composition as predictors. In addition, although we did not ask detailed questions about computer use, we did ask a single item – “Do you use e-mail?” – as part of a social network measure (37% said “yes,” 63% “no”). This was also included as a predictor in the logistic regressions. Results showed that e-mail users were in fact more likely to use A-CASI, and that age was the only other significant predictor in the model – a linear trend in which older respondents were less likely to use A-CASI or CASI. A similar logistic regression with use of headphones as the outcome revealed only a race effect – Whites were less likely to use the “audio” portion of A-CASI than African Americans, adjusting for all other factors. Use of e-mail was not predictive of whether or not the respondent used the “audio” portion of A-CASI.

3.3. IVR Feasibility Outcomes

Feasibility outcomes for the IVR condition, based on un-weighted data, are summarized in Table 3. Among the 227 respondents originally assigned to the IVR condition, about 83% completed the elder mistreatment items using the system, the same figure as for A-CASI (see above). The main reasons for non-completion of IVR were lack of a touchtone phone (n = 18; 8%) and difficulties using a cordless touchtone phone for response (n = 7; 3%). Note that only one respondent refused to use the IVR system, which contrasts with the A-CASI findings, where about 16% refused to use the system. These twenty-six respondents continued with a CATI interview (and were included in the CATI group for prevalence comparisons; sensitivity analysis reported below). Six respondents began IVR but switched back to the interviewer to complete the elder mistreatment items, either by choice (n = 2) or due to technical problems with the system (n = 4), while another five respondents quit IVR and refused to answer the remaining mistreatment items. Finally, only two respondents hung up or broke off during IVR, and both were re-contacted by the interviewer immediately after the break-off and completed the mistreatment items in CATI. Both reported technical difficulties with the system. These thirteen cases were dropped from the prevalence analyses.

Table 3
IVR Feasibility Outcomes by Gender, Race and Age.

Demographic analyses of the IVR outcomes revealed a significant effect for age [χ2 (24) = 42.5; p = .011]. Similar to the A-CASI findings, the IVR completion rate dropped steadily with increasing age, and this was primarily due to the oldest old tending to have touchtone phone issues. Although the race effect was not statistically significant overall, African Americans had a slightly higher IVR completion rate (88%) than Whites (78%). A logistic regression looking at IVR completion similar to that described for A-CASI revealed that, once again, age was the only significant predictor in the model – older respondents were less likely to use IVR than their younger counterparts. Use of e-mail was not predictive of whether or not the respondent used IVR.

3.4 Effects of A-CASI and IVR on Prevalence Estimates

Weighted prevalence estimates – since turning 60, and in the last 6 months – for financial, psychological, and physical mistreatment in the overall sample and within each of the four experimental conditions are presented in Table 4. Looking first at the financial domain, using our definition, 9.7% of adults age 60 and older in Allegheny County (Pittsburgh) Pennsylvania, U.S.A. reported experiencing some form of financial mistreatment since turning 60, and 3.5% reported experiencing this in the six months prior to the interview. Although the prevalence of financial mistreatment since turning 60 was highest in the A-CASI condition (15.6%), neither the Chi-Square tests nor the adjusted ORs comparing A-CASI to CAPI and IVR to CATI were statistically significant. When asked about financial mistreatment in the last 6 months, those in the A-CASI condition (7.3%) reported significantly (p < .05) more than those in the interviewer-administered CAPI condition (2.5%), and when socio-demographics were controlled, the adjusted OR (2.78; p = .078) was marginally significant. IVR did not increase reporting of financial mistreatment in the last 6 months relative to CATI, which in fact produced a higher prevalence rate (3.2% vs. 1.8%), though this difference was not significant.

Table 4
Financial, psychological, and physical mistreatment: Overall prevalence, prevalence by experimental condition, and adjusted odds ratios comparing privacy-enhancing methods with traditional interviewer methods. (Prevalence as percentage, standard errors ...

The overall prevalence of psychological mistreatment since age 60 was 14.3%, and the 6 month prevalence was 8.2%. The prevalence of psychological mistreatment since age 60 was highest in the A-CASI condition (22.5% vs. 13.6% for CAPI; p < .05), and the adjusted OR (1.79; p = .082) was marginally significant. The IVR condition also resulted in slightly higher prevalence of psychological mistreatment since 60 (14.0% vs. 9.1% for CATI; n.s.), although the adjusted OR (1.84; p = .093) was marginally significant. Results for psychological mistreatment in the last 6 months were clearer. In this case, once again, prevalence was highest in the A-CASI condition (16.4% vs. 5.0% for CAPI; p < .01), and the adjusted OR (4.13; p < .01) was statistically significant. The IVR condition also produced higher 6 month prevalence of psychological mistreatment (9.3% vs. 4.5% for CATI; n.s.), and the adjusted OR (2.83; p < .05) reached conventional levels of significance.

The estimated prevalence of physical mistreatment since turning 60 was 4.4%, and 1.8% in the last six months. The experimental conditions had less effect on reports of physical mistreatment. The only significant finding was that IVR respondents were more likely to report physical mistreatment in the last 6 months (3.5% vs. 0.1% for CATI; p < .01), but the results are highly unstable due to the fact that only one CATI respondent reported such an experience. One unexpected finding was that the interviewer-administered CAPI condition actually produced higher levels of reporting of physical mistreatment since turning 60 (6.1%) than the A-CASI condition (2.8%), although the difference was not statistically significant.

3.5 Sensitivity Analysis of the Effects of Dropping A-CASI and IVR Refusal Cases

Recall that cases that refused to use A-CASI (n = 35), or that did not complete IVR due to touchtone phone issues (n = 24) or refusal (n = 1), but who answered all mistreatment items administered by the interviewer, were switched to the CAPI or CATI conditions, respectively. We performed a sensitivity analysis to examine the effect on the prevalence results reported in Table 4 had we instead simply dropped these 60 cases from the analyses. The results, including ORs and statistical significance levels, were very similar. No substantive conclusions would have been altered, and we decided to retain the 60 “switch” cases to preserve statistical power.

3.6 Mode effects

Another way to analyze the data that reflects a secondary concern is to examine traditional mode effects – in-person versus telephone interviews. Table 5 reports weighted prevalence estimates separately for in-person and telephone conditions, and adjusted odds ratios within the traditional interviewer and privacy-enhanced conditions. In general, the in-person interviews generated more reports of elder mistreatment than the telephone interviews. More specifically, when looking at the combined conditions (columns 2 and 3), the in-person modes resulted in significantly (p < .05) higher prevalence rates for financial and psychological mistreatment since turning 60. When examining mode effects separately within the traditional (CAPI vs. CATI) and the privacy-enhanced technology (A-CASI vs. IVR) conditions via adjusted odds ratios, stronger significant effects were found when comparing A-CASI to IVR, including significant differences in financial mistreatment both since turning 60 (adjusted OR = 2.69; p < .05) and in the last six months (adjusted OR = 5.65; p < .01). The only traditional mode effect – CAPI showing higher prevalence than CATI for physical abuse in the last six months – should be interpreted with caution, as it is based on only a single CATI respondent reporting such mistreatment, as noted above.

Table 5
Financial, psychological, and physical mistreatment prevalence: Mode Effects (in-person versus telephone) overall, within privacy enhancing modes, and within traditional interviewer modes. (Prevalence as percentage, standard errors for prevalence / OR ...

4. Discussion

This study examined the feasibility and effects on prevalence estimates of using A-CASI and IVR to obtain self-reports of elder mistreatment from older adults. A population-based sample of approximately 900 adults age 60 and older completed CAPI, A-CASI, CATI, or IVR versions of questions about financial, psychological, and physical mistreatment since age 60 and in the six months prior to the interview.

In terms of feasibility, the results show that older adults can in fact use survey technology like A-CASI and IVR. In both conditions, approximately 83% of respondents used the technology. However, reasons for non-use differed for A-CASI and IVR. In the A-CASI condition, 16% simply refused to use the system, expressing discomfort and unfamiliarity with computers. This refusal was more prevalent as a function of age – the older the respondent, the more likely they were to refuse A-CASI. In addition, those who reported not using e-mail were also more likely to refuse to use A-CASI, as might be expected. Perhaps use of a touch-screen interface (versus mouse or keypad) would have reduced some of this reluctance – this should be tested in future work. Another interesting finding in the A-CASI condition was that about 28% also refused to use the headphones to listen to the questions being read – they instead simply answered the mistreatment items by reading them and keying in responses (i.e., a CASI interview). This phenomenon was more prevalent among White respondents, while e-mail use and age were not predictive. While the interviewers were trained to present the headphones in a matter-of-fact way and not as a “choice,” these respondents tended to indicate that they “didn't need the headphones – I can read,” or mentioned issues with hearing aids. It should be noted that the interviewers did not report any instances in which participants who refused to wear headphones also had literacy problems – the population for which the “audio” portion of A-CASI was originally intended. In fact, our results are in line with those of Couper, Tourangeau, and Marvin (2009), suggesting that the “audio” (i.e., headphone) portion of A-CASI may not even be necessary. The issue of feasibility and acceptability of use of headphones by older respondents requires further study, but we still believe the “audio” option should be available for older adults with literacy or vision problems.

Turning to IVR, the primary reason for non-completions was lack of a touchtone phone (7.9%) or issues with cordless phones (3.1%) – only one respondent refused to use IVR altogether. Thus, while some A-CASI respondents wouldn't use a computer or headphones, certain IVR respondents couldn't use the system due to lack of proper equipment, and this was again more of an issue for the oldest old. Use of e-mail did not predict use of the IVR system. In general, IVR is likely less intimidating and more familiar to elders given its ubiquity in customer service contexts. We realize that these touchtone phone issues could have been avoided by using voice recognition for responses (Bloom, 2008), but were concerned about reliability, increased costs, and reduced privacy with spoken responses. Future IVR studies with older adults should explore use of voice recognition software. Another key positive finding in the IVR condition was that only two respondents broke-off or hung up during the IVR section, and these were both completed upon an immediate callback and were due to technical difficulties and not respondent choice. Thus, the break-off problem in IVR surveys with younger samples (Cooley et al., 2000; Couper, Singer, and Tourangeau, 2004; Gribble et al., 2000; Mingay, 2001; Moskowitz, 2004; Tourangeau et al., 2002; Turner et al., 1996; Villarroel et al., 2006) appears to be much less of an issue with older adults.

The prevalence rates found in this study were higher than those reported in most previous population-based surveys (Comijs et al., 1998; Pillemer and Finkelhor, 1988; Podnieks, 1992). This was especially true for financial (9.7% since 60; 3.5% last 6 months) and psychological mistreatment (14.3% since 60; 8.2% last 6 months), although the 6 month rates were similar to the one year rates reported in the recent Laumann et al. (2008) U.S. study (3.5% financial, 9.0% psychological). However, it should be noted that precise definitions of what constitutes a “case” of financial or psychological mistreatment are not agreed upon in the elder mistreatment literature. Perhaps our definitions of “caseness” were liberal. However, our main focus was on relative prevalence rates across experimental conditions and not on absolute prevalence rates. There are ongoing efforts in the field to develop more valid measurement approaches to financial and psychological mistreatment (National Research Council, 2003). Once more standard and validated approaches to measurement do emerge, our results will need to be replicated.

In general our findings on the effects of the new survey technologies on prevalence rates of financial and psychological mistreatment are encouraging, particularly for A-CASI. Findings were less clear for physical mistreatment and for IVR technology. Specifically, A-CASI resulted in higher prevalence rates of six month financial mistreatment than CAPI (7.3% vs. 2.5%), as well as psychological mistreatment since 60 and in the last six months. The findings for A-CASI on psychological mistreatment in the last 6 months were particularly strong (16.4% vs. 5.0% for CAPI). The impact of IVR was less evident, with the only clear findings emerging for psychological mistreatment in the last six months (9.3% vs. 4.5% for CATI). These findings imply that traditional interviewer-based approaches to data collection underestimate the number of cases of elder mistreatment. The prevalence rates, and thus raw number of cases, were very low for the physical mistreatment measure, and because of this the finding of higher six month prevalence for IVR (vs. CATI) should be interpreted cautiously. In addition, the finding of higher (though non-significant) prevalence of physical mistreatment since turning 60 for CAPI (6.1%) than for A-CASI (2.8%) – the only such reversal in the in-person conditions – is difficult to explain. While the absolute number of cases reporting physical mistreatment was fairly low, and the results should be interpreted with caution, perhaps respondents were more willing to report physical abuse in the more distant past to a live interviewer as a way of “getting it off their chest” through a verbal admission. This is highly speculative and more work needs to be done on the differential effects of survey technology on reports of different types of elder mistreatment.

Finally, the results showed that in general, the in-person modes produced higher prevalence estimates than the telephone modes, and this was especially evident when comparing A-CASI and IVR. This finding is generally consistent with previous literature showing that in-person interviews are superior to telephone interviews when collecting data on sensitive topics due to increased ability to build trust and rapport (e.g., de Leeuw, 2008; de Leeuw and van der Zouwen, 1988).

A-CASI seems to be particularly effective in increasing reporting of financial and psychological mistreatment among older adults. The effectiveness of IVR technology is less clear in this context, although it did lead to increased reporting of six month psychological mistreatment. One way to interpret these findings is that while both A-CASI and IVR eliminate the need to report mistreatment to another person, thus reducing social desirability concerns, A-CASI can also reduce fear of disclosure to third parties who in fact may be perpetrators of the mistreatment. That is, in the home setting, A-CASI (or CASI for those not using headphones) eliminates the need for interviewers to read the abuse questions, and for the respondent to respond verbally as in a CAPI survey, which may be overheard by third parties. In the CATI condition, this was less of an issue, as a family member could potentially only hear the respondent's half of the conversation, and the key elder mistreatment questions required only “yes” or “no” responses. Thus, IVR may have provided less relative benefit in terms of reduced disclosure compared to CATI, than A-CASI did to CAPI. Another explanation may be simply that older adults feel more comfortable in general in more personal face-to-face settings, where rapport can be established – as the mode findings suggest – and that A-CASI brings together increased rapport with increased privacy. There may simply be more suspicion of telephone interviews among older adults (IVR included), given publicity about “phone scams” targeting the elderly. Another possibility is that respondents in the IVR condition might have been suspicious as to whether the interviewer really got off the line, or was actually still listening in, as there was no way to objectively verify this. One caveat about A-CASI is that there was more reluctance to use the computer and headphones than to use the IVR system.

4.1 Study limitations

This study has several limitations that should be acknowledged. First, the response rates were quite low (19.5% in-person conditions; 28.5% telephone conditions), although this is not atypical for current RDD surveys. However, there is a danger that those elders most likely to be mistreated may be least likely to respond, or may have gatekeepers, who may in fact be perpetrators, preventing interviewers from reaching them. Related to this issue, we realize that there are limitations to using an RDD phone-based screening sample for the in-person survey conditions, and that area probability sampling is a viable, and perhaps superior, alternative that would eliminate non-telephone household coverage biases and increase response rates. However, given our primary interest in a controlled, randomized experimental comparison of survey delivery modes, we decided to hold sample design (including the sample frame) constant across conditions. We were also limited by cost constraints, and decided against the more costly area probability sampling approach. We would advocate, based on these results, using A-CASI in elder mistreatment modules piggybacked onto existing national, area probability surveys of the elderly with high response rates. Our sample was limited to an urban setting in the United States, and the results will need replication in national samples, including rural areas, as well as in other countries. Another weakness was our lack of detailed computer use measures (we asked a single question about e-mail use), which might have helped shed light on the feasibility findings. While the study was framed as a “health and social relationships” study, and we did not want computer use measures to be asked before the elder mistreatment / technology section to avoid potential context or confounding effects, future work using survey technology to measure elder mistreatment should include more detailed assessments of experience with computers and other technology. Finally, while we assume that higher elder mistreatment prevalence rates are more valid, which is consistent with the sensitive topics survey literature, the study lacked “objective, gold standard” records for validation. Future work should attempt to employ reverse record check designs with confirmed adult protective services (APS) cases to test the accuracy of reports elicited in all four of the approaches studied here.

4.2 Conclusions

This study shows that new survey technologies like A-CASI and IVR can in fact be used by older adults. A-CASI seems particularly promising, if reluctance to use a computer or wear headphones can be overcome, as it resulted in higher reported prevalence of financial and psychological mistreatment. The findings for IVR are less clear. The Pew Research Center data reported earlier on internet use by age (42% among those 65 and older vs. 79% among 50 – 64 year olds; Pew Research Center, 2009) suggest that some of the A-CASI adoption issues may become less relevant as more technology-literate generations age (internet use among the 65+ group is also trending upwards over time). However, certain health-related factors such as vision, hearing, or cognitive deficits are less likely to disappear for subsequent cohorts, suggesting there may always be at least some difficulty in the use of technology for survey data collection among older populations. This suggests that new methods for assisting or “training” older adults in the use of technology like A-CASI, or new adaptations to the technology for the visually or hearing impaired, may be necessary. Similarly, while the IVR issue of not having a touchtone phone will certainly disappear over time for subsequent cohorts, problems in responding using a keypad on a cordless handset are more endemic to age-related factors like reduced manual dexterity. Assuming that voice recognition software continues to improve, allowing spoken responses to become the IVR standard, this may also become less problematic over time. In order to address these and other issues raised by the current study, our results should be replicated in national samples with higher response rates, using agreed upon, validated measures of elder mistreatment.


Acknowledgments

This research was supported by National Institute on Aging grant 5R21AG028-15-01. The authors would like to thank Alan Meisel of the University of Pittsburgh School of Law for his help and guidance with the design of this study.


Footnotes

1. Supplemental chi-square analyses comparing prevalence rates by location revealed no significant differences, either overall or within the CAPI and A-CASI conditions.
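For readers who want to reproduce this kind of supplemental comparison, here is a minimal sketch of a chi-square test of case counts by location. The counts and the two-by-two layout (location by case status) are illustrative assumptions, not the study's data:

    # Illustrative chi-square test of prevalence (case counts) by location.
    # All counts below are placeholders, NOT data from this study.
    from scipy.stats import chi2_contingency

    #          case  non-case
    table = [[12, 188],   # location A
             [15, 185]]   # location B

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")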

2. During 11 in-person interviews (2.5%), another person was in the same room as the interviewer and respondent, and during another 43 interviews (9.6%), another person was in a nearby room. Elder mistreatment prevalence rates for the 54 non-private interviews actually tended to be higher than for those conducted entirely in private, in both the CAPI and A-CASI conditions. This suggests that the inability to obtain privacy may itself be somewhat indicative of a mistreatment situation, but it did not appear to discourage reports of such mistreatment. This remains speculative, however, since we cannot verify that the third person was in fact the perpetrator of the mistreatment.

3. Separating the in-person sample into the low- and high-density African American strata yields the following rates. For the low-density stratum, the screening rate was 62.7% and the interview completion rate was 27.1%, for an overall AAPOR RR3 of 17.0%. For the high-density stratum, the screening rate was 60.9% and the interview completion rate was 38.7%, for an overall AAPOR RR3 of 23.6%.

4. Separating the telephone sample into the low- and high-density African American strata yields the following rates. For the low-density stratum, the screening rate was 61.8% and the interview completion rate was 44.0%, for an overall AAPOR RR3 of 27.2%. For the high-density stratum, the screening rate was 59.8% and the interview completion rate was 50.6%, for an overall AAPOR RR3 of 30.3%.
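As a worked check of the arithmetic in footnotes 3 and 4, the overall AAPOR RR3 for each stratum is the product of its screening rate and its interview completion rate; a minimal sketch:

    # Overall AAPOR RR3 = screening rate x interview completion rate,
    # using the rates reported in footnotes 3 and 4.
    rates = {
        "in-person, low-density":  (0.627, 0.271),
        "in-person, high-density": (0.609, 0.387),
        "telephone, low-density":  (0.618, 0.440),
        "telephone, high-density": (0.598, 0.506),
    }
    for stratum, (screen, complete) in rates.items():
        print(f"{stratum}: overall RR3 = {screen * complete:.1%}")
    # Prints 17.0%, 23.6%, 27.2%, and 30.3%, matching the reported figures.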

5. The survey also included measures of the following health-related covariates, which are not discussed further here: physical health; disability status; cognitive status (also used for screening); formal service utilization; depression; life satisfaction; social network size and density; perceived social support; and quality of marital relationship. The elder mistreatment items were administered after all of these measures, which were administered by interviewers in all four conditions. Standard demographic questions were asked by the interviewer after the A-CASI or IVR section.

APPENDIX A: A-CASI and IVR Instructions to Respondent / Switch Protocol


Prior to the A-CASI section on elder mistreatment, the interviewer turned the computer around to face the respondent, hooked up headphones, and said: “For these questions, we will use a more private system where a recorded voice will read the questions and you will answer by pressing the number keys on the computer or using the mouse to click on your answer. The questions will also be displayed on the computer screen, and you can follow along if you like, or you can just listen to the questions. If you need to have a question repeated, or want to back up to change your answer, press the “page up” key, or click on the “previous” button. During these questions, I will not be listening in, but if you have problems, or do not want to continue answering this way, just let me know, and I can just ask you the questions instead. After you've answered all the questions, the recording will tell you that this part of the interview is over. Please let me know when you've finished and I'll ask you the rest of the questions. Before you begin, I'll let the system ask you a few practice questions”. The respondent was then told to put on the headphones.


Prior to the IVR section on elder mistreatment, the interviewer informed the respondent that she/he would now switch them over to “a more private system where a recorded voice will read the questions and you will answer by pressing the numbered keys on your telephone. For example, you might be asked to press 1 for “yes” or 2 for “no”. Once you press a key, the system will repeat your answer and the next question will be asked automatically. If you need to have a question repeated or want to change your answer, just press the “star” key. During these questions, I will not be listening in, but if you have problems, or do not want to continue answering this way, just press “0” and you can re-connect to me, and I can just ask you the questions instead. After you've answered all the questions, the system will automatically reconnect you to me. Before you begin, I'll stay on the line and let the system ask you a few practice questions”.

APPENDIX B: Elder Mistreatment Survey Items and Definitions of “Caseness” for Prevalence

General introduction

We have been asking lots of questions about you and your family and friends and how you generally get along. The next series of questions deal with things that sometimes happen in families when people have disagreements, arguments, or are under stress. We are asking these questions to try and understand how older adults and their families cope with the problems of everyday life, and the conflicts that can sometimes happen as a result. Although the questions may seem to be a little personal, it is very important for the study that you answer honestly. Remember, your name will NOT be linked in any way with your answers and your answers will be held in the strictest confidence. Remember, you do not have to answer any question that you don't want to, but we are asking for your continued cooperation.

Financial mistreatment items

1a. Since you turned 60, have you signed any forms or documents that you didn't quite understand? (Note: All items are yes/no.)

1b. (If yes to 1a) In the last 6 months, have you signed any forms or documents that you didn't quite understand?

(All subsequent items have 6-month follow-ups for “yes” responses to the since-60 item. A “no” to the since-60 item is coded as “no” for the last 6 months; see the coding sketch following the financial case definition below.)

2a. Since you turned 60, has anyone asked you to sign anything without explaining what you were signing?

3a. Since you turned 60, has anyone taken your checks without permission?

4a. Since you turned 60, have you suspected that anyone was tampering with your savings or other assets?

Financial mistreatment “case” definition

“Yes” to any of the 4 items.
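A minimal coding sketch of the two rules above: the since-60 / 6-month skip logic from the parenthetical note following item 1b, and the any-yes financial case definition. The function and field names are our own illustrations, not the study's instrument:

    # Since-60 / 6-month coding rule and financial "caseness".
    # Names are illustrative, not taken from the study instrument.
    from typing import Optional

    def six_month_response(since_60: bool, last_6_months: Optional[bool]) -> bool:
        """A 'no' to the since-60 item is coded as 'no' for the last 6 months;
        the 6-month follow-up is asked only after a 'yes' to since-60."""
        if not since_60:
            return False
        return bool(last_6_months)

    def financial_case(responses: list) -> bool:
        """'Yes' to any of the 4 financial items defines a case."""
        return any(responses)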

Psychological mistreatment items

Note: All items are prefaced with “Since you turned 60 (In the last 6 months) has your spouse, son, daughter, other family member or anyone else that you trust…”

  • 1a. screamed or yelled at you?
  • 2a. insulted you, called you names, or swore at you?
  • 3a. said something to deliberately hurt you?
  • 4a. stomped out of a room after an argument?
  • 5a. destroyed something that belonged to you?
  • 6a. threatened to hit you or throw something at you?
  • 7a. threatened to send you to a nursing home?
  • 8a. threatened to abandon you or stop taking care of you?

Psychological mistreatment “case” definition

“Yes” to three or more items; OR “Yes” to 7a (threatened to send to a nursing home); OR “Yes” to 8a (threatened to abandon or stop taking care of you).
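Expressed as code, a sketch of this rule (the keys “1a” through “8a” are our shorthand for the eight items above):

    # Psychological caseness: three or more "yes" responses, OR "yes" to
    # item 7a (nursing home threat), OR "yes" to item 8a (abandonment threat).
    def psychological_case(items: dict) -> bool:
        yes_count = sum(1 for answered_yes in items.values() if answered_yes)
        return yes_count >= 3 or items.get("7a", False) or items.get("8a", False)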

Physical mistreatment items

Note: All items are prefaced with “Since you turned 60 (In the last 6 months) has your spouse, son, daughter, other family member or anyone else that you trust…”

  • 1a. hit or slapped you?
  • 2a. pushed or shoved you?
  • 3a. shook you?
  • 4a. kicked you?
  • 5a. handled you roughly in any other way?
  • 6a. thrown something at you?
  • 7a. twisted your arm or hair?
  • 8a. choked you?
  • 9a. slammed you against a wall?
  • 10a. beat you up?

Physical mistreatment “case” definition

“Yes” to any of the 10 items.
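Physical caseness follows the same any-yes rule as financial mistreatment. For completeness, a sketch that scores one hypothetical respondent on all three domains, reusing the functions sketched earlier (all values are illustrative only, not study data):

    # Physical caseness: "yes" to any of the 10 items.
    def physical_case(responses: list) -> bool:
        return any(responses)

    # One hypothetical respondent record (illustrative values only).
    respondent = {
        "financial": [False, False, True, False],               # item 3a endorsed
        "psychological": {f"{i}a": False for i in range(1, 9)},
        "physical": [False] * 10,
    }
    respondent["psychological"]["7a"] = True                    # nursing-home threat

    print(financial_case(respondent["financial"]))              # True
    print(psychological_case(respondent["psychological"]))      # True (7a alone suffices)
    print(physical_case(respondent["physical"]))                # False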


References

  • Aquilino WS. Interview Mode Effects in Surveys of Drug and Alcohol Use: A Field Experiment. Public Opinion Quarterly. 1994;58:210–240.
  • Aquilino WS, LoScuito LA. Effect of Interview Mode on Self-Reported Drug Use. Public Opinion Quarterly. 1990;54:362–395.
  • Baker RP, Bradburn NM, Johnson RA. Computer-Assisted Personal Interviewing: An Experimental Evaluation of Data Quality and Survey Costs. Journal of Official Statistics. 1995;11:415–434.
  • Bloom J. The Speech IVR as a Survey Interviewing Methodology. In: Conrad FG, Schober MF, editors. Envisioning the Survey Interview of the Future. Hoboken, NJ: Wiley; 2008. pp. 119–136.
  • Comijs HC, Pot AM, Smit JH, Bouter LM, Jonker C. Elder Abuse in the Community: Prevalence and Consequences. Journal of the American Geriatrics Society. 1998;46:885–888.
  • Cooley PC, Miller HG, Gribble JN, Turner CF. Automating Telephone Surveys: Using T-ACASI to Obtain Data on Sensitive Topics. Computers in Human Behavior. 2000;16:1–11.
  • Corkrey R, Parkinson L. Interactive Voice Response: Review of Studies 1989–2000. Behavior Research Methods, Instruments, & Computers. 2002a;34:342–353.
  • Corkrey R, Parkinson L. A Comparison of Four Computer-Based Telephone Interviewing Methods: Getting Answers to Sensitive Questions. Behavior Research Methods, Instruments, & Computers. 2002b;34:354–363.
  • Couper MP. Technology and the Survey Interview/Questionnaire. In: Conrad FG, Schober MF, editors. Envisioning the Survey Interview of the Future. Hoboken, NJ: Wiley; 2008. pp. 56–76.
  • Couper MP, Rowe B. Evaluation of a Computer-Assisted Self-Interview Component in a Computer-Assisted Personal Interview Survey. Public Opinion Quarterly. 1996;60:89–105.
  • Couper MP, Singer E, Tourangeau R. Does Voice Matter? An Interactive Voice Response (IVR) Experiment. Journal of Official Statistics. 2004;20:551–570.
  • Couper MP, Tourangeau R, Marvin T. Taking the Audio Out of Audio-CASI. Public Opinion Quarterly. 2009;73:281–303.
  • Currivan DP, Nyman AL, Turner CF, Biener L. Does Telephone Audio Computer-Assisted Self-Interviewing Improve the Accuracy of Prevalence Estimates of Youth Smoking? Evidence From the UMass Tobacco Study. Public Opinion Quarterly. 2004;68:542–564.
  • de Leeuw ED. Choosing the Method of Data Collection. In: de Leeuw ED, Hox JJ, Dillman DA, editors. International Handbook of Survey Methodology. New York: Lawrence Erlbaum Associates; 2008. pp. 113–135.
  • de Leeuw ED, van der Zouwen J. Data Quality in Telephone and Face to Face Surveys: A Comparative Meta-Analysis. In: Groves RM, Biemer PP, Lyberg LE, Massey JT, Nicholls WL, Waksberg J, editors. Telephone Survey Methodology. New York: Wiley; 1988. pp. 283–299.
  • Des Jarlais DC, Paone D, Milliken J, Turner CF, Miller H, Gribble J, Shi Q, Hagan H, Friedman SR. Audio-Computer Interviewing to Measure Risk Behaviour for HIV Among Injecting Drug Users: A Quasi-Randomised Trial. Lancet. 1999;353:1657–1661.
  • Epstein JF, Barker PR, Kroutil LA. Mode Effects in Self-Reported Mental Health Data. Public Opinion Quarterly. 2001;65:529–549.
  • Gaziano C. Comparative Analysis of Within-Household Respondent Selection Techniques. Public Opinion Quarterly. 2005;69:124–157.
  • Gribble JN, Miller HG, Cooley PC, Catania JA, Pollack L, Turner CF. The Impact of T-ACASI Interviewing on Reported Drug Use Among Men Who Have Sex With Men. Substance Use & Misuse. 2000;35:869–890.
  • Herzog AR, Wallace RB. Measures of Cognitive Functioning in the AHEAD Study. The Journals of Gerontology, Series B. 1997;52B(Special Issue):37–48.
  • Laumann EO, Leitsch SA, Waite LJ. Elder Mistreatment in the United States: Prevalence Estimates from a Nationally Representative Study. The Journals of Gerontology, Series B. 2008;63B:S248–S254.
  • Lessler JT, O'Reilly JM. Mode of Interview and Reporting of Sensitive Issues: Design and Implementation of Audio Computer-Assisted Self-Interviewing. In: Harrison L, Hughes A, editors. The Validity of Self-Reported Drug Use: Improving the Accuracy of Survey Estimates. Rockville, MD: National Institute on Drug Abuse; 1998. pp. 366–382.
  • Metzger DS, Koblin B, Turner C, Navaline H, Valenti F, Holte S, Gross M, Sheon A, Miller H, Cooley P, Seage GR III. Randomized Controlled Trial of Audio Computer-Assisted Self-Interviewing: Utility and Acceptability in Longitudinal Studies. HIVNET Vaccine Preparedness Study Protocol Team. American Journal of Epidemiology. 2000;152:99–106.
  • Mingay DJ. Is Telephone Audio Computer-Assisted Self-Interviewing (T-ACASI) a Method Whose Time Has Come? Proceedings of the Section on Survey Research Methods. 2001:1062–1067.
  • Moskowitz JM. Assessment of Cigarette Smoking and Smoking Susceptibility Among Youth: Telephone Computer-Assisted Self-Interviews Versus Computer-Assisted Telephone Interviews. Public Opinion Quarterly. 2004;68:565–587.
  • National Center for Health Statistics. Health, United States, 2005 with Chartbook on Trends in the Health of Americans. Hyattsville, MD: Government Printing Office; 2005.
  • National Research Council. Elder Mistreatment: Abuse, Neglect, and Exploitation in an Aging America. Panel to Review Risk and Prevalence of Elder Abuse and Neglect. In: Bonnie RJ, Wallace RB, editors. Committee on National Statistics and Committee on Law and Justice, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press; 2003.
  • O'Reilly JM, Hubbard ML, Lessler JT, Biemer PP, Turner CF. Audio and Video Computer Assisted Self-Interviewing: Preliminary Tests of New Technology for Data Collection. Journal of Official Statistics. 1994;10:197–214.
  • Perlis TE, Des Jarlais DC, Friedman SR, Arasteh K, Turner CF. Audio-Computerized Self-Interviewing Versus Face-to-Face Interviewing for Research Data Collection at Drug Abuse Treatment Programs. Addiction. 2004;99:885–896.
  • Pew Research Center. Demographics of Internet Users. Pew Internet & American Life Project; July 15, 2009. [accessed September 29, 2009].
  • Pillemer KA, Finkelhor D. The Prevalence of Elder Abuse: A Random Sample Survey. The Gerontologist. 1988;28:51–57.
  • Podnieks E. National Survey on Abuse of the Elderly in Canada. Journal of Elder Abuse and Neglect. 1992;4:5–58.
  • Straus MA. Measuring Intrafamily Conflict and Violence: The Conflict Tactics Scales. Journal of Marriage and the Family. 1979;41:75–88.
  • Tourangeau R, Rasinski K, Jobe JB, Smith TW, Pratt WF. Sources of Error in a Survey of Sexual Behavior. Journal of Official Statistics. 1997;13:341–365.
  • Tourangeau R, Rips LJ, Rasinski K. The Psychology of Survey Response. Cambridge, UK: Cambridge University Press; 2000.
  • Tourangeau R, Smith TW. Asking Sensitive Questions: The Impact of Data Collection Mode, Question Format, and Question Context. Public Opinion Quarterly. 1996;60:275–304.
  • Tourangeau R, Steiger DM, Wilson D. Self-Administered Questions by Telephone: Evaluating Interactive Voice Response. Public Opinion Quarterly. 2002;66:265–278.
  • Tourangeau R, Yan T. Sensitive Questions in Surveys. Psychological Bulletin. 2007;133:859–883.
  • Turner CF, Ku L, Rogers SM, Lindberg LD, Pleck JH, Sonenstein FL. Adolescent Sexual Behavior, Drug Use, and Violence: Increased Reporting with Computer Survey Technology. Science. 1998;280:867–873.
  • Turner CF, Miller HG, Smith TK, Cooley PC, Rogers SM. Telephone Audio Computer-Assisted Self-Interviewing (T-ACASI) and Survey Measurements of Sensitive Behaviors: Preliminary Results. Survey and Statistical Computing 1996. 1996:121–130.
  • Turner CF, Villarroel MA, Rogers SM, Eggleston E, Ganapathi L, Roman AM, Al-Tayyib A. Reducing Bias in Telephone Survey Estimates of the Prevalence of Drug Use: A Randomized Trial of Telephone Audio-CASI. Addiction. 2005;100:1432–1444.
  • Villarroel MA, Turner CF, Eggleston E, Al-Tayyib A, Rogers SM, Roman AM, Cooley PC, Gordek H. Same-Gender Sex in the United States: Impact of T-ACASI on Prevalence Estimates. Public Opinion Quarterly. 2006;70:166–196.
  • Villarroel MA, Turner CF, Rogers SM, Roman AM, Cooley PC, Steinberg AB, Eggleston E, Chromy JR. T-ACASI Reduces Bias in STD Measurements: The National STD and Behavior Measurement Experiment. Sexually Transmitted Diseases. 2008;35:499–506.
  • Weeks MF. Computer-Assisted Survey Information Collection: A Review of CASIC Methods and Their Implications for Survey Operations. Journal of Official Statistics. 1992;8:445–465.
  • Wright DL, Aquilino WS, Supple AJ. A Comparison of Computer-Assisted and Paper-and-Pencil Self-Administered Questionnaires in a Survey on Smoking, Alcohol, and Drug Use. Public Opinion Quarterly. 1998;62:331–353.