Overall, we found that the sociodemographic profile across our two experimental conditions (HAF vs. No HAF) did not differ. However, both groups of responders did differ from the population as a whole with respect to age and race. This finding is consistent with what we observed in our 2007 Medical Care report, where we compared respondents across conditions to Olmsted County population controls obtained via U.S. Census parameters. This finding offers tacit support for the common practice of comparing the characteristics of survey respondents to similar estimates from other sources, such as the U.S. Census, as a method of gauging the presence or absence of nonresponse bias. This practice is important to validate because, when researchers have very little individual-level information on respondents and nonrespondents (which is often the case12), it is the only method available for estimating nonresponse bias. However, external data sources such as the U.S. Census often do not include information that is more proximal to health survey subject matter, such as medical diagnoses and health care utilization patterns. Using the unique data afforded by the linked REP information, we found that a slightly sicker population, in terms of the number of comorbidities, was more likely to respond to the survey and sign a HAF than those completing a survey in the arm that did not receive a HAF.
Our finding of sociodemographic equivalence between the No HAF and HAF conditions runs counter to what has been observed in the few studies investigating the effects of the HIPAA authorization on response bias. For example, Krousel-Wood and colleagues26 found that written informed consent and HIPAA authorization resulted in lower participation among African Americans, females, and persons under 75 years of age in a cross-sectional survey of older patients with hypertension. Bolcic-Janovic and colleagues7 found that among individuals who had been hospitalized in the past calendar year, men and older adults were more likely to return an authorization form as part of a phone survey. In a qualitative study investigating the effect of a signed HIPAA authorization requirement on willingness to participate in a hypothetical clinical research study of antihypertensive medication, Dunlop and colleagues27 found that males, those 40 years or older, and those with a high school education or less were less likely to agree to study participation in the HIPAA authorization plus standard consent condition than those subjected to standard consent alone. Finally, in a telephone survey of patients with acute coronary syndrome in which informed consent forms were mailed in advance of requesting signed permission to call, Armstrong and colleagues28 found that patients who did not sign an authorization form were more likely to be younger, members of a minority racial or ethnic group, and unmarried than those willing to do so.
In the realm of clinically relevant differences between the No HAF and HAF conditions, our one significant finding, that of higher comorbidity among those willing to complete the survey, again runs counter to what has been seen in the literature. In the Armstrong et al.28 study mentioned above, those authorizing the subsequent telephone interview had significantly lower mortality rates at 6 months, with no differences between the two groups in myocardial infarction, stroke, or re-hospitalization. If one construes lower 6-month mortality rates as indicative of greater health at the time of the survey request, the findings by Armstrong and colleagues28 are at odds with our observation that those with poorer health at the time of the survey request were more likely to complete the survey with a signed HAF.
Why our findings differ from those observed in other studies is unclear, but it may be because the literature in this area has focused on more homogeneous patient populations of various sizes. As indicated earlier, the Krousel-Wood et al.26 study focused on a small (n = 177) sample of older patients with hypertension, and the Dunlop et al.27 study focused on a purposive sample of 384 African American patients from four metropolitan primary care clinics. The Armstrong et al.28 study, while larger than the other two (n = 1,221), still focused on patients with acute coronary syndrome. The Bolcic-Janovic study,7 while also larger (n = 5,859), focused only on people who had been hospitalized for medical or surgical treatment in the past year. It may be that findings from these smaller and/or specialized samples cannot be generalized to our larger (n = 6,716) sample of community residents.
The current findings also run counter to what we observed in our earlier report,1 where we found significantly higher proportions self-reporting good general health and non-smoking in the HAF condition than in the No HAF condition. At first blush, this suggests that those willing to complete the survey and sign the HAF are healthier than those unwilling to do so, a finding similar to that observed by Armstrong et al.28 However, our observation that those completing the survey and signing a HAF were less healthy (or, as shown in some of the sensitivity analyses, no different) in terms of the number of comorbidities found in the REP administrative and medical record data raises a question about the validity of self-reported data. The earlier finding might be due solely to suppression of self-reports of ill health in the HAF condition rather than to differential selection and, as such, be a case of measurement error rather than mere nonresponse error. Dunlop and colleagues27 hypothesize that viewing the HAF itself may have differentially biased self-reports of willingness to participate in their hypothetical clinical research study. This highlights the limitations of relying solely on self-reports of health as indicators of response bias, as past researchers have been prone to do, and underscores the importance of utilizing an external database that can characterize respondents and nonrespondents at the individual level, as we have done in the current investigation. However, it is acknowledged that administrative and medical record data may also be prone to measurement error arising from variability in data item definitions, data collection techniques, and cleaning processes over time.29,30 In addition, our findings relating to comorbidities varied greatly depending on how we treated the Charlson scores in our sensitivity analyses. Our selection of the Charlson score as an indicator of health, and the manner in which we treated it analytically, may therefore be driving our main results. However, the Charlson measure has been found to be an effective method of estimating future morbidity and mortality in longitudinal studies,22 underscoring its utility as a measure of current health. Nonetheless, further work on this topic is warranted.
In conclusion, the results of the present study demonstrate that the 15 percentage point reduction in observed response rates brought about by the introduction of the HAF to the survey request did not portend systematic bias in the sample. This accords with emerging evidence suggesting only a weak relationship between a survey's response rate and its response bias.12,13 Furthermore, our findings do not imply that the inclusion of the HAF has a benign effect on health studies simply because of our observed lack of nonresponse bias.

First, we saw a response rate of 39.8% in the HAF condition and 55.0% in the non-HAF condition in our original study. Given our original sample of 6,939, had no one been sent the HAF we would estimate responses from a total of 3,816; had all been sent the HAF, 2,762. The inclusion of the HAF would therefore decrease our analytical sample by 1,054 individuals. This loss of sample is associated with real decreases in the relative precision of our estimates. For our survey estimate reported in the original study that approximately 10% of the population smoke, our margin of error would decrease from 1.1% to 0.9% with the larger sample. Similarly, for our reported estimate of BMI, we would be able to estimate the mean to within 0.190 units as compared to 0.225 units.

Second, the additional study costs incurred as a result of increased printing and postage for the two HAFs in each mailing increase the overall cost per completion. While relatively minor on a per-packet basis, these costs could be quite substantial in large-scale studies such as ours, and quite burdensome for those facing strict financial constraints on even smaller studies. The larger cost, however, is the loss of information from costly telephone interviews that cannot be used because of the absence of a signed HAF.

While the finding of a lack of nonresponse bias with the inclusion of a HAF is good news for those required to include this form, its impacts are far from benign. There is a real loss in statistical power, which can translate into more expensive survey protocols to achieve the same level of confidence in one's findings. Further work is needed to determine the best way to mitigate the loss of power associated with the HAF.
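The precision arithmetic above can be sketched in a few lines of Python. This is a simplified reconstruction, not the study's exact variance estimation: it assumes a simple random sample, a normal-approximation confidence interval, and a 95% confidence level (z = 1.96).

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of a 95% normal-approximation CI for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

sample = 6939                       # original mailed sample
n_no_haf = round(sample * 0.550)    # expected completes without HAF -> 3,816
n_haf = round(sample * 0.398)       # expected completes with HAF -> 2,762

p_smoke = 0.10                      # reported smoking prevalence of ~10%
moe_haf = margin_of_error(p_smoke, n_haf)
moe_no_haf = margin_of_error(p_smoke, n_no_haf)

print(f"completes: {n_no_haf} vs {n_haf} (loss of {n_no_haf - n_haf})")
print(f"margin of error: {moe_haf:.2%} (HAF) vs {moe_no_haf:.2%} (no HAF)")
```

Under these assumptions the margin of error for the smoking estimate is roughly 1.1% with the HAF-condition sample and roughly 0.95% with the larger no-HAF sample, matching the figures reported above.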