Am J Prev Med. Author manuscript; available in PMC 2013 April 1.
PMCID: PMC3549634; NIHMSID: NIHMS358017

Nonparticipation in a Population-Based Trial to Increase Colorectal Cancer Screening

Abstract

Background

Many trials have tested different strategies to increase colorectal cancer (CRC) screening. Few describe whether participants are representative of the population from which they are recruited.

Purpose

To determine risk factors related to nonparticipation among patients enrolled in an integrated health plan and not up to date for CRC testing, in a trial to increase screening rates.

Methods

Between July 2008 and October 2009, a total of 15,000 adults aged 50–74 years from 21 clinics in Washington State who were due for CRC screening were contacted. Nonparticipants were defined as English-speaking patients who did not engage in the call or refused participation while still potentially eligible. Log-binomial regression models were used to estimate the relative risk of nonparticipation. Analyses were completed between October 2010 and June 2011.

Results

Patients who were nonwhite, had less education, used tobacco, had less continuity of care, and had lower rates of preventive care and cancer screening were more likely to be nonparticipants. Patients reporting never having received any type of CRC testing or screening were also more likely not to participate (62% of nonparticipants vs 46% of participants; adjusted RR=1.58, 95% CI=1.47, 1.70). Reasons for refusal included costs, risks of procedures, and not wanting their medical records reviewed.

Conclusions

Patients eligible for but not participating in the trial were more likely to be from minority socioeconomic and racial groups and had behaviors that can negatively affect cancer outcomes. Additional efforts are needed to recruit patients who need CRC screening the most.

Introduction

Evidence is unequivocal that screening decreases CRC mortality and morbidity.1,2 Although CRC screening rates have increased overall,3,4 rates remain lower in certain subgroups including adults aged 50–59 years, racial and ethnic minority groups, those with lower income and education levels, and those without health insurance.5 In addition to having lower screening rates, minority socioeconomic, racial, and ethnic populations are often under-represented in cancer trials.6,7 Further, most trials fail to report the representativeness of their study populations,8 and even fewer are able to provide information on the entire population from which study subjects were recruited. To the degree that specific populations are less represented in health services trials and the extent to which the effectiveness of interventions differs across subgroups, findings from RCTs may not always be generalizable to “real-world” settings.9

To better understand how RCT results may translate into daily practice, it is important to understand who participates and how participants differ from nonparticipants. It was hypothesized that the characteristics of patients who did not participate in a trial to increase CRC screening would be similar to those of patients who are overdue for CRC screening in general (i.e., unwilling patients would be younger, from racial and ethnic minority groups, have attained lower levels of education, and have less-positive attitudes about CRC screening than those who participated).

Methods

Study Setting

The Systems of Support to Improve Colorectal Cancer Screening (SOS) study is an RCT designed to improve CRC screening rates. SOS is being conducted at 21 Group Health–owned primary care medical centers in the Puget Sound region. Group Health is a large nonprofit integrated healthcare delivery system that provides both medical coverage and care to 680,000 members in Washington State. The demographic mix of Group Health enrollees is similar to the surrounding area, except that they are somewhat older (46% are aged ≥45 years vs 38% in the regional community); more likely to be employed; and more educated.10 Almost all members aged ≥65 years are Medicare enrollees, whereas most aged <65 years are in commercial plans. At the time of study enrollment, Group Health did not collect data on race or ethnicity. However, to maximize minority enrollment, clinics with known higher proportions of minority groups were oversampled.

Recruitment

The SOS participants were aged 50–73 years and due for CRC screening. They were randomized to either usual care or one of three interventions with increasing stepwise support to complete CRC screening. The Group Health IRB approved all study procedures. A complete description of the study design is reported elsewhere.10

Recruitment to the SOS study began in July 2008 and was completed in November 2009, with analyses completed between October 2010 and June 2011. Patients were eligible for the SOS study if they were continuously enrolled in Group Health for at least 2 years and, based on EMR and administrative data, were due for CRC screening. Being due for CRC screening was defined as no evidence of a fecal occult blood test (FOBT) in the past 9 months, no flexible sigmoidoscopy in the last 4 years, and no colonoscopy in the past 9 years.
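For illustration, the “due for screening” definition above can be expressed as a simple date-window check. The following is a minimal sketch with hypothetical field names and dates, not the study’s actual programming:

```python
from datetime import date
from dateutil.relativedelta import relativedelta

def due_for_crc_screening(ref_date, last_fobt=None, last_sigmoidoscopy=None,
                          last_colonoscopy=None):
    """Return True if no qualifying CRC test falls within its lookback window:
    FOBT within 9 months, flexible sigmoidoscopy within 4 years, and
    colonoscopy within 9 years (per the eligibility definition above)."""
    def within(last_test, window):
        return last_test is not None and last_test >= ref_date - window

    return not (within(last_fobt, relativedelta(months=9))
                or within(last_sigmoidoscopy, relativedelta(years=4))
                or within(last_colonoscopy, relativedelta(years=9)))

# Example: a patient whose only test is a colonoscopy 10 years earlier is due.
print(due_for_crc_screening(date(2008, 7, 1),
                            last_colonoscopy=date(1998, 6, 15)))  # True
```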

The recruitment sample pool was refreshed weekly to include or exclude potential participants on the basis of their most recent CRC test. Automated data were used to exclude those with a prior history of CRC, inflammatory bowel disease, or a history of advanced colorectal polyps. Patients with a recent myocardial infarction; those undergoing active treatment for cancer with chemotherapy; with end-stage organ failure diseases (e.g., dementia, renal failure); or with nursing home residence or hospice status were also excluded. Patients also needed to be conversant in English, as telephone interventions were conducted in English.

Recruitment flow is depicted in Appendix A (available online at www.ajpmonline.org). From the 28,508 Group Health enrollees potentially eligible, 15,451 (54.2%) were sampled in order to have adequate power for study aims. Potentially eligible adults were sent a recruitment letter and a pamphlet that was designed and tested for readability11 and cultural appropriateness for different age and gender groups; a $2 bill was included as an incentive. Research interviewers then called to confirm eligibility and administer a baseline survey. Nonparticipants were defined as patients who did not engage in the call (hung up, were repeatedly unavailable, or another member of the household stated the person was not interested) or refused participation while still eligible.

Interviewer training was held in July 2008 for 22 interviewers, who were representative of the gender and ethnicity characteristics of the population to be interviewed. Interviewers were trained on general interviewing techniques, conducting computer-assisted telephone interviews, and project-specific procedures, including question-by-question specifications. Data collection was conducted using computer-assisted telephone interviews. Contacts were initiated on various days and at various times, including weekdays, evenings, and weekends. Completed interviews, refusals, and ineligible records were reviewed by trained quality control team members, and written feedback was provided to the interviewers.

Subjects contacted by telephone could only refuse to participate while they remained eligible. If the subject remained potentially eligible but was unwilling to complete the interview, they were asked to answer a shortened version of the survey, which included questions on race, ethnicity, education level, employment, marital status, general health status, whether the respondent had ever been tested or screened for CRC, and the importance of CRC testing compared to other health issues. Those who remained eligible after completing the entire survey received a more detailed study description and were invited to provide verbal consent.

Invited individuals not current for CRC screening according to automated data were categorized into the following four mutually exclusive groups: (1) unwilling to participate while still potentially eligible and answered no short survey questions; (2) unwilling to participate while still potentially eligible and answered the short survey questions; (3) unwilling to participate after being confirmed eligible and completing the entire baseline (long) survey; and (4) participants.

Different measures of participation rates were calculated among patients mailed invitations to participate, depending on the point in the recruitment process at which potential subjects became unwilling to go further with the interview while still potentially eligible (Table 1). The highest participation rate (87.3%) arose when only those who were confirmed eligible were included in the denominator (Method E). The participation rate was 41.3% (Method C) when those who did not engage in the recruitment process or refused while they were potentially eligible were included in the denominator.

Table 1
Various methods for estimating percentage of participationa
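To make the denominator distinction concrete, a small sketch follows; the counts are invented for illustration only (they are not the SOS study’s figures), and the two rates correspond to the Method E and Method C definitions described above.

```python
def participation_rate(participants, denominator):
    """Percentage participating under a chosen denominator definition."""
    return 100.0 * participants / denominator

# Hypothetical counts for illustration only (not the SOS study's figures).
participants = 4_700
confirmed_eligible = participants + 700            # plus refusals after the full survey
potentially_eligible = confirmed_eligible + 6_000  # plus those never engaged or refusing earlier

# Method E-style rate: denominator limited to confirmed-eligible patients.
print(round(participation_rate(participants, confirmed_eligible), 1))    # ~87
# Method C-style rate: denominator includes everyone still potentially eligible.
print(round(participation_rate(participants, potentially_eligible), 1))  # ~41
```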

Measures

Group Health automated data were used to obtain information on gender, age, insurance plan type (commercial, Medicare, Medicaid, or Basic Health [state-subsidized insurance for low-income families]), tobacco history, history of preventive care visits, and use of other cancer screening tests. Using the sample date as the reference date, automated data were used to define whether the patient had a preventive visit within the prior 2 years; for women, a mammogram within the prior 2 years or a Pap test within the prior 3 years; and for men, a prostate-specific antigen (PSA) test within the prior 2 years. The Johns Hopkins Adjusted Clinical Group (ACG) case-mix system was used to measure expected clinical need or comorbidity level based on age, gender, and the number and types of ICD-9 diagnostic codes over a 12-month period.12,13

Patients were categorized individually into three groups defined by their morbidity scores as having low-, moderate-, or high-resource clinical needs. The Usual Provider Continuity (UPC) index was calculated as the proportion of primary care visits to a subject’s most frequently visited physician.14 When subjects chose more than one category for race, coding precedence was given to Hispanic, non-Hispanic black, Asian, other, and non-Hispanic white categories, in that order. For those not answering the short survey, census data were used to impute race and education using the Bayesian Surname and Geocoding (BSG) methodology developed by Elliott et al.15 (Appendix B, available online at www.ajpmonline.org). Subjects completing the entire eligibility survey also answered questions related to their stage of change for CRC screening.16,17
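The UPC index has a straightforward definition; a minimal sketch (with hypothetical visit data, not study data) is shown below.

```python
from collections import Counter

def upc_index(visit_provider_ids):
    """Usual Provider Continuity: share of primary care visits made to the
    provider seen most often."""
    if not visit_provider_ids:
        return 0.0
    most_seen_visits = Counter(visit_provider_ids).most_common(1)[0][1]
    return most_seen_visits / len(visit_provider_ids)

# Example: 4 of 6 primary care visits to provider "A" gives UPC = 0.67.
print(round(upc_index(["A", "A", "B", "A", "C", "A"]), 2))
```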

Statistical Analysis

Individuals provided various amounts of data depending on when refusal occurred. The first analysis compared patients who, according to automated data, were up to date for CRC testing with those who were due, using chi-square tests of independence. Frequency distributions were calculated for all groups based on automated data and, for Groups 2, 3, and 4, on additional self-reported data. Characteristics relating to participation that were drawn from automated data were based on aggregate frequency data and were unadjusted, as refusers had not provided consent to use their automated data at an individual level.

Distributions of each characteristic across groups were compared using chi-square tests of independence. The probability of being a nonparticipant was modeled adjusting for age, gender, and education. Regression models were log-binomial, yielding estimates interpretable on the relative risk scale and maintaining an appropriate binomial variance assumption for the refusal outcome.18 All analyses were performed in SAS, version 9.2.
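The study’s analyses were run in SAS; as a rough illustration only (with simulated data and hypothetical variable names, not the study’s code or dataset), a log-binomial model can be fit as a generalized linear model with a binomial family and log link, so that exponentiated coefficients are read as relative risks.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated data in which the risk of nonparticipation follows a log-linear model,
# so the fitted exp(coefficients) recover relative risks (roughly e^0.4 and e^-0.2).
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "never_tested": rng.integers(0, 2, n),  # self-reported no prior CRC testing
    "college": rng.integers(0, 2, n),       # any college education
})
risk = np.exp(-1.2 + 0.4 * df["never_tested"] - 0.2 * df["college"])
df["nonparticipant"] = rng.binomial(1, risk)

# Log-binomial GLM: binomial family with a log link; exp(beta) is a relative risk.
# (Such models can fail to converge when fitted risks approach 1.)
fit = smf.glm(
    "nonparticipant ~ never_tested + college",
    data=df,
    family=sm.families.Binomial(link=sm.families.links.Log()),
).fit()

print(np.exp(fit.params))      # relative risks
print(np.exp(fit.conf_int()))  # 95% CIs on the relative-risk scale
```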

Results

Factors Associated with Being Due for Colorectal Cancer Screening

Based on automated data and after eliminating those with exclusionary health conditions, 53.9% (28,508/52,898) of adults aged 50–73 years were due for CRC testing and eligible for study recruitment (Appendix A, available online at www.ajpmonline.org). Patients who were younger or nonwhite; had lower education levels, lower expected clinical need, or less continuity with one physician; had no preventive visit in the prior 2 years; were current or past smokers; or were not current for other cancer screenings were more likely to be due for CRC testing (Appendix C, available online at www.ajpmonline.org).

Factors Associated with Nonparticipation

Patients with Medicaid or Basic Health, without a primary or well care visit in the prior 2 years, who did not participate in other types of cancer screening, or who were current smokers (Table 2) were more likely to be unwilling to participate. Patients reporting Asian race, less education, or poorer general health were more likely to be nonparticipants (Table 3). More than half of the eligible participants reported having never been tested or screened for CRC. Those who self-reported never receiving CRC tests (62% of nonparticipants and 46% of participants) were more likely to be nonparticipants (adjusted risk ratio [RR]=1.58, 95% CI=1.47, 1.70) compared to those who had ever been tested. Patients reporting that CRC screening was less important than other health concerns were also more likely to be nonparticipants.

Table 2
Characteristics measured through automated data sources,a of nonparticipants and participants, n (%) unless otherwise noted
Table 3
Characteristics of participants and nonparticipants,a n (%) unless otherwise noted

Among those completing the baseline survey, patients reporting that they were not thinking about getting tested for CRC were almost four times more likely to be nonparticipants (adjusted RR=3.96, 95% CI=3.25, 4.82) compared to those planning to be tested in the next year. Reporting a first-degree relative or other family or friends having been diagnosed with CRC was not associated with participation (data not shown).

Fewer than 3% of patients eligible according to automated data (422/15,451) refused participation by hanging up. More than two thirds of those refusing participation and answering the short survey provided at least one reason why they did not want to participate, with “not interested” (59%) and “too busy” (22%) as the most common reasons (data not shown). More than 200 people provided more detailed reasons for not wanting to participate.

Frequent themes were concerns about the costs of CRC screening not being covered by their insurance (e.g., “my insurance has a $1500 deductible. It doesn’t pay for tests. I can’t afford it”; the study provided FOBT kits, but other costs depended on the patient’s insurance plan); concerns about the risks or unpleasantness of CRC screening (e.g., “my husband had an awful experience doing the prep work for the exam, I don’t want to do that”); wanting to discuss it with their doctor; or beliefs that they would not benefit from testing (e.g., “healthy” or “taking supplements”). Of those who completed the entire survey, were eligible, and were initially interested in participation, 683 refused during the verbal consent process, with the most common reason being unwillingness to have their EMR data reviewed.

Discussion

Potentially eligible patients who reported nonwhite race or lower levels of education, had less continuity with one physician, had fewer preventive visits, made less use of recommended screening tests, or were current or past smokers were more likely not to participate in the trial. The majority of English-speaking subjects who were contacted had never received any type of CRC testing, with nonparticipants being more than 1.5 times more likely to report this.

The most common reason for nonparticipation was passive: not engaging in the telephone interview. Additionally, 10% of eligible participants refused when they were asked to provide verbal consent, which included permission to review their medical data and Health Insurance Portability and Accountability Act (HIPAA) language (a requirement that has been associated with refusal in other studies19). Thus, more people might have participated if the CRC screening interventions had been offered as part of a quality improvement initiative rather than research.

Systematic reviews provide evidence that factors related to study enrollment are complex and understudied.7 Wendler and colleagues6 reviewed more than 1600 clinical intervention trials and found that only 17 trials reported consent rates by racial and ethnic group, with minority racial and ethnic groups being as likely to enroll as whites. However, there was significant heterogeneity among studies, suggesting that willingness to participate may be mediated by a variety of factors. Interventions most successful at enrolling minority groups20 have used targeted strategies such as recruiting from community centers or faith-based organizations or have limited enrollment to specific groups; these approaches might lead to effective strategies for specific subgroups but are not necessarily inclusive of diverse study populations.

Study designs not requiring individual consent could potentially increase the representativeness of the study population. Methods such as cluster RCTs and interrupted time-series make it possible in some cases to avoid individual consent and assess the overall effectiveness of interventions on defined populations.21,22 However, these designs have practical and scientific limitations, particularly in comparative effectiveness studies such as the current one with four comparison groups. Additionally, in cluster RCTs and time-series studies, it may be more difficult to collect data on and adjust for population differences,23 resulting in potential over- and under-estimates of effect size.24

Interrupted time-series studies attempt to account for temporal trends, but as CRC screening rates have been steadily increasing,4 it might be difficult to distinguish temporal trends from intervention effects. Comparing CRC screening outcomes in participants to those eligible but not invited to participate could potentially be used to improve assessment of external validity.25,26 However, because those not invited were not surveyed, eligibility based on self-reported CRC tests would be unknown.

Glasgow and others, as part of the Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) framework,27 have called for increased reporting of participation rates and the representativeness of study populations.28,29 Reach, defined as the percentage and representativeness of those invited who agree to participate in a trial or program, can be used to quantify the potential impact of a study in other settings. Participation rates vary using different definitions of eligibility (Table 1). Similar to Glasgow, the authors favor defining participation rates as including those who are potentially, but not confirmed, eligible.28 Thus, the population-based impact of the SOS intervention could be estimated as reach (acceptance rate of 41%) times effectiveness, and if there are differential effects of the intervention across subgroups, a correction coefficient for under-represented populations could be applied.30,31 However, such calculations are estimates only, because some nonparticipants would be found to be ineligible if they had been interviewed (which we conservatively did not correct for) and others who were eligible might complete CRC screening offered as part of a quality initiative.
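As a purely illustrative arithmetic sketch of the impact calculation described above (the effectiveness figure below is invented, not a SOS result), reach is multiplied by effectiveness:

```python
# RE-AIM-style impact estimate: reach x effectiveness.
reach = 0.41                   # Method C acceptance rate reported above
effect_per_participant = 0.15  # hypothetical absolute increase in screening uptake
population_impact = reach * effect_per_participant
print(f"{population_impact:.3f}")  # ~0.06, i.e., about 6 extra screenings per 100 invited
```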

This study has several limitations. It cannot be assumed that all subjects up-to-date for CRC testing received those tests for screening, and patients receiving diagnostic tests might differ from those screened. There are also limited data on nonparticipants without self-reported data; however, imputed race and education (Appendix B, available online at www.ajpmonline.org) revealed trends similar to self-reports. Automated data are presented in aggregate only, as nonparticipants did not consent to have their EMR data reviewed, and adjusted relative risks could only be estimated for self-reported data.

Additionally, patients were recruited from an insured population in the Pacific Northwest; individuals without insurance or who receive care in other healthcare settings might respond differently to being invited to participate in a CRC screening study. Further, the ability to speak English was an eligibility requirement. Because Group Health did not collect data on race at the time of the study, it was not feasible to tailor materials or interviews to specific ethnic subgroups. Inclusion of non-English speakers and provision of culturally tailored interviews might have increased minority participation.

This study, however, has notable strengths, including the ability to characterize the entire age-eligible population from which the patients were recruited. Lamerato et al.,32 in a report of recruitment to the Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial, described demographic and socioeconomic characteristics of the recruitment cohort within a defined population, the Henry Ford Health System, and found that minority populations were less likely to participate; however, other PLCO sites that used recruitment strategies such as advertisements or referral were not able to provide similar data. The present study adds additional information on the healthcare and lifestyle behaviors of the recruitment pool and their relationship to participation in a trial to increase CRC screening rates.

Conclusion

Patients eligible for but not participating in a CRC screening trial were more likely to be from minority groups and have behaviors that negatively affect cancer outcomes. Results of this analysis will be used to quantitatively assess the reach and representativeness of the SOS trial and to propose study designs and strategies to reduce disparities.


Acknowledgments

This research was funded by grant R01CA121125 from the National Cancer Institute of the NIH.

We thank Annie Shaffer for her assistance in manuscript preparation, editing, and administrative support.

Footnotes

No financial disclosures were reported by the authors of this paper.

Appendix

Supplementary data

Supplementary data associated with this article can be found, in the online version, at doi:10.1016/j.amepre.2011.11.014.

References

1. Hewitson P, Glasziou P, Irwig L, Towler B, Watson E. Screening for colorectal cancer using the faecal occult blood test, Hemoccult. Cochrane Database Syst Rev. 2007;(1):CD001216. [PubMed]
2. U.S. Preventive Services Task Force. Screening for colorectal cancer: an updated systematic review. Rockville MD: Agency for Healthcare Research and Quality; 2008.
3. CDC. Vital signs: breast cancer screening among women aged 50–74 years—U.S., 2008. MMWR Morb Mortal Wkly Rep. 2010;59(26):813–816. [PubMed]
4. CDC. Vital signs: colorectal cancer screening, incidence, and mortality—U.S., 2002–2010. MMWR Morb Mortal Wkly Rep. 2011;60(26):884–889. [PubMed]
5. Holden DJ, Harris R, Porterfield DS, et al. Enhancing use and quality of colorectal cancer screening. Rockville MD: Agency for Healthcare Research and Quality; 2010. Report No.: 10-E002.
6. Wendler D, Kington R, Madans J, et al. Are racial and ethnic minorities less willing to participate in health research? PLoS Med. 2006;3(2):e19. [PMC free article] [PubMed]
7. Ford JG, Howerton MW, Lai GY, et al. Barriers to recruiting underrepresented populations to cancer clinical trials: a systematic review. Cancer. 2008;112(2):228–242. [PubMed]
8. Rabin BA, Glasgow RE, Kerner JF, Klump MP, Brownson RC. Dissemination and implementation research on community-based cancer prevention: a systematic review. Am J Prev Med. 2010;38(4):443–456. [PubMed]
9. Weiss NS, Koepsell TD, Psaty BM. Generalizability of the results of randomized trials. Arch Intern Med. 2008;168(2):133–135. [PubMed]
10. Green BB, Wang CY, Horner K, et al. Systems of support to increase colorectal cancer screening and follow-up rates (SOS): design, challenges, and baseline characteristics of trial participants. Contemp Clin Trials. 2010;31(6):589–603. [PMC free article] [PubMed]
11. Ridpath JR, Wiese CJ, Greene SM. Looking at research consent forms through a participant-centered lens: the PRISM readability toolkit. Am J Health Promot. 2009;23(6):371–375. [PubMed]
12. Starfield B, Weiner J, Mumford L, Steinwachs D. Ambulatory care groups: a categorization of diagnoses for research and management. Health Serv Res. 1991;26(1):53–74. [PMC free article] [PubMed]
13. Weiner JP, Starfield BH, Steinwachs DM, Mumford LM. Development and application of a population-oriented measure of ambulatory care case-mix. Med Care. 1991;29(5):452–472. [PubMed]
14. Breslau N, Reeb KG. Continuity of care in a university-based practice. J Med Educ. 1975;50(10):965–969. [PubMed]
15. Elliott MN, Fremont A, Morrison PA, Pantoja P, Lurie N. A new method for estimating race/ethnicity and associated disparities where administrative records lack self-reported race/ethnicity. Health Serv Res. 2008 May 12; [Epub ahead of print]. [PMC free article] [PubMed]
16. DiClemente C, Prochaska J. Toward a comprehensive, transtheoretical model of change. In: Miller WR, Heather N, editors. Treating addictive behaviors. New York: Springer; 1998. pp. 3–24.
17. Vernon SW, Bartholomew LK, McQueen A, et al. A randomized controlled trial of a tailored interactive computer-delivered intervention to promote colorectal cancer screening: sometimes more is just the same. Ann Behav Med. 2011;41(3):284–299. [PMC free article] [PubMed]
18. McNutt LA, Wu C, Xue X, Hafner JP. Estimating the relative risk in cohort studies and clinical trials of common outcomes. Am J Epidemiol. 2003;157(10):940–943. [PubMed]
19. Beebe TJ, Talley NJ, Camilleri M, Jenkins SM, Anderson KJ, Locke GR., 3rd The HIPAA authorization form and effects on survey response rates, nonresponse bias, and data quality: a randomized community study. Med Care. 2007;45(10):959–965. [PubMed]
20. Lai GY, Gary TL, Tilburt J, et al. Effectiveness of strategies to recruit underrepresented populations into cancer clinical trials. Clin Trials. 2006;3(2):133–141. [PubMed]
21. Fan E, Laupacis A, Pronovost PJ, Guyatt GH, Needham DM. How to use an article about quality improvement. JAMA. 2010;304(20):2279–2287. [PubMed]
22. Handley MA, Schillinger D, Shiboski S. Quasi-experimental designs in practice-based research settings: design and implementation considerations. J Am Board Fam Med. 2011;24(5):589–596. [PubMed]
23. Lewis C, Pignone M, Schild LA, et al. Effectiveness of a patient- and practice-level colorectal cancer screening intervention in health plan members: design and baseline findings of the CHOICE trial. Cancer. 2010;116(7):1664–1673. [PMC free article] [PubMed]
24. Kunz R, Vist G, Oxman AD. Randomisation to protect against selection bias in healthcare trials. Cochrane Database Syst Rev. 2007;(2):MR000012. [PubMed]
25. Hoffman RM, Steel SR, Yee EF, et al. A system-based intervention to improve colorectal cancer screening uptake. Am J Manag Care. 2011;17(1):49–55. [PubMed]
26. del Junco DJ, Vernon SW, Coan SP, et al. Promoting regular mammography screening, I. A systematic assessment of validity in a randomized trial. J Natl Cancer Inst. 2008;100(5):333–346. [PMC free article] [PubMed]
27. Glasgow RE, Klesges LM, Dzewaltowski DA, Estabrooks PA, Vogt TM. Evaluating the impact of health promotion programs: using the RE-AIM framework to form summary measures for decision making involving complex issues. Health Educ Res. 2006;21(5):688–694. [PubMed]
28. Glasgow RE, Strycker LA, Kurz D, et al. Recruitment for an internet-based diabetes self-management program: scientific and ethical implications. Ann Behav Med. 2010;40(1):40–48. [PubMed]
29. Kessler R, Glasgow RE. A proposal to speed translation of healthcare research into practice: dramatic change is needed. Am J Prev Med. 2011;40(6):637–644. [PubMed]
30. Glasgow RE, Nelson CC, Strycker LA, King DK. Using RE-AIM metrics to evaluate diabetes self-management support interventions. Am J Prev Med. 2006;30(1):67–73. [PubMed]
31. National Cancer Institute. RE-AIM: proposal for impact calculators. Implementation science: integrating science, practice and policy. 2011 cancercontrol.cancer.gov/IS/reaim/calculations.html.
32. Lamerato LE, Marcus PM, Jacobsen G, Johnson CC. Recruitment in the prostate, lung, colorectal, and ovarian (PLCO) cancer screening trial: the first phase of recruitment at Henry Ford Health System. Cancer Epidemiol Biomarkers Prev. 2008;17(4):827–833. [PubMed]