Members of vulnerable populations are underrepresented in research studies.
To evaluate and synthesize the evidence regarding interventions to enhance enrollment of vulnerable populations into health research studies.
Studies were identified by searching MEDLINE, the Web of Science database, personal sources, hand searching of related journals, and article references. Studies that contained data on recruitment interventions for vulnerable populations (minority, underserved, poor, rural, urban, or inner city) and for which the parent study (study for which recruitment was taking place) was an intervention study were included. A total of 2,648 study titles were screened and 48 articles met inclusion criteria, representing 56 parent studies. Two investigators extracted data from each study.
African Americans were the most frequently targeted population (82% of the studies), while 46% targeted Hispanics/Latinos. Many studies assessed 2 or more interventions, including social marketing (82% of studies), community outreach (80%), health system recruitment (52%), and referrals (28%). The methodologic rigor varied substantially. Only 40 studies (71%) incorporated a control group and 21% used statistical analysis to compare interventions. Social marketing, health system, and referral recruitment were each found to be the most successful intervention in about 35–45% of the studies in which they were attempted, while community outreach was the most successful intervention in only 2 of 16 studies (13%) in which it was employed. People contacted as a result of social marketing were no less likely to enroll than people contacted through other mechanisms.
Further work with greater methodologic rigor is needed to identify evidence-based strategies for increasing minority enrollment in research studies; community outreach, as an isolated strategy, may be less successful than other strategies.
Although the NIH Revitalization Act of 1993 mandated that minorities be appropriately represented in clinical trials, they continue to have lower enrollment rates in health research compared with nonminority groups.1,2 Several barriers specific to minority recruitment have been identified.3 Barriers at the institutional level include provider time constraints and competing service demands, while researcher barriers include multicultural differences, lack of knowledge, and bias against research, leading to inactive recruitment.3–5 Barriers at the individual level include distrust of research, concerns about confidentiality and safety, schedule conflicts, poor access to medical care, lack of knowledge, and language and cultural differences.3,5–14 Despite such barriers, reports suggest that minorities may be as willing to enroll in health research as whites if they are offered the opportunity to participate.15–17
As barriers to recruiting minorities have become more clearly understood, there has been increased interest in designing recruitment interventions to overcome them.13,18 Suggested approaches include recruiting through partnerships with churches, community leaders and organizations, recruiting at community-based clinic sites, and providing logistical assistance or financial enticement.13,18 Social marketing, the use of marketing to design and implement programs to promote socially beneficial behavior change, has also been advocated as a recruitment strategy.19 However, marketing campaigns can be expensive, and it is not clear if prospective participants who learn of a study through marketing are as likely to enroll as patients recruited through other mechanisms.20 That is, if social marketing campaigns result in a large pool of candidates, of which many are not eligible, then this approach may not be efficient.
A recent review of minority cancer trial enrollment strategies found that “...more than a decade following the institution of (the) NIH requirement, enrollment of minority populations into cancer clinical trials remains woefully inadequate”.18 Although there has been a call for empiric research to identify which recruitment interventions are effective, we are unaware of any synthesis of the evidence.13,18 There is also a question of the methodologic rigor of trial enrollment studies, as prior work has suggested that studies of recruitment interventions vary in their definition of successful recruitment, and others have called for more complete reporting of racial, ethnic and cultural composition of subjects in trials as well as thorough descriptions of recruitment successes and failures.13,21 We therefore conducted a systematic review to identify which minority recruitment strategies have been attempted and which ones are effective and efficient. We also assessed the methodologic rigor of these studies with regard to quantitative and qualitative data reporting and ability to draw inferences.
The MEDLINE database was searched from 1966 up to April 2005 (see Appendix for search terms). Further articles were found using hand searching of 3 journals including Journal of the National Medical Association, Controlled Clinical Trials, and Ethnicity and Disease from January 2002 to April 2005. Additional studies were identified by personal sources and reference sections of relevant studies. This strategy was supplemented by using the Web of Science database to generate a list of articles that cited studies of interest.
Studies that reported on recruitment intervention(s) with respect to recruiting special populations (defined as minority, underserved, lower socioeconomic status (SES), rural, urban, or inner city) into research studies were eligible. The parent study (the study for which recruitment was taking place) was also required to be an intervention study. Studies were excluded if their target population did not specifically focus on minorities or the socioeconomically underserved, or if the special population of interest was defined by gender or age. Studies that exclusively identified barriers to recruitment were also excluded, as our intent was to identify studies that evaluated recruitment interventions. Three investigators (S.U., S.P., and C.P.G.) reviewed 2,648 total citations and 96 full articles. A total of 48 manuscripts were eligible for inclusion, representing recruitment interventions for 56 parent studies (Fig. 1).22–69
Two investigators (S.U., S.P.) extracted the number of people screened, eligible, and enrolled from each study. Study quality was also evaluated using explicit criteria, including description of recruitment interventions, assessment of intervention efficacy, control for bias, data reporting, and external validity.70 We also assessed internal validity by determining whether there were sufficient data to support authors’ conclusions about the efficacy of their intervention. Insufficient evidence was defined as the lack of reported statistical significance in enrollment between intervention groups. After dual review of each manuscript, disagreements were resolved by consensus.
Interventions were categorized into 4 main categories: social marketing, community outreach, referrals, and health system recruitment. Social marketing includes mass mailing, mass telephone calls, and media (TV, radio, newspaper, magazines, newsletters, brochures, flyers, public service announcements, and specialty publications aimed at a target group). Community outreach includes church recruitment, contact with community leaders and organizations, community presentations and meetings usually carried out by the research team in the community, health screenings, house-to-house/door-to-door/face-to-face contact in the community, participation in community events with a booth, etc. Referrals include those from friends, family, other participants in the same study, participants from another study, etc. Health system recruitment strategies include direct recruitment by the healthcare provider, allowing research staff to approach potential subjects in clinical settings, or the use of medical records/registries to identify patients. The remaining interventions were classified as “other”.
Heterogeneity of study designs precluded quantitative meta-analysis. We calculated the proportion of study participants derived from each recruitment intervention as follows: number enrolled by a recruitment intervention/total number enrolled in study × 100. When possible, the proportion enrolled from different interventions was compared using chi-square tests. To present such results in aggregate, an additive method was used to determine the number of times a recruitment intervention was most successful in obtaining enrolled subjects, relative to the total number of studies in which it was attempted. An intervention was counted 0.5 instead of 1 if it was only included in a study as part of a combination of interventions (such as community outreach combined with health system recruitment).
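The additive tally described above can be sketched in a few lines of code; the study data below are hypothetical, used only to illustrate the 0.5 weighting for combined interventions.

```python
# Illustrative sketch of the additive scoring method described above.
# The "studies" data are hypothetical, not taken from the review.

def proportion_enrolled(enrolled_by_intervention, total_enrolled):
    """Percent of a study's enrolled sample attributable to one intervention:
    number enrolled by intervention / total enrolled x 100."""
    return enrolled_by_intervention / total_enrolled * 100

def tally_successes(winning_interventions_per_study):
    """Count how often each intervention was the most successful in a study.
    An intervention that 'wins' only as part of a combination is credited
    0.5 instead of 1."""
    scores = {}
    for winners in winning_interventions_per_study:
        weight = 1.0 if len(winners) == 1 else 0.5
        for name in winners:
            scores[name] = scores.get(name, 0.0) + weight
    return scores

# Three hypothetical studies: in the second, a combination of community
# outreach and health system recruitment yielded the most participants.
studies = [
    ["social marketing"],
    ["community outreach", "health system"],
    ["social marketing"],
]
print(proportion_enrolled(120, 300))  # 40.0
print(tally_successes(studies))
```

Under this weighting, a study won jointly by two interventions contributes half a "success" to each, so a denominator such as "5 of 12.5 studies" can arise when combinations are counted.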
When sufficient data were available, we assessed efficiency by calculating the recruitment fraction, defined as the number enrolled/number screened per recruitment intervention.71 When possible, the recruitment fraction for social marketing vs “all other interventions” combined (community outreach + referrals + health system) was compared, given our a priori hypothesis that social marketing would have a lower recruitment fraction than other approaches.
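A minimal sketch of this efficiency comparison, using hypothetical screening counts and a hand-rolled Pearson chi-square for a 2×2 table (enrolled vs. not enrolled, by intervention group):

```python
# Illustrative sketch; the screening and enrollment counts are hypothetical.

def recruitment_fraction(enrolled, screened):
    """Efficiency of a recruitment intervention: number enrolled / number screened."""
    return enrolled / screened

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
    e.g., rows = intervention group, columns = enrolled / not enrolled."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: social marketing screened 400 and enrolled 60;
# all other interventions combined screened 250 and enrolled 50.
sm = recruitment_fraction(60, 400)      # 0.15
other = recruitment_fraction(50, 250)   # 0.20
stat = chi2_2x2(60, 340, 50, 200)       # compare against 3.84 (p = 0.05, 1 df)
print(f"marketing {sm:.2f} vs other {other:.2f}, chi2 = {stat:.2f}")
```

In this fabricated example the chi-square statistic falls below the 3.84 critical value, so the apparent difference in recruitment fractions would not be statistically significant.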
In the final study sample, many of the 56 studies targeted more than 1 minority group. African Americans were the most frequently targeted population (82%), while slightly less than half targeted Hispanics/Latinos (46%; Table 1). Older or elderly populations were identified by authors in 23% of the studies, and low SES or underserved populations in 18% of studies. Approximately 78% of the parent studies were prevention studies and the remaining 22% were treatment studies. The behavior/lifestyle subcategory was the most common type of preventive study, while pharmacotherapy was the most common treatment intervention studied.
Many studies assessed 2 or more interventions. Social marketing was used in 82% of studies, community outreach in 80%, followed by health system recruitment (52%), and referrals (28%; Table 1). Social marketing strategies included: media (68%), mass mailing (41%), and telephone calls (16%). The most common community outreach interventions included partnerships with churches (38%), contact with community leaders and organizations (36%), and presentations/meetings in the community (34%). The most frequent type of health system intervention was approaching health care providers directly (36%), followed by direct patient recruitment at a health center (27%), and chart review or registry (20%; Table 1).
The quality of study reporting was variable (Table 2). All (100%) of the studies reported their objective, the target population demographics, the recruitment interventions, and the locations for which recruitment would take place. However, less than 15% of studies reported the time spent or cost per recruitment intervention studied (Table 2). Only 48, 29, and 55% of studies reported the number approached/screened, eligible, and enrolled per recruitment intervention, respectively. It was difficult to assess external validity of many studies, as only 18% of studies reported that they recruited a sample that was representative of the entire population from which they were recruited, while 71% did not report a comparison between their sample and the target population.
The approach to evaluating the efficacy of the recruitment intervention also varied substantially. Only 71% of studies had a control group; of these 40 studies, only 2 (5%) used randomization to determine which group would receive which recruitment intervention. An additional 55% employed nonrandom assignment of interventions concurrently to 2 or more different groups of prospective participants, using geographic location or other factors to target interventions, and 33% of studies applied recruitment intervention(s) sequentially.
Because few interventions were applied using randomization, it is important to compare the characteristics of groups receiving the interventions. However, authors in only 13% of studies stated that groups receiving differing recruitment interventions were similar with regard to sociodemographic and clinical characteristics. In 20% of the studies, authors stated that the groups were not similar, and in 68% of the studies, group similarity was unclear. Furthermore, only 21% of studies used formal statistical analysis to compare recruitment interventions. Similarly, we found that only 20% of studies had sufficient data to support authors’ conclusions about the efficacy of their interventions.
The proportion of participants enrolled through separate recruitment interventions was reported in 20 studies (Table 3). Social marketing, alone or in combination with community outreach, yielded the highest number of participants in 9 studies. In these studies, which involved heterogeneous target groups, social marketing frequently yielded more than two-thirds of the study sample. Health system interventions were most effective in 5 studies, many of which involved patients with specific chronic conditions. In aggregate, social marketing was the most successful intervention in 44% of the studies in which it was attempted (8 of 18), health system in 40% of the studies (5 of 12.5), referrals in 35% (3 of 8.5), and community outreach in only 13% of the studies in which it was attempted (2 of 16; Fig. 2).
We found 11 studies that compared the recruitment fraction between social marketing and other interventions. There was no consistent evidence suggesting that social marketing was more or less efficient than other approaches. Three manuscripts reported no statistically significant difference between approaches (Table 4). Of the 8 remaining studies, 5 reported that patients identified through social marketing interventions were significantly less likely to enroll than patients identified through other mechanisms, while 3 found that social marketing yielded patients who were more likely to enroll (Table 4).
This systematic review of the literature confirms that while numerous recruitment interventions for this population have been assessed, methodologic rigor is variable and the body of evidence has significant gaps. The heterogeneous nature of the trials’ interventions, target populations, and study designs rendered data synthesis challenging. Data on relative effectiveness of different recruitment approaches would be optimized if future studies could be designed to enhance internal and external validity as well as the transparency of reporting.
Some of the more commonly attempted recruitment interventions for minority populations included church recruitment and interaction with community organizations and leaders. Health system recruitment, including registry/chart review and physician referrals, was also assessed in about half of the studies. Given the recent emphasis on community-based research, it was surprising that health system recruitment was such a commonly used recruitment approach.6,7,9,10 This may have arisen from the perceived importance of physician referrals as gatekeepers to minority research recruitment, which has been previously documented.4,11,15
There was no clear dominant strategy. In fact, social marketing, health system, and referral interventions led to the highest proportion of participants in 44%, 40%, and 35% of the studies in which they were tried, respectively. The main outlier was community outreach, which was the most successful in a mere 13% of the studies in which it was assessed. Although heterogeneity of studies and populations renders definitive comparisons difficult, the poor showing of community outreach merits further exploration, as this is a commonly proposed intervention.3,7,9,10,22,72 It is important to note that community outreach likely has the advantage of being the recruitment strategy that may alleviate distrust.3,6 Our finding that outreach rarely was the most effective independent strategy may stem from the individual studies/populations in our review. Alternatively, perhaps the benefits of a collaboration with the community cannot be quantified as an independent strategy, but rather are most effective when incorporated into a comprehensive approach that includes other recruitment interventions.
We were surprised to note that some commonly proposed recruitment incentives, such as financial reimbursement (cash or coupons) or convenience strategies (transportation/flexible scheduling), were infrequently assessed. Although we identified 9 studies that used financial incentives to assist in recruitment, none of them compared the effectiveness of this incentive with other recruitment interventions. For instance, Vollmer et al. indicated that only 6 and 20% of African Americans and other minorities, respectively, cited financial incentives as their primary reason for participation, although recruitment fraction or enrollment proportions could not be calculated.62 This is an important gap in the literature. Financial incentives are commonly recommended, but they are not without controversy. Some authors have raised concerns that payments to underserved populations represent coercion, while others have suggested that payments may bias the study results obtained or the population recruited.73–79 Accordingly, payments could potentially have a negative impact on trust, especially in vulnerable populations that might already be distrustful of research.7,77
Our review also demonstrated that both the reporting and the methodological rigor of recruitment intervention studies have room for improvement. Inclusion of quantitative screening and recruitment data in manuscripts would allow for a more transparent assessment of which recruitment intervention led to the most subjects screened (greatest reach), the most eligible participants (relevant reach), and the most enrolled. A limitation for assessing media and community outreach recruitment is the difficulty of ascertaining the number of people who were exposed. While it would not be feasible to determine the number of people who had seen or heard a specific advertisement, future studies should try to report the size of a target audience and frequency of exposures at the population level, as well as determining the efficacy of interventions across racial and ethnic groups. Although this was beyond the scope of our study, the comparison of recruitment interventions across different target populations (such as different racial or ethnic groups) should be performed when data are available, because the effectiveness of interventions may be moderated by cultural and contextual effects.
Additionally, it is important to report the amount of time and cost per recruitment intervention (which was done in fewer than 15% of studies in our sample) as this could affect decision making for researchers and funders. Finally, authors need to report if the different populations they are recruiting via different recruitment interventions are balanced with regard to sociodemographic or clinical characteristics and if they are representative of the larger population from which they were recruited.
In addition to the quality of reporting study data, we found room for improvement in the internal validity of many study designs. A control recruitment group was lacking in about one-third of our studies, and those that did employ a control group rarely used randomization as a means to diminish bias. Furthermore, with only 21% of the studies using formal statistical analysis, it was unsurprising that only a minority of studies had the data to support the authors’ conclusions. When the validity of authors’ conclusions and study outcomes are questionable, interpreting the data presented by such studies is difficult.
An important caveat when considering the frequency of “statistically significant” findings across studies is the variation in sample size. However, we found no systematic relation between sample size, study design, and outcome in our dataset (data not shown). Finally, the 4 categories of recruitment approaches included in this study were intentionally rather broad so as to be inclusive and concordant with the depths of reporting provided by the manuscripts. We recommend that future studies report data per specific recruitment intervention to allow for more detailed analyses of the success of various approaches.
Approaches that are effective in 1 setting or 1 population may not be generalizable. We were therefore concerned that there was a paucity of studies that focused on Asian Americans, Native Americans, and other minority populations, as researchers have little evidence to guide their efforts for these populations. Another population that merits further research is minority elders. When trying to enroll these individuals, researchers face the challenges associated with the inclusion of minority participants as well as those encountered when recruiting older adults into disease-oriented clinical trials, such as a higher prevalence of comorbid illness and functional impairments.80,81 Additionally, many of our parent studies were prevention rather than treatment trials. Hence, our findings may not be applicable to other populations, types of studies, or disease entities, as each of these factors may have different barriers to recruitment.25 Finally, our sample was restricted to manuscripts that reported recruitment data; these manuscripts represent a minority of the clinical trials conducted and published. Many other studies may have used or evaluated different recruitment approaches but did not report results.
It is important to identify not only successful recruitment interventions but also efficient ones. We found no evidence that social marketing was less efficient, in terms of the proportion of screened patients who eventually enroll, than any other intervention. Although social marketing appears to yield roughly the same proportion of study subjects as health system and referral approaches without lower recruitment fractions, further studies are needed to determine whether its efficiency as a recruitment intervention is comparable to other strategies. Researchers need to weigh their resources of time, financial budget, effort, and expertise to choose which of the recruitment interventions are truly ideal for the population they are attempting to recruit.
At a time when minority recruitment for research continues to be an important issue that many still find challenging, it is vital to develop and evaluate innovative approaches to increase minority participation. For instance, the National Cancer Institute’s Minority-Based Community Clinical Oncology Program (MBCCOP) has enrolled thousands of minority patients into cancer clinical trials.82 This suggests that minorities can be recruited when appropriate outreach and infrastructure mechanisms are supported. While we found that there is an increasing body of evidence regarding approaches to enhance equitable recruitment, more work is needed. Improved methodologic rigor and data reporting are paramount to continue assessing successes or failures of recruitment interventions in such special populations. Perhaps, with rigorous evaluation of recruitment methods and adequate funding for strategies that are found to be effective, the goals of the NIH with respect to equitable and representative access to research studies can finally be realized.
Dr. Gross’s efforts were supported by the Claude D. Pepper Older Americans Independence Center at Yale (P30AG21342).
Potential Financial Conflicts of Interest None disclosed.