A diverse panel convened in June 2011 to explore a dilemma in human research: some traits may make individuals or communities particularly vulnerable to a variety of harms in research; however, well-intended efforts to protect these vulnerable individuals and communities from harm may actually generate a series of new harms.
We present a consensus statement forged by the panel through discussion during a 2-day meeting and the article-writing process.
We identify practical problems that sometimes arise in connection with providing additional safeguards for groups labeled as vulnerable and offer recommendations on how to better balance concerns for protection with concerns for justice and participant autonomy.
“Regrettably, the term ‘vulnerable’ too often gets played as a bioethical trump card, summarily tossed on the table in the course of debate, sometimes with the stern admonition that it would not be decent to exploit such subjects. Given the absence of agreed-upon standards for identifying and responding to vulnerability, such a move too often serves as a conversation-stopper, abruptly ending dialogue rather than furthering it. It may be possible to do better.”–K. Kipnis1(pG3)
As part of a scientific meeting enabled by a National Institute of Mental Health grant to help identify best practices for mental health research ethics, a diverse panel convened in June 2011 to explore a dilemma in human research: some traits may make individuals or communities particularly vulnerable to a variety of harms in research; however, well-intended efforts to protect these vulnerable individuals and communities from harm may actually generate a series of new harms.
At a daylong public conference, individuals representing mental health consumers, research ethicists, medical sociologists, psychiatric researchers, and substance abuse researchers presented data and reflections from their own and others’ research. These panelists then met for a second day of closed meetings to explore ways of addressing the dilemmas arising in research with vulnerable participants. The group forged consensus through discussion during the 2-day period and the article-writing process.
Research is generally safe and can present significant benefits to individuals, communities, and society at large2; however, it can also pose the risk of significant physical, psychological, social, legal, or economic harms.3 Although none of us can fully protect ourselves at all times, some factors may make self-protection particularly challenging in the context of research. The National Bioethics Advisory Commission identified a set of such factors, including cognitive, institutional, economic, and social vulnerabilities.4 Individuals with cognitive deficits may find it unusually difficult or impossible to understand and evaluate consent information. Being institutionalized or economically disadvantaged may make it difficult to say no to requests to participate in research. Kipnis, an advisor to the National Bioethics Advisory Commission, noted that vulnerabilities often arise only in specific contexts or relationships, but regardless of the source, they are generally of concern insofar as they may call into question the quality of informed consent.1 Furthermore, belonging to a socially marginalized minority group may reduce the likelihood of receiving adequate protections.5
People may manifest more than 1 vulnerability or risk factor for problems with informed consent or research protections. In the now infamous Tuskegee syphilis study, researchers observed the natural progression of syphilis in 400 African American men without providing treatment for the disease or its sequelae (neither the standard heavy metal treatment available at the study’s inception nor antibiotics when they became available). The fact that participants were poor, inadequately educated, African American men living in the rural South in the 1930s may help explain how this harmful, nontherapeutic study continued for 40 years.6,7
Largely in reaction to the Tuskegee study, the US Congress established the National Commission for the Protection of Human Subjects. The commission’s best-known document, the Belmont Report, provides an ethical framework to guide human research. The report urges researchers to show respect for persons by ensuring that they enter into research voluntarily and with adequate information and by protecting those with diminished autonomy. Federal regulations (45CFR46), now known as the “common rule,” were subsequently adopted to specify and implement the principles of the Belmont Report.8
Current regulations address vulnerabilities in research by requiring additional safeguards for groups of participants. We use the term “group” with reservations. Given that research protocols create groups through sampling (regardless of whether the sample is drawn from a naturally occurring community), regulators and institutional review boards (IRBs) often target groups for protections; but in reality we are dealing with unique individuals who have become part of a heterogeneous group only because of the sampling intentions of a researcher.
For 3 groups (pregnant women, fetuses, and neonates; prisoners; and children), special safeguards are enumerated in separate subparts of the common rule (45CFR46, subparts B–D). Moreover, the common rule calls for unspecified additional safeguards when participants “are likely to be vulnerable to coercion or undue influence, such as children, prisoners, pregnant women, mentally disabled persons, or economically or educationally disadvantaged persons” (45CFR46.111(b)). Our focus is on research involving individuals with mental health or substance use disorders—potential participants who are frequently viewed as requiring such unspecified additional safeguards. However, much of our commentary could be generalized to other groups of participants.
Researchers in the fields of public health, mental health, substance abuse, and HIV/AIDS are familiar with the implications of these policies. For example, IRBs may require formal assessments of decisional capacity for all participants with psychiatric diagnoses regardless of a study’s risk level, prohibit or restrict cash payments to participants who use drugs, or mandate that consent forms be read aloud or that a participant advocate be included in the consent process.
Although researchers may find such measures unreasonable, IRBs may feel that the measures are required by the regulatory demand for additional safeguards when participants are considered vulnerable.
Although these measures are undoubtedly well intentioned, following the status quo produces a host of ethical concerns.
Labeling particular groups as at risk for lacking decisional capacity or as incapable of making a voluntary choice reinforces stigma or stereotypes, when in fact members of such groups are diverse and frequently function as well as so-called healthy volunteers.15,16
The problem of stigma exists even when participants are indeed at risk for decisional incapacity or undue influence. However, this labeling is frequently unfair, resting on stereotypes and untested assumptions. For example, systematic review articles report that most studies of decisional capacity involving participants with schizophrenia have found that a majority of individuals retain decisional capacity.15,17 Nevertheless, Luebbert et al. have found that IRB members overestimate the risk of incapacity in populations with psychiatric diagnoses and underestimate the risk in populations with nonpsychiatric medical diagnoses that may impair decisional capacity.18
Whereas the Belmont Report’s primary concern with justice was to ensure that vulnerable populations are not exploited, HIV and breast cancer activists argued that injustices arise when individuals or communities are denied access to studies that could lead to cures or improve lives. Although ethical requirements may sometimes legitimately erect barriers to research, erecting barriers unnecessarily may be harmful and unjust.19,20
Sometimes a participant will fail to understand information about a study because the consent form is too long and complex, the timing is bad, or recruiters explain things poorly.21–24 Routinely excluding those who perform poorly on a test of understanding of consent information may permit system problems to go uncorrected, particularly in research with vulnerable participants, where poor understanding is more likely to be attributed to participant traits.
No one is perfectly autonomous: we all make decisions with imperfect information, reasons, and motivations. When participants genuinely lack the ability to make a decision for themselves (e.g., in head trauma research), excluding them from participation does no violence to their autonomy. However, denying others the opportunity to volunteer for a study may be an inappropriate infringement on their autonomy, particularly when doing so reinforces stigma, rests on unfair labeling and untested assumptions, hinders research unnecessarily, and ignores system problems.25–27
In discussing the application of the principle of respect for persons, the Belmont Report observed that sometimes it is unclear just how it should be applied. The report suggests that the example of prisoner research may be instructive:
On the one hand, it would seem that the principle of respect for persons requires that prisoners not be deprived of the opportunity to volunteer for research. On the other hand, under prison conditions they may be subtly coerced or unduly influenced to engage in research activities for which they would not otherwise volunteer. Respect for persons would then dictate that prisoners be protected. Whether to allow prisoners to “volunteer” or to “protect” them presents a dilemma. Respecting persons, in most hard cases, is often a matter of balancing competing claims urged by the principle of respect itself.3(section B.1)
The following guidelines represent our attempt to restore balance by reconsidering standard ways of addressing vulnerabilities to reduce the downsides of our efforts to protect research participants while fostering genuine respect for them.
Begin by considering the risks posed by the study design before considering additional safeguards. Current regulations exempt 6 forms of research with general populations because they involve no more than minimal risk; this means that the regulations, with their insistence on additional safeguards, simply do not apply (45CFR46.101(b)). It is unclear how, say, being a prisoner increases the risks involved in participating in a 15-minute anonymous survey. From an ethical perspective, when studies meet the requirements for exemption, participants may not need any additional safeguards, and providing them can be counter to good sense.28
Offer as many protections as necessary and as few as possible. This kind of guideline has been embraced in other contexts, for example, in prescribing painkillers or in granting access to protected health information. Too few protections may place participants at unnecessary risk, whereas too many protections may decrease the exercise of autonomy, reinforce stigma, and unnecessarily hinder research. This principle simply articulates a commitment to the kind of balancing the Belmont Report requires.
Universally require consent assessments when justified by risk levels. Some medical conditions, such as cancer and diabetes, can cause significant pain that may threaten capacity to consent more than do many psychiatric diagnoses. Sometimes failures to understand consent information are owing to system failures, and any of us can be vulnerable at any given time depending on a number of contextual factors. Thus, when the risks of a study are so significant that we want to ensure that participants fully understand and appreciate them, it is appropriate to screen participants regardless of their diagnoses or lack thereof. Fortunately, brief screening tools exist that can identify whether the consent process was successful. For example, the University of California, San Diego, Brief Assessment of Capacity to Consent consists of 10 items that refer to any study protocol; it can be administered in less than 5 minutes and scored with excellent reliability.29 Similarly, the Agency for Healthcare Research and Quality offers a researcher’s certification of consent and authorization form that can be used to guide a meaningful consent process and ensure adequate comprehension.30
Use the best data, not stereotypes and untested assumptions, to guide the development of safeguards. As noted above, IRBs frequently worry that cash payments to drug users will exacerbate drug use. Recent studies by Festinger et al. have found that this assumption is false. In their initial study, payments of $10, $40, or $70 in cash or gift card were made to participants sampled from an outpatient substance abuse treatment program to determine the impact on new drug use, perceived coercion, study satisfaction, and follow-up rates14; in a follow-up study, payments of $70, $100, $130, and $160 in either cash or gift card were assessed.31 Neither study found higher payments or cash payments to be associated with new drug use as measured by urinalysis or with perceived coercion. By contrast, higher payments and cash payments were associated with greater study satisfaction and with higher follow-up rates achieved with fewer tracking efforts.
Assess the subjective outcomes of the consent process rather than decisional capacity. Outcomes of a consent process should include participant understanding and appreciation of key information. Yet in the context of informed consent, it may be more appropriate to refer to understanding and appreciation as subjective outcomes than as evidence of decisional capacity, which implies that failures to understand and appreciate information arise because of participants’ cognitive deficits (their cognitive capacity or abilities). The phrase “subjective outcomes of the consent process” more accurately describes what most tests of decisional capacity actually assess. Whether a participant understands consent information may tell us more about the consent process (the readability of the consent form and the quality of the consent discussion) than it does about the participant’s decisional capacity or cognitive deficits. By focusing on the subjective outcome of the consent process rather than the individual’s decisional capacity, we will achieve the goal of ensuring that participants understand and appreciate crucial information while diminishing the focus on individuals’ presumed deficits. This approach is consistent with the desired shift from a focus on presumed deficits to a focus on empowerment that has been repeatedly expressed by mental health service users.27,32
Assessment can also guide refinement of the consent content and development of effective modes of delivery that are responsive to needs of research participants and diverse populations.
When additional safeguards are necessary, consider the attitudes and priorities of affected communities. Will reasonable payments for participants’ time be perceived as respectful or as manipulative? Will routinely reading a consent form aloud be viewed as considerate or as insulting? Will the inclusion of a participant advocate in the consent process be viewed as helpful or as an intrusion on privacy? We cannot know the answers to these questions a priori. However, participants are often more than willing to share their views with us, and these views may rightly inform our decisions regarding additional safeguards.13,33 As advocates for community-based participatory research have long recognized, providing communities with a voice on matters of study design is also a sign of respect and humility, acknowledging that researchers can learn from participant communities.27,34 Community engagement activities that give participants a voice in matters of research protections range from traditional community-based participatory research processes to reviewing publications that report the attitudes of communities.35
Table 1 provides a summary of how these recommendations can be used to restore balance when weighing protection versus overall respect for vulnerable populations.
With the exception of our recommendation to avoid additional safeguards in research that poses no more than minimal risk to participants (including prisoners), the guidelines we have suggested are consistent with current federal regulations for the protection of human participants. For example, additional safeguards could be informed by dialogue with affected communities and might be required of all protocols regardless of population once a specific risk threshold is reached. Such safeguards would then be additional insofar as they go beyond standard protections, not insofar as they isolate 1 group for paternalistic interventions.
Most aspects of the standard approach to addressing vulnerabilities in research are not mandated by regulations but, rather, rest on a common tradition of interpreting the regulations. In fact, the status quo arguably flies in the face of the ethical framework mandated by federal regulations. The Belmont Report discusses the fact that designing an ethically acceptable research protocol necessarily involves balancing competing goals. Even in the context of 1 principle, such as respect for persons, it is necessary to balance competing aims (e.g., protecting people from undue influence and respecting their ability to make voluntary choices).
It is time to pursue in earnest a more balanced notion of respect for persons, including persons who are currently labeled vulnerable. We believe not only that IRB members are granted the discretion to operationalize a more balanced notion of respect for persons but also that the Belmont Report actually requires it.
This project was made possible by the National Institutes of Health (NIH), National Institute of Mental Health (grant 1R13MH079690) and the NIH National Center for Research Resources (grant UL1 RR024992).
Special thanks to Whitney Hadley for research assistance.
Contributors
J. M. DuBois facilitated panel discussion and wrote the first draft of the article. All other authors participated in panel discussion and the editing process and read and approved the final article.
Human Participant Protection
No institutional review board review was needed because the project involved no interactions with human participants.
James M. DuBois, The Bander Center for Medical Business Ethics, Saint Louis University, St. Louis, MO.
Laura Beskow, The Institute for Genome Sciences and Policy, Duke University, Durham, NC.
Jean Campbell, The Missouri Institute of Mental Health, University of Missouri, St. Louis.
Karen Dugosh, The Treatment Research Institute, Philadelphia, PA.
David Festinger, The Treatment Research Institute, Philadelphia, PA.
Sarah Hartz, The Department of Psychiatry, Washington University School of Medicine, St. Louis, MO.
Rosalina James, The Department of Bioethics, University of Washington, Seattle.
Charles Lidz, The Department of Psychiatry, University of Massachusetts School of Medicine, Worcester.