Am J Public Health. Author manuscript; available in PMC 2013 December 1. PMCID: PMC3493853

Restoring Balance: A Consensus Statement on the Protection of Vulnerable Research Participants


A diverse panel convened in June 2011 to explore a dilemma in human research: some traits may make individuals or communities particularly vulnerable to a variety of harms in research; however, well-intended efforts to protect these vulnerable individuals and communities from harm may actually generate a series of new harms.

We have presented a consensus statement forged by the panel through discussion during a 2-day meeting and the article-writing process.

We have identified practical problems that sometimes arise in connection with providing additional safeguards for groups labeled as vulnerable and offered recommendations on how we might better balance concerns for protection with concerns for justice and participant autonomy.

“Regrettably, the term ‘vulnerable’ too often gets played as a bioethical trump card, summarily tossed on the table in the course of debate, sometimes with the stern admonition that it would not be decent to exploit such subjects. Given the absence of agreed-upon standards for identifying and responding to vulnerability, such a move too often serves as a conversation-stopper, abruptly ending dialogue rather than furthering it. It may be possible to do better.”

–K. Kipnis1(pG3)

As part of a scientific meeting enabled by a National Institute of Mental Health grant to help identify best practices for mental health research ethics, a diverse panel convened in June 2011 to explore a dilemma in human research: some traits may make individuals or communities particularly vulnerable to a variety of harms in research; however, well-intended efforts to protect these vulnerable individuals and communities from harm may actually generate a series of new harms.

At a daylong public conference, individuals representing mental health consumers, research ethicists, medical sociologists, psychiatric researchers, and substance abuse researchers presented data and reflections from their own and others’ research. These panelists then met for a second day of closed meetings to explore ways of addressing the dilemmas arising in research with vulnerable participants. The group forged consensus through discussion during the 2-day period and the article-writing process.


Research is generally safe and can present significant benefits to individuals, communities, and society at large2; however, it can also pose the risk of significant physical, psychological, social, legal, or economic harms.3 Although none of us is capable of fully protecting ourselves at all times, some factors may make self-protection particularly challenging in the context of research. The National Bioethics Advisory Commission identified a set of such factors, including cognitive, institutional, economic, and social vulnerabilities.4 Individuals with cognitive deficits may find it unusually difficult or impossible to understand and evaluate consent information. Being institutionalized or economically disadvantaged may make it difficult to say no to requests to participate in research. Kipnis, an advisor to the National Bioethics Advisory Commission, noted that oftentimes vulnerabilities arise only in specific contexts or relationships, but regardless of the source, they are generally of concern insofar as they may call into question the quality of informed consent.1 Furthermore, belonging to a socially marginalized minority group may reduce the likelihood of receiving adequate protections.5

People may manifest more than 1 vulnerability or risk factor for problems with informed consent or research protections. In the now infamous Tuskegee syphilis study, researchers observed the natural progression of syphilis in 400 African American men without providing treatment for the disease or its sequelae (neither the standard heavy metal treatment available at the study’s inception nor antibiotics when they became available). The fact that participants were poor, inadequately educated, African American men living in the rural South in the 1930s may help explain how this harmful, nontherapeutic study continued for 40 years.6,7

Largely in reaction to the Tuskegee study, the US Congress established the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The commission’s best-known document, the Belmont Report, provides an ethical framework to guide human research. The report urges researchers to show respect for persons by ensuring that they enter into research voluntarily and with adequate information and by protecting those with diminished autonomy. Federal regulations (45CFR46), now known as the “common rule,” were subsequently issued to specify and implement the principles of the Belmont Report.8


Current regulations address vulnerabilities in research by requiring additional safeguards for groups of participants. We use the term “group” with reservations. Given that research protocols create groups through sampling (regardless of whether the sample is drawn from a naturally occurring community), regulators and institutional review boards (IRBs) often target groups for protections; but in reality we are dealing with unique individuals who have become part of a heterogeneous group only because of the sampling intentions of a researcher.

For 3 groups (pregnant women, fetuses, and neonates; prisoners; and children), special safeguards are enumerated in special subparts of the common rule (45CFR46, subparts B–D). Moreover, the common rule calls for unspecified additional safeguards when participants “are likely to be vulnerable to coercion or undue influence, such as children, prisoners, pregnant women, mentally disabled persons, or economically or educationally disadvantaged persons” (45CFR46.111(b)). Our focus is on research involving individuals with mental health or substance use disorders—potential participants who are frequently viewed as requiring such unspecified additional safeguards. However, much of our commentary could be generalized to other groups of participants.

Researchers in the fields of public health, mental health, substance abuse, and HIV/AIDS are familiar with the implications of these policies. For example:

  1. IRBs may label groups of participants (e.g., people with schizophrenia) as unlikely to have the capacity to consent to research. In such cases, groups are either excluded from participation or their capacity to consent is formally and routinely assessed, sometimes in ways that researchers and participants alike find burdensome and condescending.9,10
  2. IRBs may label groups as vulnerable to undue influence and significantly restrict allowable payments. IRBs worry, for example, that substance users will find payments unduly influential and will use them to purchase drugs or alcohol. IRBs frequently require researchers to offer gift cards of modest value rather than cash,11,12 which participants may feel is unjust.13 Like most of us, participants prefer the flexibility cash offers.14
  3. IRBs may require full board review, extensive protocol modifications, and burdensome processes for researchers conducting even minimal risk research. For example, current regulations do not allow IRBs to exempt any kind of research involving prisoners or people in the criminal justice system (footnote at 45CFR46.101(i); 45CFR46.303(c)). Some IRBs generalize this practice of non-exemption to other populations labeled as vulnerable.

Although researchers may find such measures unreasonable, IRBs may feel that they are required by the regulatory demand for additional safeguards when participants are considered vulnerable.


Although these measures are undoubtedly well intentioned, following the status quo produces a host of ethical concerns.

Reinforcing Stigma

Labeling particular groups as at risk for lacking decisional capacity or as incapable of making a voluntary choice reinforces stigma or stereotypes, when in fact members of such groups are frequently diverse and function as well as so-called healthy volunteers.15,16

Producing Unfairness

The problem of stigma exists even when participants are indeed at risk for decisional incapacity or undue influence. However, this labeling is frequently unfair—the result of stereotypes and untested assumptions. For example, systematic review articles report that most studies of decisional capacity involving participants with schizophrenia have found that a majority of individuals retain decisional capacity.15,17 Nevertheless, Luebbert et al. found that IRB members overestimate the risk of incapacity in populations with psychiatric diagnoses and underestimate the risk in populations with nonpsychiatric medical diagnoses that may impair decisional capacity.18

Hindering Research Unnecessarily

Whereas the Belmont Report’s primary concern with justice was to ensure that vulnerable populations are not exploited, HIV and breast cancer activists argued that injustices arise when individuals or communities are denied access to studies that could lead to cures or improve lives. Although ethical requirements may sometimes legitimately erect barriers to research, erecting barriers unnecessarily may be harmful and unjust.19,20

Ignoring System Problems

Sometimes a participant will fail to understand information about a study because the consent form is too long and complex, the timing is bad, or recruiters explain things poorly.21–24 Routinely excluding those who perform poorly on a test of understanding of consent information may permit system problems to go uncorrected, particularly in research with vulnerable participants, for whom poor understanding is more likely to be attributed to participant traits.

Restricting Individuals’ Exercise of Autonomy

No one is perfectly autonomous: we all make decisions with imperfect information, reasons, and motivations. When participants genuinely lack the ability to make a decision for themselves (e.g., in head trauma research), excluding them does no violence to their autonomy. However, denying others the opportunity to volunteer for a study may be an inappropriate infringement on their autonomy, particularly when the denial reinforces stigma, rests on unfair labeling and untested assumptions, hinders research unnecessarily, or ignores system problems.25–27


In discussing the application of the principle of respect for persons, the Belmont Report observed that sometimes it is unclear just how it should be applied. The report suggests that the example of prisoner research may be instructive:

On the one hand, it would seem that the principle of respect for persons requires that prisoners not be deprived of the opportunity to volunteer for research. On the other hand, under prison conditions they may be subtly coerced or unduly influenced to engage in research activities for which they would not otherwise volunteer. Respect for persons would then dictate that prisoners be protected. Whether to allow prisoners to “volunteer” or to “protect” them presents a dilemma. Respecting persons, in most hard cases, is often a matter of balancing competing claims urged by the principle of respect itself.3(section B.1)

The following guidelines represent our attempt to restore balance by reconsidering standard ways of addressing vulnerabilities to reduce the downsides of our efforts to protect research participants while fostering genuine respect for them.


Begin by considering the risks posed by the study design before considering additional safeguards. Current regulations exempt 6 forms of research with general populations because they involve no more than minimal risk; this means that the regulations, with their insistence on additional safeguards, simply do not apply (45CFR46.101(b)). It is unclear how, say, being a prisoner increases the risks involved in participating in a 15-minute anonymous survey. From an ethical perspective, when studies meet the requirements for exemption, participants may not need any additional safeguards, and providing them can be counter to good sense.28


Offer as many protections as necessary and as few as possible. This kind of guideline has been embraced in other contexts, for example prescribing painkillers or accessing protected health information. Too few protections may place participants at unnecessary risk, whereas too many protections may decrease the exercise of autonomy, reinforce stigma, and unnecessarily hinder research. This principle simply articulates a commitment to the kind of balancing the Belmont Report requires.

Consent Assessments

Universally require consent assessments when justified by risk levels. Some medical conditions, such as cancer and diabetes, can cause significant pain that may threaten capacity to consent more than do many psychiatric diagnoses. Sometimes failures to understand consent information stem from system failures, and any of us can be vulnerable at any given time depending on a number of contextual factors. Thus, when the risks of a study are so significant that we want to ensure that participants fully understand and appreciate them, it is appropriate to screen participants regardless of their diagnoses or lack thereof. Fortunately, brief screening tools exist that can identify whether the consent process was successful. For example, the University of California, San Diego, Brief Assessment of Capacity to Consent consists of 10 items applicable to any study protocol; it can be administered in less than 5 minutes and scored with excellent reliability.29 Similarly, the Agency for Healthcare Research and Quality offers a researcher’s certification of consent and authorization form that can be used to guide a meaningful consent process and ensure adequate comprehension.30
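The kind of brief, protocol-neutral screen described above can be illustrated with a short sketch. The scoring scheme below—10 items each rated 0 to 2, summed against a pass threshold—is a hypothetical example of tallying such an instrument, not the actual published items or validated cutoff of any named tool:

```python
# Hypothetical sketch of tallying a brief consent-comprehension screen.
# The 0-2 item scoring and the pass threshold are illustrative assumptions,
# not the published UCSD Brief Assessment items or its validated cutoff.

def score_screen(item_scores, pass_threshold=15):
    """Sum 10 items, each scored 0 (no understanding), 1 (partial),
    or 2 (clear understanding); flag whether the consent process
    appears to have succeeded under the assumed threshold."""
    if len(item_scores) != 10:
        raise ValueError("expected scores for exactly 10 items")
    if any(s not in (0, 1, 2) for s in item_scores):
        raise ValueError("each item must be scored 0, 1, or 2")
    total = sum(item_scores)
    return {"total": total, "adequate": total >= pass_threshold}

# Example: a participant who understood most, but not all, key points.
result = score_screen([2, 2, 1, 2, 2, 2, 1, 2, 2, 2])
print(result)  # total of 18, flagged adequate under the assumed threshold
```

The point of such a tool is that a low total prompts re-education and re-screening—a check on the consent process itself—rather than automatic exclusion of the participant.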


Use the best available data—not stereotypes and untested assumptions—to guide the development of safeguards. As noted above, IRBs frequently worry that cash payments to drug users will exacerbate drug use. Recent studies by Festinger et al. have found this assumption to be false. In their initial study, participants sampled from an outpatient substance abuse treatment program received payments of $10, $40, or $70 in cash or gift cards, and the investigators measured the impact on new drug use, perceived coercion, study satisfaction, and follow-up rates14; a follow-up study assessed payments of $70, $100, $130, and $160 in either cash or gift cards.31 Neither study found higher payments or cash payments to be associated with new drug use as measured by urinalysis or with perceived coercion. By contrast, higher payments and cash payments were associated with greater study satisfaction and better follow-up rates with fewer tracking efforts.


Assess the subjective outcomes of the consent process rather than decisional capacity. Outcomes of a consent process should include participant understanding and appreciation of key information. Yet in the context of informed consent, it may be more appropriate to refer to understanding and appreciation as subjective outcomes than as evidence of decisional capacity, which implies that failures to understand and appreciate information arise because of participants’ cognitive deficits (their cognitive capacity or abilities). The phrase “subjective outcomes of the consent process” more accurately describes what most tests of decisional capacity actually assess. Whether a participant understands consent information may tell us more about the consent process (the readability of the consent form and the quality of the consent discussion) than it does about the participant’s decisional capacity or cognitive deficits. By focusing on the subjective outcome of the consent process rather than the individual’s decisional capacity, we will achieve the goal of ensuring that participants understand and appreciate crucial information while diminishing the focus on individuals’ presumed deficits. This approach is consistent with the desired shift from a focus on presumed deficits to a focus on empowerment that has been repeatedly expressed by mental health service users.27,32

Assessment can also guide refinement of the consent content and development of effective modes of delivery that are responsive to needs of research participants and diverse populations.

Additional Safeguards

When additional safeguards are necessary, consider the attitudes and priorities of affected communities. Will reasonable payments for participants’ time be perceived as respectful or as manipulative? Will routinely reading a consent form aloud be viewed as considerate or as insulting? Will the inclusion of a participant advocate in the consent process be viewed as helpful or as an intrusion on privacy? We cannot know the answers to these questions a priori. However, participants are often more than willing to share their views with us, and these views may rightly inform our decisions regarding additional safeguards.13,33 As advocates for community-based participatory research have long recognized, providing communities with a voice on matters of study design is also a sign of respect and humility, acknowledging that researchers can learn from participant communities.27,34 A wide range of community engagement activities may accomplish the purpose of giving participants a voice in matters of research protections, ranging from traditional community-based participatory research processes to reviewing publications that report the attitudes of communities.35

Table 1 provides a summary of how these recommendations can be used to restore balance when weighing protection versus overall respect for vulnerable populations.

Illustration of Balanced Practices and Attitudes Versus the Status Quo in the Protection of Vulnerable Research Participants


With the exception of our recommendation to avoid additional safeguards in research that poses no more than minimal risk to participants (including prisoners), the guidelines we have suggested are consistent with current federal regulations for the protection of human participants. For example, additional safeguards could be informed by dialogue with affected communities and might be required of all protocols regardless of population once a specific risk threshold is reached. Such safeguards would then be additional insofar as they go beyond standard protections, not insofar as they isolate 1 group for paternalistic interventions.

Most aspects of the standard approach to addressing vulnerabilities in research are not mandated by regulations but, rather, rest on a common tradition of interpreting the regulations. In fact, the status quo arguably flies in the face of the ethical framework mandated by federal regulations. The Belmont Report discusses the fact that designing an ethically acceptable research protocol necessarily involves balancing competing goals. Even in the context of 1 principle, such as respect for persons, it is necessary to balance competing aims (e.g., protecting people from undue influence and respecting their ability to make voluntary choices).

It is time to pursue in earnest a more balanced notion of respect for persons, including persons who are currently labeled vulnerable. We believe not only that IRB members are granted the discretion to operationalize a more balanced notion of respect for persons but also that the Belmont Report actually requires it.


This project was made possible by the National Institutes of Health (NIH), National Institute of Mental Health (grant 1R13MH079690) and the NIH National Center for Research Resources (grant UL1 RR024992).

Special thanks to Whitney Hadley for research assistance.




J. M. DuBois facilitated panel discussion and wrote the first draft of the article. All other authors participated in panel discussion and the editing process and read and approved the final article.

Human Participant Protection

No institutional review board review was needed because the project involved no interactions with human participants.

Contributor Information

James M. DuBois, The Bander Center for Medical Business Ethics, Saint Louis University, St Louis, MO.

Laura Beskow, The Institute for Genome Sciences and Policy, Duke University, Durham, NC.

Jean Campbell, The Missouri Institute of Mental Health, University of Missouri, St. Louis.

Karen Dugosh, The Treatment Research Institute, Philadelphia, PA.

David Festinger, The Treatment Research Institute, Philadelphia, PA.

Sarah Hartz, The Department of Psychiatry, Washington University School of Medicine, St. Louis, MO.

Rosalina James, The Department of Bioethics, University of Washington, Seattle.

Charles Lidz, The Department of Psychiatry, University of Massachusetts School of Medicine, Worcester.


1. Kipnis K. National Bioethics Advisory Commission. Ethical and Policy Issues in Research Involving Human Participants. Vol. 2. Bethesda, MD: National Bioethics Advisory Commission; 2001. Vulnerability in research subjects: a bioethical taxonomy; pp. G1–G13. Commissioned Papers and Staff Analysis.
2. Levine RJ. Ethics and Regulation of Clinical Research. 2nd ed. New Haven, CT: Yale University Press; 1988.
3. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. Washington, DC: 1979. [PubMed]
4. National Bioethics Advisory Commission. Report and Recommendations. Vol. 1. Bethesda, MD: 2001. Ethical and Policy Issues in Research Involving Human Participants.
5. DuBois JM. Ethics in Mental Health Research: Principles, Guidance, and Cases. New York: Oxford University Press; 2008.
6. Jones JH. Bad Blood: The Tuskegee Syphilis Experiment. 2nd rev ed. New York: Free Press; 1993.
7. Reverby S, editor. Tuskegee’s Truths: Rethinking the Tuskegee Syphilis Study. Chapel Hill: University of North Carolina Press; 2000.
8. Office of Human Research Protections. Protection of Human Subjects. Washington, DC: 2009.
9. National Bioethics Advisory Commission. Research Involving Persons With Mental Disorders That May Affect Decision-Making Capacity. Rockville, MD: 1998.
10. Appelbaum PS. Competence to consent to research: a critique of the recommendations of the National Bioethics Advisory Committee. Account Res. 1999;7(2–4):265–276. [PubMed]
11. Grady C. Money for research participation? Does it jeopardize informed consent? Am J Bioeth. 2001;1(2):40–44. [PubMed]
12. Ritter A, Fry C, Swan A. The ethics of reimbursing injection drug users for public health research interviews: what price are we prepared to pay? Int J Drug Policy. 2003;14(1):1–3.
13. DuBois JM, Callahan O’Leary C, Cottler LB. The attitudes of females in drug court toward additional safeguards in HIV prevention research. Prev Sci. 2009;10(4):345–352. [PMC free article] [PubMed]
14. Festinger DS, Marlowe DB, Croft JR, et al. Do research payments precipitate drug use or coerce participation? Drug Alcohol Depend. 2005;78(3):275–281. [PubMed]
15. Dunn LB. Capacity to consent to research in schizophrenia: the expanding evidence base. Behav Sci Law. 2006;24(4):431–445. [PubMed]
16. Roberts LW, Roberts B. Psychiatric research ethics: an overview of evolving guidelines and current ethical dilemmas in the study of mental illness. Biol Psychiatry. 1999;46(8):1025–1038. [PubMed]
17. Jeste DV, Depp CA, Palmer BW. Magnitude of impairment in decisional capacity in people with schizophrenia compared to normal subjects: an overview. Schizophr Bull. 2006;32(1):121–128. [PMC free article] [PubMed]
18. Luebbert R, Tait RC, Chibnall JT, Deshields TL. IRB member judgments of decisional capacity, coercion, and risk in medical and psychiatric studies. J Empir Res Hum Res Ethics. 2008;3(1):15–24. [PMC free article] [PubMed]
19. King PA. Justice beyond Belmont. In: Childress JF, Meslin EM, Shapiro HT, editors. Belmont Revisited. Ethical Principles for Research With Human Subjects. Washington, DC: Georgetown University Press; 2005. pp. 136–147.
20. Kahn JP, Mastroianni AC, Sugarman J, editors. Beyond Consent: Seeking Justice in Research. New York: Oxford University Press; 1998.
21. Paasche-Orlow MK, Taylor HA, Brancati FL. Readability standards for informed-consent forms as compared with actual readability. N Engl J Med. 2003;348(8):721–726. [PubMed]
22. Iltis AS. Timing invitations to participate in clinical research: preliminary versus informed consent. J Med Philos. 2005;30(1):89–106. [PubMed]
23. Young DR, Hooker DT, Freeberg FE. Informed consent documents: increasing comprehension by reducing reading level. IRB. 1990;12(3):1–5. [PubMed]
24. Hochhauser M. “Therapeutic misconception” and “recruiting doublespeak” in the informed consent process. IRB. 2002;24(1):11–12. [PubMed]
25. Roberts LW. Informed consent and the capacity for voluntarism. Am J Psychiatry. 2002;159(5):705–712. [PubMed]
26. Appelbaum PS. Editorial: Missing the boat: competence and consent in psychiatric research. Am J Psychiatry. 1998;155(11):1486–1488. [PubMed]
27. Campbell J. ‘We are the evidence,’ an examination of service user research involvement as voice. In: Wallcraft J, Schrank B, Amering M, editors. Handbook of Service User Involvement in Mental Health Research. New York: Wiley; 2009. pp. 113–137.
28. Gunsalus CK. The nanny state meets the inner lawyer: overregulating while underprotecting human participants in research. Ethics Behav. 2004;14(4):369–382. [PubMed]
29. Jeste DV, Palmer BW, Appelbaum PS, et al. A new brief instrument for assessing decisional capacity for clinical research. Arch Gen Psychiatry. 2007;64(8):966–974. [PubMed]
30. Agency for Healthcare Research and Quality. Researcher’s certification of consent and authorization. 2010. Accessed June 23, 2011.
31. Festinger DS, Marlowe DB, Dugosh KL, Croft JR, Arabia PL. Higher magnitude cash payments improve research follow-up rates without increasing drug use or perceived coercion. Drug Alcohol Depend. 2008;96(1–2):128–135. [PMC free article] [PubMed]
32. Del Vecchio P, Blyler CR. Identifying critical outcomes and setting priorities for mental health services research. In: Wallcraft J, Schrank B, Amering M, editors. Handbook of Service User Involvement in Mental Health Research. New York: Wiley; 2009. pp. 92–111.
33. Fisher CB. Relational ethics and research with vulnerable populations. In: National Bioethics Advisory Commission. Research Involving Persons With Mental Disorders That May Affect Decision-Making Capacity. Commissioned Papers. Rockville, MD: National Bioethics Advisory Commission; 1999. pp. 29–49.
34. Israel BA, Schulz AJ, Parker EA, Becker AB. Review of community-based research: assessing partnership approaches to improve public health. Annu Rev Public Health. 1998;19:173–202. [PubMed]
35. DuBois JM, Bailey-Burch B, Bustillos D, et al. Ethical issues in mental health research: the case for community engagement. Curr Opin Psychiatry. 2011;24(3):208–214. [PMC free article] [PubMed]