The conduct of research with human participants is facing increased scrutiny from government, media, and academic sources. Research oversight is consequently increasing dramatically as education and accreditation movements gain momentum. Institutional review boards themselves are undergoing significant changes in organization and accountability, implementing new tools to monitor investigator compliance.
This article describes the causes of recent calls for increased scrutiny, the resulting trends in research oversight, and the general lack of preparation for increased costs in the public sector. These costs will be felt acutely in the forensic setting as diminishing state budgets affect hospital, university, and correctional institutions.
Increasing criticism of the research enterprise in the United States has created strong pressure to improve oversight and review. Government agencies, citizen advocacy groups, media, and researchers alike are exerting pressure that has significant implications for biomedical investigations involving human subjects. This is especially true of forensic research, which is conducted with participants who may need greater protection by virtue of their vulnerability as prisoners or chronic care patients. Federal regulations already require increased consent and monitoring procedures for research with vulnerable populations, including prisoners. A review of recent scandals, of trends in the research review system, and of their costs will clarify the influences facing research already conducted under a high degree of vigilance.
In 1996 the U.S. General Accounting Office reported that heavy workloads and a lack of resources were undermining research review by institutional review boards (IRBs).1 In 1998 the U.S. Dept. of Health and Human Services (DHHS) Inspector General described disturbing trends in commercialization of IRBs, increased IRB-shopping by researchers seeking speedy approval, and an increase in private review boards not subject to standard scrutiny.2 The recent work of the U.S. government's Advisory Committee on Human Radiation Experiments (ACHRE) identified incomplete explanations of risk and benefit in government-conducted research, and serious ethical concerns in a significant number of protocols.3,4
In acknowledging a vast array of concerns, the Department of Health and Human Services in 2001 commissioned the Institute of Medicine (IOM) to conduct a “comprehensive assessment of the national system for protection from research risks.” The results of this assessment are included in the IOM's landmark report: Responsible Research: A Systems Approach to Protecting Research Participants.5 Contemporaneously, prominent ethicists began to call for a move from “compliance to conscience,” requiring a broader sensitivity to the vulnerabilities of research subjects, and more than mere compliance with research regulations.6,7
Responding to specific negative outcomes at certain centers, the federal government suspended research at over 12 centers nationwide, and published severe criticisms of their review procedures.8,9 Now concern has spread to international AIDS research conducted by U.S. researchers under lower local standards of consent and review.
More intensive oversight of human research has also arisen from public and media responses to serious adverse events.10,11 Following the death of Ellen Roche, a healthy volunteer in a Johns Hopkins University protocol, Maryland's governor signed legislation to grant greater public access to IRB minutes, making the decision-making process more “transparent.”12 The tragic death of teenager Jesse Gelsinger in a recombinant DNA protocol at the University of Pennsylvania had already led to Congressional hearings and the first lawsuit ever against a research ethicist.13,14
In April 2002, the U.S. Senate heard testimony espousing a national Human Research Subjects Protection Act. The proposal included provisions for increased public scrutiny of research decisions, including scrutiny of specific protocols and methods.15
Under this new level of scrutiny, numerous institutions are undertaking systematic review of oversight practices and offering new standards for the ethical conduct of research. From the U.S. Department of Health and Human Services to the Institute of Medicine, scholars and leaders in the field are changing the manner in which research is conducted. The standard of presumed ethical conduct among investigators, a bulwark of research ethics for decades, may no longer apply. Focused heavily on new levels of accreditation and monitoring, a new ethic of scrutiny is overtaking human experimentation.
The Office of Human Research Protection (OHRP) within the U.S. Department of Health and Human Services oversees federally funded research within the United States. With its recently doubled staff OHRP has taken the role of federal oversight to a new level. In announcing a new quality improvement program, the OHRP notes its move from a “reactive, compliance-focused system of oversight and sanctions” to one that is “not only proactive, but interactive, and emphasizes prevention of harm.”16
The OHRP proposes an initially voluntary program of quality assurance, quality improvement, and, ultimately, continuous quality improvement (CQI). Quality assurance takes the form of an institution's self-assessment of research protections, guided by OHRP's Division of Assurances and Quality Improvement. A self-assessment tool developed by the division gauges compliance with federal regulations.
This survey tool contains detailed questions about workload and staffing resources, the number of reviews conducted during meetings, and even the length of the meetings themselves. It asks whether IRBs have their own budgets, conduct their own internal audits, and document the amount and frequency of training. It asks whether IRBs solicit written status reports from investigators during continuing review. The DHHS is concerned not merely with the time spent reviewing each protocol, but with adequate resources for staff, reviewers, and monitoring. It offers a systematic process for oversight that goes well beyond the model in place at most research institutions.
Following self-assessment, institutions are asked to interact with OHRP by written correspondence, teleconference, videoconference, or on-site consultation. Subsequent quality improvement emphasizes those mechanisms which best improve research protections. Best practices, effective procedures, and tools are to be posted by OHRP as a model for other institutions. Moreover, OHRP offers to broker networking relationships between responding institutions.
CQI (continuous quality improvement) then takes effect at institutions volunteering for this program, guided by the previous process of quality assurance and improvement. In a hint at its hopes for the future scope of this model, OHRP anticipates 60 such consultations per month from the 7600 institutions holding research compliance contracts with the federal government.
Several distinct efforts are underway nationally to promote formal accreditation of human research protection programs, one through the federal government's Veterans Administration. The 150 VA hospitals conducting human research are now contracted with the independent, non-profit National Committee on Quality Assurance (NCQA) to apply new standards for human subject protections. Organized into six domains, this self-described “systematic and comprehensive program” surveys institutional responsibilities, IRB structure and operation, consideration of risks and benefits, subject recruitment and selection, privacy and confidentiality, and informed consent.17
Prominent in NCQA's model is the creation of an institution-wide Human Research Protection Program (HRPP). This network of individuals and committees takes collective responsibility for the workings of research at the institution. It includes institutional officials, research and development committees, IRBs, IRB staff, investigators, research staff, research pharmacists and the like. An accreditation survey tests the existence and comprehensiveness of policies and outcomes at each level.
The accreditation survey assesses the institution's own evaluation of HRPP effectiveness, and its conduct of evaluation and improvement programs, including measuring, assessing, and improving compliance with HRPP policies. Accreditation evaluates monitoring of investigator performance using standards such as internal and external audits or other monitoring reports that generally lie outside current IRB practices.
Like OHRP, NCQA's accreditation survey asks whether IRBs are provided sufficient resources. Specifically it asks whether budgeting takes into account the volume of reviews and feedback from IRB members and staff – emphasizing the lack of influence perceived among those who currently conduct the nation's research review.
The organization Public Responsibility in Medicine and Research (PRIMR) provides some of the field's most widely attended educational conferences in research ethics, training hundreds of researchers and research administrators annually. An advocate of improved research standards for investigators and IRBs, PRIMR has published its own standards for IRB accreditation.18 Its support for development of Human Research Protection Programs is in line with the NCQA's and underscores the move toward a more comprehensive institutional responsibility for research conduct.
PRIMR's accreditation program, also voluntary, is in two phases: development of “objective, outcome-oriented performance standards” to serve as formal measurement criteria, and on-site visitation. PRIMR, too, endorses formal accreditation of IRBs by use of universal standards.
Among the standards PRIMR proposes is that research organizations must match the number of IRBs to the volume (and type) of research conducted. This responds directly to the concern, well established in the literature, that research review requires more time than over-worked IRBs can afford. Moreover, in another appeal to standardization, PRIMR urges “additional thought and consideration” to the need for greater uniformity at institutions with more than one IRB.
Support for quality improvement is reflected in a standard calling for “regularly assessing outcomes and improving performance” of the entire Human Research Protection Program. This move toward comprehensive assessment of research oversight includes solicitation of views of research subjects as well as views of the communities that supply them. In even stronger language PRIMR asserts that each research institution “must” (as opposed to “should”) propose its own evaluative assessments “of all aspects of the HRPP.”
Nor does PRIMR shrink from the need for more resources to finance appropriate oversight, particularly in the new age of accreditation and monitoring. Sufficient resources, staff, equipment, and technology must follow an institution's determination of what is “adequate.” The input of IRB members and staff is a specific requirement – just as in NCQA's survey.
In another standard heeding the call for greater communication with supporting communities, PRIMR requires “evidence” of communication with representatives of the “geographic and/or subject communities” that provide research subjects. Familiarity with the community's values has been a theme for many in university communities who have been surprised to discover the kind of research conducted in their own backyards. In this context, PRIMR specifically mentions research involving Native Americans, a topic of some sensitivity in the U.S. research community.
The Institute of Medicine (IOM) subsequently weighed in on research oversight as well. With a directive from the U.S. Secretary of Health and Human Services to improve “the structure and function of human research review programs,” the IOM reviewed standards proposed by both PRIMR and NCQA.19 It endorsed the latter.
First, the IOM supports the use of a broader system for overseeing conduct of human research. It endorses formation of Human Research Participant Protection Programs (HRPPPs) – similar to NCQA's HRPP. The IOM is supportive of NCQA's quality improvement program and its explicit assessment of compliance with federal regulations.
The IOM's performance assessment arm of the HRPPP is comprehensive; it encompasses monitoring of research, provision of education, and conduct of quality improvement. New elements include formation of ombudsman programs, and self-assessment of oversight programs, outcomes, and support. In this way, IOM mirrors NCQA's requirements by offering an entirely new tier of communication between IRBs and the research they review.
The IOM would like NCQA's program to go further, however. Especially in the review of investigators, the IOM calls for more than the usual documentation of informed consent and protocol review. CQI (continuous quality improvement) mechanisms are strongly recommended. In a further broadening of scope, the IOM asserts the need for assessing research sponsors themselves, and supports involvement of research participants in setting performance standards.
Moreover, the IOM calls for collection of baseline data on the current research review system and the piloting of accreditation programs following its model. They assert that formal study of the current and future state of research oversight requires federal investigation by both the General Accounting Office and the Dept. of Health and Human Services Inspector General. In this view, the systematic overhaul of research oversight will not proceed by half-measures. It will require broad-based funding, endorsement, and support.
The newly developed Association for the Accreditation of Human Research Protection Programs (AAHRPP) takes a similar view. Founded by seven eminent institutions (including the Association of American Medical Colleges, the Association of American Universities, the National Health Council, and PRIMR), this body supports a three-pronged accreditation process that includes self-assessment by institutions, on-site evaluation, and review by an appointed council. Standards are based on PRIMR's approach and are intended to conform to the IOM's recommendations. Institutions contact AAHRPP for accreditation, pay a fee, and remain current by maintaining the association's standards. Evidence of AAHRPP's increasing importance in research oversight is its recent receipt of a three-year grant from the Centers for Disease Control and Prevention (CDC), announced in a September 30, 2003 press release.20 Primary among its goals is assessment of the role of accreditation in improving human subject protections.
This change in the conduct of American research has not been lost on the nation's psychiatric organizations. The American Psychiatric Association, for example, named a task force on research ethics in the year 2000. Advised by IOM members Paul Appelbaum and Richard Bonnie, the task force is considerably influenced by the IOM's approach. Now completing its work, the task force has been acutely aware of its mandate to assure research participants are sufficiently protected by appropriate oversight and monitoring. It is deriving recommendations from first principles (i.e., respect for persons, beneficence, justice) and identifying critical elements of oversight that must be in place to protect psychiatric research participants.
The American Academy of Psychiatry and the Law (AAPL) has also addressed the advancing tide of improved research oversight. In the process of revising its ethics guidelines, AAPL is considering an ethics guideline specifically for the conduct of forensic research. Recent drafts contain wording supportive of applying the regulations governing federally funded research to all research, whether federally funded or not, and urge familiarity with the monitoring and accreditation movements. Applying the federal protections for vulnerable populations (namely, prisoners, children, and others) to non-federally funded research is an important step in standardizing oversight. Moreover, it is hoped that the vocabulary of monitoring and accreditation will now enter discussions of forensic research oversight as well.
The themes of this movement toward a more stringent research ethic include comprehensive institutional responsibility for research protections, formal accreditation and monitoring, continuous quality improvement, and the commitment of greater resources.
It is the commitment of greater resources that will be most problematic for forensic and other public sector research. As noted earlier, this is research that already requires greater procedural protections because of the vulnerability of its populations (e.g., in correctional, chronic care, and state hospital settings). Although the costs of research oversight have rarely been studied in the professional literature, there is broad consensus that those costs are high. The University of Texas, San Antonio, conducted the earliest study of IRB costs in 1979.21 The cost of its medical school IRB, drawn from internal funds, was estimated at $100,000 per year, a significant expenditure for a high-volume IRB at the time. Given the increased complexity of protocols and regulations since then, however, these costs cannot simply be translated into modern dollars.
The next study of this kind was conducted just this year within the Veterans Administration (VA).22 Using current “benchmark standards” for administering a single high-volume IRB, the VA allowed for a full-time professional staff to administer 300–350 protocols per year, a standard described but not explicitly justified by prior analysis.23 Under this standard a high-volume IRB would require 8 committee support staff, a full-time administrator, a full-time administrative assistant, and a database analyst. With a chair and 9 committee members, weekly meetings would draw portions of salary ranging from 0.05 FTE for a committee member to 0.5 FTE for the chair. The estimated institutional cost in this model is over $1.2 million per year.
These analysts observed that high-volume IRBs are more efficient than IRBs conducting less frequent reviews, with costs per protocol significantly higher among low-volume IRBs. With many small IRBs located at hospitals which draw IRB support from patient care funds, serious limitations on research oversight may result at financially strapped state institutions.
Even more troubling is the observation that these IRB costs do not begin to estimate the cost of a full oversight system's (e.g., an HRPPP's) quality management, training, and administrative missions. Commentators such as the National Bioethics Advisory Commission have already begun to call for portions of grant funds to be reserved for oversight.24 Indeed the federal government, in response to the movement to enhance research protections, published requests for applications in 2002 to provide institutional support for improved oversight (e.g., RFAs OD-02-003, OD-03-007). Others have begun to call for study of IRB deliberations themselves, to assure that new procedures and costs will provide real improvements in the conduct of research.25
There remains little literature on current costs, however, so there is even less preparation for the increases that are anticipated. At our own institution (the University of Massachusetts Medical School), the annual costs of maintaining oversight of human subjects protection are greater than $400,000. Included in this budget are salaries, travel, meetings, and memberships for three full-time staff (IRB Manager, IRB Coordinator, and Administrative Assistant) and a small portion of salary support for the IRB Chair. Less visible costs of human subjects protection arise from two other layers of responsibility: an estimated 0.75 effort of the director of the office (who facilitates education, training, and development of an internal inspection program for quality improvement) and 0.80 effort of a regulatory specialist (who assists investigators with regulatory documents and assists the director in study inspections). Beyond these obvious costs are those that are currently incalculable: the research reviews by conflict of interest and HIPAA/privacy officials, and the institutional legal department.
Within the Massachusetts Department of Mental Health, there is no budget line item for research review, with direct costs of operating the institutional IRB, including partial salaries for a director and support staff, mailing, parking, and copying fees estimated to be less than $100,000 a year.26 For an IRB reviewing 10% of the VA's number of protocols (30–40 per year) this is proportionately less than the benchmark standard ($1.2 million for the VA IRB/$120,000 in adjusted dollars for the Dept. of Mental Health). There is no separate budget for indirect costs of executive managerial staff, office, or conference room overhead. Nor is there reimbursement for the time of volunteer IRB members to travel and prepare for meetings. Area monitoring committees that volunteer to oversee ongoing research also have no budget.
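The proportional comparison above can be sketched in a few lines. This is an illustrative back-of-the-envelope calculation using the estimates cited in this article (the $1.2 million VA benchmark, the roughly 10 percent volume share, and the sub-$100,000 actual figure), not a formal cost analysis:

```python
# Pro-rating the VA benchmark cost to a smaller IRB's protocol volume.
# All figures are the estimates cited in the text.

VA_ANNUAL_COST = 1_200_000   # estimated cost of a high-volume VA IRB ($/year)
VOLUME_SHARE = 0.10          # DMH reviews roughly 10% of the VA's protocol volume
DMH_ACTUAL_COST = 100_000    # upper estimate of DMH's direct review costs ($/year)

# Benchmark cost adjusted for the smaller review volume
prorated_benchmark = VA_ANNUAL_COST * VOLUME_SHARE
print(f"Pro-rated benchmark: ${prorated_benchmark:,.0f}")  # prints $120,000

# Actual spending falls below even the volume-adjusted benchmark
print(DMH_ACTUAL_COST < prorated_benchmark)  # prints True
```

The point of the comparison is that the shortfall persists even after adjusting for volume, before counting the indirect and volunteer costs that have no budget line at all.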
The state's Department of Corrections similarly has no separate budget item for research review and oversight.27 Review is part of the work of the Director of Research and Planning, who reviews proposals, interacts with prison superintendents, and determines not only the appropriateness of the research but its feasibility within the correctional setting. There are no projections for the cost increases that will arise from accreditation and increased oversight.
It is possible that the advent of the Health Insurance Portability and Accountability Act (HIPAA), which affects research as well as clinical care, provides an indication of where greater costs will arise. Although numbers are scarce here as well, institutions incur significant costs from new data protection and transfer technologies, new practices and policies, and from new staff training requirements.28 Some have already observed significant (and costly) changes in IRB practices under HIPAA, including increased revisions and less expedited review.29 Research on costs prior to these studies has been appropriately characterized as of “limited quality and low statistical power.”30
A specific proposal from our institution suggests the scope of technology that may be necessary to absorb the anticipated changes: an electronic submission tool for IRB proposals is proposed for simplifying and auditing increases in research oversight. The proposed tool includes tracking of IRB decision-making as well as oversight mechanisms, but would require an initial investment of $100,000. The human and technical support requirements would ultimately be far greater. Nor does this speak to the education, long-term training, and equipment needed to link investigators to an established computer network.
The broader data from the public sector underscores the lack of preparation for new oversight standards. Information from the Bazelon Center for Mental Health Law – emphasized by Paul Appelbaum in his APA presidential nomination address – shows the following trends: the closure of more state hospitals in the early 1990s than in the 20 years before; 30% greater mental health spending in 1955 than presently; and an overall drop in state mental health expenditures resulting in the provision of less than 2% of mental health dollars.31,32 With psychiatric beds most vulnerable in the case of mergers and acquisitions, many hospitals have reported the intent to cut their complement of psychiatric beds.33
This short-sighted view of mental health needs provides little encouragement for the research enterprise, especially given the oversight movements now gaining momentum.
As mental health budgets are cut and professional staffing is decreased, research oversight conducted by these institutions and their research reviewers will suffer. Public sector research oversight will not be able to keep up with the important new standards. With even a $100,000 annual drain on an institution's research budget, greater oversight costs would mean significantly less support available for major grants (and hence less competitive proposals), less health services research (as on the interaction of police or judges with persons with mental illness), and fewer training slots for forensic professionals, many of whom conduct research as part of their education. At our own Center for Mental Health Services Research, for example, a $100,000 diversion to research oversight would supplant one faculty member and a research associate.34
If research in correctional settings or state hospitals is to continue, it will have to overcome not only the crisis of confidence created by research scandals and public scrutiny but a crisis of finances as well. In the current social and economic climate the promise of improved treatments, diagnoses, and risk assessments could experience a dramatic set-back. Mental health in the public sector is already over-burdened and under-funded, with the recent economic downturn resulting in dramatic restrictions in basic services.
It is therefore unlikely that forensic research will attain the standards being set by the new research ethic. The prevailing view of mental health in general and research in particular will exert considerable negative influence on forensic research – research that already requires greater protections. Incapacity to maintain high standards of research protection will consequently deter and impede new knowledge.
It will take the combined efforts of standard-bearers at governmental and professional organizations (e.g., DHHS, NCQA, PRIMR, IOM, AAHRPP, APA, AAPL) to make this danger clear. During 2002–2003 the APA already established “defunding” as a focus of its advocacy. In the APA's view and that of many others, more resources are critical to the understanding of mental illness, particularly among those most refractory to treatment. It is these vulnerable persons who are found increasingly in forensic settings. To withhold necessary funds from this effort ignores the many calls to improve the research that advances the field. Inability to keep up with the new oversight standards will ultimately fail important societal obligations to research subjects, the most vulnerable and deserving contributors to mental health science.
Dr. Candilis is a forensic psychiatrist and medical ethicist at the University of Massachusetts Medical School. His work is supported by an NIH career development award in research ethics mentored by Paul Appelbaum and Charles Lidz (K01MH01851). The views expressed in this article do not necessarily reflect those of the NIH.