Most reports of scientific misconduct have focused on principal investigators and other scientists (e.g., biostatisticians) involved in the research enterprise. However, by virtue of their position, research coordinators are often closest to the research field where much of the misconduct occurs.
To describe research coordinators’ experiences with scientific misconduct in their clinical environment.
The descriptive design was embedded in a larger, cross-sectional national survey. A total of 266 respondents, predominantly registered nurses, who answered yes to having first-hand knowledge of scientific misconduct in the past year provided responses to open-ended questions.
Content analysis was conducted by the research team, ensuring agreement of core categories and subcategories of misconduct.
Research coordinators most commonly learned about misconduct via first-hand witness of the event, with the principal investigator being the person most commonly identified as the responsible party. Five major categories of misconduct were identified: protocol violations, consent violations, fabrication, falsification, and financial conflict of interest. In 70% of cases, the misconduct was reported. In the majority of instances where misconduct was reported, some action was taken. However, in approximately 14% of cases, no action or investigation ensued; in 6.5% of cases the coordinator was either fired or resigned.
The study demonstrates the need to expand definitions of scientific misconduct beyond fabrication, falsification, and plagiarism to include other practices. The importance of the ethical climate in the institution in ensuring a safe environment to report and an environment where evidence is reviewed cannot be overlooked.
Research integrity and a corollary, scientific misconduct, have received increasing attention in the literature in the last 3 decades. Most of the research conducted on scientific misconduct has been focused on principal investigators (PIs) and other scientists involved in the research enterprise. By virtue of their position, research coordinators (RCs), many of whom are nurses, are often closest to the research field where much of the misconduct occurs. That is, these are the individuals who manage the large clinical studies. In this role they negotiate with patients, investigators, and research personnel involved in aspects of the study implementation, including data collection, data management, and interpretation. The experiences of these individuals with breaches in research integrity and scientific misconduct need to be investigated and shared with the larger health care community in order to ensure the safety and well-being of patients enrolled in clinical trials and the accountability of the larger organization for their safety.
Scientific endeavors are not undertaken by a single person, but rather involve a number of personnel. Two groups of research workers that have been studied independently are investigators and research coordinators. Although the role of the PI in ensuring the scientific integrity of a study is intuitive, until recently the critical role played by the RC has not been explicated. The importance of the RC to the day-to-day operations of a study and the overall successful functioning of a research team was well-described by Mueller and Mamo (2000) and Fedor and Cola (2003). As a group, RCs hold a unique position in clinical studies and can be expected to be aware of and even influence the scientific integrity with which the research is conceptualized, implemented, and disseminated. However, until recently, little information was available about their values, beliefs, practices, and experiences related to scientific integrity and misconduct.
The purpose of this study was to describe experiences reported by RCs as part of a larger study of their beliefs, values, and perceptions of influences on scientific misconduct; to examine how these factors influence their beliefs about reporting practices; and to describe how selected demographic characteristics influence perceptions of scientific misconduct. The quantitative findings related to their beliefs, values, and attitudes and the psychometrics of the survey instrument have been reported elsewhere (Broome, Pryor, Habermann, Pulley, & Kincaid, 2005; Pryor, Habermann, & Broome, 2007). As part of that parent study the RCs were asked to report whether they had experienced an instance of scientific misconduct within the last year. If they had, they were then presented with 12 questions designed to explore their experiences. The findings presented in this article focus on the qualitative data concerning those experiences with scientific misconduct.
Research integrity is defined as “a commitment to intellectual honesty and personal responsibility for one’s actions and to a range of practices that characterize responsible research conduct” (Committee on Assessing Integrity in Research Environments, 2002, p. 34). In contrast, scientific misconduct has been defined by federal regulations as fabrication, falsification, and plagiarism, as well as other practices that deviate seriously from those commonly accepted within the scientific community for proposing, conducting, or reporting research (United States Department of Health and Human Services [DHHS], 2005). Recently the Office of Research Integrity (ORI) expanded the definition of scientific misconduct to include misconduct occurring in connection with reviewing research (DHHS, 2005). Although the federal agencies restrict the definition of scientific misconduct to reviewing, reporting, and disseminating research, other experts take a broader view (Martinson, Anderson, & de Vries, 2005). The Council of Science Editors [CSE] stated scientific misconduct is “behaviour by a researcher, intentional or not, that falls short of good ethical and scientific standard” (Scott-Lichter & the Editorial Policy Committee, CSE, 2006, p. 38).
Some of the most unambiguous instances of misconduct are fabricating or falsifying data and plagiarism. A variety of other practices also deviate from accepted norms, including: (a) misrepresenting data, (b) failing to explain data weaknesses, (c) selective reporting of results, (d) misuse of funds, (e) safety violations, (f) conflicts of interest, (g) agreements with sponsors not to publish data, and (h) failing to obtain informed consent (Breen, 2003; Martinson et al., 2005; Weed, 1998; Wilmhurst, 1997).
The prevalence of scientific misconduct is unknown, and estimates are drawn from a variety of sources. One source, the ORI, conducts investigations of scientific misconduct. In 2005, the ORI investigated 30 cases, resulting in a finding of misconduct in 8 cases (ORI, 2006). In a recent report from the ORI (Reynolds, 2004), 13% of the 136 investigations that resulted in findings of misconduct from 1992 to 2002 involved clinical trials. However, there is consensus that the number of confirmed cases underestimates the actual number of cases (Glick & Shamoo, 1994).
Another source of information about the occurrence of misconduct is research studies reporting prevalence figures, although this literature is limited. In an early study, investigators reported that 27% of the 119 research project administrators surveyed knew of a case of scientific misconduct, but 42% stated that this knowledge was not known publicly (Hals & Jacobsen, 1993). These findings were confirmed by contemporaneous survey research and detailed audits of research practices of individual scientists (Shapiro, 1993; Swazey, Anderson, & Lewis, 1993; Tangney, 1987; Weiss, Vogelzang, & Peterson, 1993). In a more recent survey of authors who had published pharmaceutical trial studies, 17.4% of the 322 respondents reported knowledge of an issue of misconduct, 7.8% knew first-hand of a case of data fabrication or falsification, and 1.2% reported participating in a study in which such activities had occurred (Gardner, Lidz, & Hartwig, 2005). In a survey of early career (postdoctoral funding) and midcareer (R01 funding) scientists, 33% of the early career respondents reported engaging in scientific misconduct compared to 38% of the midcareer respondents (de Vries, Anderson, & Martinson, 2006; Martinson et al., 2005). Overall, the most frequently reported behaviors were similar in both groups, including inadequate record keeping related to research projects, dropping observations or data points from analysis, and overlooking others' use of flawed data. The early career scientists also reported using inappropriate designs, while the midcareer scientists also reported changing the design or results of a study in response to funding source pressures.
In a recent survey involving research coordinators, 301 (18%) reported a first-hand knowledge of an incident of scientific misconduct in the previous year (Pryor et al., 2007). The most frequently reported types of misconduct were disagreements about authorship, intentional protocol violations related to procedures, intentional protocol violations related to subject enrollment, and data falsification.
These studies indicate there is a continuum of integrity and misconduct. At one end, there is integrity, or practices that characterize responsible conduct. At the opposite end are intentional practices of fabrication, falsification, and plagiarism. In between are behaviors, such as inadequate record keeping or dropping observations from analysis, that can lead to flawed data. Some of the behaviors have been termed normal misbehavior (de Vries et al., 2006), but they include practices that can result in poor quality research and ultimately may undermine the public’s trust in scientists.
Many factors have been suggested as contributing to the occurrence of scientific misconduct. Generally, the potential variables that influence the occurrence of misconduct can be classified in one of three ways. These are variables that describe traits of individuals (e.g., beliefs, educational background), characteristics of the environment (e.g., available resources, monitoring, workload), and individual roles (e.g., tenured vs. not, PI vs. RC). On an individual level, pressures for promotion and tenure, competition among investigators, need for recognition, desire for financial gain, ego, personality factors, and conflicting personal and professional obligations are factors that may influence certain individuals to engage in misconduct (Davis, Riske-Morris, & Diaz, 2007; LaFollette, 1994; Rankin & Esteves, 1997; Weed, 1998).
Relevant environmental factors include the amount of oversight of the study, existence of explicit versus implicit rules, penalties and rewards attached to such rules, extent of ongoing training, the amount of regulation involved, and insufficient mentoring (Davis et al., 2007; de Vries et al., 2006). The ethical climate of the organization in which the research takes place is another environmental variable (Davis et al., 2007; Gaddis et al., 2003).
Little research has been done examining the influence of these factors for individuals in different research roles. In the survey of RCs by Pryor et al. (2007), institutional and behavioral characteristics perceived to influence scientific misconduct were reported. With regard to pressures on investigators, investigator competitiveness was rated high or very high by 54% and pressure on investigators to obtain external funding was rated high or very high by 45% of respondents. This finding is consistent with findings from a focus group study with 51 scientists that underscored the concept of the social nature of the research environment, with relationships often influenced by competition (de Vries et al., 2006).
In the RC survey, the behavioral influences identified by at least 25% of the sample as having a significant influence on misconduct fell into two categories (Pryor et al., 2007). The first category related to investigator issues, including pressures for funding, recognition, and publications. The second category concerned RC pressures related to workload, including number and intensity of protocols for which the RC was responsible and insufficient involvement or low interest of the PI. Again this finding is consistent with other surveys of clinical research coordinators; Fedor and Cola (2003) reported 23% of 139 respondents identified unrealistic workload as their largest work-related obstacle. In that survey, 24.8% of respondents reported having 7 or more concurrent studies as an average workload.
The findings reported in this paper shift the discourse from what factors may influence or impede research integrity to what occurred in instances of scientific misconduct. By describing research coordinators' experiences, an insider's account of scientific misconduct and the management of these breaches in research integrity is explored.
A cross-sectional survey design to study the beliefs, values, practices, and experiences of 1,645 research coordinators related to scientific misconduct was employed in the parent study. For the purposes of the study, RC was defined as someone who had responsibility for enrolling and following subjects. The study was approved by the University of Alabama at Birmingham and Indiana University Institutional Review Boards. Participants did not provide written informed consent, as there was concern for breach of confidentiality due to the sensitive nature of the survey topic. There was an extensive cover letter informing potential participants about the study.
The sampling strategy and survey plan for the questionnaire administered in the parent study, the Scientific Misconduct Questionnaire-Revised (SMQ-R), have been described previously (Broome et al., 2005; Pryor et al., 2007). A total of 5,306 RCs who belonged to the Association of Clinical Research Professionals were mailed the survey, and 1,645 (31%) returned a useable questionnaire. In the full sample of 1,645 respondents, 332 (18.6%) responded yes to having first-hand knowledge of an occurrence of scientific misconduct within the previous year. Of those, 266 respondents indicated they had experienced an instance of misconduct and provided complete answers to the open-ended questions related to an occurrence of misconduct (14.9%).
The sample (n = 266) was predominantly female (97%), White (95%), and composed of registered nurses (64%), with an average of 9.9 years of experience working in research. The average age was 44 years, with slightly over half of the respondents (53%) classifying themselves as an RC; 26% were research nurses, and 21% indicated other. These respondents reported responsibility for enrolling subjects in an average of 5.4 studies, and responsibility for follow-up in 8.9 studies. Few differences were noted when comparing the respondents who provided the qualitative data examined in this report (n = 266) to respondents in the larger survey sample who either did not provide qualitative data (n = 1,491) or whose responses were excluded due to incomplete answers or because their responses indicated they did not have first-hand knowledge (n = 28). Qualitative respondents were on average 2.1 years younger (t = 3.57, df = 1749, p < .001), more likely to work in an academic medical center versus other settings (chi-square = 6.8, df = 1, p = .009), and were responsible for follow-up for an average of 2.3 more studies per month (t = −2.21, df = 293.5, p = .03). The two groups did not differ on other demographic or work characteristics, including educational background (chi-square = 8.4, df = 4, p = .08) and type of position held (chi-square = 5.7, df = 2, p = .06).
The SMQ-R is a 68-item questionnaire covering 6 dimensions of scientific misconduct (perception of the workplace environment, prevalence of scientific misconduct, awareness of others about misconduct, reporting misconduct, attitudes and beliefs about misconduct, and behavioral influences on misconduct). The development and testing of the SMQ-R and its psychometric properties have been reported (Broome et al., 2005). If the RC reported having experience with an instance of misconduct within the last year, he or she was asked to complete 12 additional open-ended questions (Table 1). The questions were developed by the investigative team and were reviewed for content accuracy and relevance to the topic by 3 research coordinators who had at least 5 years of experience. Only the content analysis of the 12 open-ended items is included in this article.
Since responses to the open-ended questions were received in the mail, the handwritten responses were transcribed verbatim. All responses were checked to ensure there was no identifying information that could link responses to an individual or an institution. After verification of transcription accuracy, analysis of the data began. Given the nature of the data, including the inability to clarify or seek additional information from respondents, a content analysis was deemed most appropriate. Patton (2002) describes this as a process of qualitative data reduction and sense-making that attempts to identify core consistencies. Analysis was guided by the focus of the open-ended questions; that is, how they knew of the misconduct, who was involved, specifics of the instance, reporting or nonreporting, the consequences (if reported), and future handling of misconduct if another instance were encountered, similar to what Krippendorff (2004) refers to as problem-driven analysis.
Initially, the PI, along with two graduate students, developed preliminary codes based on approximately 50 responses. These preliminary codes were then used by the four authors to code all responses independently. This resulted in the need to expand codes and develop subcodes within categories. After further refinement, the data were recoded using the expanded codes. Three of the investigators met in a 2-day face-to-face session to reach consensus, resolve differences in how data were coded, and ensure rigor. The codes were not mutually exclusive; that is, data could be coded in more than one way or more than one type of misconduct occurred in the described situation. For example:
Study coordinator forgot to get signature of consent before procedures were completed. Went back to subject at a later date and asked them to back date their signature. Also enrolled subjects who should have been disqualified based on co-morbid diagnosis.
This content was coded as a consent violation and a protocol violation of inclusion/exclusion criteria.
Whenever possible, the exact terms used by the respondents were retained. That is, if one respondent used the term PI, that was how the perpetrator was described, while another may have used the term co-investigator for a similar position. The most common findings in each category will be presented. At times, the percentages provided do not total 100%, due either to numerous small subcategories or to insufficient detail in the response for the investigators to code with certainty (Table 2).
The most common way research coordinators reported they learned about misconduct was first-hand witness of the event.
While working in the same area, I witnessed the misconduct. An assessment was performed by a coordinator. The assessment was performed outside the window set by the protocol. The coordinator had the subject fill in a time that was within the window.
The second most common way an RC reported discovering misconduct was when he or she learned about the incident from another RC. This usually occurred when one RC spoke with another RC colleague seeking advice as to what to do. Misconduct was revealed during chart audits in 15.4% of the reported instances. Less common ways misconduct was uncovered by RCs included other research staff members on the study talking to the RC and the study sponsor identifying misconduct on review.
When the respondents were asked who was thought to be responsible for the misconduct, careful attention was paid to utilize the precise response. At times, more than one person was identified. Most commonly, the PI was identified as the responsible person (70/281; 25.1%).
The PI instructed me SAEs (serious adverse events) occurring prior to randomization do not need to be reported to the IRB. An invasive procedure is done immediately after the consent, but the patient is not randomized to study drug till a month later. Several SAEs over a ten month period were unreported.
Other responsible parties identified included: RC (56/281; 19.9%), another investigator on the study (47/281; 16.9%), a medical doctor (17/281; 6.1%), and other research staff (15/281; 5.2%). The percentage attributed to an RC as the responsible person was not a reflection of responding RCs admitting their own responsibility, but rather their report of misconduct discovered after acquiring a study from a previous RC.
This person left the site and I was assigned to take over one of her studies…She had enrolled 3 subjects. One should not have been enrolled, had an exclusion criteria…When I made the first phone call to a patient and asked about any hospitalizations was told about a TIA event 8 months earlier. This was an SAE for the study and had not been reported. Either the coordinator did not actually call the subject or if she knew about the SAE she did not report it.
Thus, misconduct and the responsible party were identified in a variety of ways with first hand witness and the PI being the most common responses.
Types of misconduct were categorized into five broad categories: protocol violations, consent violations, fabrication, falsification, and financial conflict of interest. These categories were then subcategorized with a more specific code (Table 2).
Protocol violations were the most common type of misconduct reported, comprising exactly half of the cases. Protocol violations included inclusion or exclusion violation, noncompliance with protocol, issues of patient safety due to protocol violations, and lack of PI oversight of the protocol.
Consent violations were reported in more than a quarter of the cases. Most commonly, it was described as not fully informed consent, followed by documentation of consent issues (e.g., an outdated consent form). While less commonly identified, there were instances of no institutional review board approval reported and instances described as consent being coercive. Often consent violations had more than one element involved such as not fully informed and coercion.
The investigator purposely misrepresented a protocol to a patient claiming that the experimental arm (a phase III randomized study) was superior to the control arm. The patient had already declined participation but the investigator spoke to the patient and convinced the patient to participate…Later I said “I see patient so and so changed his mind” to which the investigator said he can usually “bring them around” when they say no.
The other categories of misconduct--fabrication, financial conflict of interest, and falsification--were reported less frequently. Fabrication most commonly involved creation of data; less common was fabrication of documentation in patient records. Financial conflict of interest was reported in 5% of cases. This included subcategories of failing to disclose a conflict, double-billing for procedures (billing study account and insurance), and accepting enrollment incentives. Least common was a small number of cases that reported falsification of either patient documentation or investigator credentials.
In 70% (186/266) of cases, the RC reported the scientific misconduct. Most commonly, the RC spoke directly to the responsible person involved in the misconduct. For those RCs who spoke directly to the involved person, their responses indicated that they presented the issue in a factual manner. The responses from the responsible party varied, with half accepting responsibility and acknowledging what had occurred. A couple of examples were: “…was very responsive, took responsibility and wanted to know how to do the right thing and avoid the same problem in the future” and “…was very concerned and assured the monitor that the situation would be corrected immediately.”
Equally as common as accepting responsibility was the responsible person becoming defensive, denying the misconduct, or trivializing what the RC was saying: “Principal Investigator responded by lashing out at the many regulations governing clinical research and that physicians’ hands were being tied,” “Initially pointed out the deviation to the Principal Investigator. PI yelled and refused to comply. Reported to Executive Director. He raised voice and threatened termination,” and “They felt it was very minor-especially since the safety of the patient was not compromised. I had difficulty impressing upon them the severity of the offense.”
The RCs often reported misconduct to multiple sources. These sources included a direct supervisor, department manager, institutional review board member, clinical director, and study sponsor. However, in 30% (80/266) of cases, the RC was aware of the misconduct but did not report it to any authority. At times, the RC did not report the misconduct because he or she became aware of it through an authority who was already involved, such as the study monitor or the institutional review board. Thus, reporting to another party was not relevant. In occasional cases, the RC decided not to report because the person involved in the misconduct was involved directly in supervising him or her as an employee and this influenced their actions.
While working in the same area, I witnessed the misconduct. An assessment was performed by a coordinator. The assessment was performed outside the window set by the protocol. The coordinator had the subject fill in a time that was within the window. …I would have liked to let her know what I witnessed and discussed the situation…but she was my supervisor.
Reasons for not reporting centered on a perceived lack of risk or harm to patients. A number of respondents stated that what they would do depends on the degree of misconduct: “If I felt the misconduct was harmful to anyone, I would do something. I am not sure what; it would depend on the type of misconduct.” and “If patients are at risk, I would press the issue and report.” However, their responses do not provide enough detail to distinguish the type or degree of misconduct that would result in action, nor the level of action.
A variety of outcomes were reported by the RCs as a result of reporting the misconduct. The most common outcome was the responsible party was fired, allowed to resign, or restricted in some major capacity (60/266; 22.7%). Restrictions included being barred from conducting research at the institution or having the study halted.
I found…investigator used insufficient informed consent to enroll participants. Consent failed to provide accurate information about purpose, duration and procedures. …complied a complete list of discrepancies, discussed with compliance manager, IRB, PI and also to co-investigator who is MD of record for the study (PI was an MPH)…study placed on administrative hold… for full board review. PI replaced by MD of record…no longer allowed to be a PI on studies for two years.
The next most common outcome was resolution of the issue along with some quality improvement efforts resulting from the reporting (56/266; 21.2%). Quality improvement efforts included policy changes, frequent audits, and educational sessions. In 14.6% (39/266) of cases, the outcome was either unknown by the RC or still pending at the time of the study responses. In 14.2% (38/266) of cases, there was no penalty or action taken. In 8.1% (22/266) of cases, there was a milder penalty against the responsible party, such as a warning, a letter, or some type of counseling placed in the employee file. However, in 6.5% (17/266) of cases, the RC was either fired or quit his or her position. “Yes, I reported it to the foundation board. The board told me not to discuss my concerns with anyone. The foundation eliminated my position. The foundation paid me 6 months pay for firing.”
A majority of RCs (60%), when asked what would happen if they encountered misconduct in the future, said they would do the same (i.e., report it). Slightly more than 20% said they would handle it differently, with the most common responses being they would either report to more than one person or report to higher authorities (such as the IRB) sooner. Almost 9% said they were uncertain and would depend on the specific circumstances. A few participants stated they would do nothing, and a few stated they would resign from their position.
The findings from this study add to previous research identifying causes and types of scientific misconduct by providing first-hand accounts from research team members who are primarily nurses. The findings reported in this article augment the existing literature on factors that influence or impede research integrity with descriptions of what occurred in instances of misconduct.
A range of behaviors involved in misconduct was found, confirming that misconduct is not limited to fabrication, falsification, and plagiarism. While these three activities are highlighted in definitions of misconduct and in training related to this subject, the findings in the current study support recommendations from others that traditional definitions of falsification, fabrication, and plagiarism are insufficient (Martinson et al., 2005). The full range of scientific misconduct behaviors should be incorporated in educational programs about research integrity.
The greatest percentage of misconduct was discovered through a first-hand witness of events. This finding, in conjunction with only 15% of cases being discovered by chart audit, suggests that an approach solely based on monitoring is inadequate. Relying exclusively on monitoring will not give an accurate assessment of the prevalence of scientific misconduct.
The experiences of RCs demonstrated that protocol violations were the most common type of misconduct, specifically violations of inclusion or exclusion criteria and noncompliance with the protocol. Given the dynamic status of most patients, some might argue that a degree of clinical judgment is always involved and that the PI is best suited to exercise it. However, given the individual factors identified in the literature as influencing misconduct, such as pressures for promotion, tenure, and productivity, desire for financial gain, ego, and conflicting obligations, such decisions should not be made by any one individual. Differences in the interpretation of criteria or of the protocol would best be resolved through open discussion by the research team rather than decided by one person.
While most RCs reported misconduct, one reason identified for not reporting was that the responsible party was the RC’s boss. That finding, in combination with the finding that 6.5% of RCs who reported were fired or chose to leave the institution, demonstrates the power imbalance that can exist not only in organizations but between members of the research team. Although most institutions have established mechanisms for reporting, alternative reporting routes should be available when the person is reporting on someone in a supervisory or influential position. In the majority of cases, the RC knew what steps should be taken in the situation and did report. Also, the majority of RCs who reported misconduct believed they would report again if it were encountered. In more cases than not, reporting resulted in action by the institution. However, in about 14% of reported cases, there was no action. As mentioned before, organizations may have established, detailed procedures as to what occurs when misconduct is reported, but organizations also need to delineate how evidence is reviewed, including feedback mechanisms to the employees who are engaged in clinical research. These procedures are essential to ensure RCs report events.
There were limitations to this study. Sample selection bias existed: the sample was recruited from a research professional organization, with a significant percentage of the sample holding certification and several years of experience, and thus may not be representative of all research coordinators. Those who returned responses might not represent all those who have experienced misconduct in the last year. Although anonymity was protected, a handful of respondents commented about not wanting to provide details on paper and expressed concerns about their responses being traceable. The data gathered were limited by the nature of the design. Responses could not be clarified and further probing could not occur, as often happens in other forms of qualitative data collection. The authors initially planned a different strategy of data collection but received recommendations from institutional and scientific review boards regarding the need to ensure anonymity of respondents with the greatest certainty.
Future research involving research coordinators and scientific misconduct needs to explore further the reasons why RCs do not report misconduct. In addition, research is needed that studies coordinators and investigators not independently but jointly, as members of the research team. The complex factors that influence research integrity occur in the context of a research team and an organizational environment. These relationships should be examined in future studies.
Research coordinators hold a unique position in the research team and are often the ones who identify breaches of scientific integrity. The majority of the time coordinators choose to report these breaches and also more often than not institutions respond appropriately. However, there are exceptions. All study personnel should be well-educated regarding misconduct, clear mechanisms for reporting should exist, feedback loops about evidence review should exist, and integrity should be valued in the culture of the research environment.
This study was supported by the National Institute of Nursing Research, 5RO1NR 08802-02.
Barbara Habermann, School of Nursing, Indiana University, Indianapolis, Indiana.
Marion Broome, School of Nursing, Indiana University, Indianapolis, Indiana.
Erica R. Pryor, School of Nursing, University of Alabama at Birmingham, Birmingham, Alabama.
Kim Wagler Ziner, School of Nursing, Indiana University, Indianapolis, Indiana.