Res Nurs Health. Author manuscript; available in PMC 2012 August 1.
PMCID: PMC3272671
NIHMSID: NIHMS290342

Principles and Strategies for Monitoring Data Collection Integrity in a Multi-site Randomized Clinical Trial of a Behavioral Intervention

Celeste R. Phillips-Salimi, PhD, RN, CPON, Assistant Professor, Molly A. Donovan Stickler, MPH, Research Associate, Kristin Stegenga, PhD, RN, CPON, Nurse Researcher, Melissa Lee, CCRP, BS, Clinical Trials Project Manager, and Joan E. Haase, PhD, RN, FAAN, Holmquist Professor in Pediatric Oncology Nursing

Abstract

Although treatment fidelity strategies for enhancing the integrity of behavioral interventions have been well described, little has been written about monitoring data collection integrity. This article describes the principles and strategies developed to monitor data collection integrity of the “Stories and Music for Adolescent/Young Adult Resilience During Transplant” study (R01NR008583; U10CA098543; U10CA095861) — a multi-site Children’s Oncology Group randomized clinical trial of a music therapy intervention for adolescents and young adults undergoing stem cell transplant. The principles and strategies outlined in this article provide one model for development and evaluation of a data collection integrity monitoring plan for behavioral interventions that may be adapted by investigators and may be useful to funding agencies and grant application reviewers in evaluating proposals.

Keywords: methodological research, randomized controlled trials, cancer, adolescence, music therapy

Introduction

In 1999, the National Institutes of Health (NIH) established the Behavior Change Consortium (BCC) to foster collaboration among researchers across 15 research projects funded to evaluate behavior-change interventions in various populations (Ory, Jordan, & Bazzarre, 2002). As the group began working together, members quickly recognized several issues unique to the design and implementation of behavioral intervention studies. Since that time, the Treatment Fidelity Workgroup of the NIH BCC has placed special emphasis on identifying treatment fidelity strategies for behavioral interventions (Bellg et al., 2004; Borrelli et al., 2005). Treatment fidelity refers to: (a) maintaining the integrity of an intervention and (b) developing procedures to ensure that the intervention is evaluated appropriately (Bellg et al., 2004; Borrelli et al., 2005). Data collection and management procedures play a role in ensuring that an intervention is evaluated appropriately. The Treatment Fidelity Workgroup has thoroughly described specific treatment fidelity concepts and strategies for enhancing the integrity of behavioral interventions (Bellg et al., 2004; Borrelli et al., 2005). However, the principles and strategies for assuring the integrity of the data collected in behavioral intervention studies have not been as well described.

The NIH (1998) requires the establishment of a comprehensive plan for monitoring data and patient safety for all clinical trials that entail potential risk to participants (http://grants.nih.gov/grants/guide/notice-files/not98-084.html). According to the NIH Policy for Data and Safety Monitoring, each Institute or Center is responsible for ensuring that its sponsored clinical research activities have an appropriate plan for data and safety monitoring. Often the Institute or Center delegates responsibility to the principal investigator for developing and implementing a data and safety monitoring plan. Although the directive to develop and implement such a plan is clear, the NIH policy does not provide the “nuts and bolts” level of detail needed to guide principal investigators in developing a plan or evaluating its effectiveness.

The purpose of this paper is to provide a detailed description of the principles and strategies used to monitor data collection integrity in the study, “Stories and Music for Adolescent/Young Adult Resilience During Transplant” (SMART), a randomized clinical trial (RCT) of a music therapy intervention for adolescents and young adults (AYA) undergoing stem cell transplant (SCT) for cancer. In the following sections, we describe: (a) the features of the SMART study that needed to be considered in the data collection integrity monitoring plan; (b) the principles guiding development of the plan; (c) the specific strategies used; and (d) the helpfulness of those strategies in addressing challenges.

SMART Study Features

In order to plan effectively for the monitoring of data collection integrity, a study’s design, population, setting, personnel, and technology features all need to be carefully considered. The SMART study is a relatively complex multi-site RCT of a behavioral music therapy intervention and is supported through two NIH mechanisms: an R01 from the National Institute of Nursing Research (NINR; R01NR008583) and a limited-site cooperative group study through the Children’s Oncology Group (U10CA098543; U10CA095861). The SMART study complexities that needed to be considered in developing a data collection integrity monitoring plan are described here to provide readers with background about the ways study features influence data collection integrity planning decisions.

Design Features

The complexity of the design, including the number of groups, measures, and data collection time points, is a key feature to consider when planning ways of obtaining accurate and complete data. The two-group SMART study compared the efficacy of a therapeutic music video intervention to a low-dose control condition of audio-books. Each group received six 60-minute sessions (two sessions per week, for 3 weeks). Data were collected over nine discrete intervals: three major data collections (at baseline, prior to transplant; post-intervention, usually 3 weeks post-transplant; and 100 days post-transplant) and six brief pre-/post-session data collections (prior to and immediately following intervention sessions 2, 4, and 6). The major data collections, done via laptop computers connected to the Internet, involved the completion of 23 discrete self-report measures by the AYA and took 45 minutes to 2 hours, as well as the completion of additional demographic questions, taking approximately 15 minutes, by parents or by AYAs over 18 years of age. The pre-/post-session data collections involved five brief symptom measures that together took 5 to 7 minutes to complete. All data collection time points were completed in either an inpatient or outpatient health care setting, with the exception of the third major data collection (100 days post-transplant). Because most AYAs had been discharged home by this time point, it would have been burdensome for them to return to the clinic; they were therefore offered the option of completing the measures from home or from their local follow-up site. In such cases, measures were completed via the secure, password-protected data collection website, and the data collector was available throughout the session via telephone.
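
To make the schedule concrete, the nine intervals can be represented as a simple lookup table, as in the following sketch. The sketch is purely illustrative; the field names and structure are our own and are not drawn from the SMART study’s systems.

```python
# Purely illustrative sketch of the SMART data collection schedule
# (field names are ours, not the study's). Three major collections
# plus six brief pre-/post-session collections give nine intervals.
from dataclasses import dataclass

@dataclass(frozen=True)
class Collection:
    label: str          # human-readable time point
    kind: str           # "major" (23 measures) or "session" (5 measures)
    est_minutes: tuple  # rough completion-time range for the AYA

SCHEDULE = [
    Collection("T1: baseline, prior to transplant", "major", (45, 120)),
    Collection("T2: post-intervention, ~3 weeks post-transplant", "major", (45, 120)),
    Collection("T3: 100 days post-transplant", "major", (45, 120)),
]
for session in (2, 4, 6):                      # intervention sessions 2, 4, 6
    for phase in ("pre", "post"):
        SCHEDULE.append(Collection(f"Session {session} ({phase})", "session", (5, 7)))

assert len(SCHEDULE) == 9                      # nine discrete intervals
```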

Sample Features

The illness acuity and age of the study sample are also important to consider in planning for data collection. The target sample for SMART was AYAs with cancer, between the ages of 11 and 24, who were hospitalized for an SCT. Individuals undergoing SCT often experience serious and debilitating side effects, and the transplant course can be quite unpredictable, which makes AYAs’ ability to engage in intervention and data collection activities equally unpredictable. AYAs were likely to experience high levels of symptom distress during their participation; the implications for subject burden and participation had to be incorporated into both the clinical trial protocols and training for study team members. In addition, while maintaining standardization of approaches, research team members needed to interact effectively with AYAs across a wide developmental continuum, from early, middle, and late adolescence through young adulthood.

Setting and Personnel Complexity

The SMART study was implemented in six different states across 11 pediatric and adult hospital settings. Each study site had multiple team members serving in varied roles and coming from multiple disciplines. At different times during the implementation of the study, the research team consisted of 36 to 64 members from eight different disciplines/areas of expertise (nursing, music therapy, biostatistics, medicine, public health, social work, psychology, and certified research associates). Although the rich diversity of perspectives among the multidisciplinary team members was highly beneficial, there were also different levels of research knowledge and experience among team members. This variability required planning to ensure a baseline standard of practice in study-related roles for all team members.

Two roles were specific to the SMART study data collection: (a) data collectors, who were responsible for all data collection sessions related to the intervention, and (b) quality assurance monitors, four trained staff located at the lead study site who evaluated the audio-recorded data collection sessions and provided feedback to the data collectors across all sites. Each site had at least two data collectors. The majority of data collectors were Certified Research Associates and members of the Children’s Oncology Group. Specific details related to these roles are described under the section on strategies to monitor data collection integrity.

Data Collection Features

When planning data collection integrity monitoring strategies, the procedures for data collection need to be considered because these procedures can affect how well evaluators are blinded, how burdensome data collection is, and how consistently data collection procedures are implemented. A customized, password-protected, web-based data management system was created for the SMART study. This system was developed by two data managers assigned to our team from the Department of Biostatistics at the core site. The SMART study’s multi-leveled data collection and monitoring website included: participant data collection; self-evaluated quality assurance monitoring for both the data collector and interventionist roles; data collection and intervention monitor-assessed quality assurance; patient scheduling; and tracking modules (Musik et al., in press). Each of these secure components of the website required role-specific access permission.
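
The role-specific permissions can be pictured as a mapping from website modules to authorized roles. The following minimal sketch illustrates the idea; the module names, role labels, and logic are assumptions based on the description above, not the SMART system’s actual implementation.

```python
# Minimal sketch of role-specific access to the website's modules
# (module and role names are assumptions drawn from the text, not the
# SMART system's actual implementation).
MODULE_PERMISSIONS = {
    "participant_data_collection": {"data_collector"},
    "self_qa_data_collector":      {"data_collector"},
    "self_qa_interventionist":     {"interventionist"},
    "monitor_assessed_qa":         {"qa_monitor"},
    "patient_scheduling":          {"project_manager", "data_collector"},
    "tracking":                    {"project_manager", "qa_monitor"},
}

def can_access(role: str, module: str) -> bool:
    """Return True if the given study role may open the given module."""
    return role in MODULE_PERMISSIONS.get(module, set())

# A blinded data collector, for example, has no path into the
# interventionist's quality assurance module.
assert can_access("data_collector", "participant_data_collection")
assert not can_access("data_collector", "self_qa_interventionist")
```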

Study-related data, including data specific to quality assurance monitoring, were captured in real time. AYA data were collected via laptop computers that had multiple levels of security protection within and across sites (e.g., port blocking and Internet Protocol filters to limit network access, a secure database server that concealed its data storage location, and encryption of data passing between the web server and participants; Musik et al., in press). Electronic backup copies were created and stored nightly.

Before study implementation, the AYA data collection website was evaluated by members of a teen advisory board at one of the participating institutions, and their feedback was integrated into the final product. To minimize missing data, the data collection website contained cues (i.e., a tone sounded and a message appeared) prompting both the AYA and the data collector to review the data collection for completeness.
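
The completeness cue can be thought of as a simple check that runs before a measure is submitted. The sketch below is a hypothetical rendering of that behavior, not the website’s code.

```python
# Hypothetical rendering of the completeness cue (not the website's
# actual code): unanswered items trigger a tone and an on-screen
# message before the measure can be submitted.
def play_alert_tone() -> None:
    print("\a")              # terminal bell stands in for the tone

def show_message(text: str) -> None:
    print(text)              # stands in for the on-screen message

def on_submit(responses: dict) -> bool:
    """Return True only when every item on the measure is answered."""
    missing = [item for item, answer in responses.items() if answer is None]
    if missing:
        play_alert_tone()
        show_message(f"Please review {len(missing)} unanswered item(s): {missing}")
        return False         # keep the AYA on the page for review
    return True

print(on_submit({"item_1": 4, "item_2": None}))  # alerts, then False
```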

In addition to the SMART website, the study team used Indiana University’s collaboration and learning website (Oncourse CL), which allowed members to securely exchange information between sites and communicate with other members of the team. Information about the protocols, study roles, and minutes from meetings was housed in this password-protected site. The use of this additional web-based portal facilitated real-time communication between geographically distant sites and gave all team members round-the-clock access to the most current study-related materials.

Guiding Principles

To ensure that all of the above complexities were addressed, six principles guided the development and implementation of our data collection integrity monitoring plan:

  1. A comprehensive data collection approach — the delivery and format of the measures to the AYA and family needed to be consistent, thorough, and according to protocol.
  2. Competency and consistency in the use of technology — both the AYA and data collector were competent and consistent in use of the technology.
  3. Open communication — regular forums were established for all members of the data collection team to ask questions, share data collection experiences, and receive constructive feedback.
  4. Systematic, multi-level, and multi-source tracking of processes and accountability — an organized approach to monitoring and documenting how data were collected and ensuring data collectors were accountable for following the established protocols.
  5. Transparency of processes used by all team members — all aspects of the evaluation process were transparent and systematically evaluated by quality assurance monitors.
  6. Growth and learning opportunities for all team members — opportunities were available for team members to grow and learn from their experiences as a data collector or monitor.

These six principles guided the data collection integrity processes designed and implemented for the study.

Strategies to Monitor Data Collection Integrity

The guiding principles were operationalized in four ways: standardized data collection protocols and manuals, standardized training, ongoing team communication, and ongoing quality assurance monitoring. Table 1 summarizes the guiding principles and the strategies used to address each principle.

Table 1
Guiding Principles and Strategies Used to Address Each Principle

Standardized Data Collection Protocols and Manuals

During our pilot study to assess feasibility and acceptability of the SMART study, we developed the preliminary data collection protocols. Over the course of data collection with 12 AYAs, word-for-word scripts and other data collection procedures were systematically developed based on the insights and feedback of the graduate-level research assistant involved in collecting the data. Once funding was obtained for the larger study, the word-for-word scripts and step-by-step procedures were expanded, evaluated, refined in a group process, and used as the basis of the data collection protocol.

Table 2 shows a portion of the scripted data collection protocol, outlining the first three steps for the data collector. Under each step of the protocol, the processes were described in detail. For example, under the second heading, “Creating the Data Collection Environment,” the specifics about introducing oneself to the AYA and family and beginning the audio recording were outlined. Additionally, word-for-word scripts were provided in areas where consistent information needed to be given to the AYAs. For example, under the third heading, “Give Brief Review of Data Collection Procedures,” a word-for-word script was provided so all AYAs would know what to expect during the session. Until data collectors became familiar with the content and wording, they informed the AYAs that they would be reading some of the instructions to make sure everyone in the study had the same information.

Table 2
Portion of Scripted Data Collection Protocol

During the start-up phase of the study, standardized manuals were developed to ensure that organized and consistent information was provided during training and remained readily available post-training. The purpose of the manuals was to ensure that all team members had comprehensive handbooks containing consistent, accurate, and accessible information about the SMART study. Three separate manuals were created, specific to the roles of project manager/site principal investigator, interventionist, and data collector. The study manual for data collectors included: (a) background of the study development and funding sources; (b) organizational structure and team members; (c) overview of conceptual frameworks guiding the study; (d) brief descriptions of intervention arms; (e) policies and procedures; (f) disease and treatment clinical considerations; (g) job description; (h) description of instruments; (i) data collection protocols; and (j) quality assurance forms. Because data collectors were blinded to the AYA’s group assignment, the manual included only basic information about the intervention and control conditions (i.e., making a DVD and listening to books on tape), enough to ensure “buy-in.”

Standardized Training

All study team members participated in comprehensive standardized training completed over two days. Training included a thorough review of all components of the standardized study manuals described above. Additionally, team-building activities, role-playing exercises, technology training, and procedures for blinding were incorporated into the training.

Team-building activities were essential to establish a level of confidence in and cooperation among team members. During the 2-day face-to-face training session that took place at the core study site, time was scheduled for introductions, opportunities to visit with other team members during breaks, and shared meals. These opportunities allowed the data collectors and others to get to know the study leadership and each other prior to the implementation of the study. Team-building activities enabled members to gain comfort, rapport, and a sense of collegiality with each other that was especially valuable during the role-playing scenarios used during training.

Data collectors received role-specific training and participated in role-playing exercises that addressed situations that might be encountered during a data collection session. For example, one role-playing exercise focused on how to manage interruptions. The role-playing component gave the trainers the opportunity to evaluate the skill level of each data collector and offer anticipatory guidance, thus increasing data collectors’ comfort with approaching AYAs during a stressful time and helping to ensure successful data collection. Role-playing was particularly important because most of the data collectors had limited experience interacting directly with AYAs.

Data collectors received hands-on training on how to navigate study-related website systems for data collection and monitoring. This training took place in a computer lab and gave data collectors the chance to access and view the websites they would be using during the study. Data collectors were also given instructions on how to access the required post-session quality assurance forms (i.e., the Self Quality Assurance Form and Field Notes) on the monitoring website.

The importance of ensuring that data collectors remained blinded to each AYA’s group assignment was emphasized during training. Data collectors were informed of the rationale for blinding, provided information about how to report unblinding occurrences, and given examples of situations in which unblinding could potentially occur. Additionally, within the protocol dialogue for each data collection session, the data collector was prompted to remind the AYA not to reveal what group s/he was randomized to. Other strategies to ensure that blinding was maintained included: (a) developing different websites for data collectors and interventionists for communication purposes; (b) having data collectors wear buttons that said, “I’m an Evaluator! Please don’t tell me what group you are in”; (c) providing information to staff on the SCT units about the study and their role in helping the research team maintain blinding of the data collectors; and (d) placing a study-related care plan in the AYA’s chart that included encouragement for staff to avoid discussing the study with data collectors.

Ongoing Team Communication

Ongoing communication and training for data collectors occurred predominantly through regularly scheduled conference calls. The primary purposes of these conference calls were to: (a) provide opportunities for data collectors to share their experiences; (b) facilitate consistent communication; (c) promote group participation in problem solving; and (d) enhance data collectors’ commitment to the study. A dedicated toll-free phone number was used for the calls. Email reminders were sent to data collectors at least 3 days ahead of the scheduled call time to ensure that the meeting was incorporated into team members’ busy schedules. Each site had two to four data collectors at any given time; on average, 90% attended the conference calls.

To further facilitate ongoing communication among study team members, the Oncourse CL website described above was used to post and archive announcements and other information related to data collection, examples of role-playing exercises, team members’ contact information, and electronic copies of the data collection protocol.

Quality Assurance Monitoring

Quality assurance monitoring was another strategy used to ensure the integrity of the data collected. Data collectors were required to digitally record all of their data collection sessions. Within 24 hours of a session, data collectors listened to their recorded session and completed the self quality assurance evaluation checklist on the SMART study monitoring website.

To ensure that data collection was carried out according to protocol and was a positive experience for the AYA, the designated quality assurance monitor reviewed the data collector’s field notes and the session’s audio-recording for compliance with the protocol. The quality assurance monitor then completed the same quality assurance evaluation checklist as the data collector and provided positive and constructive feedback to the data collector. Competency within a session was achieved when every item on the checklist was marked “done,” or was marked “not appropriate” with a reason indicating why an action could not or should not have been completed (e.g., asking the AYA’s family to leave the room when they were not in the room). Feedback was provided to data collectors for the first three long data collections and the first three pre-/post-session data collections they completed. If there were no significant problems (e.g., missing important actions, failing to complete the checklist and field notes in a timely manner, or failing to provide a full narrative description of deviations from protocol), the data collector was considered competent in the data collection study procedures. Inconsistencies, inaccuracies, or deviations from the protocol were discussed with the data collector individually. When appropriate, the lessons learned were shared by the data collector on subsequent data collection team calls so all data collectors could benefit. After a data collector reached competency, he or she was no longer required to complete the checklist; however, 20% of subsequent data collection events were randomly selected for review using the same procedures for evaluating protocol adherence.
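
The review-then-sample flow can be summarized in a few lines of logic. The following sketch is our own rendering of the rules described above, not the study’s code.

```python
# Illustrative sketch of the QA review flow: a data collector's first
# three major and first three pre-/post-session collections receive a
# full monitor review; once competency is reached, roughly 20% of
# subsequent events are randomly sampled for the same review.
import random

REQUIRED_REVIEWS = {"major": 3, "session": 3}  # initial feedback phase
AUDIT_RATE = 0.20                              # post-competency sampling

def needs_qa_review(reviews_done: int, kind: str,
                    competent: bool, rng: random.Random) -> bool:
    """Decide whether this data collection event gets a monitor review."""
    if not competent:
        return reviews_done < REQUIRED_REVIEWS[kind]
    return rng.random() < AUDIT_RATE

rng = random.Random(0)  # seeded only to make the example reproducible
print(needs_qa_review(1, "major", competent=False, rng=rng))  # True
print(needs_qa_review(9, "major", competent=True, rng=rng))   # True ~20% of the time
```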

Helpfulness of Strategies to Address Ongoing Challenges

Inevitably, challenges arise when implementing a behavioral RCT. The data collection integrity monitoring strategies developed for the SMART study helped the research team identify and effectively address challenges in a timely manner. Specific challenges that arose during the course of the SMART study were related to technology, cancelled sessions, and interruptions of data collection sessions by others. To help other investigators anticipate similar issues, we describe some of these challenges and the strategies used to address them.

Technology Issues

Especially during the study start-up phase, technology issues occasionally influenced the timing, completeness, and quality of data and had the potential to dampen AYAs’ interest and engagement in completing study measures. One technology challenge was interfacing the information technology across multiple hospital and university systems, where varied security systems and access rules were in place. A second challenge was loss of the wireless connection during data collection, which was distressing to both the AYA and the data collector. To address this problem, paper copies of the exact computer screen shots of the measures were made readily available; these paper copies were presented to the AYA in the same order as on the computer and included the screen shots of encouraging messages. The AYA was given one third of the measures at a time; when these were completed, the data collector exchanged them for the next third and reviewed the completed measures for missing items while the AYA finished the next section. Data from these paper copies were then manually entered into the data management system by the site’s project manager within one hour after the data collection session and checked for accurate entry. After realizing the potential for connectivity problems, we revised the data collection protocols to instruct data collectors to access the wireless connection before entering the AYA’s room. When evaluation was done remotely, project managers worked with the AYAs before the scheduled evaluation time to assure they knew how to connect; this decreased anxiety for both the AYAs and the data collectors. For the rare times when connectivity problems could not be resolved quickly, the available paper copies of the study measures were used. These steps ensured consistency and timeliness despite the inevitable occasional technological challenges.
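
The accuracy check on manually entered paper data can be illustrated as follows. Because the protocol specified only that entries were checked for accurate entry, the double-pass comparison shown here is our assumption.

```python
# Hypothetical sketch of checking paper-fallback data for accurate
# entry, modeled as a second read-through compared item by item
# against the first keying (the double-pass approach is an assumption,
# not the SMART protocol's stated method).
def entry_discrepancies(keyed: dict, rechecked: dict) -> list:
    """Return item IDs whose first-pass value disagrees with the recheck."""
    return [item for item in keyed if keyed[item] != rechecked.get(item)]

keyed     = {"mood_1": 3, "mood_2": 5, "fatigue_1": 2}
rechecked = {"mood_1": 3, "mood_2": 4, "fatigue_1": 2}
print(entry_discrepancies(keyed, rechecked))  # ['mood_2'] -> re-examine the form
```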

Scheduling Data Collection Sessions

Scheduling evaluation sessions was a challenge for various reasons, including high levels of AYA symptom distress, competing scheduled treatments, and competing AYA activity priorities. These data collection scheduling challenges were primarily related to cancelled intervention sessions. Depending on the level of symptom distress, the AYA was either encouraged to “try” the session and stop if he or she continued to feel ill, or the session was rescheduled. To further address the challenge of symptoms, strategies for assessing the situation (i.e., communicating appropriately with the AYA) and coordinating with project managers were reinforced during training and conference calls. Competing treatments were addressed by deliberately working to get nursing staff buy-in to the significance of the study and working closely with them to identify optimal times for AYAs to complete the evaluations. Project managers also worked closely with the AYAs to assure the evaluation sessions were worked in around their other activity priorities. Missed data collection sessions were automatically tracked through a report of the timeliness of all study activities, generated in real time by the data management system.
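
The automatic tracking of missed sessions amounts to comparing the schedule against completed records in real time, as in the following sketch (the data model shown is assumed, not the SMART system’s).

```python
# Illustrative sketch of real-time tracking of missed sessions: flag
# any scheduled collection whose due date has passed without a record.
from datetime import date

def overdue_sessions(scheduled: list, completed: set, today: date) -> list:
    """scheduled: (participant_id, event_label, due_date) tuples."""
    return [(pid, event) for pid, event, due in scheduled
            if due < today and (pid, event) not in completed]

scheduled = [("AYA-012", "Session 4 (pre)", date(2006, 3, 4)),
             ("AYA-012", "Session 4 (post)", date(2006, 3, 4))]
completed = {("AYA-012", "Session 4 (pre)")}
print(overdue_sessions(scheduled, completed, date(2006, 3, 6)))
# [('AYA-012', 'Session 4 (post)')] -> follow up with the site
```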

Interruptions of Data Collection Sessions by Others

Interruptions from family members, especially parents, and from healthcare providers were inevitable, even though each received information about the need for a quiet, non-distracting, comfortable environment to ensure that the AYA could thoughtfully and independently complete the measures. Challenges included side conversations with or between parents who decided to stay during data collection, distracting noises from televisions and IV pumps, and other visitors entering the room. Data collectors developed and shared with each other a repertoire of successful strategies for managing distractions. To minimize preventable distractions, we provided regular in-services at each site to ensure that clinical staff knew about the study and how they could contribute to its success. However, the inevitability of interruptions by clinical staff for medically necessary treatments was acknowledged. Project managers worked closely with the nursing staff to keep study-related activities visible on the units.

Conclusion and Recommendations

The guiding principles and strategies of the SMART study’s data collection integrity monitoring plan provide one model that may be useful to investigators planning to evaluate a behavioral intervention across multiple sites. The following are our summary recommendations.

First, whenever possible, investigators should take the time to plan the essential components of their data collection integrity monitoring plan during pilot work. For our research team, the pilot study provided essential groundwork for developing our data collection integrity monitoring plan. It provided the opportunity to consider the ways unique features of the study would influence how data were collected and guided the specific steps needed to administer the instruments. The development of the word-for-word scripts and step-by-step procedures helped us create a consistent data collection approach that could easily be evaluated. Several of the data collectors in the larger study reported that they found the scripts and procedures extremely helpful, particularly if they had not had previous experience interacting with AYAs with cancer. The step-by-step procedures also helped minimize deviations from the protocol. Other benefits gained from the pilot study included the opportunity to create realistic role-playing exercises that were useful during the standardized training and the development of the standardized procedure for evaluating each data collection session.

The second recommendation is to ensure that data collectors have the opportunity to establish and maintain rapport and open communication with one another, quality assurance monitors, and core team leaders. In the SMART study, the standardized training session helped data collectors understand how they uniquely contributed to the study while introducing and connecting them with the other team members. This rapport, maintained through regularly scheduled conference calls, provided a non-threatening forum to openly discuss problems and share ideas about how to deal with difficult situations. The combination of both electronic and personal communication provided a unique, accessible support and feedback mechanism among data collectors, quality assurance monitors, and site core investigators. Our data collectors reported that the extensive communication opportunities resulted in a much stronger commitment to the study than they had commonly experienced when working with other clinical trials. For some data collectors, their commitment was so strong that they willingly took an active role in participating in dissemination of study results (Lee et al., 2008).

Our third recommendation is to develop a data collection integrity monitoring plan that is well organized and structured. The organization and structure of the SMART study’s data collection integrity monitoring plan were imperative in ensuring that the components of the plan were carried out appropriately. We maintained this organization and structure through the use of technology. The website monitoring system, for example, was essential in keeping track of any discrepancies related to data collection; if problems arose, the site project coordinator could quickly resolve them. The use of Oncourse CL also supported this organization by providing an easily accessible, secure website where data collectors could access the manuals, protocols, and meeting minutes.

The fourth recommendation is to use technology to minimize data collection burden and error. Although technology problems inevitably occurred, the use of technology was clearly welcomed by our participants and study team members. For most AYAs, completing data collection on the computer was more comfortable and more quickly accomplished than a paper-and-pencil format would have been. In addition, the ability to upload data directly to the secure website server as the AYA completed the measures decreased the risk of error and saved time and effort for data collectors. The convenience of remote, secure data collection was particularly helpful at Time 3, when many AYAs were home and able to complete data collection without making a trip to the study site.

The fifth recommendation is to consider partnering with cooperative groups to enhance data collection integrity. The Children’s Oncology Group enhanced our data collection integrity efforts in several ways: assuring that all participants were registered through the Children’s Oncology Group’s national database; having skilled Certified Research Associates available on-site to assist with data collection; and using the same disease- and treatment-related forms used by the Children’s Oncology Group, entered by the Certified Research Associates.

Based on the implementation of the principles and strategies described in this article, we conclude that a comprehensive data collection integrity monitoring plan is necessary in a multi-site behavioral intervention trial to ensure that documentation of the data collection process is organized coherently, to facilitate data analysis, to add credibility to findings, and to ensure that the quality of care provided to participants is consistent with established standards. Other process features of a data collection integrity monitoring plan include: (a) generating regular reports of data quality for distribution to appropriate team members; (b) identifying and correcting problems by developing case studies for ongoing education; and (c) creating a final report of all quality assurance efforts, including protocol adherence and deviations, for use during the interpretation of findings. In addition to these process features, our plan included standardized training of all study personnel; the use of database tracking of recruitment, accrual, and retention; and a two-level process for identification of adverse events.
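
As a concrete, purely hypothetical illustration of process feature (a), a periodic data-quality report might tally events, missing items, and protocol deviations per site; the field names below are our assumptions, not the SMART system’s.

```python
# Generic, hypothetical sketch of a per-site data-quality summary
# (field names are our assumptions, not the SMART system's).
from collections import defaultdict

def data_quality_report(events: list) -> dict:
    """events: dicts with 'site', 'missing_items', and 'deviation' keys."""
    report = defaultdict(lambda: {"events": 0, "missing_items": 0, "deviations": 0})
    for e in events:
        row = report[e["site"]]
        row["events"] += 1
        row["missing_items"] += e["missing_items"]
        row["deviations"] += int(e["deviation"])
    return dict(report)

events = [
    {"site": "Site A", "missing_items": 0, "deviation": False},
    {"site": "Site A", "missing_items": 2, "deviation": True},
    {"site": "Site B", "missing_items": 1, "deviation": False},
]
print(data_quality_report(events))
```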

Acknowledgments

The project described was supported by the National Institutes of Health/National Institute of Nursing Research (R01NR008583) and the National Cancer Institute (U10CA098543; U10CA095861). Additional support was provided by the first author’s Individual National Research Service Award from the National Institute of Nursing Research (F31NR009733-01A1).

Footnotes

*Part of this paper was presented at the American Psychosocial Oncology Society (APOS) 5th Annual Conference; Irvine, CA. February 2008.

Contributor Information

Celeste R. Phillips-Salimi, University of Kentucky College of Nursing Lexington, KY.

Molly A. Donovan Stickler, Indiana University School of Nursing Indianapolis, IN.

Kristin Stegenga, Children’s Mercy Hospital Kansas City, MO.

Melissa Lee, Indiana University School of Medicine Indianapolis, IN.

Joan E. Haase, Indiana University School of Nursing Indianapolis, IN.

References

  • Bellg AJ, Borrelli B, Resnick B, Hecht J, Minicucci DS, Ory M, Czajkowski S. Enhancing treatment fidelity in health behavior change studies: Best practices and recommendations from the NIH Behavior Change Consortium. Health Psychology. 2004;23:443–451. doi: 10.1037/0278-6133.23.5.443.
  • Borrelli B, Sepinwall D, Ernst D, Bellg AJ, Czajkowski S, Breger R, Orwig D. A new tool to assess treatment fidelity and evaluation of treatment fidelity across 10 years of health behavior research. Journal of Consulting & Clinical Psychology. 2005;73:852–860. doi: 10.1037/0022-006X.73.5.852.
  • Lee M, Spiegel C, Phillips-Salimi C, Donovan M, Garrison L, Ryan R, Haase JE. The processes, advantages, and challenges of serving as an evaluator on ANUR0631, Children’s Oncology Group’s first behavioral intervention study. Poster session presented at the Children’s Oncology Group Conference; Denver, CO. October 2008.
  • Musik B, Robb SL, Burns DS, Stegenga K, Yan M, McCorkle K, Haase JE. The development and use of a web-based data management system for a Phase II clinical trial of adolescents and young adults. CIN: Computers, Informatics, Nursing. (In press). doi: 10.1097/NCN.0b013e3181fcbc95.
  • National Institutes of Health. NIH policy for data and safety monitoring. 1998. Retrieved from http://grants.nih.gov/grants/guide/notice-files/not98-084.html
  • Ory MG, Jordan PJ, Bazzarre T. The Behavior Change Consortium: Setting the stage for a new century of health behavior-change research. Health Education Research. 2002;17:500–511. Retrieved from http://her.oxfordjournals.org/cgi/reprint/17/5/500