Investigators conducting research involving human subjects are obligated to safeguard the wellbeing of the study participants. Other than requiring investigators to establish procedures for ongoing monitoring and reporting of adverse events, federal regulations do not dictate how human subject safety should be ensured. A variety of data safety monitoring (DSM) procedures may be acceptable depending on the nature, size, and complexity of the study. However, practical guidance for establishing and implementing appropriate DSM plans for such studies is lacking. In this article, we provide a review of the DSM considerations associated with monitoring health remotely and describe the Pocket Personal Assistant for Tracking Health project as an exemplar of how to develop effective DSM plans for research that captures clinical data using remote health-monitoring devices. Protecting the safety and welfare of participants is one of the most important mandates for research involving human subjects. Investigators have an ethical and scientific responsibility to monitor the safety of research participants, and they typically fulfill this responsibility by monitoring and reporting adverse events.
Until recently, the term “data safety and monitoring” has often been limited to the surveillance procedures put in place to detect safety risks associated with the use of a new therapeutic device or to detect adverse events and toxicities associated with a new drug. This type of monitoring is performed to ensure the safety of the individual and the timely termination of a trial if the therapy is found to be harmful. The purpose of monitoring clinical data that are uploaded from remote health devices for research may be somewhat different, because typically the use of the remote technology itself does not increase safety risk, although this may be a consideration in some studies. However, the data that are collected may indicate that an individual is experiencing a change in clinical condition; therefore, researchers cannot ignore these data.
Not surprisingly, over the past several decades, research studies designed to collect health data remotely from subjects via the Internet, using such platforms as personal computers, wearable devices, home monitoring equipment, smart phones, and other cellular devices, have proliferated.
As the Internet has grown in popularity, so have concerns about whether such research affects human subject protections. In fact, work groups have been convened and codes of ethics have been developed to specifically address the obligations of researchers employing the new domains of electronic communication, interaction, and data collection made possible by these technologies.1–4 Although these as well as other ethical considerations of collecting health data remotely have been well articulated in the Internet research literature, little practical guidance is provided about how to establish data safety monitoring (DSM) procedures to ensure participant safety. In fact, a recent study found that although remote monitoring systems were designed to recognize worrisome values and generate alerts, explanations were rarely provided about how to develop procedures to review these data in real time and intervene if needed.5 The authors suggested that researchers develop risk-monitoring procedures, defined as automatic surveillance procedures to detect critical data values for measured variables (e.g., blood pressure [BP] and glucose measures) that may indicate worsening of an individual's clinical condition, with thresholds specified a priori. Others note that systems used for research purposes rather than clinical management must be capable of detecting changes, and that procedures need to be established to perform ongoing surveillance and act in a timely manner to ensure subjects' safety.6 Published reports involving the monitoring of physiological indicators typically state that the studies were approved by Institutional Review Boards (IRBs); however, few discuss how the researchers established DSM procedures or handled the unique DSM considerations and opportunities that arose in these trials.
Researchers who are on the front line receiving clinical data from participants have a responsibility to ensure participants' safety by setting up appropriate DSM protocols. Not only are they required to monitor for potential adverse events, but they also have additional responsibilities when the study data include electronic capture of health information that reflects a participant's condition in real time. Because of the electronic monitoring, researchers may be more proximal to the participant and may detect a problem before the participant does, because the researcher has access to the most current information about the participant's clinical status (e.g., subclinical changes or trends), or before clinicians are likely to communicate with their patients (e.g., between follow-up appointments). Because the researchers have access to these data, they are obligated to establish a plan for monitoring them to ensure that participants in the study are safe while maintaining the integrity of the study by minimizing unnecessary intervention whenever possible. These safety concerns are particularly important when the researchers who are collecting remote health data are not responsible for managing the participant's clinical care.
Researchers involved in the collection of electronic data in real time need to establish systems to detect and deal with values that may indicate potential safety risks in a responsive and timely manner. This obligation is no different from that of researchers who collect data in face-to-face or telephone interview formats. Data safety plans must include procedures to identify and evaluate the significance, accuracy, and possible explanations for critical values as well as protocols for how to appropriately respond and take action. Fortunately, most studies that include remote data capture also have the technological capability to incorporate programs for DSM surveillance, alerts, and triage within the systems.
By testing the efficacy of Personal Assistant for Tracking Health (Pocket PATH®), a mobile health application with customized data recording, trending, and decision-support programs to promote active involvement of patients in their care after lung transplantation,7 we recognized the importance of designing a comprehensive DSM plan for the Pocket PATH project that would extend beyond adverse event reporting to include monitoring remote health data to ensure participant safety. However, although the unique ethical considerations for remote health monitoring have been described in the literature, practical advice for how to develop and implement DSM plans is generally lacking. Therefore, we present the Pocket PATH project as an exemplar of how to develop effective DSM plans for research involving remote physiological health monitoring.
A full-scale, randomized, controlled trial is underway to test the efficacy of Pocket PATH by comparing self-monitoring activities, and ultimately healthcare outcomes, between participants randomized to use the Pocket PATH device versus the standard care approach for tracking their health data. Although lung transplant recipients were selected as the first test population, the Pocket PATH intervention holds promise for promoting self-monitoring behaviors for a variety of chronic illness populations.8
Prior to randomization, all patients received routine discharge instructions about the health indicators to monitor at home (e.g., temperature, spirometry, BP, and weight) and the specific values that are considered critical and should be reported to the transplant team immediately. Participants in the standard care group are expected to perform self-monitoring, record their health data on paper logs, detect critical values, and report changes to the transplant team. Participants in the Pocket PATH group are also expected to perform self-monitoring and record their health data using the custom features of the mobile device. They are also able to view graphic displays to detect trends in their data over time, receive automatic feedback messages when critical values are entered, and receive prompts to report such changes to the transplant team (see Figs. 1 and 2 for sample Pocket PATH screens used for data entry and viewing graphs). Time-stamped data, including the actual values recipients enter and a record of any feedback messages that were generated, are automatically uploaded to a Web site via the cellular network.
It is important to note that the Pocket PATH research team is not involved in the clinical management of any individuals enrolled in the study; the transplant team is responsible for the clinical care of all study patients. The purpose of the study is to evaluate whether use of Pocket PATH helps patients monitor their health and report changes in their condition more often than the patients in the standard care comparison group. Therefore, to maintain the integrity of the study, the transplant team is not informed of patients' involvement in the trial; additionally, the transplant team does not have access to self-monitoring data recorded by participants in either group between clinic visits. However, as the clinical data from participants in the Pocket PATH group are automatically uploaded to the project Web site to track patients' utilization of the device and its features, we cannot ignore potentially critical values. We have a responsibility to monitor data that may indicate an unsafe change in a patient's clinical condition and to outline steps for handling worrisome values.
The first step in developing the data safety plan for Pocket PATH was to determine which remote health indicators to monitor and the corresponding critical values for each indicator. We assembled a focus group of lung transplant clinicians to learn which indicators and values were considered critical and should prompt an automatic feedback message for patients to immediately report to the clinicians. The clinicians selected three indicators and corresponding critical values: a temperature >101°F; a systolic BP >160 or <80 mmHg or a diastolic BP >100 mmHg; and a pulse <60 or >120 beats/min. We then designed a Web-based system to capture and display the data remotely collected via the cellular network, including screens to view data logs and graphs and pages to review any critical feedback messages that had been generated.
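The clinician-defined thresholds above can be sketched as a simple screening check. The function and indicator names below are illustrative, not the actual Pocket PATH implementation:

```python
def is_critical(indicator, value):
    """Return True if a reading meets the clinician-defined critical
    thresholds: temperature in degrees Fahrenheit, BP in mmHg,
    pulse in beats/min. Indicator names are hypothetical."""
    if indicator == "temperature":
        return value > 101.0
    if indicator == "systolic_bp":
        return value > 160 or value < 80
    if indicator == "diastolic_bp":
        return value > 100
    if indicator == "pulse":
        return value < 60 or value > 120
    return False  # other indicators have no critical threshold defined
```

A check of this kind, applied at data entry, is what allows a device to generate a feedback message immediately rather than waiting for staff review.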
Participants enter data at home using the Pocket PATH device and receive automatic feedback messages if they enter values that are considered critical. The data are automatically uploaded via the cellular network to the Web-based data monitoring system every night. The principal investigator or project director reviews the DSM Web site for critical feedback messages at least every 72 hours (see Fig. 3 for a screen shot of the main screen of the Pocket PATH DSM Web site showing actual data).
The data include the device identification (ID) number, the user ID number, the indicator name, and the text of the feedback message. As shown in Figure 3, the first message listed on the DSM Web site indicates that user 203 entered a critical BP value of 161/75, which prompted the following feedback message to be automatically sent to the participant's device on 3/22/2009 at 8:38:20 AM: “You reported a high blood pressure. Wait 5 minutes, then recheck and enter your blood pressure again. If it is still elevated, report your blood pressure to the coordinator immediately.” Feedback messages appear on the Web site in chronological order. This system ensures that critical changes in a participant's condition, should they occur, prompt an automatic feedback message to the participant and are available for review on the DSM Web site in a timely manner.
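As a rough illustration of the data structure involved, each entry on such a review site could be represented by a record like the following, listed sorted by timestamp. The field names are hypothetical, not the actual Pocket PATH schema:

```python
from datetime import datetime

# Hypothetical record for one critical feedback message (field names
# are illustrative; the example values mirror Figure 3).
message = {
    "device_id": "D-017",  # device identification number (illustrative)
    "user_id": 203,
    "indicator": "blood pressure",
    "value": "161/75",
    "timestamp": datetime(2009, 3, 22, 8, 38, 20),
    "feedback": "You reported a high blood pressure. Wait 5 minutes, "
                "then recheck and enter your blood pressure again. "
                "If it is still elevated, report your blood pressure "
                "to the coordinator immediately.",
    "final_code": None,  # assigned later during DSM review
}

def chronological(messages):
    """Order feedback messages as they would appear on the review site."""
    return sorted(messages, key=lambda m: m["timestamp"])
```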
We developed an algorithm to guide decisions about how to appropriately handle any critical values to ensure participants' safety while keeping threats to study integrity to a minimum (see Fig. 4).
Every critical feedback message is evaluated and assigned a final code. According to the algorithm, if a participant reports that the value was entered in error at any time in the process, the message is coded as 00. For values not deemed erroneous, the project staff compares the value associated with the feedback message with trends in the participant's cumulative data. If subsequent data show that the value returned to an acceptable level, the message is coded as 01. If subsequent values do not return to an acceptable level, the project staff reviews the lung transplant coordinator's progress note in the Transplant Patient Management System, an electronic archive of all communications between transplant clinical providers and patients, to assess whether the critical value was reported to the clinical team. If the participant did report the critical value to a clinician, the message is coded as 02. If the clinician was already aware of the value, for example, if the participant was admitted to the hospital, the message is coded as 03. If there is no evidence that the clinician is aware of the critical value, the project staff member contacts the participant. If the participant offers a satisfactory explanation for how the critical value was handled or has a plan to contact the coordinator, the message is coded as 04. If the explanation is unsatisfactory or the participant does not plan to contact the clinician and the project staff feels the threat is critical, the transplant coordinator is notified of the change in the individual's condition and the message is coded as 05. The final codes are entered in the comments column. Referring again to the first critical feedback message shown in Figure 3, because the repeat BP returned to an acceptable level (141/70), a code of 01 was documented in the last column.
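The decision flow above can be sketched as a single cascade of checks. The argument names describe the yes/no answer gathered at each review step; this is one reading of the algorithm, not the study's actual code:

```python
def assign_final_code(entered_in_error, returned_to_acceptable,
                      reported_to_clinician, clinician_already_aware,
                      satisfactory_explanation_or_plan):
    """Assign the final DSM code (00-05) for one critical feedback message."""
    if entered_in_error:
        return "00"  # value reported as a data-entry error
    if returned_to_acceptable:
        return "01"  # subsequent values returned to an acceptable level
    if reported_to_clinician:
        return "02"  # participant reported the value to a clinician
    if clinician_already_aware:
        return "03"  # clinician already aware (e.g., participant admitted)
    if satisfactory_explanation_or_plan:
        return "04"  # participant handled it or plans to contact coordinator
    return "05"      # coordinator is notified of the change in condition
```

For the first message in Figure 3, for example, the repeat BP returned to an acceptable level, so the cascade stops at the second check and yields code 01.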
To ensure that our research team is meeting our obligation to protect participant safety, our DSM plan also includes a mechanism for an objective assessment of the adequacy of the DSM procedures and decisions that are made during the process. We chose to include this extra layer of safety by enlisting the help of a lung transplant expert. On a quarterly basis, deidentified critical feedback logs are reviewed by the expert to assess whether critical values were appropriately evaluated by the research team. Kappa coefficients were calculated to determine interrater reliability between the codes assigned by the researchers and the codes assigned by the expert reviewer. Standards for strength of agreement range from <0=poor to 0.01–0.20=slight, 0.21–0.40=fair, 0.41–0.60=moderate, 0.61–0.80=substantial, and 0.81–1=almost perfect.9
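Cohen's kappa compares observed agreement with the agreement expected by chance. A minimal computation over two raters' code lists might look like the following; this is the standard formula, not the study's analysis script, and it assumes expected agreement is below 1:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical codes (e.g., the
    researcher's and the expert reviewer's final codes 00-05)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)
```

By the bands above, a coefficient of 0.81 or higher indicates almost perfect agreement.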
In this section, we summarize the types of health indicators for which critical values were recorded, how the values were coded, and the level of interrater reliability between the expert reviewer and researcher. During the first two quarters of the study, 10 participants were randomized to the Pocket PATH group. Data regarding the number of days each participant had the device, recorded data, and the overall percentage of usage for each user are displayed in Table 1 by quarter.
Of the three health indicators that were monitored (temperature, BP, and pulse), relatively few values met the threshold for critical values. During the first two quarters, 10 patients recorded 2,503 values, of which 69 (3%) were critical and prompted feedback messages. By indicator, none of the temperature values was critical (0/800 entered), 5% of BP values were critical (40/877 entered), and 4% of pulse values were critical (29/826 entered). Of the 69 critical values, the majority (53) were ultimately coded as 01; 13 were coded as 03, and 3 were coded as 04. None was coded as 00, 02, or 05.
Interrater reliability between the expert reviewer's and researcher's coding of critical values was calculated using kappa statistics. According to the standards of strength of agreement, the kappa coefficient for the entire time period was 0.945, indicating an “almost perfect” level of interrater reliability.9
Fortunately, the number of actual critical values documented during the first two quarters of the Pocket PATH study was relatively low, but the system was effective in detecting a critical value when one occurred, and the expert review confirmed that our procedures reliably ensured subject safety. It is important to note that the data reported here are for the early months following transplant, and more critical values are likely to be entered over time as the recipients' conditions deteriorate. Further, all of the critical values were handled without having to notify a clinician about a participant's condition; therefore, clinicians remained blinded to study group assignment. Evaluation of the elements of our DSM system, our standard operating procedures, and the agreement between the expert reviewer and the coders provides evidence that our DSM protocol is an appropriate and effective system for ensuring the safety of participants in studies that involve the collection of remote health data. To assist other researchers in developing their own customized DSM protocols for studies involving remote health monitoring, the steps are outlined in Table 2.
The idea of incorporating alerts and warnings into clinical information systems or Internet-based research projects is not new. Examples of systems with alerts designed for clinical use include the weight monitoring in heart failure patients trial,10 home monitoring of patients with essential hypertension,11 and the telemonitoring of asthma patients.12 Likewise, many Internet research studies are designed to monitor participants' health conditions remotely and alert researchers. Examples include Internet-based self-help interventions for symptoms of psychological distress such as depression and anxiety, which have built-in mechanisms to screen for levels of clinical depression and anxiety.13,14 However, when the collection of remote health data is performed for research rather than clinical purposes, the need to identify potential safety risks is no less serious. Just as remote health monitoring promotes optimal clinical management by making it possible to recognize changes immediately, it also promotes immediate recognition of potential participant safety issues in clinical research. Further, as most remote health-monitoring devices are capable of, and often intended for, capturing data and ultimately sharing them with other interested parties (e.g., family members of elderly relatives and clinician caregivers), it makes sense that monitoring and detection systems be built into the design from the start.
Systems originally designed for DSM can be easily adapted for use by other parties who are ultimately interested in having access to such data, trends, and alerts. Automated alerts save time and resources by identifying values that are out of acceptable range and may warrant attention, thus avoiding manual review of all data. Patients who are in “trouble” can be immediately identified. Further, systems can be designed to facilitate evaluation of each alert, create summary reports, display data trends, and incorporate other relevant data. Records of the types and frequencies of alerts for each individual or cohort can be used to identify individual or global issues. Investigators may learn that some thresholds are too sensitive, that is, that they capture events that are not as critical as once thought, or may identify issues that commonly occur across the cohort. Because the same technologies that support extensive, ongoing remote data capture can be leveraged to survey incoming data, generate alerts, and evaluate significant or potentially critical values, these capabilities can be expanded to monitor for thresholds and critical values and thereby detect potential safety threats. DSM protocols for research involving remotely collected health data may need to be more extensive than those typically required by IRBs.3
In conclusion, our aim was to raise awareness of the unique considerations, opportunities, and challenges that accompany research involving electronic data capture from remote health-monitoring devices, not to suggest that more arduous DSM requirements be mandated for studies that involve the collection of remote health data.
This study was funded by NIH, NINR, NR 010711 (DeVito Dabbs, P.I.).
No competing financial interests exist.