The PatientViewpoint website collects patient-reported outcomes (PROs) and links them with the electronic health record to aid patient management. This pilot-test evaluated PatientViewpoint’s use, usefulness, and acceptability to patients and clinicians.
This was a single-arm prospective study that enrolled breast and prostate cancer patients undergoing treatment and the clinicians who managed them. Patients completed PROs every two weeks, and clinicians could access the results for patient visits. Scores that were poor relative to norms or substantially worse than the previous assessment were highlighted. After three on-study visits, we assessed patient and clinician perspectives on PatientViewpoint using close-ended and open-ended questions.
Eleven of 12 eligible clinicians (92%) and 52 of 76 eligible patients (68%) enrolled. Patients completed a median of 71% of assigned questionnaires; clinicians reported using the information for 79% of patients, most commonly as a source of additional information (51%). At the median, score reports identified 3 potential issues, of which 1 was discussed during the visit. Patients reported the system was easy to use (92%), useful (70%), aided recall of symptoms/side effects (72%), helped them feel more in control of their care (60%), improved discussions with their provider (49%), and improved care quality (39%). Patients and clinicians desired more information on score interpretation and minor adjustments to site navigation.
These results support the feasibility and value of PatientViewpoint. An ongoing study is using a continuous quality improvement approach to further refine PatientViewpoint. Future studies will evaluate its impact on patient care and outcomes.
Given the growing interest in using patient-reported outcome (PRO) questionnaires (e.g., health-related quality of life, symptoms) to aid in the management of individual patients, researchers and clinicians have been developing and testing ways to collect PRO data and report them to clinicians in real-time [1–2]. When PROs are collected in clinical research, clinicians generally do not see the individual results of their patients and, therefore, cannot use the data to identify and address issues prospectively. However, when PROs are used for clinical practice, the patient’s results are provided to clinicians in an effort to improve patient care and raise awareness of specific patient concerns. Studies in oncology have found that the use of PROs in clinical practice is associated with improved patient-clinician communication and can also improve patient care and outcomes [3–5]. Technology plays an important role in this intervention, as the real-time scoring and reporting of PRO results can facilitate their use [6–7]. To that end, a multidisciplinary team has been working since 2005 to develop the PatientViewpoint web tool (www.patientviewpoint.org).
PatientViewpoint allows clinicians to assign patients PRO questionnaires to complete at pre-defined intervals. Patients receive an email reminder to complete the questionnaire with a link to the website. The results from the patient’s current and previous questionnaires are displayed in graphical format. Scores that are poor in absolute terms, or represent a significant worsening from the previous assessment, are highlighted in yellow to alert clinicians to potential issues. Free-text boxes ask patients to report the issue they are most interested in discussing with their clinician and any other feedback. PatientViewpoint provides graphical score reports, and a plain-text table is imported into the electronic health record (EHR). Because score highlighting is not possible in the plain-text format, an asterisk is used instead. The system reminds patients on several occasions not to report urgent issues using PatientViewpoint, but it also has the capability to generate an automatic page or email if responses requiring immediate attention are reported (though none of the questions used in this study were considered by the clinician co-investigators to warrant such alerts).
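The two-part highlighting rule described above (poor in absolute terms, or significantly worse than the previous assessment) can be sketched as follows. The margin values and the higher-is-better score convention here are illustrative assumptions, not PatientViewpoint's actual thresholds.

```python
def flag_score(current, previous, population_norm, *,
               norm_margin=5.0, change_margin=5.0):
    """Return True if a PRO score should be highlighted as a potential issue.

    Assumes higher scores indicate better status. The margins (in score
    points) are hypothetical placeholders for the norm-based and
    change-based criteria the paper describes.
    """
    poor_vs_norm = current < population_norm - norm_margin
    worsened = previous is not None and current < previous - change_margin
    return poor_vs_norm or worsened


def render_plain(score, flagged):
    """Plain-text EHR reports cannot use yellow highlighting,
    so flagged scores are marked with an asterisk instead."""
    return f"{score}*" if flagged else f"{score}"
```

For example, a score well below the norm on a first assessment (no previous score) would be flagged, while a stable near-norm score would not.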
After developing a prototype website and conducting usability testing, we sought to conduct the initial pilot-test of PatientViewpoint in practice to assess its feasibility and value. We examined the web tool’s use, usefulness, and acceptability.
This was a single-arm prospective study conducted between July 2010 and January 2011 at the Johns Hopkins cancer center. The study was approved by the School of Medicine Institutional Review Board, and all subjects provided written informed consent.
Patients were enrolled during the first two months of the study and followed for three on-study visits. We enrolled patients with breast or prostate cancer (any stage) who were currently undergoing medical oncology treatment and had visits at least monthly during the study period. We also required that patients have had at least one on-treatment visit prior to enrollment to provide them a basis for comparison of how the PatientViewpoint website affected their care. Other patient eligibility criteria included age 21 years or older, physically and cognitively able to complete the questionnaire, able to read English, and able and willing to provide informed consent. Patients were not required to have a computer with Internet access and had the option of completing their surveys using a laptop computer in the clinic prior to their visits. There was no target sample size for patient subjects; this pilot-test evaluated the proportion of eligible patients approached during a two-month recruitment period willing to participate.
We enrolled clinicians as study subjects to capture their perspectives on the PatientViewpoint website. We recruited medical oncologists and nurse practitioners who treat breast and prostate cancer patients at two ambulatory practice locations (one urban, one suburban). We limited recruitment to the 12 most active patient-care clinicians (as opposed to research-focused clinicians), who see approximately 80%–95% of breast and prostate cancer patients at our Cancer Center.
Prior to patient enrollment, each clinician received individual training on PatientViewpoint, including how to access results via the PatientViewpoint web system and via the EHR. We provided one-page summaries of the PRO questionnaires, including their content and how to interpret scores.
Patients currently undergoing treatment and scheduled to have a medical oncology visit were informed of the study, and interested patients were evaluated for eligibility. Patients who enrolled in the study completed a brief personal data questionnaire. In addition to reporting demographic information (e.g., age, race), we also asked patients to report their access to a computer with an internet connection (no access, dial-up/low-speed access, high-speed access) and computer use (regular, occasional, rare, never). The research coordinator trained them on the website’s operations and assigned them a username and password. Clinicians provided basic clinical data (e.g., type of cancer, extent of disease, performance status) for enrolled patients.
While PatientViewpoint allows clinicians to select which questionnaires to assign to which patients, in this study, we required all breast and prostate cancer patients to complete the same questionnaires. Specifically, all patients completed the version 1 short forms for six Patient Reported Outcomes Measurement Information System (PROMIS) domains: physical function, pain interference, satisfaction with social roles, fatigue, anxiety, and depression [9–10]. Breast cancer patients also completed the breast-cancer specific module from the European Organization for Research and Treatment of Cancer Quality-of-Life Questionnaire (BR23), and prostate cancer patients completed the Expanded Prostate Cancer Index Composite (EPIC) short-form [12–13]. Patients were assigned to complete the PRO questionnaires every two weeks, regardless of visit frequency. Three days prior to the target completion date, they received an email prompt with a link to the website. The window for questionnaire completion closed three days after the target completion date. If patients did not complete the PRO questionnaire prior to their visit, the research coordinator met with them and offered them the opportunity to complete the questionnaire in the clinic. If PRO questionnaires were not completed, the research coordinator collected information on reasons for noncompletion.
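The assessment schedule above (biweekly target dates, a reminder email three days before each target, and a completion window closing three days after) can be sketched as a simple date calculation. The function name and start-date handling are illustrative, not the system's implementation.

```python
from datetime import date, timedelta

def assessment_schedule(start: date, n_assessments: int):
    """Generate (reminder, target, window_close) dates for biweekly PROs.

    Per the protocol described in the text: targets fall every two weeks,
    the email reminder goes out 3 days before each target date, and the
    completion window closes 3 days after it.
    """
    schedule = []
    for i in range(n_assessments):
        target = start + timedelta(weeks=2 * i)
        schedule.append((target - timedelta(days=3),   # reminder email
                         target,                        # target date
                         target + timedelta(days=3)))   # window closes
    return schedule
```

For a patient starting July 5, 2010, the first reminder would go out July 2 and the first window would close July 8, with the next target two weeks later on July 19.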
The research coordinator alerted clinicians of participating patients on their schedule for that day by a written note or email, which included instructions on how to view the results in both the EHR and PatientViewpoint. Clinicians decided whether to look at the patient’s PRO results in hard copy, via the website, via the EHR, or not at all.
In addition to personal and clinical data forms, we collected (1) usage information from the website, (2) close-ended patient and clinician feedback forms, and (3) open-ended patient and clinician exit interviews. The feedback forms and interviews were collected at the third study visit. The close-ended data were analyzed descriptively using frequencies and percentages, with quotes from the open-ended questions providing supplementary information for illustrative purposes.
We tracked the number of subjects who agreed to participate and the number who dropped out. We assessed the percentage of PRO questionnaires that were completed, patients who completed the PRO questionnaires using the website offsite versus in-clinic, and rate of missing responses. Finally, we calculated the length of time it took patients to complete the PRO questionnaire. From the research coordinator’s records, we monitored how often patients and clinicians required assistance, the number of technical difficulties, and the reasons why PRO questionnaires were not completed.
The 15-item patient feedback form, adapted from Basch et al., asked patients about the questionnaire length and whether the website was useful, facilitated discussions with their clinicians, improved the quality of their care, and was easy to use and understand. The patient interviews, based on Taenzer et al., collected information on what, if any, action was taken regarding PRO domains that were highlighted as potential issues, and how the website could be improved.
Clinicians completed a 9-item feedback form based on those used in previous studies [4, 16–17]. This questionnaire asked whether the clinician looked at the summary report prior to and/or during the visit, looked at the summary report in hard copy or through the EHR, discussed the results with the patients, and thought the summary report was helpful in communicating with and managing the patient. The clinician interviews, based on Velikova et al., elicited feedback on whether the PRO data provided new and useful information, how having the PRO data affected the visit, whether the summary reports were easily interpretable, and suggestions for improving the website.
There were 76 eligible patients invited to participate in the study; 52 (68%) enrolled and 47/52 (90%) completed the study. The other 5 patients withdrew (n=1), were deemed ineligible after enrollment (n=1), changed treatment schedules (n=2), or died (n=1). All but one of the clinicians approached agreed to participate (n=11).
Of the 47 patients who completed the study, 72% had breast cancer (n=34) and 28% had prostate cancer (n=13) (Table 1). Overall, the median age was 58 years, 18% of the sample was non-White, and 30% had not graduated from college. Clinician-rated performance status was 0 (indicating fully active) for 57% of the patients, and the majority of patients had metastatic disease. Three-quarters of patients were undergoing chemotherapy, and hormonal therapy was common among the prostate cancer patients (85%). While most subjects were regular computer users (79%) and had high-speed internet access (91%), there were subjects who never used computers (9%) and who had no or low-speed internet access (8%).
The 24 patients who declined were not interested (n=23) or too busy (n=1). These patients had a higher median age (64.5) and were more likely to be non-White (33%) than enrolled patients. While we did not collect information on computer usage from the invited subjects who declined, one patient spontaneously reported discomfort with computers as the reason for refusing.
Based on the number of patients enrolled in the study and the length of follow-up for each, we expected 224 questionnaires, of which 190 (85%) were completed. Of the expected surveys, 56 were not completed at home: 28 due to technical problems, 6 because patients reported being too ill, and 22 for other reasons. Of these 56 surveys not completed at home, 34 were not completed in the clinic either (and were therefore missing): 16 due to technical problems, 1 because the patient reported being too ill, and 17 for other reasons (primarily difficulty finding time). Among the surveys that were completed in the clinic, 2 surveys experienced technical problems, 4 were interrupted, and 2 patients required assistance. Technical problems included issues with the email notification, inability to complete the survey in the clinic when the completion window was not open, and the results not synchronizing with the EHR. Fixes for all of these issues were developed and implemented during the pilot test.
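A quick tally confirms that the completion accounting reported above is internally consistent:

```python
# Figures taken directly from the study's questionnaire accounting.
expected = 224
not_completed_at_home = 28 + 6 + 22   # technical, too ill, other reasons
missing = 16 + 1 + 17                 # of those, also not done in clinic
completed = expected - missing

assert not_completed_at_home == 56
assert missing == 34
assert completed == 190
assert round(100 * completed / expected) == 85  # reported completion rate
```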
For each patient, we calculated the number of questionnaires we would have expected them to complete and the percentage of those questionnaires that were received. The median percentage of expected questionnaires completed by individual patients was 71%. The majority of questionnaires were completed offsite (n=160; 87%), with 28 patients (60%) only completing questionnaires offsite. Two patients reported having no internet access, and a total of 3 patients (6%) only completed questionnaires in the clinic. Two patients completed no questionnaires. PROMIS questionnaires completed at home were more likely to have no missing items (84%) compared to PROMIS questionnaires completed in the clinic (67%). Among breast cancer patients, it took a median (range) of 6 (2–12) minutes to complete the PROMIS items and 3 (1–11) minutes to complete the BR23. Among prostate cancer patients, it took 5 (3–12) minutes to complete the PROMIS items and 4 (2–6) minutes to complete the EPIC.
The median number of potential issues identified by the PROs was three, of which a median of one was discussed (Table 2). Among breast cancer patients (n=34), the most frequently highlighted domains were systemic therapy (i.e., symptoms and side effects of therapy; n=18) and sexual function (n=17). Prostate cancer patients’ (n=13) most frequently highlighted domains were sexual function (n=10) and hormonal function (i.e., hot flashes, breast issues, fatigue, depression, body weight; n=9). Of the potential issues identified, clinicians were most likely to discuss systemic therapy (89%), pain interference (80%), and fatigue (80%) and least likely to discuss sexual function (6% of breast cancer, 10% of prostate cancer). For issues that were discussed, we asked patients to report who initiated the discussion. Body image, hair loss, future perspective (i.e., worry about future health), and hormonal function were the only topics where patients were more likely than clinicians to initiate discussion. The most common actions taken in response to identified issues were providing information and/or advice.
Patient feedback about the intervention was generally positive (Figure 1). Patients rated PatientViewpoint’s usability highly, with over 90% responding “strongly agree” or “agree.” Almost half of patients reported that clinicians used the information for their care (46%), and many reported that care quality improved (39%). In the subsample of patients for whom the clinicians reported examining the score reports for the visit (n=23), the percentage of patients who strongly agreed/agreed to all Feedback Form questions was equal or up to 10% higher when compared to the overall sample, suggesting that whether clinicians examined the results affected how patients rated the intervention.
For the majority of patients (n=24; 51%), clinicians viewed the results in the EHR. Only a minority of the time (n=8; 17%) did clinicians refer to a hard-copy print-out of the results. For the remaining 32%, clinicians viewed the results in PatientViewpoint or not at all (though we do not have the specific numbers for each). On only 21% of the 47 feedback forms did clinicians report not using the PRO information. Clinicians reported that they used the questionnaire to provide additional information (51%), to confirm knowledge of patient’s problems (49%), to provide an overall assessment of the patient (43%), to identify issues to discuss (38%), and to contribute to patient management (30%). The feedback items clinicians were most likely to agree or strongly agree with were that the intervention helped them identify areas of concern (58%) and improved care quality (54%) (Figure 2). Clinician ratings on the Feedback Forms for encounters in which they reported examining the patient’s score (n=23) were substantially higher. In many cases, the absolute percentage of ratings of strongly agree/agree increased 25%–30%.
The exit interviews conducted with patients and clinicians provided additional information regarding their perspectives on PatientViewpoint (Table 3). On the positive side, patients reported that the site was well-organized and laid-out and provided an opportunity to raise issues that would otherwise have gone unaddressed during appointments. Some of the negative feedback included questioning whether their provider looked at the results, noting that the intervention could be impersonal, and indicating that the score reports only identified issues that were already known. Recommendations for improving the intervention included tailoring questions to be applicable to the individual and providing more explanation about the score meaning, including having higher scores consistently indicating either better or worse status.
Clinicians reported that the intervention helped them identify and address issues that might have otherwise gone unnoticed, made patients more engaged in their care, and enabled standardized tracking of patients’ PROs. However, in some cases, clinicians noted that the scores only confirmed what was already known. Clinicians strongly preferred the graphic score reports in PatientViewpoint but preferred the ease of access of the plain-text score reports within the EHR. In both cases, they wanted more explanation about the PRO item content and score meaning.
This study represents the first pilot-test of the PatientViewpoint PRO web tool in practice, and we sought to assess its use, usefulness, and acceptability from the perspectives of both patients and clinicians. The results were generally positive, beginning with participation by nearly 70% of eligible patients and greater than 90% of clinicians approached. Questionnaire completion rates were high, with some improvement as technical issues were identified and addressed.
PatientViewpoint allows patients to complete questionnaires from home, a potential advantage in terms of convenience and data completeness compared to in-clinic administrations that need to be fit into busy visit schedules. In fact, we found that the majority (87%) of questionnaires was completed off-site, and missing data were less common in off-site questionnaires. Patients and clinicians both reported that the PatientViewpoint intervention was useful, particularly in cases in which the clinician viewed the results for the visit. As one indication of PatientViewpoint’s value, the Breast Cancer Program has decided to use it to collect PROs as part of a patient registry. While the Program considered other data collection tools, the ability to track patients’ PRO status in PatientViewpoint and the EHR for individual patient management was considered desirable.
While patients and clinicians tended to rate the intervention favorably, the issues identified by PatientViewpoint as being potentially problematic were discussed and addressed to only a limited degree. Although there are likely a variety of reasons for this finding, one explanation is that physicians were unsure how to address these identified issues [19–20]. To address this problem, a multidisciplinary panel has developed consensus suggestions for how clinicians can respond to issues identified as being potentially problematic. Clinicians click on a “What can I do?” link to access these suggestions.
Another study finding is the importance of questionnaires tailored to patients’ (and clinicians’) specific needs. While PatientViewpoint has the capability to administer questionnaires selected from a library, at intervals tailored to the patient, in this study, all breast and prostate cancer patients completed the same questionnaires on the same schedule. We expect clinicians will require time and experience using the PROs before they feel comfortable determining which questionnaires, completed at which intervals, are most useful to them. Additional information may need to be made available to facilitate this choice. Another innovation that could improve the relevance of the questionnaires is the use of computer adaptive tests (CATs), where a patient’s responses to previous questions inform selection of the next question [22–23]. While we hope to incorporate CATs in PatientViewpoint in the future, the current system cannot administer adaptive questionnaires.
Finally, both patients and clinicians commented on the difficulty of interpreting the score reports and understanding what the numbers mean. We have modified the score report format so that the y-axis now indicates with an arrow whether higher scores are better or worse. We have also added score interpretation guides, which can be accessed by clicking a “Score Meaning” link, and descriptions of the domain content, which can be accessed by clicking a “What Is This?” link. We are working to incorporate the graphics-driven and hyperlinked PatientViewpoint score report, rather than the plain-text document, in the EHR to facilitate use of these functions by clinicians.
The approach taken in this pilot-test is also noteworthy. In particular, the research team closely monitored the intervention’s use by patients and clinicians. The study coordinator tracked whether patients completed questionnaires and whether clinicians examined the results and, when they did not, collected information on why not. This research strategy provided useful information for improving PatientViewpoint, but the study coordinator’s involvement is a deviation from routine care. Also, a limitation of our data collection was that the Clinician Feedback Form only asked whether clinicians viewed the results in the EHR or in hard copy and did not include viewing the results in PatientViewpoint as a specific category. During the pilot study, PatientViewpoint did not track page views.
We are now conducting an additional pilot-test of PatientViewpoint using a continuous quality improvement approach, rather than an experimental design. We are using less intrusive research methods and observing how patients and clinicians use the intervention in circumstances more reflective of routine care outside a research setting. This ongoing study will provide valuable information on how improvements made to the system (e.g., links for “What Can I Do?” and “Score Meaning”) are used by patients and clinicians, as well as identify future refinements that could improve PatientViewpoint’s feasibility and value. Finally, the initial pilot-test and the ongoing study have relied on interview data to identify whether PatientViewpoint led to changes in patient management. Having now established the use, usefulness, and acceptability of PatientViewpoint, we expect to examine how PatientViewpoint affects patient care and outcomes in future research.
Funding Source: Development of PatientViewpoint has been supported by the National Cancer Institute (1R21CA134805-01A1; 1R21CA113223-01).
We appreciate Sharon White and Michelle Gutierrez for their meticulous study coordination. We also would like to thank Michelle Campbell, Ray Hamann, and colleagues for their technical development of PatientViewpoint. We are most grateful to all the patients and clinicians who agreed to participate in this pilot test.
Conflicts of Interest: The authors have no relevant conflicts of interest.