To initiate the GEM D&I Campaign, a small group of IS leaders from the Kaiser Permanente Colorado Cancer Communication Research Center, the National Cancer Institute, and Washington University in St. Louis initially added a number of constructs (n = 17) and affiliated measures (n = 63) to pre-populate the GEM D&I workspace prior to the public launch. These constructs and measures were identified using expert input and a recently published paper by Proctor et al. that identified critical constructs for health-related IS outcomes [20], followed by a focused, non-systematic search of the literature for additional measures using snowball sampling. This search identified additional relevant publications, constructs, and measures through searches of reference lists and online resources. Definitions for the initially uploaded constructs were entered by members of the research team using descriptions from the referenced publications. The research team relied on the literature and their own expertise in IS to assign measures to constructs.
With the D&I workspace pre-populated with this initial list of constructs and measures, the GEM D&I Campaign launched in March 2012 to coincide with the 5th Annual National Institutes of Health (NIH) Conference on the Science of Dissemination and Implementation (http://conferences.thehillgroup.com/obssr/di2012/about.html). At the conference exhibit booth, participants were encouraged to sign up to receive additional information via email in the weeks following the conference and to become active contributors to GEM D&I. Over 120 conference participants signed up at the exhibit booth or through promotional emails following the conference, and 77 indicated a willingness to serve as a Champion. Those who signed up represented different regions of the US and countries around the world, and their organizational affiliations ranged from large government agencies to private-sector organizations to small non-profit organizations. The largest share of interested users (61%) came from academic or research institutions, which is not surprising given the conference audience.
As indicated earlier, GEM campaigns are traditionally divided into four phases: educate, populate, rate, and celebrate. For the D&I Campaign, although dates are used to define each phase, the campaign is designed to be ongoing, and the workspace will be evaluated and assessed continually throughout its lifespan as the community iteratively identifies, refines, adds, and rates measures related to IS (Figure ).
The Grid-Enabled Measures Dissemination and Implementation Campaign.
Phase 1 initially ran from the launch in mid-March through 6 April 2012. The focus of this phase was to inform and educate potential users about the campaign’s purpose and objectives, provide an overview of GEM and how it would be used, and explain opportunities and instructions for participation. This was accomplished through a webinar training offered four times over the course of one week and conducted by NCI and Kaiser Permanente staff. Targeted emails to conference participants, as well as other colleagues and outlets, were used to promote the webinars, along with attached factsheets and information regarding the D&I measures campaign. A total of 60 users registered to participate in a webinar and 52 attended a session. Presentation materials and instructions were distributed to all interested users who were unable to attend the webinars. Following the webinars, email announcements were sent with a campaign toolkit (email invitation; instruction guide (Additional file 1); factsheets (Additional file 2); GEM template slides to invite others to participate; and webinar links).
This marked the transition into phase 2, the populate phase. These announcements were distributed to interested conference attendees, webinar participants, and additional national and international leaders in the field of D&I, who were identified by the research team as potential Champions. These Champions and interested users were asked to help promote the D&I Campaign and the importance of harmonizing IS measures by distributing materials and announcements to their colleagues and networks.
The focus of this second phase, which ran from 6 April 2012 through 14 May 2012, was to increase the number of measures and constructs and to complete information (i.e., meta-data) fields (e.g., reliability, validity, description) in the D&I workspace, through the efforts of users as well as continued work by the program team. Weekly email notices, with updates on the measures added or edited and instructions on how to add or edit preferred D&I measures in the system, were sent to individuals who signed up at the D&I Conference or participated in one of the webinars. In weeks 3 and 4 of the populate phase, the notices focused on encouraging users to upload measure instruments (e.g., the Evidence Based Practice Attitude scale) and to add new measures to constructs not yet populated (e.g., Feasibility and Acceptability). Incentives for participation (beyond the opportunity to contribute one’s opinion and feedback for a larger purpose) included being listed as a GEM D&I contributor on the NCI Division of Cancer Control and Population Sciences (DCCPS) Implementation Science website (http://cancercontrol.cancer.gov/IS/), as well as acknowledgement at an upcoming D&I conference. No other incentives were provided, and IRB approval was not required because no individually identifiable data were collected.
Phase 3, the rate phase, includes the publication of this article, with a focus on continuing the dialogue around measure consensus within the IS community by allowing users to rate and comment on measures added to GEM D&I. Users who are familiar with a measure are encouraged to enter both quantitative ratings and qualitative comments at any time during the campaign. However, the promotional push for this phase did not start until the end of the populate phase, to allow more measures and more complete measure information to be uploaded before a wide call for ratings was sent. We recognize that there will always be new measures identified, as well as measures that were not added to GEM initially; as such, the populate phase is iterative and ongoing, and individuals are encouraged to continue to add measures.
Prior to beginning the promotional efforts for the rate phase, leaders from key topic and construct areas were asked to review the measures to determine if any key IS health measures or information fields were missing that would prevent effective rating. Where gaps were identified, the D&I campaign team attempted to add additional information through further literature reviews, with an emphasis on psychometric characteristics such as validity and reliability.
During phase 3, readers are invited to rate and comment on measures. To develop rating guidelines, an open, collaborative dialogue was initiated between the core GEM D&I campaign team and lead Champions from Washington University. These discussions shaped the final proposed rating guidelines, under which users are asked to provide two ratings on separate five-point scales: whether the measure is a ‘Gold Standard Measure’ (1 = weak; 5 = strong) and whether it is a ‘Practical Measure’ (1 = low practicality; 5 = high practicality). Table  lists the criteria to consider when assigning these two ratings. The first rating, ‘Gold Standard,’ is based on traditional measurement criteria, including published data on reliability, validity, breadth of application, sensitivity to longitudinal change, and relevance to public health goals, as detailed and elaborated in Table . This rating uses criteria similar to those applied to almost all traditional health research measures. The second, ‘Practical,’ rating (a new addition to the traditional GEM, used only for D&I workspace measures at this time) considers the above criteria but gives greater weight to pragmatic features related to the probability that the measure can be used successfully in real-world settings such as primary care, state health departments, community projects, and low-resource settings, where there are many competing demands and limited research funds and/or staff to supervise data collection. Criteria for this second rating include feasibility, appropriateness, cost, and actionable results. These ratings and comments will be visible for each measure and will be continuously updated on GEM and elsewhere to serve as a guide for the IS field in identifying which IS measures are appropriate for varying contexts.
The intent is not to mandate the use of any specific subset of measures, but rather to inform selection by serving as a decision tool and provide a resource for those not familiar with measurement options to benefit from their colleagues’ experiences and knowledge.
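As an illustration of how such dual five-point ratings could be aggregated into a per-measure summary, the sketch below averages each scale across users. The record fields and averaging rule here are hypothetical, chosen for illustration; the paper does not describe GEM's actual data model or aggregation logic.

```python
from statistics import mean

# Hypothetical user ratings for one measure; each user supplies both
# a 'Gold Standard' and a 'Practical' rating on five-point scales.
ratings = [
    {"gold_standard": 4, "practical": 2},  # strong psychometrics, hard to field
    {"gold_standard": 5, "practical": 3},
    {"gold_standard": 3, "practical": 5},  # pragmatic, lighter evidence base
]

def summarize(ratings):
    """Average each five-point scale (1 = weak/low, 5 = strong/high)."""
    return {
        "gold_standard": round(mean(r["gold_standard"] for r in ratings), 2),
        "practical": round(mean(r["practical"] for r in ratings), 2),
        "n_ratings": len(ratings),
    }

print(summarize(ratings))
```

Keeping the two averages separate, rather than collapsing them into one score, preserves the distinction the campaign draws between scientific soundness and real-world practicality.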
Proposed criteria for rating dissemination and implementation measures for scientific soundness and practicality
Finally, in phase 4, the celebrate phase, all the data collected during the campaign will be summarized and disseminated through hosted webinars, IS conferences, and newsletters to spark constructive conversation about harmonizing measures. Measures rated highly on average on either the practicality or the gold standard criteria may provide a basis for reaching consensus on potential key measures for the field, in conjunction with other rigorous evaluation methods. These conversations and the resulting information can be used to guide D&I researchers and practitioners toward the measures most highly rated for different purposes and contexts, as well as to share the experiences of users who have utilized different measures. At the time of writing, the campaign was just entering phase 3 (this publication is part of the promotion for phase 3), so data on these later phases are not yet available.
Google Analytics and GEM usage reports were used to analyze progress and activity in the workspace and its associated measures and constructs. Metrics analyzed included: the cumulative number of measures and constructs in the GEM D&I Workspace; the number of views [hits] and visitors for different sections; and measure information additions, edits, and downloads on a weekly basis (Additional file 1). Finally, a report was generated to assess the number, geographical location, and institutional affiliation of GEM registrants during this first period of the GEM D&I Campaign. These metrics, and the tools used, are typical for website evaluation and tracking.
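The weekly roll-up described above can be sketched as a cumulative sum over per-week counts. The weekly breakdown below is invented for illustration; only its week-8 total of 120 measures matches a figure reported in the results, and the actual Google Analytics and GEM report formats differ.

```python
from itertools import accumulate

# Hypothetical per-week counts of measures added (week -> count).
# The weekly split is invented; only the week-8 total (120) matches
# the campaign's reported figure.
weekly_added = {1: 5, 2: 10, 3: 8, 4: 37, 5: 20, 6: 15, 7: 12, 8: 13}

weeks = sorted(weekly_added)
# Running total per week, as tracked cumulatively during the campaign.
cumulative = dict(zip(weeks, accumulate(weekly_added[w] for w in weeks)))

print(cumulative)
```

The same running-total transformation applies to views, visitors, edits, and downloads, which is how per-week export rows become the cumulative curves described in the results.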
Results to date
By week 8 of the GEM D&I Campaign, a total of 45 constructs and 120 measures had been added to the GEM D&I Workspace. These measures are listed by construct in Additional file 3. The constructs with the most associated measures were Organizational Culture (n = 13), Adoption (n = 8), and Acceptability (n = 7). Eighteen constructs had only one measure, including Reach, Organizational Change, and Fidelity. As of 14 May 2012, five constructs had no associated measures (i.e., Demographics, Dissemination, Diffusion, Implementation Climate, Middle Managers Commitment To Innovation Implementation). At present, 51 measures (43%) have uploaded instruments (full or partial). The lack of instrument uploads was mostly due to difficulty obtaining responses from the authors of proprietary measures in order to acquire the full instrument. Measures spanned a number of topic areas, including process outcomes for D&I, characteristics of the innovation, evidence, policy, process, context, and target audience characteristics. Most of the uploaded measures targeted healthcare providers (including clinicians and other types of providers; n = 37), the general population (including patients with different conditions; n = 27), or researchers (n = 18). Psychometric properties were reported for 58 measures (48%), with varying detail and completeness.
The number of visitors on the GEM D&I Home Measures pages and the number of measures and constructs added to the GEM D&I Workspace were tracked on a weekly basis and are summarized as cumulative values in Figure . We observed peak activity periods for visits on week 7 (n = 89 visitors on GEM D&I Home page), and for number of added constructs and measures on week 4 (n = 16 for added constructs and n = 37 added measures). A total of 515 views from 351 visitors to the GEM D&I Workspace home page were counted during the campaign along with 501 views from 257 visitors to the GEM D&I Measures page.
Cumulative results of the Grid-Enabled Measures Dissemination and Implementation Campaign by Campaign week.
The number of views, visitors, and downloads for each measure and construct page is provided in Additional file 3. A total of 4,721 views and 442 downloads were counted across all D&I measures. The most popular measures (based on their cumulative views and numbers of downloads) are summarized in Table . The most frequently viewed and downloaded measures were the Morisky Eight-Item Medication Adherence Scale (n = 545 views, n = 128 downloads), the Morisky Four-Item Self-Report Measure of Medication-Taking Behavior (n = 355 views, n = 96 downloads), the Acceptability of Decision Aid Scale (n = 120 views), and the Evidence Based Practice Attitude scale (n = 17 downloads).
Most frequently accessed and downloaded Dissemination and Implementation (D&I) measures during the first 8 weeks of the Grid-Enabled Measures D&I Campaign
A total of 109 individuals registered on GEM over the eight-week period of the GEM D&I Campaign. The majority of registrants (52.3%) were affiliated with academic institutions; the remainder came from cancer/medical centers (15.6%), private for-profit and not-for-profit organizations (13.8%), and federal and state government (12.8%). Registrants predominantly came from the United States (US); however, 10% were international users representing seven countries, including Canada, the United Kingdom (UK), Malaysia, and Pakistan. Some participants in the GEM D&I Campaign had previously registered as members of GEM: 36 existing members logged in during the campaign. Due to limitations of the web analytics, we are unable to determine whether these existing members or the newly registered members were associated exclusively with the GEM D&I Campaign, because a second campaign, on Survivorship Care Planning, overlapped in time with it. We counted 18 comments on the individual D&I measure and construct pages and four comments on the GEM D&I discussion board. Most of these comments concerned the usefulness and psychometric properties of measures, the future use of these measures in research, and for whom the measures will be most beneficial. Since the rating and commenting phase had not yet begun, we anticipate that the number of comments on the site will continue to increase over time.
The practicality rating dimension opens up a number of possibilities for IS, and we plan to summarize the user feedback for both research and practitioner audiences. A final opportunity that GEM has been developed to facilitate is the sharing of actual datasets based on D&I measures. If harmonized measures come into greater use, GEM could provide a neutral, trusted platform for team science.