The Information Assessment Method (IAM) allows clinicians to report the cognitive impact, clinical relevance, intention to use, and expected patient health benefits associated with clinical information received by email. More than 15,000 Canadian physicians and pharmacists use the IAM in continuing education programs. In addition, information providers can use IAM ratings and feedback comments from clinicians to improve their products.
Our general objective was to validate the IAM questionnaire for the delivery of educational material (ecological and logical content validity). Our specific objectives were to measure the relevance and evaluate the representativeness of IAM items for assessing information received by email.
A 3-part mixed methods study was conducted (convergent design). In part 1 (quantitative longitudinal study), the relevance of IAM items was measured. Participants were 5596 physician members of the Canadian Medical Association who used the IAM. A total of 234,196 ratings were collected in 2012. The relevance of IAM items with respect to their main construct was calculated using descriptive statistics (relevance ratio R). In part 2 (qualitative descriptive study), the representativeness of IAM items was evaluated. A total of 15 family physicians completed semistructured face-to-face interviews. For each construct, we evaluated the representativeness of IAM items using a deductive-inductive thematic qualitative data analysis. In part 3 (mixing quantitative and qualitative parts), results from quantitative and qualitative analyses were reviewed, juxtaposed in a table, discussed with experts, and integrated. Thus, our final results are derived from the views of users (ecological content validation) and experts (logical content validation).
Of the 23 IAM items, 21 were validated for content, while 2 were removed. In part 1 (quantitative results), 21 items were deemed relevant, while 2 items were deemed not relevant (R=4.86% [N=234,196] and R=3.04% [n=45,394], respectively). In part 2 (qualitative results), 22 items were deemed representative, while 1 item was not representative. In part 3 (mixing quantitative and qualitative results), the content validity of 21 items was confirmed, and the 2 nonrelevant items were excluded. A fully validated version was generated (IAM-v2014).
This study produced a content validated IAM questionnaire that is used by clinicians and information providers to assess the clinical information delivered in continuing education programs.
This paper reports the content validation of an original method for assessing, from their perspective, the value of educational material delivered to health professionals. Numerous clinically relevant research studies are published daily; thus, it is impossible for health professionals to filter and absorb all this information. Educational programs strive to overcome this issue through Web-based information resources and email alert services. In particular, clinical emailing channels deliver educational material to health professionals, such as a Daily POEM research synopsis (POEM stands for Patient-Oriented Evidence that Matters) or a Highlight (a weekly email with evidence-based treatment recommendations) [1-3]. As shown in an earlier article, family physicians perceive advantages from receiving educational material via email.
The purpose of this study was to validate a method for assessing the perceived value of information (educational material) delivered by email from the perspective of family physicians (information users). The Information Assessment Method (IAM) is used by more than 15,000 Canadian pharmacists and physicians as a continuing education tool for assessing (reflective learning) outcomes of information delivered in educational programs. The physicians described in this study participate in the longitudinal Daily POEMs program, sponsored by the Canadian Medical Association. This program is certified for continuing medical education credit by the College of Family Physicians of Canada and the Royal College of Physicians and Surgeons of Canada. Physicians earned credits for each completed IAM questionnaire (reflective learning activity), and we used these IAM ratings for this validation study. Saracevic and Kantor defined the perceived value of information as an “Acquisition-Cognition-Application” process; subsequently, we linked this process to 4 levels of outcome of information in a theoretical model, which has been operationalized by the IAM questionnaire. Presented elsewhere, the ACA-LO (Acquisition Cognition Application – Levels of Outcome) model explains the “value” of information, that is, how information is valuable from the users’ viewpoint [6-8]. Health professionals subscribe to an alerting service and then acquire a passage of text (acquisition), which they read, understand, and integrate (cognition). Subsequently, they may use this newly understood and cognitively processed information for a specific patient (application). The 4 corresponding levels of outcomes are as follows: the situational relevance of the information (level 1), its cognitive impact (level 2), the use of this information (level 3), and subsequent health benefits (level 4; Figure 1).
The IAM is a systematic and comprehensive method to assess information from the perspective of the information users; different versions of the IAM questionnaire have been developed for and used by the public (patients and parents) and health professionals (nurses, pharmacists, and physicians) [1,2,7-13]. The IAM can help assess electronic knowledge resources in the context of the “pull” or the “push” of information. A “push-pull acquisition-cognition-application” of information conceptual framework has been published elsewhere [2,14]. On the one hand, “pull” refers to information-seeking behavior, such as a search for information in an electronic knowledge resource. “Push,” on the other hand, refers to information delivery and is currently used in multiple health domains such as continuing education, disease prevention, health education, medical treatment, and nutrition [1,10,15-19]. This is a type of passive acquisition of information such as email alerts.
With respect to the physicians’ evaluation of clinical information in a “push” context, the 2011 version of the IAM questionnaire (IAM-v2011) contained 23 items distributed across 4 constructs (derived from the 4 levels of outcomes): (1) the “cognitive impact” construct contains 6 items of positive impact and 4 items of negative impact (cognitive impact of information on clinicians), (2) the “clinical relevance” construct contains 3 items (relevance of information for a specific patient), (3) the “clinical use” construct contains 7 items (information use for a specific patient), and (4) the “health benefits” construct contains 3 items (expected health benefits for a specific patient; Multimedia Appendix 1). In a “push” context, clinical information will in some way impact a clinician’s continuing education in general (eg, learning something new about a medical intervention) but may not necessarily be relevant for a clinician’s specific patient (in contrast to the “pull” context, where clinicians typically seek information for a situation linked to the care of a specific patient). Thus, we sequenced the IAM questions in a pragmatic order (rather than a theoretical order); as such, questions that operationalize the “cognitive impact” construct (level 2) were presented before questions regarding the “clinical relevance” construct (level 1). Hereafter, we follow this pragmatic order. The IAM questionnaire has been refined iteratively since 2001 through literature reviews and qualitative, quantitative, and mixed methods research. It allows information users, including professionals, to systematically report these outcomes for each piece of information, such as one educational email. For example, in the context of lifelong learning, 13,444 family physician members of the College of Family Physicians of Canada used the IAM to stimulate reflective learning and earn continuing education credits between January 2010 and December 2014.
This process allowed them to rate Highlights, which are weekly treatment recommendations from a reference Web-based resource called RxTx. Along with ratings, participants provided constructive feedback to the information provider (the Canadian Pharmacists Association), which was then used to improve the information content of RxTx. This paper addresses the following problem: the IAM has not been fully validated in the “push” context (for information delivery). Regarding the IAM-v2011 for the “push” context, items were developed in line with guidance from Haynes et al. In previous work, we conducted discussions with experts, as well as literature reviews, qualitative, quantitative, and mixed methods research studies [1,2,9,11,21,23-27]. In this paper, we report an evaluation of the content validity of the IAM-v2011.
One important aspect of the content validation of an assessment tool such as the IAM questionnaire is to ensure that all aspects of the measure are covered. Hence, we reviewed the literature (qualitative, quantitative, and mixed methods studies) about outcomes associated with educational email alerts. The included studies were (1) primary research studies, (2) on educational emails directed to physicians, (3) on outcomes of emails, and (4) reported in English. Specifically, we included the 5 research studies that were included in a 2010 review and tracked research papers (up to March 2014) cited by or citing these studies and 3 literature reviews on educational emails (using the Scopus comprehensive bibliographic database). In addition, we conducted personal searches, for example, in Google Scholar. In total, 258 records were identified (146 from Scopus and 112 from personal searches). Full-text publications were retrieved and screened. A total of 13 studies were included [11,14,26-36]. The included studies had diverse designs: 6 quantitative descriptive studies, 2 randomized controlled trials, 2 qualitative research studies, 2 mixed methods research studies, and 1 quantitative prospective observational study. A thematic synthesis was conducted, and the findings are presented in Table 1. Regarding the outcomes of information constructs, (1) “cognitive impact” was reported in 9 studies, (2) “clinical relevance” was reported in 6 studies, (3) “clinical use” was reported in 8 studies, and (4) “health benefits” was reported in 5 studies. No other construct was reported. No instrument similar to the IAM was found in the literature. Our synthesis supported the 4 constructs covered in the IAM questionnaire when educational emails are delivered to physicians. Therefore, this paper aims to evaluate the content validity of the IAM-v2011 from the perspective of physicians who use the IAM in the context of educational material delivered to physicians.
We used a 3-part mixed methods convergent design (quantitative, qualitative, and mixing) [37,38]. In the quantitative part, the relevance of IAM-v2011 items was measured using data collected from a Web-based longitudinal study. In the qualitative part, we evaluated the representativeness of IAM-v2011 items and their relationship to the IAM constructs. Considering that ecological content validation is determined by the end users [39,40], the viewpoint of actual IAM users was needed, and participants were IAM users in the quantitative and qualitative parts of the validation study. In the mixing part, quantitative and qualitative results were integrated and discussed with experts.
We conducted an evaluation of the ecological and logical content validity of the IAM-v2011. Validity refers to whether a test measures what it is supposed to measure [41-44], and content validity is defined as “the degree to which elements of an assessment instrument are relevant to and representative of the targeted construct for a particular assessment purpose.” The relevance of an assessment instrument refers to the appropriateness of its elements for the targeted construct and function of assessment. For example, the relevance of an item refers to the degree to which this item is likely to accomplish the goal implied by the construct. Relevance can be evaluated through quantitative methods. The representativeness of an assessment instrument refers to whether its elements cover all facets of the targeted constructs. For example, a representative item gives a good indication of what its construct is intended to measure. Representativeness can be evaluated through qualitative methods.
Content validity can be divided into (1) logical content validity, in which a determination is left to experts, and (2) ecological content validity, in which the determination is obtained from the users. Ecological validity is the degree to which the behaviors observed and recorded in a study reflect the behaviors that actually occur in natural settings. Our general objective was to assess the logical and ecological content validity of IAM-v2011 for educational email alerts. In line with standard procedures for content validation of evaluation tools, our specific objectives were to measure the relevance and evaluate the representativeness of IAM-v2011 items for assessing information received via email alerts.
A Web-based longitudinal study was conducted. We considered all 2012 IAM ratings submitted by physicians after reading a Daily POEM email alert. Tailored to a primary care audience, Daily POEMs are synopses of original primary research or systematic reviews, selected after scanning and critically appraising studies published in 102 medical journals. A total of 270 Daily POEMs were emailed to physician members of the Canadian Medical Association in 2012. Participants were all physicians across Canada who subscribed voluntarily to receive Daily POEMs and rated at least one POEM in 2012 using the IAM-v2011 as a requirement to obtain continuing education credit. From 5596 physicians, we collected 234,196 completed IAM Web-based questionnaires (ratings) from January 1 to December 31, 2012. Regarding the data analysis, for each IAM-v2011 item of the construct, a ratio (R) was calculated using the formula shown in Figure 2.
Stated otherwise, for each construct or subconstruct, the relevance ratios of all items were calculated. For example, with regard to the item “I learned something new,” the relevance ratio R was calculated as follows: the number of completed questionnaires where this item was selected was divided by the total number of IAM questionnaires in which at least one item of the “Positive cognitive impact” construct was selected. In line with the standards for educational and psychological testing, validation is a joint responsibility of the developer and the knowledge user. IAM knowledge users (users of the results of the analysis of IAM ratings) are information providers (such as the Canadian Pharmacists Association, which produces the abovementioned Highlights) and appreciate the “Negative cognitive impact” items, which can detect issues with information content. Negative cognitive impact items are rarely selected but necessary; for this reason, the construct “cognitive impact” was divided into 2 subconstructs: “positive” and “negative” cognitive impact. For example, with respect to the item “This information can be harmful,” the value of R was calculated by dividing the number of completed questionnaires where this item was selected by the total number of questionnaires in which at least one item of the construct “Negative cognitive impact” was selected.
The results were interpreted as follows. In line with our prior content validation study in a “pull” context, the items were deemed relevant when R was 10% or above and irrelevant when R was less than 10%. With respect to the cutoff value of R used to exclude items, there is no agreed-upon criterion or universal cutoff for determining content validity [41,42].
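The relevance ratio described above can be sketched in code as follows. This is a minimal illustration only: the data structure (one set of selected item identifiers per completed questionnaire) and the item names are hypothetical and do not correspond to the actual IAM item numbering or data format.

```python
from typing import Dict, List, Set

def relevance_ratios(ratings: List[Set[str]], construct_items: Set[str]) -> Dict[str, float]:
    """Relevance ratio R (%) for each item of one construct (or subconstruct):
    the number of questionnaires selecting the item, divided by the number of
    questionnaires selecting at least one item of that construct."""
    # Denominator: questionnaires in which at least one construct item was selected.
    denominator = sum(1 for q in ratings if q & construct_items)
    if denominator == 0:
        return {item: 0.0 for item in construct_items}
    return {
        item: 100.0 * sum(1 for q in ratings if item in q) / denominator
        for item in construct_items
    }

# Hypothetical "Positive cognitive impact" items and a handful of ratings.
positive_impact = {"learned_new", "practice_changed", "motivated_to_learn"}
ratings = [
    {"learned_new"},
    {"learned_new", "motivated_to_learn"},
    {"practice_changed"},
    {"clinically_relevant"},  # no positive-impact item selected: not in the denominator
]
ratios = relevance_ratios(ratings, positive_impact)
# Items whose R falls below the 10% cutoff would be flagged as nonrelevant.
```

Note that each subconstruct uses its own denominator, which is why the rarely selected “Negative cognitive impact” items are computed against only the questionnaires in which a negative-impact item was selected.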
A qualitative descriptive study was conducted through semistructured face-to-face interviews with 15 family physicians (end users). The interviews started with general questions about educational email alerts and continuing medical education activities to explore participants’ experiences; then, we asked specific questions on the representativeness of IAM-v2011 items.
An email invitation was sent to all physician members of the Department of Family Medicine at McGill University (n=269). Our eligibility criteria were (1) practicing family physician working in the greater Montréal area, (2) receiving educational email alerts, and (3) rating Daily POEMs or Highlights using the IAM. Of the 17 family physicians who volunteered, 15 were interviewed, while 2 were excluded (1 had no experience with the IAM-v2011 and 1 was not available).
Before each interview, participants received a brief lay summary of the study. For each IAM-v2011 item, participants were asked about its representativeness as follows: (1) the interviewer started by explaining each construct and the definition of that construct, (2) each participant was then asked to read the construct and its corresponding items on paper, and (3) for each construct, the participant was asked open-ended questions about the items and whether they were suitable for that construct. For example, the interviewees were asked whether they would add, modify, or delete some items and the reasons behind their opinion. Although focus groups can be used in content validation studies, we decided to conduct individual interviews because we were interested mainly in individual experience and perception of the use of the IAM linked to educational emails. Interviews were recorded, reviewed, and transcribed on the day of the interview. Our interview guide is available on request.
We conducted a hybrid deductive-inductive thematic analysis. This type of analysis consists of applying themes (theory-driven) and searching for themes that emerge because of their importance to the description of the phenomenon under study. The inductive process involves the identification of emerging or new themes through “careful reading and re-reading of the data.” We summarized and analyzed the interview transcripts. We assigned preliminary themes based on our ACA-LO theoretical model and the interview guide and then searched for themes that emerged. The coding process was conducted in 6 stages [50,51]: (1) developing a code manual, (2) testing the reliability of codes, (3) summarizing the data and identifying initial themes, (4) applying a template of codes for the meaningful themes, (5) connecting the codes in accordance with the process of discovering patterns in the data, and (6) corroborating and legitimating coded themes. The final results were discussed with 7 members of the Information Technology Primary Care Research Group (ITPCRG) who are experts in the IAM. For each construct, a table was created that contained themes collected from interviews. For each IAM item, we had 8 possibilities. There were 4 initial possibilities (4 deductive themes): (1) addition, (2) deletion, (3) modification of an item, and (4) no change. Then, 4 additional possibilities emerged (4 inductive themes): (1) merge two or more items, (2) merge two or more items and add a new element, (3) keep the main item and delete subitems, and (4) keep the main item and add a new subitem. An item was deemed representative of the corresponding construct when it was confirmed (modified or unchanged) or added (new item). An item was deemed not representative when participants suggested its deletion.
Qualitative and quantitative results were integrated and compared. Such a comparison of results has been recommended in reference books on mixed methods, specifically in primary care research [37,52]. The relevance and representativeness of IAM items were tabulated. Items of questionable relevance or representativeness were identified and discussed with ITPCRG members. IAM items with low relevance or those that were not representative were excluded. In addition, we reviewed and discussed the clarity and language of all items. A final decision regarding each item was achieved by consensus of ITPCRG members. For excluding items, priority was given to the quantitative data received from the 5596 physicians (relevance). The qualitative findings might have suggested new items (representativeness). In our study, qualitative findings supported the removal of 1 nonrelevant item and corroborated quantitative results but did not suggest any new item.
This study was conducted according to the ethical principles stated in the Declaration of Helsinki. Ethical approval was obtained from the McGill University Institutional Review Board. The Institutional Review Board provided ethical approval #A11-E25-05A for collecting and analyzing the quantitative data and #A06-E44-13A for the qualitative data collection and analysis.
Results are presented according to the 3 parts of the mixed methods design.
Of the 23 items, 21 had an R value greater than 10% (N=234,196). All 21 were retained for the proposed 2014 version of the IAM (IAM-v2014; in Table 2, all items except items 1 and 13). The remaining 2 items had an R value of less than 10% (in Table 2, see items 1 and 13). R was 4.86% (N=234,196) for item 1 of the construct “Positive cognitive impact” (“My practice will be changed and improved”) and 3.04% (n=45,394) for item 13 of the construct “Information use” (“I did not know what to do, and I will use this information to manage this patient”). The final decision for items 1 and 13 was to exclude them.
We interviewed 9 male and 6 female family physicians. A total of 9 participants were working in academic health science centers, while 6 were working in community-based private family medicine clinics. The participants’ number of years in practice ranged from 9 to 38 years. A total of 5 participants indicated no particular clinical focus to their practice, while 10 expressed a special interest such as maternity and newborn care (n=3) or care of the elderly (n=3). We interviewed all participants in their offices. The participants were welcoming and cooperative. Of 15 interviewees, 11 gave ample time for the interview, while 4 seemed rushed. For each IAM-v2011 item, all interviewees answered all our questions about its relationship to its construct and whether they would add, modify, or delete it if they had the option to do so. Results of the qualitative part of the study are presented below (construct by construct) and summarized in Table 3.
The 10 IAM-v2011 items associated with this construct were representative. For example, about the item “I am motivated to learn more” (item 3), one interviewee said, “I would like to modify this item to be more specific and to be ‘I am motivated to learn more about this topic.’”
We asked specific questions about this construct, in particular the item “information partially relevant.” Of 15 participants, 9 participants interpreted this item as follows: some information from a Daily POEM or a Highlight covers an aspect of a patient’s condition, or the information does not exactly fit the patient’s condition. A total of 4 participants said this item can be interpreted as either clinically relevant or not relevant. One participant interpreted this item as “information clinically relevant,” while another participant interpreted it as “information clinically not relevant.”
Of the 7 items associated with this construct, 6 were representative, while 1 item was not. By way of illustration, an interviewee said about the latter (item 13 “I did not know what to do, and I will use this information to manage this patient”): “I would like to delete this item as it is redundant.”
All 3 items were representative.
Results of quantitative and qualitative analyses were integrated. All IAM-v2011 items, their relevance, representativeness, and a final decision are presented in Table 4. Decision making involved discussions with ITPCRG members, after which 1 item with a low relevance ratio (item 1) and 1 nonrepresentative item with a low relevance ratio (item 13) were excluded from the IAM. With regard to the former item (representative with low relevance ratio), priority was given to the quantitative data (relevance) because it provided feedback from 5596 users. The 21 other items were deemed relevant and representative. There was no item with a high relevance ratio that was nonrepresentative. No new items were suggested from the qualitative data.
These results have led us to produce a 21-item content validated version of the IAM for “push” technology, presented in Multimedia Appendix 2 (IAM-v2014). This work contributes to advancing knowledge in continuing education, and continuing education tools, as there are no similar methods reported in the literature. Outside email alerts, our results can be applied to other Web-based means that deliver educational material, such as apps on mobile devices. For example, we have developed an app (called IAM Medical Guidelines) providing spaced education in a continuing medical education program on respiratory diseases. In such a program, the IAM questionnaire is used by clinicians to document reflective learning and earn continuing education credits.

In addition, these results contribute to practice at 3 levels (user, provider, and researcher). First, at the level of the individual knowledge user, physicians can use a validated method to assess the clinical information delivered to them through educational email alerts. More than 15,000 Canadian family physicians and pharmacists are using the validated version of the IAM questionnaire to assess educational email alerts and earn continuing education credits in programs such as Daily POEMs and Highlights. During the calendar year of 2016, the IAM questionnaire (push version) was completed more than 400,000 times by physicians and pharmacists in Canada. To our knowledge, the IAM questionnaire is the most frequently used questionnaire in Canada, in the context of the continuing education of health professionals. Second, at the organizational knowledge provider level, the analysis of IAM-v2014 ratings can be based on a validated method. For example, information providers such as the Canadian Pharmacists Association are receiving validated feedback from their members.
Third, using a validated questionnaire offers at least two other advantages: (1) researchers will save time and resources by avoiding the lengthy process of developing and validating their own instrument, and (2) new studies can compare their findings against those of other IAM-based studies.
The validation of the IAM as a whole is based on our prior work and a theoretical model, although we gathered quantitative and qualitative evidence for validating each construct and item. Future research may pursue the validation of the IAM as a whole, for example, using factor analysis. As mentioned in the standards for educational and psychological testing, validation can always be pursued. With respect to the quantitative part of the study, as continuing education programs rely on the voluntary participation of physicians, we acknowledge a selection bias with respect to the participants. While our quantitative data sample comprised 234,196 IAM questionnaires completed by 5596 physicians, these participants were not representative of all Canadian physicians. For example, participants were more likely to be comfortable with information technology. With respect to the qualitative part, although focus groups are sufficient for content validation, we chose to conduct face-to-face interviews as it is typically difficult to arrange meetings with groups of physicians.
Our data regarding the expected patient health benefits of clinical information reflect the subjective views of health care professionals. For example, a limited number of studies report how using information from knowledge resources may have helped physicians to avoid unnecessary tests, treatments, or referral to specialist colleagues. Outside research conducted in computer laboratories using clinical scenarios, most of the studies share the limitation of self-report and do not objectively examine patient-related outcomes. With respect to the literature on continuing education in the health professions, basing study outcomes on self-report is typical. For instance, a scoping review examined the impact of physician self-audit programs. None of the 6 observational studies included in the review objectively assessed outcomes. To the extent that self-report encourages socially desirable responses, the validity of study outcomes based on self-reported behavior and expected health benefits for patients can be questioned in future research.
Our content validation study followed the usual recommendations for developing psychometric and educational assessment tools [22,39]. In previous work, we reviewed information studies and developed a theoretical model, while in this study we gathered quantitative and qualitative evidence to support the use of the IAM in a specific context: the delivery of educational material. Content validation is typically a mixed methods research endeavor [37,38,54]. On the basis of the complementarity and synergy between qualitative and quantitative methods, mixed methods enhance validation studies by integrating quantitative and qualitative results on different aspects of the instruments. For example, focus groups provide qualitative evidence on relevance and representativeness of concepts, which are then tested using factor analysis (providing quantitative evidence on convergent and discriminant concepts).
Our validation study was based on Messick’s definition of validity [42-44], which still informs the standards for educational and psychological testing. Our mixed methods study assessed the content validity of the IAM. For each construct, we used quantitative methods to measure the relevance of IAM items and qualitative methods to evaluate their representativeness; then, we integrated the quantitative and qualitative results. In case of divergence, we gave more weight to quantitative results with respect to final decisions about “deleting” an item because the quantitative sample was large. In addition to the large sample in the quantitative part of the study, we interviewed 15 physician users of the IAM. This can be considered as a consultation with ecological experts (IAM users). The final steps in our data analysis and the draft of IAM-v2014 were discussed with ITPCRG members who are logical experts on assessing the value of clinical information. Expert panel discussion is a core component of content validation.
This study produced a content validated IAM questionnaire (IAM-v2014) that is used by clinicians and information providers to assess the clinical information delivered in continuing education programs. Research on how the quality of health care and the health of specific patients are associated with the delivery of educational content can use tools to accurately document clinical events at multiple points in time. One of the tools for researchers to conduct this type of work is our validated IAM questionnaire, coupled with data from electronic medical records. Finally, the IAM can facilitate a continuous interactional process between information providers who deliver “best” evidence (knowledge translation) and information users who assess this evidence (ratings) and submit constructive feedback; in turn, information providers may use this feedback from information users to optimize their evidence (thereby establishing two-way knowledge translation), which can be made available on the Internet for further retrieval. Using the IAM, the delivery of research-based educational information can be enhanced by experience-based information from health professionals. For example, in addition to the IAM ratings, health professionals provide a substantial amount of free-text comments. These comments include constructive feedback such as suggestions for additional content, reservation or disagreement, suggestions to consider contradictory evidence, or a need for clarification of content. This two-way knowledge translation appears to be unique with regard to information management. In line with the literature on relational marketing, being open to user feedback and handling such feedback can improve an educational resource and aid information providers in sustaining relationships with the users by valuing their expertise.
PP holds an Investigator Award from the “Fonds de recherche du Québec en santé” (FRQS). Authors gratefully acknowledge the assistance of Randolph Stephenson, PhD, for his oversight of the psychometric aspects of this study, and Dr Isabelle Vedel for her recommendations regarding the methodology, as well as the members of the Information Technology Primary Care Research Group for their participation in the Expert Panel. The Information Assessment Method is protected by Registered Copyrights (2008): # CA 1057518 “A scale to assess the cognitive impact, relevance and use of information hits derived from electronic resources,” and # CA 1057519 “Une échelle pour évaluer l'impact cognitif, la pertinence et l’utilisation des informations issues de ressources électroniques.”
Two sources of funding supported this study. The first source was the Canadian Pharmacists Association. Project title: Two-way knowledge translation between information providers and health professionals; nominated principal investigator, PP (McGill University); co-principal investigator, RG (McGill University); total amount, Can $50,000 (unrestricted grant); duration, 2 years (2012-2014). The second source was Practice Solutions (a Canadian Medical Association company). Project title: Evaluating the effect of information technology on medical practice; nominated principal investigator, RG (McGill University); co-principal investigator, PP (McGill University); total amount, Can $98,910 (unrestricted grant); duration, 3 years (2011-2014).
The 2011 version of the Information Assessment Method (IAM-v2011) in a “push” context.
The 2014 version of the Information Assessment Method (IAM-v2014) in a “push” context.
Authors' Contributions: HB carried out this study. PP and RG supervised the work and contributed to all stages of the research. All authors participated in drafting the manuscript. All authors read and approved the final version of the manuscript.
Conflicts of Interest: None declared.