J Med Libr Assoc. 2014 January; 102(1): 31–40.
PMCID: PMC3878933

Evaluation of best practices in the design of online evidence-based practice instructional modules*

Margaret J. Foster, MS, MPH, AHIP; Suzanne Shurtz, MLIS, AHIP; and Catherine Pepper, MLIS, MPH



Objective

The research determined to what extent best practices are being followed by freely available online modules aimed at teaching critical thinking and evidence-based practices (EBPs) in health sciences fields.


Methods

In phase I, an evaluation rubric was created after reviewing the literature. Individual rubric questions were assigned point values and grouped into sections, and the sections weighted. Phase II involved searching Internet platforms to locate online EBP modules, which were screened to determine if they met predetermined criteria for inclusion. Phase III comprised a first evaluation, in which two authors assessed each module, followed by a second evaluation of the top-scoring modules by five representatives from different health sciences units.


Results

The rubric's 28 questions were categorized into 4 sections: content, design, interactivity, and usability. After retrieving 170 online modules and closely screening 91, 42 were included in the first evaluation and 8 in the second. Modules in the first evaluation earned, on average, 59% of available points; modules in the second earned an average of 68%. Both evaluations had a moderate level of inter-rater reliability.


Conclusions

The rubric was effective and reliable in evaluating the modules. Most modules followed best practices for content and usability but not for design and interactivity.


Implications

By systematically collecting and evaluating instructional modules, the authors found many potentially useful elements for module creation. Also, by reviewing the limitations of the evaluated modules, the authors were able to anticipate and plan ways to overcome potential issues in module design.


With trends in health sciences education moving toward more self-directed and online learning 1–5, medical librarians in academic health sciences settings are increasingly asked to support online courses or to develop online modules 6–13. Health sciences educators acknowledge the impact that librarians can have in teaching evidence-based medicine (EBM) or evidence-based practice (EBP), as well as critical thinking or information literacy skills 11,13–17. For instance, several nursing journal articles lauded the concept of an embedded librarian partnering with faculty to teach students how to search effectively for evidence 11,13,16,17. A recent survey of medical librarians explored the roles in which they are involved with EBM; the top three roles identified were teaching as EBM instructors, developing guides to EBM resources, and creating EBM instructional materials, such as handouts and online tutorials 18. The authors of the current study, medical librarians at an academic institution, were charged by Texas A&M Health Science Center (HSC) administrators with creating EBP and critical thinking web-based modules to support curricula for all disciplines represented by the academic units of the HSC.


Texas A&M University's Medical Sciences Library (MSL), in College Station, Texas, serves the HSC, with more than 2,000 students enrolled in its 6 components: medicine (633), nursing (106), public health (338), dentistry (573), pharmacy (345), and biomedical science (127) 19,20. Due to the geographic distribution of the HSC's campuses (spread across 8 cities in Texas), some programs offer distance education courses using web-based content management software and video conferencing. In 2011, as required for accreditation by the Southern Association of Colleges and Schools (SACS), the HSC needed to select an overarching instructional goal for the required quality enhancement plan (QEP). The HSC chose to focus on the areas of critical thinking and EBP with a plan titled, “Critically Appraise Relevant Evidence” (CARE), which called for developing online learning modules to facilitate delivery of EBP and critical thinking content to all components throughout the HSC 21.

MSL librarians have been involved for a number of years, to varying degrees, in teaching EBP in many of the HSC's components, mostly through in-person instruction. In addition, the authors have presented a three-part webinar several times on evidence-based public health (EBPH). Several members of the QEP Committee, a group of faculty and staff representing all colleges in the HSC, participated in the EBPH webinar presented in June 2011 and subsequently suggested to the full committee that the librarians be approached to adapt the webinar content into EBP online modules, with additional modules specifically on critical thinking skills. With the MSL director's strong endorsement, the librarians accepted this charge and created a project plan and an outline of module topics with associated learning objectives.

In preparation for the SACS site visit in spring 2012, the librarians created a pilot example module for the CARE plan. During the site visit, a SACS reviewer met with the librarians and others on the QEP Committee to review the pilot module and the plan for creating additional modules. While acknowledging the good quality of the pilot module, the reviewer recommended assessing modules created by other institutions for possible adaptation for CARE and applying best practices to the creation of new modules. In response to this recommendation, the librarians began by reviewing the health sciences, library science, and education literature to develop evaluation criteria for identifying high-quality tutorials and determining best practices in EBP module creation.

Research question

The research question for this study is: To what extent are best practices being followed by freely available online modules aimed at teaching critical thinking and evidence-based practices in health sciences fields? The authors developed two objectives to answer this question:

  • to identify characteristics of online modules that make them effective learning resources and then to develop and evaluate a rubric for evaluating online modules
  • to locate freely available modules on critical thinking and evidence-based practices in health care and then to evaluate the extent to which these modules met the criteria in the evaluation rubric

To achieve these objectives, the study was organized into three phases. In phase I, an evaluation rubric would be created after reviewing the literature. Phase II would require locating and screening online EBP modules to be included in the evaluation. Phase III would comprise two stages: a first round of module evaluations completed by the authors, followed by a second round in which top-scoring modules would be further critiqued by evaluators, including representatives from different health sciences units.


A literature review was completed to inform the development of an evaluation form for online modules. The authors hoped to locate studies describing an evaluation of EBP online modules. As studies of this nature were few, the search was broadened to include evaluations of online modules done in higher education, regardless of the module topic. Also, articles were sought that described best practices in instructional module design. MEDLINE (PubMed), ERIC (Proquest), PsycINFO (Proquest), Education Full Text (EBSCO), and Academic Search Complete (EBSCO) were searched for citations on “evidence-based practice or critical thinking” and “training or education or modules or online or web-based.”

Summary of findings

Two articles reported a study, initially conducted in 2007 and repeated in 2008, in which medical librarians assessed available online tutorials that had been created by academic medical libraries 6,7. In those studies, the researchers used the Association of American Medical Colleges' member school list of 126 libraries to identify websites containing tutorial links. For each tutorial found (n = 684), a researcher collected certain data, such as the creator of the tutorials, target audience, software used to create the tutorial, topics covered, and presence of interactivity. Results showed that 63% of US academic health sciences libraries created their own tutorials, but that there was great variety in the topics, software used, and interactivity of tutorials. The topics of tutorials included in the study were not limited to EBM or EBP, although tutorials that libraries created on EBM represented the highest proportion (n = 34). The overall quality of the tutorials was not directly evaluated; however, one conclusion of the study was that, “while libraries are creating more tutorials, they are not typically including elements of interactivity in them” 7.

As demonstrated by the variability of tutorials found in these studies 6,7, no established guidelines were available for medical librarians on the design of high-quality EBP modules. Only one article written by medical librarians was located explaining the process for creating quality online EBP modules 9. The authors emphasized the importance of incorporating health sciences faculty input and striving for a brief, interactive tutorial with a feedback mechanism. They further illustrated how to adapt a basic EBP module and customize it for specific disciplines. Studies by health profession educators assessed student learning outcomes for EBP modules using surveys or comparing pre/post test scores, but they focused on the outcomes achieved rather than specifically exploring the module attributes that affected learning 22–24.

Research in the library science field included best practices for creating online instructional modules. Su and Kuo reviewed online information literacy tutorials from academic libraries and evaluated the tutorials based on the following criteria: “objectives and teaching strategies, topics, estimated time for browsing, multimedia application, and visibility” 25. The teaching strategies specifically assessed in the study focused on different types of engagement, including quizzes, simulations, and exercises. Results also assessed the presence and clarity of learning objectives, simplicity of navigation, length of time for completion, and ease of locating modules 25. Another study concluded that library instructional tutorials are most effective when learning theories and learning styles are considered in design (e.g., multimedia modules address the needs of both audio and visual learners). Interactivity and the ability to determine how to navigate through the module aid a learner in being “able to construct his or her own informational hierarchy” 26. This research supported the concept of designing modules to address the multiple levels of Bloom's taxonomy to facilitate learning 27. Likewise, Zhang found that best practices in information literacy tutorials should include “effective use of media elements such as text, color, graphics, navigation systems, audio, video, as well as the implementation of interaction and feedback” 28.

From the education field, Strobl and Jacobs evaluated online language learning tutorials to assess the reliability of the “Quality Assessment of Digital Educational Material (QuADEM)” methodology as developed by European researchers 29. The authors emphasized that QuADEM can assess digital educational material in general but cautioned that the tool was not exhaustive, especially when evaluating design. The QuADEM manual, freely available under a Creative Commons license, is divided into broad categories (e.g., “Content”) and checklists or rubrics for assessing content in these categories (e.g., “The content is adapted to the learning objectives” or “The content is up to date”). Many of the qualities and attributes addressed in previously cited studies of online learning evaluation (design, interactivity, usability, learning styles, etc.) were addressed in the QuADEM methodology 30.

As observed during the literature review, recurring characteristics by which online modules were evaluated could be classified under the headings of “content,” “design,” “usability,” and “interactivity.” The QuADEM manual employed similar categories and then bulleted evaluation criteria to determine the quality of the content in each category. As the QuADEM manual was a validated instrument with categories relevant to EBP modules and corresponding to the evaluation criteria found in the literature, it was used as a template for developing the evaluation matrix for this research project.

Development of the form

An evaluation form to critically analyze each module was developed incorporating the desired characteristics, particularly drawing from categories and evaluation criteria in the QuADEM methodology 30. The module evaluation form reflected the four major categories of module evaluation identified in the literature review: content, design, interactivity, and usability (Table 1). Content included topics and covered disciplines (specifically those desired for the HSC's CARE modules), as well as credibility, relevance, currency, organization, understandability, and appropriate style and language for the audience. The design section assessed Bloom's taxonomy components 27, learning objectives, appropriate coverage of all objectives, and attention to learning styles. Interactivity was evaluated by estimating the level of interactivity and the types of interaction, as well as relevance, difficulty, feedback, and opportunity for reflection. Finally, usability assessed visual appeal of layout, ease of navigation, and ability for learners to determine their own learning paths. In addition, usability addressed whether the module met five selected requirements of the Americans with Disabilities Act (ADA) guidelines for web page design, where applicable and feasible 31. The evaluation form was created in Zoho Creator, free online database software 32.

Table 1
Evaluation form

The researchers next discussed whether any category should be weighted above the others. The authors agreed that, while interactivity and usability were essential for an online module, quality content and design conducive to achieving learning objectives were foundational and should be given slightly more weight. Through a consensus discussion among the authors, the 4 sections were given the following weights: 30% to content, 30% to design, 20% to interactivity, and 20% to usability. Each criterion within a question was worth 1 point, for a total of 38 points possible (Table 1).
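The weighting scheme reduces to straightforward arithmetic. As a minimal sketch in Python, where the per-section point totals are hypothetical stand-ins for the values in Table 1:

```python
# Section weights from the consensus discussion described above.
SECTION_WEIGHTS = {"content": 0.30, "design": 0.30,
                   "interactivity": 0.20, "usability": 0.20}

def weighted_score(earned, possible):
    """Combine per-section scores into one weighted percent score.

    earned/possible: dicts mapping section name -> points for one module.
    """
    total = sum(w * earned[s] / possible[s] for s, w in SECTION_WEIGHTS.items())
    return round(100 * total, 1)

# Hypothetical point totals for one module (not the actual Table 1 values).
earned = {"content": 8, "design": 5, "interactivity": 3, "usability": 6}
possible = {"content": 12, "design": 10, "interactivity": 8, "usability": 8}
print(weighted_score(earned, possible))  # → 57.5
```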



For the purposes of this study, a module was defined as a packet of subject-related teaching materials presenting a sequence of learning activities 33. To be included in the evaluation, modules needed to meet four inclusion criteria:

  • meet the definition of a module
  • be freely available online
  • be intended for a health-related audience
  • cover at least one aspect of EBP or critical thinking

Search strategy

To locate modules or mention of modules in the literature, databases and websites were searched using the following terms: (EBM or EBP or “evidence based medicine” or “evidence based practice” or “evidence based nursing” or “evidence based public health”) AND (training or tutorial or module). Databases searched included PubMed, ERIC (EBSCO), Academic Search Complete (EBSCO), Library Information Science & Technology Abstracts (EBSCO), and CINAHL (EBSCO). Specific web pages searched included home pages of members of the Association of Academic Health Sciences Libraries (AAHSL), the Association of College and Research Libraries (ACRL) Peer-reviewed Instructional Materials Online (PRIMO) database, Google, and iTunes.


As potential modules were found, their names and/or links were recorded in a spreadsheet, and duplicates were removed, yielding 170 unique modules. These were then quickly screened to determine whether they met the inclusion criteria; if so, module information (i.e., name of module, uniform resource locator [URL], source, possible cost, covered topics, and intended audience) was collected in an online database. A more in-depth screening was then done on the remaining 91 modules, with 48 deemed suitable for evaluation. As 6 of the 48 modules had failed links, 42 modules ultimately were included in the evaluation process.


Phase III involved a two-level evaluation of the modules that made it through screening. Each module was randomly assigned to two authors using MS Excel and was evaluated using the rubric in the evaluation form. After scoring five modules, the authors met to discuss application of the questions, aiming to develop a consensus. Issues were discussed, disagreements were resolved, and the evaluation forms for the first five modules were revised as needed. After this discussion, the evaluators worked independently.

When all evaluation forms were completed, the average percent score from the two evaluators was calculated for each module. Modules scoring more than one standard deviation above the average were labeled “high quality,” and those more than one standard deviation below were labeled “below average.” The second evaluation focused on the “high quality” modules, which were then evaluated by five members of the QEP Committee, consisting of faculty and professional staff from the academic disciplines and administrative offices of the HSC. Any of these top-scoring modules not already evaluated by all three authors was additionally scored, so that each was rated by a total of eight evaluators.
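The one-standard-deviation cutoff can be expressed compactly. A minimal Python sketch follows; the module names and percent scores are hypothetical, and the use of the population (rather than sample) standard deviation is an assumption:

```python
import statistics

# Sketch of the one-standard-deviation rule used to label modules.
def label_modules(scores):
    """scores: dict mapping module name -> averaged percent score."""
    values = list(scores.values())
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)  # population standard deviation (assumption)
    labels = {}
    for name, score in scores.items():
        if score > mean + sd:
            labels[name] = "high quality"
        elif score < mean - sd:
            labels[name] = "below average"
        else:
            labels[name] = "average"
    return labels

# Hypothetical averaged scores for five modules.
scores = {"mod_a": 82, "mod_b": 60, "mod_c": 58, "mod_d": 35, "mod_e": 61}
print(label_modules(scores))
```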


The evaluation score for a module was calculated as points earned divided by 38, the total points possible as listed in Table 1. Scores for individual questions were determined by dividing the number of points earned by the number of points possible. Inter-scorer agreement in the first evaluation was calculated with Cohen's weighted kappa for questions (where appropriate), sections, and overall. Agreement for the second evaluation of the top-scoring modules by five committee members and three authors was calculated with Fleiss's kappa, which is appropriate for determining agreement among more than two raters. Finally, the difference in averaged evaluation scores between the two groups was tested with a t-test to determine whether it was significant.
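For readers unfamiliar with the statistic, Cohen's linearly weighted kappa for two raters can be sketched in a few lines of Python. The ratings below are hypothetical, and the article's actual computation may have used different weights or software:

```python
from collections import Counter

# Minimal sketch of Cohen's linearly weighted kappa for two raters scoring
# the same items on an ordinal scale. kappa = 1 - (observed disagreement /
# chance-expected disagreement), with linear weights |i - j| / (k - 1).
def weighted_kappa(r1, r2, categories):
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    p1, p2 = Counter(r1), Counter(r2)
    obs = sum(abs(idx[a] - idx[b]) for a, b in zip(r1, r2)) / (n * (k - 1))
    exp = sum(p1[a] * p2[b] * abs(idx[a] - idx[b])
              for a in categories for b in categories) / (n * n * (k - 1))
    return 1 - obs / exp

r1 = [0, 1, 2, 1, 0, 2, 1, 1]  # rater 1's ordinal scores on 8 items
r2 = [0, 2, 2, 1, 1, 2, 0, 1]  # rater 2's scores on the same items
print(round(weighted_kappa(r1, r2, [0, 1, 2]), 2))  # → 0.54
```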


First group of evaluations


Nearly half (45%) of the modules covered at least 4 of the topic categories listed in question A1, with the 2 most common—EBPs and literature searching—covered by 79% of the modules. The topic covered least often, at least explicitly, was critical thinking, at only 14% of modules (n = 6). As to the disciplines addressed (question A2), 19% of the modules (n = 8) covered all health disciplines, either in general or with examples, while 6 of those evaluated did not specifically focus on health examples. Clinical medicine was the most common discipline covered, with 36% of modules (n = 15) focusing on it exclusively. A few modules addressed less common disciplines, including speech pathology, social work, medical physiology, allied health, health services research, and podiatry. The content for 88% of the modules was rated “credible,” for an overall 92% score for this criterion across the 42 modules (Table 2). The least common content criterion met was currency.

Table 2
Average evaluation score* for modules and level of agreement per question


Design varied widely: out of 84 evaluations, the average points earned was 44% for this area. Most modules included knowledge (98%) and comprehension (68%) tasks from Bloom's taxonomy categories, with small percentages covering other levels (Table 3). Most provided a purpose or focus of the module up front, but a little more than half (54%) listed measurable learning objectives. Learning styles clustered around visual only, with few including audio or spatial learning.

Table 3
Percentage of modules with best practices


The interactivity section had the lowest average of points earned (33%). Most (70%) of the modules had some interactivity, but only 7% were labeled as having a “high” level of interactivity (Table 3). The types of activities available in the studied modules provided some constructive examples of what could be done with this medium: clicking (60%), typing in answers (21%), scrolling (38%), answering a quiz or survey (17%), searching (11%), and other (12%). However, few of the modules provided feedback (35%) or an opportunity to reflect on learning (28%).


Overall, modules earned high ratings for usability (averaged 77% for this section). Many of the modules allowed learners to determine their own learning paths (81%), but only 68% were labeled as easy to navigate, and only 57% were labeled as having an appealing layout. Most of the modules met the ADA requirements, either by following the requirement or by not having sound or video.

Overall site quality and evaluator agreement

The results of the first evaluation yielded 8 modules that were found to be of “high” quality, 30 that were “average,” and 4 “below average.” The overall level of agreement, calculated by Cohen's weighted kappa (κw = 0.43), was interpreted as moderate 34. Six criteria had “poor” or “slight” agreement: “currency,” “measurable objectives,” and “cover all objectives,” plus 3 usability measures (“includes captions,” “audio turns off,” and “pausing”). All other criteria (n = 21) had fair or higher agreement. The estimated time to review a module, depending on its amount of content and interactivity, was approximately 25 minutes.

Second group of evaluations


Modules labeled “high quality” (n = 8) were evaluated by 5 members of the QEP Committee (see “Acknowledgements”), in addition to the evaluations by the 3 authors, for a total of 64 evaluations. Table 4 provides descriptions of the 8 modules, including creator, link, and percent of points earned.

Table 4
Characteristics of highest-scoring modules


The average percent of points earned for this section was 81%, which was 5% higher than the average for the first group of modules scored. All of these “high quality” modules covered clinical medicine, with 2 covering it exclusively and the other 6 also covering other health disciplines. While all of the modules covered EBP, critical thinking was not specifically addressed in most. Although most modules were considered to be credible and relevant, the modules received an average of 60% for the score on currency.


Bloom's taxonomy tasks covered by the 8 modules were more inclusive than those in the first evaluation, with 50% of the 8 modules covering all 6 of the Bloom's taxonomy levels (knowledge, comprehension, application, analysis, synthesis, evaluation) (Table 3). In addition, these modules had a higher percentage of audio and spatial elements.


Comparing the overall score for interactivity between the first evaluation and the second evaluation, there was an increase of 17%. This change is mainly due to higher scores for providing feedback and opportunity to reflect on learning. In addition, the proportion of modules that included a quiz or survey increased to 43%.


Average usability score was 67%, with the average score for ease of navigation at 86%. Most of the modules complied with the 5 ADA requirements on the form, with the recommendation most often not followed being “includes captions.”

Overall site quality and evaluator agreement

Overall quality score was 10% higher for the second evaluation than for the first. Overall agreement was moderate as measured by Fleiss's kappa (κ = 0.47).

Comparison of first and second evaluations

A t-test was calculated to compare the means between the first and second evaluations to determine whether the scores received for the 8 modules in the second evaluation, which involved faculty, differed significantly from those in the first. Results showed that the differences were not statistically significant at the P<0.05 level for any of the 4 broad categories, although they approached significance (P = 0.07).


Principal findings

Most modules followed best practices for content and usability, but many lacked key elements in design and interactivity. The evaluation illuminated several pitfalls to avoid in module creation and illustrated opportunities for improvement, even for modules with excellent content and usability. For instance, the researchers noted 3 main areas for improvement related to the criteria or characteristics of content.

The first was currency: nearly half (48%) of all modules were considered outdated. The researchers noticed that examples of searching for evidence often included screencasts or screenshots of outdated websites or database interfaces, particularly for PubMed. To teach searching effectively, search examples must be current so that the module is as useful to the participant as possible. Because modules take a great deal of time and effort to produce, it is understandable that updates can become a lower priority once a module is posted. To remedy this, module creators can plan for updates during the module planning phase. Rather than linking directly to web pages outside the creator's control or embedding website screenshots directly into a recorded module, creators could link to an intermediate web page that they control; that page could be regularly updated with current database screenshots without re-recording the entire module.

A second observation related to content was that most of the reviewed modules were oriented toward medicine alone, which limits their relevance for other fields. While these tutorials might have originally been developed solely to support medical education, module creators can make them more interdisciplinary by providing a variety of examples from different health sciences disciplines. Although such variety increases the time required to create a tutorial, it is far more efficient than creating a separate module for each health profession a library serves, and it emphasizes that evidence-based concepts are similar across the health professions.

Third, few modules specific to critical thinking were located, which may indicate that (1) the search strategy needs revision, (2) the topic is taught as an embedded component within courses rather than as a stand-alone module, or (3) critical thinking as a topic has not yet fully emerged in health sciences education.

The evaluation also revealed opportunities to improve module design. Although usability was rated high overall for most modules, the fact that modules scored lower in terms of aesthetics indicates that it might be beneficial to explore e-learning aesthetic design principles when creating online modules 41,42. While having an appealing layout may not overcome issues with content, usability, and interactivity, an unattractive look and feel can detract from the content and can disincline the learner from progressing through the module's components.

Instructional design of modules should also be considered to improve effectiveness. Evaluators noted that only half of the modules provided measurable learning objectives. Explicitly listing measurable objectives in learning modules enables students to know what knowledge and skills they will be expected to demonstrate. Likewise, measurable objectives provide a “yardstick” by which to evaluate the extent to which modules meet stated learning outcomes. The research on the importance of objectives stresses the key role of objectives “as the logical foundation of the teaching-learning-assessment process” 35.

Another design issue that emerged was the lack of linkage of modules' purposes and/or objectives with the standard measure of learning outcomes, Bloom's taxonomy 27. Bloom's taxonomy provides a structure for building instructional modules that afford students the opportunity to acquire basic knowledge and then practice higher-order knowledge and skills, such as “evaluation” and “synthesis.” Because most instructional modules did not extend beyond the lower steps of Bloom's taxonomy, the opportunity to allow more intensive and substantive learning at these higher levels is lost. The authors recognize, however, that measuring higher-level thinking is often difficult to achieve in an online module and that the modules that were evaluated might be used as starting points for students to gain basic knowledge or comprehension of content, which would then be applied by students in classes, journal clubs, and so on.

Another area for improvement in module design was apparent in the almost-universal focus on visual learning styles, to the exclusion of other sensory learning methods, such as aural and spatial. Accommodating learning styles beyond merely visual has been found to improve student performance 36–38 and critical thinking 39. With advances in technology and tutorial-authoring software, it is hoped that modules will increase in sophistication of design.

Related to design was the observation of how little interactivity most modules offered. The interactivity of modules mainly consisted of clicking the mouse to advance screens. A few modules had interactions such as hovering the mouse over “hot spots” to display content, dragging and dropping of screen elements, triggering media materials, or typing text into boxes. Highly interactive e-learning is more successful at engaging students with the material and allows students to exert more control over their own learning paths 40. Care should be taken, however, to avoid overloading modules with excessive interactions that do not directly contribute to the instructional objectives.


Three main strengths of this research project can be discerned: the comprehensive search, the systematic evaluation, and the inclusion of stakeholders in the evaluation. By choosing a comprehensive scope and a systematic search strategy, the authors cast a wide net over all publicly available instructional content for teaching EBP and critical thinking skills accessible on the Internet to provide a snapshot of what has been done.

The plethora of available modules can be daunting to sift through without a rubric to follow for screening and evaluation. The evaluation form developed for this study provided a systematic and standardized method that multiple, diverse evaluators could use to assess a number of modules without the review being inordinately time consuming. The form's design also included opportunities for free-text responses, which were valuable in interpreting evaluators' perceptions and in gathering their summations of modules. Other librarians and departmental faculty may find the evaluation form presented here useful as they engage in a similar search for available instructional modules, and they can use the form to guide them in updating existing modules or creating new ones. The form highlights key elements of best practices but can be adapted to include criteria appropriate to other institutions or educational projects.

The third strength of the research was that representatives of stakeholders involved in CARE planning were involved in evaluating external modules. Three medical librarians, as well as faculty representing all HSC academic disciplines, participated in module evaluations. In addition to bringing multiple perspectives, talents, and skills to the project, involving those who will use the modules in the evaluation process can increase credibility and buy-in for modules, which are necessary elements for end user acceptance. Faculty can also review what is available and practicable, facilitating more realistic expectations as well as recognition and appreciation of best practices of module creation.


The study had several possible weaknesses. First, ratings during the first evaluation varied for six of the criteria, such as “currency” and “measurable objectives,” even though the authors held a consensus conference soon after the evaluation began to improve reliability; defining sub-measures further might have reduced this variability. Second, some available modules might have been missed, either because they were locked behind firewalls or were labeled with keywords that were not included in the search. Last, students, who could provide important input into evaluations of module design, were not included in the assessment; the authors intend to incorporate student feedback during the testing phase of the CARE modules.


Developing and utilizing a standard rubric provided an effective evaluation tool for reviewing modules, and the rubric yielded a moderate level of inter-rater agreement. Overall, most modules received high ratings for content and usability, with lower scores for design and interactivity. Through conducting the evaluations, the authors found many useful ideas and principles to guide them in creating modules for their own constituents. In addition, by reviewing the limitations of the evaluated modules, the authors were able to anticipate potential issues and plan ways to overcome them.
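The characterization of agreement as "moderate" follows the conventional Landis and Koch interpretation of the kappa statistic (0.41–0.60). As an illustration only, with invented ratings rather than the study's data, Cohen's kappa for two raters can be computed and mapped to those bands as follows:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical scores."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of exact agreement.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

def landis_koch(kappa):
    """Map a kappa value to the Landis & Koch agreement bands."""
    if kappa < 0:
        return "poor"
    for upper, label in [(0.20, "slight"), (0.40, "fair"),
                         (0.60, "moderate"), (0.80, "substantial")]:
        if kappa <= upper:
            return label
    return "almost perfect"

# Invented ratings for ten modules on a 0-2 scale (not the study's data).
a = [2, 1, 0, 2, 2, 1, 0, 1, 2, 0]
b = [2, 1, 1, 2, 1, 1, 0, 2, 2, 0]
k = cohens_kappa(a, b)
print(round(k, 2), landis_koch(k))  # prints 0.55 moderate
```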


The authors thank the following individuals, all members of the HSC's QEP Committee, for evaluating modules:

  • Litao Wang, MEd, College of Medicine, Texas A&M Health Science Center
  • Steve Leggio, Teaching Learning Resource Center, Texas A&M Health Science Center
  • Paul C. Dechow, PhD, Baylor College of Dentistry, Texas A&M Health Science Center
  • Jennifer Griffith, DrPH, MPH, School of Rural Public Health, Texas A&M Health Science Center
  • Terri L. Kurz, PhD, Office of Medical Education, College of Medicine, Texas A&M Health Science Center


*Based on a paper presented on May 6, 2013, at One Health, joint meeting of the Medical Library Association (MLA '13), the 11th International Congress on Medical Librarianship (ICML), the 7th International Conference of Animal Health Information Specialists (ICAHIS), and the 6th International Clinical Librarian Conference (ICLC).

This article has been approved for the Medical Library Association's Independent Reading Program.



Articles from Journal of the Medical Library Association : JMLA are provided here courtesy of Medical Library Association