Int J Nurs Educ Scholarsh. 2008 January 1; 5(1): 44.
Published online 2008 December 30. doi: 10.2202/1548-923X.1538
PMCID: PMC2920736

Evaluation of an Online Graduate Nursing Curriculum: Examining Standards of Quality*

Abstract

Recent offerings of online courses have outpaced the evaluation of the quality of those offerings, particularly at the program level. The purpose of this project was to evaluate the quality of a set of 16 graduate nursing courses developed for three master's specialty programs. An interdisciplinary group of nursing faculty and instructional technologists collaborated in the development of a quality assessment tool and evaluated 16 online graduate nursing courses. Faculty members for each of the courses were interviewed as part of the process. The collaborative design and development process is discussed, as well as evaluation techniques and the data collection instrument. Examples of best practices and areas for improvement in online courses are presented. The findings will help others as they develop and evaluate online curricula. This project provides a model for interdisciplinary collaboration on the evaluation of online programs that can be reviewed, modified, and implemented at other institutions. Instruments are included as supplementary material.

Recent years have seen a proliferation of online course offerings at a wide range of institutions of higher education. Many graduate programs are now offered entirely online, particularly in professional schools. Nursing education is no exception. A number of authors have described the use of online strategies in graduate nursing programs (Mills & Hrubetz, 2001; Jesse, Taleff, Payne, Cox, & Steele, 2006). One notable trend in this growth is that online course offerings have outpaced the evaluation of the quality of those offerings, particularly at the program level (McGorry, 2003).

This paper discusses an evaluation project undertaken by nursing faculty and educational technology professionals who wanted to go beyond examining single online courses to review a set of courses comprising entire graduate nursing specialties at a large Midwestern U.S. university. This interdisciplinary group collaborated in the development of an instrument to evaluate the quality of online graduate nursing courses and in the evaluation of 16 such courses. In this paper, the process for generating the items is discussed, the evaluation instrument is shared, and evaluation results are presented.

BACKGROUND

In 2000, nursing faculty received federal funding from the U.S. Department of Health and Human Services (Health Resources and Services Administration, Bureau of Health Professions, Division of Nursing) to develop online graduate nursing courses in three specialty areas: nurse-midwifery, women’s health care nurse practitioner, and public health nursing (Avery, Ringdahl, Juve, & Plumbo, 2003). Additional funding in 2004 provided for the re-design of two additional graduate specialties: psychiatric mental health clinical nurse specialist, and nursing and health care systems management. A hybrid model of online education was used, consisting of web-based courses with typically two face-to-face sessions on campus each semester. Educational technology professionals from the university’s Digital Media Center collaborated with the graduate nursing project faculty to plan the project’s evaluation component.

The evaluation process had a number of goals beyond simply satisfying a reporting obligation, including:

  • discovering areas of potential improvement both for individual courses and across multiple courses;
  • identifying examples of excellent online teaching within the School of Nursing to provide guidance for faculty seeking to develop new or improve existing online courses;
  • developing an evaluation instrument that could be understood by faculty members and instructional designers, and used to evaluate online courses in a reasonable amount of time;
  • providing a checklist of the important components of quality online education for faculty in the process of developing or revising online courses; and
  • initiating important dialogue within the School of Nursing regarding what faculty consider to be the most important elements of good online courses.

The process began with the goal of determining a common set of shared pedagogical beliefs concerning effective course design and quality instruction. These were influenced by a number of factors, including the culture of teaching in the school, the disciplinary content of nursing, and characteristics of nursing students taking the courses (e.g., learning styles, backgrounds, expectations).

The articulation of this set of shared beliefs followed a review of relevant literature related to online pedagogy (Cobb, Billings, Mays, & Canty-Mitchell, 2001; Billings, Connors, & Skiba, 2001; Phipps & Merisotis, 2000; Wright, n.d.). The well-known seven principles of good practice in undergraduate education, as interpreted for technology-rich learning environments, emphasize communication among students and faculty, as well as the timely communication that is important in learning in general and particularly in the online environment (Chickering & Ehrman, 1996). These principles, echoed by several other authors, were subscribed to by the faculty and instructional technologists.

Thiele (2003) asked undergraduate nursing students in an online research course about changes they experienced as a result of the course. They reported feeling more independent as learners and said that prompt feedback from their instructor was important to their learning. Regular interaction between faculty and students was also described by Mills and Hrubetz (2001) as important in student learning, consistent with Chickering and Ehrman’s principles. Mills and Hrubetz concluded that regular engagement with course materials and frequent interaction with faculty via web-based technologies resulted in a superior learning experience.

Interaction is critically important in web-based education. Thurmond and Wambach (2004) completed an in-depth review of interaction and described four types: learner-content, learner-learner, learner-faculty, and learner-interface. While the types are not mutually exclusive, the authors described the importance of each and emphasized the positive relationship between interaction with the instructor and perceived learning by students. Modifying course content, cultivating communication, developing assignments and assessments, and using multimedia are four steps described by LaPrairie and Hinson (2005) for adapting courses to the online environment. They also emphasized the importance of effective communication in quality online education.

Billings’ (2000) framework for evaluating web-based courses in nursing comprises components related to the use of technology, faculty support, student support, and educational practices, and how those components affect outcomes of web-based education. This study focuses on the educational practices component, in which Billings reflects Chickering and Ehrman’s (1996) practices adapted to technology. In this study, the educational practices of a group of web-based courses comprising graduate nursing specialty programs were examined to identify quality practices.

The project also drew on the expertise of the school’s faculty members with the most experience in online teaching. The articulation of shared beliefs was well grounded in the existing literature and resulted in the standards, or items, selected for inclusion in our tool. While some standards are important in any course regardless of teaching methodology (Inglis, 2005), the writers believed that the items selected were particularly important in web-based course design. The standards were organized into four categories:

  • Course mechanics: clear articulation of such things as course goals and objectives, technical requirements, pre-requisites, and time commitment; the connection between course objectives and learning activities and assessment mechanisms.
  • Course organization: ease of navigation; scaffolding of course material and activities from simple to complex; accommodation of multiple learning styles.
  • Student support: availability of faculty and/or teaching assistants to address student needs; support for course learning activities; access to technical support.
  • Communication and interaction: degree and robustness of student interaction with other students, faculty members, course material, and course interface.

METHODS

Evaluation Instrument

The existing literature was reviewed to locate an instrument designed to measure these standards that could be used both retrospectively, to evaluate online courses that had already been taught, and prospectively, by faculty engaged in designing, developing, or revising an online course. Although ideas were drawn from existing evaluation instruments (including the Robert Wood Johnson Foundation’s Partnership for Training project, Quality Standards for Learner-centered Online Instruction, no longer available online), none found were usable for this project: they were either written in instructional design jargon, making application difficult for faculty members, or long and unwieldy, requiring extensive time to apply to a single online course.

Through a series of meetings, the team of faculty and educational technology professionals constructed an evaluation tool (Appendix A) that reflected the standards described earlier and required approximately two to four hours to apply to a single online course. The tool initially consisted of 20 items to be rated on a 5-point scale (a higher number represented closer adherence to the standard) and one general open-ended item. Raters were encouraged to add comments to explain their numerical ratings.

Reliability and validity of the instrument were evaluated by means of a pilot test that focused on a graduate ethics course that had been taught several times by a single faculty member. Three different faculty members and two educational technology professionals used the instrument to rate the pilot course by examining an archived version. Student assignments and discussion postings within a threaded discussion tool, as well as the course materials, were available for review.

The raters then met with the instructor to review the ratings, identify aspects of the course that may have been missed by the raters, and generally debrief about the course rating experience. Calculations of inter-rater reliability were not computed, but the raters discussed how to “calibrate” the instrument by establishing agreement on the meaning of the 20 items. Faculty and educational technology professionals had divergent ratings on a few items; these were resolved through discussion, resulting in a common understanding, agreed to by all five raters, of what each question was asking. Face and content validity were also determined through this pilot process. Instrument use and review by faculty experienced in both classroom and online teaching, as well as by educational technology/instructional design professionals, provided the background and expertise to assess this instrument.
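
Although the team calibrated the instrument through discussion rather than computation, a simple agreement check is straightforward to run once ratings are in hand. The sketch below (Python; all ratings are hypothetical, though the five-rater, 20-item, 1–5-scale setup follows the pilot described above) illustrates one way to quantify how often rater pairs agree exactly and within one scale point.

    # Illustrative only: hypothetical ratings from the pilot setup described
    # above (five raters, 20 items, 1-5 scale). The project itself resolved
    # divergent ratings by discussion, not by computing these statistics.
    from itertools import combinations

    import numpy as np

    rng = np.random.default_rng(0)
    ratings = rng.integers(3, 6, size=(5, 20))  # rows = raters, cols = items

    pairs = list(combinations(range(ratings.shape[0]), 2))

    # Share of items on which a pair of raters gives the identical score,
    # averaged over all ten rater pairs.
    exact = np.mean([np.mean(ratings[a] == ratings[b]) for a, b in pairs])

    # A looser criterion: scores within one point of each other.
    within_one = np.mean(
        [np.mean(np.abs(ratings[a] - ratings[b]) <= 1) for a, b in pairs]
    )

    print(f"mean pairwise exact agreement: {exact:.2f}")
    print(f"mean pairwise agreement (within 1 point): {within_one:.2f}")

A check of this kind makes the “calibration” discussion concrete: items with low pairwise agreement are the natural candidates for rewording or for establishing a shared interpretation.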

Following the pilot process, minor clarifying edits were made to the instrument, and the evaluation process as a whole was modified. In addition to activity on the course web-site, students communicated with the professors by email or phone and typically had one or two face-to-face meetings. Substantial interaction occurred among students, and between students and instructors, during these face-to-face meetings, but it was not available for review. To capture these data, a follow-up interview was held with the instructor of each course evaluated.

Evaluation Procedures

The revised tool was applied to each of the 16 courses. These included courses specific to the public health nursing, nurse-midwifery, and women’s health care nurse practitioner specialties, as well as core courses required for most students in the master’s program. Two nursing faculty members and one educational technology professional examined each course independently. One faculty member was part of the funded project and had participated in the tool development; the second was drawn from a larger pool of experienced online teachers. Data from all raters were compiled, reviewed, and shared with the respective course instructor(s), and an interview was conducted with each instructor.

In addition to standard quantitative analyses of the numerical data, the authors independently reviewed the qualitative data (reviewers’ comments entered following each question on the instrument, and notes taken during the follow-up interviews with the instructors) looking for best practices, common themes, points of clarification, and areas of disagreement among reviewers. Each author independently reviewed the data, describing the primary information related to each question. Then common themes that emerged across all the questions were identified. The three authors met and discussed their individual interpretations, and came to agreement on identified themes. A formal coding and categorization scheme was not used.

RESULTS

Quantitative Results

Highest and lowest mean scores

Items on the instrument that yielded the highest and the lowest mean scores across all 16 courses were reviewed and are shown in Tables 1 and 2. The overall mean score for all items was 3.91 on a 1–5 scale (5 highest).

Table 1. Highest Mean Scores
Table 2. Lowest Mean Scores
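
As a minimal sketch of the aggregation behind these tables, the following Python snippet averages each item's ratings across courses and raters, sorts the items, and reports the overall mean (the analogue of the 3.91 figure). The course and item labels, and all scores, are invented for illustration.

    # Illustrative sketch of the aggregation reported in Tables 1 and 2.
    # Each record is one rater's score (1-5) on one instrument item for one
    # course; all courses, items, and scores below are hypothetical.
    import pandas as pd

    df = pd.DataFrame(
        {
            "course": ["A", "A", "A", "B", "B", "B"],
            "item": ["Q2", "Q4", "Q7", "Q2", "Q4", "Q7"],
            "score": [5, 3, 4, 4, 2, 3],
        }
    )

    item_means = df.groupby("item")["score"].mean().sort_values()
    print("lowest-rated items:")
    print(item_means.head(2))
    print("highest-rated items:")
    print(item_means.tail(2))
    print(f"overall mean across all items: {df['score'].mean():.2f}")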

Inter-rater reliability

As a check on the consistency of the evaluation procedures, the variability of ratings given to all courses by the different raters was examined. Educational technology professionals gave higher mean ratings than nursing faculty reviewers on 16 of 21 items across all courses. These differences were statistically significant (at the p < .05 level) in four cases:

  • Question 2: Goals and objectives appropriate to level of the course
  • Question 5: Faculty availability to address student needs
  • Question 12: Evaluation mechanisms/instruments measure objectives
  • Question 15: Flexibility for student input to shape the course as appropriate

Several factors may explain these differences. Nursing faculty may be more critical of courses in their own area than outsiders by virtue of their greater subject knowledge and/or experience teaching similar courses. Educational technology professionals, as outsiders, may have been less critical, not wanting to presume or offend. Finally, the educational technology professionals, by virtue of consulting in multiple departments and schools across a large university, had reviewed more online courses than the faculty members, and the widely varying quality of those courses may have provided a comparison against which the nursing courses were impressive.
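
The paper does not state which statistical test produced the p < .05 results above; an independent two-sample (Welch) t-test per item is one plausible choice and is assumed in the sketch below, where every rating is hypothetical.

    # Sketch of a per-item comparison between the two rater groups. The
    # original analysis is not specified beyond "statistically significant
    # (p < .05)"; a Welch two-sample t-test is assumed here for illustration,
    # and all ratings are hypothetical.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    for item in range(1, 22):  # the 21 items reported in the comparison above
        faculty = rng.integers(2, 6, size=16)  # nursing faculty ratings, 16 courses
        techs = rng.integers(3, 6, size=16)    # educational technologist ratings
        t, p = stats.ttest_ind(techs, faculty, equal_var=False)
        if p < 0.05:
            diff = techs.mean() - faculty.mean()
            print(f"Q{item}: technologists higher by {diff:.2f} (p = {p:.3f})")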

Qualitative Results: Themes and Examples

Match between course objectives and learning activities

The connection between learning activities present in a course and that course’s stated objectives was queried. In other words, are the things students do in a course well-suited to help them achieve the course’s learning objectives?

Learning objectives in many of the courses we examined were relatively high in Bloom’s taxonomy (Bloom, 1956). Instructors hoped that students would not only acquire knowledge and comprehension, but that they would be able to apply that knowledge, and engage in analysis and synthesis. In many cases it was not clear to raters how the learning activities, which gave students little practice at application, analysis, or synthesis, would help them achieve those higher-level objectives.

One assignment that worked especially well came from a nursing theory course: students were introduced to criteria used to analyze conceptual models/theories and then asked to complete such an analysis. The assignment matched a corresponding high-level course objective, and in addition the professor described to students why this skill was important and how it linked to clinical practice. In contrast, a clinically focused course had a number of lower-level objectives in each content module, but it was not clear how those objectives and the corresponding module activities related to the overall higher-level course objectives. Achievement of those objectives may have been determined in the clinical setting, but this was not described in the course materials.

Faculty and student technical competence is important

Students are increasingly sophisticated users of technology and have little patience for courses that are cumbersome to navigate and not easy to use (Jones, 2002; Ernst et al., 2005). Online courses must therefore be easy to use, consistent in the information provided, as free of errors as possible, and visually appealing, and links to outside information must work. A common mistake that makes navigation more difficult is insufficient use of hyperlinks within the course; for example, instructions for a learning activity may be located in another part of the course without a direct link being provided.

Errors or inconsistent information were found in a few courses. While navigating around a course was not always easy or intuitive, some courses were complimented for providing an easy-to-use structure. The technical requirements expected of students in advance of taking an online course must be clear. Whether and how this information was provided varied considerably across the courses. One good example informed students that they would need access to, and skill in using, PowerPoint to complete a course assignment, and included a link to an outside tutorial for students who might lack that skill.

Students need clear support

When students are provided with, or referred to, support for achieving course objectives, the information must be in an easily accessible and understandable format. For example, if students need to watch a video or listen to an audio file, they must be given information about the needed plug-ins or access to other information. If students are required to create and post online presentations, clear information is needed so they can do the work productively and in a timely fashion.

Comments included the importance of clear information about how and when the instructor is available for student support. Some instructors provided this information prior to the start of the course via email or face-to-face at the start of the course. Others included support information within the course site in a specific location and/or along with specific learning activities and assignments.

One particularly good example was a course that provided a link to an outside resource called “Is Online Right for You?” This site provided students with an assessment of their readiness and disposition toward online academic work, information to help novices understand what to expect, and tips on getting started in an online course. In contrast, some courses expected students to complete a PowerPoint presentation but made no statement about the skills required and provided no support.

Diverse learning styles must be supported

Another salient theme that emerged was related to learning styles. Online nursing courses used a wide variety of learning activities (such as group work of various sorts, scavenger hunts, self-tests, letter-writing, and talking circles), but they tended to be text-heavy and to make little use of varied media.

Examples of good use of media included the use of a photograph and audio recorded message from the faculty member welcoming students to the course, graphic images inserted into text-based content making it more visually appealing, and audio with PowerPoint presentations so students could listen to the faculty member present information on a particular topic with visuals.

Student voice must be present in the course

Two important components of graduate education are student input into the course and higher-level learning outcomes such as analysis and synthesis. Some clinical courses will be heavier in specific, detailed information related to advanced practice specialty content, and students will have less natural opportunity to contribute to the conduct of the course. However, some courses offered, or could have offered, students a chance to pose questions, provide interpretation or synthesis of content, or suggest new areas of discussion. As with the theme of learning styles, student voice will be more apparent in courses with more varied learning activities.

Several good examples of student voice included asking students to reflect on ways they observe research being integrated into their current practice settings and opportunities for further use of research in practice, asking students to bring two questions for face-to-face discussion, and asking students to complete a brief mid-term evaluation to help shape aspects of the remaining course.

Interaction is critical in online courses

The degree of student-student interaction is an important component of online education. While all online nursing courses used online communication tools (email, threaded discussion, synchronous chat), the degree of genuine interaction among students was often quite limited. True interaction occurs when one person’s actions are conditioned by the actions of another person (Wagner, 1994; Wagner, 1997; Thurmond and Wambach, 2004), and this was not observed in a number of the courses. In online asynchronous discussions, students would often just say their piece as required by the assignment and leave, without taking account of or reacting thoughtfully to anyone else’s contributions.

An effective example of students genuinely engaging in conversation was a research assignment in which students posted a summary of several steps of an assignment they were working on, and each gave and received peer comments in a small group via the asynchronous discussion tool. The specific instructions provided by the faculty member for this learning activity likely resulted in the increased level of interaction.

DISCUSSION

Guided in part by Michael Patton’s utilization-focused approach to evaluation (Patton, 1996), data analysis centered on three action categories, representing ways in which we hoped that the findings would be used:

  • course-specific items for potential revision by individual faculty (things professors may want to change or work on in their courses);
  • issues for School of Nursing faculty to address as a group (items having to do with overall standards of quality for online nursing courses in the master’s specialties); and
  • items that reflect a need to improve the evaluation instrument (ambiguities, confusing questions, etc.).

In order to promote the use of the findings, individual faculty were given the data from their course evaluation to use as they desired in continued revisions to their online courses. These data have been presented to the faculty as a whole so that the information can be used school-wide to continue to improve the quality of online courses. Faculty response to the use of the tool has been enthusiastic.

Quantitative Data

Highest and lowest mean scores

A conspicuous feature of the items that received the highest ratings (see Table 1) is that several have to do with the goals and objectives of a course, particularly their presentation and connection with the learning activities and evaluation mechanisms. The school has emphasized the explicit statement of course goals and objectives and their integration into the course in its curriculum development process for a number of years, and this emphasis was reflected in online courses. McGorry (2003) and Martinez, Liu, Watson, and Bichelmeyer (2006) emphasize the importance of these items in online courses.

The lowest-rated items (Table 2) relate to the provision of certain information to students in a course. This evaluation revealed many cases in which course Web-sites used boilerplate information rather than information tailored to a specific course, possibly accounting for the lower ratings given to three of these four items. For instance, with respect to Q7 (technical requirements specified with regard to both skill and equipment), courses frequently simply linked to standard school or university technical requirements related to students having a computer with certain specifications and level of internet connection. However, students were not always informed about what they needed for a particular course, such as required browser plug-ins or multimedia software. McGorry’s (2003) proposed quality model also included several items related to technical support. Regarding Q3 (pre-requisite or prior knowledge required), courses noted that students must be “graduate students in nursing”, rather than describing the background knowledge and skills students need to succeed in a particular course. Online courses with greater specificity would be more helpful to students, but this is a policy issue best decided by the department or school. Boilerplate information can provide a baseline of information for students across courses, but may be misleading in courses that demand more specialized skill and equipment.

Finally, even among the lowest-rated items, the scores were still relatively positive. Q4 (written connection between course objectives and learning activities) was the only item in the group to score below 3 on the 5-point scale. The generally positive scores could be an artifact of sympathetic reviewers; Q4 was the one item for which there was no boilerplate, and therefore the one item for which the lowest possible score was likely.

Making the connection between course activities and learning outcomes explicit facilitates metacognition, that is, students’ ability to monitor their own learning (Bransford, Brown & Cocking, 2000), and may improve motivation (Svinicki, 2004). Making these connections can also lead to a better match between objectives and learning activities. Policy-level decisions recommending or requiring that faculty make explicit connections between objectives and activities would help facilitate student learning.

Changes to evaluation tool and procedures

The discovery that educational technology professionals’ ratings were systematically different from those given by faculty clearly showed the need to improve inter-rater reliability. The instrument was revised in an effort to reduce ambiguity. For example, items related to interactivity and learning styles were clarified by separating questions about learning activities from questions about the use of media. The revised instrument also has open-ended questions related to the presence of interactive course components, a course highlight, and an area for improvement.

After a review of relevant literature (Roblyer and Ekhaml, 2000; Technology and Learning Program (Rubric for Online Instruction), 2003; Taggart, 1999), a scoring rubric (Appendix B) was created, designed to increase the rigor and reduce the subjectivity inherent in the evaluation process. The rubric provides an abstract description of the low, middle, and high levels of success in meeting the criteria for each item, accompanied by specific examples. By defining the extremes and the middle of the scale, the rubric should enable others to use the rating scale, which ranges from “to a very small extent” through “small extent,” “moderate extent,” and “great extent” to “to a very great extent.” A recently published rubric by Blood-Siegfried et al. (2008) followed a similar process and used very similar quality indicators. That group evaluated five master’s-level nursing core courses using a series of 56 statements of quality, and the instrument has been used with faculty new to online teaching. Their rubric, while naming examples of quality indicators, was not developed as a tool for rating the various items.

Qualitative Data

Match between course objectives and learning activities

The finding that objectives did not always match well with activities in a course could be useful for individual faculty members who want their students to achieve higher-level learning objectives. Faculty development programs focusing on the use of more sophisticated active learning strategies (McConnell, Steer, and Owens, 2003) would be helpful.

Faculty and student technical competence is important

There are two issues pertaining to this theme: how intuitive the navigation is and how functional the site is for students to use, and to what degree students are prepared with the technology skills they need to succeed in specific courses. Policy-level decisions are recommended regarding a standard, common approach to course design and support, and common communication to students about the technical competence required for any specific course.

Following this evaluation, the School created an online orientation that all students must complete before beginning their first online graduate course. The orientation ensures a baseline level of technology skill among students and orients them to the common template used in the design of individual courses throughout the School. All support staff have completed courses on online course design so they can support faculty in course development and maintenance.

Students need clear support

Instructors provided information regarding student support resources in a variety of ways. As programs are increasingly available online, this information would ideally be provided in a consistent fashion throughout a program, across courses. Whatever approach is chosen, consistency and clarity are important to facilitate student learning.

Diverse learning styles must be supported

Using both a variety of activities and multiple media will help instructors seeking to provide diversity in their courses, make courses more interesting, and, hopefully, give all students an opportunity to interact with the course in a way that is satisfying given their personal learning preferences. An outcome of this research was a set of examples of nursing courses that make excellent use of both varied activities and varied media, to serve as models for instructors seeking more diversity in their courses.

Interaction is critical in online courses

Student-student interaction was inconsistent across the courses reviewed. Several recent meta-analyses of studies of distance education courses conclude that the presence of student-student interaction in an online course is a strong predictor of learning outcomes in that course (Bernard et al., 2004; Hiltz, Coppola, Rotter, Turoff, and Benbunan-Fich, 2000; Zhao, Yong, Bo, Lai, and Tan, 2005). Insufficient guidance for students may have contributed to the lack of robust interaction in some cases, a finding that overlaps to some degree with the clarity of learner expectations. Research about online interaction has consistently emphasized the importance of providing students with explicit structures to guide their interaction (Horton, 2000; Feenberg & Xin, 2000; Hiltz et al., 2000), and while some courses provided students with excellent guidance on how to use online communication tools interactively, others did not.

Another important consideration with graduate nursing students is that their personal lives are fuller than those of undergraduate students. Graduate students tend to be older, employed at least part time as nurses, and often have families, including young children, that compete for time and attention, reducing the opportunity to converse regularly and in meaningful ways using asynchronous discussion methods. Martinez, Liu, Watson, and Bichelmeyer (2006) identified that distance students may have more difficulty sustaining high levels of engagement. Providing specific instructions and time frames for meaningful interaction, along with other means for dialogue, including live discussion, should help create opportunities for scholarly dialogue for all students. Another outcome stemming from this study is the highlighting, in written reports and oral presentations, of courses that represent best practices in creating robust student-student interaction and in providing scaffolding (starting with simple activities and building to those that are more complex) for students expected to interact in the online environment.

CONCLUSION

Online teaching and learning is an approach to education that continues to grow in response to the need to reach students wherever they are located and whenever they are able to participate in learning. As academic units attempt to increase access by moving programs online, there is a need to ensure the quality of those programs. The project described in this paper represents a collaborative development process that produced a set of theoretically informed, yet practically grounded, pedagogical standards as well as an evaluation tool which operationalized those standards.

This project should provide academic units concerned with quality in online education with a valuable tool for examining course mechanics as well as the quality of engagement and interaction in the “virtual classroom”. Our revised course evaluation instrument, with its accompanying rubric, may also be of value in the development or redesign of courses for online delivery. It is already being used by another university department for redesigning all courses in one of its master’s degree programs. The tool also has the potential to be used to support peer review, either for an entire course or for specific content modules. Similar to a faculty peer attending a face-to-face class session to review the teaching of a colleague, the tool could be used to review a specific content area, providing specific ratings and qualitative feedback.

This project was specifically intended to first develop a tool that could easily be used by faculty and others to evaluate online courses in areas that many have determined are important in quality online learning. Second, the tool was applied to a set of graduate nursing courses. While it was beyond the scope of this study, the tool could be used in combination with student evaluation data (Billings et al., 2001) for broader overall evaluation of online programs.

The evaluation tool and process discussed in this paper are similar to recent examples of program-level evaluation strategies for online programs, including the Quality Matters program (http://www.qualitymatters.org/), the University of the Sciences in Philadelphia learning-centered teaching project (http://www.usp.edu/lct/assessment.shtml), the Rubric for Online Instruction project at CSU Chico (http://www.csuchico.edu/tlp/resources/rubric/rubric.pdf), and the rubric developed by Blood-Siegfried et al. (2008).

Recent doctoral dissertations have made evaluation of online nursing programs their focus (Brown, 2005; Brownrigg, 2005; Kaiser, 2005), suggesting that evaluation of online learning environments is maturing. Nonetheless, additional research related to quality in the online learning environment is needed to examine quality across courses representing programs of study as well as student outcomes, including measures of academic success, professional socialization, and satisfaction with the online learning experience.

Appendix A

Appendix B

Footnotes

*This project was supported by funds from the Division of Nursing (DN), Bureau of Health Professions (BHPr), Health Resources and Services Administration (HRSA), Department of Health and Human Services (DHHS) under grant numbers D09HP00115, NM, WHCNP, and PHN Graduate Education Via Technology, for $1,630,100 and D09HP04068 Technology Enhanced Learning in Graduate Nursing $966,600. The information or content and conclusions are those of the authors and should not be construed as the official position or policy of, nor should any endorsements be inferred by, the Division of Nursing, BHPr, DHHS or the U.S. Government.

REFERENCES

  • Avery MD, Ringdahl D, Juve C, Plumbo M. The transition to web-based education: Enhancing access to graduate education for women’s health providers. Journal of Midwifery and Women’s Health. 2003;48:418–25.
  • Bernard RM, Abrami PC, Lou Y, Borokhovski E, Wade A, Wozney L, et al. How does distance education compare with classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research. 2004;74:379–439. doi: 10.3102/00346543074003379.
  • Billings DM. A framework for assessing outcomes and practices in Web-based courses in nursing. Journal of Nursing Education. 2000;39:60–7.
  • Billings DM, Connors HR, Skiba DJ. Benchmarking best practices in web-based nursing courses. Advances in Nursing Science. 2001;23:41–52.
  • Blood-Siegfried JE, Short NM, Rapp CG, Hill E, Talbert S, Skinner J, Campbell A, Goodwin L. A rubric for improving the quality of online courses. International Journal of Nursing Education Scholarship. 2008;5(1):Article 34. doi: 10.2202/1548-923X.1648. Available at: http://www.bepress.com/ijnes/vol5/iss1/art34.
  • Bloom BS. Taxonomy of educational objectives, handbook I: The cognitive domain. New York: David McKay Co. Inc; 1956.
  • Bransford JD, Brown AL, Cocking RR. How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press; 1999.
  • Brown KM. A case study of the first year of a new online RN from LPN program: Evaluation of program and student outcomes. Central Michigan University. Dissertation Abstracts International. 2005;66(02):807. (UMI No. AAI3163188)
  • Brownrigg JV. Assessment of Web-based learning in nursing: The role of social presence. Dissertation Abstracts International. 2005;66(03):1389. (UMI No. AAI3168648)
  • Chickering AW, Ehrman EC. Implementing the seven principles: Technology as lever. 1996. Retrieved September 5, 2008, from http://www.tltgroup.org/programs/seven.html.
  • Cobb KL, Billings DM, Mays RM, Canty-Mitchell J. Peer review of teaching in Web-based courses in nursing. Nurse Educator. 2001;26:274–9. doi: 10.1097/00006223-200111000-00012.
  • Ernst D, Jorn L, Martyr-Wagner M, McGlynn N, Molgaard L, Sonnack J, Oliver J, Voelker R. Student technology survey executive report. Minneapolis: University of Minnesota Digital Media Center, Office of Information Technology; 2005. Retrieved April 1, 2007, from http://dmc.umn.edu/surveys/students/ssreport04.pdf.
  • Feenberg A, Xin C. A teacher’s guide to moderating online discussion forums: From theory to practice. The TextWeaver Project. 2000. Retrieved January 1, 2006, from http://www.textweaver.org/modmanual4.htm.
  • Hiltz SR, Coppola N, Rotter N, Turoff M, Benbunan-Fich R. Measuring the importance of collaborative learning for the effectiveness of ALN: A multi-measure, multi-method approach. Journal of Asynchronous Learning Networks. 2000;4(2). Retrieved October 30, 2007, from http://www.alnresearch.org/Data_Files/articles/full_text/le-hiltz.htm.
  • Horton S. Web teaching guide: A practical approach to creating course web sites. New Haven, CT: Yale University Press; 2000.
  • Inglis A. Quality improvement, quality assurance, and benchmarking: Comparing two frameworks for managing quality processes in open and distance learning. International Review of Research in Open and Distance Learning. 2005;6:1–13.
  • Jesse DE, Taleff J, Payne P, Cox R, Steele LL. Reusable learning units: An innovative teaching strategy for online nursing education. International Journal of Nursing Education Scholarship. 2006;3:1–12. doi: 10.2202/1548-923X.1260. Retrieved June 30, 2007, from http://www.bepress.com/ijnes/vol3/iss1/art28.
  • Jones S. The Internet goes to college: How students are living in the future with today’s technology. Washington, DC: Pew Internet & American Life Project; 2002. Retrieved October 30, 2007, from http://www.pewinternet.org/pdfs/PIP_College_Report.pdf.
  • Kaiser LM. From presence to “e-presence” in online nursing education. Dissertation Abstracts International. 2005;66(03):1396. (UMI No. AAI3169428)
  • LaPrairie KN, Hinson JM. Trading spaces: Transferring face-to-face courses to the online environment. Journal of Interactive Instruction Development. 2005;18:3–8.
  • Martinez R, Liu S, Watson W, Bichelmeyer B. Evaluation of a web-based master’s degree program: Lessons learned from an online instructional technology program. Quarterly Review of Distance Education. 2006;7:267–83.
  • McConnell D, Steer D, Owens K. Assessment and active learning strategies for introductory geology courses. Journal of Geoscience Education. 2003;51(2):205–216.
  • McGorry SY. Measuring quality in online programs. Internet and Higher Education. 2003;6:159–77. doi: 10.1016/S1096-7516(03)00022-8.
  • Mills AC, Hrubetz J. Strategic development of a master’s program on the World Wide Web. Journal of Professional Nursing. 2001;17:166–72. doi: 10.1053/jpnu.2001.24864.
  • Patton MQ. Utilization-focused evaluation. Thousand Oaks, CA: Sage; 1996.
  • Phipps R, Merisotis J. Quality on the line: Benchmarks for success in internet-based distance education. Washington, DC: Institute for Higher Education Policy; 2000.
  • Roblyer MD, Ekhaml L. How interactive are your distance courses? A rubric for assessing interaction in distance learning. Online Journal of Distance Learning Administration. 2000;3(2). Retrieved October 30, 2007, from http://www.westga.edu/%7Edistance/ojdla/summer32/roblyer32.pdf.
  • Technology and Learning Program. Rubric for online instruction. Chico, CA: California State University, Chico; 2003. Retrieved November 29, 2008, from http://www.csuchico.edu/tlp/resources/rubric/rubric.pdf.
  • Svinicki MD. Learning and motivation in the postsecondary classroom. Bolton, MA: Anker Publishing Company; 2004.
  • Taggart GL. Rubrics: A handbook for construction and use. New York: Scarecrow Education; 1999.
  • Thiele JE. Learning patterns of online students. Journal of Nursing Education. 2003;42:364–6.
  • Thurmond V, Wambach K. Towards an understanding of interactions in distance education. Online Journal of Nursing Informatics (OJNI). 2004;8(2). Retrieved November 29, 2008, from http://www.eaa-knowledge.com/ojni/ni/8_2/interactions.htm.
  • Wagner ED. In support of a functional definition of interaction. The American Journal of Distance Education. 1994;8(2):6–26.
  • Wagner ED. Interactivity: From agents to outcomes. In: Cyrs TE, editor. Teaching and learning at a distance: What it takes to effectively design, deliver, and evaluate programs. San Francisco: Jossey-Bass Publishers; 1997.
  • Wright CR. Criteria for evaluating the quality of online courses. (n.d.). Retrieved October 30, 2007, from http://elearning.typepad.com/thelearnedman/ID/evaluatingcourses.pdf.
  • Zhao Y, Yong L, Bo Y, Lai C, Tan S. What makes the difference? A practical analysis of research on the effectiveness of distance education. Teachers College Record. 2005;107(8):1836–1884. doi: 10.1111/j.1467-9620.2005.00544.x. Retrieved November 29, 2008, from http://www.tcrecord.org/library/content.asp?contentid=12098.
