Objective: This article explores the development and implementation of the Medical Library Association (MLA) Benchmarking Network from the initial idea and test survey, to the implementation of a national survey in 2002, to the establishment of a continuing program in 2004. Started as a program for hospital libraries, it has expanded to include other nonacademic health sciences libraries.
Methods: The activities and timelines of MLA's Benchmarking Network task forces and editorial board from 1998 to 2004 are described.
Results: The Benchmarking Network task forces successfully developed an extensive questionnaire with parameters of size and measures of library activity and published a report of the data collected by September 2002. The data were available to all MLA members in the form of aggregate tables. Utilization of Web-based technologies proved feasible for data intake and interactive display. A companion article analyzes and presents some of the data. MLA has continued to develop the Benchmarking Network with the completion of a second survey in 2004.
Conclusions: The Benchmarking Network has provided many small libraries with comparative data to present to their administrators. It is a challenge for the future to convince all MLA members to participate in this valuable program.
Historically, hospital and other special library members of the Medical Library Association (MLA) have had neither the organization nor the impetus to gather ongoing statistics on their activities. Such statistics could be used to compare services, establish best practices, make management decisions, or conduct research projects. Local and regional surveys have been done in the past and sporadically reported in the literature. In 1986, the Hospital Libraries Section (HLS) created a Task Force on Hospital Library Statistics to “collect, synthesize and make available statistics on hospital libraries nationwide”. It became apparent, however, that it would be too costly an undertaking for HLS to do a survey. This activity inspired the American Hospital Association (AHA) librarians to push for a survey of hospital libraries. AHA agreed to do this as part of its annual survey in 1990. AHA survey results reported the type of services in hospital libraries but not any measures of activity of those services. In 1991, the continuing HLS Task Force on Hospital Library Statistics reported on its cooperation with the AHA survey and its activity with other MLA committees in submitting a grant proposal to the National Library of Medicine to create a database of libraries in the health sciences that “stressed the continuity of collection and maintenance of statistics pertinent to health sciences libraries”. While the grant was approved in February 1991, it was never funded.
The development of MLA standards for hospital libraries in 1984, 1994, and 2000 pointed out the need for better statistics for hospital libraries to better identify and present best practices [5–8]. Concurrent with this ongoing need, the new technologies of the microcomputer and the Web developed. Also, in the late 1990s, health care economic forces caused many hospital libraries to be closed. The development and implementation of the MLA Benchmarking Network came about at the intersection of these needs, economics, and new technologies.
The MLA Benchmarking Network was born in an environment of managed care. During the last decade of the twentieth century, the health care industry, especially the hospital sector, was transformed by marketplace factors. Because of managed care's emphasis on cost control, many hospitals embraced plans for merger as a strategy to decrease costs [9, 10]. It was reported that in 1996 a record number of 768 hospitals merged. As more hospitals merged, many departments were consolidated across hospitals.
This departmental consolidation trend affected hospital libraries. The hospital library literature of the 1990s contains a number of examples of libraries that were consolidated, downsized, or eliminated following a merger [12, 13]. MLA '97, the 1997 MLA annual meeting, included a session on managed care, “Survival of the Fittest (or) It's a Jungle Out There”. Managed care and its effect on libraries had become a major concern of MLA.
The MLA Board decided to address the issue in two ways. First, the MLA president met with the administrators of several large health care systems to advocate for the critical role that hospital libraries played in consolidated organizations. Second, board members interviewed various librarians involved in mergers to determine what assistance MLA might provide. Both administrators and librarians agreed on one useful strategy: providing comparative data from a large group of hospital libraries that could lead to improved library services.
MLA leaders began investigating the feasibility of providing comparative data to hospital libraries. Two models were available: the Association of Academic Health Sciences Libraries (AAHSL) had administered an annual survey that included specific library operations data since 1975 [15, 16], and the Canadian Health Libraries Association/Association des Bibliothèques de la Santé du Canada (CHLA/ABSC) had developed a benchmarking toolkit to assist its members in collecting and analyzing data.
In the 1990s, benchmarking had become an important management tool in hospitals. Benchmarking is based on the belief that someone, somewhere, may be doing a process better than others. The goal of benchmarking is to improve performance by examining how other organizations execute similar tasks and adopting the best practices found. Benchmarking uses ratios to provide a common “yardstick” for evaluating performance and efficiency in certain areas. To apply the technique, a library identifies benchmarking partners and works with them in a structured way to compare processes.
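As a purely illustrative sketch, such yardstick ratios are computed by dividing a measure of activity by a parameter of size. The library names and figures below are hypothetical, not drawn from any survey:

```python
def benchmark_ratio(activity, size):
    """Return an activity measure normalized by a size parameter."""
    if size <= 0:
        raise ValueError("size parameter must be positive")
    return activity / size

# Hypothetical libraries: (interlibrary loans per year, staff FTEs).
libraries = {
    "Library A": (1200, 2.0),
    "Library B": (900, 1.5),
    "Library C": (400, 0.5),
}

# A common yardstick: interlibrary loans handled per staff FTE.
ill_per_fte = {name: benchmark_ratio(loans, fte)
               for name, (loans, fte) in libraries.items()}

# The highest ratio flags a potential benchmarking partner whose
# process may embody a best practice worth studying.
best = max(ill_per_fte, key=ill_per_fte.get)
```

In this toy data set, Library C handles the most loans per FTE despite being the smallest, which is exactly the kind of signal that would prompt a structured comparison of processes.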
MLA member librarians who did not participate in the AAHSL data collection had no comparable set of statistics with which to easily identify benchmarking partners. Under pressure from their administrations to benchmark, many librarians put out calls for partners and statistics on email discussion lists. Even without using formal benchmarking techniques, librarians could, as the AAHSL experience showed, use comparative data to show their administrators how other libraries in like-size institutions performed. Interest within MLA in benchmarking resulted in a special session, “Benchmarking: Collecting and Analyzing Data Effectively,” presented at MLA '96. Formal development of the Benchmarking Network began in 1998 with the meeting session, “Empowering Through Benchmarking”.
At MLA '98, the MLA Board passed a resolution to form the MLA Benchmarking Task Force. During the 1998/99 association year, the task force was formed and its charge set: “to define, develop, and evaluate a coordinated and comprehensive Web-based medical library benchmarking tool that will enable members to establish best practices, compare important operations, and define appropriate statistics for negotiating with administrators”. Web-based data input was seen as the most economical way to carry out such a survey, based on examples of this type of technology already available in 1998. The chair of the Benchmarking Task Force, Bernie Todd Smith, divided the task force into several functional teams, all contributing to accomplishing the ultimate goal of preparing MLA membership for a beta test survey and developing the Benchmarking Network Website to carry out the Web-based survey. A tentative project schedule was set up to itemize the steps of the project. Even with delays, the beta test survey was finished by April 2001. Table 1 presents a calendar of the Benchmarking Network's work and accomplishments.
The functional teams included publicity, training, administrative, content, information systems, and outcomes. Through various MLA communication venues, the Publicity Team worked to alert the MLA membership about the value of benchmarking and the progress that the task force was making in developing a usable tool. The Training Team reached out to MLA chapters to develop local educators. The Administrative Team investigated tentative models for participation in and access to the Benchmarking Network. The Content Team performed the crucial work of developing the beta survey instrument. The Information Systems Team developed the initial Web-based, beta test, survey intake site and reporting site. The Outcomes Team studied various ways to report the data.
Using a Web-based intake form to gather statistics required that all members of MLA be informed where and when to input their data. The Publicity Team's role was to introduce MLA hospital library members to the concepts and importance of benchmarking and standardized statistics, and they accomplished this task through a series of articles in the HLS newsletter, National Network. Todd Smith presented the vision of the task force and suggested data for future participants to collect. Goodwin and Harris presented case studies about the difficulty of doing benchmarking without data already present [23, 24]. The Publicity Team's chair, Susan Schweinsberg Long, AHIP, wrote four informative updates to explain the project and encourage participation in the beta test survey [25–28].
The Training Team, under the leadership of Jacqueline Donaldson Doyle, AHIP, FMLA, determined that decentralized education about MLA's benchmarking initiative would be crucial. The team recruited benchmarking chapter educators (BCEs) from each of the fourteen MLA chapters and provided these educators with in-depth information and training. BCEs were trained on the benchmarking beta tool so that they could provide their local chapter members with one-on-one support. This network of educators proved to be the backbone of the project, providing marketing and promotion as well as education to all members. A constant flow of emails notified MLA members of when and how to input their data. Much of the subsequent success in achieving widespread participation in the Benchmarking Network can be attributed to the hard work of the locally based BCEs.
The board discussed how participation in such a project would affect the membership structure and finances of MLA. Initially, it was suggested that only MLA institutional members participate, making the project a benefit of that type of membership, because the project benefited libraries as institutions rather than individuals. This proposal proved controversial among HLS members, and the MLA Board ultimately decided to allow any MLA member to participate. These issues, settled after the first survey in 2002, were discussed at many MLA and HLS board meetings during the initial development of the Benchmarking Network. Debra Rand, AHIP, chair of the Administrative Team, worked with MLA staff to further the discussions of these sometimes controversial models for participation and access. Instructions on intake made it clear that only one person in each library was to fill out the survey. When one member managed two libraries, arrangements were made to enter data for both.
The Content Development Team, led by Janice Kaplan, was charged with investigating content issues related to the benchmarking instrument and recommending the best components for a final minimum data set. The first goal of the Content Development Team was to determine and define a select minimum data set by January 2000 to be used in the beta test survey. The minimum data set was to contain data elements common to all health sciences libraries, thus encompassing both hospital libraries and academic medical centers.
The data set was based on a set developed for the North Atlantic Health Sciences Librarians (NAHSL) benchmarking project, winner of the Majors/MLA Chapter Project of the Year for 1999. That project grew out of NAHSL meetings as early as 1995; its purpose was to gather library data that NAHSL members could use for information sharing, strategic planning, staffing, budget comparison, and education. Using the NAHSL data set and a selection of data elements appearing in both the AAHSL survey and the CHLA/ABSC benchmarking project, the team developed minimum data sets for each section of the survey.
A paper-based survey instrument was pretested several times by the entire task force as well as at chapter meetings in the fall of 1999 with the assistance of the BCEs. The data were divided into five categories:
Part of the content development project was a glossary. Definitions for terms used in the questions were developed. Standardized definitions from reputable associations (e.g., AHA) and standards bodies (e.g., Joint Commission on Accreditation of Healthcare Organizations) or definitions that had long-standing acceptance in the library community were used whenever possible and cited. With newer or library-specific concepts, the committee worked to bring accuracy to the definitions that they developed themselves.
The Information Systems Team, headed by MLA staff member Kate Corcoran, reviewed a number of Web survey technologies. The team decided that a customized option was the most useful, as nothing similar enough to the envisioned project was then available. For the beta test, the team developed initial hypertext markup language (HTML) interface pages, based on content developed by the Content Team, and tested their usability before contracting the back-end PHP and MySQL database programming to Ego-Systems. Popup links were embedded in the code to allow review of specific definitions without leaving the pages. Once entered, data could be edited by the participant until the survey deadline. Some fields were programmed to only accept certain data types (e.g., numeric only). Based on pretesting of the Web-based questionnaires, the team and contractor made minor revisions and corrections to both forms and programming.
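The type-constrained field handling described above might be sketched as follows. This is an illustrative example in Python rather than the PHP and MySQL actually used by the contractor, and the field names and whole-number rule are hypothetical, not the survey's real schema:

```python
def validate_record(record, numeric_fields):
    """Check a submitted survey record against numeric-only fields.

    Returns a list of error messages; an empty list means the record
    passed validation. Blank entries are allowed (a question may be
    skipped), but any value supplied for a numeric field must be a
    whole number.
    """
    errors = []
    for field in numeric_fields:
        value = record.get(field, "")
        if value != "" and not str(value).isdigit():
            errors.append(f"{field}: expected a whole number, got {value!r}")
    return errors

# A hypothetical submission with one bad value.
record = {"staff_fte": "3", "annual_ill": "twelve"}
errors = validate_record(record, ["staff_fte", "annual_ill"])
```

Server-side checks like this complement, rather than replace, the client-side restrictions on the intake form, since submitted data can bypass the form.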
The benchmarking beta test survey was a pilot study of the questionnaire and a beta test of the Web collection process. During 2000, the task force solicited approximately 1,500 hospital librarians to fill out an extensive questionnaire (over 100 questions) on the Web in the Members-Only section of the MLANET Website. Eighty-four participants entered their data on the beta test survey site during the four-month entry period in early 2001. After review of the data, eleven records were eliminated for various reasons (e.g., incompleteness). The remaining seventy-three represented roughly a 5% return on the request for beta data.
The task force noted the difficulty in getting members to participate in the beta test. Showing the results and publicizing the project through the BCEs was judged a priority if the full survey was to succeed. It was felt that at least 275 participants (18% of the HLS membership) would be needed to give the survey a sufficient level of confidence, assuming a population of about 1,500 hospital libraries, then the approximate number of HLS members. With the successful completion of the beta test survey, the Benchmarking Task Force's term was finished, and it recommended to the MLA Board that a full survey be implemented.
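One conventional way to assess whether a sample of 275 from a population of 1,500 is sufficient is to compute the margin of error for a proportion, applying a finite-population correction. The sketch below illustrates that calculation; it is not necessarily the method the task force itself used to set the target:

```python
import math

def margin_of_error(n, population, p=0.5, z=1.96):
    """Margin of error for an estimated proportion p from a sample of n,
    drawn from a finite population, at the confidence level implied by z
    (z = 1.96 gives 95%). p = 0.5 is the most conservative assumption."""
    standard_error = math.sqrt(p * (1 - p) / n)
    # Finite-population correction shrinks the error when the sample
    # is a substantial fraction of the population.
    fpc = math.sqrt((population - n) / (population - 1))
    return z * standard_error * fpc

# 275 respondents out of an assumed 1,500 hospital libraries yields a
# margin of error of roughly +/-5 percentage points at 95% confidence.
moe = margin_of_error(275, 1500)
```

Larger samples tighten the estimate: doubling the sample to 550 would cut the margin of error to roughly 3.3 points under the same assumptions.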
An MLA Benchmarking Implementation Task Force was established in 2001, chaired by Rand, with some members continuing from the previous task force and some new members. The charge of the Benchmarking Implementation Task Force was “to define and develop outcome measures, evaluate the success of the networking effort, and consult with headquarters staff about the project”. The task force was again organized into teams, including a Content Team, an Outcomes Team, and an Education Team. Feedback from beta test survey participants, HLS members, MLA chapter members, and BCEs was used to make important revisions to the process, the survey content, and the Web interface.
Initially, the task force considered a model of continuous data entry (i.e., participants could add or change their data at any time via the Website, and reports could be generated as data were entered). However, to allow members to more accurately assess and use the benchmarking data, the task force decided to switch to the AAHSL model of a finite open data-entry period followed by an editing and evaluation period. A three-month window for data entry was deemed appropriate. All MLA hospital and special library members were encouraged to participate. AAHSL member libraries were not asked to participate in the benchmarking survey, because plans called for the possible merging of MLA data with AAHSL data in the future; thus, AAHSL libraries would not need to enter their data twice. During most of these planning stages, the editor of the AAHSL survey, James Shedlock, AHIP, was a member of the task force and encouraged keeping the data elements similar so that the two data sets could be integrated in the future.
The Content Team analyzed the test data for comprehensiveness and accuracy. The beta test survey included a comment area, and user comments were very useful in revising the questions. Additional data elements were included, and a section for libraries functioning in health systems was added to accommodate the different data patterns of libraries that had been merged into larger systems.
Revision was an arduous task, because each data item had to be analyzed with a view to how well the question or series of questions was understood. It was often clear from the beta responses that participants had not understood the intention of the question; if this happened repeatedly with a single question, the committee knew that it (or a definition associated with that question) needed to be revised. For example, “How many databases does the library use?” is confusing compared to “Report the number of externally produced bibliographic databases for which you have purchased access for your users, including purchase through consortial contracts.” Clarity in survey directions and in the definitions of terminology was essential to enhance understanding of the survey. Among the seventy-three measures of activity, forty-five questions required data input and twenty-eight questions only required a yes or no answer. The full set of questions and definitions can be found in Appendixes A and B of the companion article.
To demonstrate the power of a future national database of size parameters and activity measures for hospital library management, the Outcomes Team, chaired by Rosalind Farnam Dudden, AHIP, FMLA, examined the data gathered in the beta test survey. Data tables were placed on the MLANET Benchmarking Network Website. The Outcomes Team presented averages for various hospital and library activity measures by average hospital bed size and total library FTEs. With only seventy-three responses, librarians were cautioned that the data should not be used as a library management tool. The team also developed several scenarios illustrating how the benchmark data could be used to answer specific questions. These data were put into an electronic presentation and circulated to the BCEs to use in promoting the first survey. It was also shown to the MLA Board in print form, which resulted in continuing MLA financial support of the project.
The Benchmarking Implementation Task Force put significant effort into promotional and educational activities to encourage maximum participation in the live Web survey.
The Web data intake pages and database were revised and tested. The Benchmarking Network's first data entry period was December 15, 2001, to March 4, 2002. During this period, BCEs actively promoted participation via extensive email communications. Some of the chapters provided incentives for participation, such as a free personal digital assistant (PDA) or free conference registration. Task force members and BCEs fielded questions from participants as needed. To participate, MLA members (personal or institutional) would log on to the network with their MLA IDs and passwords.
The first Benchmarking Network survey was considered an unqualified success, with a total of 385 members submitting data and participation from every MLA chapter area. Based on the types and numbers of libraries entering data, the task force decided to restrict the final analysis and reporting to the 344 hospital libraries that participated. These numbers represent between a 16% and 23% return rate. Another important activity of the task force was to review and edit the submitted data for accuracy.
The Outcomes Team, which had previously analyzed the beta test survey results, was reactivated in April 2002. The team had four basic plans for reporting the data of the 344 hospital libraries: (1) aggregate tables based on parameters of hospital or library size, (2) scenarios or answers to specific questions, (3) an interactive report site where participants and others could request reports based on chosen parameters of size or measures of activity, and (4) sale of the entire Excel data file for benchmarking or research. The team worked on the aggregate tables with MLA headquarters staff, and MLA headquarters staff developed the interactive site. Neither sale of the data set nor updated scenarios were completed: no one asked to buy the data set, and the scenarios proved too difficult to produce with volunteer time.
The aggregate tables are open to all MLA members on the MLANET Members-Only Website. The interactive site is open to participants, or access can be purchased. The interactive report includes an abbreviated list of institutions (name, city, state) that match the selected parameters of size. With this list, participants now have the opportunity to locate benchmarking partners for full benchmarking projects as outlined in “The MLA Benchmarking Network Survey Participant's Guide to Finding Benchmarking Partners” (Appendix). The task force developed this tool in summer 2003.
In January 2003, the task force began the process of updating the content of the survey for the next intake period. There was a clear desire to expand the Benchmarking Network beyond hospital and health system libraries, and some of the hospital library–related questions also needed revision. Significant usability changes were implemented on the Web intake form, including self-calculating fields and clarified definitions (e.g., for clinical medical librarian programs). New questions were added in several categories, including PDA support and number of public access computer workstations. The majority of questions stayed the same, so that participants from the 2002 survey would be able to compare their current and previous data. The area of electronic journals required some difficult data changes, however. The input pages were designed so that participants' previously entered data would appear for reference, but new data would still need to be entered in the appropriate fields.
Patterns of responses for libraries in health systems showed three models of health system type: multiple libraries centrally managed, multiple libraries with autonomous management, or system libraries with a combination of centralized and autonomous characteristics. Specific instructions were provided on how to complete the survey, depending on which model seemed the best fit. The question on care categories of a hospital was also revised so that pediatric, cancer, and osteopathic hospitals could more easily benchmark with each other.
Institution types were expanded to include health association, research institution, area health education center (AHEC), college or university (not reporting to AAHSL), or consumer health information services located in a separate facility. A separate set of profile questions was developed for health association libraries and research facility libraries. Questions regarding consumer health services were threaded throughout the survey so that all types of libraries could answer them. Most of these changes were developed by the Benchmarking Implementation Task Force prior to MLA '03 and were discussed at length at the meeting.
The charge of the Benchmarking Implementation Task Force was essentially completed at the end of its two-year term in May 2003. The task force recommended to the MLA Board that, to provide appropriate support and guidance for the association's benchmarking efforts, an editorial board should be established. The MLA Board approved the recommendation at its May 2003 meeting. The charges of the Benchmarking Network Editorial Board (BNEB), chaired by Michelle Volesko Brewer, in collaboration with MLA staff, are to:
The BNEB continued the process of editing the questions, and a new, updated survey was available for data input on the Web on March 19, 2004, with all non-AAHSL MLA members encouraged to participate. By the closing date of July 11, 2004, 373 libraries, 316 of which were hospitals, had reported their data. With the new BNEB, editing delays occurred. The MLA Benchmarking Network 2004 survey results became available to participants on an interactive Website in November 2005.
Librarians at small health sciences libraries, especially hospital libraries, have been seeking comparable statistics since the 1980s. Inspired by the AAHSL annual survey and the Canadian benchmarking project, more than fifty MLA members and key MLA staff have served on task forces and boards for this benchmarking enterprise. Together, they have worked for seven years to develop, administer, and tabulate two nationwide surveys. Both surveys achieved participation rates that resulted in a high level of confidence. The details of how this program was put together are important examples of MLA's devoted members and staff and their effective advocacy for small health sciences libraries.
A challenge for the future will be to convince all MLA members to enter their data. The BCEs in some states called every MLA member who would be eligible to enter data. Their reports to the task force showed three types of responses:
The BNEB will need to address the beliefs of the second group and demonstrate the benefits of the project. As to the challenge of the third group, a time-saving tool for gathering standardized statistics is now available in the form of the Benchmarking Network data collection worksheet and definitions list. This tool will help librarians start to collect data and report in the next survey. Whether these librarians work in a one-person setting or not, Tomlin advises them that instead of being so busy that they have to work around daily roadblocks, they should sit back and analyze the situation, make the changes necessary to unblock the roads, and report their experiences. She notes that, while in the past comparative figures were difficult to get, the Benchmarking Network is “a prime opportunity to contribute hard numbers to a project in a profession where intangibles have been the rule”.
Another challenging aspect of the project will be educating librarians in how to use the data that have been collected. Continuing education courses and newsletter articles need to be developed to reach all librarians and show them the value of the data in promoting their libraries. Librarians who have used the data successfully, or perhaps not successfully, need to report their activities in newsletters and journal articles.
Another challenge for the future will be to keep up the momentum and convince more MLA members and leadership of the importance of the Benchmarking Network as an association project. Committing association resources to the project so that data can be collected and reported in a timely manner will need to be addressed by the leadership. The AAHSL survey has often been mentioned in this article as a star example. While the survey was not part of the original goals of the organization at its founding in 1978, “sponsorship of the Annual Statistics set the stage for AAHSL to speak as the authoritative source for information about medical school libraries”. The Benchmarking Network surveys, when combined with parts of the AAHSL survey as planned in the future, will allow MLA to speak as the authoritative source for information about all health sciences libraries in the United States and Canada.
The history of hospital libraries and other nonacademic health sciences libraries showed a dearth of easily accessible statistics that could help them administratively. MLA has now developed a very effective statistical tool in the Benchmarking Network survey. Convincing each MLA member library to contribute to this shared database will be the future challenge.
The Benchmarking Network Project has already had an impact on the association. Anecdotally, the authors have received many reports of libraries using the data to successfully address issues with their administrations. At the same time, the authors know of libraries that have closed because the administration had already made up its mind and no statistic would have changed the decision. A number of annual meeting presentations are assisting members in developing models for using the benchmarking data. Statistics alone are unlikely to convince administrators. But the extensive and current statistics from the MLA Benchmarking Network surveys, in combination with good management practices and good communication skills, will allow hospital librarians and other nonacademic health sciences librarians to present a positive picture of the need for library services in the face of hospital-wide budget reductions.
The authors acknowledge the contributions of the many MLA members who served on the teams involved in this project and of all the MLA members who took valuable time to enter their data—twice!
Benchmarking is comparing an organization's performance with the best competitor's performance to achieve quality improvement. By using the “Interactive Reporting” in the MLA Benchmarking Network, you can instantly compare your library to others by:
The steps for constructing your “Interactive Report” are simple:
The Benchmarking Network does not indicate the ranking of the various institutions on the specific questions (i.e., you will not know which institution achieved the maximum score). You must fine-tune your similarity criteria so that your resulting list includes more than five institutions. (For confidentiality reasons, institutional names are withheld when the result is five total institutions or fewer, including your own.) You can use the back arrow to quickly input your revised request.
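The five-institution confidentiality rule can be illustrated with a short sketch. The records and the bed-size filter here are hypothetical and are not the network's actual implementation:

```python
# Names are shown only when more than five institutions (including the
# requester's own library) match the chosen parameters of size.
MINIMUM_DISCLOSABLE = 6

def matching_names(records, beds_min, beds_max):
    """Return the names of matching institutions, or None when the
    match list is small enough that names must be withheld."""
    matches = [r["name"] for r in records
               if beds_min <= r["beds"] <= beds_max]
    return matches if len(matches) >= MINIMUM_DISCLOSABLE else None

# Ten hypothetical hospitals with bed counts 100, 120, ..., 280.
records = [{"name": f"Hospital {i}", "beds": 100 + 20 * i}
           for i in range(10)]

shown = matching_names(records, 100, 220)    # 7 matches: names disclosed
withheld = matching_names(records, 100, 160) # 4 matches: names withheld
```

If a request comes back with names withheld, widening the size criteria (e.g., a broader bed range) until more than five institutions match is exactly the fine-tuning the guide recommends.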
Once you have a viable list of institutions, you may decide to contact several institutions to determine how these institutions achieve their service or budget levels. In analyzing the processes, you move from simple comparative benchmarking to process benchmarking:
You will first want to decide which process you want to improve. Then, using your list of institutions from the interactive report, search the MLANET online member directory to find complete contact information <http://www.mlanet.org/members/directory/>.
A conversation with a colleague you do not know might begin like this:
We have both entered our data into MLA's Benchmarking Network, and now I would like to take the next step by finding partners to identify best practices. Would you be interested in working with me (and perhaps a couple of other member institutions) in delving a little further into our practices (your process here, e.g., interlibrary loan)? Our institutions have some similarities… I would like to find out more about the process, not just the numbers of loans.
Also, you might want to offer the partner something in return (e.g., nonconfidential information from the other institutions you are contacting).
For further information and assistance in benchmarking:
*Based on presentations at MLA '02, the 102nd Annual Meeting of the Medical Library Association; Dallas, TX; 2002; and MLA '03, the 103rd Annual Meeting of the Medical Library Association; Orlando, FL; 2003.
†Taken from: MLA Benchmarking Network survey participant's guide to finding benchmarking partners. [Web document]. Chicago, IL: Medical Library Association, 2003. [cited 15 Jan 2005]. <http://www.mlanet.org/members/pdf/bn_partner_guide.pdf>.