At MLA '98, the MLA Board passed a resolution to form the MLA Benchmarking Task Force. During the 1998/99 association year, the task force was formed and its charge set: “to define, develop, and evaluate a coordinated and comprehensive Web-based medical library benchmarking tool that will enable members to establish best practices, compare important operations, and define appropriate statistics for negotiating with administrators” [21]. Web-based data input was seen as the most economical way to carry out such a survey, based on examples of this type of technology already available in 1998. The chair of the Benchmarking Task Force, Bernie Todd Smith, divided the task force into several functional teams, all contributing to the ultimate goal of preparing the MLA membership for a beta test survey and developing the Benchmarking Network Website to carry out the Web-based survey. A tentative project schedule itemized the steps of the project. Even with delays, the beta test survey was finished by April 2001. Table 1 presents a calendar of the Benchmarking Network's work and accomplishments.
Table 1 Timeline and activity of the benchmarking task forces and editorial board
The functional teams included publicity, training, administrative, content, information systems, and outcomes. Through various MLA communication venues, the Publicity Team worked to alert the MLA membership about the value of benchmarking and the progress that the task force was making in developing a usable tool. The Training Team reached out to MLA chapters to develop local educators. The Administrative Team investigated tentative models for participation in and access to the Benchmarking Network. The Content Team performed the crucial work of developing the beta survey instrument. The Information Systems Team developed the initial Web-based, beta test, survey intake site and reporting site. The Outcomes Team studied various ways to report the data.
Using a Web-based intake form to gather statistics required that all members of MLA be informed where and when to input their data. The Publicity Team's role was to introduce MLA hospital library members to the concepts and importance of benchmarking and standardized statistics, and they accomplished this task through a series of articles in the HLS newsletter, National Network.
Todd Smith presented the vision of the task force and suggested data for future participants to collect [22]. Goodwin and Harris presented case studies illustrating the difficulty of benchmarking without existing data [23, 24]. The Publicity Team's chair, Susan Schweinsberg Long, AHIP, wrote four informative updates to explain the project and encourage participation in the beta test survey [25].
The Training Team, under the leadership of Jacqueline Donaldson Doyle, AHIP, FMLA, determined that decentralized education about MLA's benchmarking initiative would be crucial. The team recruited benchmarking chapter educators (BCEs) from each of the fourteen MLA chapters and provided these educators with in-depth information and training. BCEs were trained on the benchmarking beta tool so that they could provide their local chapter members with one-on-one support. This network of educators proved to be the backbone of the project, providing marketing and promotion as well as education to all members. A constant flow of emails notified MLA members of when and how to input their data. Much of the subsequent success in achieving widespread participation in the Benchmarking Network can be attributed to the hard work of the locally based BCEs.
The board discussed how participation in such a project would affect the membership structure and finances of MLA. Initially, it was suggested that only MLA institutional members participate, making the project a benefit of that type of membership, because the project benefited libraries as institutions rather than individuals. This suggestion proved controversial among HLS members, and the MLA Board ultimately allowed any MLA member to participate. These issues were discussed at many MLA and HLS board meetings during initial development of the Benchmarking Network and were settled after the first survey in 2002. Debra Rand, AHIP, chair of the Administrative Team, worked with MLA staff to further the discussions of these sometimes controversial models for participation and access. Intake instructions made clear that only one person in each library was to fill out the survey; when one member managed two libraries, arrangements were made to enter data for both.
Content Development Team
The Content Development Team, led by Janice Kaplan, was charged with investigating content issues related to the benchmarking instrument and recommending the best components for a final minimum data set. The first goal of the Content Development Team was to determine and define a select minimum data set by January 2000 to be used in the beta test survey. The minimum data set was to contain data elements common to all health sciences libraries, thus encompassing both hospital libraries and academic medical centers.
The data set was based on one developed for the North Atlantic Health Sciences Librarians (NAHSL) benchmarking project, winner of the Majors/MLA Chapter Project of the Year for 1999 [29]. This project grew out of NAHSL meetings as early as 1995; its purpose was to gather library data for NAHSL members to use in information sharing, strategic planning, staffing, budget comparison, and education [30]. Using the NAHSL data set and a selection of data elements appearing in both the AAHSL survey and the CHLA/ABSC benchmarking project, the team developed minimum data sets for each section of the survey.
A paper-based survey instrument was pretested several times by the entire task force as well as at chapter meetings in the fall of 1999 with the assistance of the BCEs. The data were divided into five categories:
- Library Profile: demographic measures, such as hospital bed size, teaching or nonteaching hospital, state, census, region, etc., by which groups of libraries could be categorized
- Administration: measures of library data, including budget data, personnel (full-time equivalents [FTEs] and volunteers), hours, square footage, and overall usage statistics
- Public Services: measures of public use, including interlibrary loan statistics, mediated and end-user searches, reference questions, photocopy statistics and practices, formal instructional classes, and gate statistics
- Technical Services: measures of the library's collection, including number of journal subscriptions, books, audiovisuals, and public access workstation availability
- Special Services: measures of various special services, including consumer health services, oversight of institutional archives, Web access to various library services, clinical medical librarian programs, and involvement in continuing medical education
Part of the content development project was a glossary. Definitions were developed for the terms used in the questions. Whenever possible, standardized definitions from reputable associations (e.g., AHA) and standards bodies (e.g., Joint Commission on Accreditation of Healthcare Organizations), or definitions with long-standing acceptance in the library community, were used and cited. For newer or library-specific concepts, the team developed its own definitions and worked to ensure their accuracy.
Information Systems Team
The Information Systems Team, headed by MLA staff member Kate Corcoran, reviewed a number of Web survey technologies. The team decided that a customized option was the most useful, as nothing similar enough to the envisioned project was then available. For the beta test, the team developed initial hypertext markup language (HTML) interface pages, based on content developed by the Content Team, and tested their usability before contracting the back-end PHP and MySQL database programming to Ego-Systems. Popup links were embedded in the code to allow review of specific definitions without leaving the pages. Once entered, data could be edited by the participant until the survey deadline. Some fields were programmed to only accept certain data types (e.g., numeric only). Based on pretesting of the Web-based questionnaires, the team and contractor made minor revisions and corrections to both forms and programming.
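The original intake site's field validation was written in PHP by the contractor and is not reproduced here; the following is only a hypothetical Python sketch of the kind of numeric-only rule described above, applied to a field value before it is accepted.

```python
import re

# Hypothetical illustration of a numeric-only field check, as the beta
# intake form enforced for certain fields (e.g., counts or budget figures).
# The actual implementation was part of the contracted PHP/MySQL back end.
NUMERIC_ONLY = re.compile(r"^\d+(\.\d+)?$")

def accepts_numeric(value: str) -> bool:
    """Return True only for non-negative integers or decimals, e.g. '12' or '3.5'."""
    return bool(NUMERIC_ONLY.match(value.strip()))
```

A check like this would reject free-text entries such as "about 12" at input time, so that stored records could be aggregated without cleanup.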
Benchmarking beta test survey
The benchmarking beta test survey was a pilot study of the questionnaire and a beta test of the Web collection process. During 2000, the task force solicited approximately 1,500 hospital librarians to fill out an extensive questionnaire (over 100 questions) on the Web in the Members-Only section of the MLANET Website. Eighty-four participants entered their data on the beta test survey site during the 4-month entry period in early 2001. After review of the data, 11 records were eliminated for various reasons (e.g., incompleteness). The remaining 73 represented a return of roughly 5% on the request for beta data.
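The usable return rate follows directly from the counts above; a minimal calculation (using the approximate 1,500-librarian solicitation pool stated earlier):

```python
# Beta test response: 84 entries, 11 eliminated on review, ~1,500 solicited
solicited = 1500
usable = 84 - 11          # 73 usable records
rate = usable / solicited # fraction of those solicited who returned usable data

print(usable, f"{rate:.1%}")  # 73 4.9%
```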
The task force noted the difficulty of getting members to participate in the beta test. Showing the results and publicizing the project through the BCEs was judged a priority if the full survey was to succeed. Assuming 1,500 hospital libraries, then the approximate number of HLS members, it was felt that at least 275 participants (18% of the HLS membership) would be needed to give the survey a sufficient level of confidence. With the successful completion of the beta test survey, the Benchmarking Task Force's term ended, and it recommended to the MLA Board that a full survey be implemented [31].
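The 275-participant threshold is broadly consistent with a standard finite-population sample-size calculation, although the task force's exact confidence level and margin of error are not documented here. The sketch below uses common default assumptions (95% confidence, maximum-variance p = 0.5) and is illustrative only:

```python
import math

def required_sample(population, z=1.96, p=0.5, margin=0.05):
    """Finite-population sample size for estimating a proportion.

    population -- size of the population surveyed (here ~1,500 HLS members)
    z          -- z-score for the confidence level (1.96 for 95%)
    p          -- assumed proportion (0.5 maximizes required size)
    margin     -- acceptable margin of error
    """
    num = population * z**2 * p * (1 - p)
    den = (population - 1) * margin**2 + z**2 * p * (1 - p)
    return math.ceil(num / den)

print(required_sample(1500))  # 306 at a 5% margin of error
```

At a 5% margin this formula gives about 306 respondents; the task force's 275 figure corresponds to a slightly wider margin, so the two are in the same range.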