This paper describes a distributed collaborative effort between industry and academia to systematize data management in an academic biomedical laboratory. The heterogeneous and voluminous nature of research data created in biomedical laboratories makes information management difficult and research unproductive. One such collaborative effort was evaluated over a period of four years using data collection methods including ethnographic observations, semi-structured interviews, web-based surveys, progress reports, conference call summaries, and face-to-face group discussions. Data were analyzed using qualitative methods to (1) characterize specific problems faced by biomedical researchers with traditional information management practices, (2) identify intervention areas for introducing a new research information management system called Labmatrix, and (3) evaluate and delineate important general collaboration (intervention) characteristics that can optimize the outcomes of an implementation process in biomedical laboratories. Results emphasize the importance of end-user perseverance, human-centric interoperability evaluation, and demonstration of return on the investment of effort and time by laboratory members and industry personnel for the success of the implementation process. In addition, there is an intrinsic learning component associated with implementing an information management system. Technology transfer in a complex environment such as the biomedical laboratory can be eased by using information systems that support human and cognitive interoperability. Such informatics features can also contribute to successful collaboration and, ideally, to scientific productivity.
Biomedical research, when coupled with high-speed processing technologies, results in highly detailed datasets. Bioscience laboratories are inherently data intensive, as is evident from the immense publicly accessible databases generated by the Human Genome Project. The information to be processed in a biomedical laboratory ranges from DNA sequences, mutation analyses, and expression arrays to cell and biochemical assays and animal and reagent inventories, to name a few. The challenge for genomic medicine is to analyze and integrate these diverse and voluminous data sources to elucidate normal and abnormal physiology [2-4]. In addition to scientific research, academic biomedical laboratories (labs) are expected to use their data and consequent results to support education, knowledge dissemination through publications, and future research questions. However, the traditional informatics approaches used by these labs have helped them little in performing their daily tasks and have, in fact, jeopardized their productivity [5,6]. Current lab data management methods include handwritten lab notebooks, paper files, small homegrown databases, and spreadsheet files. Problems with heterogeneous data, nonintegrated information retrieval and querying, nonadherence to standards, lack of documented laboratory workflows, and usability issues have been reported as major hindrances to efficient information management and research productivity in bioscience labs. The need for informatics solutions to the problems with current information management methods in academic as well as industrial biomedical labs is well documented [7-9]. Informatics systems that (a) enable collaborative authoring of lab records, (b) integrate and parse multiple biomedical data types, (c) provide efficient database architectures, and (d) support user-centered data visualization have been developed and evaluated [10-15].
With the increase in functional complexity in biomedical research, a single system can no longer support the entire lifecycle of a biomedical research project. Hence, it is inevitable that lab researchers will use multiple interoperating systems for their research. Modern information management systems should provide a common layer of interoperability while offering a spectrum of options to support individual researcher needs. Comprehensive yet nuanced customization of information management systems may therefore be a solution to the informatics issues in bioscience labs.
In this paper, we describe the attempts of our collaboration to identify and reduce barriers to scientific productivity and satisfaction in a specific translational research-oriented biomedical lab through direct involvement in evaluating and improving data management in research laboratories. In contrast to the focus of our paper, published literature on similar collaborative efforts has provided insight into the technical architecture of bioscience systems with minor focus on sociotechnical aspects, and into summative evaluation of informatics-driven solutions rather than the implementation process itself. The majority of reported studies on bioscience information management provide an account of system design and demonstration [18-21]. Our study is data-driven and qualitative in nature. The specific objective of this paper is to present the efforts made by one industry-academia collaboration to improve the data management and integration capabilities of an academic biomedical lab through an iterative design process. The lessons learned from this collaborative implementation effort can inform the scientific, academic, and industrial communities of (a) implementation strategies that can help optimize technology transfer in academic biomedical labs, (b) possible pitfalls to be aware of during intervention with a new data management system, and (c) important characteristics that are prerequisites for every biomedical research collaboration when implementing new information technology in academic labs. The next sections are organized as follows. Section 2 provides a brief overview of our study context, outlining the industry-academia collaboration with a brief background on the nature of this intervention project as well as on the technology that was used to improve management of biomedical research data.
Section 3 describes methodological details of the study, and in the remaining sections of the paper, we summarize the results and limitations of the study while providing relevant discussions and conclusions.
The Bioscience Research Integration Software Platform (BRISP) research group is a collaboration between Johns Hopkins School of Medicine, Fraunhofer Center for Experimental Software Engineering, UTHealth-Houston School of Biomedical Informatics, and Biofortis Inc., whose mandate has been to jointly develop a software platform that can help biomedical labs improve their research productivity and satisfaction. Detailed contributions of the four collaborating sites can be seen in Figure 1. Selection of the test lab was based on several criteria, including responsiveness, the motivation of the lab's PI, and the richness of the lab environment in terms of its ability to represent the manifold challenges of information technology use. A biomedical research information management system called “Labmatrix,” developed by Biofortis, was used during the project to resolve data management issues in the test lab. Recent studies show that Labmatrix can improve the data management capabilities of basic research labs by promoting consistent lab practices, enhancing data integrity and standardization, allowing collaborative data access, and reducing the time spent on specimen and data retrieval through query-based retrieval mechanisms [22,23].
The major objective of the BRISP collaboration is to use Labmatrix (hereafter ‘LM') as a tool to improve and systematize the productivity of test labs. One important study area reported in this paper is our collaborative effort to improve the data management of high content screening (HCS) related activities in the test lab. HCS technology revolutionized biological research by enabling nuanced analysis of cell parameters (e.g., cell movement, shape, texture). However, the data generated by HCS usually amount to multiple terabytes and present several informatics issues, ranging from information storage and retrieval to image mining for pattern identification. Solving these computational challenges requires considerable financial resources and a thoroughly constructed IT infrastructure, which are often unavailable to small and medium-sized bioscience laboratories. Data management and integration associated with HCS technology in smaller laboratory settings are an even greater challenge, as access to software programmers and systems integrators is often limited. As part of this study, our BRISP group attempted to understand the adoption and implementation of Labmatrix to address the problems faced by the test lab in managing HCS data. Examples of such problems include the lack of an integrated database for heterogeneous HCS datatypes (numerical data, image data, plate layouts), investigator-driven data analysis due to multiple nonintegrated analytics, and problems with the interpretation and visualization of results. Complete details of the BRISP HCS project are beyond the scope of this paper and can be found elsewhere. To resolve some of these issues, the BRISP group used several methods to understand the problems, identify possible intervention areas, and monitor implementation progress. The next section presents a brief overview of the methods used in this study over the course of four years.
Several data collection and analysis methods were employed throughout the course of this collaboration. Important data collection methods included ethnographic observations, semi-structured questionnaires, web-based surveys, face-to-face interviews, conference call summaries, and progress reports that were shared among the project team via Google Docs. The data collection activities were conducted in three phases, as described below.
Preliminary ethnographic observations were conducted to understand the workflow of the bioscience labs and to gain insight into interaction strategies among lab members. The goal was to guide and improve the efficiency of data collection in the next phase. A trained researcher unobtrusively observed activities at different times in the test labs and took observational notes. The important concepts identified during the ethnographic phase were used to design web-based questionnaires. Two questionnaires (Q1 and Q2) were administered to lab principal investigators (PIs) to understand the information management practices followed in the test labs. Q1 was administered to all six candidate lab PIs during the test lab selection process, while Q2 was given only to the PI of the selected test lab. Both questionnaires included open-ended and closed-ended questions. The questionnaires served as a means to gain knowledge about the overall state of the labs in terms of (1) the magnitude and nature of data handled and (2) data management techniques, and to create an account of current data handling and communication practices in the test lab. The participants responded to all the questions, and their responses were used to frame the themes for the semi-structured interviews. Unlike the questionnaire framework, where detailed questions were formulated ahead of time, semi-structured interviews began with more general, unstructured questions [27,28]. Semi-structured interviews provided an opportunity to learn more about the lab goals and practices. These interviews allowed us to collect detailed descriptions to understand the reasons behind the problems faced by current-day biomedical researchers. A number of new questions were generated during these interviews, allowing both the interviewer and interviewee to probe further on particular issues.
The four interview areas of interest were lab data storage, lab data management, queries on stored data, and collaboration. Nine test lab members in different professional roles such as lab manager, computer support specialist, and bench molecular biology investigators were interviewed. These interviews contained rich descriptive accounts of specific team members' roles and activities. All interview data were audio recorded and transcribed for analysis.
Weekly conference calls and semi-annual face-to-face meetings were conducted to monitor the collaboration's progress toward its intended objective. These weekly calls were recorded and transcribed to understand the implementation process of Labmatrix in the test lab. The conference call summaries were used to record, document, and track the progress of the study. We conducted thematic analysis on these summaries to understand the impact of the BRISP collaboration on project outcomes. Thematic analysis is the most common form of analysis in qualitative research. It focuses on the examination of patterns or themes within data, where categories emerge from the data. Qualitative coding was used to understand user experiences of the Labmatrix implementation. A total of 261 discussion threads over 52 weeks of conference call summaries were analyzed in this phase.
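As an illustration of the bookkeeping this kind of thematic coding involves, the sketch below tallies coded discussion threads with Python's `collections.Counter`. The theme labels and per-theme counts here are hypothetical, not the actual BRISP codebook; only the corpus size (261 threads) and the roughly 64% share held by programming and academia-industry communication themes reported later in the paper are drawn from the study.

```python
from collections import Counter

# Illustrative sketch only: theme labels and per-theme counts are hypothetical.
# Each discussion thread is tagged with one emergent theme during coding.
coded_threads = (
    ["programming_issue"] * 90
    + ["industry_academia_communication"] * 77
    + ["visualization"] * 40
    + ["data_migration"] * 30
    + ["other"] * 24
)  # 261 threads total, matching the corpus size reported above

theme_counts = Counter(coded_threads)
total = sum(theme_counts.values())
for theme, n in theme_counts.most_common():
    print(f"{theme}: {n} ({n / total:.0%})")
```

In a real analysis the thread-to-theme assignments would come from manual qualitative coding; the counting and proportion reporting shown here are the only mechanical steps.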
Web-based surveys and field (laboratory) observations were the data collection activities performed in this phase. The survey questions were designed based on the conference call summaries obtained in the previous phase. Ethnographic observations were used to understand the ways in which lab personnel used Labmatrix to support their day-to-day data management activities in the test lab. Given the diversity of our data collection methods, we employed multifaceted techniques grounded in socio-cognitive research to analyze and identify themes as the collaboration advanced with the implementation of LM. As new themes emerged, we used the Connection, Communication, Consolidation, Collaboration Interoperability Framework (C4IF) to evaluate the interoperability standards supported by information management systems with and without LM. The C4IF framework was originally proposed as a classification typology for business information systems. This framework allowed us to analyze the interoperability standards of the existing information management practices in a typical bioscience laboratory at the granular levels lying underneath the system. The data collected and analyzed using these methods yielded interesting results, as presented below.
Given the use of diverse analytic methods, we were able to analyze our data along various dimensions related to the technical aspects of bioscience labs and research information management systems, socio-technical factors, and the characteristics of the BRISP collaboration during the implementation process.
In this phase, we characterized the research information management profile of the test lab to identify the problems with traditional information systems (such as paper files, Excel files, and homegrown databases) based on the practices in the test lab. Based on data from observations, questionnaires, and interviews conducted prior to the implementation of LM, we created the information management profile of the test lab. Figure 2 provides a bird's-eye view of important research activities usually conducted in the test lab. As seen in the figure, problem areas in data management prior to LM (marked with red flags) were also identified. Identifying the challenging areas in the test lab's data management allowed us to narrow down the areas to systematize and improve using LM in subsequent years. Lack of context in communication, failure to adequately support collaboration, and low interoperability standards were identified as major problems with the traditional research information management applications used by test labs. Limitations to cross-study comparisons and data integration were further challenges faced by biomedical researchers when dealing with spreadsheets, homegrown databases, and other current information management methods. In addition, we found that poor data organization can potentially lead to substantial data loss as well as degraded security and privacy levels. Myneni et al. provide a detailed analysis of these findings based on the analysis of observational, interview, and questionnaire data.
Having identified the problematic areas in the test lab, the BRISP group initially compiled a list of key performance indicators to help them map their progress as they moved ahead with the implementation of LM to address some of these problems. However, these indicators were hardly used as improvement markers as the implementation process progressed, because the group realized that this list would not encapsulate the progress in the test lab. Given the complex nature of the research environment in biomedical labs, new performance indicators emerged as the BRISP group progressed with the LM implementation in the test lab. System interoperability and visualization quality emerged as major indicators of process improvement. For example, Figure 3 depicts how the emphasis on visualization increased as the BRISP collaboration progressed with the implementation of LM in the test lab, based on data from conference call summaries.
In addition to visualization quality, the increasing use of multiple interoperating systems to tackle data management, integration, and interpretation in biomedical labs made system interoperability a vital component for the successful deployment of LM in this environment.
The initial face-to-face interviews and questionnaires provided the group with possible intervention and development areas in the test lab. However, it was not easy to pinpoint and focus on one research area. It was only through challenging trial and error that the BRISP group found a high-impact, end-user-beneficial project: HCS data management. Prior to addressing HCS data management, significant time and effort were applied to other data management issues that were later abandoned. For example, a substantial effort to integrate automated family pedigree-based recording and analysis of hereditary disease data was canceled when the research laboratory realized that the cost of disrupting its current methods would not be sufficiently compensated by a commensurate benefit in research output. Similarly, an effort to automate the inventory of antibodies and other research samples was not effectively utilized by laboratory personnel because of the perception that the new system required too much time and did not integrate into the laboratory workflow in ways that benefited both individual laboratory members and the laboratory as a whole. Analysis of the teleconference call content revealed that lab members became frustrated with the new system during projects in which they did not perceive a direct benefit from LM. Given below are sample quotations drawn from conference call summaries illustrating the occasionally low morale of the BRISP collaboration during this course.
“..while he remains enthusiastic about the project, enthusiasm in his lab for doing the BRISP project is low, because typically postdocs, grad students, and other staff have pressing needs and don't see the long term view of the lab..”
“…..we must be talking not about the theoretical but the actual….and these next few weeks are critical, where the amount of time put in per person needs to go up in proportion to whatever is needed to get to this goal, or frankly, the project will be a failure in terms of helping the lab achieve greater scientific productivity and satisfaction.”
Figure 4 provides an overview of representative tasks performed by the collaboration as part of the study. Of the 261 threads, around 64% of the discussion themes were related to (1) programming issues and (2) communication between academia and industry to resolve these issues. The data indicated that this industry-academia collaboration followed a formative, iterative implementation and development approach. Regular working group meetings held between system creators and lab members helped the group resolve most issues with the new system. Based on the content of the teleconference calls, it was clear that industry personnel offered guidance at various points during the implementation. In turn, regular feedback on the system's performance also helped the industry personnel augment the system to better respond to the needs of the test lab.
In addition to assessing the BRISP industry-academia collaboration, we used a questionnaire tool to identify both positive and negative aspects of LM and to gather general feedback on system performance. It is important to note that this tool was used during the initial phases of the HCS data management project. Results based on this questionnaire are discussed in Section 5.3.
We assessed system interoperability standards with and without LM to understand the impact of introducing LM on interoperability standards in the test lab environment. Key findings from the interoperability evaluation are presented below. Analysis of the information management practices in the test lab (with and without LM) using the C4IF framework gave us insight into the interoperability standards of these two approaches. According to the C4IF framework, interoperability can be addressed at different levels: connection, communication, consolidation, and collaboration. Advancement in each of these areas can influence, but cannot determine, advancement in the others. A biomedical information management system can have a high degree of interoperability at the communication and consolidation levels while being low at the other two. For example, using advanced technologies such as a wireless broadband network to exchange data instead of manual techniques (e.g., compact disks) can be deemed an advancement in the connection area, but this does not automatically assure semantically rich terminology at the communication/consolidation level, and vice versa. In the context of the bioscience lab research lifecycle, interoperability problems at the (a) connection level indicate issues with data transfer; at the (b) communication/consolidation levels, problems with data conversion and integration and ambiguous representation of context; and at the (c) collaboration level, problems with data transfer outside the test lab environment and with context-sensitive agility of data conversion for easy communication with collaborators. An analysis of the degree of interoperability supported by the data management practices with and without LM is included in Table 1. Comparative ratings were given on a three-point Low-Medium-High scale based on our analysis of data from observations and interviews.
Data sharing options were limited in the test lab; however, lab members were using some means (such as email and common server/drive storage) to transfer their research data to their collaborators within the lab as well as at distributed research sites. As reflected in Table 1, the connection level of this system had a medium (M) level of interoperability. But data ontology and format were not considerably standardized, giving a low (L) interoperability rating in the consolidation and communication domains. Similarly, collaboration was minimally supported by the system, although significant collaboration was achieved “manually,” usually through formal meetings, electronic mail, and shared documents rather than through integrated information management systems. For an information management system to succeed in collaborative interoperability, common ideas on workflow patterns and functions need to be established, which was not realized in this case. Hence, it was given a low (L) rating for interoperability at the collaboration level. Myneni et al. provide a detailed analysis of the interoperability levels supported by the traditional information systems in the test lab before the LM implementation.
After the introduction of LM, we again conducted observations and interviews in the test lab. Since LM is a web-based system, access control was used to create accounts for lab members and other collaborators such that data could be accessed and exchanged appropriately. Hence, this layer was given a high rating. Because lab members had to perform manual data pre-conditioning steps to upload information from other software to LM, and this process remained somewhat tedious, consolidation-communication was given a medium ranking. LM was used with other interoperating applications to provide data visualization at an improved quality level in the test lab. This showed that LM integrated well with other applications for adequate and comprehensive data analysis. However, there was still scope for improvement in terms of programming agility and the ease of establishing a collaboration layer between systems. Therefore, collaboration was ranked medium. The data showed that the interoperability of the research environment provided by LM is higher than that of other traditional research information management systems.
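The before/after comparison reported in Table 1 can be captured in a small data structure. The sketch below encodes the Low-Medium-High ratings described above; the dictionary layout and helper function are our own illustration, not part of the C4IF framework.

```python
# Sketch of the C4IF comparison reported in Table 1. The rating values follow
# the three-point Low-Medium-High scale described in the text; the dictionary
# layout is ours, not part of the C4IF framework itself.
SCALE = {"L": 1, "M": 2, "H": 3}

ratings = {
    "without_LM": {"connection": "M", "communication": "L",
                   "consolidation": "L", "collaboration": "L"},
    "with_LM":    {"connection": "H", "communication": "M",
                   "consolidation": "M", "collaboration": "M"},
}

def improved_levels(before, after):
    """Return the C4IF levels whose rating increased after LM introduction."""
    return [level for level in before
            if SCALE[after[level]] > SCALE[before[level]]]

print(improved_levels(ratings["without_LM"], ratings["with_LM"]))
# in this case every level improved by one step on the scale
```

Encoding the ratings this way makes the claim in the text mechanical to check: each of the four C4IF levels moved up exactly one step after the LM introduction.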
Although LM provided better support for system interoperability, the BRISP group often encountered problems with data format, data upload, and data loss during migration from the old system to LM. Past work on this project revealed that test lab members were sometimes frustrated during the implementation process of LM. To understand the reasons behind the lab members' frustration, we analyzed the implementation process to understand the user experience and the industry-academia collaboration in this study.
Four LM users responded to the web-based questionnaire, and three of them responded to all the questions. Two of the four lab members had two years of experience with LM, while the others had less exposure to the new system. As part of the questionnaire, users were asked to rate LM in different categories (self-explanatory, easy to learn, market standards, fun to work with, similarities with other contemporary systems, and aesthetic look) on a four-point scale from Poor to Excellent (see Figure 5). In addition to rating LM with respect to the above-mentioned qualities, users were also asked to give a brief account of the advantages and limitations associated with LM. The users described the new system as a one-stop resource for their research data management and analysis with strong querying capabilities. However, results indicated that poor maneuverability, difficult navigation, and a non-intuitive user interface often made working with LM unpleasant.
In addition, we conducted direct observations in the test lab to understand the nature of the limitations of LM. We observed the lab members as they carried out their regular tasks using LM. During these observations, we also focused on how LM was being integrated into the lab's workflow. These data provided us with valuable insight into two new concepts: “Human interoperability” and “Cognitive interoperability.”
With the introduction of LM into the HCS workflow, lab members were able to accomplish tasks they were never capable of earlier. Examples of such capabilities include (1) viewing all their datasets together, (2) integrating and analyzing different types of data, and (3) visualizing the data in several different ways. Above all, the query module of LM presents a new paradigm that allows people with limited database programming skills to ask complex questions of complex datasets. A detailed explanation of this particular HCS project is beyond the scope of this paper and can be found elsewhere. The new workflow added data pre-conditioning steps to the old process, and lab members had to learn to use the new system in order to conduct their daily tasks with LM. Learning such new skills and acquiring knowledge of the redefined workflow may have contributed to user frustration during the implementation process. In addition, lab members had to work with multiple systems (such as Excel files, the laboratory server, and Labmatrix) simultaneously to populate custom data forms in LM. Analyses of interview data showed that LM users were very concerned about the highly complex nature of the new process.
The frustration caused by the learning curve associated with a system may be alleviated by incorporating design principles that support human interoperability, which promotes the inclusion of certain characteristics during system design that allow human beings to use other similar tools with minimal training, exhibiting some generalizable skills. Human interoperability may be highly intertwined with another concept called cognitive interoperability, defined as the human being's way of thinking when using a system. A perfect alignment between a human being's expectations of a system and the system's response would likely reduce lab members' resistance to new technology.
One lab member perceived LM as a “Query Results Analyzer”: a tool to analyze all results of a particular query. She therefore wanted all the results of a “search” query to be highlighted and displayed as the main content of the screen for easy analysis. However, the system was designed such that the query area was emphasized rather than the query results; the designer viewed the system as a “Query Machine.” This indicated a clear discrepancy between the thought patterns of the laboratory member and the system designers. This lack of “Cognitive Interoperability” might be a reason behind user frustration with LM.
In summary, scientific research was greatly affected by the traditional data management practices prevailing in the test lab. Traditional information management systems resulted in a sub-optimal collaborative environment, provided inadequate support for comprehensive data analysis, and offered lower interoperability standards. The BRISP group intervened with LM in order to systematize data management in the test lab. As the group progressed with the LM implementation, they tracked the project's progress by establishing performance indicators. The data revealed that LM provided the test lab with a more interoperable environment and better data integration, visualization, and querying capabilities. It is important to note that additional workflow steps, the failure of lab personnel to foresee the added value of LM, and unexpected programming problems affected the implementation process. In a nutshell, the BRISP group faced challenges during this implementation but managed to create a better research environment in the test lab by providing standardized data management and novel analysis mechanisms.
In this paper, we present the efforts made by a multi-site, distributed industry-academia collaboration to systematize and improve research information management in an academic biomedical test laboratory. Our evaluation of the BRISP collaboration revealed findings that bring together multiple informatics challenges associated with bioscience labs, ranging from socio-technical factors, research collaboration, and personnel management to the technicalities of information retrieval and management.
The traditional research data management systems were inadequate in supporting the dynamic environment of biomedical laboratories that deal with voluminous, high-content, heterogeneous data, and new ways of data management became necessary. The major problems with the traditional systems were related to data maintenance, data exchange within and outside the laboratory, interdisciplinary collaboration, and publications. Personnel-driven information management activities posed several problems for record-keeping and the overall research productivity of the laboratory. We came to understand the ways in which suboptimal data management can affect different operations of a bioscience lab. Although the BRISP project is oriented toward understanding the specific challenges faced by a single test lab, the results are consistent with findings from other studies [16,17]. Industry-driven standards, ontology models to manage metadata, and platforms that enable integration of heterogeneous data sources can resolve certain issues identified in our study. To our knowledge, this is one of the first reported studies to investigate information management and its relation to lab outcomes in terms of the productivity and satisfaction of scientific investigators, as suggested in recent publications [35,36].
The BRISP collaboration faced problems arising from unanticipated programming obstacles, time delays, user dissatisfaction, and difficulty in demonstrating the success of the technology. Such problems lengthened implementation time and lowered the morale of the BRISP group. Important cornerstones of the BRISP collaboration that helped the team advance the implementation of Labmatrix in the test laboratory without many hurdles included motivated leadership, engaged personnel from academia and industry, and regular monitoring of project status. While reported studies on similar collaborative efforts in bioscience environments have focused on providing detailed technical architectures of system development, our study focused on aspects of human learning and the incremental nature of the BRISP collaboration. The BRISP group identified certain performance indicators to track their overall progress throughout the implementation process. Performance indicators in a biomedical laboratory environment are complex and emergent in nature; therefore, key indicators were revised and adjusted as the implementation process evolved. In our case, system interoperability and visualization emerged as important markers of improvement. However, user perceptions of Labmatrix based on these two indicators were not measured explicitly, since they do not encapsulate the overall relevance of Labmatrix to the test lab projects in which the members are directly involved, and the direct benefits of these indicators are not immediate.
Labmatrix helped the test lab integrate, analyze, and visualize its research data. Data integration and strong querying procedures, combined with comprehensive data management, were key capabilities that the test laboratory achieved with the Labmatrix implementation. New features were added to Labmatrix in an iterative fashion to customize it to the needs of the test lab members. Examples of such new capabilities included visualization, optimized data normalization routines, and a powerful query module that presents a new paradigm allowing laboratory members with minimal database programming skills to build complex queries on complex biomedical datasets. Labmatrix provided a more interoperable research environment in the test lab. However, there is further scope for Labmatrix to improve its support for interoperability at the communication level of the C4I framework by developing a common ontology and establishing a well-accepted data schema. In addition, although Labmatrix has systematized data management in the test lab, user perceptions of its direct benefits indicate that there is still room for improvement. Demonstrating the benefits of informatics solutions such as Labmatrix is always challenging; this difficulty resulted in multiple lab members abandoning initial projects, which may in turn have affected their perceived level of benefit. A minimal level of perceivable direct benefit for the laboratory and/or for an individual's work was a major factor in overall project success. Lessons learned from this collaboration are similar to recent reports on implementation studies of electronic health records [37,38]. Bioscience information management systems resemble other health information technologies in that it is essential to establish both long-term and short-term measurement variables that are socio-technical and project-driven in nature.
The barriers we identified with information management procedures (prior to Labmatrix implementation) in the test lab are (a) cognitive (e.g. lack of streamlined procedures, labor-intensive processes, lack of visualization), (b) social (e.g. data sharing issues, communication ambiguities), (c) organizational (e.g. no direct personal benefit, financial constraints, personnel management), and (d) systemic (e.g. use of homegrown databases, lack of industry-driven/discipline-specific standards). Implementation of Labmatrix addressed multiple barriers through data visualization, a web-based technology-driven lab workflow, a centralized shareable data warehouse, and customized lab-specific standards across personnel. Use of formative usability evaluation methods can enable integration of the user and the technology usage context into the system, improving overall technology acceptance. Methods such as heuristic evaluation and cognitive walkthrough [39-41] have proven beneficial in improving system usability. While the BRISP collaboration predominantly focused on understanding test lab needs, prioritizing feasible and highly desirable intervention areas, and developing test lab-specific algorithms and query capabilities, similar collaborative efforts should focus on integrating usability techniques into the development process to mitigate user frustration with counterintuitive system features and misalignment of computerized protocols with lab workflow.
We present below some important lessons learned during the Labmatrix implementation that can help any academic biomedical laboratory migrating to a new information system plan ahead for its implementation process. These pointers can also provide guidance to academia-industry collaborations in biomedical research in terms of expectations and possibilities.
The study was supported in part by a grant (1R41CA105217-01A1-STTR) from the National Institutes of Health/National Cancer Institute (NIH/NCI). Support was also provided by generous gifts from the Guerrieri Family Foundation and from Mr. and Mrs. Clarice Smith. We thank all study participants for their valuable time and contributions.
Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
Disclosures:
Under a licensing agreement between BioFortis, Inc. and the Johns Hopkins University, Dr. Bova and the University own BioFortis, Inc. stock, which is subject to certain restrictions under University policy. The terms of this arrangement are being managed by the Johns Hopkins University in accordance with its conflict of interest policies.
Steve H. Chen, Yakov Shafranovich, and Jian Wang are employees of BioFortis Inc., maker of the Labmatrix software.