Notwithstanding these barriers, a number of factors are driving the need to integrate the two core activities of the lab: managing samples, and conducting and documenting experiments.
An important background factor is the ever-increasing amount of data to process and the mounting pressure to process it faster, which puts a premium on more efficient management of information generally. It is bad enough if a lab lacks efficient mechanisms for managing sample data or experimental data at a time when the volume and complexity of both are continually expanding. But given that samples are relevant to most of the experiments being carried out, it is even worse to manage these two kinds of data separately.
A second factor is the increasing proportion of researchers' computer time that is spent online. This is true of time spent working: a recent study of seven life sciences labs noted that most now use wikis to share general information such as meeting notes and non-confidential material such as protocols, and that the scientists in the labs were heavy users not only of Google for search but also of Google Docs and of online databases [7]. The growing popularity of Mendeley for reference management illustrates the adoption of specialist online tools by researchers. Researchers are also spending more time online outside of work, where scientists are just as likely as anyone else to use Google for search, Facebook for communicating with friends and family, and so on.
The changing landscape of tools that researchers use at work and outside work, and the changing patterns of how people use these tools, give rise to a third factor: changing expectations about how information should be discovered and managed, and about the kinds of tools that are appropriate, and indeed necessary, to do this efficiently. Expectations are not only changing, they are being raised. For someone used to Google and Facebook it seems odd not to be able to, say, link information about a particular sample used in an experiment to the write-up of that experiment. This trend is accelerating as a new generation of postdocs, graduate students and undergraduates, who are comfortable with online tools and take their ongoing rapid development for granted, take active roles in labs.
A fourth factor, mirroring the increase in data and its growing complexity, is the decrease in the cost of tools to manage that data. Ten years ago, LIMS and ELNs cost tens to hundreds of thousands of dollars and were beyond the reach of virtually all academic labs. Five years ago, low-cost ELNs and sample/inventory management software began appearing on the market. Today free, admittedly simple, ELNs and sample/inventory management software are becoming available, and fully featured ELNs and sample/inventory management systems are available for as little as $1,000. LIMS with instrument automation capability are the exception to this trend and remain prohibitively expensive for virtually all academic labs.
A fifth factor arises from the role samples play, and the way they are used, in experimental research. Take an antibody, for example. It might begin life as a sample the lab brings in or creates, and its original characteristics are recorded. In many cases it will be aliquoted. The aliquots are then processed or analyzed, and the changes they undergo are recorded and analyzed in turn. The aliquots and their histories may be compared with those of other samples, and this comparison examined in the broader context of what else was going on in the experiment. Does the management of data relating to such a sample fall under sample management or experimental data management? Both, of course. The distinction between the two is entirely artificial, and only arose because of the lack of tools that allowed samples and aliquots and their history to be viewed in the context of the experiment(s) in which they were used.