Eval Health Prof. Author manuscript; available in PMC 2014 July 7.
PMCID: PMC4084908
NIHMSID: NIHMS579430

Estimating Return on Investment in Translational Research: Methods and Protocols

Abstract

Assessing the value of clinical and translational research funding in accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health and its Clinical and Translational Science Awards (CTSAs). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, the model produces ROI estimates at multiple levels: investigator, program, and institution. A methodology, or evaluation protocol, is proposed for assessing the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how the data are to be filtered, analyzed, and evaluated. This paper provides an approach that CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities.

Keywords: ROI, Return on Research Investment, Evaluation

Introduction

With the support of the National Institutes of Health (NIH), the Clinical and Translational Science Award (CTSA) program was launched in 2006 and subsequently expanded to academic medical institutions across the country. By 2012 there were approximately 60 CTSA-supported institutions, known as Clinical and Translational Science Awardees (CTSAs); the goal of the CTSA-Program is to provide a nationwide collaborative of integrated infrastructures to support, educate, and accelerate clinical and translational health research. The CTSA-Program now falls under the umbrella of a relatively new NIH unit established in 2011, the National Center for Advancing Translational Sciences (NCATS; CTSA, 2013).

In an era of increasingly scarce resources, decisions about which resources a CTSA should maintain, and which should not be renewed, become crucial for the future of all CTSAs (CTSA, 2013). Effective evaluation has been the subject of much discussion within NIH, the CTSA-Program, and individual CTSAs, in recognition that it takes on average 17 years for only 14% of scientific innovations and discoveries to reach clinical practice (Balas and Boren, 2000), and of the consequent importance of engaging with communities and practice-based networks to accelerate translation (Westfall et al., 2007). The aims of this paper are fourfold: 1) to examine the concept of return on investment (ROI) as it could be applied to CTSA-Program resources as used at individual institutions; 2) to propose a model for applying ROI formulae using data currently collected as part of CTSA-Program required financial and operating reporting; 3) to propose a methodology for decision making with respect to ROI in one component of a CTSA, namely a clinical research unit; and 4) to suggest how the methodology, an evaluation protocol, can be applied to other units within, and across, the various CTSAs supported by the CTSA-Program (Trochim et al., 2012).

Limited ability and experience in assessing the value of CTSA research funding in accelerating the translation of scientific knowledge is a fundamental issue faced by both individual CTSAs and the NIH CTSA-Program (Rubio et al., 2012). To address this issue, we propose investigating the ROI of one key program that is common to all CTSAs, namely the clinical research unit (McCammon et al., in press). By carefully examining the economic and social inputs and outputs of these units, it may be possible to produce multilevel ROI computations at the investigator, program, institutional, and national levels. The developed methodology, or evaluation protocol, will focus on achieving specific objectives, methods, descriptions of the data to be collected, how data are to be filtered and analyzed, and how the results can be used in evaluating various units. This model, while created using one component of an individual CTSA, is developed in such a way that it is generalizable to other CTSA-Program aspects at an individual institution, such as pilot projects or investigator training programs.

Background and Significance

As Botchkarev & Andru (2011) note, “ROI was conceived as a financial term and defined as a concept based on a rigorous and quantifiable analysis of financial returns and costs. At present, ROI has been widely recognized and accepted in business and financial management in the private and public sectors.” Authors recognize differences in the underlying economic concepts depending on the field of the research, namely finance or economics. As a method, ROI allows a decision maker to weigh the timing and magnitude of expected gains against the timing and magnitude of investment costs (NICHSR, 2013). The simplest ROI divides the incremental economic gain from an action by its investment costs. Controlling for similar circumstances, the higher the ROI, the greater the financial return on the given investment and, presumably, the better the use of the resources. Direct costs, such as salaries and wages, can be attributed directly to the investment, or project; the same is true of direct returns, such as increased sales revenue. Proximal measures of costs and gains, or returns, are also included insofar as they can be identified with the specific investment and tracked for a sufficient length of time. The analysis grows in complexity with the recognition of several important dimensions of the economic value implied by the ratio, the most important being the timing of the respective cost outlays and revenue inflows (The Cochrane Collaboration, 2013).
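As a minimal sketch of the simple ratio just described (the dollar figures are hypothetical illustrations, not CTSA data):

```python
# A minimal sketch of the simplest ROI described above; the dollar
# figures are hypothetical illustrations, not CTSA data.

def simple_roi(gain: float, cost: float) -> float:
    """Incremental economic gain from an action divided by its investment cost."""
    if cost <= 0:
        raise ValueError("investment cost must be positive")
    return gain / cost

# Hypothetical project: $250,000 in direct costs (salaries and wages)
# against $400,000 in directly attributable returns.
print(f"ROI = {simple_roi(400_000, 250_000):.2f}")  # 1.60
```

A ratio above 1.0 indicates that the returns attributed to the project exceeded its costs; the complications discussed below arise when attribution and timing are less clear-cut.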

While ROI is fairly straightforward when costs and revenues can be attributed directly to a project, difficulties arise when it attempts to include indirect costs or returns: those associated with the decision but not necessarily caused by it. For example, there are general expenses related to operating a CTSA at an institution, but it is difficult to attribute many of those expenses directly to one aspect of the CTSA, be it a project or a unit. On the return side of the equation, it is also difficult to quantify the value supplied by the CTSA in generating a journal article or patent when multiple sources of funds are available to an investigator, including grants and other outside funding. Additionally, the timing of the investment by a CTSA and of the returns it provides frequently differ. For example, initial clinical funding might be invested in year 1, but the return, as measured by additional grant awards, may not occur until years later. These early-cost/later-gains scenarios require discounting future net cash flows to recognize the risk related to the uncertainty inherent in estimating those future values (Phillips and Phillips, 2008; Zhang et al., 2008). The method in this paper does not restrict ROI to a simple ratio, but rather uses one that accounts for the proximal and distal costs and benefits of investments in clinical research units (CRUs). It should also be noted that in CRUs that offer services for industry-sponsored trials, the calculations can often be simplified by imposing a fixed timeline on the returns.

In addition to the economic ROI, which focuses on financial value, some formulae include social costs and value, an approach commonly referred to as social return on investment (SROI) (Harvard Business School, 2000). SROI is a framework for measuring and accounting for a broader concept of “value,” one that incorporates social and environmental, as well as economic, costs and benefits (Gardner, 2007; ISO, 2013; NEF, 2013; Staiger et al., 2005). The academic and policy-making literature has provided evidence for the importance of calculating SROI, including justification, protocols, and mechanisms for organizing and conducting a rigorous SROI in settings similar to those found in CTSAs (DeVol and Bedroussian, 2006; Pienta et al., 2010). SROI has been assessed in different fields: banking, corporate research and development, energy policy, and education policy (Blaug, 1997; Jones and Williams, 1998; Kronenberg et al., 2010; Nelson et al., 2009; Raymer, 2009; Richardson, 2006; Tulchin et al., 2009). As with ROI, SROI analysis can be conducted either retrospectively, based on actual realized costs and outcomes, or prospectively, predicting how much social value will be created, for a given cost, if the activities meet their intended outcomes (Scottish Government, 2011; Lingane and Olsen, 2004).

The variation in the meaning and use of ROI, how it is calculated, and at what level, is described well in several publications. Some authors accept for their evaluation purposes an individual measure of ROI, as a metric and ratio. Other authors consider ROI “as a method of persuasive communication to senior management, a process of getting everybody's attention to the financial aspects of the decisions and stimulating a rigid financial analysis. In this case, actually calculated ROI numbers are of less importance compared to the processes of gathering/analyzing cost and benefit data” (Botchkarev and Andru, 2011).

Approach

The approach uses quantitative and qualitative methods to determine how to extend operational protocols that assist individual CTSAs in understanding and using data representing returns on investments of research funds in clinical research units. Using a discrete program common to all CTSAs, the CRU, this approach encompasses the unique and shared features of administrative, clinical, and research tracking systems (Meltzer and Smith, 2011).

Basic principles drive the approach and methods: involve stakeholders; understand what changes over time; value the things that matter; include only what is relevant; be conservative; be transparent; and verify results.

The proposed evaluation protocol shows that the concept of ROI models can be adapted to better understand and manage the activities of an individual CTSA with respect to investment decisions.

Measurement

Measures of the value of research awards often include “productivity.” Productivity is commonly defined as a ratio between the volume of output and the volume of inputs (Nordhaus, 2001). It measures how efficiently inputs such as labor and capital are used in an economy to produce a given level of output (Linna et al., 2010; Velentgas et al., 2013). Research productivity is often represented by the publication of research discoveries and how often the work is cited by others (NIMH, 2013; Meltzer and Smith, 2011). Rooted in the idea of a data life cycle, the scientific community has come to recognize “that research data may have an enduring value on scientific progress as scientists use and reuse research data to draw new analysis and conclusions” (Jacobs and Humphrey, 2004; Levan and Stephan, 1991; Pienta et al., 2010). Some of these data-sharing opportunities are encouraged by journals with the intent to replicate results (Anderson et al., 2005; Glenditsch et al., 2003). NIH issued its final statement in 2003 on the requirements for sharing data funded by the National Institutes of Health (Carnegie Foundation for the Advancement of Teaching, 2010; NIH, 2003). Such data sharing, in ROI terms, can be considered a secondary return.

Data sources and collection techniques include literature review; in-person and telephone interviews; extraction of data from administrative and research data systems; surveys of a sample of investigators using and not using the CRU; and online databases of independent scientist and career development (K) awards and subsequent publications and employment. Informed by interviews with the data managers at the clinical research unit (CRU) and by CTSA-specific data scans, the evaluation protocol guides the evaluator through standardized processes for collecting and aggregating data, validating them for errors, and transmitting the data sets to the analyst.

CRUs are likely to report economic data in a form consistent with standard financial records for fixed assets, such as property, plant, and equipment, and for variable costs, such as those associated with personnel staffing the units. However, it is very likely that institutions will provide different patterns of service (e.g., different eligibility rules or different terms of service) and account for these units in significantly different ways. Additional challenges include valuing the different components of CRUs, such as inpatient, outpatient, and mobile services. A key focus of the interviews is on developing consistent and comprehensive definitions of terms and outcomes.

Analysis

The value, or return, will be a function of a number of characteristics: the awards through the CTSA and from other sources; the institutions at the time of the award, and before and after; the investigator; the number of collaborations in the award; and the length and extent of “exposure” of the research programs to the CRU; all dependent on the scope and boundary discussions with stakeholders and on the synthesized model constructed.

There are several sets of potential models for each outcome; for instance, one model may include categorized data-sharing status measures, whereas others may include principal investigator, institution, and other award characteristics. Depending on the type of outcome being measured and the context of the ROI calculation, regression models or Data Envelopment Analysis (DEA) can be used. For instance, if the outcome were publication counts, Poisson regression models might be of use, whereas in the case of longitudinal publication outcomes, negative binomial regression models may be in order. A hierarchical set of models may help in understanding the extent to which differences in the outcome of interest are attributable to characteristics of the unit, the stage of career, PI collaborations, or the size and timing of the award. With such data, it will also be possible to compare the relative effectiveness of investments across project times and across institutions.
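As an illustrative sketch (not the authors' implementation), the following fits a single-covariate Poisson regression of publication counts on years of CRU exposure, using Newton's method with step-halving; the data and variable names are invented for illustration, and in practice a statistical package would be used.

```python
import math

# Hypothetical data, invented for illustration: publication counts (y)
# against years of CRU exposure (x) for six investigators.
x = [0, 1, 2, 3, 4, 5]
y = [1, 2, 3, 4, 7, 9]

def log_lik(b0, b1):
    # Poisson log-likelihood up to an additive constant: sum(y*eta - exp(eta))
    return sum(yi * (b0 + b1 * xi) - math.exp(b0 + b1 * xi)
               for xi, yi in zip(x, y))

b0 = b1 = 0.0
for _ in range(50):
    lam = [math.exp(b0 + b1 * xi) for xi in x]                # fitted means
    g0 = sum(yi - li for yi, li in zip(y, lam))               # score wrt b0
    g1 = sum(xi * (yi - li) for xi, yi, li in zip(x, y, lam)) # score wrt b1
    # Observed information matrix [[i00, i01], [i01, i11]]
    i00 = sum(lam)
    i01 = sum(xi * li for xi, li in zip(x, lam))
    i11 = sum(xi * xi * li for xi, li in zip(x, lam))
    det = i00 * i11 - i01 * i01
    s0 = (i11 * g0 - i01 * g1) / det                          # Newton step
    s1 = (-i01 * g0 + i00 * g1) / det
    step = 1.0                  # halve the step until the likelihood improves
    while log_lik(b0 + step * s0, b1 + step * s1) < log_lik(b0, b1):
        step /= 2
    b0, b1 = b0 + step * s0, b1 + step * s1

print(f"intercept = {b0:.3f}, exposure coefficient = {b1:.3f}")
```

A positive exposure coefficient would correspond to publication counts rising with CRU exposure; with an intercept in the model, the fitted Poisson means sum to the observed counts at convergence, which provides a quick check.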

Typically, ROI estimation is approached very simply: total “returns” (e.g., monetized benefits) are divided by total “investments” (e.g., costs) to get the ratio of returns for each dollar invested. However, such simplistic analyses do not permit examination of distributions and variability, nor do they allow statistical tests of differences. Using data that have not yet been aggregated into gross categories enables the use of statistical methods rather than just the reporting of an aggregate ROI.
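To make the contrast concrete, a minimal sketch (with invented figures) comparing a single aggregate ROI with disaggregated, project-level ROI statistics:

```python
import statistics

# Invented (cost, gain) pairs for individual CRU-supported projects;
# not CTSA data, purely for illustration.
unit_a = [(100, 180), (250, 300), (80, 200), (120, 130)]

def aggregate_roi(projects):
    """Total gains divided by total costs: the usual single-number ROI."""
    return sum(g for _, g in projects) / sum(c for c, _ in projects)

def project_rois(projects):
    """Disaggregated per-project ROIs that support distributional summaries."""
    return [g / c for c, g in projects]

rois = project_rois(unit_a)
print(f"aggregate ROI: {aggregate_roi(unit_a):.2f}")
print(f"per-project mean: {statistics.mean(rois):.2f}, "
      f"stdev: {statistics.stdev(rois):.2f}, "
      f"range: {min(rois):.2f}-{max(rois):.2f}")
```

Here the aggregate ratio (810/550, about 1.47) masks per-project variation from roughly 1.08 to 2.50; keeping the project-level values is what makes distributional comparisons and statistical tests between units possible.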

Model development follows an iterative, spiral development path (Ambler, 2002). That is, the model begins as simply as possible, uncovering the basic issues involved in model development. Once these issues are resolved and tested using the data provided in the data collection phase discussed above, the model is enhanced and detailed at the next level of complexity and performance. Such a process allows both the development of a rapid, more local decision tool and the continuing development of a more complex and generalizable decision tool.

As stated before, while the basics of ROI are simple, other issues can make its use more challenging. This is particularly true when a return can be realized only years or decades after the investment (NCATS, 2012). Discounted ROI is well known for being highly biased toward rapid investment returns. This is a major issue for CTSAs: for example, investing “time” in new researchers today by allowing them to use a CRU should provide a “return” in terms of new discoveries in the future, but how exactly should each be quantified? Because one of the four transformative aims of the CTSA-Program is to provide a foundation of shared resources that could reduce the costs, delays, and difficulties experienced in clinical research, including trials, this timing is particularly crucial (http://www.ncats.nih.gov/research/cts/ctsa/about/about.html).

Additionally, non-financial characteristics of both investment and return can be difficult to express in commensurate terms. For example, time is the most inelastic and finite of all resources, yet it must be expended in teaching new investigators in the hope that they do better and more meaningful research in their subsequent careers. While it is possible to achieve a significant “return” with completely one-on-one responsiveness to the researcher's demands in the CRU, the time required is usually prohibitive relative to the return. Yet offering only group instruction or supervision (a lower-investment alternative) may not provide the necessary quality (i.e., return). One major emphasis of the proposed modeling approach is identifying comparable metrics within a function.

Methodology development: Evaluation Protocol

The proposed evaluation protocol addresses both standard ROI and SROI estimation methodologies but focuses on the economic ROI. Work with key stakeholders helps establish the scope and boundaries of the analysis for each program. This is not a trivial issue in ROI analysis. For example, there are a number of direct and indirect potential outcomes of given clinical trial projects: subsequent research publications; patent applications and patents received; subsequent grants received; and even the economic effects of spending the funds such as their stimulus to the local economy. There is no effective way to monetize all of these outcomes and the decision regarding which to include in ROI analysis is to some extent a matter of judgment. After meeting with stakeholders and determining their within-center approach to boundary conditions, data elements can be selected.

Results: Process and Structure for ROI Analysis

A multistep process for structuring the ROI analysis is summarized as follows:

  1. Create alternative conceptual frameworks to estimate the impacts & value
  2. Survey CTSA on available sources & formats of economic and social impact data; determine costs of collecting & analyzing data
  3. Collect selected financial, service utilization, & community encounter & impact data from collaborating sites
  4. Test usability of each framework and efficacy of resulting metrics
  5. Create evaluation protocol for use by CTSA in pilot and final testing

In planning the project, it is important to identify a conceptual model that is acceptable to the CTSA; this can be determined through interviews with staff and investigators within the CTSA and the CRU itself.

The process for collecting and analyzing the data to calculate the ROI is detailed in Figure 1. Here, evaluators define relevant data, examine the quality of existing data, and standardize methods for collecting and analyzing data; these steps result in selected mechanisms for further testing and adoption. The analysis uses accounting, financial, economic, and social return on investment principles to identify outcomes and value impact.

Figure 1
Estimating the ROI: process steps. ROI = return on investment.

Types of costs and gains, or benefits, are listed in Figures 3a and 3b. Limiting the initial work to direct and indirect costs, and not including incidental costs, is less complex and may allow the CTSA to move forward more quickly with these types of analyses. Figure 4 shows possible categories of costs and gains for a CRU.

The standard model of ROI estimates:

ROI = (timing & magnitude of expected GAINS) / (timing & magnitude of expected COSTS)

Considering the timing and magnitude of cash flows recognizes the impact of early versus later gains and costs. This recognition is captured in the discount rate, the percentage used to discount future net cash flows to recognize the risk, or uncertainty, of estimating net gains into the future. Discounting cash flows requires selecting a rate based on an estimate of how “risky” the investment is relative to other projects in which the funder invests. In standard businesses, the discount rate is the average of the interest rate on the firm's debt and the return shareholders expect on their investments in the company, weighted by the portions of the company financed by debt and by equity. In the CRU methodology, the discount rate can be selected using a sensitivity analysis, varying from medium-term interest rates in the commercial loan market down to the inflation rate plus 1 to 3 percent. This assumed level of riskiness of the federal investment in the CTSA is conservative, but realistic.
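The discounting and rate sensitivity analysis described above can be sketched as follows; the cash-flow amounts, years, and rates are hypothetical illustrations, not CTSA figures:

```python
# Sketch of discounting with a sensitivity analysis over discount rates.
# Cash-flow amounts, timing, and rates are hypothetical illustrations.

def npv(cash_flows, rate):
    """Present value of (year, amount) cash flows at the given discount rate."""
    return sum(amount / (1 + rate) ** year for year, amount in cash_flows)

investment = [(0, 500_000)]                          # CRU support in year 0
gains = [(3, 200_000), (4, 300_000), (5, 400_000)]   # later grant awards

# Vary the rate from roughly inflation plus 1-3% up to commercial loan rates.
for rate in (0.02, 0.04, 0.06, 0.08):
    roi = npv(gains, rate) / npv(investment, rate)
    print(f"rate {rate:.0%}: discounted ROI = {roi:.2f}")
```

Because the gains arrive in years 3 through 5 while the cost is paid up front, the discounted ROI falls as the assumed rate rises; reporting the ratio across the range of rates is exactly the early-cost/later-gains sensitivity the protocol calls for.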

Summary and Conclusion

This paper proposes using several approaches to study quantitatively the availability, accessibility, and quality of data used to define return on investment, and to seek qualitative process input into the financial and social models as they are developed and tested. The protocol includes identifying types of costs, impacts, and values (external, internal, financial, social); creating alternative conceptual frameworks to estimate the impacts and value of translational research on individual researchers, the research enterprise, consumers of research and clinical care, and the public; surveying CTSAs on available sources and formats of economic and social impact data; determining the costs of collecting and analyzing financial data; testing the usability of each framework and the efficacy of the resulting metrics; and creating protocols for use by CTSAs.

Creating and sustaining the next generation of clinical and translational research, researchers, and practitioners within a culture of innovation and excellence requires thoughtful and fair allocation of resources. While not the only criterion for investment, the outputs in productivity, creativity, efficiency, and better health status warrant measurement. As in business generally, CTSAs would benefit from the ability to use standardized methods and tools to measure return on investment. Recognizing this need, some CTSAs are embarking on efforts to identify the investment, benefits, and ROI for their CRUs. Through testing of the proposed model, the NIH can ensure that this method of accountability and resource allocation becomes one of several tested criteria to help make difficult but crucial decisions on the future of science and public health.

Figure 2
(a) ROI financial flows. (b) Categories of financial flows. ROI = return on investment.

Acknowledgments

This work was supported by awards to the University of Michigan CTSA (grant number 2UL1TR000433-06); Weill Cornell CTSC (grant number UL1 TR000457-06); and Oregon Clinical and Translational Research Institute (OCTRI), grant number (UL1TR000128) from the National Center for Advancing Translational Sciences (NCATS) at the National Institutes of Health (NIH). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

Contributor Information

Kyle L. Grazier, Richard Carl Jelinek Professor of Health Services Management and Policy, School of Public Health, Professor, School of Medicine, Director of Evaluation, Michigan Institute for Clinical and Health Research (CTSA), University of Michigan, 1420 Washington Heights, Ann Arbor, MI 48109-2029, Office: 734.936.1222, Fax: 734.764-4338.

William Trochim, Professor, Director of Evaluation, Weill Cornell Medical School Clinical and Translational Science Center (CTSC), Cornell University.

David M. Dilts, Professor, Director of Evaluation, Oregon Health and Science University Clinical and Translational Research Institute, Oregon Health and Science University.

Rosalind Kirk, Evaluation Specialist, Michigan Institute for Clinical and Health Research, University of Michigan.

References

  • Ambler SW. Agile Modeling. John Wiley and Sons; New York, NY: 2002.
  • Anderson RG, Greene WH, McCullough BD, Vinod HD. The Role of Data and Program Code Archives in the Future of Economic Research. St. Louis, MO: 2005. (Federal Bank of St. Louis Working Paper Series).
  • Balas EA, Boren SA. Yearbook of Medical Informatics. Schattauer; Stuttgart, Germany: 2000. Managing clinical knowledge for health care improvement; pp. 65–70.
  • Blaug M. The private and the social returns on investment in education: Some results for Great Britain. Journal of Human Resources. 1997;2:330–346.
  • Botchkarev A, Andru P. A Return on Investment as a Metric for Evaluating Information Systems: Taxonomy and Application. Interdisciplinary Journal of Information, Knowledge, and Management. 2011;6
  • Carnegie Foundation for the Advancement of Teaching. [Last accessed 13 May, 2013];Carnegie Classification information was based on the “Classifications Data File” 2010 (Last update: January 10, 2010). http://classifications.carnegiefoundation.org/resources/
  • The Cochrane Collaboration. [Last accessed May 19, 2013];Working together to provide the best evidence for health care. 2013 http://www.cochrane.org/glossary.
  • CTSA. [Last accessed 18 May, 2013];Clinical and Translational Science Award (CTSA) 2013 http://www.ctsacentral.org/
  • DeVol R, Bedroussian A. Mind to Market: A Global Analysis of University Biotechnology Transfer and Commercialization. Milken Institute; Santa Monica, CA: 2006.
  • Gardner K. Clinical and Translational Science Award, Economic Impact of Award and Spillover Effects. CGR, Inc.; Rochester, NY: 2007.
  • Glenditsch NP, Metelits C, Strand H. Posting your data: Will you be scooped or will you be famous? International Studies Perspectives. 2003;4:89–95.
  • Harvard Business School. The Nature of Returns: a social capital markets inquiry into elements of Investment and the Blended Value proposition. Harvard Business School; Boston, MA: 2000. (Social enterprise Series, No 17).
  • ISO. Inspiring Innovative Solutions. Washington DC: 2013. [Last accessed 18 May, 2013]. http://www.iso.org/iso/home.html.
  • Jacobs JA, Humphrey C. Preserving research data. Communications of the ACM. 2004;47:27–29.
  • Jones CI, Williams JC. Measuring the social return to R & D. Quarterly Journal of Economics. 1998;113:1119–1135.
  • Kronenberg T, Kuckshinrichs W, Hansen P. The social return on investment in the energy efficiency of buildings in Germany. Energy Policy. 2010;38:4317–4329.
  • Levan SG, Stephan PE. Research productivity over the life cycle: Evidence for academic scientists. American Economic Review. 1991;81:114–132.
  • Lingane A, Olsen S. Guidelines for social return on investment by California. Management Review. 2004;46:116.
  • Linna P, Pekkola S, Ukko J, Melkas H. Defining and measuring productivity in the public sector: managerial perceptions. International Journal of Public Sector Management. 2010;23:479–499.
  • McCammon MG, Conrad CM, Klug ZT, Myers CD, Watkins ML, Wiley JW, Bower CL. From an infrastructure to a service-based business model: 5 years of mobile clinical research at the University of Michigan. Journal for Clinical and Translational Science. 2013, in press.
  • Meltzer DO, Smith PC. Theoretical Issues Relevant to the Economic Evaluation of Health Technologies, Handbook of Health Economics. Elsevier Science; New York, NY: 2011. pp. 433–469.
  • NIMH. National Advisory Mental Health Council Workgroup on Research Training–Report. [Last accessed 13 May, 2013];Investing in the Future. 2013 http://tinyurl.com/qed3ze8.
  • NCATS. (2012). National Center for Advancing Translational Sciences (NCATS), Request for Information. [Last accessed on May 27, 2013];Enhancing the Clinical and Translational Science Awards Program. 2012 Jun 14; at http://www.ncats.nih.gov/files/report-ctsa-rfi.pdf.
  • NEF. [Last accessed on May 27, 2013];Local Multiplier Analysis. 2013 From http://www.neweconomics.org.
  • Nelson JD, Cooper JM, Wright S, Murphy S. An evaluation of the transport to employment (T2E) scheme in Highland Scotland using social return on investment. Journal of Transport Geography. 2009;17:457–467.
  • NICHSR. [Last accessed 18 May, 2013];National Information Center on Health Services Research and Health Care Technology (NICHSR) 2013 http://www.nlm.nih.gov/nichsr/edu/healthecon/glossary.html.
  • NIH. [Last accessed 13 May, 2013];Final Statement on Sharing Research Data. 2003 http://grants.nih.gov/grants/policy/nihgps_2003/nihgps_2003.pdf.
  • Nordhaus WD. Alternative methods for measuring productivity growth. The National Bureau of Economic Research; 2001. [Last accessed 18 May, 2013]. http://www.nber.org/papers/w8095.
  • Phillips PP, Phillips JJ. ROI Fundamentals: Why and When to Measure Return on Investment. John Wiley & Sons; San Francisco, CA: 2008.
  • Pienta AM, Alter GC, Lyle JA. The Enduring Value of Social Science Research: The Use and Reuse of Primary Research Data; The Organisation, Economics and Policy of Scientific Research Workshop; Torino, Italy. 2010. [Last accessed 13 May, 2013]. p. 8. http://deepblue.lib.umich.edu/handle/2027.42/78307.
  • Raymer A. Big returns for a little more investment: Mapping theory in emergent research. Action Research. 2009;7:49–68.
  • Richardson BJ. Responsible Investment. Banking & Finance Law Review. 2006;22:303.
  • Rubio D, Sufian M, Trochim W. Strategies for a national evaluation of the clinical and translational science award. Clinical and Translational Sciences. 2012;5:138–139.
  • Scottish Government. The SROI Network; Guide to Social Return on Investment. Cabinet Office of the Third Sector, The Scottish Government; 2011.
  • Staiger R, Richardson W, Barbara C. A discounting framework for regulatory impact analysis. Policy Sciences. 2005;18:33–54.
  • Trochim W, Urban JB, Hargraves M, Hebbard C, Buckley J, Archibald T, Johnson M, Burgermaster M. The Guide to the Systems Evaluation Protocol. Cornell Digital Print Services.; Ithaca, NY: 2012.
  • Tulchin A, Gertel-Rosenberg AS, Olsen S. Leveraging public health partnerships: Measuring the accrued social return on investment on an obesity prevention initiative. Obesity. 2009;17:226.
  • Velentgas P, Dreyer NA, Nourjah P, Smith SR, Torchia MM. AHRQ Publication No 12(13)-EHC099. Agency for Healthcare Research and Quality; Rockville, MD: 2013. Developing a Protocol for Observational Comparative Effectiveness Research: A User's Guide.
  • Westfall JM, Mold J, Fagnan L. Practice-based research - “Blue Highways” on the NIH roadmap. Journal of the American Medical Association. 2007;297:403.
  • Zhang L, Wu JG, Zhang XF. Understanding the Accrual Anomaly. Ross Business School; Ann Arbor, MI: 2008. (Working Paper Series 2008).