Our economic framework is defined by three modular components: (1) a utility model for different outcomes of dose-response experiments, (2) a cost model of obtaining the dose-response data for a particular molecule, and (3) a predictive model for forecasting how particular compounds will behave in the dose-response experiment. After each dose-response plate is tested, the predictive model is trained on the results of all confirmatory experiments performed thus far. This trained model is then used to compute the probability that each unconfirmed hit will be determined to be a true hit if tested. Using the utility model and cost model in conjunction with these probabilities, the contents of the next dose-response plate are selected to maximize the expected surplus (expected utility minus expected cost) of the new plate, and the plate is only run if its expected surplus is greater than zero.
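The selection-and-stopping rule above can be condensed into a few lines of code. The sketch below is purely illustrative and is not the study's implementation: it uses the fixed-return utility model (a constant dollar value per confirmed active, see Section 3.2) for simplicity, and all function and variable names, as well as the toy probability model in the example, are hypothetical.

```python
PLATE_SIZE = 30            # dose-response curves per 384-well plate (Section 3.3)
PLATE_COST = 400.0         # dollars per plate (Section 3.3)
UTILITY_PER_ACTIVE = 50.0  # fixed-return utility model (Section 3.2)

def next_plate(unconfirmed_hits, predict_prob):
    """Fill the next plate with the unconfirmed hits most likely to confirm,
    and run it only if its expected surplus (utility minus cost) is positive."""
    ranked = sorted(unconfirmed_hits, key=predict_prob, reverse=True)
    plate = ranked[:PLATE_SIZE]
    # Expected yield: sum of predicted confirmation probabilities on the plate.
    expected_yield = sum(predict_prob(h) for h in plate)
    expected_surplus = UTILITY_PER_ACTIVE * expected_yield - PLATE_COST
    return (plate, expected_surplus) if expected_surplus > 0 else (None, expected_surplus)

# Toy example: hits with higher screen activity are more likely to confirm.
hits = [{"id": i, "activity": a} for i, a in enumerate(range(100, 20, -2))]
prob = lambda h: min(1.0, h["activity"] / 100.0)
plate, surplus = next_plate(hits, prob)
```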
This framework maps directly to an economic analysis. A standard result, obtainable with elementary calculus, is that total surplus is maximized at the point where the supply and demand curves intersect. This point of intersection also corresponds to the last dose-response plate with a marginal surplus greater than zero. The utility model quantifies how the screener values different outcomes of the confirmatory study. This information is used to define a demand curve: the marginal utility as a function of the number of successfully confirmed actives so far. The cost model exposes the cost of obtaining a particular dose-response curve and is expressed in the same units as the utility. The predictive model forecasts the outcome of future confirmatory experiments and, together with the cost model, is used to define a supply curve: the marginal cost of discovery (MCD) as a function of the number of successfully confirmed actives.
Explicitly computing the expected surplus in order to determine an optimal stop point is critically dependent on modeling a screener's utility function with mathematical formulas, a daunting task that requires in-depth surveys or substantial historical data in addition to assumptions about the screener's response to risk (Kroes & Sheldon 1988, Houthakker 1950, Schoemaker 1982). However, in practice, the expected surplus need not be explicitly computed to solve this problem. We can access the relevant summary of screeners' preferences by repeatedly asking whether they are willing to pay the required amount to successfully confirm one more active. Iteratively exposing the MCD to the screener is like placing a price tag on the next unit of discovery, and interrogating the screener is equivalent to computing whether or not the marginal surplus is greater than zero. For illustration purposes, however, we do use mathematical utility functions to show that simple predictive models, which map screening activity to the dose-response outcome, can be constructed, and that these models admit near optimal economic decisions. To our knowledge, this is the first time formal economic analysis has informed a high-throughput screening protocol.
3.1 High-throughput screen
The technical details of the assay and subsequent analysis can be found in PubChem (PubChem AIDs 1832 and 1833). For approximately 300,000 small molecules screened in duplicate, activity is defined as the mean of final, corrected percent inhibition. After molecules with autofluorescence and those without additional material readily available were filtered out, 1,323 compounds with at least one activity greater than 25% were labeled ‘hits’ and tested for dose-response behavior in the first batch. Of these tested compounds, 839 yielded data consistent with inhibitory activity. Each hit was considered a “successfully confirmed active” if the effective concentration at half maximal activity (EC50) was less than or equal to 20μM. Using this criterion, 411 molecules were determined to be active.
3.2 Utility model
In this study, one unit of discovery is defined as a single successfully confirmed active. The utility model is defined as a function of the number of confirmed actives obtained so far, n. We consider two utility models. First, in the fixed-return model (F) we fix the marginal utility, U′(n), for each confirmed hit at $50. Second, we consider a more realistic diminishing-returns model (D) where the marginal utility of the nth confirmed hit is $(max[502 − 2n, 0]). The utility function for each model can be derived by considering the boundary condition U(0) = $0, yielding U(n) = $(50n) for F and U(n) = $(501 min[n, 250] − min[n, 250]²) for D.
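The two utility models transcribe directly into code; the function names below are illustrative. As a sanity check on the closed form for D, the cumulative utility U(n) should equal the sum of the marginal utilities of the first n confirmed actives.

```python
def marginal_utility_F(n):
    """Fixed-return model: every confirmed active is worth $50."""
    return 50.0

def marginal_utility_D(n):
    """Diminishing-returns model: the nth confirmed active is worth
    max(502 - 2n, 0) dollars, reaching zero at the 251st active."""
    return max(502.0 - 2.0 * n, 0.0)

def utility_D(n):
    """Closed form with U(0) = 0: U(n) = 501*min(n, 250) - min(n, 250)**2."""
    m = min(n, 250)
    return 501.0 * m - m ** 2

# The closed form agrees with the running sum of marginal utilities.
assert utility_D(10) == sum(marginal_utility_D(k) for k in range(1, 11))
```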
3.3 Cost model
We use a fixed-cost model where each plate costs the same amount of money, C(p(n)) = $(400 p(n)), where p(n) is the total number of plates required to confirm n actives. Allowing for 12 positive and 12 negative controls on each plate and measuring six concentrations in duplicate, 30 dose-response curves can be tested on a single 384-well plate. We assume the first such plate tests the 30 hits with highest activity in the original screen. Subsequent plates are filled to maximize expected surplus as described in the Introduction.
Clearly, in many scenarios it is not easy or meaningful to place a dollar value on the cost of doing a confirmatory experiment. In these cases, we recommend expressing the cost function in more general units: the number of confirmatory tests (NCT) required. In our example, this is equivalent to using the cost model defined as C(p(n)) = (30 p(n)) NCT. This amounts to a negligible change in the downstream analysis, changing only the y-axis of the resulting supply and demand curves without affecting their shapes. An example of how this strategy functions in practice is included in the results.
3.4 Predictive model
We consider two predictive models: a logistic regressor (LR) (Dreiseitl & Ohno-Machado 2002) and a neural network with a single hidden node (NN1) (Baldi & Brunak 2001), each structured to use the screen activity as the single independent variable and the result of the associated confirmatory experiment as the single dependent variable. Both models were trained using gradient descent on the cross-entropy error. This protocol yields models whose outputs are interpretable as probabilities.
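A minimal sketch of the LR model makes the training protocol concrete: a single-input logistic regression fit by gradient descent on the cross-entropy error. The hyperparameters (learning rate, number of epochs) and the scaled toy data below are illustrative assumptions, not values from the study.

```python
import math

def train_lr(activities, confirmed, lr=1.0, epochs=2000):
    """Fit P(confirm | activity) = sigmoid(w * activity + b) by batch
    gradient descent on the cross-entropy error."""
    w, b = 0.0, 0.0
    n = len(activities)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(activities, confirmed):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            # The gradient of cross-entropy w.r.t. the logit is (p - y).
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    """Model output, interpretable as a confirmation probability."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Toy data: higher screen activity (scaled to [0, 1]) => more likely to confirm.
acts = [0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
ys   = [0,   0,   0,   1,   1,   1]
w, b = train_lr(acts, ys)
```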
3.5 Ordering hits for confirmation
Any combination of either of the two predictive models (LR and NN1) with either of the two utility models (F and D) and a fixed-cost model will always test hits in order of their activity in the screen. The proof is as follows: the cost of screening a dose-response plate is fixed and therefore irrelevant to ordering hits. Maximizing the expected utility is exactly equivalent to maximizing the expected number of true hits in the plate. Both NN1 and LR are monotonic transforms of their input, so maximizing the expected number of confirmed hits in a plate is equivalent to selecting the hits with the highest activity. These schemes, therefore, order compounds by activity.
3.6 Experimental yield and marginal cost of discovery
For most experiments, the expected MCD is computed by dividing the cost of the next plate by the number of confirmed actives expected to be found within it, forecast with either LR or NN1. The expected number of actives, the yield, is computed by summing the outputs of either LR or NN1 over all molecules tested in the plate. For the data, the MCD is computed by dividing the cost of each plate by the actual number of confirmed actives within it. When communicating the MCD to our screening team, we present the MCD as the number of confirmatory tests required to successfully confirm one more active, rather than in dollars. This cost is computed by dividing the number of confirmatory experiments proposed (462 in this case) by the forecast experimental yield.
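The three MCD quantities described above reduce to one-line ratios. The sketch below assumes the $400 fixed plate cost from Section 3.3; function names are illustrative.

```python
PLATE_COST = 400.0  # dollars per plate (Section 3.3)

def expected_mcd(probs):
    """Forecast MCD for the next plate: plate cost divided by the expected
    yield, i.e. the sum of model outputs over the molecules on the plate."""
    expected_yield = sum(probs)
    return PLATE_COST / expected_yield

def observed_mcd(n_confirmed_on_plate):
    """Realized MCD of a completed plate: cost per active actually confirmed."""
    return PLATE_COST / n_confirmed_on_plate

def mcd_in_nct(n_proposed_tests, expected_yield):
    """MCD expressed in confirmatory tests (NCT) rather than dollars:
    tests proposed per active expected to confirm."""
    return n_proposed_tests / expected_yield
```

For example, a plate whose 30 model outputs each equal 0.5 has an expected yield of 15 actives and an expected MCD of about $26.67, while 462 proposed tests with a forecast yield of 231 actives corresponds to 2 NCT per active.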