Dr. Mark Cockett is Vice President of Infectious Diseases and Applied Genomics at Bristol-Myers Squibb (BMS). In this interview, we ask Dr. Cockett about considerations that major pharmaceutical companies such as BMS make when screening for and developing antiviral small molecule therapeutics. We discuss the rationale behind an unbiased screening approach that led to recent published work identifying a hepatitis C-specific NS5A inhibitor. We conclude by asking about the emerging role of academia in antiviral drug discovery and future directions of pathogen drug discovery in general.
Dr. Mark Cockett is Vice President of Infectious Diseases and Applied Genomics at Bristol-Myers Squibb (BMS). The Department of Infectious Diseases focuses on HIV, hepatitis B, and hepatitis C. The Department of Applied Genomics provides genomic technologies and technology support to therapeutic areas across the company. Areas of support include new target discovery, mechanism of action determination, and drug efficacy and liability evaluation. The department also helps discover biomarkers to support clinical development.
Before joining BMS, Dr. Cockett was a research scientist at Celltech PLC, where he obtained his PhD with Strangeways Research Laboratory while studying matrix metalloproteinases and tumor cell invasion. He later was named Director of Molecular and Cellular Biology in the Neuroscience Department at Wyeth Pharmaceuticals.
As a company, we decided in 2002 to focus on 10 disease areas; two of those were viral: HIV and hepatitis. The decision to focus on those areas was driven partly by our existing expertise and the skills and resources we could bring to bear, partly by the company’s heritage, and partly by the future unmet medical needs and market opportunities in those areas. We focus our efforts on disease areas with high unmet medical need, but we are a company, so we also have to develop drugs in areas that return our investment. Focusing on a small number of areas is, I believe, one of the hallmarks of what makes Bristol-Myers Squibb successful today. You will see that we have delivered across the board in many of those areas: we have delivered a number of drugs from our own research to the marketplace and have a pretty healthy late-phase pipeline. We have launched 10 drugs in the last seven years, and the majority came from our own research. Of our late-phase pipeline, drugs that we hope to register or file in the next year or two, most are from our own research as well. BMS has been fortunate in choosing those areas, and they are areas in which we are delivering across the board.
Pragmatically, you screen what you can, and you follow up on what looks interesting from the biology perspective. You have compounds whose activities cross a threshold you set yourself: you want a certain potency and a chemical type that chemists feel is amenable to medicinal chemistry. Chemical space is so large that no one’s chemical libraries cover all of it. I have heard that about 10^40 different chemicals would be needed to fill chemical space. In contrast, companies like ours have compound libraries of only about 10^6 chemicals. Even if you put all pharmaceutical companies’ compounds in one library and screened them, you would still only be sampling a fraction of chemical space. You cannot sample all chemical space, and everyone’s library is biased by the history of how it was created. You cannot agonize over gaps in it. That being said, what you can do is spend a lot of time as a company carefully making sure the library you do have is a good one. Given the limitations or the history of how the library was created, you can weed out compounds that are unstable, toxic, cytotoxic, or bad starting points for chemistry, and you can proactively acquire compounds with good properties to add to your compound deck. You can use computer-aided, drug design-type technologies to acquire compounds in new chemical space that your current library does not fill. So despite the deficiencies of all of our libraries, you can run a proactive program within your company to enhance the quality of your compound library. That is really important, because if your compound library is not a high quality one, you could end up chasing down compounds that will never be developable. Or if what is in the tube has degraded from what is on the label, you must then work out what it degraded to and where the activity comes from.
Sometimes we do that, and it is very valuable, but sometimes you spend a long time chasing down a compound and find it is nothing of interest or something you cannot work with. So again, having a high quality compound library is good, but you will never have a perfect library, and it will always be biased by the history of the company and how you have built and acquired that library.
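To put the sampling gap in numbers, a quick order-of-magnitude calculation using only the figures quoted above (~10^40 conceivable drug-like chemicals versus a corporate library of ~10^6 compounds):

```python
# Order-of-magnitude comparison from the figures quoted in the
# interview: estimated chemical space (~10^40 compounds) versus a
# large corporate screening library (~10^6 compounds).
chemical_space = 10**40
library_size = 10**6

fraction_sampled = library_size / chemical_space
print(f"fraction of chemical space sampled: {fraction_sampled:.0e}")
```

Even pooling every company's library would change this fraction by only a few orders of magnitude, which is why curation of the library you do have matters more than coverage you can never achieve.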
We deliberately try to do that. Back in the early 2000s, we went through a phase of culling from our library things that we knew were going to be blind alleys, things that we knew chemists would never follow up on. When I worked at Wyeth, part of their compound library heritage came from American Cyanamid, a chemical manufacturer. All sorts of weed killers were in the library. Some were potentially developable if you modified them enough to be a pharmaceutical product, but most were structures that were good for weed killers but not for making drugs. Weeding those things out (excuse the pun) was probably worthwhile. If you are never going to follow up on them as a human therapeutic agent, or if there is something you could never imagine following up on because of the properties of the molecules, you may as well weed them out from the start and not have them in your deck, so you save all that work and energy.
You can take two main approaches. One is to look at the virus, understand the genes in the virus and how it interacts with the host, and work out ways to rationally inhibit the virus. That is, focusing on viral genes that encode enzymes such as proteases and polymerases. That has been the mainstay of a lot of virology drug development over the years. Nucleoside drugs attack the polymerase active site in HIV, HBV, and HCV. Non-nucleoside drugs that attack the polymerase via allosteric binding sites also are used a lot in HIV. BMS has Sustiva on the market, which is part of the most popular HIV drug combination, ATRIPLA; it combines a non-nucleoside inhibitor with two nucleoside inhibitors as a triple drug therapy, all targeting the polymerase. BMS also has Reyataz, an inhibitor that targets HIV protease, on the market, as well as a hepatitis drug, entecavir, a nucleoside inhibitor of the hepatitis B polymerase. For all those targets, you can look at the genetic code of the virus, recognize that there are enzymes, do some biochemistry around those enzymes, rationally target them with drugs to try to inhibit them, and hope they work. The strength of this approach is that you can identify enzymes, measure them in tubes, and create assays to screen for compounds very quickly. You can use crystallography and X-ray crystal structures of how the drugs bind to understand the interaction of the drug with the protein. You can do a lot to rationally develop drugs and leverage a lot of technologies and approaches. The downside, however, is that you may not discover anything novel, and everyone is working on the proteases and polymerases, making it a very competitive area. It is difficult to be the first company out with a drug in that class, and if you cannot be first, you must try to come up with a drug that can be best in class.
The advantage of going back to unbiased screening is two-fold. First, you can discover new things, and second, anything you get is active in a cellular assay right away. So just to expand on that a little bit, the NS5A inhibitor was discovered using a replicon screen. The replicon is a replicating piece of viral RNA that can be stably maintained in a cell line and continues to replicate. We incorporate reporters such as luciferase, so that the more replication going on, the more luciferase signal you get. You can screen through the compound deck in a cellular assay to look for inhibition of that signal, and that can be miniaturized and done in really high throughput. This particular assay was run through a high throughput screen in 2001, and we got hits from that screen. The hits were interesting chemicals that could inhibit the replicon without any cytotoxic effect on the cell, so you have a cytotoxicity index and an antiviral index. You can basically say we have this set of hits from this screen that look like they have good antiviral effects and are not killing cells; therefore, these might be good antiviral drugs. The problem is that you do not know what they do or how they are actually working. You then have what we call a “mechanism of action” challenge: a prototype for a drug, or a starting point for a drug program. But until you know more about how they work in the cell and which targets they are working through, rational drug development is challenging because you are just working with a cellular assay, and you do not really know why and how it works. If you just rely on those assays, you might get misled. What you can do is essentially genomics: you can select for resistance. You can incubate your drug with replicon cells, titrate the drug up to higher and higher amounts of inhibitor, and essentially kill off the replicon. You also can arrange for the replicon to carry a selectable marker such as neomycin resistance.
That way, you can insist that cells survive only if they contain a replicon with this selection marker, and you can titrate up the replication inhibitor to the point where the only thing that replicates is a replicon that has become resistant to it. Select resistant colonies that contain replicons resistant to your drug, and you can isolate the replicon viral RNA and sequence it, asking what changed from before the treatment to after. Usually when you do this, many things change, and you must home in on which change in the virus actually confers resistance. It can take time and many studies, but by doing so, you might find that your drug is targeting a viral protein in which a point mutation confers resistance. Once you narrow it down to “the inhibitor might inhibit this protein or that protein,” you can then do biochemical and other sorts of assays to firm up that finding and understand and characterize it further. Through what is probably about two years of research going from a screening hit to the target, you can characterize and come up with a chemical inhibitor of a new class of proteins. That is what happened with the NS5A inhibitor, and how you get to the starting blocks to begin a program.
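The resistance-mapping step described above, sequencing resistant replicons and asking what changed versus the parental sequence, can be sketched as a simple comparison. Everything here is invented for illustration: the sequences, positions, and colony names do not correspond to any real NS5A data.

```python
# Minimal sketch of resistance mapping: compare a parental replicon
# protein fragment to sequences recovered from drug-resistant colonies
# and list the amino-acid changes each colony acquired. All sequences
# are hypothetical.

def find_mutations(parent: str, resistant: str) -> list[str]:
    """Return changes in standard notation, e.g. 'L3V' (parental
    residue, 1-based position, resistant residue)."""
    return [
        f"{p}{i + 1}{r}"
        for i, (p, r) in enumerate(zip(parent, resistant))
        if p != r
    ]

parent = "MSLTYKQLAV"          # hypothetical parental fragment
colonies = {
    "colony_1": "MSVTYKQLAV",  # carries L3V
    "colony_2": "MSVTYKQLTV",  # carries L3V and A9T
}

# Changes shared by independently selected colonies are the strongest
# candidates for the mutation that actually confers resistance; the
# rest may be passenger changes that need follow-up studies.
per_colony = {name: set(find_mutations(parent, seq))
              for name, seq in colonies.items()}
shared = set.intersection(*per_colony.values())
print(sorted(shared))  # ['L3V']
```

In practice this triage is followed by the biochemical assays the interview mentions, since a shared mutation is a candidate, not proof, of the resistance mechanism.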
When you do that screen, you get a lot of hits on different parts of the virus, you generate starting points for potentially several programs, and you choose which one you pursue, basically through a variety of mechanisms. Very often you are looking at which drugs seem to be acting most reliably, which chemicals look like the best starting points for a chemist to conduct medicinal chemistry on, and which mechanisms seem to shut down the virus most effectively. The aggregate of information generated around the drug activities makes you decide, “I am going to start this program. It is around a particular protein in the virus, and we are going to try to develop the drug around that.” Instead of the three- to five-year medicinal chemistry effort to develop screening hits into a drug for a traditional target, this process takes a little longer. For instance, our NS5A inhibitor discovered in a high throughput screen in 2001 took until 2008 to reach human studies, and we only started reporting our clinical data in the last couple of years as we have started to get some antiviral effects in patients. The first bit sets you up for success, but it is a long haul to improve the chemistry in getting potency, viral genotype coverage, and drug-like properties into the molecules.
In virology, it is very important to consider the spectrum of viruses for a given disease. When you think of viral disease, you might think of HIV or hepatitis. There might be one name for the virus, but in actuality, there are many subviruses in that family. In hepatitis, for example, there are genotypes one through six, and within genotype one, there are subtypes, 1a and 1b and so on, and sometimes your early prototype drugs may only work on one subtype. In the NS5A program, our earliest prototype drug really only worked on genotype 1b. It took years of work to build in genotype 1a activity. Now, our drug in the clinic has activity across all six genotypes, and that was published in the Nature paper, but to get to that point, it takes years of research. You have to set yourself up with a platform of viral or replicon assays that represent the different genotypes and sub-genotypes and routinely assay those and develop them in parallel to understand the potency of the drug. That is only one aspect, however. You then need to build in properties of the drug, which make it stable and safe in a human: potency, stability, lack of inhibition of p450 enzymes in the liver, and many other parameters important to develop a drug. So that is why it takes so long.
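The genotype panel described above is often summarized as a potency table: the inhibitor's EC50 against a replicon for each genotype, expressed as a fold-shift versus a reference genotype. The values below are invented purely to show the shape of such a summary; they are not BMS data.

```python
# Sketch of a genotype coverage summary: EC50 (nM) of a hypothetical
# inhibitor against replicons representing different HCV genotypes,
# reported as fold-shift versus the genotype-1b reference. All values
# are invented for illustration.
ec50_nm = {
    "1b": 0.01,  # reference genotype in this hypothetical series
    "1a": 0.05,
    "2a": 0.08,
    "3a": 0.5,
}

reference = ec50_nm["1b"]
for genotype, ec50 in ec50_nm.items():
    fold_shift = ec50 / reference
    print(f"gt{genotype}: EC50 = {ec50} nM ({fold_shift:.0f}x vs 1b)")
```

A large fold-shift on one genotype, like the early NS5A prototypes that only worked on 1b, is exactly the kind of gap that the years of parallel medicinal chemistry are spent closing.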
Our first generation drugs are all driven through plain efficacy against the standard wild type viruses that exist for the given strains. But we all know from the HIV experience that if you use a single drug in a patient, you will generate resistant virus. Hepatitis C is even worse. In hepatitis C, a typical infected human produces 10^12 virions a day, the polymerase makes a mistake in 1 of every 10^4 base additions, and the genome is about 10,000 bases. So on average, every virus has a mistake, and if you make 10^12 per day, there are 10^8 viruses with every given mistake, which is pretty remarkable. What this tells you is that you can mathematically estimate that a patient harbors virus with every single mutation, and many pairwise combinations of mutations, that you could possibly have. So the patient already has mutant virus in his system, and we have to think about how we develop drugs to overcome that. The current standard of care in HIV treatment is to use three drugs: you use three because even though the virus might escape and become resistant to one drug, it is statistically more difficult for the virus to do the same with two and even tougher with three. Practically, you need three drugs to completely suppress the virus and avoid resistance. We think it will be the same with hepatitis C, where we are working on drugs with multiple mechanisms, delivering those into clinical development, and combining them to see if together they have a higher barrier to resistance. So the first round in tackling viral resistance is combination therapies.
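The mutation arithmetic above can be worked through explicitly, keeping everything at the order-of-magnitude level the interview uses:

```python
# The mutation arithmetic quoted in the interview, order of magnitude
# only: ~10^12 virions produced per day, ~1 polymerase error per 10^4
# bases copied, and a genome of ~10^4 bases.
virions_per_day = 10**12
bases_per_error = 10**4   # roughly one mistake per 10^4 base additions
genome_length = 10**4     # bases, i.e. about 10,000

# Average mutations per newly made genome: about one, so essentially
# every virion differs from wild type somewhere.
mutations_per_genome = genome_length / bases_per_error

# Copies of any *given* single point mutant made per day: the daily
# output spread across the ~10^4 positions (ignoring the three possible
# substitutions per site, as the interview does).
copies_of_each_mutant = virions_per_day // genome_length

print(mutations_per_genome)    # 1.0
print(copies_of_each_mutant)   # 100000000, i.e. 10^8
```

Since every single point mutant pre-exists at roughly 10^8 copies per day, monotherapy selects resistance almost immediately, which is the statistical rationale for requiring simultaneous escape from two or three independent mechanisms.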
Once we understand what resistance looks like in a patient, because you do not really understand until you get your first drugs in patients and you generate resistant virus as part of those treatments, we recycle that knowledge back from clinical development into our research to enable us to develop improved molecules. We take that information back into the laboratory, and we generate resistant viruses, and then we generate second generation drugs that not only have potency against wild type virus but also overcome the resistance. In doing so, we generally develop a drug with what we call a higher resistance barrier. Once you understand what resistance really looks like, you can rationally develop second generation drugs that are improved and maybe require more mutations to escape the drug selection, essentially having better drugs that you can potentially combine to create a very high resistance barrier. That is the approach we take: combination therapies in the clinic to overcome resistance by using multiple drugs with multiple mechanisms and then following that up with higher resistance barrier drugs.
Well, more and more, with various National Institutes of Health (NIH) and university funded projects, academic screening centers are being set up to go from potential targets to potential drugs. One of the more important results of this is that academia in particular can spend time on drug areas that are perhaps not commercially viable for companies at this point, or areas so new and novel that the risks are too high for pharmaceutical companies. So there are opportunities for academic screening centers to complement what is done in industry and work on screening to identify chemical starting points for potential drug development in additional areas. Academic screening centers could be very good at identifying lead molecules to probe biology and understand biological mechanisms, but I am not sure they have built the infrastructure required to take those leads, turn them into real drugs, and go through that five-year process. So there is opportunity, as academic screening centers discover new chemical matter targeting new proteins in new ways, to potentially collaborate with industry and get into that next stage. Going from a target to a compound that is a chemical tool is still a long way from a drug, and academic screening centers are only at that first point right now. But I certainly think there is a lot of activity, a lot more chemical space being probed through academic screening centers.
Many screening centers, however, use similar chemical libraries, so the chemical space being sampled could still be limited. It probably would be advantageous to have all the academic screening centers have complementary or different libraries, and you could screen targets in a collaborative way, and that way probe more chemical space. Or you potentially could have screening centers that specialize in different target classes, for instance ion channels or G-protein coupled receptors.
Academic screening centers do tee up potential tools to understand biology better, and those tools turn around and help validate targets more effectively. So tool generation from academic screening centers is going to benefit biological knowledge and potentially spark collaborations with industry with interesting chemical starting points.
The unbiased screening approach will complement the rational approach of attacking enzymes that you can work out through genomic approaches, structural biology, and biochemistry. Using those approaches, you also uncover potential drugs that attack cellular proteins and help the cell to fight the virus. Those are very difficult to track down because, for example, it is tough to get resistance from a mammalian cell, and even if you do, it is difficult to sequence a mammalian genome efficiently and cheaply enough to use genomics to uncover cellular targets. But I do believe that with the advances in genomic technologies, with newer sequencing technologies becoming less expensive, at some point in the future, maybe another five years or so, sequencing will be at the level at which you can sequence a whole mammalian cell, find what changed in a resistant cell, and maybe home in on cellular targets, too. So I think there will be several approaches used in parallel. We will always go after an enzyme if we can rationally deduce it, because we know how to do that effectively. We can always track down an antiviral target using these genetic approaches. The cellular targets are more challenging, but I do believe we will be able to discover them in the future.
The host immune system is another important factor. For instance, the current treatment for a human infected with hepatitis is immune stimulation with interferon, and you can cure patients by stimulating the host immune system to fight the virus. I do believe that as we understand that more, we may be able to deliberately stimulate the immune system in specific ways to fight different viral diseases. That is something that we are looking at and working on now.