Lessons from PCSK9, and How We Know Where to Go in Drug Discovery

This article first appeared in the Timmerman Report.

What drug development lessons should we take from the PCSK9 story? That might depend on how and why we know what we know.

The recent news about Amgen’s anti-PCSK9 antibody evolocumab (Repatha) and its effects on cardiovascular outcomes in the FOURIER trial added another fascinating chapter to the story of how human genetics is becoming more entwined with drug development. It also brought me back to some old late-night, liberal arts dorm-room discussions about epistemology, the theory of how we reach rational belief.

How do we collect biomedical knowledge? How do we know what we know about biology, about the genes and proteins and networks and physiology and other phenotypes that we’ve built into models and hairballs and devilishly detailed flowcharts over the past few centuries?  And why do we have the current body of facts that we do?

Targeting PCSK9 represents the most prominent example of using human genetics to identify new drug targets. There are others in the works (sclerostin and APOC3 come to mind), and they herald an exciting new period of drug development in which the process will be expedited by the existence and study of humans carrying variants, functional and non-, in these targets. If you have a human being, or a closely related cohort of people, with a gene mutation that keeps cholesterol low and doesn’t appear to cause any detrimental health effects, you have a pretty powerful predictive human model for a drug (see TR contributor Robert Plenge’s case for a “Human Knockout Project”). With this kind of biological information in hand, setting priorities for drug discovery gets easier.

But the commentary following the presentation of FOURIER showed that many were underwhelmed. As Plenge points out in an excellent blog post, people are disappointed by the relatively modest gains in cardiovascular outcomes and the implications for blockbuster status (or lack thereof) for evolocumab and alirocumab, the competing antibody from Regeneron Pharmaceuticals and Sanofi. However, Plenge notes, the drug development process of going from human mutants (over-expressors and, eventually, under-expressors of PCSK9) to a drug worked quite well on many levels. Dosing was improved, preclinical models were leveraged for specific hypothesis tests rather than broad (and possibly meaningless) demonstrations of efficacy, and clinical endpoints were informed by human biology.

As a trained geneticist, I love this terrific biological story. But I can’t dismiss the criticisms either, which brings me back to the question of knowledge. Human genetics will provide an orthogonal method of identifying targets and will make the overall process more efficient. I wonder, though: will it be enough to make a real dent in the problems facing the drug development enterprise? Or will it instead help only incrementally when we really need quantum leaps in clinical success, in pricing, and in curing patients? And I think the answer comes back to epistemology. How and why do we in biology know what we know?

I’m going to focus on gene-centric discovery here. It’s my background and serves as a relevant example given the current drug development paradigm of focusing on specific gene targets. So: here’s a question a bioinformaticist friend and I debated while we were at a former company. Why do we know so much about some genes and so little about others? This was of particular importance because we had been directed to find novel targets. See the catch-22, though? Novel targets by definition have little known about them, and those making decisions were often leery of investing millions of dollars in a target with a skimpy data package. This, by the way, is one big positive when using human genetics and an allelic series, and is highlighted by PCSK9. That gene had been poorly studied before, but human genetics allowed a quick ramp-up in understanding of its biology and role.

The problem of novelty in target identification became clear to me as soon as I tried convincing other scientists to consider some novel targets. Here’s a story familiar to anyone who has done ’omics research. I did a lot of transcriptomics. Invariably, in comparing different tissues, diseases, or cells, I generated a list of differentially expressed genes. Often lots and lots of lists. BuzzFeed had nothing on me! Although maybe I would have been more successful taking a page from their sensationalistic style: “You won’t believe the top 10 most differentially expressed genes between inflamed and normal mouse colons (number 3 is a real shocker)!”
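
For readers who haven’t run this kind of analysis, here is a minimal, purely illustrative sketch of how such a list gets generated. The data, gene IDs, and two-group design are simulated, and a plain t-test stands in for the moderated statistics that real pipelines (limma, DESeq2, edgeR) would use.

```python
# Minimal, illustrative sketch of building a ranked list of differentially
# expressed genes. Data, gene IDs, and the two-group design are all simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
genes = [f"gene_{i}" for i in range(1000)]                 # hypothetical gene IDs
inflamed = rng.normal(loc=8.0, scale=1.0, size=(1000, 3))  # log2 expression, 3 samples
normal = rng.normal(loc=8.0, scale=1.0, size=(1000, 3))
inflamed[:5] += 2.0  # pretend a handful of genes really are induced in inflamed tissue

# Per-gene fold change and a plain two-sample t-test.
log2_fc = inflamed.mean(axis=1) - normal.mean(axis=1)
t_stat, p_val = stats.ttest_ind(inflamed, normal, axis=1)

# Benjamini-Hochberg adjustment, then rank genes by adjusted p-value (FDR).
n = len(p_val)
order = np.argsort(p_val)
ranks = np.arange(1, n + 1)
adj_sorted = np.minimum.accumulate((p_val[order] * n / ranks)[::-1])[::-1]
adj_p = np.empty(n)
adj_p[order] = np.clip(adj_sorted, 0, 1)

for name, fc, q in sorted(zip(genes, log2_fc, adj_p), key=lambda g: g[2])[:10]:
    print(f"{name}\tlog2FC={fc:+.2f}\tFDR={q:.3g}")
```

The point is less the statistics than the output: every comparison yields another ranked list, and the lists pile up fast.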

In any case, there would be familiar genes and there would be novel genes. When we showed these lists to the biologists with whom we were working, they mostly gravitated to the genes they recognized. I can’t blame them; they knew they’d be asked to justify further work, and having several hundred papers sure makes it easier to build an argument for biological plausibility (insert your favorite version of the lamplight/car keys story here). The specific question my friend and I debated was: Are known things known and novel things novel because the known things are more important in terms of biological function and therefore have the greatest likelihood of being good drug targets? Or are they known because of historical accident? Or, the third option, is what we know due to the tools we use? I don’t think this is a binary (trinary?) choice. The reality is surely a mix of all three. But if the first factor predominates, that has some implications for what we can expect human genetics to do for drug development.

Postulate that discovery in biology is biased toward finding the genes with the largest effects first, regardless of how one does the looking. To illustrate this, I’ll use an example from Ted Chiang’s amazing novella “Story of Your Life,” which was the basis for one of my favorite movies of last year, Arrival. A central theme in both works is how different ways of perceiving reality can nevertheless lead to the same place.

If one throws a ball through the air, where will it land? One can use Newtonian mechanics to describe its arc, rise, and fall. Or one can use the Lagrangian formulation and see the path as the one that minimizes the action the ball takes. Either method predicts that the ball ends up in the same place. My analogy: is our knowledge of genes like that? Would the accumulation of knowledge have looked pretty much the same even if different scientists had been using different tools to study different biological problems, because, by the nature of our shared evolutionary history, certain genes are just more fundamental, important, and pleiotropic? (For a fascinating rumination on the same question in chemistry, take a look at Derek Lowe’s piece here.)
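
To make the analogy concrete, here is the standard textbook version of the two routes to the same answer (ordinary projectile motion; nothing here is specific to the genetics argument):

```latex
% Newtonian route: integrate F = ma for a ball launched at speed v_0 and angle \theta.
\[
  m\ddot{x} = 0, \qquad m\ddot{y} = -mg
  \quad\Longrightarrow\quad
  x(t) = v_0 t\cos\theta, \qquad y(t) = v_0 t\sin\theta - \tfrac{1}{2}g t^{2}.
\]
% Lagrangian route: write L = T - V and require the action to be stationary;
% the Euler-Lagrange equations reproduce exactly the same equations of motion.
\[
  L = \tfrac{1}{2}m\left(\dot{x}^{2} + \dot{y}^{2}\right) - mgy, \qquad
  \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0
  \quad\Longrightarrow\quad
  \ddot{x} = 0, \quad \ddot{y} = -g.
\]
% Either route puts the ball down at the same range, R = v_0^{2}\sin(2\theta)/g.
```

Different formalisms, same landing spot; that is the flavor of the question being asked here about gene discovery.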

Contrast this with historical accident. Here I’ll go back to physics and invoke the many-worlds hypothesis. If we could rewind the clock and start again, how different would our history and discovery be? In this interpretation, initial discoveries are at least somewhat random, but once they occur, it becomes more likely that knowledge will accrete around them like nacre around a grain of sand in an oyster’s mantle. Initial discoveries have a canalization effect, in other words, and as data on the effects of specific genes accumulate, those canals get deeper. As illustrated in my earlier example of showing people lists of genes, there is a natural and understandable gravitation toward adding another pebble to the hill rather than placing a rock on a novel patch of ground.

And then there are tools. I’ve been a biological technologist for much of my career, using technologies like microarrays and, later, next-gen sequencing to speed, enhance, and extend experimental approaches. So I know there are questions we could not have easily asked and biological problems we would not have tried to approach without the right tools. I remember the early days of fluorescence microscopy and how much it changed our view of the cell, and of Sanger and Maxam-Gilbert sequencing, when actually decoding the order of nucleotides in a gene became feasible. I also remember friendly-ish debates among the geneticists, biochemists, molecular biologists, and cell biologists about the best way to do research, with each approach having specific benefits. This general assertion, that tools help us do more, seems circular and obvious, but the implications are deep. Just as many believe language shapes how we think, tools shape how we measure and construct our pictures of the world. When you have a hammer, and all that.

Circling back to PCSK9 and other human genetics-enabled targets, having an orthogonal target discovery method may not be enough to really push the industry forward if we’ve already found the majority of the most broadly effective drug targets. New targets may be effective but not better than current therapies except perhaps in niche indications. Good for precision medicine, but not so great amid the current pushback on drug prices. On the other hand, if limitations of tools and/or historical accident played the majority role in limiting discovery in the past, many innovative targets may be right around the corner as we sequence more genomes and begin to connect the dots between genetic abnormalities and problematic (or advantageous) phenotypes.

I don’t know the answer, but we’ll get an idea in the next few years as more of these genetics-derived targets make it to the clinic. If it does turn out that genetics helps with process and speed more than with innovative leaps, well, that’s still helpful. That would also push us further toward new approaches, new platforms, and combinatorial therapies. None of that will be easy, or quicker, or simpler. Just looking at the PD-1/PD-L1 combinatorial clinical trials landscape might be a preview of how messy this could be.

Also, if human genetics is orthogonal, it does increase the number of shots on goal a company can take, although there are limits on how many targets any company can bring to the clinic. It still raises the question, though, of whether those shots will be better or just different. And if it’s the latter, that’s not the solution the industry needs. Unfortunately, as with so many techniques and tools that have come before (high-throughput screening, anyone?), we just won’t know until we know. As much as people would like it to be so, knowledge in this area just won’t inexorably march onward and upward in a straight line.

 

Changing small molecule exclusivity rules as a long-term drug price policy play

This piece originally appeared in the Timmerman Report.

We’re entering uncharted waters in the US government. I don’t think it’s hyperbolic to say there will be new regulations and new laws unlike any we’ve seen before. While I don’t pretend to understand the new administration in any way, I do expect more chaos at the level of policy-making than we’ve seen in decades, and one thing chaos does is increase the likelihood of extreme outcomes.

So here is one speculative policy idea:

I think we should shift the 12-year exclusivity period from biologics to small molecule drugs.

It’s time for biopharma to embrace public health

This piece first appeared in the Timmerman Report.

Some years ago when I was working for a large biopharma, I heard a story. It seems a senior scientific executive had visited and given a seminar in which he described the company’s portfolio of drugs for type 2 diabetes. The company was projecting great uptake and profits. A member of our site raised his hand and said, “But if people just ate less and exercised a little more, they could prevent type 2 diabetes and the market would disappear.”

The answer: “Yeah, but they won’t.”

Harsh! But that executive was right. The Institute for Health Metrics and Evaluation (IHME) recently published a paper in JAMA describing how much different health conditions contribute to private and public health spending in the US. Number one? Diabetes. Following that were heart disease and chronic pain. These are chronic lifestyle diseases with big environmental and behavioral components, and the data make me wonder if there’s an opportunity here for the industry to zig and do some things that, in the long run, may make drug development more sustainable.

I think it’s time for biopharma to get involved in public health.

How Valeant, Anthem, and chirping crickets suggest Saunders’ social contract is doomed

This piece originally appeared in the Timmerman Report.

When Allergan CEO Brent Saunders announced his manifesto on drug pricing just after Labor Day, he was met with acclaim and approval (some examples here and here). He called for a return to the social contract between biopharma companies and patients. In his view, patients understood in the past that developing drugs was risky and cost a lot of time and money, and that patented drugs would therefore be expensive. Drug companies, holding up their end of the social contract, felt an obligation beyond simple profit-making: drugs are supposed to keep patients healthy or get them back to that state. That meant pricing had to take into account the public good, not just profit maximization, and be reasonable. Moving forward, Saunders announced that, among other things, Allergan would commit to value-based pricing and to limiting price increases to no more than single-digit percentage hikes per year.

These are worthy and admirable goals. But I look at other recent events and can’t help feeling his effort is doomed.

One way to improve clinical trial reporting: a Yelp-style rating system

This piece originally appeared in the Timmerman Report.

STAT recently published an in-depth report about the many research centers that don’t bother to publicly disclose the results of their clinical trials, even though they are required to do so. This follows a New England Journal of Medicine article back in March that offered a similar analysis of the lack of reporting and publication of clinical trial data to clinicaltrials.gov.

Most observers of biomedical research would agree that getting data out about what happened in a clinical trial is pretty important, whether the trial succeeded or failed. After all, translational biomedical research is most meaningful when done on human subjects, and negative results can be quite informative and useful. Animal models are nice, but translation of results from animals to humans is a spotty proposition at best. We need to know what’s working, and what’s not, to know how best to allocate our research resources and how to treat patients.

The lack of reporting is an embarrassment for research. It’s also understandable, because so far the FDA hasn’t used its authority to punish anyone for delayed reporting. Nobody appears to have lost any research funding because they failed to post trial results in a timely manner. Universities told STAT their researchers were “too busy,” given other constraints on their time, to report their results. So what really seems to be going on is that reporting is prioritized below most other activities in clinical research.

It was interesting and eye-opening that industry fared better than academia in both the STAT story and the NEJM article with respect to how many studies have been reported. Having seen the industry process first-hand, I’d speculate that (at least for positive trials) there’s a much stronger incentive to get data out in public. Successful trial results can create buzz among clinicians and patients, revving up trial enrollment, which can then help get a new drug on the market faster and convince people to use it when it’s available. It may be that in academia the effort of getting trial results into the required format for clinicaltrials.gov is perceived as too much work relative to the rewards. Academics are naturally going to spend more energy on directly rewarded activities like writing grant proposals and peer-reviewed scientific publications that help them win even more grants, promotions, and other accolades. Well, okay. If this is the case, then figuring out new incentives may be key.

So what would work? Anyone who participates in a clinical trial provides time, may be subject to risks, and is often asked to provide samples that are biobanked to support future exploratory and translational research. It’s like when people donate to food banks: I’m pretty sure they mean that food to be eaten, not to sit on a shelf. These participants in clinical trials deserve to have their volunteerism rewarded.

This got me thinking about how to empower patients to get more of what they want. Patient-centered research is a buzzword these days, and for good reason. Patients have at times been an afterthought in the biomedical research enterprise. I thought of services like Yelp and Uber and Angie’s List and other peer-to-peer systems that allow users to get information, provide feedback and give ratings to specific providers. And I wondered: could this be a way to apply pressure to clinical trial researchers to improve their reporting?
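
Purely as a thought experiment (every name, field, and identifier below is hypothetical, not a description of any existing service), a trial-level scorecard for such a system might combine the registry’s reporting record with participant ratings:

```python
# Hypothetical sketch of a Yelp-style scorecard for clinical trial reporting.
# Nothing here corresponds to a real service or API; it only illustrates the idea.
from dataclasses import dataclass, field
from datetime import date
from statistics import mean
from typing import Optional

@dataclass
class TrialScorecard:
    nct_id: str                       # registry identifier (placeholder value below)
    sponsor: str
    completion_date: date
    results_posted: Optional[date]    # None if results were never reported
    participant_ratings: list = field(default_factory=list)  # 1-5 stars from volunteers

    def reporting_lag_days(self) -> Optional[int]:
        """Days from trial completion to posted results, or None if unreported."""
        if self.results_posted is None:
            return None
        return (self.results_posted - self.completion_date).days

    def summary(self) -> str:
        lag = self.reporting_lag_days()
        stars = f"{mean(self.participant_ratings):.1f}" if self.participant_ratings else "n/a"
        status = f"results posted after {lag} days" if lag is not None else "results NOT reported"
        return f"{self.nct_id} ({self.sponsor}): {status}; participant rating {stars}/5"

# Hypothetical trial that completed in 2015 and never posted results.
card = TrialScorecard("NCT00000000", "Example University", date(2015, 6, 1), None, [2, 3, 1])
print(card.summary())
```

The hard part, of course, would not be the data structure but getting registries, sponsors, and participants to feed it.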

No, CRISPR-Cas won’t save the day for ag biotech

You want to know how to drive a scientist crazy? Insist that you believe something that’s not supported by current scientific evidence. Tell her vaccines cause autism, or that creationism is just as valid a theory as evolution, or that climate change isn’t really happening. I mean, after all, a monster blizzard hit Washington, DC, this January! Global warming, pssh…

There’s an old episode of Friends that did a good job of showing how this kind of conversation goes. Phoebe professes not to believe in evolution and Ross, a paleontologist, keeps trying to convince her that evolution is real using scientific evidence and logic. He grows increasingly frustrated and insistent as she continues to deny the basis of his life’s work, finally losing it when she goads him into admitting (like a good scientist) that even theories like evolution are not immune from questioning and testing.

We train scientists to carefully generate, weigh, and use evidence. To no one’s surprise, this leads many scientists to generalize and assume that, in all matters having to do with the physical world, we all should, and of course will, follow the evidence. Yes, sometimes that leads to unpopular ideas, and sometimes the ideas change as the weight of evidence changes. This training can make scientists kind of boring at cocktail parties. Still, the overall scientific process keeps moving forward, and it does so because of this reliance on evidence.

But many people (including, at times, even some scientists) don’t always think this way about the physical world. And that’s why I’m pessimistic that CRISPR-Cas technology will peacefully resolve the genetically modified organism (GMO) debate.