Lessons from PCSK9, and How We Know Where to Go in Drug Discovery

This article first appeared in the Timmerman Report.

What drug development lessons should we take from the PCSK9 story? That might depend on how and why we know what we know.

The recent news about Amgen’s anti-PCSK9 antibody evolocumab (Repatha) and its effects on cardiovascular outcomes—the FOURIER trial—added another fascinating chapter to the story of how human genetics is becoming more entwined with drug development. It also sent me back to some old late-night, liberal-arts dorm-room discussions about epistemology, the theory of how we reach rational belief.

How do we collect biomedical knowledge? How do we know what we know about biology, about the genes and proteins and networks and physiology and other phenotypes that we’ve built into models and hairballs and devilishly detailed flowcharts over the past few centuries? And why do we have the current body of facts that we do?

Targeting PCSK9 is the most prominent example of using human genetics to identify new drug targets. There are others in the works (sclerostin and APOC3 come to mind), and they herald an exciting new period of drug development in which the process will be expedited by the existence and study of humans carrying variants, functional and non-functional, in these targets. If you have a human being, or a closely related cohort of people, carrying a gene mutation that keeps cholesterol low without any apparent detrimental health effects, you have a pretty powerful predictive human model for a drug (see TR contributor Robert Plenge’s case for a “Human Knockout Project”). With this kind of biological information in hand, setting priorities for drug discovery gets easier.

But the commentary following the presentation of FOURIER showed many are underwhelmed. As Plenge points out in an excellent blog post, people are disappointed by the relatively modest gains in cardiovascular outcomes and the implications for blockbuster status (or lack thereof) for evolocumab and for alirocumab, the competing antibody from Regeneron Pharmaceuticals and Sanofi. However, Plenge argues, the drug development process of going from human mutants (over-expressors and, eventually, under-expressors of PCSK9) to a drug worked quite well on many levels. Dosing was improved, pre-clinical models were leveraged for specific hypothesis tests rather than broad (and possibly meaningless) demonstrations of efficacy, and clinical endpoints were informed by human biology.

As a trained geneticist, I love this terrific biological story. But I can’t dismiss the criticisms either, which brings me back to the question of knowledge. Human genetics will provide an orthogonal method of identifying targets and will make the overall process more efficient. I wonder, though: will it be enough to make a real dent in the problems facing the drug development enterprise? Or will it instead help only incrementally, when we really need quantum leaps in clinical success, pricing and curing patients? And I think the answer comes back to epistemology. How and why do we in biology know what we know?

I’m going to focus on gene-centric discovery here. It’s my background and serves as a relevant example given the current drug development paradigm of focusing on specific gene targets. So: here’s a question a bioinformaticist friend and I debated while we were at a former company. Why do we know so much about some genes and so little about others? This was of particular importance because we had been directed to find novel targets. See the catch-22, though? By definition, little is known about novel targets, and those making decisions were often leery of investing millions of dollars in a target with a skimpy data package. This, by the way, is one big positive of using human genetics and an allelic series, and it is highlighted by PCSK9. That gene had barely been studied before, but human genetics allowed a quick ramp-up in understanding its biology and role.

The problem of novelty in target identification became clear to me as soon as I tried convincing other scientists to consider some novel targets. Here’s a story familiar to anyone who has done ‘omics research. I did a lot of transcriptomics. Invariably, comparing different tissues, diseases, and cells, I generated lists of differentially expressed genes. Often lots and lots of lists. Buzzfeed had nothing on me! Although maybe I would have been more successful taking a page from their sensationalistic style: “You won’t believe the top 10 most differentially expressed genes between inflamed and normal mouse colons (number 3 is a real shocker)!”

In any case, there would be familiar genes and there would be novel genes. When we showed these lists to the biologists with whom we were working, they mostly gravitated to the genes they recognized. I can’t blame them; they knew they’d be asked to justify further work, and having several hundred papers sure makes it easier to build an argument for biological plausibility (insert your favorite version of the lamplight/car keys story here). The specific question my friend and I debated was: are known things known and novel things novel because the known things are more important in terms of biological function, and therefore most likely to be good drug targets? Or are they known because of historical accident? Or, the third option, is what we know due to the tools we use? I don’t think this is a binary (trinary?) choice. The reality is surely a mix of all three. But if the first explanation predominates, that has implications for what we can expect human genetics to do for drug development.

Postulate that discovery in biology is biased toward finding the genes with the largest effects first, regardless of how one does the looking. To illustrate this, I’ll use an example from Ted Chiang’s amazing novella, “Story of Your Life,” which was the basis for one of my favorite movies of last year, Arrival. A central theme in both works is how different ways of perceiving reality can nevertheless lead to the same place.

If one throws a ball through the air, where will it land? One can use Newtonian mechanics to describe the arc, rise and fall. Or one can use the Lagrangian formulation to see the path as the one that minimizes the action the ball takes. Either method predicts the ball ends up in the same specific place. My analogy: is our knowledge of genes like that? Would the accumulation of knowledge have looked pretty much the same even if different scientists had been using different tools to study different biological problems, because by the nature of our shared evolutionary history certain genes are just more fundamental, important and pleiotropic? (For a fascinating rumination on the same question in chemistry, take a look at Derek Lowe’s piece here.)
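
To make the ball example concrete, here is the standard textbook calculation; nothing below is specific to Chiang’s story, and the launch speed v0 and angle theta are just assumed inputs:

```latex
% Newtonian route: integrate F = ma under constant gravity g.
x(t) = v_0 \cos\theta \, t, \qquad
y(t) = v_0 \sin\theta \, t - \tfrac{1}{2} g t^2
% Setting y(t) = 0 gives the landing point (the range):
R = \frac{v_0^2 \sin 2\theta}{g}

% Lagrangian route: extremize the action S = \int L \, dt, with
L = \tfrac{1}{2} m \left( \dot{x}^2 + \dot{y}^2 \right) - m g y
% The Euler-Lagrange equations yield \ddot{x} = 0 and \ddot{y} = -g,
% i.e., exactly the same trajectory and the same landing point R.
```

Two very different ways of looking; one landing spot.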

Contrast this with historical accident. Here I’ll go back to physics and invoke the many-worlds interpretation. If we could rewind the clock and start again, how different would our history and discovery be? In this view, initial discoveries are at least somewhat random, but once they occur it becomes more likely that knowledge will accrete around them like nacre around a grain of sand in an oyster’s mantle. Initial discoveries have a canalization effect, in other words, and as data on the effects of specific genes accumulate, those canals get deeper. As illustrated in my earlier example of showing people lists of genes, there is a natural and understandable gravitation toward adding another pebble to the hill rather than placing a rock on a novel patch of ground.

And then there are tools. I’ve been a biological technologist for much of my career, using technologies like microarrays and, later, next-gen sequencing to speed, enhance and extend experimental approaches. So I know there are questions we could not have easily asked, biological problems we would not have tried to approach, without the right tools. I remember the early days of fluorescence microscopy and how much that changed our view of the cell, and of Sanger and Maxam-Gilbert sequencing, when actually decoding the order of nucleotides in a gene became feasible. I also remember friendly-ish debates among the geneticists, biochemists, molecular biologists and cell biologists about the best way to do research, with each approach having specific benefits. This general assertion—that tools help us do more—seems circular and obvious, but the implications are deep. Just as many believe language shapes how we think, tools shape how we measure and construct our pictures of the world. When you have a hammer, and all that.

Circling back to PCSK9 and other human genetics-enabled targets, having an orthogonal target discovery method may not be enough to really push the industry forward if we’ve already found the majority of the most broadly effective drug targets. New targets may be effective but not better than current therapies except perhaps in niche indications. Good for precision medicine, but not so great amid the current pushback on drug prices. On the other hand, if limitations of tools and/or historical accident played the majority role in limiting discovery in the past, many innovative targets may be right around the corner as we sequence more genomes and begin to connect the dots between genetic abnormalities and problematic (or advantageous) phenotypes.

I don’t know the answer, but we’ll get an idea in the next few years as more of these genetics-derived targets make it to the clinic. If it does turn out that genetics helps with process and speed more than innovative leaps, well, that’s still helpful. That would also push us further toward new approaches, new platforms and combinatorial therapies. None of that will be easy, or quicker, or simpler. Just looking at the PD1/PDL1 combinatorial clinical trials landscape might be a preview of how messy this could be.

Also, if human genetics is orthogonal, it does increase the number of shots on goal a company can take, although there are limits on how many targets any company can bring to the clinic. It still raises the question, though, of whether those shots will be better or just different. And if it’s the latter, that’s not the solution the industry needs. Unfortunately, like so many techniques and tools that have come before (high throughput screening, anyone?), we just won’t know until we know. As much as people would like it to be so, knowledge in this area just won’t inexorably march onward and upward in a straight line.

 


How Brexit changed the way I look at biopharma’s reputation problem

This piece originally appeared in the Timmerman Report.

You may have heard something recently about Britain, the European Union (EU), some vote or other, chaos, turmoil, blah, blah, blah… You might also have heard how the presumptive Republican nominee for President of the United States has gotten to that position by identifying a strong thread of anti-establishment, populist sentiment in the US. And you may have heard that biotech and pharma are suffering from a reputation problem.

One of these things is not like the other, right?

I’m not so sure.

That biopharma has a reputation problem isn’t in doubt. The question, though, is how the industry got here. I want to know this because, like many drug developers, I believe that knowing the cause of a condition makes it easier to find a fix.

There have been numerous candidate reasons, and I’m open to the idea that the cause is multifaceted, just as it is for many chronic diseases. In the past year alone we’ve had the Martin Shkreli circus, admonishments about drug pricing from political candidates, analyses of how yearly price increases often outstrip inflation, Pfizer pursuing puzzling acquisitions to avoid paying taxes, and companies suing the FDA to prevent generic competition. Biopharma’s problems go further back as well, and examples of less than exemplary behavior abound. Hey, I was working at Merck when Vioxx was happening.

But Brexit points to something else. While it makes sense to look to biopharma’s own behavior for the causes of its reputation problem, business doesn’t happen in a vacuum. Political and social trends over the past few years suggest a rejection of elite opinion and earned expertise that is touching many parts of society and culture. Derek Lowe at In the Pipeline had a fascinating recent post on this phenomenon in the context of Right to Try laws (also delving into Trump and Brexit). As he points out, Right to Try laws sit in that thorny spot where technological knowledge of drug development and libertarian impulses collide. I can come up with a half dozen reasons why I think Right to Try laws are in general a bad idea, and none of them will sway someone who wants access to an experimental therapy for their dying child. You can see this playing out in the debate about whether eteplirsen should be approved for Duchenne muscular dystrophy.

Where did this suspicious, and sometimes hostile, reaction to elites and expert opinion come from? MSNBC anchor Chris Hayes, among others, has posited that over the past several years many people have suffered the effects of broken promises and crippled expectations. If the social contract between elites and the rest of the population is that everyone will benefit if the elites (whether Democrats or Republicans) are given power, then breaking that contract leads to disillusionment and, eventually, rejection. A similar analysis from another part of the political spectrum was made by Charles Murray (H/T to @ScottGottliebMD). One can also point to growing partisanship as, if not a cause, then at least a force sustaining and deepening the diminishment of expert and elite opinion. Unfortunately, there is no such thing as precision discrediting. When one calls into question the statements of scientists on specific topics such as global warming or vaccinations, one tars with a broad brush, and the whole scientific edifice takes a hit. It’s like those kids with the paint rollers in Splatoon.

From this perspective, the poor reputation of biopharma stems at least in part from larger societal trends in how people perceive expertise. Healthcare is highly complicated and technical, and it’s not a stretch to say it’s associated with the expert and the elite. This perspective has some good and some bad implications for biopharma. On the good side, one can say it’s not (all) the fault of biopharma companies’ specific actions that the industry’s reputation has suffered. But on the bad side, this makes it that much harder to fix the problem. Better behavior by companies is a prerequisite for improving biopharma’s rep, but it isn’t the final cure.

However, if biopharma is serious about its reputation, and buys into this theory, it could use this perspective in a few ways.

First, it can look at the one industry that is highly expert-driven and still has a good reputation: high technology, as represented by companies like Apple and Amazon, among others. I would conjecture that these companies, by taking a very consumer-focused approach and showing a real dedication to innovation, simply demonstrate to people repeatedly, several times a day, that they are trustworthy and worth the money. Now, this is hard to do in biopharma, where product development cycles are pretty much the diametric opposite of the fail fast, hard and often ethos found in Silicon Valley companies. But the industry can do a better job of explaining its innovative and impactful products, and being honest about when new products are neither—and pricing them accordingly.

Second, biopharma could start taking a longer, more societally focused view in how it uses its considerable lobbying muscle. To take one example, many in the US (and Europe) feel betrayed by the obvious effects of globalization on employment in some job sectors. Those in favor of globalization routinely argue that everyone benefits from cheaper prices on manufactured goods, and that hundreds of millions of people in the developing world are seeing a substantial increase in their living standards. This is measurably true. It’s also an argument that doesn’t resonate at all with someone who trained and worked as a machinist for fifteen years and lost her job to outsourcing. There’s an asymmetry between perceived benefit and experienced insult and loss.

Biopharma could push for greater investments in job retraining, in both the public and private sectors, as well as for extensions to programs such as unemployment benefits to allow people the time to get retrained. You might say that this is outside the scope of what biopharma is responsible for, but that’s a self-imposed limit. One of the arguments for why elites and experts have lost their status is that so many organizations seem concerned solely with narrowly defined self-interest and shareholder value, not with the workers, customers and society within which they operate.

Figuring out the best way to rehabilitate biopharma’s rep is a problem, but it’s a necessary one to solve for the industry’s long-term health. The Trump and Sanders campaigns have demonstrated that there are large reservoirs of resentment out there that shouldn’t be ignored. And it’s not hopeless, either. Large-scale change in societal attitudes is possible. Canada, unlike much of the developed world, has created a culture welcoming of immigrants, and this was accomplished via a long-standing, coordinated effort by the Canadian government and others to make openness a core Canadian trait. They persisted and took the long view. If biopharma can spend decades and billions of dollars in dogged pursuit of specific targets (I’m looking at you, amyloid beta), then perhaps it can do the same to try to change the environment in which we all live and work.

 

Should Basic Lab Experiments Be Blinded to Chip Away at the Reproducibility Problem?

An earlier version of this piece appeared on the Timmerman Report.

Note added 23 Feb 2016: I also realized that I was highly influenced by Regina Nuzzo’s piece in Nature on biases in scientific research (and solutions), which has been nicely translated into comic form here.

Some people believe biology is facing a “Reproducibility Crisis.” Reports out of industry and academia have pointed to difficulty in replicating published experiments, and scholars of science have even argued that a majority of published findings may be false. Even if you don’t think the lack of study replication has risen to the crisis point, what is clear is that lots of experiments and analyses in the literature are hard or sometimes impossible to repeat. I tend to take the view that in general people try their best and that biology is just inherently messy, with lots of variables we can’t control for because we don’t even know they exist. Or we perform experiments that have been so carefully calibrated for a specific environment that they’re successful only in that time and place, and sometimes even just with that set of hands. Add to that possible holes in how we train scientists, external pressures to publish or perish, and ever-changing technology.

Still, to keep biomedical research pushing ahead, we need to think about how to bring greater experimental consistency and rigor to the scientific enterprise. A number of people have made thoughtful proposals. Some have called for a clearer and much more rewarding pathway for reporting negative results. Others have created replication consortia to attempt confirmation of key experiments in an orderly and efficient way. I’m impressed by the folks at Retraction Watch and PubPeer who, respectively, call attention to retracted work, and provide a forum for commenting on published work. That encourages rigorous, continual review of the published literature. The idea that publication doesn’t immunize research from further scrutiny appeals to me. Still others have called for teaching scientists how to use statistics with greater skill and appropriateness and nuance. To paraphrase Inigo Montoya in The Princess Bride, “You keep using a p-value cutoff of 0.05. I do not think it means what you think it means.”

To these ideas, I’d like to throw out another thought rooted in behavioral economics and our growing understanding of cognitive biases. Would it help if basic research took a lesson from clinical trials and introduced blinding into our experiments? Continue reading
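
As a concrete illustration of the idea, here is a minimal Python sketch of what sample blinding might look like at the bench. The sample names, group labels, and file handling are all hypothetical; the only point is that the key linking codes to conditions is generated once and set aside until the analysis is locked down.

```python
import csv
import random

def blind_samples(sample_ids, seed=None):
    """Assign opaque codes to samples and return (codes, key).

    The key mapping codes back to real sample IDs should be held by
    someone not doing the analysis until the numbers are final.
    """
    rng = random.Random(seed)
    codes = [f"S{i:03d}" for i in range(1, len(sample_ids) + 1)]
    rng.shuffle(codes)
    return codes, dict(zip(codes, sample_ids))

# Hypothetical experiment: six mice, two treatment groups.
samples = ["ctrl_1", "ctrl_2", "ctrl_3", "drug_1", "drug_2", "drug_3"]
codes, key = blind_samples(samples, seed=42)

# The analyst sees only the opaque codes, in neutral order...
print("Analyze these:", sorted(codes))

# ...while the unblinding key goes into a file that stays closed
# until measurements and statistics are finalized.
with open("unblinding_key.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["code", "sample_id"])
    writer.writerows(sorted(key.items()))
```

The analyst works only with the opaque codes, which is the same trick clinical trials use to keep expectations from nudging the measurements.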

Baseball, regression to the mean, and avoiding potential clinical trial biases

This post originally appeared on The Timmerman Report. You should check out the TR.

It’s baseball season. Which means it’s fantasy baseball season. Which means I have to keep reminding myself that, even though it’s already been a month and a half, that’s still a pretty short time in the long rhythm of the season, and every performance has to be viewed with skepticism. Ryan Zimmerman sporting a 0.293 On Base Percentage (OBP)? He’s not likely to end up there. On the other hand, Jake Odorizzi with an Earned Run Average (ERA) less than 2.10? He’s good, but not that good. I try to avoid making trades in the first few months (although with several players on my team on the Disabled List, I may have to break my own rule) because I know that in small samples, big fluctuations in statistical performance don’t really tell us much about actual player talent.

One of the big lessons I’ve learned from following baseball and the revolution in sports analytics is that one of the most powerful forces in player performance is regression to the mean. This is the tendency of outliers, over the course of repeated measurements, to move back toward the mean, at both the individual and the population level. There’s nothing magical about it, just simple statistical truth.
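
You can watch it happen in a toy simulation. The talent and luck numbers below are invented for illustration and aren’t calibrated to real OBP distributions:

```python
import random

random.seed(1)

N_PLAYERS = 300
TRUE_MEAN, TRUE_SD = 0.320, 0.015  # spread of real talent (invented numbers)
NOISE_SD = 0.030                   # small-sample luck over a half season

# Each player's observed OBP is true talent plus luck, drawn twice.
talent = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(N_PLAYERS)]
first_half = [t + random.gauss(0, NOISE_SD) for t in talent]
second_half = [t + random.gauss(0, NOISE_SD) for t in talent]

# Take the 20 hottest first-half performers and see what they do next.
hot = sorted(range(N_PLAYERS), key=lambda i: first_half[i], reverse=True)[:20]
print(f"Hot starters, first half:  {sum(first_half[i] for i in hot)/20:.3f}")
print(f"Same players, second half: {sum(second_half[i] for i in hot)/20:.3f}")
# The second number falls back toward .320: regression to the mean.
```

The hot starters weren’t fakes; they were a mix of genuinely good players and lucky ones, and the luck doesn’t repeat.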

And as I lift my head up from ESPN sports and look around, I’ve started to wonder if regression to the mean might be affecting another interest of mine, and not for the better. I wonder if a lack of understanding of regression to the mean might be a problem in our search for ways to reach better health.
Continue reading

Making Change

And now for something completely different! Short fiction in honor of the recent unveiling of the Apple Watch and HealthKit.

“I wouldn’t eat that if I were you.”

Sylvia paused, bacon cheeseburger halfway to her mouth, and peered at the neon green band wrapped around her wrist. The wraparound touchscreen was currently showing a cat emoji. It had a frowny face, expression halfway between puzzlement and alarm.

“What did you say?”

“I’m just saying,” said her Best Buddy wristband, “that when we met a few weeks ago, you mentioned wanting to keep your weight in a specific range.” The emoji shrugged. “Little friendly reminder. You know?”

Sylvia carefully put the burger back down and resisted the urge to lick grease off her fingers. She fumbled for her napkin, her fingers leaving translucent streaks on the thin, white paper.

“I–well, yeah. But, I mean, you’ve never said anything like this before like when–” She broke off, remembering the milkshake, the onion rings, the King-size Choconut bar…

“Well it’s not the first thing you do, is it? When you meet someone and you’re just getting to know them?” The cat had morphed into a light pink, animated mouse, standing on its hind legs, bashfully kicking one leg. “But now, we’re friends!” Continue reading

Baseball, Bayes, Fisher and the problem of the well-trained mind

One of the neat things about the people in the baseball research community is how willing many of them are to continually question the status quo. Maybe it’s because sabermetrics is itself a relatively new field, and so there’s a humility there. Assumptions always, always need to be questioned.

Case in point: a great post by Ken Arneson entitled “10 things I believe about baseball without evidence.” He uses the latest failure of the Oakland A’s in the recent MLB playoffs to highlight areas of baseball we still don’t understand, and for which we may not even be asking the right questions. Why, for example, haven’t the A’s advanced to the World Series for decades despite fielding good and often great teams? Yes there’s luck and randomness, but at some point the weight of the evidence encourages you to take a second look. Otherwise, you become as dogmatic as those who still point to RBIs as the measure of the quality of a baseball batter. Which they are not.

One of the thought-provoking things Arneson brings up is the question of whether the tools we use shape the way we study phenomena—really, the way we think—and therefore unconsciously limit the kinds of questions we choose to ask. His example is the use of SQL in creating queries, and the inherent assumption of that data model that the rows in a SQL database are individual events with no precedence or dependence on one another. And yet, as he points out, the act of hitting a baseball is an ongoing dialog between pitcher and batter. Prior events, we believe, have a strong influence on the outcome. Arneson draws an analogy to linguistic relativity, the hypothesis that the language a person speaks influences aspects of her cognition.
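
Here’s a small sketch of what that flattening looks like in practice. The pitch-by-pitch data is a toy set invented for illustration, and Python’s built-in sqlite3 stands in for a real database:

```python
import sqlite3

# Toy pitch-by-pitch data, invented for illustration:
# (at_bat, pitch_number, pitch_type, batter_swung)
pitches = [
    (1, 1, "fastball", 0),
    (1, 2, "fastball", 1),
    (1, 3, "curve",    1),
    (2, 1, "curve",    0),
    (2, 2, "fastball", 1),
    (2, 3, "fastball", 0),
]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pitch (at_bat INT, num INT, type TEXT, swung INT)")
con.executemany("INSERT INTO pitch VALUES (?, ?, ?, ?)", pitches)

# The query SQL makes natural treats every pitch as an independent row:
(rate,) = con.execute(
    "SELECT AVG(swung) FROM pitch WHERE type = 'curve'"
).fetchone()
print("Swing rate on curves, ignoring context:", rate)

# Restoring the pitcher-batter dialog takes a deliberate self-join,
# so the previous pitch in the same at-bat becomes part of each row:
query = """
    SELECT p.type, prev.type AS prev_type, AVG(p.swung) AS swing_rate
    FROM pitch AS p
    JOIN pitch AS prev
      ON prev.at_bat = p.at_bat AND prev.num = p.num - 1
    GROUP BY p.type, prev.type
"""
for cur_type, prev_type, swing_rate in con.execute(query):
    print(f"Swing rate on {cur_type} after {prev_type}: {swing_rate}")
```

The first query is the one SQL makes easy; the self-join that restores the sequence is the one you have to think to write, which is exactly Arneson’s point.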

So let me examine this concept in the context of another area of inquiry–biological research–and ask whether something similar might be affecting (and limiting) the kinds of experiments we do and the questions we ask.

Continue reading

Is Opower the model for getting us to wellness and health?

This is a post about nudges. And optimism.

There’s a story I read a long time ago by David Brin. It’s called “The Giving Plague,” and the protagonist is a virologist and epidemiologist who describes his life working on viruses and vectors. The Plague of the title is a virus that has evolved the ability to make infected people enjoy donating blood. Recipients keep giving blood, leading to an exponentially expanding network of people who find themselves giving blood regularly and even circumventing age and other restrictions to make sure they can give their pint every eight weeks.

The central twist of the story is that the protagonist’s mentor, who discovers this virus, realizes people who donate blood also perform other altruistic acts—that the act of giving blood changes their self-image. Makes them behave as better people. And so he suppresses the discovery, for the greater good of society. The protagonist, a rampant careerist, begins plotting murder so he can take credit. But before he can act, more diseases strike, the Giving Plague moves through the population, and the protagonist forgets about it in his efforts to cure newer diseases.

And if anyone thinks something like this is too outlandish, I encourage you to read this piece about Toxoplasma gondii and how it makes infected mice charge at cats, the better to be eaten so that T. gondii can spread. Yeah.

But what does this story have to do with the future of wellness and health?

Continue reading