How Distributed R&D Could Spark Entrepreneurship in Biopharma

This piece originally appeared in the Timmerman Report.

Remember the patent cliff and the general lack of new and innovative medicines in the industry pipeline? That was the big story of the past decade in biopharma, and it spurred a lot of searching for better ways to organize R&D to improve productivity. One doesn’t hear that quite as often today. There are more innovative drugs both recently approved and moving forward through the pipelines of several biopharma companies.

The conversation these days has shifted toward drug pricing, and how the public is going to pay for some of these new, exciting drugs (the answer, in some cases, is maybe it can’t).

I don’t think the industry is out of the woods yet. One of the main reasons drug prices have become such an issue is that even though there are new, innovative drugs, there aren’t enough of them. At the same time, many of the drugs being approved are only incrementally better than what came before, yet they are priced at a premium. And good reporting has made the public more aware of how many existing drugs are rising in price year after year. In a period of low inflation, especially, the prices of most goods have not been rising at anywhere near the rate of pharmaceuticals.

Biopharma sits in a tough place. Analyses suggest the cost of developing a new drug has generally been doubling every nine years, which may be a by-product of some combination of the complexity of biology, our inability to predict which drugs will work, and the “better than the Beatles” problem. The question then is how to overcome these issues and increase the efficiency of developing new, innovative drugs. Without some kind of change, the industry is looking at a very difficult future in which price hikes run headlong into the wall of payers who finally say enough. Then what?
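As a rough illustration of what that doubling trend implies, here is a minimal sketch of the compounding; the ~$1B baseline is an assumed round number for illustration, not a published estimate:

```python
# A back-of-the-envelope sketch of cost growth if the cost of developing
# a new drug doubles every nine years. The $1B starting figure is an
# illustrative assumption.
cost_now = 1.0        # cost per approved drug today, in billions of dollars
doubling_period = 9   # years per doubling

for years_out in (0, 9, 18, 27, 36):
    projected = cost_now * 2 ** (years_out / doubling_period)
    print(f"{years_out:2d} years out: ${projected:4.1f}B per approved drug")
```

Under those assumptions, costs grow 16-fold over 36 years; whatever the exact baseline, the compounding is what makes the trend unsustainable.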

Should Basic Lab Experiments Be Blinded to Chip Away at the Reproducibility Problem?

An earlier version of this piece appeared on the Timmerman Report.

Note added 23 Feb 2016: I also realized that I was highly influenced by Regina Nuzzo’s piece on biases in scientific research (and solutions) in Nature, which has been nicely translated into comic form here.

Some people believe biology is facing a “Reproducibility Crisis.” Reports out of industry and academia have pointed to difficulty in replicating published experiments, and scholars of science have even argued that a majority of published findings may be false. Even if you don’t think the lack of study replication has risen to the crisis point, what is clear is that many experiments and analyses in the literature are hard, and sometimes impossible, to repeat. I tend to take the view that, in general, people try their best and that biology is just inherently messy, with lots of variables we can’t control for because we don’t even know they exist. Or we perform experiments that have been so carefully calibrated for a specific environment that they succeed only in that time and place, and sometimes only with that particular set of hands. Add to that possible holes in how we train scientists, external pressure to publish or perish, and ever-changing technology.

Still, to keep biomedical research pushing ahead, we need to think about how to bring greater experimental consistency and rigor to the scientific enterprise. A number of people have made thoughtful proposals. Some have called for a clearer and much more rewarding pathway for reporting negative results. Others have created replication consortia to attempt confirmation of key experiments in an orderly and efficient way. I’m impressed by the folks at Retraction Watch and PubPeer who, respectively, call attention to retracted work and provide a forum for commenting on published work. That encourages rigorous, continual review of the published literature, and the idea that publication doesn’t immunize research from further scrutiny appeals to me. Still others have called for teaching scientists to use statistics with greater skill, appropriateness, and nuance. To paraphrase Inigo Montoya in The Princess Bride: “You keep using a p-value cutoff of 0.05. I do not think it means what you think it means.”
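To make that point concrete, here is a minimal simulation; the 10% prevalence of true effects and 80% statistical power are illustrative assumptions, not estimates from any real field:

```python
# A sketch of why "p < 0.05" does not mean "95% chance the effect is real."
# The prevalence and power values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_experiments = 100_000
prevalence = 0.10  # fraction of tested hypotheses with a real effect
power = 0.80       # chance a real effect reaches p < 0.05
alpha = 0.05       # chance a null effect reaches p < 0.05 by luck

real = rng.random(n_experiments) < prevalence
significant = np.where(real,
                       rng.random(n_experiments) < power,
                       rng.random(n_experiments) < alpha)

fdr = np.sum(significant & ~real) / np.sum(significant)
print(f"Share of 'significant' results that are false: {fdr:.0%}")  # ~36%
```

Even with everyone applying the cutoff honestly, roughly a third of the “hits” in this toy setup are false positives, simply because true effects are rare.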

To these ideas, I’d like to throw out another thought, rooted in behavioral economics and our growing understanding of cognitive biases. Would it help if basic research took a lesson from clinical trials and introduced blinding into our experiments?
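For a sense of what that might look like at the bench, here is a minimal sketch of one possible blinding workflow, assuming a simple two-group experiment; the sample IDs, group labels, and file name are all hypothetical:

```python
# A sketch of blinding bench samples, assuming a colleague (not the
# experimenter) runs it and keeps the key file sealed until analysis is
# complete. Sample IDs, groups, and the file name are hypothetical.
import csv
import random

samples = [("S1", "treated"), ("S2", "treated"),
           ("S3", "control"), ("S4", "control")]

# One random code per sample, shuffled so codes carry no group information.
codes = [f"BLIND-{i:03d}" for i in range(1, len(samples) + 1)]
random.shuffle(codes)

# The key linking codes to groups stays with the colleague until unblinding.
with open("blinding_key.csv", "w", newline="") as key_file:
    writer = csv.writer(key_file)
    writer.writerow(["code", "sample_id", "group"])
    for code, (sample_id, group) in zip(codes, samples):
        writer.writerow([code, sample_id, group])

# The experimenter sees only the coded labels, in sorted (uninformative) order.
print(sorted(codes))
```

The point is not the code but the separation of roles: whoever measures and analyzes the samples should not know which group each one belongs to until the data are in.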