What’s the Role of Experts? A Review of The Death of Expertise and Some Thoughts for Biopharma

This piece originally appeared in The Timmerman Report. 


The Death of Expertise by Tom Nichols, 2017, Oxford University Press.

If you’re reading this, chances are you’re an expert or well on your way to becoming one. The Timmerman Report is tailored by content and intent to be valuable to those with the knowledge, experience and interest to make biopharma news worth reading. Experts, in other words.

This isn’t a trivial point: for the vast majority of people—that is, those non-expert in biopharma—news on sites like this one or STAT or Endpoints is as useful as scuba equipment to an octopus. And that’s fine; that’s how our knowledge-based society works. Individuals become experts in specific fields: they take the time and effort to master a specific area, and they build up the intellectual framework to enable advances, discoveries and explanations. Specialization underlies the technological, societal and scientific wonders we take for granted today. There are just too many fields of study for any one person to master, the Maesters of A Song of Ice and Fire aside. Divide and conquer isn’t just a Roman governance philosophy; it also makes for progress.

The natural corollary is that we are all affected by what experts outside our field say and do. Lacking a working and academic knowledge of biopharma does not immunize a person from the impact of the kinds of issues, news, and discoveries discussed and reported here. Drug pricing, innovation, access and healthcare quality and affordability have huge impacts on everyone in the US.

And boy, do many of them have opinions about that! Opinions that they hold tighter and higher than the words of experts. Opinions that influence the ways in which they speak, act, think and yes, sometimes, vote.

This growing issue is at the heart of Tom Nichols’ book, The Death of Expertise. Nichols, a professor of National Security Affairs at the Naval War College and an adjunct at the Harvard Extension School, is a former Senate aide and an expert in Soviet studies. I first became familiar with his work when, after last year’s US Presidential Election, I started consciously expanding the circle of thinkers I listened to. Like Daniel MacArthur and many others of a more liberal bent, I’ve tried to find and listen to people on the center and right.



Should Basic Lab Experiments Be Blinded to Chip Away at the Reproducibility Problem?

An earlier version of this piece appeared on the Timmerman Report.

Note added 23 Feb 2016: I also realized that I was highly influenced by Regina Nuzzo’s piece on biases in scientific research (and solutions) in Nature, which has been nicely translated to comic form here.

Some people believe biology is facing a “Reproducibility Crisis.” Reports out of industry and academia have pointed to difficulty in replicating published experiments, and scholars of science have even argued that a majority of published findings may be false. Even if you don’t think the lack of study replication has risen to the crisis point, what is clear is that many experiments and analyses in the literature are hard, or sometimes impossible, to repeat. I tend to take the view that in general people try their best and that biology is just inherently messy, with lots of variables we can’t control for because we don’t even know they exist. Or we perform experiments that have been so carefully calibrated for a specific environment that they’re successful only in that time and place, and sometimes even just with that set of hands. Add to that possible holes in how we train scientists, the external pressure to publish or perish, and ever-changing technology.

Still, to keep biomedical research pushing ahead, we need to think about how to bring greater experimental consistency and rigor to the scientific enterprise. A number of people have made thoughtful proposals. Some have called for a clearer and much more rewarding pathway for reporting negative results. Others have created replication consortia to attempt confirmation of key experiments in an orderly and efficient way. I’m impressed by the folks at Retraction Watch and PubPeer who, respectively, call attention to retracted work and provide a forum for commenting on published work. That encourages rigorous, continual review of the published literature. The idea that publication doesn’t immunize research from further scrutiny appeals to me. Still others have called for teaching scientists how to use statistics with greater skill and nuance. To paraphrase Inigo Montoya in The Princess Bride, “You keep using a p-value cutoff of 0.05. I do not think it means what you think it means.”
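The point behind the Montoya joke can be made concrete with a quick simulation (a minimal sketch, not from the original piece): if you run many experiments where the null hypothesis is actually true, a p < 0.05 cutoff will still flag roughly 5% of them as "significant." A low p-value alone doesn't tell you a finding is real.

```python
import math
import random

def z_test_p(heads, n=100, p=0.5):
    """Two-sided p-value for observing `heads` heads in n flips of a
    fair coin, using a normal approximation to the binomial."""
    mean = n * p
    sd = math.sqrt(n * p * (1 - p))
    z = abs(heads - mean) / sd
    return math.erfc(z / math.sqrt(2))

random.seed(42)
n_experiments = 1000

# Every simulated experiment uses a genuinely fair coin, so any
# "significant" result here is, by construction, a false positive.
false_positives = sum(
    z_test_p(sum(random.random() < 0.5 for _ in range(100))) < 0.05
    for _ in range(n_experiments)
)

# The false-positive rate lands near 0.05, as the cutoff guarantees.
print(false_positives / n_experiments)
```

In other words, p < 0.05 bounds how often you'll be fooled when there is no effect; it says nothing about how likely a given positive result is to be true, which also depends on how many of the hypotheses being tested are real in the first place.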

To these ideas, I’d like to add another thought, rooted in behavioral economics and our growing understanding of cognitive biases. Would it help for basic research to take a lesson from clinical trials and introduce blinding into our experiments?