No, CRISPR-Cas won’t save the day for ag biotech

You want to know how to drive a scientist crazy? Insist that you believe something that’s not supported by current scientific evidence. Tell her that vaccines cause autism, that creationism is just as valid a theory as evolution, or that climate change isn’t really happening. I mean, after all, a monster blizzard hit Washington DC this January! Global warming, pssh…

There’s an old episode of Friends that did a good job of showing how this kind of conversation goes. Phoebe professes not to believe in evolution and Ross, a paleontologist, keeps trying to convince her that evolution is real using scientific evidence and logic. He grows increasingly frustrated and insistent as she continues to deny the basis of his life’s work, finally losing it when she goads him into admitting (like a good scientist) that even theories like evolution are not immune from questioning and testing.

We train scientists to carefully generate, weigh and use evidence. To no one’s surprise, this leads many scientists to generalize and assume that in all matters having to do with the physical world, everyone should, and of course will, follow the evidence. Yes, sometimes that leads to unpopular ideas, and sometimes the ideas change as the weight of evidence changes. This training can make scientists kind of boring at cocktail parties. Still, the scientific process keeps moving forward, and it does so precisely because of this reliance on evidence.

But many people (including, at times, even some scientists) don’t always approach questions about the physical world that way. And that’s why I’m pessimistic that CRISPR-Cas technology will peacefully resolve the Genetically Modified Organism (GMO) debate. Continue reading

Should Basic Lab Experiments Be Blinded to Chip Away at the Reproducibility Problem?

An earlier version of this piece appeared on the Timmerman Report.

Note added 23Feb2016: I also realized that I was highly influenced by Regina Nuzzo’s piece on biases in scientific research (and solutions) in Nature, which has been nicely translated to comic form here.

Some people believe biology is facing a “Reproducibility Crisis.” Reports out of industry and academia have pointed to difficulty in replicating published experiments, and scholars of science have even argued that a majority of published findings may be false. Even if you don’t think the lack of study replication has risen to the crisis point, what is clear is that lots of experiments and analyses in the literature are hard or sometimes impossible to repeat. I tend to take the view that in general people try their best and that biology is just inherently messy, with lots of variables we can’t control for because we don’t even know they exist. Or we perform experiments that have been so carefully calibrated for a specific environment that they’re successful only in that time and place, and sometimes even just with that set of hands. Add to that possible holes in how we train scientists, external pressures to publish or perish, and ever-changing technology.

Still, to keep biomedical research pushing ahead, we need to think about how to bring greater experimental consistency and rigor to the scientific enterprise. A number of people have made thoughtful proposals. Some have called for a clearer and much more rewarding pathway for reporting negative results. Others have created replication consortia to attempt confirmation of key experiments in an orderly and efficient way. I’m impressed by the folks at Retraction Watch and PubPeer who, respectively, call attention to retracted work and provide a forum for commenting on published work. That encourages rigorous, continual review of the published literature. The idea that publication doesn’t immunize research from further scrutiny appeals to me. Still others have called for teaching scientists how to use statistics with greater skill, appropriateness, and nuance. To paraphrase Inigo Montoya in The Princess Bride, “You keep using a p-value cutoff of 0.05. I do not think it means what you think it means.”
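Here’s one way to see why (a toy simulation; the 10% base rate of true effects, the effect size, and the sample sizes are all assumptions I’ve picked purely for illustration):

```python
# Toy simulation: how often is a result with p < 0.05 a false positive
# when only a small fraction of tested hypotheses are actually true?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_tests = 10_000      # hypotheses tested
prior_true = 0.10     # assume only 10% of tested hypotheses reflect real effects
effect_size = 0.7     # assumed true effect size (in standard deviations)
n_per_group = 25      # samples per group in each experiment

true_pos = false_pos = 0
for _ in range(n_tests):
    is_real = rng.random() < prior_true
    shift = effect_size if is_real else 0.0
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(shift, 1.0, n_per_group)
    _, p = stats.ttest_ind(control, treated)
    if p < 0.05:
        if is_real:
            true_pos += 1
        else:
            false_pos += 1

significant = true_pos + false_pos
print(f"'Significant' results: {significant}")
print(f"Fraction that are false positives: {false_pos / significant:.2f}")
```

Under those made-up but not unreasonable assumptions, a sizable fraction of the “discoveries” (on the order of a third or more) are noise, and it gets worse as power drops or the base rate of true effects shrinks. The cutoff hasn’t failed; it just doesn’t mean “there’s a 95% chance this result is real.”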

To these ideas, I’d like to throw out another thought rooted in behavioral economics and our growing understanding of cognitive biases. Would it help basic research take a lesson from clinical trials and introduce blinding in our experiments? Continue reading
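What might that look like at the bench? A minimal sketch, assuming nothing fancier than a short script and a labmate willing to hold the key: samples get random codes before scoring, the person doing the measurements sees only the codes, and the key file stays closed until the numbers are recorded. (The blind_samples helper and the well labels below are hypothetical, just to show the shape of the idea.)

```python
import csv
import secrets

def blind_samples(sample_to_group, key_path="blinding_key.csv"):
    """Assign random codes to samples and write the unblinding key to a file
    that the person doing the measurements should not open until scoring is done."""
    codes = {}
    with open(key_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["code", "sample", "group"])
        for sample, group in sample_to_group.items():
            code = "S-" + secrets.token_hex(3)   # e.g. 'S-a1b2c3'
            codes[sample] = code
            writer.writerow([code, sample, group])
    return codes

# Hypothetical example: treated vs. control wells, scored only by code
samples = {"well_01": "treated", "well_02": "control",
           "well_03": "treated", "well_04": "control"}
print(blind_samples(samples))
```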

Google Knows What’s in Your Inbox, But It Shouldn’t Get Your Genome Without Consent

Originally posted in the Timmerman Report

I once had an idea for a science fiction story where everyone was paranoid about their genetic information getting out because of a misguided belief that genes equal destiny and that the burden of privacy is all on the individual. People would wear protective suits and carefully guard against leaving any iota of tissue out in public—not a single follicle or skin flake. All to prevent anyone else—potential employers, rivals, even potential lovers—from finding out information about their genes.

I planned the story as a satire, taking our current world where Precision Medicine and cheap genome sequencing and not-quite-as-cheap genome interpretation are real things, and extrapolating to an absurdity. I wanted to highlight the kinds of more realistic challenges we might face as we learn more about our genes and face increasing questions about privacy and access to health care services. Of course, I thought this was completely speculative; I’d just be building a straw man story to make a point. I knew something this extreme would never really happen.

But maybe I was wrong. Continue reading

Baseball, regression to the mean, and avoiding potential clinical trial biases

This post originally appeared on The Timmerman Report. You should check out the TR.

It’s baseball season. Which means it’s fantasy baseball season. Which means I have to keep reminding myself that, even though it’s already been a month and a half, that’s still a pretty short time in the long rhythm of the season, and every performance has to be viewed with skepticism. Ryan Zimmerman sporting a 0.293 On Base Percentage (OBP)? He’s not likely to end up there. On the other hand, Jake Odorizzi with an Earned Run Average (ERA) less than 2.10? He’s good, but not that good. I try to avoid making trades in the first few months (although with several players on my team on the Disabled List, I may have to break my own rule) because I know that in small samples, big fluctuations in statistical performance don’t, in the end, tell us much about actual player talent.

One of the big lessons I’ve learned from following baseball and the revolution in sports analytics is that one of the most powerful forces in player performance is regression to the mean. This is the tendency for outlier performances, over the course of repeated measurements, to move back toward the mean, both the individual’s own average and the population-wide average. There’s nothing magical about it; it’s just simple statistical truth.
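A quick toy simulation shows how strong the pull is (the league average of .320, the spread of talent, and the 250 plate appearances per half are all numbers I’ve made up for illustration): give every player a fixed “true talent,” pick out the ones who look hottest over the first half of a season, and see where that same group lands in the second half.

```python
import numpy as np

rng = np.random.default_rng(42)

n_players = 500
# Each player has a fixed "true talent" OBP, drawn around a league average of .320
true_talent = rng.normal(0.320, 0.020, n_players)

def observed_obp(talent, plate_appearances=250):
    # Observed OBP is true talent plus binomial noise from a finite number of PAs
    return rng.binomial(plate_appearances, talent) / plate_appearances

first_half = observed_obp(true_talent)
second_half = observed_obp(true_talent)

# The players who looked best (top 10%) in the first half...
hot_start = first_half >= np.quantile(first_half, 0.9)

print(f"League mean OBP:            {first_half.mean():.3f}")
print(f"Hot starters, first half:   {first_half[hot_start].mean():.3f}")
print(f"Same players, second half:  {second_half[hot_start].mean():.3f}")
```

The hot starters fall most of the way back toward the league average even though their underlying talent never changed; part of what made them look great early on was just noise.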

And as I lift my head up from ESPN and look around, I’ve started to wonder whether regression to the mean might be affecting another interest of mine, and not for the better: whether a lack of understanding of regression to the mean is a problem in our search for ways to reach better health.
Continue reading

An Open Standard for APIs Could Lead us to Better Health

There’s a parable about the elephant and the rider that’s been used by Chip and Dan Heath, and that originated with Jonathan Haidt, to describe how humans make decisions. A person’s mind can be thought of as consisting of a rider, representing the rational part of human thinking, and the elephant she’s riding, representing emotion. Both of these play a role in how a person decides things, and many of us believe the rider–the rational part–is in charge. The rider taps the elephant with her guide stick, and the elephant obediently moves in that general direction or does a specific task, like carrying lumber from place to place.

Except that’s not how a lot of decisions actually get made. Instead, the elephant sees a bunch of bananas, or a herd of other elephants, or a nice cool river to bathe in, and goes that way. And the rider…well, the rider can’t do much about it except, after the fact, rationalize how she always wanted to go in that direction to begin with. Yeah, it was time for a bath, sure.

This framing has stuck in my mind for years and it’s a really helpful way of looking at many of the odd things that people do or say, ranging from climate change denial, to believing genetically modified organisms are inherently evil, to smoking despite everything we know about the harms that result, to even saying that Paul Blart, Mall Cop II is really, you know, not that bad–really. And it also speaks to one of the more vexing problems we have in human health. Why do people keep doing things they really probably shouldn’t, and know they shouldn’t, if they want to stay healthy?

I’ve touched before on how digital tools can make it easier for us to make good decisions. OPower is doing this for power consumption and conservation, and with the advent of tools like Apple’s HealthKit and the proliferation of activity trackers, the time is right to do this for health. Continue reading