One way to improve clinical trial reporting: a Yelp-style rating system

This piece originally appeared in the Timmerman Report.

STAT recently published an in-depth report on the many research centers that don’t bother to publicly disclose the results of their clinical trials, even though they are required to do so. It follows a New England Journal of Medicine article from March that presented a similar analysis of the lack of reporting and publication of clinical trial data to clinicaltrials.gov.

Most observers of biomedical research would agree that getting the data out about what happened in a trial is pretty important, whether the trial succeeded or failed. After all, translational research is most meaningful when done in human subjects, and negative results can be quite informative and useful. Animal models are nice, but translating results from animals to humans is a spotty proposition at best. We need to know what’s working, and what’s not, to know how best to allocate our research resources and how to treat patients.

The lack of reporting is an embarrassment for research. It’s also understandable: so far the FDA hasn’t used its authority to punish anyone for delayed reporting, and nobody appears to have lost research funding for failing to post trial results in a timely manner. Universities told STAT their researchers were “too busy,” given other demands on their time, to report their results. So what really seems to be going on is that reporting is prioritized below most other activities in clinical research.

It was interesting and eye-opening that industry fared better than academia in both the STAT story and the NEJM article with respect to how many studies were reported. Having seen the industry process first-hand, I’d speculate that, at least for positive trials, there’s a much stronger incentive to get data out in public. Successful trial results can create buzz among clinicians and patients, revving up trial enrollment, which in turn can help get a new drug on the market faster and convince people to use it once it’s available. In academia, the effort of getting trial results into the required format for clinicaltrials.gov may be perceived as too much work relative to the rewards. Academics will naturally spend more energy on directly rewarded activities like writing grant proposals and peer-reviewed scientific publications that help them win even more grants, promotions, and other accolades. Well, okay. If that’s the case, then figuring out new incentives may be key.

So what would work? Anyone who participates in a clinical trial provides time, may be exposed to risks, and is often asked to provide samples that are biobanked to support future exploratory and translational research. It’s like when people donate to food banks: I’m pretty sure they mean that food to be eaten, not to sit on a shelf. Participants in clinical trials deserve to have their volunteerism rewarded.

This got me thinking about how to empower patients to get more of what they want. Patient-centered research is a buzzword these days, and for good reason. Patients have at times been an afterthought in the biomedical research enterprise. I thought of services like Yelp and Uber and Angie’s List and other peer-to-peer systems that allow users to get information, provide feedback and give ratings to specific providers. And I wondered: could this be a way to apply pressure to clinical trial researchers to improve their reporting?
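To make the rating idea concrete, here’s a toy sketch of one ingredient such a system might compute: a sponsor’s on-time reporting rate. This is entirely my own illustration; the Trial fields and the scoring rule are assumptions, not anything clinicaltrials.gov actually exposes, and a real Yelp-style rating would fold in patient feedback as well.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class Trial:
    sponsor: str
    results_due: date                 # deadline for posting results
    results_posted: Optional[date]    # None if results were never posted

def on_time_reporting_rate(trials: List[Trial], today: date) -> Optional[float]:
    """Fraction of trials past their deadline whose results were posted on time."""
    due = [t for t in trials if t.results_due <= today]
    if not due:
        return None  # nothing due yet, so no score
    on_time = sum(
        t.results_posted is not None and t.results_posted <= t.results_due
        for t in due
    )
    return on_time / len(due)

# Hypothetical example: one trial reported early, one never reported
trials = [
    Trial("Example University", date(2015, 6, 1), date(2015, 5, 20)),
    Trial("Example University", date(2015, 9, 1), None),
]
print(on_time_reporting_rate(trials, today=date(2016, 1, 1)))  # 0.5
```

Continue reading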

No, CRISPR-Cas won’t save the day for ag biotech

You want to know how to drive a scientist crazy? Insist that you believe something that’s not supported by current scientific evidence. Tell her that vaccines cause autism, that creationism is just as valid a theory as evolution, or that climate change isn’t really happening. I mean, after all, a monster blizzard hit Washington, DC this January! Global warming, pssh…

There’s an old episode of Friends that did a good job of showing how this kind of conversation goes. Phoebe professes not to believe in evolution and Ross, a paleontologist, keeps trying to convince her that evolution is real using scientific evidence and logic. He grows increasingly frustrated and insistent as she continues to deny the basis of his life’s work, finally losing it when she goads him into admitting (like a good scientist) that even theories like evolution are not immune from questioning and testing.

We train scientists to carefully generate, weigh, and use evidence. To no one’s surprise, this leads many scientists to generalize: to assume that in all matters having to do with the physical world, everyone should, and of course will, follow the evidence. Yes, sometimes that leads to unpopular ideas, and sometimes the ideas change as the weight of evidence changes. This training can make scientists kind of boring at cocktail parties. Still, the scientific process keeps moving forward, and it does so precisely because of this reliance on evidence.

But many people (including, at times, even some scientists) don’t always weigh evidence that way when it comes to the physical world. And that’s why I’m pessimistic that CRISPR-Cas technology will peacefully resolve the Genetically Modified Organism (GMO) debate.

Continue reading

Should Basic Lab Experiments Be Blinded to Chip Away at the Reproducibility Problem?

An earlier version of this piece appeared on the Timmerman Report.

Note added 23 Feb 2016: I also realized that I was highly influenced by Regina Nuzzo’s piece in Nature on biases in scientific research (and possible solutions), which has been nicely translated into comic form here.

Some people believe biology is facing a “Reproducibility Crisis.” Reports out of industry and academia have pointed to difficulty in replicating published experiments, and scholars of science have even suggested that a majority of published studies may not be true. Even if you don’t think the lack of replication has risen to the level of a crisis, what is clear is that lots of experiments and analyses in the literature are hard, and sometimes impossible, to repeat. I tend to take the view that in general people try their best and that biology is just inherently messy, with lots of variables we can’t control for because we don’t even know they exist. Or we perform experiments that have been so carefully calibrated for a specific environment that they succeed only in that time and place, and sometimes only with that set of hands. Add to that possible holes in how we train scientists, external pressure to publish or perish, and ever-changing technology.

Still, to keep biomedical research pushing ahead, we need to think about how to bring greater experimental consistency and rigor to the scientific enterprise. A number of people have made thoughtful proposals. Some have called for a clearer and much more rewarding pathway for reporting negative results. Others have created replication consortia to attempt confirmation of key experiments in an orderly and efficient way. I’m impressed by the folks at Retraction Watch and PubPeer, who respectively call attention to retracted work and provide a forum for commenting on published work; both encourage rigorous, continual review of the published literature. The idea that publication doesn’t immunize research from further scrutiny appeals to me. Still others have called for teaching scientists how to use statistics with greater skill, appropriateness, and nuance. To paraphrase Inigo Montoya in The Princess Bride, “You keep using a p-value cutoff of 0.05. I do not think it means what you think it means.”
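On that last point, here’s a quick simulation (Python is my choice here, not anything from the posts I’m citing) of what a 0.05 cutoff does and doesn’t mean: when there is no real effect at all, p < 0.05 still turns up about one time in twenty, so a field running thousands of true-null experiments will steadily accumulate “significant” noise.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_experiments = 10_000  # many independent studies
n_per_group = 10        # small samples, as in much bench biology

# Every experiment compares two groups drawn from the SAME distribution,
# so the null hypothesis is true in all of them.
false_positives = 0
for _ in range(n_experiments):
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

# Prints roughly 5%: "significance" with no real effect, by construction
print(f"False positive rate: {false_positives / n_experiments:.1%}")
```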

To these ideas, I’d like to throw out another thought rooted in behavioral economics and our growing understanding of cognitive biases. Would it help basic research take a lesson from clinical trials and introduce blinding in our experiments?
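The full argument is behind the link, but as a minimal sketch of what bench-level blinding could look like (the function and file names below are hypothetical, my own): a colleague relabels the samples with uninformative codes and holds the key until the measurements are done.

```python
import csv
import random

def blind_samples(sample_ids, key_path="blinding_key.csv", seed=None):
    """Relabel samples with uninformative codes and save the code -> ID key.

    The key file should be held by someone who is NOT scoring the assay,
    and opened only after all measurements are recorded.
    """
    rng = random.Random(seed)
    codes = [f"S{i:03d}" for i in range(1, len(sample_ids) + 1)]
    rng.shuffle(codes)
    with open(key_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["code", "original_id"])
        writer.writerows(zip(codes, sample_ids))
    return codes  # the experimenter sees only these labels

# Example: genotype is hidden from whoever scores the assay
print(blind_samples(["WT-1", "WT-2", "KO-1", "KO-2"]))
```

Continue reading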

Google Knows What’s in Your Inbox, But It Shouldn’t Get Your Genome Without Consent

Originally posted in the Timmerman Report

I once had an idea for a science fiction story where everyone was paranoid about their genetic information getting out, because of a misguided belief that genes equal destiny and that the burden of privacy falls entirely on the individual. People would wear protective suits and carefully guard against leaving any iota of tissue out in public—not a single follicle or skin flake. All to prevent anyone else—potential employers, rivals, even potential lovers—from finding out information about their genes.

I planned the story as a satire, taking our current world, where Precision Medicine, cheap genome sequencing, and not-quite-as-cheap genome interpretation are real things, and extrapolating to an absurdity. I wanted to highlight the more realistic challenges we might face as we learn more about our genes and confront growing questions about privacy and access to health care services. Of course, I thought this was completely speculative; I’d just be building a straw-man story to make a point. I knew something this extreme would never really happen.

But maybe I was wrong.

Continue reading

Baseball, regression to the mean, and avoiding potential clinical trial biases

This post originally appeared on The Timmerman Report. You should check out the TR.

It’s baseball season. Which means it’s fantasy baseball season. Which means I have to keep reminding myself that, even though it’s already been a month and a half, that’s still a pretty short time in the long rhythm of the season, and every performance has to be viewed with skepticism. Ryan Zimmerman sporting a 0.293 On-Base Percentage (OBP)? He’s not likely to end up there. On the other hand, Jake Odorizzi with an Earned Run Average (ERA) under 2.10? He’s good, but not that good. I try to avoid making trades in the first few months (although with several players on my team on the Disabled List, I may have to break my own rule) because I know that in small samples, big fluctuations in statistical performance don’t really tell us much about actual player talent.

One of the big lessons I’ve learned from following baseball and the revolution in sports analytics is that one of the most powerful forces in player performance is regression to the mean. This is the tendency for outliers, over the course of repeated measurements, to move back toward the mean, at both the individual and the population level. There’s nothing magical about it, just simple statistics: an extreme performance usually reflects both talent and luck, and the luck doesn’t persist.
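A quick simulation makes the effect visible (Python, my choice; the talent and sample-size numbers are made up but roughly plausible for OBP): take the hottest starters from a noisy first half and watch their second-half numbers fall back toward their true talent.

```python
import numpy as np

rng = np.random.default_rng(7)
n_players = 300
true_obp = rng.normal(0.320, 0.020, n_players)  # each player's true talent

def observed_obp(pa, rng):
    """Simulate an observed OBP over `pa` plate appearances per player."""
    return rng.binomial(pa, true_obp) / pa  # on-base events are ~binomial

first_half = observed_obp(150, rng)   # a small early-season sample
second_half = observed_obp(150, rng)  # same talent, fresh luck

hot = first_half >= np.quantile(first_half, 0.9)  # the top 10% of hot starts
print(f"Hot starters, first half:  {first_half[hot].mean():.3f}")   # inflated by luck
print(f"Same players, second half: {second_half[hot].mean():.3f}")  # regresses
print(f"Their true talent:         {true_obp[hot].mean():.3f}")
```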

And as I lift my head up from ESPN and look around, I’ve started to wonder whether regression to the mean might be affecting another interest of mine, and not for the better. I wonder if a lack of understanding of regression to the mean might be a problem in our search for ways to improve health.
Continue reading