This piece originally appeared in the Timmerman Report.
STAT recently published an in-depth report on the many research centers that don't bother to publicly disclose the results of their clinical trials, even though they are required to do so. This follows a New England Journal of Medicine article back in March with a similar analysis of how often clinical trial results go unreported and unpublished on ClinicalTrials.gov.
Most observers of biomedical research would agree that getting the data out about what happened in a trial is important, whether the trial succeeded or failed. After all, biomedical translational research is most meaningful when done in human subjects, and negative results can be quite informative and useful. Animal models are nice, but translating results from animals to humans is a spotty proposition at best. We need to know what's working, and what's not, to know how best to allocate our research resources and how to treat patients.
The lack of reporting is an embarrassment for research. It's also understandable: so far the FDA hasn't used its authority to punish anyone for delayed reporting, and nobody appears to have lost research funding for failing to post trial results in a timely manner. Universities told STAT their researchers were "too busy," given other demands on their time, to report their results. What really seems to be going on is that reporting is prioritized below most other activities in clinical research.
One interesting, eye-opening finding in both the STAT story and the NEJM article was that industry fared better than academia at reporting its studies. Having seen the industry process first-hand, I'd speculate that (at least for positive trials) there's a much stronger incentive to get data out in public. Successful trial results can create buzz among clinicians and patients, revving up trial enrollment, which can help get a new drug on the market faster and convince people to use it when it's available. It may be that in academia the effort of getting trial results into the required format for ClinicalTrials.gov is perceived as too much work relative to the rewards. Academics are naturally going to spend more energy on directly rewarded activities, like writing grant proposals and peer-reviewed scientific publications that help them win more grants, promotions, and other accolades. Well, okay. If this is the case, then figuring out new incentives may be key.
So what would work? Anyone who participates in a clinical trial gives time, may be exposed to risk, and is often asked to provide samples that are biobanked to support future exploratory and translational research. It's like when people donate to food banks: I'm pretty sure they mean that food to be eaten, not to sit on a shelf. Clinical trial participants deserve to have their volunteerism rewarded.
This got me thinking about how to empower patients to get more of what they want. Patient-centered research is a buzzword these days, and for good reason: patients have at times been an afterthought in the biomedical research enterprise. I thought of Yelp, Uber, Angie's List, and other peer-to-peer systems that let users get information, provide feedback, and rate specific providers. And I wondered: could this be a way to apply pressure on clinical trial researchers to improve their reporting?
Imagine a system in which analyses from credible sources, such as the STAT and NEJM articles, provide a baseline set of metrics for each institution. On top of that would sit participant reviews: How well was I treated? Was the trial explained clearly? What did the consent cover? If the trial resulted in publications, those could be linked as well. The interface would need to be clear and simple, a well-designed view that gives users a quick snapshot to help them decide whether or not to participate in a trial run by a given institution.
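To make that concrete, here is a minimal sketch of what the data behind such an institution profile might look like. Everything in it is an assumption for illustration: the field names, the 1-to-5 review scales, and the 70/30 weighting between reporting record and participant ratings are inventions of mine, not the schema of any existing service.

```typescript
// Hypothetical data model for an institution's trial-reporting profile.
// All names, fields, and weights are illustrative assumptions.

interface ParticipantReview {
  treatedWell: number;    // 1-5: how well was the participant treated?
  consentClarity: number; // 1-5: were the trial and consent explained clearly?
  comment?: string;
}

interface InstitutionProfile {
  name: string;
  trialsDue: number;            // trials required to report results
  trialsReportedOnTime: number; // results posted within the required window
  participantReviews: ParticipantReview[];
  publicationLinks: string[];   // e.g., URLs of resulting papers
}

// One possible "quick snapshot" number: blend the on-time reporting rate
// with the average participant rating, normalized to a 0-1 scale.
function snapshotScore(p: InstitutionProfile): number {
  const reportingRate =
    p.trialsDue > 0 ? p.trialsReportedOnTime / p.trialsDue : 0;
  const ratings = p.participantReviews.flatMap(r => [
    r.treatedWell,
    r.consentClarity,
  ]);
  const avgRating =
    ratings.length > 0
      ? ratings.reduce((a, b) => a + b, 0) / ratings.length / 5
      : 0;
  // Weight the reporting record more heavily than the reviews, since timely
  // reporting is the behavior this system is meant to encourage.
  return 0.7 * reportingRate + 0.3 * avgRating; // 0 (worst) to 1 (best)
}
```

The exact weights matter far less than the basic property: the score visibly drops when an institution fails to report, which is precisely the behavior patients have the least visibility into today.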
And that could create an incentive to improve clinical trial conduct in many respects, including timely communication of results. If enough potential participants began consulting these profiles, organizations with poor reporting records, or other performance deficits, might find themselves struggling to enroll patients in their studies. That, in turn, would mean reduced funding, fewer CV-building publications, and stalled drug development programs.
A number of companies could potentially build something like this. PatientsLikeMe already runs a portal for clinical trial recruitment; perhaps they could add a few graphics or other indicators of how good a site is at reporting results: a smiley emoji for a site that promptly reports results, a frowny one for a site that doesn't. If I wanted to get even more speculative, maybe an e-commerce giant like Amazon, or one of the developers building on Apple's HealthKit, would give it a shot. If Amazon really is bent on being the world's marketplace, continually expanding into new and different markets, clinical trial recruitment would be an interesting, splashy way to forge into a new domain.
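If a portal did adopt an emoji indicator like that, the logic behind it could be as simple as a threshold rule. The cutoffs below are made up for illustration; they aren't drawn from PatientsLikeMe or anyone else.

```typescript
// Hypothetical badge rule: map an institution's on-time reporting rate
// (0 to 1) to a simple visual indicator. The thresholds are invented.
function reportingBadge(reportingRate: number): string {
  if (reportingRate >= 0.8) return "🙂"; // promptly reports most results
  if (reportingRate >= 0.5) return "😐"; // mixed record
  return "🙁";                           // rarely reports on time
}

console.log(reportingBadge(0.9)); // "🙂"
```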
Of course, this might not make a difference. If patients don't act on these ratings, there won't be any pressure. For people with life-threatening conditions, the incentive to participate in clinical research, and potentially gain access to an important new therapy, will no doubt outweigh concerns about downstream publication of the results. Patients also don't always have many choices when it comes to clinical trial enrollment, especially if there's only one major site enrolling near their home. Still, with the rise of patient-centered research approaches, and with patient groups becoming increasingly sophisticated, vocal, and unified in their advocacy, perhaps it isn't unreasonable to expect patients to use ratings as part of deciding which trials to enroll in. This recent report on why cancer clinical trials fail to accrue enough patients suggests there's real leverage on the patient side: a lot of trials never reach their enrollment targets. Further, a recent update from STAT suggests that public pressure arising from its and other reports has led to an increase in clinical trial data deposition, so institutions can be responsive to outside forces.
Altruism is a key motivator that gets people to participate in clinical research. I think that, for prospective patients, knowing ahead of time how likely it is that their efforts will actually yield a publicly available extension to human knowledge (even if it's a negative result) could be very important in choosing which study to join. And that patient-driven pressure could spur researchers to be better about crossing the finish line with their data.