To TED or not to TED…that is the question (for researchers)

All opinions are my own and do not necessarily reflect those of Novo Nordisk

Sharon Begley on Twitter (@sxbegle) pointed out an interesting article today about the effect of giving a TED talk for academic researchers.  The authors use a variety of library science techniques to characterize the TED-giving population.  Among other things, they found that presentations by academic researchers were generally more cited and liked on YouTube, but also that the number of citations of an academic researcher’s work did not increase after giving a TED talk.  This might suggest TED talks help researchers raise their public profile, but not necessarily their academic reputation.  There are caveats here, one of the main ones being that the scientists giving TED talks generally were measured as relatively “impactful” in terms of their publication history.  So, maybe these guys (and they’re mostly guys) were already cited as much as they would ever be.

On the other hand, getting a Nobel Prize does increase the number of times scientists are cited, so it is possible to increase your citations in some ways.  Just, you know, win a Nobel Prize.

This topic could go in a lot of directions–the role of scientists in popularizing science, the general TED phenomenon, public hunger for science versus science literacy–and maybe at a later point I can come back to those.  But in reading this study the first question I found myself asking was:  Does giving a TED talk get you more grants?  If we think about the academic research ecology, grants are the sun that feeds the fields of academic research.  No grants, no research.  Also, no grants, no position.  And so I turned to the senior author on the study, Cassidy Sugimoto of Indiana University Bloomington.  I asked whether she thought grant funding might increase with a TED talk.  Her response via email:

“My hypothesis, motivated from our data, is that you would not see a significant difference. The scholars invited to present at TED were already in the scientific elite, cited significantly more than average for their fields. I would hazard a guess that they are also more likely to receive grant funding, but not because of TED. They are chosen for TED precisely because they are already elite. It’s a perfect example of the Matthew Effect at work–to those who have, more shall be given.”

And that led me to think of another question:  could giving a TED talk decrease your odds of getting a grant?  While we would like to think of the grant-awarding, peer review system of the NIH, for example, as fair and impartial, I have yet to meet a scientist who believes this.  This may not be a terrible thing–to paraphrase Winston Churchill, “It has been said that ‘NIH peer review’ is the worst form of ‘grant awarding’ except all the others that have been tried.”  However, it does allow bias to creep in.  Studies have looked at the presence of biases in grant awarding under such theories as cumulative advantage.  Reviewers are human.  Humans have feelings and opinions.  If the human reviewing the grant of a TED speaker has strong negative opinions on the value of public engagement, or perceives TED presentations as grandstanding, would that lead unconsciously to a lower score?

On a practical level, the N is far too small and the timeframe too short to see the effect, if there is one.  Check back in 20 years and we’ll see.  But let me take this one step further and now ask, even if giving a TED talk doesn’t help in citations or grants, can it be a net positive for a researcher’s funding because it increases the probability of being able to use crowdsourcing approaches?

A number of researchers have sought research funding via Kickstarter and other sites.  However, I believe potential funders have a limited amount of disposable, impulse-driven income (I think of this as the iTunes/Latte pool), so this isn’t an endless new resource.  Right now the opportunity for crowdfunding is still pretty open but soon we could see many more scientists trying this route, especially as government sources of funding dry up.  In that case it will be a competitive market, and fame, such as might come from a nice TED talk, could be one of the factors leading to people choosing one project to fund over another.  Which could lead to what Cassidy Sugimoto refers to above as the Matthew effect.  The rich get richer.

And there are other possible negatives to crowdfunding, such as when researchers become driven to self-promotion to get crowdfunding support. To quote again from Cassidy’s email:

“…I think there are more disturbing issues in terms of branding of scholars. It is no longer sufficient to just do high quality research and publish it in reputable venues. Scholars must engage in personal branding–through social media and other means–to raise their value. Citations are not enough–now scholars also need to demonstrate their value in terms of tweets, media mentions, and the like…However, the negative implication is incentivizing branding over scholarship.”

But that’s a topic to expand upon some other day.  In any case, the possible negatives for giving a TED talk seem far outweighed by the real and possible benefits, and I’m in favor of anything that makes scientists talk to the public.

Not losing versus playing to win in Baseball and drug development

All opinions are my own and do not necessarily reflect those of Novo Nordisk

Another in a series about parallels between baseball and drug development

A recent post by Phil Birnbaum, who runs a baseball research site, did a nice job of highlighting how he feels statistical analysis may best serve baseball organizations:  by ensuring that they don’t make losing moves.  While everyone is trying to win, in an industry where so much is uncertain it may often be most effective to “First, concentrate on eliminating bad decisions, not on making good decisions better.  And, second, figure out what everyone else knows, but we don’t.”

This is a terrific observation, and one he backs up through the body of his post with examples from baseball and gambling strategy.  I think it applies quite well to drug development too.  I’ve made the earlier conjecture that drug development can be thought of as existing on the adaptive landscape, with improvements to drugs or drug classes getting harder and harder as you climb up that mountain of efficacy.  But when you’re on a slope, it’s really easy to go sideways or backwards.  So, the analogy here is that drug development, like baseball, needs to throw a lot of resources (not just statistical and analytical ones, either) into preventing a bad decision.

This thinking is also influenced by the ecology of pharma and biotech.  Let me be very clear about my initial assumption:  drug development is filled with really smart people, almost all of whom are dedicated, sharp, innovative, and really interested in winning.  So is baseball (well, except maybe for the Kansas City Royals).  But it’s hard to put together a good drug development pipeline.  Resources help, but they aren’t enough.  And since the talent is there, the explanation for lackluster drug development progress may partly be found in companies still making poor decisions on assets.

Let me zero in on the second part of the quote in the first paragraph:  “And, second, figure out what everyone else knows, but we don’t.”  Here’s something else that companies could possibly do differently:  share data.  A really fascinating blurb in ScienceInsider just highlighted an effort by people at Johns Hopkins to try and get clinical trial information published, as long as it’s been publicly released in other formats such as through litigation or Freedom of Information Act requests.  While all the companies would prefer this not happen, if it happens uniformly, that can only be good for drug development as researchers learn more about why given trials were halted or failed.  If R&D costs as much as it does, part of the reason lies in duplicated effort.

To conclude, let me throw out another thought on decision making:  send in the crowds.  Crowdsourcing as a method for making decisions has been tried in a number of contexts and often has been found to lead to better overall decision making than more traditional methods.  If we want to make decisions on, for example, which drugs should move forward, setting up a system to poll everyone in the organization in a controlled, anonymous way might be enlightening.  I know this would not be a popular development for people in the C-suite, since, after all, that is their domain.  And I believe the assertion Malcolm Gladwell makes in Outliers that initial, small differences in environment can eventually lead to great differences in ability down the road as individuals get training and experiences not widely available.  Therefore those who are in the C-suite are different in their knowledge and outlook and know more about strategic decisions.  But they still don’t know everything, they still are human, they still have biases.  And a technician working in a lab in Boston may have noticed something in his cell cultures that no one else is aware of.  If we want to make good decisions, shouldn’t we make sure that everyone possible has a voice?
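As a purely illustrative sketch of how such an internal poll might be tallied (the asset names and scores here are hypothetical, not from any real portfolio), one simple approach is to collect anonymous scores from everyone and rank candidates by their median score, which resists a few extreme votes:

```python
from statistics import median

# Hypothetical anonymous scores (1-10) for each candidate asset,
# collected from across the whole organization, not just the C-suite.
scores = {
    "asset_A": [7, 8, 6, 9, 7, 8],
    "asset_B": [4, 9, 3, 5, 4, 6],
    "asset_C": [8, 8, 7, 9, 8, 7],
}

# Rank assets by median score, highest first.
ranked = sorted(scores, key=lambda a: median(scores[a]), reverse=True)
for asset in ranked:
    print(asset, median(scores[asset]))
```

The median is just one possible aggregation rule; the point is that the mechanics of a controlled, anonymous poll are trivial compared to the cultural change of actually listening to the result.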

Reining in Hyperbole About the Role of Drug Development

All opinions expressed are my own and do not necessarily reflect those of Novo Nordisk

“The traditional pharmaceutical research and development operating model is no longer sustainable,” says Dennis Liotta, PhD, founder of the Emory Institute for Drug Development (EIDD) and co-inventor of multiple approved drugs. “The marked decrease in the development of new therapeutics is having a uniformly negative effect on global health and threatens life expectancy, quality of life, economic development and national security. Emory’s new public-private enterprise is a bold new approach that can help solve this problem.”  From a press release from Emory University

Yesterday Emory University announced the creation of a non-profit entity, Drug Innovation Ventures at Emory, LLC (DRIVE), that aims to take discoveries made by Emory scientists and bring them to the point of proof-of-concept clinical trials.  I think this is a great idea.  The press release is, I would say, a bit overstated (while at the same time understating the real problems involved in translating discovery science into actual drugs), but drug development is in need of different ideas and different approaches, and I’m all in favor of various organizations trying different things to develop drugs.  I’m also in favor of any system that gives academic researchers exposure to the steps leading to a lead candidate drug.

What concerns me, though, is the quote above.  Again, this is a press release and some hyperbole is to be expected, but at the same time is it fair to say that the reduced number of new drugs being developed is having a “uniformly negative effect on global health”?  Digging into DRIVE’s webpage, it’s clear the organization plans to focus on viral diseases, which makes the connection to global health direct.  However, I can’t help thinking that there’s a lot more involved in helping the health of people in developing countries than new vaccines and antivirals, and that while new drugs could help, the lack of new drugs doesn’t condemn those populations to some kind of downward spiral.  I doubt Dr. Liotta meant this explicitly, but his comment supports a view of a specific kind of technological innovation–drug development–as providing a cure, when I’d rather see expectations managed with a little more circumspection.  I think the industry suffers when cures are presented as being accomplished with a simple pill or a shot.  Many times, maybe most times, health problems are best viewed as the result of multiple, intertwined factors, of which biology is just one.

The Global Alliance for responsible sharing of genomic and clinical data: an Asilomar conference for today?

All opinions are my own and do not necessarily reflect those of Novo Nordisk

I’ve been mulling over the recent announcement by several prominent genomics organizations of a set of standards for the generation and sharing of genomic data.  This is a fantastic development in the field, and one with great implications for the future of genomics data research.  It’s also particularly apropos given the recent news that the US government has been collecting internet-based metadata broadly, in a previously secret program to detect signs of terrorist activity.

People are waking up to the idea that we all produce data of all kinds, and that advances in technology now make gathering that data as easy as getting wet in the Seattle winter.  Whether that data is created digitally and automatically throughout our day-to-day life, or via an action as simple as sending a cheek swab to a company, there are some deep questions that are just beginning to be asked, by the government and other organizations, as to how the use of these data best serves the public good.

As a UC Berkeley graduate student in the late 80s and early 90s, I heard the lore about the Asilomar Conference on Recombinant DNA.  I think it was part of our standard indoctrination package, along with instructions on how to navigate Telegraph Avenue and reasons why Berkeley was superior to Stanford and UCSF.  That conference, in the early days of the age of gene splicing technologies, was an amazing example of self-regulation by a group of scientists who had the forbearance to actually stop and think about the possible implications of what they were doing.  The Asilomar conference led to a set of standards for the practice of recombinant DNA technology, and an overall greater mindfulness of what we were actually doing when we moved genes around among organisms.

The announcement of the Global Alliance has a similar flavor to me.  Again, technological advances have opened a huge window of potential research opportunities that were not possible before.  The implications, however, will be unclear for quite some time, and the effect that cheap genome (and other kinds of) sequencing will have on research, public health, ethics, medicine, and many other fields is unknown, other than that it will be huge.  We were already seeing the beginnings of chaos in terms of data repositories, standards, and practices, but this Alliance suggests that, at least at the level of key players like the Broad Institute and the Wellcome Trust, there is a commitment to taking a step back and trying to find a good answer to the question of how to do this well, to best serve the public good, and to try to minimize harm.

I wish them luck.

Supply chains in Drug Development?

This is a response I made to a recent post at Xconomy about the idea of drug development adopting a supply chain approach.  http://www.xconomy.com/san-diego/2013/05/31/test-the-supply-chain-model-this-market-driven-relationship-is-a-fail/.  All opinions are my own and do not necessarily reflect those of Novo Nordisk.

I really appreciate the ongoing conversation about how to fix the problems that appear to be facing drug development–specifically a lack of truly transformative, life- and health-changing new drugs.  I think the idea of a supply chain process in drug development is worth looking at.  However, I am not convinced it will actually fix the problem.

In this piece, Standish Fleming suggests a market-driven process isn’t meeting the needs of drug development because the potential suppliers in the market (the startup biotechs) don’t have a clear view of what the eventual buyers (the pharmas) really want as part of their strategic goals.  Alignment is often a good thing.  I believe many startups may not have a clear idea of what actually constitutes a good drug, since many of them arise out of academia. This is not a criticism, just a statement of how the academic and industrial systems have different cultures, goals and knowledge bases. I also appreciate the point that, with capital harder to get via venture funds, pulling pharma in to replace that investment at an earlier stage requires some sacrifice of control on the part of the biotech, with a corresponding gain in risk sharing and predictability.

But I don’t think alignment is enough.  I worry instead that the key problem is one that’s been suggested by David Shaywitz and others–we just don’t understand enough about diseases to make the next generation of drugs.  It seems that the buyers themselves don’t have a clear idea of what is most likely to make a good drug.  As evidence, I’d suggest that if pharma really knew what they wanted, failures in Phase I-III would be far lower since drugs would never be tested in humans until pharma were sure of an 80-90% success rate.  Baseball aside, a 30% or lower success rate generally doesn’t make for a good business strategy, but that’s what we’ve got.  And I agree with the point that there are a lot of smart people working on the problem across pharma, so it’s not just a question of brainpower.
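To make that arithmetic concrete: even per-phase success rates that look respectable compound multiplicatively into long odds overall. A minimal sketch, using per-phase numbers that are assumptions for illustration rather than industry figures:

```python
# Per-phase probabilities of success; illustrative assumptions only.
phase_success = {"Phase I": 0.6, "Phase II": 0.35, "Phase III": 0.6}

# Treating the phases as independent hurdles, the overall probability
# of a drug clearing all of them is the product of the per-phase odds.
overall = 1.0
for p in phase_success.values():
    overall *= p

print(f"Overall probability of clearing all three phases: {overall:.1%}")
```

With these assumed inputs the product comes out around 12.6%, in the same neighborhood as the “30% or lower” figure above, which is the point: predicting winners in advance would mean beating this compounding, and nobody reliably does.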

If pharma can’t easily predict what kinds of drugs will succeed, then this model may just swap out VC funding for pharma funding with the same net effect.  Also, the development of a drug is an incredibly long process.  Asking a pharma to predict the market ten or more years ahead of time adds yet another layer of uncertainty.

Since I live in Seattle, I’d like to throw out the analogy of the Dreamliner.  A key reason the Dreamliner exists is because of 9/11.  Before that, Boeing was designing the Sonic Cruiser, a near-supersonic passenger jet.  After 9/11, the pressure for nations to become more fuel-efficient and reduce their involvement in the Middle East led Boeing to change course and design a plane that would instead be a model of efficient design.  So there’s an example in which changes in the market outside of a company’s control can render all its best plans moot.

Another point about the Dreamliner is that the project relied on a supply chain that ended up delaying launch for over a year.  I know people at Boeing; they have good project managers and good communicators, and they told their suppliers exactly what they needed, and problems still arose.  Even after launch, unexpected issues with batteries grounded the jets again.  How much messier might a supply chain relationship be between biotech and pharma?  Can deadlines and milestones be guaranteed when we won’t know until Phase I if we’re dealing with the next best thing in air travel or a flaming battery?

All this is not to say it couldn’t work; just that I’m skeptical.  I agree the current method seems inefficient and difficult to make work in the current funding environment.  I just wonder if maybe there is a third way.  Now, if only Bill Clinton could get into drug development…