Sequencing in polio, baseball pitching and cancer: sometimes the order of events matters

This piece originally appeared in the Timmerman Report.

What do the polio virus, baseball pitch choice and cancer have in common?

The answer, of course, is sequencing. But not in the “figure out the DNA” way (although that’s involved). Instead, in the “what comes first” way. Confused? Read on!

A big perk of Seattle is proximity to great institutions of biomedical research like the University of Washington and the Fred Hutchinson Cancer Research Center. Ever since my graduate student days in genetics at UC-Berkeley, I’ve enjoyed going to seminars–especially seminars outside my field of study. Very little beats a good seminar for giving you a quick, condensed view of the state of a field of research. A bad seminar…well…we all could use more sleep, right?

In early October, Raul Andino of UCSF came to the Hutch to talk about his work on viral evolution. His team has been examining a clever real-world system to track the evolution of viruses. The near-eradication of polio (one of the great public health victories of the past century) has led to the curious problem that, as of the middle of this year, most new cases of polio have arisen from vaccination efforts themselves. The live, attenuated vaccine that’s used in the developing world can, in very rare cases, mutate in just the wrong ways in its host, creating a virulent strain that can infect others. In the US we use an inactivated polio vaccine, which requires several injections; in much of the developing world the oral polio vaccine is preferred for its ease of administration, lower cost, and immunization profile. The Andino lab realized that by studying these isolated outbreaks, which all originated from the same, genetically identical progenitor, they could test a hypothesis about the adaptive landscape of virulence evolution.


Drug development and the NFL draft

All opinions are my own and do not necessarily reflect those of Novo Nordisk.

The NFL draft is happening as I write this post. And of the many draft-related pieces I’ve read in the past few days, one from Vox.com particularly stood out. The article, by Joseph Stromberg, describes research by Cade Massey and Richard Thaler (here and here) on the skewed and often irrational choices teams make when trading draft picks. In essence, teams that trade up behave as though they can forecast the future performance of a given player far better than they actually can. Put another way, instead of diversifying risk, teams commit to a specific player they feel they must have, rather than simply waiting to see who is available when they are scheduled to pick and choosing the best player on their draft board.

Historical analysis shows that the difference in performance between players drafted near the same position is often negligible; on top of that, teams that aggressively trade down and gather more picks in the later rounds generally do better in terms of the value they receive for the money they spend in salaries. One might argue this is partly an artifact of the NFL rookie salary structure, but even without that, players taken in later rounds will always command smaller salaries. Getting similar value for less money is generally a good thing.
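To see why noisy forecasting makes the top pick less special than it feels, here is a toy Monte Carlo sketch. This is my own illustration, not Massey and Thaler’s actual model, and every number in it is invented: prospects share one talent distribution, and teams rank them only through a noisy scouting signal.

```python
import random

# Toy model: every prospect's true career value is drawn from the same
# distribution, but teams observe it only through a noisy scouting signal.
# If scouting were perfect (noise=0), the consensus #1 pick would always
# outperform the #10 pick; with noisy scouting it drifts toward a coin flip.
def prob_top_pick_outperforms(trials=50_000, pool=64, noise=1.5, later_pick=10):
    wins = 0
    for _ in range(trials):
        talent = [random.gauss(0, 1) for _ in range(pool)]
        signal = [t + random.gauss(0, noise) for t in talent]
        order = sorted(range(pool), key=lambda i: -signal[i])
        if talent[order[0]] > talent[order[later_pick - 1]]:
            wins += 1
    return wins / trials

for noise in (0.0, 0.5, 1.5):
    p = prob_top_pick_outperforms(noise=noise)
    print(f"scouting noise {noise}: P(pick 1 beats pick 10) = {p:.2f}")
```

With zero noise the top pick always wins; as scouting noise grows, the edge shrinks toward a coin flip. That is the regime the research describes, and it is why holding more, cheaper picks tends to beat betting big on one player.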

If you’ve read posts from this blog before, you know where I’m going. Drafting NFL rookies sounds a lot like developing drugs.

Big Data and Public Health: An interview with Dr. Willem van Panhuis about Project Tycho, digitizing disease records, and new ways of doing research in public health

All opinions of the interviewer are my own and do not necessarily reflect those of Novo Nordisk.

One of the huge and perhaps still underappreciated aspects of the internet age is the digitization of information. While the invention of the printing press made the copying of information easy, quick and accurate, print still relied on books and other printed materials that were moved from place to place to spread information. Today digitization of information, cheap (almost free) storage, and the pervasiveness of the internet have vastly reduced barriers to use, transmission and analysis of information.

In an earlier post I described the project by researchers at the University of Pittsburgh that digitized US disease reports from the past 120+ years, creating a computable and freely available database of disease incidence in the US (Project Tycho, http://www.tycho.pitt.edu/). This incredible resource is there for anyone to download and use, for research ranging from studies of vaccine efficacy to building epidemiological models to regional public health analyses and comparisons.
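As a hypothetical sketch of the kind of analysis the database enables: the file name and column names below are illustrative placeholders, not Project Tycho’s actual schema; see tycho.pitt.edu for the real download formats.

```python
import pandas as pd

# Hypothetical example of working with a Project Tycho download.
# "tycho_measles_weekly.csv" and its columns (state, year, week, cases)
# are assumptions for illustration, not the project's actual file layout.
df = pd.read_csv("tycho_measles_weekly.csv")

# Annual national incidence, before and after measles vaccine licensure in 1963.
annual = df.groupby("year")["cases"].sum()
pre = annual[annual.index < 1963].mean()
post = annual[annual.index >= 1970].mean()
print(f"Mean annual cases pre-vaccine: {pre:,.0f}; post-1970: {post:,.0f}")
```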

Their work fascinates me both for what it said about vaccines and for its connection to larger issues like Big Data in Public Health. I contacted the lead researcher on the project, Dr. Willem G. van Panhuis, and he very kindly consented to an interview. What follows is our conversation about his work and the implications of this approach for Public Health research.


Dr. Willem van Panhuis. Image credit: Brian Cohen, 2013

Kyle Serikawa: Making this effort to digitize the disease records over the past ~120 years sounds like a pretty colossal undertaking. What inspired you and your colleagues to undertake this work?

Dr. Willem van Panhuis: One of the main goals of our center is to make computational models of how diseases spread and are transmitted. We’re inspired by the idea that by making computational models we can help decision makers with their policy choices. For example, in pandemics, we believe computational models will help decision makers to test their assumptions, to see how making different decisions will have different impacts.

So this led us to the thinking behind the current work. We believe that having better and more complete data will lead to better models and better decisions. Therefore, we needed better data.

On top of this, each model needs to be disease specific, because each disease acts differently in how it spreads and what effects it has. The basic data collection process that goes into creating the model, however, is actually pretty similar across diseases: contacting those who hold the records of disease prevalence and its spread over time, collecting the data, and then making the data ready for analysis. There’s considerable effort in that last part, especially as health departments often do not have the capacity to spend a lot of time and effort responding to data requests from scientists.

The challenges are similar–we go through the same process every time we want to model a disease–so when we learned that a great source of much of the disease data in the public domain is the weekly surveillance reports published in MMWR and its precursor journals, we had the idea: if we digitized the data once, for all the diseases, that would provide a useful resource for everybody.

We can make models for ourselves, but we can also allow others to do the same without duplication of effort.
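As a gloss on the kind of disease-spread model Dr. van Panhuis describes, here is a minimal SIR (susceptible-infectious-recovered) compartmental model, the textbook starting point for this sort of work. This is a generic sketch with invented parameters, not code from his center.

```python
# A minimal SIR model integrated with simple Euler steps. The population
# is normalized to 1; beta is the transmission rate, gamma the recovery
# rate, and R0 = beta / gamma. All parameter values are invented.
def sir(beta, gamma, s0, i0, days, dt=0.1):
    s, i, r = s0, i0, 0.0
    trajectory = []
    for step in range(int(days / dt)):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        trajectory.append((step * dt, s, i, r))
    return trajectory

# R0 = 3, with one person in 1,000 initially infectious.
traj = sir(beta=0.6, gamma=0.2, s0=0.999, i0=0.001, days=120)
day, s, i, r = max(traj, key=lambda row: row[2])
print(f"Epidemic peaks around day {day:.0f} with {i:.1%} of the population infectious")
```

Even this toy version shows why better incidence data matters: the same handful of parameters, fit to different surveillance records, can produce very different policy-relevant predictions.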

Big Data provide yet more Big Proof of the power of vaccines

All opinions are my own and do not necessarily reflect those of Novo Nordisk.

Time for another screed about the anti-vaccination movement.

Well, not about them per se, but rather about another study that demonstrates how much of a positive difference vaccines have made in the US. The article, from researchers at the University of Pittsburgh and Johns Hopkins University, describes what I can only imagine to be a Herculean effort to digitize disease reporting records from 1888 to 2011 (article behind a paywall, unfortunately). Turns out there are publications that have been collecting weekly reports of disease incidence across US cities for over a century. I have not been able to access the methods, but I can’t shake the image of hordes of undergraduates hunched over yellowed clippings and blurry photocopies of 19th-century tables, laboriously entering numbers one by one into a really extensive Excel spreadsheet.

All told, 87,950,807 individual cases were entered into their database, including location, time, and disease. Not fun, however it was done.

The Innovator’s Dilemma in biopharma part 2. Where are the markets for disruptive tech?

All opinions are my own and do not necessarily reflect those of Novo Nordisk.

h/t to @Frank_S_David, @scientre, and the LinkedIn Group Big Ideas in Pharma Innovation and R&D Productivity for links and ideas

Biopharma may be ripe for disruptive innovation to come in and overturn its markets, but that doesn’t mean it will happen. There are constraints beyond those of pure business, including the simple fact that treating diseases is really difficult and we don’t know as much as we would like about how biology actually works. I see today’s biopharma market as a victim of its own success. The 80s and 90s saw the creation of truly life-changing, effective drugs like statins, and that set the bar high enough that I think we’ve passed the inflection point: approaches like high-throughput screening are becoming less and less likely to yield substantial improvements in effectiveness. I’ve used the analogy before of drug development occurring on an adaptive landscape (Figure 2), with every improvement moving up a hill toward the theoretical perfect medicine at the apex. The higher up the hill one gets, the harder it is to keep climbing; most efforts move sideways or down, simply because there’s more territory in those directions. This is a constraint that a disruptive innovation would have to overcome in some way.


Figure 2: The adaptive landscape for drug development. Yes, I drew this myself. I would plug the drawing program, except I think they’d probably prefer not to be associated with this image.
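The “more territory sideways or down” claim is easy to see in a toy simulation. Again, this is my own sketch, not a model of real medicinal chemistry: treat a candidate drug as a point in a many-dimensional trait space with the perfect medicine at the origin, and count how many random fixed-size tweaks actually move closer.

```python
import math
import random

# Toy illustration of the adaptive-landscape analogy: a candidate sits at
# some distance from the ideal medicine at the origin of a 10-dimensional
# trait space, and each modification is a random step of fixed length.
# The closer you already are to the ideal, the rarer uphill steps become.
def uphill_fraction(distance, dims=10, step=0.1, trials=50_000):
    closer = 0
    for _ in range(trials):
        direction = [random.gauss(0, 1) for _ in range(dims)]
        norm = math.sqrt(sum(c * c for c in direction))
        new = [step * c / norm for c in direction]
        new[0] += distance  # start at (distance, 0, ..., 0)
        if math.sqrt(sum(c * c for c in new)) < distance:
            closer += 1
    return closer / trials

for d in (2.0, 0.5, 0.15):
    print(f"starting {d} from the ideal: {uphill_fraction(d):.0%} of tweaks improve")
```

Far from the peak, roughly half of random tweaks help; near it, only a small minority do. That asymmetry is the whole point of the figure above.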

Hanging with the herd, for the immunity of it all

All opinions are my own and do not necessarily reflect those of Novo Nordisk.

When I hear about events such as the recent outbreak of measles among a small group in Texas, I am reminded of how complex, complicated and difficult public health efforts can be. In the US, for example, there are conflicting imperatives:  the rights of people to practice their beliefs versus the right of the community to be protected against preventable health threats.  This particular situation involved members of a church congregation, many of whom had not gotten vaccinated for measles due to worries about a link between autism and the Measles-Mumps-Rubella (MMR) vaccine.  While no scientific evidence has been found to support any such link, many had chosen not to be vaccinated “just in case.”

One day I hope to write about the link between the phenomenon of science denial and personal identity (one perspective can be seen here), but for now I just want to point out how this event and a recent publication by the Centers for Disease Control (CDC) on rotavirus vaccines nicely demonstrate the concept of herd immunity (article behind paywall, but writeup here). There are different usage patterns for the term, so I’ll say up front that I am using “herd immunity” to describe not just the proportion of individuals within a population who are immunized against a given pathogen, but also the indirect protection that immunization provides to non-immunized individuals. The term was first used in a 1923 publication by Topley and Wilson, in the context of describing the host side of their studies of bacterial infection in mice. The concept later gained mathematical underpinnings, including formulas describing how the ratio of vaccinated to nonvaccinated individuals determines the degree of herd immunity, depending on how infectious the disease agent is.
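The best-known of those later formalizations (well after Topley and Wilson) is the standard textbook threshold: with a perfectly effective vaccine, transmission can no longer sustain itself once the immune fraction of the population exceeds 1 - 1/R0, where R0 is the pathogen’s basic reproduction number. A quick sketch, using commonly cited rough R0 values purely for illustration:

```python
# Herd-immunity threshold: with a perfectly effective vaccine, an outbreak
# cannot sustain itself once the immune fraction exceeds 1 - 1/R0.
# The R0 values below are rough, commonly cited figures for illustration only.
for disease, r0 in [("measles", 15.0), ("rotavirus", 5.0), ("seasonal influenza", 1.5)]:
    threshold = 1 - 1 / r0
    print(f"{disease} (R0 ~ {r0:g}): ~{threshold:.0%} of the population must be immune")
```

This is why highly infectious diseases like measles demand very high vaccination coverage: even a modest pocket of unvaccinated people, like the Texas congregation above, can fall below the threshold and sustain an outbreak.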