Baseball analytics, arthritis, and the search for better health forecasts

All opinions are my own and do not necessarily reflect those of Novo Nordisk.

It’s Fourth of July weekend in Seattle as I write this. Which means it’s overcast. This was predictable, just as it’s predictable that for the two months after July 4th the Pacific Northwest will be beautiful, sunny and warm. Mostly.

Too bad forecasting so many other things–baseball, earthquakes, health outcomes–isn't nearly as easy. But that doesn't mean people have given up. There's a lot to be gained from better forecasting, even if the improvement is only a small one.

And so I was eager to see the results from a recent research competition in health forecasting. The challenge, which was organized as a crowdsourcing competition, was to find a classifier for whether and how rheumatoid arthritis (RA) patients will respond to a specific drug treatment. The winning methods are able to predict drug response to a degree significantly better than chance, which is a nice advance over previous research.

And imagine my surprise when I saw that the winning entries also have an algorithmic relationship to tools that have been used for forecasting baseball performance for years.

The best predictor was a first cousin of PECOTA. Continue reading
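A quick illustration of what I mean by that kinship: PECOTA's core trick is to forecast a player from the most similar players in the historical record, and a nearest-neighbor classifier does the same thing with patients. The sketch below is my own toy example on fabricated data–it is not the winning competition entry, just the flavor of a "find comparable cases" predictor.

```python
# Illustrative only: a "comparable cases" predictor in the spirit of PECOTA's
# comparable-player idea, applied to made-up patient data. This is NOT the
# competition's winning method; all features and labels here are fabricated.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each row is a patient with a handful of clinical/genetic features.
n_patients = 500
X = rng.normal(size=(n_patients, 10))
# Pretend response depends weakly on two features plus a lot of noise, so the
# prediction is better than chance but far from perfect.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=n_patients) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Predict a new patient's response from the k most similar patients already seen.
model = KNeighborsClassifier(n_neighbors=15).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f} (chance would be ~0.5)")
```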

The power law relationship in drug development

All opinions are my own and do not necessarily reflect those of Novo Nordisk.

A few weeks ago a friend and I had the great opportunity to go see Nate Silver speak at the University of Washington. He’s a funny, engaging speaker, and for someone like me who makes his living generating and analyzing data, Silver’s work in sports, politics and other fields has been inspirational.  Much of his talk covered elements of his book, The Signal and the Noise, which I read over a year ago. It was good to get a refresher. One of the elements that particularly struck me this time around, to the point that I took a picture of his slide, was the concept of the power law and its empirical relationship to so many of the phenomena we deal with in life.


Figure 1: Slide from Nate Silver’s talk demonstrating the power law relationship in business–how often the last 20% of accuracy (or quality or sales or…) comes from the last 80% of effort.
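To make that concrete, here's a toy calculation (my own illustration, not Silver's data): model results as a power-law function of effort, with the exponent chosen so that the first 20% of the effort already delivers 80% of the result.

```python
import numpy as np

# Toy illustration of the slide's point (not Nate Silver's actual numbers):
# model "result vs. effort" as a power law, result = effort ** alpha, with
# alpha chosen so that 20% of the effort yields 80% of the result.
alpha = np.log(0.8) / np.log(0.2)   # ~0.139

for effort in (0.05, 0.20, 0.50, 1.00):
    print(f"effort {effort:4.0%} -> result {effort ** alpha:4.0%}")

# effort   5% -> result  66%
# effort  20% -> result  80%
# effort  50% -> result  91%
# effort 100% -> result 100%
# i.e., the last 20% of the result costs the remaining 80% of the effort.
```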

Because I spend way too much time thinking about the business of drug development, I started thinking about how this concept applies to our industry, and specifically to the problem the industry faces in creating innovative medicines.

Continue reading

The innovator's dilemma in biopharma part 3. What would disruption look like?

All opinions are my own and do not necessarily reflect those of Novo Nordisk.

h/t to @Frank_S_David, @scientre, and the LinkedIn Group Big Ideas in Pharma Innovation and R&D Productivity for links and ideas

Part 1 is here.

Part 2 is here.

In the previous parts of this series I've covered both why the biopharma industry is ripe for disruption and what markets might support a nascent, potentially disruptive technology until it matures enough to supplant the current dominant industry players. In this final part I'd like to ask what disruption would look like and provide some examples of directions and companies that, to my mind, exemplify these sorts of disruptive technologies and approaches. With, I might add, the complete and utter knowledge that I'm wrong about who and what specifically will be disruptive! But in any case, before we can identify disruption, it's worth asking which elements of biopharma drug development serve as real bottlenecks to improving human health, since these are the elements most likely to provide an avenue for disruption. Continue reading

Big Data and Public Health: An interview with Dr. Willem van Panhuis about Project Tycho, digitizing disease records, and new ways of doing research in public health

All opinions of the interviewer are my own and do not necessarily reflect those of Novo Nordisk.

One of the huge and perhaps still underappreciated aspects of the internet age is the digitization of information. While the invention of the printing press made the copying of information easy, quick and accurate, print still relied on books and other printed materials that were moved from place to place to spread information. Today digitization of information, cheap (almost free) storage, and the pervasiveness of the internet have vastly reduced barriers to use, transmission and analysis of information.

In an earlier post I described the project by researchers at the University of Pittsburgh that digitized US disease reports over the past 120+ years, creating a computable and freely available database of disease incidence in the US (Project Tycho, http://www.tycho.pitt.edu/). This incredible resource is there for anyone to download and use for research ranging from studies of vaccine efficacy to the building of epidemiological models to regional public health analyses and comparisons.
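To give a sense of how low the barrier now is: the counts come as downloadable files, so a few lines of Python are enough to start asking questions. The filename and column names below are assumptions for illustration only; check them against whatever export you actually download from the site.

```python
# Hypothetical sketch of working with a Project Tycho download. The filename and
# column names are assumptions for illustration; the actual export from
# http://www.tycho.pitt.edu/ may differ.
import pandas as pd

counts = pd.read_csv("tycho_measles_cases.csv")   # hypothetical export
# Suppose each row is a (state, year, week, cases) record of weekly reports.
annual = (counts
          .groupby(["state", "year"])["cases"]
          .sum()
          .reset_index())

# e.g., compare reported measles cases before and after vaccine licensure (1963).
pre = annual.loc[annual["year"] < 1963, "cases"].mean()
post = annual.loc[annual["year"] >= 1963, "cases"].mean()
print(f"mean annual cases per state: pre-1963 {pre:,.0f} vs. post-1963 {post:,.0f}")
```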

Their work fascinates me both for what it says about vaccines and for its connection to larger issues like Big Data in Public Health. I contacted the lead researcher on the project, Dr. Willem G. van Panhuis, and he very kindly consented to an interview. What follows is our conversation about his work and the implications of this approach for Public Health research.


Dr. Willem van Panhuis. Image credit: Brian Cohen, 2013

Kyle Serikawa: Making this effort to digitize the disease records over the past ~120 years sounds like a pretty colossal undertaking. What inspired you and your colleagues to undertake this work?

Dr. Willem van Panhuis: One of the main goals of our center is to make computational models of how diseases spread and are transmitted. We’re inspired by the idea that by making computational models we can help decision makers with their policy choices. For example, in pandemics, we believe computational models will help decision makers to test their assumptions, to see how making different decisions will have different impacts.

So this led us to the thinking behind the current work. We believe that having better and more complete data will lead to better models and better decisions. Therefore, we needed better data.

On top of this, each model needs to be disease specific, because each disease acts differently in how it spreads and what effects it has. However, the basic data collection process that goes into creating the model is actually pretty similar across diseases: contacting those who hold the records of disease prevalence and its spread over time, collecting the data, and then making the data ready for analysis. There's considerable effort in that last part, especially as Health Departments often do not have the capacity to spend a lot of time and effort responding to data requests by scientists.

The challenges are similar–we go through the same process every time we want to model a disease–so when we learned that a great source of much of the disease data in the public domain is these weekly surveillance reports published in MMWR and its precursor journals, we had the idea: if we digitized the data once for all the diseases, that would provide a useful resource for everybody.

We can make models for ourselves, but we can also allow others to do the same without duplication of effort. Continue reading
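For readers who haven't seen one of the disease-transmission models Dr. van Panhuis describes, the simplest textbook starting point is the SIR compartment model. The sketch below is a generic illustration with invented parameters–it is not his group's code–but it shows how a model lets you compare "what if we intervene?" scenarios before committing to a decision.

```python
# A generic, textbook-style SIR (Susceptible-Infectious-Recovered) model.
# My own illustration with invented parameters, not the Pitt group's code.
def sir(population=1_000_000, infected0=10, beta=0.3, gamma=0.1, days=200):
    """Daily-step SIR simulation; beta = transmission rate, gamma = recovery rate."""
    s, i, r = population - infected0, infected0, 0
    peak_infectious = 0
    for _ in range(days):
        new_infections = beta * s * i / population  # susceptibles infected today
        recoveries = gamma * i                      # infectious people recovering today
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
        peak_infectious = max(peak_infectious, i)
    return peak_infectious, population - s          # peak load, total ever infected

peak, total = sir()
print(f"peak infectious: {peak:,.0f}; total ever infected: {total:,.0f}")
# Re-running with a smaller beta (say, after a vaccination campaign) lets a
# decision maker compare interventions side by side before choosing one.
```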

Big Data provide yet more Big Proof of the power of vaccines

All opinions are my own and do not necessarily reflect those of Novo Nordisk.

Time for another screed about the anti-vaccination movement.

Well, not about them per se, but rather about another study that demonstrates how much of a positive difference vaccines have made in the US. The article, from researchers at the University of Pittsburgh and Johns Hopkins University, describes what I can only imagine to be a Herculean effort to digitize disease reporting records from 1888 to 2011 (article behind a paywall, unfortunately). Turns out there are publications that have been collecting weekly reports of disease incidence across US cities for over a century. I have not been able to access the methods, but I can't shake the image of hordes of undergraduates hunched over yellowed clippings and blurry photocopies of 19th century tables, laboriously entering numbers one by one into a really extensive Excel spreadsheet.

All told, 87,950,807 individual cases were entered into their database, including location, time, and disease. Not fun, however it was done. Continue reading