Is Opower the model for getting us to wellness and health?

This is a post about nudges. And optimism.

There’s a story I read a long time ago by David Brin. It’s called “The Giving Plague,” and the protagonist is a virologist and epidemiologist who describes his life working on viruses and vectors. The Plague of the title is a virus that has evolved the ability to make infected people enjoy donating blood. Recipients keep giving blood, leading to an exponentially expanding network of people who find themselves giving blood regularly and even circumventing age and other restrictions to make sure they can give their pint every eight weeks.

The central twist of the story is that the protagonist’s mentor, who discovers this virus, realizes people who donate blood also perform other altruistic acts–that the act of giving blood changes their own self image. Makes them behave as better people. And so he suppresses the discovery, for the greater good of society. The protagonist, a rampant careerist, begins plotting murder to allow him to take credit. But before he can act, more diseases strike, the Giving Plague moves through the population, and the protagonist forgets about it in his efforts to cure newer diseases.

And if anyone thinks something like this is too outlandish, I encourage you to read this piece about Toxoplasma gondii and how it makes infected mice charge at cats, the better to be eaten so that T. gondii can spread. Yeah.

But what does this story have to do with the future of wellness and health?



The potential for “Found Research” in fecal transplant treatments

All opinions are my own and do not necessarily reflect those of Novo Nordisk.

A few days ago the New York Times ran a nice article discussing a recent test of whether fecal transplants can be done using a pill-format delivery system. The research, reported (and freely available, no less!) in the Journal of the American Medical Association, was performed by physicians at Massachusetts General Hospital, who formulated human feces into encapsulated pills to see whether that would be effective as a kind of fecal transplant. Fecal transplants appear to overcome infections by Clostridium difficile in patients. However, the conventional method for providing a fecal transplant is to deliver a liquid slurry either via a nasogastric tube or via an enema-like procedure, neither of which is easily scalable. Also, yuck.

The current work, in which 14 of 20 patients responded to initial treatment with the poop pills and an additional 4 responded the second time around, provided proof of concept that a frozen, pill-format delivery system may be a workable alternative to the current standard.
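A quick back-of-the-envelope check on those numbers (this assumes, consistent with the article, that the 4 second-round responders came from the 6 initial non-responders):

```python
# Reported results: 14 of 20 patients responded to the initial pill course,
# and 4 of the remaining 6 responded to a second course.
patients = 20
initial_responders = 14
second_round_responders = 4

initial_rate = initial_responders / patients
overall_rate = (initial_responders + second_round_responders) / patients
print(f"initial: {initial_rate:.0%}, overall: {overall_rate:.0%}")
# → initial: 70%, overall: 90%
```

A 90% cumulative resolution rate is in the same ballpark as conventional fecal transplant delivery, which is what makes the pill format interesting as a scalable alternative.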

But as I was reading this article, I was struck by another thought. Are we missing a great opportunity for research into the interplay between the microbiome and human physiology?


Add Big Data to the things the Supreme Court Justices know when they see them


Yesterday’s Supreme Court decision that searching the cell phones of arrested individuals requires a search warrant has been described as “Bold. Landmark. Sweeping.” (I love Nina Totenberg’s reporting, by the way). Commentators have been discussing the precedent set for Fourth Amendment rights, digital privacy, comparisons to historical judgments, civil liberties, yadda yadda yadda. Sure, that’s important and all, but let me add a different filter through which to view the decision.

The Supreme Court validated one of the fundamental premises of Big Data: that for many kinds of data at some point quantity becomes quality.


Big data and baseball efficiency: the traveling salesman had nothing on a baseball scout


The MLB draft is coming up, and with any luck I’ll get this posted by Thursday to take advantage of web traffic. I can hope! Anyway, on Tuesday I read a fascinating portrayal in Fangraphs of the draft process, laying out the nuts and bolts of how organizations scout for the draft. The piece, written by Tony Blengino (whose essays are rapidly becoming one of my favorite parts of this overall terrific baseball site), describes all the behind-the-scenes work that goes into preparing a major league organization for the Rule 4 draft. Blengino details the dedication scouts show in following up on all kinds of prospects at the college and high school levels, what they do, how much they need to travel, and especially how much ground they often need to cover to try to lay eyes on every kid in their area.

One neat insight for me was Blengino’s one-word description of most scouts as entrepreneurs. You could think of them almost as founders of a startup, with the kids they scout as the product the scouts are trying to sell to upper layers of management in the organization. As such, everything they can do to get a better handle on a kid’s potential can feed into the pitch to the scouting director.

I respect and envy scouts’ drive to keep looking for the next big thing, the next Jason Heyward or Mike Trout. As Blengino puts it, scouts play “one of the most vital, underrated, and underpaid roles in the game.” One might argue that in MLB, unlike the NFL or NBA, draft picks are typically years away from making a contribution, so how important can they really be? But numerous studies have shown that the draft presents an incredible opportunity for teams to build and sustain success. In fact, given that so much of an organization’s success hinges on figuring out which raw kids will be able to translate tools and potential into talent, one could make (and others have made) the argument that scouting is a huge potential market inefficiency for teams to exploit, though I’ll add a caveat later. In any case, every team wants to optimize the quality of talent entering its minor league system because, as we say in genomic data analysis, “garbage in, garbage out.”

As I was reading this piece, I started thinking about ways to try and create more efficiencies. And I started thinking about Big Data.
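The traveling-salesman nod in the title suggests one obvious place to start: route planning across a scout's area. Here is a minimal sketch of a greedy nearest-neighbor itinerary; the town names and coordinates are invented for illustration, and a real system would fold in game schedules, weather, and cross-checker visits (nearest-neighbor is only a heuristic, not an optimal tour):

```python
import math

# Hypothetical prospect locations in a scout's area (x, y in arbitrary units).
stops = {"Tacoma": (0, 0), "Olympia": (1, 0), "Yakima": (5, 0), "Spokane": (9, 2)}

def nearest_neighbor_route(stops, start):
    """Greedy itinerary: always drive to the closest unvisited stop."""
    route = [start]
    remaining = set(stops) - {start}
    while remaining:
        here = stops[route[-1]]
        nxt = min(remaining, key=lambda name: math.dist(here, stops[name]))
        route.append(nxt)
        remaining.remove(nxt)
    return route

print(nearest_neighbor_route(stops, "Tacoma"))
# → ['Tacoma', 'Olympia', 'Yakima', 'Spokane']
```

Even this naive ordering beats visiting towns in an arbitrary sequence, which is the sort of incremental efficiency gain a data-minded front office might chase.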

Big Data and Public Health: An interview with Dr. Willem van Panhuis about Project Tycho, digitizing disease records, and new ways of doing research in public health

All opinions of the interviewer are my own and do not necessarily reflect those of Novo Nordisk.

One of the huge and perhaps still underappreciated aspects of the internet age is the digitization of information. While the invention of the printing press made the copying of information easy, quick and accurate, print still relied on books and other printed materials that were moved from place to place to spread information. Today digitization of information, cheap (almost free) storage, and the pervasiveness of the internet have vastly reduced barriers to use, transmission and analysis of information.

In an earlier post I described the project by researchers at the University of Pittsburgh that digitized US disease reports spanning the past 120+ years, creating a computable and freely available database of disease incidence in the US (Project Tycho, http://www.tycho.pitt.edu/). This incredible resource is there for anyone to download and use, for research ranging from studies of vaccine efficacy to the building of epidemiological models to regional public health analyses and comparisons.
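To give a feel for what "computable" buys you, here is a toy aggregation over rows loosely modeled on Tycho's weekly, state-level counts. The column layout and numbers are invented for illustration; the real download has its own schema:

```python
from collections import defaultdict

# Invented rows in a Tycho-like shape: (year, week, state, disease, cases)
rows = [
    (1950, 1, "PA", "MEASLES", 120),
    (1950, 2, "PA", "MEASLES", 95),
    (1950, 1, "NY", "MEASLES", 210),
    (1951, 1, "PA", "MEASLES", 80),
]

def yearly_totals(rows):
    """Collapse weekly counts into (year, state, disease) totals."""
    totals = defaultdict(int)
    for year, week, state, disease, cases in rows:
        totals[(year, state, disease)] += cases
    return dict(totals)

print(yearly_totals(rows)[(1950, "PA", "MEASLES")])
# → 215
```

A few lines like this, run over a century of digitized reports, is exactly the kind of analysis that was impractical when the same numbers lived only in printed weekly bulletins.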

Their work fascinates me both for what it says about vaccines and for its connection to larger issues like Big Data in Public Health. I contacted the lead researcher on the project, Dr. Willem G. van Panhuis, and he very kindly consented to an interview. What follows is our conversation about his work and the implications of this approach for Public Health research.


Dr. Willem van Panhuis. Image credit: Brian Cohen, 2013

Kyle Serikawa: Making this effort to digitize the disease records over the past ~120 years sounds like a pretty colossal undertaking. What inspired you and your colleagues to undertake this work?

Dr. Willem van Panhuis: One of the main goals of our center is to make computational models of how diseases spread and are transmitted. We’re inspired by the idea that by making computational models we can help decision makers with their policy choices. For example, in pandemics, we believe computational models will help decision makers to test their assumptions, to see how making different decisions will have different impacts.

So this led us to the thinking behind the current work. We believe that having better and more complete data will lead to better models and better decisions. Therefore, we needed better data.
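For readers unfamiliar with these models, a minimal discrete-time SIR (susceptible-infected-recovered) model gives the flavor of what a computational model of disease spread looks like. The parameters below are made up for illustration and are not drawn from Dr. van Panhuis's work:

```python
def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """One day of a discrete SIR model; s, i, r are population fractions."""
    new_infections = beta * s * i   # transmission from contacts between S and I
    new_recoveries = gamma * i      # infected individuals recover at a constant rate
    return s - new_infections, i + new_infections - new_recoveries, r + new_recoveries

s, i, r = 0.99, 0.01, 0.0  # start with 1% of the population infected
peak_i = i
for day in range(365):
    s, i, r = sir_step(s, i, r)
    peak_i = max(peak_i, i)

# Policy questions become experiments: e.g., halve beta (a distancing or
# vaccination intervention) and compare the resulting epidemic peak.
```

Better surveillance data feeds models like this in two ways: it pins down the parameters (here, beta and gamma) and it provides the historical epidemics against which the model's output can be checked.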

On top of this, each model needs to be disease-specific, because each disease acts differently in how it spreads and what effects it has. The basic data collection process that goes into creating the model, however, is actually pretty similar across diseases: contacting those who hold the records of disease prevalence and its spread over time, collecting the data, and then making the data ready for analysis. There’s considerable effort in that last part, especially as health departments often do not have the capacity to spend a lot of time and effort responding to data requests from scientists.

The challenges are similar; we go through the same process every time we want to model a disease. So when we learned that a great source of public-domain disease data is the weekly surveillance reports published in MMWR and its precursor journals, we had an idea: if we digitized the data once, for all the diseases, that would provide a useful resource for everybody.

We can make models for ourselves, but we can also allow others to do the same without duplication of effort.

Are Biopharma reagent companies sitting on a pile of gold (or at least poptarts)?


The recent news about the United States Government monitoring a great deal of both general and specific electronic data has had one beneficial outcome (or at least, one I feel is beneficial):  it has made more people aware of what can actually be done with data, and also that we’re leaving massive amounts of personal data out there that can be traced to the behavior of individuals or organizations.  A few months ago, the Seattle Times published an article describing the explosion of big data and how that can be leveraged in so many ways.

This led me to speculate, in a very out-there kind of way, about what kinds of data Biopharma companies produce and whether there’s any hidden value in that.  Now, certainly companies are very careful about communicating information to the outside world.  Contracts with collaborators routinely contain embargo clauses, and presentations and posters are carefully vetted by legal and communications departments.  So companies would appear to be covered there.  But what kinds of data are out there that might be available, maybe not freely, but in potentia, to an interested audience?

Let me digress for a moment about mergers. Biopharma over the last few years has seen a flurry of merger and acquisition activity. The big pharma deals, like Pfizer/Wyeth and Merck/Schering-Plough, have gotten big press, but there has also been a lot of consolidation among reagent suppliers. To take one example, I’ve shamelessly taken Life Technologies’ merger history from Wikipedia and condensed it into this table (after the jump):