Sequencing in polio, baseball pitching and cancer: sometimes the order of events matters

This piece originally appeared in the Timmerman Report.

What do the polio virus, baseball pitch choice and cancer have in common?

The answer, of course, is sequencing. But not in the “figure out the DNA” way (although that’s involved). Instead, in the “what comes first” way. Confused? Read on!

A big perk of Seattle is proximity to great institutions of biomedical research like the University of Washington and the Fred Hutchinson Cancer Research Center. Ever since my graduate student days in genetics at UC Berkeley, I’ve enjoyed going to seminars, especially seminars outside my field of study. Very little beats a good seminar for giving you a quick, condensed view of the state of a field of research. A bad seminar…well…we all could use more sleep, right?

In early October, Raul Andino of UCSF came to the Hutch to talk about his work on viral evolution. His team has been examining a clever real-world system to track the evolution of viruses. The near-eradication of polio (one of the great public health victories of the past century) has led to the curious problem that, as of the middle of this year, most new cases of polio have arisen as a result of vaccination efforts. The live, attenuated vaccine that’s used in the developing world can, in very rare cases, mutate in just the wrong ways in its host, leading to the creation of a virulent strain that can infect others. In the US we use an inactivated polio vaccine, which requires several injections; in much of the developing world the oral polio vaccine is preferred for its ease of administration, lower cost, and immunization profile. The Andino lab realized that by studying these isolated outbreaks, which all originated with the same genetically identical progenitor, they could test a hypothesis about the adaptive landscape of virulence evolution.

Continue reading

What’s the Role of Experts? A Review of The Death of Expertise and Some Thoughts for Biopharma

This piece originally appeared in the Timmerman Report.

The Death of Expertise by Tom Nichols, 2017, Oxford University Press.

If you’re reading this, chances are you’re an expert or well on your way to becoming one. The Timmerman Report is tailored by content and intent to be valuable to those with the knowledge, experience and interest to make biopharma news worth reading. Experts, in other words.

This isn’t a trivial point: for the vast majority of people (that is, those non-expert in biopharma), news on sites like this one or STAT or Endpoints is as useful as scuba equipment to an octopus. And that’s fine; that’s how our knowledge-based society works. Individuals become experts in specific fields: they take the time and effort to master a specific area, and they build up the intellectual framework that enables advances, discoveries and explanations. Specialization underlies the technological, societal and scientific wonders we take for granted today. There are just too many fields of study for any one person to master, the Maesters of A Song of Ice and Fire aside. Divide and conquer isn’t just Roman governance philosophy; it also makes for progress.

The natural corollary is that we are all affected by what experts outside our field say and do. Lacking a working and academic knowledge of biopharma does not immunize a person from the impact of the kinds of issues, news, and discoveries discussed and reported here. Drug pricing, innovation, access and healthcare quality and affordability have huge impacts on everyone in the US.

And boy, do many of them have opinions about that! Opinions that they hold tighter and higher than the words of experts. Opinions that influence the ways in which they speak, act, think and yes, sometimes, vote.

This growing issue is at the heart of Tom Nichols’ book, The Death of Expertise. Nichols, a professor of National Security Affairs at the Naval War College and an adjunct at the Harvard Extension School, is a former Senate aide and an expert in Soviet studies. I first became familiar with his work when, after last year’s US Presidential election, I started consciously expanding the circle of thinkers I listened to. Like Daniel MacArthur and many others of a more liberal bent, I’ve tried to find and listen to people in the center and on the right.

Continue reading

It’s time for biopharma to embrace public health

This piece first appeared in the Timmerman Report.

Some years ago, when I was working for a large biopharma, I heard a story. It seems a senior scientific executive had visited our site and given a seminar in which he described the company’s portfolio of drugs for type 2 diabetes. The company was projecting great uptake and profits. A member of our site raised his hand and said, “But if people just ate less and exercised a little more, they could prevent type 2 diabetes and the market would disappear.”

The answer: “Yeah, but they won’t.”

Harsh! But that executive was right. The Institute for Health Metrics and Evaluation (IHME) recently published a paper in JAMA describing how much different health conditions contribute to private and public health spending in the US. Number one? Diabetes. Following that were heart disease and chronic pain. These are chronic lifestyle diseases with big environmental and behavioral components, and the data make me wonder if there’s an opportunity here for the industry to zig and do some things that, in the long run, may make drug development more sustainable.

I think it’s time for biopharma to get involved in public health.

Continue reading

Big Data and Public Health: An interview with Dr. Willem van Panhuis about Project Tycho, digitizing disease records, and new ways of doing research in public health

All opinions of the interviewer are my own and do not necessarily reflect those of Novo Nordisk.

One of the huge and perhaps still underappreciated aspects of the internet age is the digitization of information. While the invention of the printing press made the copying of information easy, quick and accurate, print still relied on books and other printed materials that were moved from place to place to spread information. Today, the digitization of information, cheap (almost free) storage, and the pervasiveness of the internet have vastly reduced the barriers to the use, transmission and analysis of information.

In an earlier post I described the project by researchers at the University of Pittsburgh that digitized US disease reports over the past 120+ years, creating a computable and freely available database of disease incidence in the US (Project Tycho, http://www.tycho.pitt.edu/). This incredible resource is there for anyone to download and use, for research ranging from studies of vaccine efficacy to the building of epidemiological models to regional public health analyses and comparisons.
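As a rough sketch of what “download and use” might look like in practice, here is one way to load and aggregate weekly count data of this kind with pandas. The file name and column names below are illustrative assumptions on my part, not the actual Tycho export format:

```python
import pandas as pd

# Load a Project Tycho-style CSV of weekly case counts.
# NOTE: the file name and the columns (disease, state, year, week, cases)
# are assumptions for illustration, not Tycho's actual schema.
df = pd.read_csv("tycho_weekly_counts.csv")

# Roll weekly counts up into annual totals per disease and state.
annual = df.groupby(["disease", "state", "year"], as_index=False)["cases"].sum()

# Example: national measles cases per year, e.g. to eyeball incidence
# before and after vaccine introduction (1963 in the US).
measles = annual[annual["disease"] == "MEASLES"].groupby("year")["cases"].sum()
print(measles.head())
```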

Their work fascinates me, both for what it says about vaccines and for its connection to larger issues like Big Data in public health. I contacted the lead researcher on the project, Dr. Willem G. van Panhuis, and he very kindly consented to an interview. What follows is our conversation about his work and the implications of this approach for public health research.

Dr. Willem van Panhuis. Image credit: Brian Cohen, 2013

Kyle Serikawa: Digitizing disease records covering the past ~120 years sounds like a pretty colossal undertaking. What inspired you and your colleagues to undertake this work?

Dr. Willem van Panhuis: One of the main goals of our center is to make computational models of how diseases spread and are transmitted. We’re inspired by the idea that by making computational models we can help decision makers with their policy choices. For example, in pandemics, we believe computational models will help decision makers to test their assumptions, to see how making different decisions will have different impacts.

So this led us to the thinking behind the current work. We believe that having better and more complete data will lead to better models and better decisions. Therefore, we needed better data.
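[To make that concrete, here is a toy example of the kind of model being described: a minimal SIR (susceptible-infected-recovered) simulation comparing two vaccination-coverage scenarios. This sketch and its parameters are my own illustration, not the center’s actual modeling code.]

```python
def sir_epidemic(beta, gamma, vaccinated_fraction,
                 population=1_000_000, initial_infected=10, days=365):
    """Toy SIR model stepped daily with simple Euler updates.

    beta: transmission rate per day; gamma: recovery rate per day.
    The vaccinated fraction starts out immune (in the removed class).
    """
    S = population * (1 - vaccinated_fraction) - initial_infected
    I = initial_infected
    R = population * vaccinated_fraction
    cumulative_infections = 0.0
    for _ in range(days):
        new_infections = beta * S * I / population
        new_recoveries = gamma * I
        S -= new_infections
        I += new_infections - new_recoveries
        R += new_recoveries
        cumulative_infections += new_infections
    return cumulative_infections

# Compare two policy scenarios: 50% vs 90% vaccine coverage.
# With beta=0.3 and gamma=0.1, the basic reproduction number R0 is 3.
for coverage in (0.5, 0.9):
    total = sir_epidemic(beta=0.3, gamma=0.1, vaccinated_fraction=coverage)
    print(f"coverage {coverage:.0%}: ~{total:,.0f} infections in a year")
```

[Even a sketch this crude shows the qualitative point a policymaker cares about: with R0 = 3, coverage above the herd-immunity threshold of about 67% (1 − 1/R0) makes the outbreak fizzle instead of spread.]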

On top of this, each model needs to be disease specific, because each disease acts differently in how it spreads and what effects it has. The basic data collection process that goes into creating the model, however, is pretty similar across diseases: contacting those who hold the records of disease prevalence and its spread over time, collecting the data, and then making the data ready for analysis. That last part takes considerable effort, especially as health departments often do not have the capacity to spend a lot of time and effort responding to data requests from scientists.

The challenges are similar (we go through the same process every time we want to model a disease), so when we learned that a great source of much of the disease data in the public domain is the weekly surveillance reports published in MMWR and its precursor journals, we had the idea: if we digitized the data once, for all the diseases, that would provide a useful resource for everybody.

We can make models for ourselves, but we can also allow others to do the same without duplication of effort.

Continue reading

Big Data provide yet more Big Proof of the power of vaccines

All opinions are my own and do not necessarily reflect those of Novo Nordisk.

Time for another screed about the anti-vaccination movement.

Well, not about them per se, but rather about another study that demonstrates how much of a positive difference vaccines have made in the US. The article, from researchers at the University of Pittsburgh and Johns Hopkins University, describes what I can only imagine to be a Herculean effort to digitize disease reporting records from 1888 to 2011 (article behind a paywall, unfortunately). It turns out there are publications that have been collecting weekly reports of disease incidence across US cities for over a century. I have not been able to access the methods, but I can’t shake the image of hordes of undergraduates hunched over yellowed clippings and blurry photocopies of 19th-century tables, laboriously entering numbers one by one into a really extensive Excel spreadsheet.
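Whatever the mechanics, the payoff is the kind of back-of-the-envelope arithmetic such a database enables. Here is a minimal sketch, with numbers I have invented for illustration (they are not figures from the paper), under the simplifying assumption that pre-vaccine incidence would have continued unchanged:

```python
# Illustrative only: invented rates, not figures from the paper. The
# calculation assumes pre-vaccine incidence would have held steady.
pre_vaccine_rate = 300.0    # annual cases per 100,000 before the vaccine
post_vaccine_rate = 1.0     # annual cases per 100,000 after the vaccine
population = 300_000_000    # rough US population
years_since_introduction = 50

prevented = ((pre_vaccine_rate - post_vaccine_rate) / 100_000
             * population * years_since_introduction)
print(f"~{prevented:,.0f} cases prevented")  # ~44,850,000
```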

All told, 87,950,807 individual cases were entered into their database, each tagged with location, time, and disease. Not fun, however it was done.

Continue reading