Lack of replication no surprise when we’re studying really complex problems

All opinions are my own and do not necessarily reflect those of Novo Nordisk

For another nice take on this topic see Paul Knoepfler’s blog post here.

One of the sacred (can I say sacred in reference to something scientific?) tenets of the scientific method is reproducibility.  If something is real and measurable, if it’s a fact of the material world, then the expectation is that the result should be reproducible by another experimenter using the same methods as described in the original report.  One of the best-known examples of irreproducible data (among physicists, anyway) is the Valentine’s Day Magnetic Monopole detected by Blas Cabrera back in 1982.  Great experimental data.  Never repeated, and therefore viewed as insufficient proof for the existence of a magnetic monopole.

So it’s troubling that in the past few years there have been numerous stories about the lack of reproducibility of different scientific experiments.  In biomedical science, reports on the difficulty of reproducing results have become so numerous that the NIH has begun thinking about how to confirm and require reproducibility for some kinds of experimental results.  Just a few days ago another field, that of psychological priming, saw the publication of an article reporting that the effects of “high-performance priming” could not be reproduced.  This is another field undergoing serious questioning about whether and why results don’t reproduce, with commentary from such luminaries as Daniel Kahneman. Continue reading

Are market cap and present cash flows the best way to measure innovation?

All opinions are my own and do not necessarily reflect those of Novo Nordisk

Forbes, with the help of the folks from The Innovator’s DNA, recently published their coverage and rankings of the 100 most innovative companies.  I’m particularly interested in their ranking method, as it contains elements that are near and dear to my heart–namely, metrics and crowdsourcing.  In a nutshell, they describe how they use a company’s current market capitalization, along with its net present value based on current cash flows, to extrapolate how much potential the market feels the company has.  The method nicely incorporates crowdsourcing in that the market cap measures how much investors as a whole think a company is really worth, now and in the future; if that’s higher than expected based on cash flows, it suggests investors are factoring in a bonus to value based on future expectations.  Higher future expectations are interpreted as investors seeing a particular company as innovative, with the potential for great leaps forward in offerings and/or income.
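
To make the arithmetic concrete, here’s a minimal sketch of that kind of calculation.  This is my own toy illustration with invented numbers, not Forbes’ actual model, which as I understand it projects future cash flows in considerably more detail:

```python
def npv(cash_flows, discount_rate):
    """Net present value of a series of annual cash flows,
    discounted back to today."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Hypothetical company: $1.2B market cap and five years of projected
# cash flows (in $M) from its existing business, at a 10% discount rate.
market_cap = 1200.0
projected = [150, 160, 170, 180, 190]
existing_value = npv(projected, 0.10)

# Whatever the market pays beyond the value of existing cash flows
# can be read as a bet on future innovation.
premium = market_cap - existing_value
print(f"NPV of existing business: ${existing_value:.0f}M")
print(f"Innovation premium: ${premium:.0f}M "
      f"({premium / market_cap:.0%} of market cap)")
```

On these made-up numbers, nearly half the market cap is unexplained by existing cash flows; the method reads that gap as the crowd’s bet on innovation.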

I really like using the crowd in this way, and would love to see an analysis that retrospectively looks at these kinds of values over, say, 1970-1990, and combines that with a mature assessment of which companies business historians have judged to be true innovative standouts, which is not the same thing as business successes.  We say now that Bell Labs was one of the most innovative places on the planet in the twentieth century.  Would the same have been said at the time?

At the same time, I can’t help musing about whether this process could be made even better.  Recognizing innovation when it’s happening has obvious advantages for anyone looking to get into the next amazing thing, whether as a participant, an investor, or a policy maker.  So let’s start by examining where there might be shortfalls in the Innovator’s DNA method. Continue reading

What does the HeLa genome agreement imply for consent and genome data usage?

All opinions are my own and do not necessarily reflect those of Novo Nordisk.

A fair amount of reporting (for example here, here and here) has gone into the recent news that the NIH and the descendants of Henrietta Lacks have reached an agreement about the conditions under which the genome sequence of the HeLa cell line will be shared.  The basic parameters are that researchers wanting access to the data will need to apply for permission, the application committee will include members of the Lacks family, any publications will acknowledge the contribution of Henrietta Lacks, and future genome sequences will be submitted to dbGaP.

This is a generally welcome development, due in no small part to the work of Rebecca Skloot.  Her book, The Immortal Life of Henrietta Lacks, provided the impetus for the current developments by popularizing the story of Ms. Lacks and the cell line derived from her tissues.  However, this agreement can also be seen as a precedent of sorts, and the future implications for the ethics of consent, genetic information sharing, and genomic research are unclear.

Whose genome is it, anyway?

At Pasco Phronesis, David Bruggeman penned a post on some of the possible implications.  He discusses one of the key elements of genetic consent that I generally haven’t seen elaborated on much in the current literature: familial consent and exposure.  To what extent do those who share part of a sequenced genome have a say in the granting and rescinding of consent for the usage of genetic information? Continue reading

How do you take a cheetah’s temperature?

All opinions are my own and do not necessarily reflect those of Novo Nordisk.

The answer, of course, is very carefully (ba-dum-bum!).

Okay, having gotten that horrible joke out of the way, it seems people are really interested in cheetahs and how they hunt.  I’ve written before about monitoring cheetah hunting behaviors and the implications that technology has for future data gathering and analysis (see here, here and here).  The most recent addition to what seems to be a growing body of work testing, and dispelling, cheetah hunting myths was published recently in Biology Letters (abstract only; the rest is behind a paywall, but see here for a more extensive summary of the paper).  In this work, the researchers report using temperature sensors to test the hypothesis that cheetahs are unsuccessful in some of their hunts because they overheat.

This seems like such a nice, neat story.  That’s probably one reason why it’s taken so long to actually test it.  The story goes something like this:  Cheetahs are the fastest land mammals.  Everyone heats up when they run.  If cheetahs run the fastest, they must heat up the most.  Therefore, heat is the gating factor for cheetah hunting.  Stories are so appealing!  But that appeal, in satisfying our sense of order, can keep us from looking under the hood and asking questions.

Luckily for us, these researchers did.  Robin Hetem and colleagues found that cheetah body temperature did not increase substantially during hunts, whether successful or not.  Oddly, temperature did increase after the hunt was over, and it increased more for the cheetahs that had been successful.  The study authors speculate that the reason was heightened vigilance toward scavengers that might want to steal the cheetah’s prey.
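
To sketch the shape of that comparison with logger data, here’s a toy example in Python; the readings are invented and this is not the study’s data or analysis:

```python
# Toy comparison of post-hunt body-temperature rise for successful
# versus failed hunts; all readings below are made up.
from statistics import mean

# (temperature rise after the hunt in degC, did the hunt succeed?)
hunts = [
    (1.3, True), (1.4, True), (1.2, True),
    (0.5, False), (0.6, False), (0.4, False),
]

successful = [rise for rise, caught in hunts if caught]
failed = [rise for rise, caught in hunts if not caught]

print(f"mean rise after successful hunts: {mean(successful):.2f} degC")
print(f"mean rise after failed hunts:     {mean(failed):.2f} degC")
```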

This study highlights the value of remote sensing and our rapidly increasing ability to both monitor and store data continuously, and thereby test our notions of the world.  Data of itself isn’t good or bad, but boy is it useful.

Finding an Alka-Seltzer for the oceans

All opinions are my own and do not necessarily reflect those of Novo Nordisk

Some time ago I wrote an article for Real Change (reposted here) about research being done at the University of Washington to understand the effects of ocean acidification due to rising atmospheric carbon dioxide levels.  The falling pH of the oceans is another one on my list of things we don’t worry enough about with climate change but really should.  Like the bees.  It’s such a seemingly tiny, subtle thing.  The measured decrease in pH of maybe 0.1 units is due to ocean waters absorbing atmospheric CO2 and the resulting conversion of some of that to carbonic acid.  Seems small, but it’s really a big deal.
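
To put a number on why it’s a big deal: pH is a logarithmic scale, so even a 0.1-unit drop is a sizable change in hydrogen-ion concentration.  A quick back-of-the-envelope check (my own arithmetic, not from the article):

```python
# pH is -log10 of the hydrogen-ion concentration, so a drop of
# 0.1 pH units multiplies [H+] by 10**0.1, roughly a 26% increase.
delta_ph = 0.1
factor = 10 ** delta_ph
print(f"[H+] increase for a {delta_ph}-unit pH drop: {factor - 1:.0%}")
```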
Scientists have documented apparent effects of ocean acidity on coral reefs and oysters, among other organisms (abstracts from links; articles behind paywalls), and while oyster farmers can try to add antacids to their spawning beds, the oceans as a whole are a bit large for a local solution.  Which is why I was excited to see the Paul Allen Family Foundation post the current submissions to their Ocean Challenge (HT @deirdrelockwood).

Let me provide a disclaimer that I have not read most of these proposals in depth.  However, scanning through the titles and sampling a few in greater detail, it’s clear that the Ocean Challenge has prompted a number of groups to come up with ideas about how to monitor, test, and mitigate the effects of ocean acidification, at least at the local level and in some cases on a grander scale.  The proposals are available online for public comment, and finalists will be selected in September.

There are a couple of things to really like about this.  All the proposal summaries are devoid of names and affiliations, which may lead to more unbiased evaluations by public commentators.  This is something that’s been debated for years with respect to other granting agencies like the NIH.  Another great thing is that this is open–anyone can apply and everyone’s ideas are out there for others to learn from, debate, and expand upon.  I’m a fan of open source science and transparency, and this feels like it’s in that vein.  And last, this is really a big problem.  Not to say government agencies aren’t funding and studying this, but as we’ve seen with other private non-profit foundations like the Gates Foundation, there is a third way beyond government and private industry to try to effect policy and make changes.  I hope for success from this effort.  Because I would really miss oysters.

Are Biopharma reagent companies sitting on a pile of gold (or at least Pop-Tarts)?

All opinions are my own and do not necessarily reflect those of Novo Nordisk

The recent news about the United States Government monitoring a great deal of both general and specific electronic data has had one beneficial outcome (or at least, one I feel is beneficial):  it has made more people aware of what can actually be done with data, and also that we’re leaving massive amounts of personal data out there that can be traced to the behavior of individuals or organizations.  A few months ago, the Seattle Times published an article describing the explosion of big data and how that can be leveraged in so many ways.

This led me to speculate, in a very out-there kind of way, about what kinds of data Biopharma companies produce and whether there’s any hidden value in that.  Now, certainly companies are very careful about communicating information to the outside world.  Contracts with collaborators routinely contain embargo clauses, and presentations and posters are carefully vetted by legal and communications departments.  So companies would appear to be covered there.  But what kinds of data are out there that might be available, maybe not freely, but in potentia, to an interested audience?

Let me digress for a moment about mergers.  Biopharma over the last few years has seen a flurry of merger and acquisition activity.  The big pharma deals, like Pfizer/Wyeth and Merck/Schering-Plough, have gotten big press, but there has also been a lot of consolidation among reagent suppliers.  To take one example, I’ve shamelessly lifted Life Technologies’ merger history from Wikipedia and condensed it into this table (after the jump): Continue reading

More developments in autism prediction

All opinions are my own and do not necessarily reflect those of Novo Nordisk

A recent publication in the journal Brain about efforts to find early indicators for autism reports an intriguing observation about brain size.  The researchers sought to identify whether Magnetic Resonance Imaging (MRI) of the brains of infants and very young children could help predict which children would go on to develop autism.  Like many pilot studies of this sort, the experiment was done simply by looking.  The researchers identified a cohort of newborn siblings of autistic children, along with a control cohort without that risk factor, and began taking MRI images of their brains at 6-9 months of age, and again at 12-15 months and 18-24 months.  Prior research has shown that having a sibling with autism greatly increases the probability that a child will also develop autism, so in this situation the expectation was that some of the sibling group would develop autism and researchers would retrospectively be able to look at the data collected during the study and identify MRI features, should any exist, that correlated with development of the disease.
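
As a purely schematic illustration of that retrospective step, here’s a toy simulation; the feature, the numbers, and the comparison are all invented, and the study’s actual imaging analysis is far more involved:

```python
# Simulate an early MRI-derived measure (say, a brain-volume z-score
# at 6-9 months; a made-up feature) for infants who do and do not go
# on to a diagnosis, then compare the groups after the fact.
import numpy as np

rng = np.random.default_rng(0)
later_diagnosed = rng.normal(loc=0.6, scale=1.0, size=15)
not_diagnosed = rng.normal(loc=0.0, scale=1.0, size=25)

print(f"mean early measure, later diagnosed: {later_diagnosed.mean():+.2f}")
print(f"mean early measure, not diagnosed:   {not_diagnosed.mean():+.2f}")
# A real analysis would then test whether such a gap predicts diagnosis
# prospectively, e.g. with cross-validated classification.
```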

The impetus behind this is that previous research has not shown any definitive behavioral clues in infants (6 months or younger) that predict the development of autism.  However, the earlier a child is diagnosed, the earlier behavioral interventions can be applied to help that child and his or her family cope with future challenges. Continue reading