Lack of replication no surprise when we’re studying really complex problems

All opinions are my own and do not necessarily reflect those of Novo Nordisk

For another nice take on this topic see Paul Knoepfler’s blog post here.

One of the sacred (can I say sacred in reference to something scientific?) tenets of the scientific method is reproducibility.  If something is real and measurable, if it’s a fact of the material world, then the expectation is that the result should be reproducible by another experimenter using the same methods as described in the original report.  One of the most well known (among physicists anyway) examples of irreproducible data is the Valentine’s Day Magnetic Monopole detected by Blas Cabrera back in 1982.  Great experimental data.  Never repeated, and therefore viewed as insufficient proof for the existence of a magnetic monopole.

So it’s troubling that in the past few years there have been numerous stories about the lack of reproducibility for different scientific experiments.  In biomedical science the number of reports on the difficulty of reproducing results has gotten so great that the NIH has begun thinking about how to confirm and require reproducibility of some kinds of experimental results.  Just a few days ago another field, that of psychological priming, saw the publication of an article reporting that the effects of “high-performance priming” could not be reproduced.  This is another field undergoing serious questioning about whether and why results don’t reproduce, with commentary from such luminaries as Daniel Kahneman.

Are market cap and present cash flows the best way to measure innovation?

All opinions are my own and do not necessarily reflect those of Novo Nordisk

Forbes, with the help of the folks from The Innovator’s DNA, recently published their coverage and rankings of the 100 most innovative companies.  I’m particularly interested in their ranking method, as it contains elements that are near and dear to my heart–namely, metrics and crowdsourcing.  In a nutshell, they describe how they use a company’s current market capitalization, along with its net present value based on cash flows, to extrapolate how much potential the market feels the company has.  The method nicely incorporates crowdsourcing in that the market cap measures how much investors as a whole think a company is really worth, now and in the future; if that’s higher than expected based on cash flow, it suggests investors are factoring a bonus into the value based on future expectations.  Higher future expectations are interpreted as investors seeing a particular company as innovative, with the potential for great leaps forward in offerings and/or income.
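The idea above can be sketched in a few lines of code.  This is only an illustration of the general logic–compare market cap to the net present value of projected cash flows, and treat the excess as a premium for expected innovation.  The numbers, the flat 10% discount rate, and the function names are my own illustrative assumptions, not Forbes’s actual figures or methodology.

```python
# Hypothetical sketch of an "innovation premium": the share of a company's
# market cap not explained by the net present value of its projected cash
# flows. All numbers are made up for illustration.

def npv(cash_flows, discount_rate):
    """Net present value of a series of future annual cash flows."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

def innovation_premium(market_cap, cash_flows, discount_rate=0.10):
    """Fraction of market cap above the NPV of projected cash flows."""
    base_value = npv(cash_flows, discount_rate)
    return (market_cap - base_value) / base_value

# A company valued by the market at 180, whose projected cash flows are
# worth about 148 at a 10% discount rate, carries a premium of roughly 22%.
premium = innovation_premium(180.0, [30, 35, 40, 45, 50])
print(f"innovation premium: {premium:.1%}")  # → innovation premium: 21.6%
```

On this reading, two companies with identical cash flows can differ sharply in premium, which is exactly the gap the crowd’s expectations are supposed to reveal.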

I really like using the crowd in this way, and would love to see an analysis that retrospectively looks at these kinds of values over, say, 1970-1990, and combines that with a mature assessment of which companies have been adjudged by business historians to truly have been innovative standouts, which is not the same as business success.  We say now that Bell Labs was one of the most innovative places on the planet in the 20th century.  Would the same have been said at the time?

At the same time, I can’t help wondering whether this process couldn’t be made even better.  Recognizing innovation while it’s happening has obvious advantages for anyone looking to get in on the next amazing thing, whether as a participant, an investor, or a policy maker.  So let’s start by examining where the Innovator’s DNA method might fall short.

What does the HeLa genome agreement imply for consent and genome data usage?

All opinions my own and do not necessarily reflect those of Novo Nordisk.

A fair amount of reporting (for example here, here and here) has gone into the recent news that the NIH and the descendants of Henrietta Lacks have reached an agreement about the conditions under which the genome sequence of the HeLa cell line will be shared.  The basic parameters are that researchers wanting access to the data will need to apply for permission, the application committee will include members of the Lacks family, any publications will acknowledge the contribution of Henrietta Lacks, and future genome sequences will be submitted to dbGaP.

This is a generally welcome development, due in no small part to the work of Rebecca Skloot.  Her book, The Immortal Life of Henrietta Lacks, provided the impetus for the current developments by popularizing the story of Ms. Lacks and the cell line derived from her tissues.  However, this agreement can also be seen as a precedent of sorts, and the future implications for the ethics of consent, genetic information sharing and genomic research are unclear.

Whose genome is it, anyway?

In Pasco Phronesis, David Bruggeman penned a post on some of the possible implications.  He discusses a key element of genetic consent that I haven’t seen elaborated on much in the current literature: familial consent and exposure.  To what extent do those who share part of a sequenced genome have a say in the granting and rescinding of consent for the usage of genetic information?

How do you take a cheetah’s temperature?

All opinions are my own and do not necessarily reflect those of Novo Nordisk.

The answer, of course, is very carefully (ba-dum-bum!).

Okay, having gotten that horrible joke out of the way, it seems like people are really interested in cheetahs and how they hunt.  I’ve written about monitoring cheetah hunting behaviors before and the implications that technology has for future data gathering and analysis (see here, here and here).  The latest addition to what seems to be a growing canon of work testing and debunking cheetah hunting myths was published recently in Biology Letters (abstract only; the rest is behind a paywall, but see here for a more extensive summary of the paper).  In this work, the researchers report using temperature sensors to test the hypothesis that cheetahs are unsuccessful in some of their hunts because they overheat.

This seems like such a nice, neat story.  That’s probably one reason why it’s taken so long to actually test it.  The story goes something like:  Cheetahs are the fastest land mammals.  Everyone heats up when they run.  If cheetahs run the fastest, they must heat up the most.  Therefore, heat is the gating factor for cheetah hunting.  Stories are so appealing!  But that appeal, in satisfying our sense of order, can keep us from looking under the hood and asking questions.

Luckily for us, these researchers did.  Robin Hetem and colleagues found that cheetah body temperature did not increase substantially during hunts, whether successful or not.  Oddly, temperature did increase after the hunt was over, and it increased more for cheetahs that had been successful.  The study authors speculate that the cause was heightened vigilance toward scavengers that might want to steal the cheetah’s prey.

This study highlights the value of remote sensing and our rapidly increasing ability to both monitor and store data continuously, and thereby test our notions of the world.  Data in itself isn’t good or bad, but boy is it useful.

Finding an Alka-Seltzer for the oceans

All opinions are my own and do not necessarily reflect those of Novo Nordisk

Some time ago I wrote an article for Real Change (reposted here) about research being done at the University of Washington to understand the effects of ocean acidification due to rising atmospheric carbon dioxide levels.  The falling pH of the oceans is another one on my list of things we don’t worry enough about with climate change but really should.  Like the bees.  It’s such a seemingly tiny, subtle thing.  The measured decrease in pH of maybe 0.1 units comes from ocean waters absorbing atmospheric CO2, some of which is converted to carbonic acid.  It seems small, but it’s really a big deal.
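
Why is a 0.1-unit drop a big deal?  Because pH is a logarithmic scale (pH = -log10 of the hydrogen ion concentration), so a small shift in pH hides a large shift in acidity.  A quick sketch of the arithmetic, using a starting surface-ocean pH of 8.2 purely as an illustrative figure:

```python
# pH is logarithmic: pH = -log10([H+]). A "tiny" 0.1-unit drop therefore
# means the hydrogen ion concentration rises by a factor of 10**0.1,
# i.e. about 26%. The starting pH of 8.2 is illustrative only.

def h_concentration(ph):
    """Hydrogen ion concentration (mol/L) for a given pH."""
    return 10 ** (-ph)

ph_before, ph_after = 8.2, 8.1  # a 0.1-unit drop in surface-ocean pH
ratio = h_concentration(ph_after) / h_concentration(ph_before)
print(f"[H+] increase: {ratio - 1:.0%}")  # → [H+] increase: 26%
```

In other words, what looks like a rounding error on the pH scale is a quarter-again increase in acidity for the organisms living in it.
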

Scientists have documented apparent effects of ocean acidity on coral reefs and oysters, among other organisms (abstracts from links; articles behind paywalls), and while oyster farmers can try to add antacids to their spawning beds, the oceans as a whole are a bit large for a local solution.  Which is why I was excited to see the Paul Allen Family Foundation post the current submissions to their Ocean Challenge (HT @deirdrelockwood).

Let me provide a disclaimer that I have not read most of these proposals in depth.  However, scanning through the titles and sampling a few in greater detail, it’s clear that the Ocean Challenge has prompted a number of groups to come up with ideas about how to monitor, test, and mitigate the effects of ocean acidification, at least at the local level and in some cases on a grander scale.  The proposals are available online for public comment, and finalists will be selected in September.

There are a couple of things to really like about this.  All the proposal summaries are devoid of names and affiliations, which may lead to less biased evaluations by public commentators.  This is something that’s been debated for years with respect to other granting agencies like the NIH.  Another great thing is that this is open–anyone can apply and everyone’s ideas are out there for others to learn from, debate, and expand upon.  I’m a fan of open source science, and transparency, and this feels like it’s in that vein.  And last, this is really a big problem.  Not to say government agencies aren’t funding and studying this, but as we’ve seen with other private non-profit foundations like the Gates Foundation, there is a third way beyond government and private industry to try and effect policy and make changes.  I hope for success from this effort.  Because I would really miss oysters.