Premature testification is not a laughing matter

All opinions are my own and do not necessarily reflect those of Novo Nordisk.

“These studies, once validated, open an opportunity for creating tests that an expectant mother can take to see if she is producing these autoantibodies.”

I wrote this statement back in July as part of a post describing the report that maternal autoantibodies to specific neural proteins correlated with the appearance of autism symptoms in the children of those mothers.  Little did I suspect that plans to create a test were already in the works.  Science recently reported (paywall) that researchers behind this study are teaming up with a testing company to develop and market a diagnostic test for maternal autoantibodies.

On the one hand, I am very much in favor of prognostic tests that help us anticipate health problems.  I believe that in many cases early knowledge and intervention can be helpful, even when there is no “cure.”  One of the hopes for the current wave of genomic biomedical research is that it will let us better estimate who will and will not come down with specific diseases based on clues in their DNA.  On the other hand, it is problematic when tests are created and released before the underlying biomedical hypothesis has been thoroughly vetted and supported.

I’m going to keep linking to this article by Tom Siegfried until I think every scientist and other interested person on the planet has read it, and in this context it’s because of his observation about the fallibility of standard statistical methods in scientific research.  As he points out, the standard statistical tests used by biomedical researchers may yield impressive-looking p-values, but that doesn’t mean any given reported finding is true.  Indeed, small studies often produce strong p-values and nevertheless fail to replicate once larger sample sizes are used.
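
To make that concrete, here is a small simulation of my own (an illustrative sketch, not anything from Siegfried’s article; the effect size and sample sizes are made up).  It shows how a study with twenty subjects per group can cross the p < 0.05 threshold even though a repeat of the same study usually will not:

```python
# An illustrative sketch (my own, with made-up numbers): a small but real
# effect, tested with small samples, occasionally crosses p < 0.05, yet
# most attempts to replicate those "hits" at the same sample size fail.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2            # true group difference, in standard deviations
n_small, n_large = 20, 200   # subjects per group
n_studies = 5000

def study_pvalue(n):
    """Simulate one two-group study and return its t-test p-value."""
    control = rng.normal(0.0, 1.0, n)
    cases = rng.normal(true_effect, 1.0, n)
    return stats.ttest_ind(control, cases).pvalue

small_p = np.array([study_pvalue(n_small) for _ in range(n_studies)])
n_hits = int((small_p < 0.05).sum())

# Re-run each "significant" small study once, at the same sample size.
replication_p = np.array([study_pvalue(n_small) for _ in range(n_hits)])
large_p = np.array([study_pvalue(n_large) for _ in range(1000)])

print(f"small studies reaching p < 0.05:      {(small_p < 0.05).mean():.0%}")
print(f"their replications reaching p < 0.05: {(replication_p < 0.05).mean():.0%}")
print(f"large studies reaching p < 0.05:      {(large_p < 0.05).mean():.0%}")
```

With these made-up numbers, roughly nine in ten of the “significant” small studies fail to replicate, not because the effect is absent but because small samples measure it so noisily, while the larger studies detect it far more often.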

For the reported studies on autoantibodies, the sample sizes were relatively small, and the findings have not yet been replicated by independent groups.  Nor have they been examined in different genetic populations or in regions with different environmental conditions, either of which may influence autoantibody development.  Without that kind of information and research, it’s difficult to predict how effective an eventual test will be.

I was already thinking about these issues in the context of another study that recently received a lot of press: the report by a group in Indianapolis that it had identified blood-borne transcriptional biomarkers for suicide risk.  These researchers examined which genes were turned on in the blood of patients with and without evidence of suicidal thoughts, and identified a handful of genes whose expression changes correlated with hospitalization for suicidal behavior.  The headlines and publicity surrounding this finding played up the possibility that a test could be developed to predict suicidal behavior.

Is this a good goal?  I believe it is, but at the same time this is again the result of studies from a single group.  And just to be clear, I have seen no evidence that the group in Indianapolis is actively planning a test anytime soon.

However, for suicidality or autism alike, trying to create a test on the basis of limited initial findings seems to me far too early.  At a time in the United States when hostility to scientific findings that don’t fit a perceived world-view runs strong, it becomes even more incumbent upon scientists to be measured in the interpretation and commercialization of their findings.  Every test carries some element of risk because there will always be the danger of false positives and false negatives, and the less data a test is based upon, the less accurately we can measure that uncertainty.  It would be nice for the media reports to be measured as well.
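
To see why false positives matter so much, here is a back-of-the-envelope calculation of my own (the sensitivity, specificity, and prevalence figures are hypothetical, not taken from either study):

```python
# An illustrative sketch with made-up numbers: even a seemingly accurate
# screening test is mostly wrong when the condition it looks for is rare.
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Chance that a positive result is a true positive (Bayes' rule)."""
    true_positives = sensitivity * prevalence
    false_positives = (1.0 - specificity) * (1.0 - prevalence)
    return true_positives / (true_positives + false_positives)

# Hypothetical test: 90% sensitive, 95% specific, for a condition
# affecting 1% of those screened.
print(f"{positive_predictive_value(0.90, 0.95, 0.01):.0%}")  # prints 15%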

When a test has potentially major downstream effects, such as influencing a decision whether to have children or prompting family members to forcibly commit someone to psychiatric care, the decision to create the test in the first place should be based on rigorous data and a body of work that allows an accurate estimate of the test’s performance and its uncertainty.  I’m reminded of the movie Minority Report and its conception of a future in which crimes are prevented via the talents of three psychics who predict when someone will commit a crime.  This allows police to stop crimes before they happen.  Only sometimes the psychics disagree, meaning their “test” of guilt is fallible.  If a test has a high enough level of uncertainty, it isn’t really helpful as a test and can lead to flawed decision-making, a problem that is magnified by the severity of the consequences.
