All opinions are my own and do not necessarily reflect those of Novo Nordisk
Over the past three days I had the opportunity to attend the Global Health Metrics Conference here in Seattle. This is not my field; I’m a genomics researcher working in biomedical research and drug development, but I’ve also been curious about what’s going on in the area of public and global health. This seemed like a good place to get a crash course. The Lancet has kindly published all the abstracts and I wanted to give my impressions of what I heard.
First takeaway: I was surprised and intrigued by how many parallels I saw between the work I do (primarily transcriptomics and genomics) and the work I saw reported. Sure, global health researchers use surveys rather than high-throughput sequencing, gather data on nations rather than patients, deal with the complexities of culture and government instead of human biology, work in the public sphere as opposed to the private, and use a completely different vocabulary than I do, but other than that it was really similar. So similar that I put together this table:
| Biomedical Genomics Research | Global Health Metrics |
| --- | --- |
| Increasing amount and types of data | Increasing amount and types of data |
| Growing emphasis on efficacy measurements | Growing emphasis on efficacy measurements |
| Lots of acronyms: NIH, AMA, EULAR, ADME | Lots of acronyms: GBD, DALYs, CDVS, USAID |
| Struggle to understand what tissue, cell, analyte to measure | Struggle to characterize the right metric to demonstrate effects/efficacy |
| Gene x Environment interactions poorly understood | Local environment effects beginning to be captured; nation-specific solutions |
| Noisy data, lots of unknowns | Maybe even noisier data and, yeah, unknowns |
| More focus on longitudinal studies | More focus on longitudinal studies |
And so on. I’ll elaborate on a few more below. Another immediate takeaway: I wasn’t even aware of the Institute for Health Metrics and Evaluation (sorry guys). Now that I am, it’s a place I’d like to visit.
One thing that really impressed me was the work IHME has put into making the Global Burden of Disease survey lucid, simple, and accessible. The data presentation by Kyle Foreman and Peter Speyer (@Peterspeyer) was terrific: not so much for any specific piece of data (although the trends and findings are all pretty fascinating) as for their demonstration of the power of dynamic presentation and facile web-based tools. Static PowerPoint charts are clearly so last decade. Anyone wanting to check out their presentation can go here, or even better, just go directly to the site. As a scientist who also works with large, multifactorial datasets, I know the struggle to condense that data into a usable, comprehensible form. I think Peter and Kyle have done a great job, and I also like the potential crowdsourcing aspect of it. As I've commented before, crowdsourcing methods, whether via games or other techniques, have real potential to fully utilize large datasets and to solve big problems.
Of the many talks I heard, I'll highlight a few for the specific points I took away. On the first day, Tanya Marchant showed interesting and cautionary data about making sure that what you're measuring really measures what you think you're measuring. In this case, measuring the presence of skilled birthing assistants as a proxy for maternal care during childbirth turns out to be incomplete because of other factors, such as the availability of basic medical supplies. It reminds me of debates over how best to measure drug efficacy in clinical trials; for example, response versus progression-free survival in oncology.
Joseph Dieleman presented his work on the effects of external health aid to developing nations. In a perfect world, external aid would simply be added to pre-existing health expenditures, and after the aid expired, local governments would maintain spending at pre-aid levels, or even higher. Well, it turns out this isn't always how it happens. Aid comes in, the local health budget gets shifted "temporarily," but temporarily turns into permanently when the external aid leaves. One of the thoughts that kept going through my head during this conference: remember the law of unintended consequences.
I enjoyed Michael Wolfson's talk on functional health status. Coming from an industry that really likes its tried-and-true measures like HDL/LDL levels, the concept of looking holistically at factors related to actually feeling good was a nice contrast, and food for thought.
Bruce Hollingsworth had a great quote in his talk: "People need incentives to provide accurate data." Yeah. Tell me about it. In transcriptomics, "garbage in, garbage out" has been a mantra for years with respect to incoming biological sample integrity and resulting data quality. From what I saw, the data you can get trying to measure global health is maybe even noisier than the kinds of data I normally deal with. My main conjecture for why all hope is not lost is that global health researchers can bias the indicators they sample toward things with (hopefully) real meaning; otherwise they would be adrift in a sea of not very useful data. Maybe they feel that way anyway? Bruce also made the point that external factors, again, influence health: even people who know where to go for the best treatment may not go because the facility is too far away. Location, location, location.
Speaking of garbage (but not in a bad way), David Phillip's talk later that day addressed the problem of trying to extract useful data from vital health records full of things like garbage codes. That is, causes of death that are supremely unhelpful from a public health perspective, such as (I'm exaggerating here) death by lack of life. His work on extracting useful proportions from these data based on the overall data distribution reminded me of imputation techniques used in genomics.
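To make the idea concrete, here is a minimal sketch of one simple way such a redistribution could work, deaths filed under uninformative "garbage" causes reassigned to the remaining causes in proportion to their observed shares. This is my own illustration, not the method from the talk, and all cause names and counts are made up:

```python
def redistribute_garbage(counts, garbage_causes):
    """Reassign deaths recorded under uninformative ("garbage") causes
    to the remaining causes, proportional to each cause's observed share.

    counts: dict mapping cause name -> number of recorded deaths
    garbage_causes: set of cause names considered uninformative
    """
    garbage_total = sum(counts[c] for c in garbage_causes)
    valid = {c: n for c, n in counts.items() if c not in garbage_causes}
    valid_total = sum(valid.values())
    # Each valid cause absorbs garbage deaths in proportion to its share.
    return {c: n + garbage_total * n / valid_total for c, n in valid.items()}

# Illustrative numbers only: 200 "ill-defined" deaths get split across
# the three informative causes in a 400:300:100 ratio (+100, +75, +25).
counts = {"ischemic heart disease": 400, "stroke": 300,
          "lower respiratory infection": 100, "ill-defined": 200}
adjusted = redistribute_garbage(counts, {"ill-defined"})
```

Real cause-of-death redistribution is far more sophisticated (it can condition on age, sex, location, and plausible target causes for each garbage code), but the proportional idea above is the kernel that reminded me of genomic imputation.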
There were many more engaging talks, and I also had great conversations over lunch with different people. I suppose I shouldn't be surprised by the similarities; many research fields these days are converging on a similar emphasis on big data, analytics, efficacy, and finding the right metrics. I also appreciated the long view shown by so many of these programs. One of the drawbacks of private industry is how often it takes the short-term view. I wish we had the decades-long commitment shown by various Global Health initiatives.
The aspect I find daunting in Global Health is how much uncertainty that community is dealing with, which greatly affects efficacy and efficiency. An intervention might be exactly the right one when viewed in isolation, but can be so easily derailed by external factors. Like biology, like baseball, it seems the key thing is to find the metrics that at least tell you that you made the change you hoped for, with the understanding that what happens at the end is so often, unfortunately, out of our control.