Baseball, regression to the mean, and avoiding potential clinical trial biases

This post originally appeared on The Timmerman Report. You should check out the TR.

It’s baseball season. Which means it’s fantasy baseball season. Which means I have to keep reminding myself that, even though it’s already been a month and a half, that’s still a pretty short time in the long rhythm of the season and every performance has to be viewed with skepticism. Ryan Zimmerman sporting a 0.293 On Base Percentage (OBP)? He’s not likely to end up there. On the other hand, Jake Odorizzi with an Earned Run Average (ERA) less than 2.10? He’s good, but not that good. I try to avoid making trades in the first few months (although with several players on my team on the Disabled List, I may have to break my own rule) because I know that in small samples, big fluctuations in statistical performance tell us very little about actual player talent.

One of the big lessons I’ve learned from following baseball and the revolution in sports analytics is that one of the most powerful forces in player performance is regression to the mean. This is the tendency of outliers, over the course of repeated measurements, to move back toward the mean of both individual and population-wide performance levels. The reason is simple: an extreme early result reflects both true talent and a run of good or bad luck, and while the talent persists, the luck doesn’t repeat. There’s nothing magical about it, just simple statistical truth.
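
To see the force in action, here’s a minimal simulation sketch in Python (the talent distribution, plate-appearance counts, and NumPy usage are my own illustration, not anything from the original post): give every hitter a fixed true talent, add binomial noise, and watch what April’s hottest starters do over the rest of the season.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented league: 300 hitters with true OBP talent centered on .320
n_players = 300
true_obp = rng.normal(0.320, 0.025, n_players)

# One month of plate appearances, then the rest of the season
pa_april, pa_rest = 100, 500
april = rng.binomial(pa_april, true_obp) / pa_april
rest = rng.binomial(pa_rest, true_obp) / pa_rest

# Pick out the ten hottest April starts and follow them forward
hot = np.argsort(april)[-10:]
print(f"April OBP of the hot starters:      {april[hot].mean():.3f}")
print(f"Rest-of-season OBP, same players:   {rest[hot].mean():.3f}")
print(f"True talent of those same players:  {true_obp[hot].mean():.3f}")
```

Nobody’s talent changes between the two samples; the hot starters are simply the players whose luck ran good for a month, and over the longer stretch their numbers settle back toward their (and the league’s) true level.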

And as I lift my head up from ESPN and look around, I’ve started to wonder if regression to the mean might be affecting another interest of mine, and not for the better. Specifically, I wonder whether a lack of understanding of regression to the mean might be a problem in our search for better health.

Baseball, Bayes, Fisher and the problem of the well-trained mind

One of the neat things about the people in the baseball research community is how willing many of them are to continually question the status quo. Maybe it’s because sabermetrics is itself a relatively new field, and so there’s a humility there. Assumptions always, always need to be questioned.

Case in point: a great post by Ken Arneson entitled “10 things I believe about baseball without evidence.” He uses the Oakland A’s latest failure in the MLB playoffs to highlight areas of baseball we still don’t understand, and for which we may not even be asking the right questions. Why, for example, haven’t the A’s advanced to the World Series in decades despite fielding good and often great teams? Yes, there’s luck and randomness, but at some point the weight of the evidence encourages you to take a second look. Otherwise, you become as dogmatic as those who still point to RBIs as the measure of a batter’s quality. Which they are not.

One of the thought-provoking things Arneson brings up is the question of whether the tools we use shape the way we study phenomena–really, the way we think–and therefore unconsciously limit the kinds of questions we choose to ask. His example is the use of SQL to frame queries, and the relational model’s built-in assumption that the rows in a database are independent events, with no precedence or dependence on one another. And yet, as he points out, the act of hitting a baseball is an ongoing dialog between pitcher and batter. Prior events, we believe, have a strong influence on the outcome. Arneson draws an analogy to linguistic relativity, the hypothesis that the language a person speaks influences aspects of her cognition.
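
To make the contrast concrete, here’s a small sketch in Python with pandas (the pitch log and its column names are hypothetical, invented just for illustration). The first aggregation treats every pitch as an independent row, which is what a bare GROUP BY gives you in SQL; the second carries the previous pitch along as context, so the sequence of the pitcher-batter dialog becomes part of the question.

```python
import pandas as pd

# Hypothetical pitch log; the columns are invented for illustration
pitches = pd.DataFrame({
    "at_bat_id":  [1, 1, 1, 2, 2, 3, 3, 3],
    "pitch_num":  [1, 2, 3, 1, 2, 1, 2, 3],
    "pitch_type": ["FB", "FB", "CB", "FB", "CB", "CB", "FB", "FB"],
    "swing":      [0, 1, 1, 0, 1, 1, 0, 1],
})

# The row-independent view: every pitch stands alone, as in
# SELECT pitch_type, AVG(swing) ... GROUP BY pitch_type
print(pitches.groupby("pitch_type")["swing"].mean())

# The dialog view: condition each pitch on the one that
# preceded it in the same at-bat, so order matters
pitches["prev_pitch"] = pitches.groupby("at_bat_id")["pitch_type"].shift()
print(pitches.groupby(["prev_pitch", "pitch_type"])["swing"].mean())
```

Nothing stops you from asking the second question in SQL with window functions, but the default shape of the tool nudges you toward the first question and away from the second.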

So let me examine this concept in the context of another area of inquiry–biological research–and ask whether something similar might be affecting (and limiting) the kinds of experiments we do and the questions we ask.
