Discussion about this post

Headless Marbles:

Yes! "Training a model without testing it" is exactly what so much social science research amounts to. Plotting the marginal "fitted values," i.e., predictions, of a model as a function of some feature to make some point about what has been learned from the data, but without ever actually, you know, testing the model's predictions out-of-sample.
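To make the distinction concrete: here is a minimal sketch in pure Python (synthetic data, hypothetical variable names) of the difference between admiring a model's fitted values and actually testing its predictions on held-out data.

```python
import random

random.seed(0)

# Synthetic data: an outcome driven by one feature plus noise.
xs = [random.uniform(0, 10) for _ in range(200)]
ys = [2.0 * x + random.gauss(0, 3) for x in xs]

# Hold out the last 25% of observations; fit only on the rest.
n_train = 150
x_tr, y_tr = xs[:n_train], ys[:n_train]
x_te, y_te = xs[n_train:], ys[n_train:]

def ols_fit(x, y):
    """Closed-form simple linear regression: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

def r_squared(x, y, slope, intercept):
    """1 - SSE/SST: share of variance the fitted line explains."""
    my = sum(y) / len(y)
    sse = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    sst = sum((yi - my) ** 2 for yi in y)
    return 1 - sse / sst

slope, intercept = ols_fit(x_tr, y_tr)

# Plotting slope * x + intercept over the training data is "showing
# the fitted values"; the held-out R^2 is the actual test of the model.
print(f"in-sample R^2:     {r_squared(x_tr, y_tr, slope, intercept):.3f}")
print(f"out-of-sample R^2: {r_squared(x_te, y_te, slope, intercept):.3f}")
```

A paper that only shows the first number (or a plot of the fitted line) has, in the commenter's framing, trained a model without testing it.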

RenOS:

This article is imo entirely correct on methods, but I also want to talk about how to improve the chosen topics.

Back when physics was in its infancy, it started by quantifying things people had already noticed. Take gravity: people had long observed that things fall down rather than up, and had tracked the movements of celestial bodies. The hard part was producing a specific formula for how it works in detail. Only *after* considerable hard work did we find that intuitions on at least some points were wrong, such as the intuition that heavier objects fall faster (in fact, objects of different weights fall at the same rate). But many, if not most, intuitions turned out roughly correct! Likewise for many other things; the basic motivating question for many researchers was very often "why is this thing, which we can all perceive and all agree on, the way it is?". This does not preclude eventually overturning some intuitions, but that generally comes much further down the line!

In the social sciences, however, I regularly see even honest students and researchers start from the assumption that all "folk theories" are wrong, and then only try to work out *how* they are wrong. Overturning intuitions is their entire raison d'être. If you don't get "surprising" results, you're at best boring and uncool, at worst vilified. The best example, imo, is stereotypes: if I ask friends who literally studied psychology, many will declare with confidence that all stereotypes are wrong and have been thoroughly debunked by psychological research. Yet stereotype accuracy is one of the most robust and replicable findings there is. But psychologists shouldn't stop there! We need to quantify how accurate stereotypes are, find out how different parameters influence accuracy, and so on. The good news is that a small group of researchers is at least working on this; the bad news is that many others try their best to ignore these findings entirely.

To be frank, you can't do science this way. You should start from the things that society broadly agrees on AND which are backed up by strong, replicable findings: stereotype accuracy, the primacy of selection effects in education, the robustness of intelligence measures, and so on. From there, you can work your way towards more contentious, variable, or difficult topics. But you shouldn't start from those.

This also avoids another problem we could see here: if you start by investigating things that most people have agreed on for a long time, an argument like "maybe attitudes changed, dunno" loses credibility. And that is very good, since we definitely shouldn't waste money on findings that quite plausibly won't last even a few years.
