"Maybe then science can become more like economics: self-correcting."
It's bizarre to claim that Economics is self correcting based on a handful of anecdata when a more complete analysis shows that finding fraud or submitting corrections has no impact whatsoever on citation rates and the direction of future research;
It's just not. Getting a comment on your paper is far more common than getting a retraction, and quite frankly the coding practices of economists are always atrocious and bug-ridden. Economists rarely even check the code that goes into their own papers.
The reason for this is pretty obvious once you realize that only very rarely do people actually go through the replication package of a paper, and the messier it is the less likely they are to bother. So if you want to maximize top 5 publications, the lesson is to pump out trash code, fast.
At least reviewers in STEM bother to glance at the code and experimental documentation ime.
This is an interesting story, but I don’t think your takeaway (that to catch fraud you should ask important questions) follows from it at all. I don’t think it’s self-evident that policymakers would necessarily go to the trouble to replicate a result before citing it. I bet the vast majority of the time they don’t. I would guess very very few people ever run replication packages on papers unless they are explicitly trying replicate them.
What you say may well be true, and I think it’s reasonable to ask people to work on important question. But this story is beside the point. Plenty of less important research (eg in psychology) has been found to be fraudulent as well.
Typically large errors do not result in retraction or even correction, at least in medicine and psychology – the journal editor and publisher are not particularly interested in getting things right, and retraction/correction is a huge pain and also often raises the spectre of lawsuits. I hope this is different in economics, but I'd be surprised if it is. If it's not different, then most bad results continue to stand in economics, too.
Thanks for the shoutout!
Thanks for the excellent book!
Economics is certainly better at this than the social sciences, but this made me think of Bryan Caplan's article about how virtually no one actually looked at his data: https://www.betonit.ai/p/no-one-cared-about-my-spreadsheets?r=24ib7&utm_campaign=post&utm_medium=web
"Maybe then science can become more like economics: self-correcting."
It's bizarre to claim that economics is self-correcting based on a handful of anecdata when a more complete analysis shows that finding fraud or submitting corrections has no impact whatsoever on citation rates or on the direction of future research:
https://www.rwi-essen.de/fileadmin/user_upload/RWI/Publikationen/I4R_Discussion_Paper_Series/068_I4R_Ankel-Peters_Fiala_Neubauer.pdf
It's just not. Getting a comment on your paper is far more common than getting a retraction, and quite frankly economists' coding practices are atrocious and their code is routinely bug-ridden. Economists rarely even check the code that goes into their own papers.
https://x.com/JohnRuf6/status/1806780777683956178
The reason for this is pretty obvious once you realize that people only very rarely go through a paper's replication package, and the messier it is, the less likely they are to bother. So if you want to maximize top-5 publications, the lesson is to pump out trash code, fast.
At least reviewers in STEM bother to glance at the code and experimental documentation, in my experience.
This is an interesting story, but I don't think your takeaway (that to catch fraud you should ask important questions) follows from it at all. I don't think it's self-evident that policymakers would necessarily go to the trouble of replicating a result before citing it. I bet the vast majority of the time they don't. I would guess very, very few people ever run the replication packages for papers unless they are explicitly trying to replicate them.
What you say may well be true, and I think it's reasonable to ask people to work on important questions. But this story is beside the point. Plenty of less important research (e.g. in psychology) has been found to be fraudulent as well.
Typically, large errors do not result in retraction or even correction, at least in medicine and psychology: journal editors and publishers are not particularly interested in getting things right, and retraction or correction is a huge pain that often raises the spectre of lawsuits. I hope this is different in economics, but I'd be surprised if it is. If it's not different, then most bad results continue to stand in economics, too.
I would love a post on your model of why the fraudster is not excommunicated.
The age of impunity.