When AI shortcuts become invisible failures — the trap of "vibe econometrics"
It's like using autocorrect on a critical email: the output looks clean and flows naturally, so you never notice the typo that reversed your whole argument. By the time someone fact-checks, the email's already sent.
This means researchers and organizations using AI to speed up causal analysis (estimating whether A caused B) may be confidently publishing wrong conclusions without realizing it, because the output doesn't *look* wrong.
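To make the failure mode concrete, here is a minimal sketch (not from the article; the simulated variables and numbers are illustrative assumptions) of how a confounded analysis can "look clean": a hidden variable drives both A and B, a naive regression reports a large, plausible-looking effect of A on B, and only adjusting for the confounder reveals there is no causal effect at all.

```python
import numpy as np

# Hypothetical simulation: A has NO causal effect on B, but an
# unobserved confounder Z drives both. The naive output still
# "looks right" -- a clean, confident slope estimate.
rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)                      # hidden confounder
a = z + rng.normal(scale=0.5, size=n)       # "treatment", caused by Z
b = 2 * z + rng.normal(scale=0.5, size=n)   # outcome, caused only by Z

# Naive model: B ~ A (intercept + slope)
X_naive = np.column_stack([np.ones(n), a])
slope_naive = np.linalg.lstsq(X_naive, b, rcond=None)[0][1]

# Adjusted model: B ~ A + Z (controls for the confounder)
X_adj = np.column_stack([np.ones(n), a, z])
slope_adj = np.linalg.lstsq(X_adj, b, rcond=None)[0][2 - 1]

print(f"naive slope:    {slope_naive:.2f}")  # sizeable, looks like a real effect
print(f"adjusted slope: {slope_adj:.2f}")    # near zero: A does not cause B
```

Nothing in the naive printout flags the problem; both regressions run without error and return tidy coefficients, which is exactly why a skipped-identification-check analysis can slip through review.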