Today I came across a fascinating concept called Bonini's Paradox: the more complexity you add to a model to make it "realistic," the less useful it becomes.
Take a map of your town, for example. The only map that could capture all of its reality would not be a map at all; it would be the town itself. Models are, by nature, pared-down replicas of reality.
I also came across a quote by Voltaire today that got me thinking: the secret of being a bore is to tell everything. The most boring analysts are the ones who try to model everything.
You know the drill. You come up with a framework for analyzing something at work. "What about this quirk? We need to include X and drop Y." You try to tell everything, piling on every complexity, and you get a useless, boring model.
You end up spending more time adjusting for negligible quirks in your data than making something useful out of what you have. Ultra-complex data analysis isn't just boring; courtesy of Bonini's paradox, it's also wrong.
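Here's a toy sketch of the paradox in code (all numbers invented for illustration): the underlying reality is a plain straight line, and we fit polynomials of increasing degree to a noisy sample of it, then check each fit against fresh data.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

# The underlying reality is simple: a straight line plus noise.
x_train = np.linspace(0, 1, 20)
y_train = 2 * x_train + rng.normal(0, 0.3, x_train.size)
x_test = np.linspace(0, 1, 200)
y_test = 2 * x_test + rng.normal(0, 0.3, x_test.size)

# "Adding complexities": crank up the polynomial degree.
for degree in (1, 3, 9, 15):
    model = Polynomial.fit(x_train, y_train, degree)
    train_mse = np.mean((model(x_train) - y_train) ** 2)
    test_mse = np.mean((model(x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

On a typical run, the training error falls as the degree climbs while the error on fresh data rises: the complex models are faithfully reproducing the quirks, not the line.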
Why does this happen? As I mentioned in a previous post, I believe there is a culture gap between the accuracy-driven accounting mindset and the usefulness-driven statistics mindset.
Many analysts need to use both frameworks; the trick is not to conflate them. There is a time and place for bean-counting. But when you're working in the land of confidence intervals and sampling error, accuracy is never absolute.
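To make that concrete, here is a minimal simulation (the population, sample size, and trial count are all invented) of what a 95% confidence interval actually promises: not a correct answer, but a procedure that misses roughly one time in twenty.

```python
import numpy as np

rng = np.random.default_rng(1)
true_mean, n, trials = 10.0, 30, 1000
covered = 0

for _ in range(trials):
    sample = rng.normal(true_mean, 2.0, n)
    mean = sample.mean()
    # Normal-approximation 95% interval: 1.96 standard errors either side.
    half_width = 1.96 * sample.std(ddof=1) / np.sqrt(n)
    if mean - half_width <= true_mean <= mean + half_width:
        covered += 1

print(f"{covered / trials:.1%} of intervals contained the true mean")
```

No individual estimate is "the" truth; the honest claim is about how often the procedure lands near it.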
By adding complexity, you lose usefulness, and you become more boring.