Here’s a better framework for data-driven decision-making

Data scientists are in the business of decision-making. Our work is focused on how to make informed choices under uncertainty.
And yet, when it comes to quantifying that uncertainty, we often lean on the idea of “statistical significance” — a tool that, at best, provides a shallow understanding.
In this article, we’ll explore why “statistical significance” is flawed: arbitrary thresholds, a false sense of certainty, and a failure to address real-world trade-offs.
Most important, we’ll learn how to move beyond the binary mindset of significant vs. non-significant, and adopt a decision-making framework grounded in economic impact and risk management.
Imagine we just ran an A/B test to evaluate a new feature designed to boost the time users spend on our website — and, as a result, their spending.
The control group consisted of 5,000 users, and the treatment group included another 5,000 users. This gives us two arrays, named treatment and control, each containing 5,000 values representing the spending of individual users in their respective groups.
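
To make the setup concrete, here is a minimal sketch of those two arrays in Python, followed by the conventional significance check the rest of this article questions. The spending values below are simulated (the distribution and its parameters are arbitrary assumptions, not figures from the experiment); with real data you would simply load the observed spending of each user into the two arrays.

```python
import numpy as np
from scipy import stats

# Simulated data purely for illustration: in practice, `control` and `treatment`
# would hold the observed spending of the 5,000 users in each group.
rng = np.random.default_rng(42)
control = rng.gamma(shape=2.0, scale=15.0, size=5_000)    # baseline spending
treatment = rng.gamma(shape=2.0, scale=15.5, size=5_000)  # slightly higher average spending

# The conventional first step: a two-sample t-test on mean spending per user.
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

print(f"Mean spending, control:   {control.mean():.2f}")
print(f"Mean spending, treatment: {treatment.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

From here, the standard playbook is to compare the p-value against a threshold (typically 0.05) and declare the feature a success or a failure. As we will see, that binary verdict is exactly where the trouble starts.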