A practical guide to 15 cognitive biases that distort decision-making in work, data, and everyday life.
In both technology and operations, decisions made under pressure can create ripple effects—financially, organizationally, and technically. Over the years, I’ve seen how even smart people fall into predictable mental traps. These aren’t just human quirks—they’re cognitive biases that affect how we process information and make choices.
Here are 15 common ones I think about regularly, and how I try to catch myself in the act.
1. Base rate neglect
We focus on the specific case and ignore the statistical backdrop. A good engineer can fixate on a vivid failure scenario while overlooking how rarely it actually occurs. Whether it’s system failures or customer churn, ignoring base rates distorts how we judge risk.
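A quick Bayes calculation shows how much the base rate matters. All numbers below are invented for illustration: even a very accurate alert is mostly noise when real incidents are rare.

```python
# Made-up numbers: the alert catches 99% of real incidents and
# false-fires on only 1% of healthy checks, but real incidents
# occur in just 0.1% of checks.
p_incident = 0.001
p_alert_given_incident = 0.99
p_alert_given_healthy = 0.01

p_alert = (p_alert_given_incident * p_incident
           + p_alert_given_healthy * (1 - p_incident))
p_incident_given_alert = p_alert_given_incident * p_incident / p_alert

print(f"P(real incident | alert) = {p_incident_given_alert:.1%}")  # about 9%
```

Roughly nine out of ten alerts are false alarms, despite the impressive-sounding accuracy.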
2. The clustering illusion
We see patterns in random data, especially in dashboards or product analytics. That illusion creates false narratives: "users drop off after step 3, so step 3 must be broken." Sometimes randomness is just randomness.
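A toy simulation (invented step names, invented rates) makes this concrete: even when every step loses exactly the same share of users, one step will always look like the culprit in any single sample.

```python
import random

random.seed(7)

# Every funnel step has the SAME true 10% drop-off, yet in a sample
# of 500 users some step will look "broken" purely by chance.
steps = ["step_1", "step_2", "step_3", "step_4"]
remaining = 500
drop_rates = {}
for step in steps:
    dropped = sum(random.random() < 0.10 for _ in range(remaining))
    drop_rates[step] = dropped / remaining
    remaining -= dropped

print({s: f"{r:.1%}" for s, r in drop_rates.items()})
print("worst step (by chance alone):", max(drop_rates, key=drop_rates.get))
```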
3. Confirmation bias
It’s easier to notice what supports our beliefs than what contradicts them. I’ve caught myself searching logs or writing queries that “prove” my theory instead of challenging it. Data shouldn’t be a mirror.
4. Overconfidence
We often overestimate our knowledge, especially in areas where we feel fluent. I’ve learned that saying “I don’t know” can save far more time than pretending I do.
5. Regression to the mean
If something swings drastically, whether performance, engagement, or outages, it’s likely to drift back toward normal over time. Making big changes based on short-term extremes can be costly.
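A small simulation with made-up error counts shows the effect: the services that look worst this week tend to look better next week even if nothing changes, because the extremes were partly luck.

```python
import random

random.seed(42)

# 50 services whose weekly error counts are pure noise around the same mean.
week1 = [random.gauss(100, 15) for _ in range(50)]
week2 = [random.gauss(100, 15) for _ in range(50)]

# Pick the 5 "worst" services from week 1 and check them again in week 2.
worst = sorted(range(50), key=lambda i: week1[i], reverse=True)[:5]
before = sum(week1[i] for i in worst) / len(worst)
after = sum(week2[i] for i in worst) / len(worst)

print(f"worst 5 in week 1: {before:.0f} errors on average")
print(f"same 5 in week 2:  {after:.0f} errors, with no intervention at all")
```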
6. The problem of induction
We assume patterns will continue simply because they have so far. "The system didn’t fail last week, so it won’t fail this week," until it does. Inductive reasoning is comforting but not always reliable.
7. Survivorship bias
When we evaluate the success of a process but ignore everyone who dropped out of it, we get skewed results. It’s common in funnel analysis and experimentation. Who didn’t make it matters just as much as who did.
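A hypothetical onboarding funnel shows how much the denominator matters: the same raw counts tell two very different stories depending on whether dropouts are included.

```python
# Hypothetical counts: judging success only by the users who finished
# onboarding paints a rosier picture than counting everyone who started.
started = 1000
finished = 200
activated = 180  # finished onboarding AND activated

print(f"activation among finishers:       {activated / finished:.0%}")  # 90%
print(f"activation among all who started: {activated / started:.0%}")   # 18%
```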
8. Confusing correlation with causation
Just because two things change together doesn’t mean one caused the other. This bias is rampant in A/B testing, where spurious correlations can lead to false confidence.
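A quick simulation with purely random, unrelated metrics (no real data) shows how easily "strong" correlations appear when you compare enough things:

```python
import random

random.seed(0)

n = 30  # e.g. 30 days of observations

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# One target metric and 200 completely unrelated random metrics.
target = [random.gauss(0, 1) for _ in range(n)]
best = max(abs(corr(target, [random.gauss(0, 1) for _ in range(n)]))
           for _ in range(200))

print(f"strongest correlation among 200 random metrics: {best:.2f}")
```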
9. The flaw of averages
Averages hide variability. A mean engagement time of 5 minutes tells you nothing about the users who stayed 20 seconds or two hours. I’ve learned to dig into distributions, not just summaries.
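A tiny example with invented session lengths: the mean says "about five minutes," yet not a single user actually behaves like that.

```python
import statistics

# Hypothetical session lengths in seconds: mostly quick bounces
# plus a few marathon sessions.
sessions = [20] * 96 + [7200] * 4

mean = statistics.mean(sessions)
near_mean = sum(mean / 2 <= s <= mean * 2 for s in sessions) / len(sessions)

print(f"mean session:   {mean / 60:.1f} min")                         # ~5.1 min
print(f"median session: {statistics.median(sessions) / 60:.1f} min")  # ~0.3 min
print(f"sessions anywhere near the mean: {near_mean:.0%}")            # 0%
```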
10. Information bias
Sometimes we want more data even when it won’t change the decision. I’ve seen this stall decision-making. More isn’t always better; sometimes it’s just noise.
11. Insensitivity to sample size
Small samples give us big confidence, and wrongly so. Whether it’s testing a feature on 12 users or drawing conclusions from one week's data, we underestimate the role of chance.
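A back-of-the-envelope confidence interval (made-up counts, simple normal approximation) shows how little 12 users can tell you compared with 1,200:

```python
import math

def wald_interval(successes, n, z=1.96):
    """Rough 95% confidence interval for a rate (normal approximation)."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# The same observed 75% success rate, very different certainty.
for successes, n in [(9, 12), (900, 1200)]:
    lo, hi = wald_interval(successes, n)
    print(f"{successes}/{n}: {successes / n:.0%} "
          f"(95% CI roughly {lo:.0%} to {hi:.0%})")
```

With 12 users the plausible range spans from a coin flip to near certainty; with 1,200 it narrows to a few percentage points.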
12. Ambiguity aversion
We prefer known risks over unknown ones, even when the unknown option has more upside. This plays out in hiring, platform choices, and market expansion. Uncertainty isn't always bad; it’s just harder to measure.
13. The planning fallacy
We consistently underestimate how long tasks will take. Building a feature "in two weeks" has become the punchline of many retrospectives. I’ve learned to pad timelines and still expect surprises.
14. Déformation professionnelle
We interpret everything through the lens of our own expertise. I see this in engineers who want to code their way out of every problem. Sometimes the best fix isn’t technical.
15. Exponential growth bias
We struggle to intuitively grasp compounding growth. Whether it’s debt, data scale, or user acquisition, we plan linearly when the curve is anything but.
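Two lines of arithmetic with invented numbers show the gap: linear planning undershoots a compounding curve by a factor of three to four within a year.

```python
# Hypothetical: a table growing 5% per week. Linear intuition extrapolates
# "5% times 52 weeks"; compounding tells a very different story.
rows = 1_000_000
weekly_growth = 0.05
weeks = 52

linear_guess = rows * (1 + weekly_growth * weeks)
compounded = rows * (1 + weekly_growth) ** weeks

print(f"linear guess after a year: {linear_guess:,.0f} rows")  # ~3.6 million
print(f"compounded after a year:   {compounded:,.0f} rows")    # ~12.6 million
```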
Awareness doesn’t eliminate bias—but it’s the first step. By naming these tendencies, we can pause before reacting and make more grounded decisions. In business, code, or life, that pause is often where clarity begins.