
How Cognitive Biases Undermine Strategic Thinking

Exploring how cognitive biases impact decision-making in data design and evaluation.

Caryle Blondell
3 minute read

How Cognitive Biases Creep into Data Design

(And How We Can Catch Them)

Over the years, I’ve led and advised on everything from data infrastructure rollouts to user-focused product design. No matter the context, one pattern shows up consistently: we’re not as objective as we think.

Whether we’re setting KPIs, choosing metrics, or reviewing outcomes, our thinking is constantly influenced by hidden biases. Here are a few cognitive pitfalls I’ve encountered—and strategies I’ve learned (and used) to course-correct.


1. Survivorship Bias: We’re Learning from the Winners, Not the System

We love case studies. Success stories make great decks and digestible narratives. But success alone doesn’t tell the whole story: if we only study what worked, we miss the lessons from what didn’t, and we risk mistaking correlation for causation.

I’ve seen this firsthand in ops playbooks and vendor selection. Teams copy tactics from top performers without realizing those same tactics failed in dozens of other contexts.

What I do now: I ask, What don’t we see here? For every “best practice,” I try to surface the null cases: the teams that did the same thing and got different results. Often, the context (not the tactic) made the difference.
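To make this concrete, here’s a minimal simulation (plain Python; the team counts, adoption rate, and success rate are all invented for illustration) of what studying only the winners hides: the share of winners who used a tactic can look persuasive even when the tactic has no effect at all.

    import random

    # Hypothetical setup: 1,000 teams, a popular tactic that (by construction)
    # has zero effect on success. Success here is pure context/luck.
    random.seed(0)
    teams = [{"tactic": random.random() < 0.6,    # 60% of teams adopt the tactic
              "success": random.random() < 0.2}   # 20% succeed, independently of it
             for _ in range(1000)]

    winners = [t for t in teams if t["success"]]

    # The case-study view: only winners are visible.
    winners_using_tactic = sum(t["tactic"] for t in winners) / len(winners)

    # The full-system view needs the non-survivors we rarely study.
    def success_rate(group):
        return sum(t["success"] for t in group) / len(group)

    with_tactic = [t for t in teams if t["tactic"]]
    without_tactic = [t for t in teams if not t["tactic"]]

    print(f"'Most winners used the tactic': {winners_using_tactic:.0%}")
    print(f"Success rate with the tactic:    {success_rate(with_tactic):.0%}")
    print(f"Success rate without the tactic: {success_rate(without_tactic):.0%}")
    # The first number looks like evidence; the last two show the tactic changed nothing.

The “most winners used it” figure only becomes meaningful once it is compared against the teams that never made it into the deck.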


2. Confirmation Bias: We Measure What We Hope Is True

In data projects, it’s easy to go looking for proof that our programs are working. But rigorous evaluation requires seeking out disconfirming evidence, not just confirming what we hope is true.

This hit me hard in a program review a few years ago. We had glowing metrics—but only because we weren’t asking the tough questions. It turned out our impact was more limited than we thought.

How I shifted: I now frame evaluation questions as scientific hypotheses and ask: What result would contradict our theory of change? If we’re serious about outcomes, we have to make space for uncomfortable answers.
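One way to operationalize this (a rough sketch; the metric names, thresholds, and observed values below are invented for illustration) is to write the disconfirming conditions down before results come in, so the review is forced to check them alongside the headline numbers:

    # Pre-registered at design time: what would confirm, and what would
    # contradict, our theory of change. (Hypothetical metrics and thresholds.)
    theory_of_change = {
        "confirming": {"repeat_usage_rate": lambda x: x >= 0.40},
        "disconfirming": {"support_tickets_per_user": lambda x: x > 1.5,
                          "days_to_first_outcome": lambda x: x > 30},
    }

    # Observed after launch (made-up values).
    observed = {"repeat_usage_rate": 0.47,
                "support_tickets_per_user": 2.1,
                "days_to_first_outcome": 12}

    confirming_met = all(check(observed[m])
                         for m, check in theory_of_change["confirming"].items())
    red_flags = [m for m, check in theory_of_change["disconfirming"].items()
                 if check(observed[m])]

    print(f"Confirming evidence present: {confirming_met}")
    print(f"Disconfirming evidence:      {red_flags or 'none'}")
    # A glowing headline metric can coexist with a pre-registered red flag;
    # the review has to weigh both, not just the number we hoped to see.

The mechanics matter less than the sequencing: the contradicting conditions are committed to before anyone sees the data.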


3. Sunk Cost Fallacy: Staying Invested When the Map No Longer Matches the Terrain

This one is brutal. You’ve already poured six months into a system or initiative, and pivoting feels like failure. But implementation evaluation isn’t about preserving sunk costs; it’s about realigning to changing needs and evidence.

What I’ve started doing: Midway through large rollouts, I set aside space for “exit reviews”—a brutal but honest check: Would I start this project again today, knowing what I now know? If not, I ask: What’s the minimum graceful exit?


4. Action Bias: The Urge to Do Something… Anything

Under uncertainty, we’re wired to move quickly: fire off a new initiative, change a process, or launch a fix. But sometimes, deliberate inaction can be the most strategic move.

This shows up a lot in operations: when early data is ambiguous, we act prematurely and introduce noise instead of clarity.

My go-to filter: I ask, Are we acting on signal, or reacting to stress? If it’s the latter, we slow down. We revisit the design logic. We test assumptions first.
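When the question is whether early numbers are signal at all, even a crude back-of-the-envelope test helps. Here’s a sketch with invented counts, assuming a simple two-proportion comparison is a reasonable fit for the metric:

    import math

    def two_proportion_z(success_a, n_a, success_b, n_b):
        # Approximate z-score for the difference between two observed proportions.
        p_a, p_b = success_a / n_a, success_b / n_b
        p_pool = (success_a + success_b) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        return (p_a - p_b) / se

    # Early post-change data vs. the prior baseline (hypothetical counts).
    z = two_proportion_z(success_a=42, n_a=400,     # two weeks after the change
                         success_b=130, n_b=1100)   # baseline period
    print(f"z = {z:.2f}")
    # |z| well under ~2 means the apparent dip is comfortably within sampling
    # noise at this volume: a cue to keep collecting data, not launch another fix.

It’s no substitute for proper experiment design, but it is often enough to separate “we have a problem” from “we’re reacting to week-to-week wobble.”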


Final Thought: Bias Isn’t a Personal Flaw—It’s a Design Opportunity

Understanding bias isn’t about blaming ourselves. It’s about designing smarter systems. Evaluation should be baked into the design—not bolted on after launch.

That means:

  • Using pre-mortems to anticipate failure before it happens.
  • Building feedback loops that test assumptions over time.
  • Creating safe space for iteration and exit without ego.

I’ve seen these principles pay off—in fewer false starts, more honest roadmaps, and ultimately, better outcomes. If you’re designing programs, leading teams, or building systems, this kind of cognitive clarity isn’t optional. It’s strategic infrastructure.