Growth

Growth Experiments Aren't About Winning — They're About Learning

TL;DR

Most teams say they're "running growth experiments", but what they're usually doing is optimisation theatre: testing for short-term uplift, chasing tidy metrics, and mistaking activity for progress.

Real growth experimentation starts with beliefs, not ideas; prioritises learning over uplift; and treats metrics as decision tools, not performance shields.

If an experiment doesn't change what you believe, what you prioritise, or what you do next, it wasn't a growth experiment, even if the numbers went up.


When "Experimentation" Loses Its Meaning

Growth experimentation has become one of those phrases that sounds rigorous but often isn't.

Teams talk about running tests, shipping experiments, moving fast. Dashboards fill up. Results are shared. Yet underneath the motion, something is missing: clarity about what is actually being learned.

In many organisations, experiments are framed to answer the wrong question. Instead of helping teams understand customers, products, or markets more deeply, they're designed to produce a neat uplift, or at least a defensible story.

This piece is about reclaiming what growth experimentation is for.

What Teams Are Usually Doing When They Say They're "Running Growth Experiments"

In practice, most "growth experiments" fall into one of three buckets:

1. Performance optimisation
Tweaking creatives, landing pages, headlines, or buttons to improve a known metric.

2. Channel testing
Running paid activity across different platforms to see which performs best on cost or volume.

3. Activity validation
Shipping something, anything, because shipping itself has become the proxy for progress.

None of these are inherently bad. In fact, they're often necessary.

The problem is that they're frequently confused with growth experimentation when they're really about efficiency within a known system, not learning about an unknown one.

Optimisation vs Exploration: The Fault Line Most Teams Miss

The key distinction is simple:

  • Optimisation experiments ask: How do we get more out of what we already believe is true?
  • Growth experiments ask: What might we be wrong about?

Optimisation seeks uplift. Growth seeks insight.

When teams optimise too early, or mistake optimisation for growth, they narrow their field of view. They iterate inside a small box rather than expanding what's possible.

That's how you end up with "successful" experiments that don't meaningfully change the trajectory of the business.

Beliefs vs Ideas: The Missing Layer in Most Experimentation

This is where many teams go wrong.

They jump straight to ideas:

  • Test this creative
  • Try this incentive
  • Add this feature

But ideas without context are just tactics.

A belief is different. A belief is a claim about how the world works: about customer behaviour, motivation, friction, or value.

For example:

  • We believe this group values craftsmanship over price.
  • We believe reducing friction here will increase long-term usage.
  • We believe this moment in the journey is where confidence is lost.

Ideas are simply ways to probe that belief.

One belief can (and should) spawn multiple ideas. If you treat each idea as a standalone win-or-lose test, you collapse learning into a binary outcome and miss the bigger picture. This is where design thinking and growth experimentation connect — both start with beliefs about real problems, not just ideas to test.

How False Learnings Are Created

False learnings are rarely the result of bad intent. They usually emerge from pressure, inexperience, or poor structure.

Common causes include:

  • Poor isolation of variables — Multiple things changing at once, making causality impossible to establish.
  • Contextual noise — Seasonality, parallel campaigns, product changes, or external factors muddying results.
  • Misattribution — Assigning success or failure to the wrong action because it happens to coincide.
  • Over-segmentation — Isolating tests so aggressively that results no longer represent real behaviour at scale.

The result is confidence without correctness: decisions made on shaky ground that feel data-driven but aren't.
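
A quick simulation makes the risk concrete. The sketch below is a minimal illustration (every number in it is hypothetical): when several variants are compared against one control and only the best-looking result is reported, the chance of a false "winner" is far higher than the 5% a single test implies.

    import random

    random.seed(42)

    BASE_RATE = 0.05   # true conversion rate, identical for every arm
    VISITORS = 1_000   # visitors per arm (hypothetical)
    VARIANTS = 6       # variants compared against one control
    TRIALS = 1_000     # simulated experiments

    def conversions(rate, n):
        """Simulate n visitors converting independently at the given rate."""
        return sum(random.random() < rate for _ in range(n))

    def z_score(c_ctrl, c_var, n):
        """Two-proportion z-score with pooled variance, equal arm sizes."""
        p_pool = (c_ctrl + c_var) / (2 * n)
        se = (2 * p_pool * (1 - p_pool) / n) ** 0.5
        return 0.0 if se == 0 else (c_var / n - c_ctrl / n) / se

    false_wins = 0
    for _ in range(TRIALS):
        ctrl = conversions(BASE_RATE, VISITORS)
        # Report only the best-looking variant, as teams under pressure do.
        best_z = max(z_score(ctrl, conversions(BASE_RATE, VISITORS), VISITORS)
                     for _ in range(VARIANTS))
        if best_z > 1.96:  # looks "significant at 95%" in isolation
            false_wins += 1

    print(f"Experiments with a false 'winner': {false_wins / TRIALS:.0%}")
    # No variant is actually better, yet well over 5% of experiments
    # still produce a reportable uplift: a false learning.

Over-segmentation has the same mechanics: every extra slice is another comparison, and another chance to be confidently wrong.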

Why Teams End Up Measuring the Wrong Things

There are a few recurring reasons:

1. Lack of comfort with data

Many people in growth or marketing roles aren't as fluent with data as they need to be. Not mathematically, but practically: the ability to extract, manipulate, and interrogate datasets is still rarer than it should be.

When confidence is low, teams default to:

  • Platform dashboards
  • Pre-packaged metrics
  • What's easiest to report

2. Performance bias

Pressure to show results pushes teams toward short-term, bottom-of-funnel metrics, even when the experiment acts earlier in the journey.

This leads to false negatives:

  • Real impact occurs upstream
  • Confidence or momentum improves
  • But final conversion doesn't move yet

The experiment is then written off as a failure, even though it worked.

3. Metric misalignment

Looking too broadly hides signal. Looking too narrowly misses context.

A metric is only useful if it matches where the experiment intervenes in the journey.

A Rule of Thumb for Choosing the Right Metric

There isn't a single universal metric. But there is a reliable rule of thumb:

Choose the metric that sits closest to the belief you're testing, and whose movement you understand in terms of business impact.

That means:

  • You know why this metric should move
  • You know how it ladders up (or doesn't) to revenue, retention, or margin
  • You can observe it with enough sensitivity and speed to inform a decision

One primary metric is usually enough. Supporting metrics are fine, as long as their role is clear.

Simplicity isn't a compromise. It's an accelerant.
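
To make the "sensitivity and speed" criterion concrete, here's a minimal sketch, assuming a standard two-proportion test at 95% confidence and 80% power, with hypothetical base rates. It estimates how many users per arm you'd need before a metric can plausibly move a decision.

    from math import ceil, sqrt

    def sample_size_per_arm(base_rate, rel_lift, z_alpha=1.96, z_power=0.84):
        """Approximate users per arm for a two-proportion test
        (normal approximation, 95% confidence, 80% power)."""
        p1 = base_rate
        p2 = base_rate * (1 + rel_lift)
        p_bar = (p1 + p2) / 2
        n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
              + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
             / (p2 - p1) ** 2)
        return ceil(n)

    # Hypothetical journey: the experiment intervenes at an upstream step.
    for name, rate in [("upstream step completion", 0.20),
                       ("final purchase", 0.02)]:
        n = sample_size_per_arm(rate, rel_lift=0.10)
        print(f"{name:>25}: ~{n:,} users per arm for a 10% relative lift")

With these illustrative numbers, the upstream metric resolves with roughly 6,500 users per arm; the purchase metric needs over 80,000. That gap is the practical argument for measuring where the experiment actually intervenes.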

When Teams Are Hiding Behind Metrics Instead of Using Them

There are clear signals when this is happening.

1. Metrics change to protect the narrative

The same question gets answered with a different metric each time: whichever one looks best in that moment. Consistency disappears. Trends become impossible to follow.

2. Metrics lack causal explanation

Teams can tell you what moved, but not why. There's no connection between actions taken and outcomes observed.

3. No forward intent

Metrics are reported, but nothing changes as a result. No reprioritisation. No new hypotheses. No next step.

4. Vanity by abstraction

High-level numbers are used where decision-level metrics are required, creating the illusion of progress without accountability.

When metrics become a shield, growth stalls.

The Role of Psychology

Growth isn't just about customer behaviour. It's also about organisational behaviour.

Teams:

  • Fear cannibalisation
  • Avoid exposing uncomfortable truths
  • Prefer activity that feels productive

Sometimes the biggest barrier to growth isn't the market; it's internal psychology. Recognising that is part of senior growth work.

The Point of All This

Growth experimentation isn't about being right quickly. It's about becoming less wrong over time.

If your experiments:

  • Change what you believe
  • Open new paths to explore
  • Inform better decisions under pressure

They're working, even if the graph didn't spike this week.

And if they don't? Then no amount of uplift will save you in the long run.

Final Thought

Metrics don't create growth. Decisions do.

Metrics are just the tools we use to make those decisions with a little more humility, and a little less guesswork.