Andreas Cederblad
experimentation · 6 min · April 2, 2025

Building an Experimentation Culture: From Ad Hoc Tests to Growth Systems

Running A/B tests doesn't give you an experimentation culture. Building systems for learning does. Here's how to make the shift from random testing to systematic growth.

Most companies that claim to have an experimentation culture don't. They have an experimentation habit -- someone on the team runs occasional A/B tests, mostly on button colours and headline copy, and reports the results in a deck that nobody reads twice.

That's not a culture. That's a checkbox.

A real experimentation culture changes how an organisation thinks, decides, and learns. It's the difference between running tests and building a system for compounding knowledge. I've seen this transformation firsthand, and it's the single highest-leverage change a growth-stage company can make.

What an Experimentation Culture Actually Looks Like

It has three properties:

1. Hypotheses before tactics. Every initiative starts with a falsifiable hypothesis. Not "let's try a new landing page" but "we believe that reducing the number of form fields from 7 to 3 will increase lead conversion by 15% because our analytics show 60% drop-off at the form step."

The hypothesis forces clarity. It makes you articulate what you believe, why you believe it, and what would prove you wrong. That's the foundation of learning.

2. Decisions follow data, not opinions. In most organisations, the highest-paid person's opinion wins. In an experimentation culture, the data wins. The CEO's pet idea gets tested the same way as the intern's suggestion. If the data says the intern was right, the intern was right.

This is harder than it sounds. It requires ego reduction at the leadership level. But it's also liberating -- it removes politics from product and marketing decisions and replaces them with evidence.

3. Negative results are valued. A test that disproves your hypothesis is not a failure. It's information. Some of the most valuable experiments I've run were ones where the "obvious" improvement made things worse. That learning prevented months of misguided effort.

If your team only celebrates wins, they'll stop testing risky ideas. The safest test is the one with the smallest learning potential. Real experimentation requires taking swings that might miss.

The Maturity Curve

I think about experimentation maturity in four stages:

Stage 1: Reactive Testing

Tests happen sporadically, usually when someone reads a blog post about conversion optimisation or when a designer wants to settle a debate. There's no backlog, no prioritisation framework, and no systematic tracking of results.

Most companies are here. It's better than nothing, but the learning is random and easily lost.

Stage 2: Structured Testing

A dedicated person or small team owns experimentation. There's a backlog of test ideas, a prioritisation method (ICE scoring or similar), and a process for designing and reviewing experiments. Results are documented.

This is where I start with most CRO and experimentation engagements. Getting from Stage 1 to Stage 2 is primarily a process problem.

Stage 3: Integrated Experimentation

Every team -- product, marketing, sales, customer success -- runs experiments as part of their workflow. There's a shared experimentation platform, a company-wide repository of learnings, and cross-functional knowledge sharing.

Getting here requires leadership buy-in and investment. The ROI is enormous -- I've seen companies at Stage 3 make better decisions in a month than Stage 1 companies make in a year.

Stage 4: Experimentation as Operating System

The company treats experimentation as its primary decision-making mechanism. Strategy is a series of hypotheses. Execution is a series of tests. Learning velocity is a core competitive metric.

Very few companies reach this stage. The ones that do tend to dominate their markets.

Building the Foundation

If you're at Stage 1 and want to move to Stage 2, here's what I recommend:

Create an Experiment Backlog

Every idea that starts with "we should try..." goes into the backlog. Product changes, marketing tactics, pricing experiments, messaging variations -- all of it. The backlog is not a to-do list. It's an inventory of hypotheses.

Prioritise Ruthlessly

Not every idea deserves a test. Prioritise based on three factors:

  • Impact potential. How much could this move a metric that matters?
  • Confidence. How strong is the evidence that this will work?
  • Effort. How long will it take to set up and run?

High impact, moderate confidence, low effort -- that's your sweet spot. Test the ideas where the learning is most valuable, not where the outcome is most certain.
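To make the scoring concrete, here's a rough sketch of ICE-style prioritisation in Python. The 1-10 scales, the use of "ease" as the inverse of effort, and the example ideas are illustrative choices, not a standard -- a spreadsheet does the same job:

```python
from dataclasses import dataclass

@dataclass
class TestIdea:
    """One backlog entry, scored 1-10 on each ICE factor."""
    name: str
    impact: int      # how much could this move a metric that matters?
    confidence: int  # how strong is the evidence that it will work?
    ease: int        # inverse of effort: 10 = trivial to set up and run

    @property
    def ice_score(self) -> float:
        # Plain average of the three factors; some teams multiply instead.
        return (self.impact + self.confidence + self.ease) / 3

backlog = [
    TestIdea("Cut form fields from 7 to 3", impact=8, confidence=6, ease=7),
    TestIdea("Change CTA button colour", impact=2, confidence=5, ease=10),
    TestIdea("Rewrite pricing-page value prop", impact=9, confidence=4, ease=4),
]

# Review the backlog highest score first.
for idea in sorted(backlog, key=lambda i: i.ice_score, reverse=True):
    print(f"{idea.ice_score:4.1f}  {idea.name}")
```

The exact arithmetic matters far less than the discipline: every idea gets scored the same way before it gets built.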

Establish a Minimum Viable Process

You need four things:

  1. A hypothesis template (we believe X because Y, and we'll measure Z)
  2. A statistical rigour standard (sample size, significance threshold, runtime)
  3. A documentation format (what we tested, what happened, what we learned)
  4. A review cadence (weekly or bi-weekly experiment reviews)

That's it. Don't over-engineer the process. Start simple and refine as you go.
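If it helps to see items 1 and 3 as one structure, here's a minimal sketch of an experiment record in Python. The field names and example values are mine, not a standard format -- a spreadsheet row or a wiki page works just as well:

```python
from dataclasses import dataclass

@dataclass
class ExperimentRecord:
    """Minimal documentation for one experiment: hypothesis in, learning out."""
    # Hypothesis template: we believe X because Y, and we'll measure Z.
    belief: str              # X: the change and its expected effect
    rationale: str           # Y: the evidence behind the belief
    metric: str              # Z: the single metric that settles it
    # Statistical rigour standard, agreed before launch.
    min_sample_per_arm: int  # from a pre-launch sample-size calculation
    significance_level: float = 0.05
    # Filled in after the test ends.
    result: str = ""
    learning: str = ""

record = ExperimentRecord(
    belief="Cutting form fields from 7 to 3 lifts lead conversion 15%",
    rationale="Analytics show 60% drop-off at the form step",
    metric="lead conversion rate",
    min_sample_per_arm=9800,
)
```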

The Connection to Growth Metrics

Experimentation without direction is just curiosity. The experiments that compound are the ones connected to your core growth metrics and KPIs.

Start with your biggest lever. If your conversion rate is 1.5% and your industry average is 3%, that's where your experimentation energy should go. If your retention rate is dropping quarter over quarter, that's the problem worth testing solutions for.

Every experiment should trace back to a metric in your North Star framework. If it doesn't, ask whether it's worth running. Focus beats volume in experimentation just as it does in strategy.

Common Mistakes

Testing too small. Button colour tests might reach statistical significance, but they rarely move business outcomes. Test bigger ideas. Restructure the page. Change the offer. Rewrite the value proposition.

Stopping too early. Statistical significance requires adequate sample sizes and runtime. Stopping a test because it looks good after two days is not experimentation. It's confirmation bias with extra steps.

Not testing at all because "we don't have enough traffic." You can test with 10,000 monthly visitors. You just need to test fewer things simultaneously and run them longer. Low traffic is a constraint, not a disqualification.
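To put numbers on that, here's a back-of-the-envelope calculation using the standard normal-approximation formula for comparing two proportions. The baseline rate, target lift, and traffic figures are illustrative:

```python
import math

def sample_size_per_arm(p_base: float, rel_lift: float) -> int:
    """Approximate per-arm sample size for a two-proportion z-test."""
    p_var = p_base * (1 + rel_lift)   # variant rate under the hypothesis
    z_alpha, z_beta = 1.96, 0.84      # two-sided alpha = 0.05, power = 0.80
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    n = (z_alpha + z_beta) ** 2 * variance / (p_var - p_base) ** 2
    return math.ceil(n)

n = sample_size_per_arm(p_base=0.02, rel_lift=0.30)  # baseline 2%, detect +30%
monthly_visitors = 10_000
print(f"{n} visitors per arm, {2 * n} total")
print(f"~{2 * n / monthly_visitors:.1f} months if every visitor enters the test")
# -> roughly 9,800 per arm: about two months at this traffic level.
```

Two months feels slow, but it also answers the "stopping too early" mistake above: the runtime is fixed before launch, so there's nothing to peek at.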

Treating experimentation as a marketing-only activity. The best experimentation cultures test across the entire customer journey -- acquisition, activation, retention, revenue, and referral. Product teams, customer success teams, and even operations teams should be running experiments.

The Cultural Investment

The tools are easy. A/B testing platforms, analytics, statistical calculators -- all commoditised. The hard part is the culture.

It means leaders admitting they don't know the answer. It means teams accepting that their favourite idea might lose. It means investing time in learning when the organisation is addicted to doing.

But the payoff is extraordinary. An experimentation culture compounds knowledge the way compound interest compounds money. Every test, win or lose, makes the next decision slightly better. Over months and years, that compounding effect creates an insurmountable advantage.

The question isn't whether you can afford to build this culture. It's whether you can afford not to.
