Designing and building an experimentation programme from zero — achieving a 21% conversion rate increase through systematic audit, exclusion testing, and stakeholder co-creation
Two years of flat or declining conversion performance. No structured CRO programme. No systematic approach to diagnosing or addressing conversion issues.
A sister brand within Sykes had been struggling with conversion for consecutive years — negative momentum compounding. The business knew something was wrong but had no structured way to diagnose what, no hypothesis-driven framework to prioritise fixes, and no testing infrastructure to validate solutions before full rollout.
The mandate was open-ended: design and build an experimentation programme from scratch. No existing process to inherit. No institutional memory of what good looked like. Just flatlined performance and a recognition that guesswork wasn't working.
The challenge wasn't just technical. A programme that the CRO team runs alone is brittle. A programme that stakeholders across product, design, and marketing actively champion is sustainable. The goal was the latter.
I worked in three phases, each building the evidence base for the next.
Phase 1 — Understand Before Testing
I started with a full website conversion audit: not a surface-level heuristic review, but a systematic mapping of funnel drop-off points, page-level friction patterns, and behavioural signals from session recordings and heatmaps. I cross-referenced these findings with existing experiment insights and UX research from the parent Sykes brand to formulate evidence-based hypotheses tailored to the sister brand context.
From this audit, I built an initial hypothesis backlog: a prioritised set of evidence-based problem statements ranked by likely impact and implementation complexity.
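A backlog like this can be ranked mechanically once each hypothesis carries an impact and complexity estimate. The sketch below is illustrative only: the hypothesis names and scores are hypothetical, and the simple impact-over-complexity ratio stands in for whatever scoring model a team actually adopts.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """An evidence-based problem statement from the audit (illustrative)."""
    name: str
    impact: int       # estimated conversion impact, 1 (low) to 5 (high)
    complexity: int   # implementation complexity, 1 (simple) to 5 (hard)

    @property
    def priority(self) -> float:
        # Favour high expected impact and low implementation cost.
        return self.impact / self.complexity

# Hypothetical backlog entries, not the real programme's hypotheses.
backlog = [
    Hypothesis("Simplify checkout form", impact=5, complexity=2),
    Hypothesis("Redesign navigation", impact=3, complexity=4),
    Hypothesis("Clarify pricing display", impact=4, complexity=1),
]

# Rank: highest priority score first.
for h in sorted(backlog, key=lambda h: h.priority, reverse=True):
    print(f"{h.name}: {h.priority:.2f}")
```

The ratio keeps cheap, high-leverage fixes at the top of the queue, which is what an early-stage programme needs to build credibility quickly.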
Phase 2 — Exclusion Testing: Finding What Users Value
Before building new experiences, I ran targeted exclusion tests. The logic: rather than immediately adding features or changing layouts, first understand what's essential by removing elements and observing user response.
These tests mapped areas of user sensitivity, identifying what users valued most (worth doubling down on) and what was creating noise: elements whose removal increased the visibility of what mattered.
This phase was diagnostic. Exclusion tests answered 'what's essential?' before ideation tests answered 'what could we add?' This sequencing prevented the common mistake of layering complexity onto an already unclear experience.
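The diagnostic read-out of an exclusion test is a standard two-variant comparison: control keeps the element, the variant removes it, and the conversion rates are compared for a significant difference. A minimal sketch using a two-proportion z-test, with made-up visitor and conversion counts (the real programme's test platform and numbers are not shown here):

```python
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test comparing conversion rates of two variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: control keeps the element, the exclusion variant removes it.
z, p = two_proportion_z(conv_a=480, n_a=10_000, conv_b=545, n_b=10_000)
# Interpretation: if removal leaves conversion flat or improves it (positive z,
# small p), the element was noise; if conversion drops significantly, users
# valued it and it is a candidate to double down on.
```

Framing every exclusion test this way keeps the phase honestly diagnostic: each result is a statement about what users need, not yet a proposal for what to build.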
Phase 3 — Stakeholder Workshops: Building Ownership, Not Just Ideas
With two foundations in place, I ran stakeholder ideation workshops. These weren't brainstorming sessions starting from zero: a growing library of live experiment data gave early signal on user behaviour, and a structured ideation framework encouraged creativity while anchoring assumptions in evidence. Stakeholders left these sessions with their names attached to experiments. When results came back, they felt ownership of outcomes because they'd been involved in framing the question, not just handed a conclusion.
The 21% conversion rate increase was the compounding effect of multiple things working together: successful experiments shipping, low-value experiences being systematically removed, and the site experience becoming progressively clearer and more aligned with user intent.
The stakeholder outcome was equally important for long-term programme health. Stakeholders who participated in the evidence-gathering and ideation phases became active champions. When the programme needed resources or prioritisation support, they advocated for it because they felt ownership of the outcomes.
Key secondary outcomes: experimentation vocabulary, process, and shared definition of success established; programme scaled without proportional increase in CRO team headcount; framework made replicable for other sister brands.
Building a programme from scratch is a culture project as much as a technical one. The infrastructure (testing platform, analytics, frameworks) is necessary but not sufficient. The harder work is building the organisational muscle memory that treats experimentation as a default mode of decision-making rather than an optional extra.
The decision to run exclusion tests before ideation tests was the most important sequencing choice. It meant the first round of stakeholder workshops started from behavioural evidence about what users valued, not abstract opinions about what we thought they wanted. That credibility carried forward into all subsequent ideation sessions.
If I were to build this again, I'd set up stakeholder reporting infrastructure earlier in the process — even before the first tests launched. The act of receiving regular experiment updates (even if inconclusive at first) builds the habit of engaging with data-driven outcomes, which compounds over time into genuine cultural shift.
Transferable principle: Don't build the programme you want. Build the programme the organisation is ready for. Start with evidence-gathering (audit, exclusion tests), then involve stakeholders in hypothesis generation once they've seen that user behaviour can contradict assumptions. Ownership follows evidence, not vice versa.