Designing and building an experimentation programme from zero — achieving 21% conversion rate increase
Two years of flat or declining conversion performance. No structured CRO programme. No approach to diagnosing or addressing conversion issues.
The mandate was open-ended: design and build an experimentation programme from scratch with no existing process to inherit.
The challenge wasn't just technical. A programme that the CRO team runs alone can become fragile. A programme that stakeholders across product, design, and engineering actively champion is sustainable. The goal was the latter.
I worked in three phases, each building the evidence base for the next.
Phase 1 - Understand Before Testing
I started with a full website conversion audit mapping existing funnel drop-off points, page-level friction patterns, and behavioural signals from session recordings and heatmaps. I cross-referenced these findings with existing experiment insights and UX research from the parent Sykes brand to formulate evidence-based hypotheses tailored to the sister brand context.
From this audit, I built an initial hypothesis backlog: a prioritised set of evidence-based problem statements ranked by likely impact and implementation complexity.
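A backlog like this can be prioritised with a simple scoring model. The sketch below is illustrative only: the field names, weights, and example hypotheses are assumptions, not the actual framework used. It scores each hypothesis by evidence-weighted impact divided by implementation complexity, so well-evidenced, high-impact, low-effort ideas rise to the top:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    statement: str
    impact: int       # expected conversion impact, 1-5
    evidence: int     # strength of supporting evidence, 1-5
    complexity: int   # implementation effort, 1-5 (higher = harder)

    @property
    def score(self) -> float:
        # Favour well-evidenced, high-impact, low-effort ideas
        return (self.impact * self.evidence) / self.complexity

# Hypothetical backlog entries for illustration
backlog = [
    Hypothesis("Simplify checkout form", impact=4, evidence=5, complexity=2),
    Hypothesis("Redesign homepage hero", impact=3, evidence=2, complexity=4),
    Hypothesis("Remove distracting promo banner", impact=3, evidence=4, complexity=1),
]

for h in sorted(backlog, key=lambda h: h.score, reverse=True):
    print(f"{h.score:5.1f}  {h.statement}")
```

Any such formula is a conversation starter rather than an oracle; its value is forcing each hypothesis to declare its evidence and cost explicitly.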
Phase 2 - Exclusion Testing: Finding What Users Value
Before building new experiences, I ran targeted exclusion tests. The logic: rather than immediately adding features or changing layouts, first understand what's essential by removing elements and observing user response.
These exclusion tests surfaced areas of user sensitivity, identifying what users valued most (worth doubling down on) and what was creating noise that, once removed, increased the visibility of what mattered.
The exclusion tests answered 'what's essential?' before ideation tests answered 'what could we add?' This sequencing prevented the common mistake of layering complexity onto an already unclear experience.
Phase 3 - Stakeholder Workshops: Building Ownership, Not Just Ideas
With experiments live, I ran stakeholder ideation workshops — but these weren't brainstorming sessions starting from zero.
Before each session, two foundations were in place: a growing library of live experiment data giving early signal on user behaviour, and a structured ideation framework that encouraged creativity while anchoring assumptions in evidence. Stakeholders left these sessions with their names attached to experiment ideas. When results came back, they felt ownership of the outcomes because they'd been involved in framing the question, not just handed a conclusion.
The 21% conversion rate increase resulted from multiple things working together: successful experiments shipping, low-value experiences being systematically removed, and the site experience becoming progressively clearer and better aligned with user intent.
The stakeholder outcome was equally important for long-term programme health. Stakeholders who participated in the evidence-framing and ideation phases became active champions.
Key secondary outcomes: experimentation vocabulary, process, and shared definition of success established; programme scaled without proportional increase in CRO team headcount; framework made replicable for other brands.
Building a programme from scratch is a culture project as much as a technical one. The infrastructure (testing platform, analytics, frameworks) is necessary but not sufficient. The harder work is building the organisational muscle memory that treats experimentation as a default mode of decision-making rather than an optional extra.
The decision to run exclusion tests before ideation tests was the most important sequencing choice. It meant the first round of stakeholder workshops started from behavioural evidence about what users valued, not abstract opinions about what we thought they wanted. That credibility carried forward into all subsequent ideation sessions.
If I were to build this again, I'd set up stakeholder reporting infrastructure earlier in the process, even before testing was scaled. The act of receiving regular experiment updates (even if inconclusive at first) builds the habit of engaging with data-driven outcomes, which compounds over time into a genuine cultural shift.
Transferable principle: Don't build the programme you want. Build the programme the organisation is ready for. Start with evidence-gathering (audit, exclusion tests), then involve stakeholders in hypothesis generation once they've seen that user behaviour can contradict assumptions. Ownership follows evidence, not vice versa.