Example: Conversion Experiment
This example shows how to run a conversion-focused experiment, where success is defined as “more visitors complete a key action.”
Goal
Increase the percentage of visitors who complete your key action after engaging with your agent.
What you’ll test
Pick one change to test, such as:
- A shorter first message vs a longer first message
- A different recommendation or offer
- A different call-to-action wording
Setup checklist
1) Create two variants
- Control: Your current experience
- Treatment: The version with your change
Set the split to 50/50 to start.
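A 50/50 split is typically implemented as a deterministic hash of the visitor ID, so each visitor always sees the same variant across sessions. A minimal sketch (the `assign_variant` helper and experiment name are illustrative, not part of any product API):

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str = "first-message-test") -> str:
    """Bucket a visitor into control or treatment with a stable 50/50 split."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # roughly uniform value in 0-99
    return "treatment" if bucket < 50 else "control"

# The same visitor always lands in the same variant:
assert assign_variant("visitor-123") == assign_variant("visitor-123")
```

Hashing on the visitor ID (rather than randomizing per request) keeps the experience consistent for returning visitors, which matters when the change you are testing is visible.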
2) Choose your primary metric
Pick a conversion metric that matches your goal.
Good primary metric examples:
- Purchase conversion
- Checkout start conversion
- Click-through conversion (for example, from a recommendation to a product page)
Add 1–3 secondary metrics for context, such as:
- Revenue (to make sure the lift isn’t coming from lower-quality conversions)
- Average order size (if applicable)
- Escalation rate (as a guardrail if you’re also handling support)
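Whichever conversion metric you pick, the primary number is simply conversions divided by exposures, computed per variant. A sketch with hypothetical counts:

```python
def conversion_rate(exposures: int, conversions: int) -> float:
    """Primary metric: share of exposed visitors who completed the key action."""
    return conversions / exposures if exposures else 0.0

# Hypothetical counts read from your experiment results
results = {
    "control":   {"exposures": 4000, "conversions": 320},   # 8.0%
    "treatment": {"exposures": 4000, "conversions": 368},   # 9.2%
}
for variant, r in results.items():
    print(f"{variant}: {conversion_rate(r['exposures'], r['conversions']):.1%}")
```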
3) Route traffic into the experiment
In Deploy → Traffic control, create a rule that sends a small percentage of eligible traffic to your experiment.
Recommended rollout plan:
- Start at 10% for the first day
- Increase to 25% once things look healthy
- Increase to 50% once you’re confident
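The staged rollout amounts to a stable percentage gate placed in front of the variant split. A sketch, assuming a `ROLLOUT_PERCENT` constant you raise by hand at each stage:

```python
import hashlib

ROLLOUT_PERCENT = 10  # raise to 25, then 50, as the experiment looks healthy

def in_experiment(visitor_id: str) -> bool:
    """Send a stable slice of eligible traffic into the experiment."""
    # Salt the hash differently from variant assignment so the rollout
    # gate and the 50/50 split stay independent of each other.
    bucket = int(hashlib.sha256(f"rollout:{visitor_id}".encode()).hexdigest(), 16) % 100
    return bucket < ROLLOUT_PERCENT
```

Because the gate is `bucket < ROLLOUT_PERCENT`, raising the percentage only adds new visitors; everyone already in the experiment stays in.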
Running the experiment
Watch the early signals
In the experiment results, look for:
- Exposures rising evenly across variants (roughly matching your split)
- Guardrails staying stable
Let it run long enough
Avoid deciding based on a small sample. It’s normal for results to swing while exposure counts are still low, so wait until each variant has accumulated enough exposures that the difference is unlikely to be noise.
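One common way to judge whether a difference is still just noise is a two-proportion z-test on the conversion counts. A sketch (the counts here are hypothetical):

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Approximate z-score for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 8.0% control vs 9.2% treatment, 4,000 exposures each
z = two_proportion_z(320, 4000, 368, 4000)
# |z| >= 1.96 is the usual bar for ~95% confidence in a two-sided test;
# here z falls just short of it, so keep collecting exposures before deciding.
```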
Deciding what to ship
Ship the treatment when:
- The primary metric is meaningfully better than control
- Secondary metrics don’t reveal a bad trade-off
- The improvement stays consistent as exposures grow
Stop the experiment early if:
- The treatment clearly harms a guardrail metric
- The experience is broken or confusing for visitors
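The ship/stop criteria above can be encoded as a simple decision rule (the thresholds are illustrative only; pick margins that match your business):

```python
def decide(primary_lift: float, guardrails_ok: bool, stable: bool,
           min_lift: float = 0.01) -> str:
    """Map the criteria above onto an action: ship, stop, or keep running."""
    if not guardrails_ok:
        return "stop"            # treatment harms a guardrail metric
    if primary_lift >= min_lift and stable:
        return "ship treatment"  # meaningful, consistent improvement
    return "keep running"        # not enough evidence either way

assert decide(0.012, guardrails_ok=True, stable=True) == "ship treatment"
assert decide(0.012, guardrails_ok=False, stable=True) == "stop"
```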