Set Up an Experiment
This guide walks you through planning, launching, and monitoring an experiment.
Before you start
- Decide what change you’re testing (one change per experiment is best).
- Decide what “success” means (pick one primary metric).
- Make sure the outcome you care about is already showing up in your analytics today.
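Picking one primary metric up front also lets you estimate how long the experiment needs to run. As a rough planning aid, here is a normal-approximation sample-size sketch for a conversion-style metric (the baseline rate and detectable lift below are placeholder numbers, not recommendations):

```python
import math

def sample_size_per_variant(baseline_rate: float, min_lift: float) -> int:
    """Rough users needed per variant for a two-sided test at
    alpha = 0.05 with 80% power (normal approximation)."""
    z_alpha = 1.96  # two-sided 5% significance
    z_beta = 0.84   # 80% power
    p = baseline_rate
    n = 2 * (z_alpha + z_beta) ** 2 * p * (1 - p) / min_lift ** 2
    return math.ceil(n)

# e.g. detect a 2-point lift on a 10% baseline conversion rate
print(sample_size_per_variant(0.10, 0.02))  # roughly 3,500 users per variant
```

If the number looks larger than your traffic can supply in a few weeks, consider a bigger minimum lift or a higher-volume metric.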
1) Create the experiment
- Open your agent.
- Go to Deploy → Experiments.
- Click New experiment.
Fill in:
- Name: A clear label your team will recognize.
- Identifier: A short, stable name for reporting (keep it consistent).
- Description (optional): What you’re changing and why.
- Dates (optional): Use these if you want the experiment to run on a schedule.
2) Add variants (the versions you’re comparing)
Variants are the versions of your agent you want to compare.
- Add at least two variants.
- Choose one variant as Control (your baseline).
- Set the weight for each variant (how the experiment splits traffic).
Recommendations:
- Start with a 50/50 split for two variants; if you want to limit early exposure to the change, weight Control more heavily (for example 90/10).
- Keep variant names simple (for example: “control” and “treatment”).
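The weight split can be pictured as deterministic bucketing: hash a stable user identifier into a bucket from 0 to 99, then walk the cumulative weights. A minimal sketch of that idea (the function and identifiers are hypothetical, not this product's API):

```python
import hashlib

def assign_variant(user_id: str, experiment_id: str, weights: dict) -> str:
    """Deterministically bucket a user into a variant by weight.
    Same user + same experiment always yields the same variant."""
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000 / 100.0  # a point in [0, 100)
    cumulative = 0.0
    for variant, weight in weights.items():
        cumulative += weight
        if bucket < cumulative:
            return variant
    return next(iter(weights))  # fallback if weights sum to less than 100

weights = {"control": 50, "treatment": 50}
print(assign_variant("user-123", "exp-greeting-v2", weights))
```

Deterministic bucketing is what keeps a returning user in the same variant across sessions, which is why the split is per user rather than per request.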
3) Choose what to measure
You’ll choose:
- Primary metric: The main definition of success. You should only have one.
- Secondary metrics (optional): Helpful context (trade-offs or wins you didn’t expect).
- Guardrails (optional): “Don’t let this get worse” metrics.
4) Send traffic to the experiment
Creating an experiment does not automatically send traffic to it.
- Go to Deploy → Traffic control.
- Create a new rule.
- Choose who is eligible (for example: channel, customer segment, marketing campaign).
- Choose a starting percentage (start small if you’re unsure).
- Choose your experiment as the destination.
Tips:
- Start small (like 5–10%) and expand once everything looks healthy.
- Keep rules simple so results are easier to interpret.
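A routing rule like the one above boils down to two checks: eligibility first, then a percentage gate. A toy sketch of that flow (the request fields and rule shape are made up for illustration):

```python
import random

def route(request: dict, rule: dict) -> str:
    """Send a slice of eligible traffic to an experiment; everything
    else falls through to the default agent version."""
    eligible = request.get("channel") == rule["eligible_channel"]
    if eligible and random.random() * 100 < rule["percentage"]:
        return rule["experiment"]
    return "default"

rule = {"eligible_channel": "web_chat", "percentage": 10,
        "experiment": "exp-greeting-v2"}
route({"channel": "web_chat"}, rule)  # experiment about 10% of the time
route({"channel": "email"}, rule)     # always "default": not eligible
```

Note the order: ineligible traffic never counts against the percentage, which is why a too-restrictive eligibility condition can starve the experiment even at a high percentage.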
5) Start the experiment
When everything is ready:
- Return to Deploy → Experiments.
- Open your experiment.
- Change the status to Running.
Once it’s running, you should see exposures begin to accumulate.
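Once exposures start flowing, a quick sanity check is that every variant is actually receiving traffic. A sketch of that check (the event shape here is an assumption):

```python
from collections import Counter

def variants_without_exposures(exposure_events, variants):
    """Return the variants that have logged zero exposures -- an early
    sign of a broken routing rule or a misweighted split."""
    counts = Counter(event["variant"] for event in exposure_events)
    return [v for v in variants if counts[v] == 0]

events = [{"variant": "control"}, {"variant": "control"}]
print(variants_without_exposures(events, ["control", "treatment"]))  # ['treatment']
```

If a variant shows up in this list after a healthy amount of traffic, revisit its weight and your routing rule before trusting any results.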
6) Stop the experiment and ship the winner
When you’re ready to end the test:
- Change the status to Completed.
- Update your routing so all traffic goes to the winning version.
- Keep the experiment for reference so your team can learn from it later.
Troubleshooting
“I don’t see any exposures”
- Confirm your routing rule is active and ordered correctly.
- Confirm the audience definition is not too restrictive.
- Wait a few minutes; exposure counts only appear after new traffic reaches the experiment.
“I see exposures, but not the outcome I care about”
- Confirm the outcome is tracked and visible outside of experiments first.
- Run a real end-to-end test flow and confirm the outcome appears in analytics.