A/B Test Statistical Significance
A/B testing (split testing) compares two versions of a web page, email, or other element to determine which performs better. Statistical significance testing ensures the observed difference is not due to random chance. A result at 95% confidence means that, if there were truly no difference between the variants, a gap this large would occur by chance only 5% of the time.
💡 Tip: Run tests until you reach the required sample size. Peeking at results early and stopping as soon as you see significance inflates the false-positive rate. Use a sample size calculator before starting the test.
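The required sample size can be estimated with the standard two-proportion approximation. This is a minimal sketch, assuming 95% confidence (z = 1.96, two-tailed) and 80% power (z = 0.8416); the function name and defaults are illustrative:

```python
from math import ceil

def sample_size_per_variant(p_base, p_target, z_alpha=1.96, z_beta=0.8416):
    """Approximate visitors needed per variant to detect a lift from
    p_base to p_target at 95% confidence with 80% power."""
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    n = (z_alpha + z_beta) ** 2 * variance / (p_base - p_target) ** 2
    return ceil(n)

# Detecting a lift from a 5% to a 7% conversion rate needs
# roughly 2,200 visitors per variant.
print(sample_size_per_variant(0.05, 0.07))
```

Smaller expected lifts need dramatically more traffic, since the required n grows with the inverse square of the difference.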
1. Calculate the conversion rate for each variant: rate = conversions / visitors
2. Compute the pooled proportion: p_pool = (cA + cB) / (nA + nB), where cA, cB are conversions and nA, nB are visitors
3. Calculate the standard error: SE = sqrt(p_pool × (1 − p_pool) × (1/nA + 1/nB))
4. Compute the z-score: z = (rateB − rateA) / SE
5. If |z| ≥ 1.96, the result is significant at 95% confidence (two-tailed)
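The five steps above can be sketched as a small Python function (the function name and demo numbers are illustrative):

```python
from math import sqrt

def ab_z_score(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion pooled z-test, following the steps above."""
    rate_a = conversions_a / visitors_a          # step 1
    rate_b = conversions_b / visitors_b
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)   # step 2
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))   # step 3
    return (rate_b - rate_a) / se                # step 4

# Demo: 4% vs 6% conversion on 1,000 visitors each.
z = ab_z_score(40, 1000, 60, 1000)
print(round(z, 2), abs(z) >= 1.96)  # step 5: significant only if |z| >= 1.96
```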
- A: 1,000 visitors, 50 conversions (5.0%). B: 1,000 visitors, 70 conversions (7.0%). Z ≈ 1.88 — significant at about 94% confidence, just short of the 95% standard. The relative lift is +40%, but more traffic is needed to confirm it.
- A: 200 visitors, 10 conversions (5.0%). B: 200 visitors, 13 conversions (6.5%). Z ≈ 0.64 — not significant; the small sample size means high uncertainty, so more traffic is needed.
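Both worked examples can be checked with a few lines of Python, converting the pooled z-score to a two-tailed confidence level via the error function (the helper name is illustrative):

```python
from math import erf, sqrt

def z_and_confidence(ca, na, cb, nb):
    """Pooled two-proportion z-score and two-tailed confidence level."""
    p_pool = (ca + cb) / (na + nb)
    se = sqrt(p_pool * (1 - p_pool) * (1 / na + 1 / nb))
    z = (cb / nb - ca / na) / se
    confidence = erf(abs(z) / sqrt(2))  # = 1 - two-tailed p-value
    return z, confidence

print(z_and_confidence(50, 1000, 70, 1000))  # z ≈ 1.88, confidence ≈ 94%
print(z_and_confidence(10, 200, 13, 200))    # z ≈ 0.64, confidence ≈ 48%
```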
| Confidence Level | Z-Score (two-tailed) | Meaning |
|---|---|---|
| 90% | 1.645 | 10% chance of false positive |
| 95% | 1.960 | 5% chance — industry standard |
| 99% | 2.576 | 1% chance — high confidence |
| 99.9% | 3.291 | Very high confidence |
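The z-scores in the table can be verified directly, since the two-tailed confidence implied by a z-score is erf(|z| / √2):

```python
from math import erf, sqrt

# Two-tailed confidence level for each tabulated z-score.
for z in (1.645, 1.960, 2.576, 3.291):
    print(z, round(erf(z / sqrt(2)) * 100, 1))  # ~90, 95, 99, 99.9
```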