A/B Test Sample Size Calculator
Determine the minimum sample size needed for statistically significant A/B test results.
Test Parameters
The calculator takes three inputs:
- Baseline conversion rate: your current conversion rate
- Minimum detectable effect: the smallest improvement you want to detect
- Daily traffic: visitors per day to the page being tested
Why Sample Size Matters
Running A/B tests with insufficient sample size leads to unreliable results. You might see false positives (declaring a winner that isn't actually better) or false negatives (missing a real improvement).
Key Concepts
- Significance level (α): The probability of a false positive. A 95% confidence level corresponds to α = 0.05, i.e. a 5% chance of declaring a winner when no real difference exists.
- Statistical power (1−β): The probability of detecting a real effect. 80% power means a 20% chance of missing a real improvement.
- Minimum detectable effect (MDE): The smallest improvement worth detecting. Smaller effects require larger samples, because a small difference takes more data to distinguish from random noise.
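These concepts combine in the standard two-proportion sample size formula: n per variant grows with (z_α + z_β)² and shrinks with the square of the absolute difference you want to detect. A minimal sketch in Python, assuming a two-tailed test and a relative MDE (the function and variable names here are illustrative, not the calculator's actual implementation):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline, mde_rel, alpha=0.05, power=0.80):
    """Approximate sample size per variant for a two-tailed test of two
    proportions, using the normal approximation. `mde_rel` is the relative
    minimum detectable effect, e.g. 0.10 for a 10% lift."""
    p1 = baseline
    p2 = baseline * (1 + mde_rel)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed critical value
    z_beta = NormalDist().inv_cdf(power)           # power requirement
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return ceil(n)

# Example: 5% baseline, detect a 10% relative lift at 95% confidence / 80% power,
# with 1,000 visitors per day split evenly across the two variants.
n = sample_size_per_variant(0.05, 0.10)
days = ceil(2 * n / 1000)
```

With these example numbers, n lands around 31,000 per variant, which is why small absolute differences on low baseline rates can take months of traffic to resolve.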
Best Practices for A/B Testing
- Calculate sample size first: Know your required sample before starting the test
- Don't peek: Checking results before reaching the planned sample size inflates the false positive rate, because each interim look is another chance to cross the significance threshold by luck
- Run for full weeks: Capture day-of-week patterns (minimum 2 weeks)
- Test one thing at a time: Isolate variables for clear learnings
- Document everything: Record hypotheses, results, and learnings
When to Use One-Tailed vs Two-Tailed Tests
- Two-tailed: Use when you want to detect any change (improvement or regression)
- One-tailed: Use only when you're certain the change can't make things worse
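The practical difference is the critical value: a one-tailed test puts all of α in one direction, so its threshold is lower and the required sample is smaller. A short Python comparison at α = 0.05 (values below are standard normal quantiles):

```python
from statistics import NormalDist

alpha = 0.05
z_two_tailed = NormalDist().inv_cdf(1 - alpha / 2)  # ≈ 1.96, alpha split across both tails
z_one_tailed = NormalDist().inv_cdf(1 - alpha)      # ≈ 1.645, all of alpha in one tail

# Required sample size scales with (z_alpha + z_beta)^2, so the lower
# one-tailed critical value buys a smaller sample, at the cost of being
# blind to regressions in the untested direction.
```

That blindness is why the two-tailed default is the safer choice for most experiments.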