Calculate required sample size for surveys, A/B tests, and clinical trials with confidence level, margin of error, and power analysis.
Determining the right sample size is one of the most important decisions in any research study, survey, or experiment. Too small a sample gives unreliable results; too large wastes resources. The required sample size depends on your desired confidence level, acceptable margin of error, expected variability, and for experiments, the effect size you want to detect.
This calculator handles three common scenarios: surveys (proportion-based), comparative experiments (A/B tests), and continuous-variable studies. It computes the minimum sample size needed and shows how sample size changes with different confidence levels, margins of error, and effect sizes.
Whether you're planning a market research survey, designing an A/B test for your website, sizing a clinical trial, or planning an academic study, this tool provides the statistical foundation for your research design. It helps you avoid underpowered studies that miss real effects and oversized studies that waste time, budget, and participants.
Use this calculator to set a study size before collecting data so the target confidence, margin of error, or minimum detectable effect is explicit up front. It is useful for surveys, experiments, and validation work where sample size needs to be justified rather than guessed. The result also makes it easier to explain the study plan to stakeholders before any data is collected.
Survey: n₀ = z²pq/e², with finite population correction n = n₀ / (1 + n₀/N). A/B test: n = (z_α + z_β)²(p₁q₁ + p₂q₂) / (p₁ − p₂)² per group. Continuous: n = (z_α + z_β)²(2σ²) / δ² per group. Where z = Z-score for the confidence level, p = expected proportion, q = 1 − p, e = margin of error, N = population size, z_α and z_β = Z-scores for significance and power, σ = standard deviation, and δ = minimum detectable mean difference.
Result: 370 respondents needed
For 95% confidence with ±5% margin of error, assuming 50% proportion and 10,000 population, you need 370 survey respondents.
Survey sample size depends on four factors: confidence level (Z-score), expected variability (p), margin of error (e), and population size (N). The formula uses the finite population correction (FPC) when the population is small.
The most common misunderstanding: sample size is not proportional to population size. A city of 100,000 and a country of 100,000,000 need essentially the same sample for the same precision. The FPC only matters when your sample is a large fraction of the population.
A/B tests compare two groups, so the calculation is different. You need to specify: the baseline rate (control group), the minimum detectable effect (how small a lift you want to detect), alpha (false positive rate, usually 5%), and beta (false negative rate, usually 20% for 80% power).
The key insight: detecting small effects requires very large samples. At 95% confidence and 80% power, detecting a 1-point absolute lift on a 5% baseline (5% → 6%) requires roughly 8,200 per group, while detecting a 5-point lift (5% → 10%) requires only about 430 per group.
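The per-group A/B formula can be sketched the same way, using `statistics.NormalDist` from the standard library for the Z-scores (function name is illustrative; z_α is two-sided):

```python
from math import ceil
from statistics import NormalDist

def ab_sample_size(p1: float, p2: float,
                   alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group sample size for detecting p1 -> p2 in a two-proportion test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

print(ab_sample_size(0.05, 0.06))  # ~8,200 per group for 5% -> 6%
print(ab_sample_size(0.05, 0.10))  # ~430 per group for 5% -> 10%
```

Halving the minimum detectable effect roughly quadruples the per-group sample, since the difference p₁ − p₂ is squared in the denominator.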
Real-world studies must account for attrition, non-response, data quality issues, and subgroup analysis. A common rule of thumb: recruit 20-30% more than the calculated minimum. If you plan to analyze subgroups, each subgroup needs the full sample size independently.
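One way to apply that rule of thumb: divide the calculated minimum by the expected retention rate rather than multiplying by a flat factor, so the post-dropout sample still meets the target. A minimal sketch, where the dropout rate is an assumption you supply:

```python
from math import ceil

def inflate_for_dropout(n_min: int, dropout: float) -> int:
    """Recruit enough that n_min participants remain after expected dropout."""
    return ceil(n_min / (1 - dropout))

print(inflate_for_dropout(370, 0.20))  # recruit 463 so ~370 remain after 20% dropout
```

Dividing by (1 − dropout) at 20% dropout is a 25% inflation, squarely within the 20-30% rule of thumb above.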
95% is standard for most research. Use 99% for high-stakes decisions (medical, regulatory). Use 90% for preliminary or exploratory research where resources are limited.
Use 50% — it gives the largest (most conservative) sample size. Any other proportion requires fewer samples. This is why 50% is the default.
Only for small populations (<50,000). For large populations, sample size barely changes. A survey of 1 million or 100 million people needs almost the same sample size.
Power (usually 80%) is the probability of detecting a real effect when it exists. Higher power (90%) requires larger samples but reduces the chance of missing real differences.
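Power enters the continuous-variable formula through z_β. A sketch with illustrative values (σ = 10, detecting a mean difference δ = 5, i.e. half a standard deviation) shows the cost of raising power from 80% to 90%:

```python
from math import ceil
from statistics import NormalDist

def continuous_sample_size(sigma: float, delta: float,
                           alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group n for detecting mean difference delta between two groups with common SD sigma."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return ceil((z_alpha + z_beta) ** 2 * 2 * sigma ** 2 / delta ** 2)

print(continuous_sample_size(10, 5, power=0.80))  # 63 per group at 80% power
print(continuous_sample_size(10, 5, power=0.90))  # 85 per group at 90% power
```

Raising power from 80% to 90% costs roughly a third more participants here, which is the trade-off described above.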
You need: baseline conversion rate, minimum detectable effect (MDE), significance level (usually 5%), and power (usually 80%). Smaller MDE requires much larger samples.
Because n ∝ 1/e². The margin of error enters the formula squared in the denominator, so halving it means 4× the sample size. Going from ±5% to ±1% requires 25× more samples.
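The inverse-square relationship is easy to verify numerically with the infinite-population survey formula (a sketch; names are illustrative):

```python
def sample_size_no_fpc(z: float, p: float, e: float) -> float:
    """Infinite-population survey sample size (no FPC)."""
    return z ** 2 * p * (1 - p) / e ** 2

n_5pct = sample_size_no_fpc(1.96, 0.5, 0.05)  # ~384
n_1pct = sample_size_no_fpc(1.96, 0.5, 0.01)  # ~9,604
print(f"±1% needs {n_1pct / n_5pct:.0f}x the sample of ±5%")  # 25x
```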