Explore the Central Limit Theorem — compute standard error, z-scores, confidence intervals, and see how sample size affects the sampling distribution.
The Central Limit Theorem (CLT) calculator demonstrates one of the most powerful results in statistics: regardless of the underlying population distribution, the sampling distribution of the sample mean approaches a normal distribution as sample size increases. This tool computes standard errors, z-scores, confidence intervals, and visualizes how the sampling distribution narrows with larger samples.
The CLT is the theoretical foundation for most of inferential statistics. It justifies using normal-based methods (z-tests, confidence intervals) even when the population isn't normally distributed, as long as the sample size is large enough — typically n ≥ 30 is considered sufficient.
Enter your population parameters and sample details to see the exact sampling distribution characteristics, probability calculations, and a comparison table showing how standard error decreases with increasing sample size.
Use the preset examples to load common values instantly, or type in custom inputs to see results in real time. The output updates as you type, making it practical to compare different scenarios without resetting the page.
Understanding the CLT is essential for anyone working with data. It explains why normal-based methods dominate statistics, why larger samples are better, and how to properly interpret confidence intervals and hypothesis tests.
This calculator makes the abstract theorem concrete by showing exact numbers, visualizations, and comparisons across sample sizes — ideal for statistics coursework and practical research design.
Standard Error: SE = σ/√n. Z-score: z = (x̄ − μ)/SE. Margin of Error: MoE = z* × SE. Confidence Interval: x̄ ± MoE.
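These four formulas can be sketched in a few lines of Python. The numbers below (μ = 100, σ = 15, n = 36, x̄ = 103, an IQ-style scale) are illustrative inputs, not values from the calculator:

```python
import math

def standard_error(sigma, n):
    """SE = sigma / sqrt(n): spread of the sampling distribution of the mean."""
    return sigma / math.sqrt(n)

def z_score(xbar, mu, se):
    """How many standard errors the sample mean lies from mu."""
    return (xbar - mu) / se

def confidence_interval(xbar, se, z_star=1.96):
    """x-bar +/- MoE, with MoE = z* x SE (z* = 1.96 for 95% confidence)."""
    moe = z_star * se
    return xbar - moe, xbar + moe

# Illustrative inputs (not from the calculator)
mu, sigma, n, xbar = 100, 15, 36, 103
se = standard_error(sigma, n)            # 15 / 6 = 2.5
z = z_score(xbar, mu, se)                # 3 / 2.5 = 1.2
lo, hi = confidence_interval(xbar, se)   # 103 +/- 4.9
```

Each function mirrors one formula above, so the chain SE → z → MoE → CI is explicit.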
Result: SE = 0.3118, z = 0.962, P(X̄ < 3.8) ≈ 83.2%
For die rolls (μ = 3.5, σ = 1.708) with n = 30, the standard error is 1.708/√30 ≈ 0.312. A sample mean of 3.8 gives z = (3.8 − 3.5)/0.312 ≈ 0.96, meaning about 83% of samples would have a mean below 3.8.
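The die-roll numbers can be checked with the standard library alone; the exact σ for a fair die is √(35/12) ≈ 1.708:

```python
import math
from statistics import NormalDist

faces = range(1, 7)
mu = sum(faces) / 6                                        # 3.5
sigma = math.sqrt(sum((x - mu) ** 2 for x in faces) / 6)   # sqrt(35/12) ~ 1.708

n, xbar = 30, 3.8
se = sigma / math.sqrt(n)       # ~ 0.312
z = (xbar - mu) / se            # ~ 0.96
p = NormalDist().cdf(z)         # P(sample mean < 3.8) ~ 0.832
```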
The CLT is the reason we can perform t-tests, construct confidence intervals, and conduct z-tests even when the underlying data isn't normally distributed. Without the CLT, we'd need to know the exact population distribution before doing inference — which is rarely possible.
The CLT's convergence rate depends on the population's shape. Symmetric distributions converge quickly (n = 10 may suffice). Skewed distributions need larger n. The Berry-Esseen theorem provides a bound: the maximum error is proportional to E[|X − μ|³]/(σ³√n).
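As a sketch, the Berry-Esseen quantity E[|X − μ|³]/(σ³√n) can be computed exactly for a fair die. The universal constant that multiplies it in the bound (under 0.48 in the best published estimates) is omitted here:

```python
import math

faces = range(1, 7)
mu = sum(faces) / 6                                        # 3.5
sigma = math.sqrt(sum((x - mu) ** 2 for x in faces) / 6)   # sqrt(35/12)
rho = sum(abs(x - mu) ** 3 for x in faces) / 6             # E[|X - mu|^3] = 6.375

n = 30
berry_esseen = rho / (sigma ** 3 * math.sqrt(n))           # ~ 0.234
```

The √n in the denominator is why doubling the sample size shrinks the worst-case normal-approximation error by a factor of about 1.4, not 2.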
A 95% confidence interval means: if we repeated our sampling procedure many times, about 95% of the resulting intervals would contain the true μ. The CLT justifies using x̄ ± 1.96 × SE as that interval, because the sampling distribution of x̄ is approximately N(μ, SE²).
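That repeated-sampling interpretation can be simulated directly. This sketch uses a Uniform(0, 1) population with known σ; the specific settings (n = 30, 2000 repetitions) are illustrative:

```python
import math
import random

random.seed(0)
mu, sigma = 0.5, math.sqrt(1 / 12)   # Uniform(0, 1) population
n, reps, z_star = 30, 2000, 1.96
se = sigma / math.sqrt(n)

covered = 0
for _ in range(reps):
    xbar = sum(random.random() for _ in range(n)) / n
    # Does this interval capture the true mean?
    if xbar - z_star * se <= mu <= xbar + z_star * se:
        covered += 1

coverage = covered / reps            # should land near 0.95
```

Note that it is the intervals that vary from sample to sample; μ is fixed.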
For a population with mean μ and standard deviation σ, the distribution of sample means from samples of size n approaches N(μ, σ²/n) as n increases, regardless of the population's shape. Understanding this concept helps you apply the calculator correctly and interpret the results with confidence.
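A quick simulation makes this concrete even for a strongly skewed population. This sketch draws sample means from an exponential population (μ = 1, σ = 1) and checks them against the CLT's N(μ, σ²/n) prediction; the settings are illustrative:

```python
import random
from statistics import mean, pstdev

random.seed(1)
n, reps = 100, 5000

# 5000 sample means, each from an exponential(1) sample of size 100
means = [mean(random.expovariate(1.0) for _ in range(n)) for _ in range(reps)]

# CLT prediction: center ~ mu = 1, spread ~ sigma / sqrt(n) = 0.1
print(mean(means), pstdev(means))
```

Despite the skew of the individual draws, the collection of sample means is centered on μ with spread close to σ/√n.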
The n ≥ 30 guideline is a rule of thumb, not a strict threshold. For symmetric populations, the CLT kicks in with smaller n. For highly skewed distributions (like income), larger samples are needed. The key is that the sampling distribution looks approximately normal.
σ is the population standard deviation (spread of individual values). SE = σ/√n is the standard error (spread of sample means). SE is smaller than σ for any n > 1, because averaging reduces variability.
In practice, σ is often unknown and estimated by the sample standard deviation s. This introduces extra uncertainty, which is handled by the t-distribution rather than the z-distribution.
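A sketch of the difference when σ is estimated by s. The t critical value 2.045 for 29 degrees of freedom is a standard table value, hardcoded here because Python's standard library has no t-distribution; the sample itself is simulated and illustrative:

```python
import math
import random
from statistics import mean, stdev

random.seed(2)
sample = [random.gauss(100, 15) for _ in range(30)]

xbar, s = mean(sample), stdev(sample)   # s estimates the unknown sigma
se_hat = s / math.sqrt(len(sample))     # estimated standard error

z_star, t_star = 1.96, 2.045            # t* for df = 29 (table value)
z_moe, t_moe = z_star * se_hat, t_star * se_hat

# The t interval is wider, reflecting the extra uncertainty in s
print(f"z-interval: {xbar:.1f} +/- {z_moe:.2f}")
print(f"t-interval: {xbar:.1f} +/- {t_moe:.2f}")
```

The gap between z* and t* shrinks as n grows; by n around 100 the two intervals are nearly identical.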
Yes, the CLT extends to proportions. For a sample proportion p̂ from a population with true proportion p, the sampling distribution is approximately N(p, p(1−p)/n) when np ≥ 5 and n(1−p) ≥ 5.
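A sketch of the proportion case; the survey-style numbers p = 0.3 and n = 100 are illustrative:

```python
import math
from statistics import NormalDist

p, n = 0.3, 100

# Normal-approximation conditions: np >= 5 and n(1 - p) >= 5
assert n * p >= 5 and n * (1 - p) >= 5

se_p = math.sqrt(p * (1 - p) / n)        # ~ 0.0458
# P(sample proportion > 0.35) under the normal approximation
prob = 1 - NormalDist(p, se_p).cdf(0.35)
```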
The CLT requires finite variance. Distributions like the Cauchy distribution have no finite variance, so the CLT doesn't apply — no amount of averaging will produce a normal sampling distribution.
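A simulation sketch contrasting the two cases: sample means from a Uniform(0, 1) population concentrate as the CLT predicts, while standard-Cauchy sample means stay just as spread out at any n (the mean of n Cauchy draws is itself standard Cauchy). Sample sizes and repetition counts are illustrative:

```python
import math
import random
from statistics import mean

random.seed(3)

def cauchy():
    # Standard Cauchy draw via the inverse-CDF method
    return math.tan(math.pi * (random.random() - 0.5))

def iqr(values):
    """Interquartile range: a spread measure that exists even without variance."""
    xs = sorted(values)
    return xs[(3 * len(xs)) // 4] - xs[len(xs) // 4]

n, reps = 500, 1000
uniform_means = [mean(random.random() for _ in range(n)) for _ in range(reps)]
cauchy_means = [mean(cauchy() for _ in range(n)) for _ in range(reps)]

# Uniform means concentrate; Cauchy means do not (their IQR stays near 2)
print(iqr(uniform_means), iqr(cauchy_means))
```

The IQR is used instead of the standard deviation because the Cauchy distribution has no finite variance to estimate.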