Compute normal distribution probabilities, z-scores, and sampling distributions with visual comparison of population vs sample mean curves and confidence intervals.
The normal distribution and sampling calculator provides a complete toolkit for working with the Gaussian distribution — the most important distribution in statistics. Compute probabilities for individual observations (X) and sample means (X̄), with visual comparison of both distributions.
The normal distribution describes countless natural phenomena: heights, weights, test scores, measurement errors, and more. Its sampling distribution underpins confidence intervals, hypothesis testing, and the Central Limit Theorem. This calculator handles both individual-level and sample-level probabilities in a single interface.
Enter the population mean (μ) and standard deviation (σ), then explore point and interval probabilities, z-scores, confidence intervals at multiple levels, and a detailed sample size analysis showing how precision improves with larger samples. Before reporting results, verify the output against a known reference case and confirm that rounding and units match your chosen standards.
This calculator combines individual observation probability with sampling distribution analysis — the two most common statistical calculations. The visual overlay shows why sample means are more concentrated, making the Central Limit Theorem tangible.
Essential for anyone taking an introductory statistics course, conducting survey research, performing quality control, or doing any inference about population parameters.
PDF: f(x) = (1/(σ√(2π))) exp(−(x−μ)²/(2σ²)). z-score: z = (x − μ)/σ. Standard error: SE = σ/√n. Confidence interval: X̄ ± z*·SE. CDF: P(X ≤ x) = Φ(z).
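As a sketch, these formulas translate directly into Python using the standard library's `statistics.NormalDist`; the values μ = 100, σ = 15, n = 25 below are taken from the worked example, and x = 110 is an arbitrary illustration point:

```python
import math
from statistics import NormalDist

mu, sigma, n = 100, 15, 25            # population parameters and sample size from the example
dist = NormalDist(mu, sigma)

# PDF evaluated by hand agrees with NormalDist.pdf
x = 110
pdf_manual = math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

z = (x - mu) / sigma                  # z-score for an individual value
se = sigma / math.sqrt(n)             # standard error of the sample mean
z_star = NormalDist().inv_cdf(0.975)  # ~1.96 for a 95% interval
ci = (mu - z_star * se, mu + z_star * se)  # CI around a hypothetical X-bar equal to mu
p = dist.cdf(x)                       # P(X <= x) = Phi(z)
```

`NormalDist` handles the CDF and inverse CDF numerically, so no z-tables are needed.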
Result: Individual: P(X ≤ 120) = 90.88%, z = 1.33. Sample mean: P(X̄ ≤ 120) = 100.00%, z = 6.67
With μ = 100, σ = 15: a single observation of 120 has z = 1.33 (91st percentile). But a sample mean of 120 from n = 25 has z = 6.67 (virtually impossible) because SE = 15/√25 = 3.
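A minimal sketch reproducing this individual-versus-sample-mean comparison with `statistics.NormalDist`:

```python
from statistics import NormalDist

mu, sigma, n, x = 100, 15, 25, 120

z_ind = (x - mu) / sigma               # individual observation: z = 1.33
z_bar = (x - mu) / (sigma / n ** 0.5)  # sample mean: SE = 3, so z = 6.67
p_ind = NormalDist().cdf(z_ind)        # ~0.9088, the 91st percentile
p_bar = NormalDist().cdf(z_bar)        # ~1.0: a mean this extreme is virtually impossible
```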
The CLT states that X̄ ~ N(μ, σ²/n) regardless of the population distribution, provided n is sufficiently large. This calculator demonstrates the effect: as you increase n, the sampling distribution narrows dramatically, showing why large samples give precise estimates.
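The 1/√n narrowing is easy to check directly (σ = 15 as in the example; the sample sizes are illustrative):

```python
sigma = 15
ses = {n: sigma / n ** 0.5 for n in (1, 25, 100, 400)}
# SE shrinks as 1/sqrt(n): quadrupling n from 25 to 100 halves SE from 3.0 to 1.5
```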
Six Sigma methodology uses the normal distribution to set quality standards. A "six sigma" process has a defect rate of 3.4 per million, which corresponds to the tail beyond 4.5 standard deviations once the conventional 1.5σ process shift is subtracted from 6σ. Control charts use z-scores to detect when a process has shifted.
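The 3.4-per-million figure can be reproduced as the normal tail beyond 6σ minus the conventional 1.5σ shift; a sketch:

```python
from statistics import NormalDist

# One-sided tail beyond 6 sigma minus the conventional 1.5-sigma process shift
dpmo = (1 - NormalDist().cdf(6 - 1.5)) * 1_000_000  # defects per million opportunities
```

Without the 1.5σ shift, the tail beyond a full 6σ would be about 1 per billion, far below the quoted 3.4 per million.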
In hypothesis testing, z-scores convert to p-values via the normal CDF. A two-tailed p-value is 2×P(Z > |z|). This calculator provides the building blocks for understanding t-tests, z-tests, and the foundation of statistical inference.
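A small sketch of the conversion (the `two_tailed_p` helper is hypothetical, not part of the calculator):

```python
from statistics import NormalDist

def two_tailed_p(z: float) -> float:
    """Two-tailed p-value for a z statistic: 2 * P(Z > |z|)."""
    return 2 * (1 - NormalDist().cdf(abs(z)))
```

For example, `two_tailed_p(1.96)` is approximately 0.05, the familiar significance threshold.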
A z-score measures how many standard deviations a value is from the mean: z = (x − μ)/σ. A z-score of 2 means the value is 2 standard deviations above the mean, which is in the top ~2.3%.
Standard error (SE) is the standard deviation of the sampling distribution of X̄: SE = σ/√n. It measures how much sample means vary from sample to sample. Larger samples → smaller SE → more precise estimates.
Larger samples average out individual variation. A sample of 100 is very unlikely to have a mean far from μ, even though individual values might be spread out. The SE decreases as 1/√n.
A 95% CI means: if you repeated the sampling process, 95% of the resulting intervals would contain the true mean. It's x̄ ± z*·SE, where z* = 1.96 for 95% confidence.
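The repeated-sampling interpretation can be checked by simulation; this sketch (with an assumed seed and trial count) draws many samples from N(100, 15²) and counts how often the 95% interval covers μ:

```python
import random
from statistics import NormalDist, mean

random.seed(0)                       # assumed seed, for reproducibility
mu, sigma, n, trials = 100, 15, 25, 2000
z_star = NormalDist().inv_cdf(0.975)
se = sigma / n ** 0.5

# The interval x-bar +/- z* * SE covers mu exactly when |x-bar - mu| <= z* * SE
covered = sum(
    1
    for _ in range(trials)
    if abs(mean(random.gauss(mu, sigma) for _ in range(n)) - mu) <= z_star * se
)
coverage = covered / trials          # should land near 0.95
```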
When the data is continuous, symmetric, and bell-shaped. Many natural measurements (heights, errors, averages) are approximately normal. The CLT also makes it appropriate for sample means regardless of the population shape.
For individual probabilities, consider other distributions (lognormal for right-skewed data, Weibull for lifetimes). For inference about means, the CLT makes the sampling distribution approximately normal as n grows; n ≥ 30 is a common rule of thumb.