Calculate standard error for means, proportions, differences, or raw data. Includes margin of error, finite population correction, and SE vs sample size comparison.
The standard error (SE) measures the precision of a sample statistic as an estimate of the population parameter. It quantifies how much the statistic would vary across repeated samples of the same size. While the standard deviation describes variability within a sample, the standard error describes variability of the sample statistic itself.
This calculator computes the standard error for five common scenarios: mean estimation, proportion estimation, difference between two means, difference between two proportions, and directly from raw data. Each calculation includes the margin of error at your chosen confidence level and optional finite population correction.
Standard error is the building block of confidence intervals, hypothesis tests, and power analysis, so understanding it is essential for any researcher or analyst working with sample data. Before reporting results, check the worked example with realistic values, use the steps shown to verify rounding and units, and cross-check the output against a known reference case.
Computing standard errors by hand requires a different formula for each estimator, and mistakes are common — especially with the difference-of-proportions formula. This calculator handles all five cases, applies the finite population correction when relevant, and shows how SE changes with sample size so you can plan your study effectively.
SE of Mean: SE = s / √n
SE of Proportion: SE = √(p̂(1−p̂)/n)
SE of Difference of Means: SE = √(s₁²/n₁ + s₂²/n₂)
SE of Difference of Proportions: SE = √(p̂₁(1−p̂₁)/n₁ + p̂₂(1−p̂₂)/n₂)
Margin of Error: MOE = z* × SE
With FPC: SE_adj = SE × √((N−n)/(N−1))
Key relationship: SE ∝ 1/√n
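The formulas above translate directly into code. A minimal sketch of all five cases plus the margin of error and FPC (function names are illustrative, not the calculator's actual implementation):

```python
import math

def se_mean(s, n):
    """SE of a sample mean: s / sqrt(n)."""
    return s / math.sqrt(n)

def se_proportion(p_hat, n):
    """SE of a sample proportion: sqrt(p(1-p)/n)."""
    return math.sqrt(p_hat * (1 - p_hat) / n)

def se_diff_means(s1, n1, s2, n2):
    """SE of the difference of two independent means."""
    return math.sqrt(s1**2 / n1 + s2**2 / n2)

def se_diff_proportions(p1, n1, p2, n2):
    """SE of the difference of two independent proportions."""
    return math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

def margin_of_error(se, z_star=1.96):
    """MOE = z* x SE; z* = 1.96 corresponds to 95% confidence."""
    return z_star * se

def apply_fpc(se, n, N):
    """Finite population correction: SE x sqrt((N-n)/(N-1))."""
    return se * math.sqrt((N - n) / (N - 1))
```

Each function mirrors one formula term-for-term, which makes hand-verification straightforward.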
Result: SE = 2.1909, MOE = ±4.29
With s = 12 and n = 30, the standard error of the mean is 12/√30 = 2.19. At 95% confidence (z* = 1.96), the margin of error is ±4.29. This means the sample mean is expected to be within about 4.3 units of the true population mean.
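The worked example can be reproduced in a few lines (the sample mean of 100 is hypothetical, added only to show the resulting interval):

```python
import math

s, n, z_star = 12.0, 30, 1.96      # sample SD, sample size, 95% z*
se = s / math.sqrt(n)              # standard error of the mean
moe = z_star * se                  # margin of error
ci = (100 - moe, 100 + moe)        # interval around a hypothetical mean of 100
print(f"SE = {se:.4f}, MOE = \u00b1{moe:.2f}")
```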
Nearly every inferential procedure in statistics uses the standard error. Confidence intervals are point estimate ± z* × SE. Test statistics are (estimate − null) / SE. Power analysis uses SE to determine the sample size needed to detect an effect. Understanding SE is understanding the precision of your data.
The inverse square root relationship between SE and n has profound practical implications. Getting from ±10% precision to ±5% requires 4× the data. Getting to ±1% requires 100× the starting sample. This diminishing returns curve is why most studies settle for "good enough" precision rather than pursuing perfection — the cost grows quadratically with precision.
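The quadratic cost of precision follows from inverting SE = σ/√n. A small sketch, assuming a known σ of 10 purely for illustration:

```python
import math

sigma = 10.0

def n_needed(target_se):
    """Smallest n with sigma / sqrt(n) <= target_se, i.e. n = (sigma/SE)^2."""
    return math.ceil((sigma / target_se) ** 2)

# Halving the target SE quadruples the required sample size
for target in (1.0, 0.5, 0.1):
    print(target, n_needed(target))
```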
Beyond simple means and proportions, standard errors exist for regression coefficients, correlation coefficients, percentiles, and virtually any sample statistic. When analytical formulas aren't available, bootstrap methods estimate SE by resampling from the data. The concept extends to any statistic that varies across samples.
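When no closed-form SE exists, the bootstrap idea mentioned above is simple to sketch: resample the data with replacement many times, recompute the statistic each time, and take the standard deviation of those replicates (the dataset and replicate count here are arbitrary illustrations):

```python
import random
import statistics

def bootstrap_se(data, stat=statistics.mean, n_boot=2000, seed=42):
    """Estimate the SE of any statistic by resampling with replacement."""
    rng = random.Random(seed)
    replicates = [
        stat([rng.choice(data) for _ in data]) for _ in range(n_boot)
    ]
    return statistics.stdev(replicates)

data = [4.1, 5.6, 3.8, 6.2, 5.0, 4.7, 5.9, 4.4]
se_boot = bootstrap_se(data)                       # bootstrap estimate
se_formula = statistics.stdev(data) / len(data) ** 0.5  # analytic check
```

For the mean, the bootstrap estimate should land close to the analytic s/√n, which is a useful sanity check before applying it to statistics with no formula.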
Standard deviation (SD) measures variability of individual observations within a sample. Standard error (SE) measures variability of the sample statistic (like the mean) across different samples. For the mean, SE = SD/√n, so it is always smaller than SD for n > 1.
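The SD/SE distinction is easy to see numerically (the sample values below are made up for illustration):

```python
import statistics

sample = [23, 25, 21, 28, 24, 26, 22, 27, 25, 24]
sd = statistics.stdev(sample)        # spread of individual observations
se = sd / len(sample) ** 0.5         # precision of the sample mean
# With n = 10, SE is SD / sqrt(10) -- roughly a third of SD
```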
With more observations, extreme values average out and the sample statistic becomes more stable. For independent observations, the variance of the sample mean is σ²/n, giving SE = σ/√n; the Central Limit Theorem additionally guarantees that the distribution of the mean approaches normality as n grows.
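A quick simulation illustrates this: draw many samples, record each sample mean, and compare the spread of those means to σ/√n (σ, n, and the trial count below are arbitrary choices):

```python
import random
import statistics

rng = random.Random(0)
sigma, n, trials = 5.0, 50, 2000

# Many samples of size n; record each sample mean
means = [
    statistics.mean(rng.gauss(0, sigma) for _ in range(n))
    for _ in range(trials)
]

empirical_se = statistics.stdev(means)   # spread of the sample means
theoretical_se = sigma / n ** 0.5        # sigma / sqrt(n)
```

The empirical spread of the means should closely match the theoretical σ/√n.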
Confidence intervals are generally preferred for scientific communication because they're more intuitive (the range of plausible values). SE is useful for technical audiences and as an input to meta-analyses. Some journals require one or the other.
When comparing two independent estimates, the SE of their difference combines both uncertainties: SE_diff = √(SE₁² + SE₂²). This applies to both mean differences and proportion differences.
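Combining two independent SEs is one line of code; the example values are the SE from the worked example above paired with a second, hypothetical group:

```python
import math

def se_of_difference(se1, se2):
    """SE of a difference of independent estimates: sqrt(SE1^2 + SE2^2)."""
    return math.sqrt(se1**2 + se2**2)

se_diff = se_of_difference(2.19, 1.50)   # about 2.65
```

Note the combined SE is larger than either input but smaller than their sum, since independent errors partially cancel.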
Compute the sample standard deviation (s), then divide by √n. The raw data mode in this calculator does this automatically, computing s from your data and then SE = s/√n.
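A minimal sketch of the raw-data computation, using a hypothetical set of readings (`statistics.stdev` uses the n−1 denominator, matching the sample SD here):

```python
import math
import statistics

def se_from_raw(data):
    """Sample SD (n-1 denominator) divided by sqrt(n)."""
    s = statistics.stdev(data)
    return s / math.sqrt(len(data))

readings = [12.1, 11.8, 12.5, 12.0, 11.6, 12.3]
se = se_from_raw(readings)
```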
Meta-analyses weight studies by the inverse of their squared SE. Studies with smaller SE (larger samples, less variability) get more weight because they provide more precise estimates of the effect.
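Inverse-variance weighting can be sketched in a few lines; the three study estimates and SEs below are hypothetical:

```python
def inverse_variance_pooled(estimates, ses):
    """Weight each estimate by 1/SE^2; return pooled estimate and its SE."""
    weights = [1 / se**2 for se in ses]
    total = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, estimates)) / total
    pooled_se = (1 / total) ** 0.5
    return pooled, pooled_se

# Three hypothetical studies: the most precise (SE = 0.05) dominates
est, se = inverse_variance_pooled([0.30, 0.45, 0.38], [0.10, 0.20, 0.05])
```

The pooled estimate lands nearest the smallest-SE study, and the pooled SE is smaller than any individual study's SE — the payoff of combining evidence.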