Perform one-way ANOVA analysis online. Compute F statistic, p-value, eta-squared, omega-squared, and full ANOVA table with group summaries and visual comparison.
Analysis of Variance (ANOVA) is a powerful statistical method that tests whether the means of three or more groups differ significantly from each other. Rather than running multiple pairwise t-tests — which inflates the chance of a Type I error — ANOVA uses a single F-test to evaluate all groups simultaneously.
This calculator performs one-way ANOVA, partitioning total variability into between-group and within-group components. Enter your data for each group, set the significance level, and instantly receive the F statistic, p-value, full ANOVA source table, and effect sizes like eta-squared and omega-squared.
ANOVA is indispensable in experimental research: agriculture (comparing fertilizer treatments), medicine (comparing drug dosages), education (comparing teaching methods), and manufacturing (comparing machine outputs). Understanding whether observed differences are real or due to chance is fundamental to data-driven decision making.
Running multiple t-tests to compare several groups dramatically increases your false positive rate. For example, comparing 4 groups pairwise means 6 t-tests, each with a 5% error rate, leading to roughly a 26% chance of at least one false positive. ANOVA keeps the overall error rate at your chosen alpha. This calculator also provides effect size metrics and a complete ANOVA source table, eliminating tedious hand calculations and reducing arithmetic errors.
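The error-rate arithmetic above is easy to verify. A minimal sketch (assuming Python; the function name is illustrative):

```python
from math import comb

def familywise_error(k_groups, alpha=0.05):
    """Chance of at least one false positive across all pairwise t-tests,
    assuming independent tests each run at significance level alpha."""
    m = comb(k_groups, 2)            # number of pairwise comparisons
    return 1 - (1 - alpha) ** m

print(familywise_error(4))           # 4 groups -> 6 tests -> about 0.265
```

With 10 groups the picture is far worse: 45 pairwise tests push the familywise error rate past 90%, which is why a single omnibus F-test comes first.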
One-Way ANOVA: F = MSB / MSW

Where:
SSB = Σ nᵢ(x̄ᵢ − x̄)² (between-group sum of squares)
SSW = Σᵢ Σⱼ (xᵢⱼ − x̄ᵢ)² (within-group sum of squares)
SST = SSB + SSW (total sum of squares)
MSB = SSB / (k − 1) (mean square between)
MSW = SSW / (N − k) (mean square within)
k = number of groups, N = total observations

Effect sizes:
η² = SSB / SST
ω² = (SSB − (k − 1)·MSW) / (SST + MSW)
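These formulas translate directly into code. A sketch of the computation (assuming Python with NumPy and SciPy; the sample data are purely illustrative):

```python
import numpy as np
from scipy import stats

def one_way_anova(*groups):
    """One-way ANOVA from the sums-of-squares formulas above."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = len(groups)                                  # number of groups
    N = sum(len(g) for g in groups)                  # total observations
    grand_mean = np.concatenate(groups).mean()

    ssb = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)
    sst = ssb + ssw

    msb = ssb / (k - 1)
    msw = ssw / (N - k)
    f = msb / msw
    p = stats.f.sf(f, k - 1, N - k)                  # right-tail p-value

    eta_sq = ssb / sst
    omega_sq = (ssb - (k - 1) * msw) / (sst + msw)
    return f, p, eta_sq, omega_sq

# Illustrative data: three treatment groups of five observations each
a, b, c = [4, 5, 6, 5, 4], [7, 8, 6, 9, 7], [10, 9, 11, 8, 10]
f, p, eta_sq, omega_sq = one_way_anova(a, b, c)
```

The result can be cross-checked against `scipy.stats.f_oneway`, which computes the same F statistic and p-value.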
Result: F(2, 12) = 13.1818, p = 0.0009
Three groups of 5 observations each produce an F statistic of 13.18 with 2 and 12 degrees of freedom. The p-value of 0.0009 is well below 0.05, so we reject the null hypothesis and conclude that at least one group mean differs significantly from the others.
The ANOVA source table decomposes total variability into two components. The Between-Groups row captures variation due to differences among group means, while the Within-Groups row captures variation within each group (random error). If the between-group variation is large relative to within-group variation, the F ratio will be large and the p-value small, leading to rejection of the null hypothesis.
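Assembling the source table from its two components is mechanical once SSB and SSW are known. A sketch (assuming Python; the sums of squares here are one illustrative pair consistent with the F(2, 12) = 13.18 example above, not the calculator's actual inputs):

```python
def anova_table(ssb, ssw, k, N):
    """Build one-way ANOVA source table rows: (SS, df, MS)."""
    rows = {
        "Between": (ssb, k - 1, ssb / (k - 1)),
        "Within":  (ssw, N - k, ssw / (N - k)),
        "Total":   (ssb + ssw, N - 1, None),   # no MS for the Total row
    }
    f = rows["Between"][2] / rows["Within"][2]  # F = MSB / MSW
    return rows, f

# Assumed SS values chosen to reproduce F(2, 12) ≈ 13.18
rows, f = anova_table(ssb=29.0, ssw=13.2, k=3, N=15)
```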
Statistical significance alone doesn't tell you how large the effect is. Eta-squared (η²) and omega-squared (ω²) quantify the proportion of total variance attributable to the grouping variable. As a rough guide, η² around 0.01 is small, 0.06 is medium, and 0.14+ is large (Cohen's benchmarks). Omega-squared provides a less biased estimate, especially with small samples.
After a significant ANOVA, post-hoc tests identify which specific group pairs differ. Tukey's Honestly Significant Difference (HSD) is the most common, controlling the familywise error rate while comparing all pairs. Bonferroni correction is more conservative, dividing alpha by the number of comparisons. Dunnett's test is used when comparing each treatment group against a single control group.
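As one example of these corrections, the Bonferroni approach can be sketched with pairwise Welch t-tests (assuming Python with SciPy; the group data and function name are illustrative):

```python
from itertools import combinations
from scipy import stats

def bonferroni_pairwise(groups, alpha=0.05):
    """All pairwise Welch t-tests, judged against a Bonferroni-adjusted alpha."""
    pairs = list(combinations(sorted(groups), 2))
    adj_alpha = alpha / len(pairs)           # divide alpha by number of comparisons
    results = []
    for name1, name2 in pairs:
        t, p = stats.ttest_ind(groups[name1], groups[name2], equal_var=False)
        results.append((name1, name2, p, p < adj_alpha))
    return adj_alpha, results

groups = {"A": [4, 5, 6, 5, 4], "B": [7, 8, 6, 9, 7], "C": [10, 9, 11, 8, 10]}
adj_alpha, results = bonferroni_pairwise(groups)
```

For Tukey's HSD specifically, dedicated implementations (such as statsmodels' `pairwise_tukeyhsd`) use the studentized range distribution rather than adjusted t-tests.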
The null hypothesis (H₀) states that all group population means are equal: μ₁ = μ₂ = … = μₖ. The alternative is that at least one mean differs. ANOVA does not specify which mean is different — only that not all are the same.
Whenever you compare three or more groups. Running multiple pairwise t-tests inflates the familywise error rate. ANOVA controls the overall Type I error at your chosen alpha level.
F is the ratio of between-group variance to within-group variance. A large F means the group means spread apart more than individual observations vary within their groups, suggesting a real effect.
A significant ANOVA tells you groups differ but not which ones. Use post-hoc tests like Tukey's HSD, Bonferroni, or Dunnett's test to identify specific pairwise differences while controlling error rates.
Independence of observations, normality of residuals within each group, and homogeneity of variances (equal variance across groups). Moderate violations of normality are tolerable with larger samples due to the Central Limit Theorem.
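The normality and equal-variance assumptions can be screened with standard tests. A sketch (assuming Python with SciPy; Shapiro-Wilk per group as a small-sample proxy for residual normality, Levene's test for homogeneity of variances; data are illustrative):

```python
from scipy import stats

groups = {"A": [4, 5, 6, 5, 4], "B": [7, 8, 6, 9, 7], "C": [10, 9, 11, 8, 10]}

# Normality within each group: small p-values flag departures from normality
for name, data in groups.items():
    w, p_shapiro = stats.shapiro(data)
    print(f"Shapiro-Wilk {name}: p = {p_shapiro:.3f}")

# Homogeneity of variances across all groups
stat, p_levene = stats.levene(*groups.values())
print(f"Levene: p = {p_levene:.3f}")
```

Large p-values on both checks are consistent with the ANOVA assumptions; they do not prove the assumptions hold.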
ANOVA is a parametric test assuming normal distributions and equal variances. Kruskal-Wallis is its non-parametric alternative, comparing rank distributions rather than means. Use Kruskal-Wallis when the data are ordinal or ANOVA's assumptions are badly violated.
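In code the two tests are drop-in alternatives. A sketch (assuming Python with SciPy; the data are illustrative):

```python
from scipy import stats

a, b, c = [4, 5, 6, 5, 4], [7, 8, 6, 9, 7], [10, 9, 11, 8, 10]

# Parametric: compares means, assumes normality and equal variances
f, p_anova = stats.f_oneway(a, b, c)

# Non-parametric: compares rank distributions, no normality assumption
h, p_kruskal = stats.kruskal(a, b, c)
```

Both return a test statistic and a p-value, so the downstream decision logic (compare against alpha, then run post-hoc tests) is unchanged.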