Find critical values for Z, t, chi-square, and F distributions by significance level and degrees of freedom with reference tables and rejection region visualization.
Critical values are the boundary values that separate the rejection region from the acceptance region in hypothesis testing. If your test statistic falls beyond the critical value (in magnitude, for a two-tailed test), you reject the null hypothesis. Every introductory statistics course covers critical values — and every student needs a reliable way to look them up.
This calculator finds critical values for the four major statistical distributions: standard normal (Z), Student's t, chi-square (χ²), and Fisher's F. It supports one-tailed (left or right) and two-tailed tests, and generates reference tables showing critical values across common significance levels and degrees of freedom.
Beyond simple lookup, the calculator visualizes the rejection region with a color-coded bar and computes exact tail probabilities. The reference tables eliminate the need for printed statistical tables — you can see how the critical value changes with α and df in a single view. Presets for common scenarios (95% Z, 99% Z, t with various df, chi-square, and F-tests) let you start quickly.
Looking up critical values in printed tables is tedious and error-prone — you need different tables for each distribution, and interpolation is often necessary. This calculator replaces all those tables with a single tool that handles Z, t, χ², and F distributions with arbitrary α and df values.
The reference tables and df convergence display provide deeper insight than any single lookup. Students see how critical values respond to parameter changes, building intuition about the relationships between confidence level, sample size, and rejection regions. Practitioners save time and avoid the errors that come from reading tiny table entries.
Critical value = inverse CDF evaluated at probability 1 − α (right-tailed) or 1 − α/2 (two-tailed):

- Z: Φ⁻¹(1 − α/2) for two-tailed
- t: t⁻¹(1 − α/2, df) for two-tailed
- χ²: χ²⁻¹(1 − α, df) for right-tailed
- F: F⁻¹(1 − α, df₁, df₂) for right-tailed
Result: Critical values: ±1.9600
For a two-tailed Z-test at α = 0.05, each tail gets α/2 = 0.025. The z-value with 2.5% above it is 1.96. So reject H₀ if |z| > 1.96, meaning the test statistic falls in either tail beyond ±1.96.
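This lookup can be reproduced with Python's standard library alone; a minimal sketch (the function name `z_critical` is mine, not the calculator's):

```python
from statistics import NormalDist

def z_critical(alpha: float, two_tailed: bool = True) -> float:
    """Critical z-value: inverse standard-normal CDF at 1 - alpha (or 1 - alpha/2)."""
    p = 1 - alpha / 2 if two_tailed else 1 - alpha
    return NormalDist().inv_cdf(p)

print(round(z_critical(0.05), 4))         # two-tailed at alpha = 0.05 -> 1.96
print(round(z_critical(0.05, False), 4))  # right-tailed at alpha = 0.05 -> 1.6449
```

The same function covers both tail options: the only difference is whether α is split across two tails or placed entirely in one.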
In hypothesis testing, you compute a test statistic and compare it to the critical value. If the statistic falls in the rejection region (beyond the critical value), you reject the null hypothesis. For a two-tailed test at α = 0.05, the rejection region is the outer 5% of the distribution — 2.5% in each tail. The critical value marks the boundary.
The choice between one-tailed and two-tailed depends on your research question. If you're testing whether a drug has any effect (positive or negative), use two-tailed. If you're specifically testing whether it improves outcomes, use one-tailed. The one-tailed test is more powerful for detecting effects in the predicted direction but cannot detect effects in the opposite direction.
**Standard Normal (Z)** is used when the population standard deviation is known or the sample is large (n > 30). **Student's t** is used when the population SD is unknown and estimated from sample data — it accounts for the extra uncertainty. **Chi-square (χ²)** arises in tests about variance, goodness-of-fit, and independence — it's always positive and right-skewed. **F** is the ratio of two chi-square variables divided by their df; it's used in ANOVA and regression significance tests.
These distributions are deeply connected: Z² ~ χ²(1), the F distribution with df₁ = 1 is equivalent to t², and as df increases, both t and χ² approach the normal. Understanding these connections helps you see hypothesis testing as a unified framework rather than a collection of unrelated procedures.
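The Z² ~ χ²(1) connection can be checked numerically. Since χ²(1) is the square of a standard normal, P(χ²₁ ≤ x) = 2Φ(√x) − 1, so its inverse is a squared z-quantile — a small sketch under that identity (the helper name `chi2_crit_df1` is mine):

```python
from statistics import NormalDist

def chi2_crit_df1(alpha: float) -> float:
    """Right-tail chi-square critical value for df = 1, via the identity
    P(chi2_1 <= x) = 2*Phi(sqrt(x)) - 1, i.e. a squared normal quantile."""
    return NormalDist().inv_cdf(1 - alpha / 2) ** 2

z_star = NormalDist().inv_cdf(0.975)   # two-tailed z* at alpha = 0.05: 1.96
print(round(z_star ** 2, 4))           # 3.8415
print(round(chi2_crit_df1(0.05), 4))   # 3.8415 -- same value: Z^2 ~ chi2(1)
```

A two-tailed Z-test at α = 0.05 and a right-tailed χ²(1) test at α = 0.05 therefore share the same rejection boundary, just on different scales.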
A two-tailed test splits α between both tails (α/2 each), producing a larger critical value (e.g., z = 1.96 for α = 0.05). A one-tailed test puts all α in one tail, producing a smaller value (z = 1.645 for α = 0.05). Use two-tailed when the alternative hypothesis is "not equal" and one-tailed when it is "greater than" or "less than."
The t-distribution has heavier tails than the normal for small df, requiring larger critical values to achieve the same confidence. As df → ∞, the t-distribution converges to the standard normal. At df = 30, t is already close to z; at df = 120+, they are virtually identical.
Chi-square tests assess goodness-of-fit and independence in contingency tables (a single df parameter). F-tests compare two variances or test overall significance in ANOVA (separate numerator and denominator df). Both are right-tailed in most applications.
The critical value and p-value approaches are two sides of the same coin. The critical value approach finds the threshold first, then compares the test statistic. The p-value approach computes the exact probability, then compares it to α. They always give the same decision.
Yes — critical values are the multipliers in confidence intervals. A 95% confidence interval uses the same z* or t* as a two-tailed test at α = 0.05: CI = point estimate ± critical value × standard error. This calculator directly gives you the multiplier.
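The CI formula above takes a few lines to apply; a sketch using a z-based interval (the function name and the sample numbers are mine, for illustration):

```python
from math import sqrt
from statistics import NormalDist

def confidence_interval(mean: float, sd: float, n: int, confidence: float = 0.95):
    """CI = point estimate +/- critical value * standard error (z-based, large n)."""
    alpha = 1 - confidence
    z_star = NormalDist().inv_cdf(1 - alpha / 2)  # same z* as a two-tailed test
    margin = z_star * sd / sqrt(n)
    return mean - margin, mean + margin

lo, hi = confidence_interval(mean=50, sd=10, n=100, confidence=0.95)
print(round(lo, 2), round(hi, 2))   # 48.04 51.96
```

For small samples with unknown population SD, the same formula applies with t* in place of z*, using df = n − 1.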
The calculator uses iterative numerical methods (bisection with 100 iterations) and provides 4 decimal places of accuracy. For the standard normal, it uses a high-precision rational approximation. Values match published tables to at least 3 decimal places.
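The bisection idea can be illustrated for the t-distribution with nothing but the standard library. This is a sketch, not the calculator's actual code: it evaluates the t CDF by Simpson's-rule integration of the density (my choice for the example) and then bisects for the quantile, as the description above outlines:

```python
from math import exp, lgamma, pi, sqrt

def t_pdf(x: float, df: int) -> float:
    """Student's t density; lgamma avoids overflow for large df."""
    c = exp(lgamma((df + 1) / 2) - lgamma(df / 2)) / sqrt(df * pi)
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def t_cdf(x: float, df: int, steps: int = 2000) -> float:
    """CDF(x) = 0.5 + integral of the density from 0 to x (Simpson's rule, x >= 0)."""
    h = x / steps
    s = t_pdf(0.0, df) + t_pdf(x, df)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * t_pdf(i * h, df)
    return 0.5 + s * h / 3

def t_critical(alpha: float, df: int, two_tailed: bool = True) -> float:
    """Invert the t CDF by bisection (100 iterations, as the calculator describes)."""
    p = 1 - alpha / 2 if two_tailed else 1 - alpha
    lo, hi = 0.0, 100.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if t_cdf(mid, df) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(t_critical(0.05, 10), 4))    # ~2.2281
print(round(t_critical(0.05, 30), 4))    # ~2.0423
print(round(t_critical(0.05, 1000), 4))  # ~1.9623, approaching z = 1.96
```

The last three lines also demonstrate the df convergence the calculator displays: as df grows, the two-tailed t critical value falls toward the z value of 1.96.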