Calculate probabilities, percentiles, and statistics for continuous and discrete uniform distributions. Includes PDF/CDF visualization, quantile table, order statistics, and sampling distribution.
The uniform distribution calculator computes probabilities and statistics for both continuous and discrete uniform distributions. The continuous uniform distribution assigns equal probability density to all values in an interval [a, b], while the discrete version assigns equal probability to each integer from a to b.
This calculator handles both types: enter bounds and query values to find P(x₁ ≤ X ≤ x₂), view the PDF/PMF, quantile table, order statistics, and distribution properties. For continuous distributions, it also computes the sampling distribution of the mean and shows how standard error decreases with sample size.
The uniform distribution is fundamental in probability theory: it is the maximum entropy distribution when only the range is known, and it underlies random number generation in computing. Before reporting results, check the worked example with realistic values, verify rounding and units against the steps shown, and cross-check the output against a known reference case.
The uniform distribution is the starting point for understanding probability distributions. It models scenarios with maximum uncertainty within known bounds, and it's the foundation of random number generation in computing.
This calculator serves students learning distribution theory, engineers modeling random processes, and analysts who need quick probability calculations for uniformly distributed variables.
Continuous: f(x) = 1/(b−a) for a ≤ x ≤ b, with mean (a+b)/2 and variance (b−a)²/12. Discrete: P(X = k) = 1/n for each of the n = b−a+1 integers from a to b, with mean (a+b)/2 and variance (n²−1)/12.
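These formulas translate directly into code. The sketch below is illustrative (the function names are not the calculator's internals):

```python
def continuous_uniform_stats(a, b):
    """Mean, variance, and PDF height for X ~ Uniform(a, b)."""
    mean = (a + b) / 2
    variance = (b - a) ** 2 / 12
    pdf_height = 1 / (b - a)  # constant density on [a, b]
    return mean, variance, pdf_height

def discrete_uniform_stats(a, b):
    """Mean and variance for the discrete uniform on integers a..b."""
    n = b - a + 1              # number of equally likely outcomes
    mean = (a + b) / 2
    variance = (n ** 2 - 1) / 12
    return mean, variance
```

For a fair die, `discrete_uniform_stats(1, 6)` gives mean 3.5 and variance 35/12 ≈ 2.92.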
Result: P(3 ≤ X ≤ 7) = 40%, mean = 5, σ = 2.887
For Uniform(0, 10), the PDF height is 1/10 = 0.1 everywhere. P(3 ≤ X ≤ 7) = (7−3)/(10−0) = 40%. The mean is the midpoint 5, and variance is 100/12 ≈ 8.33.
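The interval probability is just the fraction of the range that the query covers. A minimal sketch (the function name is hypothetical), clipping the query to the support:

```python
def uniform_prob(a, b, x1, x2):
    """P(x1 <= X <= x2) for X ~ Uniform(a, b), clipping to [a, b]."""
    lo, hi = max(a, x1), min(b, x2)
    return max(0.0, (hi - lo) / (b - a))

# Worked example from the text: Uniform(0, 10), P(3 <= X <= 7)
p = uniform_prob(0, 10, 3, 7)  # (7 - 3) / (10 - 0) = 0.4
```

Queries that extend past a bound are clipped, so `uniform_prob(0, 10, -5, 3)` returns 0.3, not 0.8.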
In Bayesian statistics, when you have no prior information about a parameter except its range, the uniform distribution is the maximum entropy prior — it encodes "I know nothing about which values are more likely." This makes it the default choice for uninformative priors in Bayesian analysis.
Every probability distribution can be generated from uniform random numbers. If U ~ Uniform(0,1), then F⁻¹(U) follows the distribution F. This inverse transform method is the basis of much of Monte Carlo simulation. For example, −ln(U)/λ generates exponential random variables.
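The exponential example can be demonstrated in a few lines of Python (a sketch, not a production generator):

```python
import math
import random

def exponential_sample(lam):
    """Inverse transform: if U ~ Uniform(0,1), -ln(U)/lam ~ Exponential(lam)."""
    return -math.log(random.random()) / lam

random.seed(0)
samples = [exponential_sample(2.0) for _ in range(100_000)]
mean = sum(samples) / len(samples)  # should be close to 1/lam = 0.5
```

The sample mean converges to 1/λ, confirming that uniform draws pushed through the inverse CDF reproduce the target distribution.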
The sum of n independent Uniform(0,1) random variables follows the Irwin-Hall distribution. For n=2, the sum follows the triangular distribution. For n=12, it closely approximates the standard normal — this was historically used to generate normal random numbers before more efficient algorithms existed.
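The historical n=12 trick works because the sum has mean 6 and variance 12 × (1/12) = 1, so subtracting 6 yields mean 0 and variance 1. A quick simulation sketch:

```python
import random

def approx_standard_normal():
    """Sum of 12 Uniform(0,1) draws minus 6: mean 0, variance 1 (Irwin-Hall trick)."""
    return sum(random.random() for _ in range(12)) - 6.0

random.seed(1)
zs = [approx_standard_normal() for _ in range(50_000)]
m = sum(zs) / len(zs)                  # should be near 0
v = sum(z * z for z in zs) / len(zs)   # should be near 1
```

The approximation is good in the bulk of the distribution but poor in the far tails, since the sum can never exceed ±6.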
Use it when all outcomes in a range are equally likely: random number generation, arrival times when you know nothing about the schedule, angles, rounding errors, or as a prior in Bayesian analysis when you have no information about relative likelihoods.
Continuous uniform applies to real numbers in interval [a,b] — any value is possible. Discrete uniform applies to integers from a to b, each with equal probability. A die roll is discrete; a random time is continuous.
The denominator 12 comes from integration. The variance of U(0,1) is the integral of (x − 0.5)² from 0 to 1, which equals 1/12. Stretching the interval by a factor of (b−a) multiplies the variance by (b−a)², giving (b−a)²/12. The factor of 12 is fundamental, not arbitrary.
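The integral can be checked numerically with a midpoint Riemann sum, as a quick sanity check rather than a proof:

```python
# Numerical check that Var[U(0,1)] = integral of (x - 0.5)^2 over [0, 1] = 1/12.
N = 100_000  # number of midpoint rectangles
total = sum((((i + 0.5) / N) - 0.5) ** 2 for i in range(N)) / N
# total is approximately 1/12 = 0.08333...
```

The midpoint rule is exact enough here that the result matches 1/12 to many decimal places.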
If you take n samples from a distribution and sort them, the k-th smallest is the k-th order statistic X₍ₖ₎. For Uniform(0, 1), the expected order statistics divide the interval into n+1 equal parts: E[X₍ₖ₎] = k/(n+1). For Uniform(a, b), the expected position scales to a + (b−a)·k/(n+1).
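The k/(n+1) rule is easy to verify by simulation. A sketch with an illustrative function name:

```python
import random

def mean_kth_order_stat(n, k, trials=20_000, seed=42):
    """Estimate E[X_(k)] for n samples from Uniform(0, 1) by simulation."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Draw n uniforms, sort them, take the k-th smallest (1-indexed).
        total += sorted(rng.random() for _ in range(n))[k - 1]
    return total / trials

# Theory: E[X_(k)] = k/(n+1); e.g. n=4, k=2 gives 2/5 = 0.4.
est = mean_kth_order_stat(4, 2)
```

With 20,000 trials the estimate lands within about ±0.003 of the theoretical 0.4.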
The mean of n uniform samples has mean (a+b)/2 (same as individual) but standard error σ/√n. By the CLT, the sample mean is approximately normal for large n, even though individual values are uniform.
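Both claims, the σ/√n standard error and the approximate normality, can be checked by simulating many sample means (a sketch; the function name is illustrative):

```python
import math
import random
import statistics

def se_of_sample_mean(a, b, n, trials=10_000, seed=7):
    """Empirical standard error of the mean of n Uniform(a, b) draws."""
    rng = random.Random(seed)
    means = [sum(rng.uniform(a, b) for _ in range(n)) / n for _ in range(trials)]
    return statistics.stdev(means)

# Theory: SE = sigma / sqrt(n), with sigma = (b - a) / sqrt(12).
sigma = 10 / math.sqrt(12)  # ~2.887 for Uniform(0, 10)
# se_of_sample_mean(0, 10, 25) should be near sigma / 5, about 0.577
```

Quadrupling n halves the standard error, which is the practical payoff of the √n law.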
Yes! U(−5, 5) is perfectly valid. The mean would be 0, variance 100/12 ≈ 8.33. Negative bounds are common when modeling measurement errors centered around zero.