Calculate geometric distribution probabilities with PMF, CDF, waiting time tables, distribution visualization, and summary statistics for trials until first success.
The geometric distribution models the number of independent trials needed to get the first success. If each trial has probability p of success, the geometric distribution tells you the probability that the first success occurs on trial k. It answers questions like: "How many times do I need to roll a die to get a 6?" or "How many customers will I call before making a sale?"
This calculator computes exact probabilities (P(X = k), P(X ≤ k), P(X ≥ k), or range probabilities), generates the complete distribution table with PMF, CDF, and survival function, and shows a visual PMF bar chart. The waiting time table reveals how many trials you need for 50%, 90%, 95%, 99%, and 99.9% confidence of success.
The geometric distribution is the discrete analog of the exponential distribution and the only discrete distribution with the memoryless property: knowing you've already failed k times doesn't change the probability of success on the next trial. It's fundamental in reliability engineering, quality control, telecommunications, and game design.
The geometric distribution appears everywhere: customer acquisition (calls until sale), reliability (usage cycles until failure), networking (packet retransmissions until success), genetics (offspring until desired genotype), and game design (attempts until rare drop). This calculator provides exact probabilities without approximation.
The waiting time table is particularly practical: "How many trials do I need to be 95% confident of at least one success?" This directly translates to sample size planning, testing budgets, and resource allocation. The PMF visualization helps build intuition about why geometric distributions are so right-skewed.
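The waiting time entries come from inverting the CDF: the smallest k with 1 − (1 − p)^k ≥ confidence. A minimal sketch of that calculation (the function name is illustrative, not from the calculator itself):

```python
import math

def trials_for_confidence(p, confidence):
    """Smallest k such that P(X <= k) = 1 - (1-p)^k >= confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

# Rolling a die until a 6 (p = 1/6):
for c in (0.50, 0.90, 0.95, 0.99, 0.999):
    print(f"{c:>6.1%} confident within {trials_for_confidence(1/6, c)} rolls")
```

For p = 1/6 this yields 4, 13, 17, 26, and 38 rolls, respectively: being 95% sure of a 6 takes nearly three times the expected 6 rolls.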
PMF: P(X = k) = (1 − p)^(k−1) × p, for k = 1, 2, 3, ...
CDF: P(X ≤ k) = 1 − (1 − p)^k
Mean: E[X] = 1/p
Variance: Var(X) = (1 − p) / p²
Median: ⌈−1 / log₂(1 − p)⌉
Mode: 1 (always)
Skewness: (2 − p) / √(1 − p)
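These formulas translate directly into code. A minimal sketch (function names are illustrative):

```python
import math

def geom_pmf(k, p):
    """P(X = k) = (1-p)^(k-1) * p, for k = 1, 2, 3, ..."""
    return (1 - p) ** (k - 1) * p

def geom_cdf(k, p):
    """P(X <= k) = 1 - (1-p)^k"""
    return 1 - (1 - p) ** k

def geom_stats(p):
    """Mean, variance, median, and skewness of the geometric distribution."""
    mean = 1 / p
    variance = (1 - p) / p ** 2
    median = math.ceil(-1 / math.log2(1 - p))
    skewness = (2 - p) / math.sqrt(1 - p)
    return mean, variance, median, skewness

p = 1 / 6
print(geom_pmf(6, p))    # ≈ 0.0670
print(geom_stats(p))     # mean = 6.0, median = 4
```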
Result: P(X = 6) = 0.0670 (6.70%), Mean = 6.00 trials, Median = 4
For rolling a standard die (p = 1/6 ≈ 0.1667), the probability of getting the first 6 on exactly the 6th roll is about 6.7%. The expected number of rolls is exactly 6. However, the median is only 4 — there's a 50% chance of getting a 6 within the first 4 rolls. The distribution is right-skewed: most successes come early, but occasionally you wait a long time.
A classic application of the geometric distribution is the coupon collector's problem: how many items must you buy to collect all n distinct coupons? Each new coupon type becomes harder to find as you collect more. The wait for coupon i (when you have i−1) follows a geometric distribution with p = (n−i+1)/n. The expected total number of purchases is n × H(n), where H(n) is the n-th harmonic number. For n=10, you'd expect about 29.3 purchases.
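The expected total is just a sum of geometric means, 1/p summed over each stage. A quick sketch verifying the n = 10 figure (the function name is illustrative):

```python
def expected_purchases(n):
    """Expected purchases to collect all n coupons: n * H(n)."""
    return n * sum(1 / i for i in range(1, n + 1))

print(round(expected_purchases(10), 1))  # ≈ 29.3
```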
In reliability engineering, the geometric distribution models the number of usage cycles until a component fails, when each cycle has independent failure probability p. The survival function P(X > k) = (1−p)^k gives the probability the component survives k cycles. The hazard rate (conditional probability of failure given survival to cycle k) is constant at p — this is the discrete reliability equivalent of the exponential distribution.
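The constant hazard rate follows algebraically: PMF(k) / P(X > k−1) = p for every k. A short sketch demonstrating both the survival function and the constant hazard (function names are illustrative):

```python
def survival(k, p):
    """P(X > k): probability the component survives k cycles."""
    return (1 - p) ** k

def hazard(k, p):
    """P(fail at cycle k | survived to cycle k-1) — constant at p."""
    pmf = (1 - p) ** (k - 1) * p
    return pmf / survival(k - 1, p)

p = 0.01  # 1% failure chance per cycle (assumed for illustration)
print(survival(100, p))  # ≈ 0.366 — a 36.6% chance of surviving 100 cycles
assert all(abs(hazard(k, p) - p) < 1e-12 for k in range(1, 50))
```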
The geometric distribution underlies randomized algorithms: random probing in hash tables, randomized retry protocols (like Ethernet's binary exponential backoff), and probabilistic data structures. In skip lists, the height of each node follows a geometric distribution. In communication protocols, the number of retransmission attempts until successful delivery follows a geometric pattern.
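The skip-list case is easy to sketch: each node flips a coin per level and stops promoting at the first failure, so its height is geometric with success probability 1 − p. A minimal illustration (the function name and the 32-level cap are assumptions, not from any particular implementation):

```python
import random

def node_height(p=0.5, max_height=32):
    """Promote with probability p per level; stop at the first failure.
    The resulting height is geometric with success probability 1 - p."""
    h = 1
    while random.random() < p and h < max_height:
        h += 1
    return h

random.seed(0)
heights = [node_height() for _ in range(100_000)]
print(sum(heights) / len(heights))  # expected mean = 1 / (1 - 0.5) = 2
```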
If you've rolled 10 dice without getting a 6, the probability of getting a 6 on the next roll is still 1/6 — not higher. Past failures don't affect future probabilities. This is the memoryless property, unique to the geometric (discrete) and exponential (continuous) distributions. Many people incorrectly believe they're "due" for a success after many failures (the gambler's fallacy).
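The memoryless property can be checked exactly: P(X = k + 10 | X > 10) equals P(X = k) for every k. A sketch using exact rational arithmetic so the equality holds with no rounding (function names are illustrative):

```python
from fractions import Fraction

p = Fraction(1, 6)  # probability of rolling a 6

def pmf(k):
    return (1 - p) ** (k - 1) * p

def survival(k):
    return (1 - p) ** k

# Conditional on 10 failures, the distribution of remaining wait is unchanged.
for k in range(1, 6):
    assert pmf(k + 10) / survival(10) == pmf(k)
print("memoryless property verified exactly")
```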
Intuition: if each trial succeeds with probability p, on average you need 1/p trials for one success. For a fair die (p = 1/6), you expect 6 rolls. For a coin flip (p = 1/2), you expect 2 flips. This simple formula is one of the most useful results in probability.
Yes! Some textbooks define the geometric distribution as the number of failures before the first success (starting from 0), while others count the total trials including the success (starting from 1). This calculator uses the "trials until success" convention (starting from 1), which is more common in applied work.
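The two conventions describe the same distribution shifted by one. A quick sketch of the correspondence (function names are illustrative):

```python
def pmf_trials(k, p):
    """'Trials until first success' convention: k = 1, 2, 3, ..."""
    return (1 - p) ** (k - 1) * p

def pmf_failures(j, p):
    """'Failures before first success' convention: j = 0, 1, 2, ..."""
    return (1 - p) ** j * p

# Same probabilities, index shifted by one: P(trials = k) = P(failures = k - 1)
p = 0.3
assert all(pmf_trials(k, p) == pmf_failures(k - 1, p) for k in range(1, 20))
print("conventions agree up to a shift of 1")
```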
The geometric distribution is a special case of the negative binomial with r = 1 (waiting for the first success). The negative binomial counts trials until the r-th success. The sum of r independent geometric random variables follows a negative binomial distribution.
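The sum-of-geometrics relationship can be checked by simulation: the mean of r summed geometric variables should approach r/p, the negative binomial mean. A minimal sketch (function names are illustrative):

```python
import random

random.seed(1)

def geometric(p):
    """Trials until first success (counting from 1)."""
    k = 1
    while random.random() >= p:
        k += 1
    return k

# Sum of r = 3 independent geometrics = trials until the 3rd success.
p, r, n = 0.25, 3, 100_000
samples = [sum(geometric(p) for _ in range(r)) for _ in range(n)]
print(sum(samples) / n)  # should be near r / p = 12
```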
When: (1) trials are independent, (2) each trial has the same probability p of success, (3) you're counting trials until the first success, and (4) there are only two outcomes per trial. If p changes between trials or trials are dependent, the geometric model doesn't apply.
The PMF is highest at k = 1 because it equals p, and each subsequent value is multiplied by (1 − p) < 1. The most likely outcome is always immediate success on the first trial. This is counterintuitive when p is small (like 1%), where success on trial 1 has only 1% probability — but every other individual trial has even less!