Calculate P(A|B) and P(B|A) from joint, marginal, or conditional inputs — with contingency tables, independence tests, and lift analysis.
The conditional probability calculator computes P(A|B) — the probability of event A occurring given that event B has occurred. This fundamental concept underlies Bayes' theorem, medical diagnosis, machine learning, and everyday reasoning about uncertainty.
The calculator supports three input modes: provide the joint probability P(A∩B) directly, or supply one of the conditional probabilities P(B|A) or P(A|B) along with the marginal probabilities. It then derives all other quantities including the reverse conditional, union probability, and a test for independence.
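The conversion between input modes can be sketched as follows. The helper name `derive_all` is hypothetical, not the calculator's actual API; it only illustrates the algebra described above.

```python
def derive_all(p_a, p_b, joint=None, p_a_given_b=None, p_b_given_a=None):
    """Derive the remaining quantities from P(A), P(B), plus one of:
    the joint P(A∩B), or a conditional P(A|B) or P(B|A)."""
    if joint is None:
        if p_a_given_b is not None:
            joint = p_a_given_b * p_b      # P(A∩B) = P(A|B) · P(B)
        elif p_b_given_a is not None:
            joint = p_b_given_a * p_a      # P(A∩B) = P(B|A) · P(A)
        else:
            raise ValueError("need a joint or a conditional probability")
    return {
        "P(A∩B)": joint,
        "P(A|B)": joint / p_b,
        "P(B|A)": joint / p_a,
        "P(A∪B)": p_a + p_b - joint,       # inclusion–exclusion
        "independent": abs(joint - p_a * p_b) < 1e-12,
    }

print(derive_all(0.3, 0.6, joint=0.25))
```

Whichever mode you start from, the joint probability is recovered first; every other quantity then follows from it.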
A contingency table scaled to 1,000 individuals makes the abstract probabilities concrete, and a comprehensive table shows all six conditional probabilities involving A, B, and their complements. Before reporting results, sanity-check the output against a known reference case and verify rounding and units; the worked example below is a good starting point when troubleshooting unexpected values.
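The scaling to 1,000 individuals works as sketched below (a minimal illustration, assuming the example's probabilities; `contingency_1000` is a hypothetical name, not the calculator's code):

```python
def contingency_1000(p_a, p_b, joint):
    """Counts out of 1,000 individuals for each cell of the 2×2 table."""
    n = 1000
    both = round(joint * n)                # A and B
    a_only = round(p_a * n) - both         # A without B
    b_only = round(p_b * n) - both         # B without A
    neither = n - both - a_only - b_only   # remainder: neither event
    return {"A∩B": both, "A only": a_only, "B only": b_only, "neither": neither}

print(contingency_1000(0.3, 0.6, 0.25))
# → {'A∩B': 250, 'A only': 50, 'B only': 350, 'neither': 350}
```

Reading P(A|B) off this table is direct: of the 600 individuals with B, 250 also have A, and 250/600 ≈ 0.417.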
Conditional probability is the foundation of probabilistic reasoning. Every prediction, diagnosis, and risk assessment involves conditioning on observed evidence. Understanding how to correctly compute and interpret P(A|B) prevents common errors like the base rate fallacy and the prosecutor's fallacy.
This calculator supports multiple input formats, making it easy to work with whatever information you have — and it prevents algebraic mistakes in the conversion process.
P(A|B) = P(A∩B) / P(B)
P(B|A) = P(A∩B) / P(A)
Independence: P(A∩B) = P(A) × P(B)
Lift = P(A|B) / P(A)
Example: with P(A) = 0.3, P(B) = 0.6, and P(A∩B) = 0.25, the calculator returns P(A|B) ≈ 0.4167 and P(B|A) ≈ 0.8333.
P(A|B) = 0.25/0.6 ≈ 0.417, meaning that once B has occurred, A's probability rises from 30% to 41.7%. P(B|A) = 0.25/0.3 ≈ 0.833, meaning that once A has occurred, B's probability rises from 60% to 83.3%.
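Reproducing the example arithmetic by hand (inputs taken from the worked example above):

```python
# Inputs from the worked example: P(A)=0.30, P(B)=0.60, P(A∩B)=0.25.
p_a, p_b, joint = 0.30, 0.60, 0.25

p_a_given_b = joint / p_b   # 0.25 / 0.6 ≈ 0.4167
p_b_given_a = joint / p_a   # 0.25 / 0.3 ≈ 0.8333

print(round(p_a_given_b, 4), round(p_b_given_a, 4))  # → 0.4167 0.8333
```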
Conditioning restricts the probability space. When we compute P(A|B), we're saying "given that B is our new universe, what fraction of B also contains A?" Mathematically, P(A|B) = P(A∩B)/P(B). The denominator P(B) normalizes the probability to sum to 1 within the reduced space.
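The "reduced universe" view can be checked by simulation. This sketch draws cells of the 2×2 table using the example's probabilities (an assumption carried over from the worked example), restricts to draws where B occurred, and estimates P(A|B) as a relative frequency:

```python
import random

random.seed(0)
# Cell probabilities consistent with P(A)=0.3, P(B)=0.6, P(A∩B)=0.25.
cells = [("A∩B", 0.25), ("A only", 0.05), ("B only", 0.35), ("neither", 0.35)]
draws = random.choices(
    [name for name, _ in cells],
    weights=[w for _, w in cells],
    k=100_000,
)

# Restrict the sample space to outcomes where B occurred...
in_b = [d for d in draws if d in ("A∩B", "B only")]
# ...then ask what fraction of that reduced universe also contains A.
p_a_given_b = sum(d == "A∩B" for d in in_b) / len(in_b)
print(round(p_a_given_b, 3))  # close to 0.25/0.6 ≈ 0.417
```

Dividing by len(in_b) is exactly the normalization by P(B): it makes probabilities sum to 1 inside the reduced space.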
Two events are independent when knowing one tells you nothing about the other. Mathematically: P(A|B) = P(A), or equivalently P(A∩B) = P(A)P(B). Independence is often assumed for simplicity, but real-world events are frequently dependent — failing to account for this leads to errors in risk assessment.
A doctor seeing a positive test result needs P(disease|positive), not P(positive|disease). These are very different when the disease is rare. Conditional probability (and its extension, Bayes' theorem) is essential for correct medical decision-making.
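A short Bayes' theorem sketch makes the gap between the two quantities concrete. The function name and the test characteristics (1-in-1,000 prevalence, 99% sensitivity, 95% specificity) are illustrative assumptions, not figures from the text:

```python
def posterior(prevalence, sensitivity, specificity):
    """P(disease | positive) via Bayes' theorem."""
    p_pos_given_disease = sensitivity
    p_pos_given_healthy = 1 - specificity
    # Total probability of a positive test (law of total probability).
    p_pos = (prevalence * p_pos_given_disease
             + (1 - prevalence) * p_pos_given_healthy)
    return prevalence * p_pos_given_disease / p_pos

# Rare disease: even with a 99%-sensitive test, P(disease|positive) is small.
print(round(posterior(0.001, 0.99, 0.95), 3))  # → 0.019
```

P(positive|disease) is 0.99, yet P(disease|positive) is under 2%: the base rate dominates when the disease is rare.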
P(A|B) means the probability of A within the subset where B is true. You're restricting your sample space to only outcomes where B has occurred, then asking how often A also occurs.
They answer different questions. P(rain|cloudy) asks about rain when it's cloudy. P(cloudy|rain) asks about clouds when it's raining. These can be very different numbers.
Check whether P(A∩B) = P(A) × P(B). Equivalently, check if P(A|B) = P(A). If either holds, the events are independent; knowing one gives no information about the other.
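The check translates to one comparison, with a tolerance for floating-point arithmetic (a minimal sketch; the helper name is hypothetical):

```python
def is_independent(p_a, p_b, joint, tol=1e-9):
    """True when P(A∩B) = P(A) · P(B) within a floating-point tolerance."""
    return abs(joint - p_a * p_b) <= tol

print(is_independent(0.3, 0.6, 0.18))  # 0.3 × 0.6 = 0.18 → True
print(is_independent(0.3, 0.6, 0.25))  # joint exceeds the product → False
```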
Confusing P(evidence|innocent) with P(innocent|evidence). A DNA match probability of 1 in a million doesn't mean there's a 1 in a million chance the suspect is innocent.
No. P(A|B) is always between 0 and 1. If your calculation yields a value outside this range, check that P(A∩B) ≤ min(P(A), P(B)).
Lift measures how much B changes A's probability: Lift = P(A|B)/P(A). A lift of 2 means A is twice as likely when B is present. It's widely used in marketing and recommendation systems.
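Applied to the worked example's numbers, lift is one line of arithmetic (the function name is illustrative):

```python
def lift(p_a, p_b, joint):
    """Lift = P(A|B) / P(A); 1 means independence, >1 positive association."""
    return (joint / p_b) / p_a

print(round(lift(0.3, 0.6, 0.25), 2))  # → 1.39
```

A lift of about 1.39 says observing B raises A's probability by roughly 39%, matching the jump from 30% to 41.7% in the example.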