Apply Bayes' theorem to update probabilities with new evidence — compute posterior probability, likelihood ratios, and confusion matrices for medical tests.
Bayes' theorem calculator updates your belief about a hypothesis after observing new evidence. Given a prior probability (base rate), the likelihood of the evidence under the hypothesis, and the likelihood under the alternative, it computes the exact posterior probability.
This tool is indispensable for medical test interpretation, spam filtering, forensic analysis, and any situation where you need to reason about uncertainty. A common application is determining whether a positive medical test truly indicates disease — the answer depends critically on the base rate (prevalence), whose effect most people underestimate.
The calculator supports two input modes: a medical-test mode using sensitivity and specificity, and a general mode using raw conditional probabilities. It outputs the posterior probability, likelihood ratios, a confusion matrix scaled to 10,000 individuals, and a sensitivity analysis showing how the posterior changes across different priors. Before relying on a result, check it against the worked example with realistic values, follow the displayed steps to verify rounding and units, and cross-check the output against a known reference case.
Bayes' theorem is arguably the most important formula in applied probability. Doctors, data scientists, engineers, and lawyers all need to update beliefs based on evidence, yet human intuition is notoriously poor at this task — especially when base rates are low.
This calculator makes Bayesian reasoning accessible and visual, helping you avoid the base rate fallacy and make better decisions under uncertainty.
P(A|B) = [P(B|A) × P(A)] / [P(B|A) × P(A) + P(B|¬A) × P(¬A)]. Positive Likelihood Ratio = Sensitivity / (1 − Specificity). Posterior Odds = Prior Odds × LR+.
Result: P(Disease | Positive Test) ≈ 0.0876 (8.76%)
With 1% prevalence, 95% sensitivity, and 90% specificity: P(+) = 0.95×0.01 + 0.10×0.99 = 0.1085. Posterior = (0.95×0.01)/0.1085 ≈ 8.76%. Even with a positive test, there's only about a 9% chance of actually having the disease.
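The calculation above can be sketched in a few lines of Python (the function name is illustrative, not part of the tool):

```python
def posterior_given_positive(prior, sensitivity, specificity):
    """P(hypothesis | positive evidence) via Bayes' theorem.

    prior       — base rate, P(A)
    sensitivity — P(positive | A)
    specificity — P(negative | not A), so 1 - specificity is the false positive rate
    """
    # Total probability of a positive result: true positives + false positives
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

# Worked example from above: 1% prevalence, 95% sensitivity, 90% specificity
print(round(posterior_given_positive(0.01, 0.95, 0.90), 4))  # → 0.0876
```

Plugging in other priors shows how quickly the posterior collapses as the condition gets rarer.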
The most famous illustration of Bayes' theorem is the medical screening paradox. A disease affecting 1% of the population is screened with a 95% sensitive, 90% specific test. Most people guess a positive result means ~95% chance of disease. The actual answer is under 9%. This counterintuitive result occurs because false positives from 99% of healthy individuals vastly outnumber true positives from 1% of sick individuals.
Frequentist statistics evaluates the probability of data given a fixed hypothesis. Bayesian statistics flips this — it evaluates the probability of a hypothesis given observed data. Bayes' theorem is the mathematical bridge between these perspectives.
In clinical practice, don't reuse the population prevalence when interpreting a repeat test — the first result has already updated the patient's probability. Use the posterior from the first positive test as the prior for the second. Two consecutive positives from conditionally independent tests dramatically increase the posterior probability.
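Sequential updating is just the same formula applied twice, feeding the first posterior back in as the new prior — a minimal sketch, assuming the two tests' errors are conditionally independent:

```python
def posterior_given_positive(prior, sensitivity, specificity):
    """P(disease | positive test) from Bayes' theorem."""
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

prior = 0.01  # 1% prevalence
after_first = posterior_given_positive(prior, 0.95, 0.90)
# Key step: the first posterior becomes the prior for the second test
after_second = posterior_given_positive(after_first, 0.95, 0.90)

print(round(after_first, 4))   # → 0.0876
print(round(after_second, 4))  # ≈ 0.48 — two positives push ~9% up to nearly even odds
```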
It's a formula that tells you how to update your belief after seeing new evidence. If you think there's a 1% chance of something, and you get a positive signal, Bayes' theorem tells you the new (higher) probability.
Because the base rate matters enormously. With 1% prevalence and 90% specificity, the 10% false positive rate applied to 99% of healthy people generates far more false positives than the 95% sensitivity finds true positives.
Sensitivity is the probability a test is positive when disease is present (TP rate). Specificity is the probability a test is negative when disease is absent (TN rate). Both need to be high, but specificity matters more with rare conditions.
Absolutely. Use general mode and supply any P(B|A) and P(B|¬A). Common uses include spam detection, fraud analysis, DNA evidence evaluation, and machine learning classification.
The positive likelihood ratio (LR+) is sensitivity divided by the false positive rate. It tells you how much a positive result increases the odds of the hypothesis. LR+ > 10 is strong evidence; LR+ < 2 is weak.
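The odds form makes the likelihood-ratio update a single multiplication; here is a sketch reproducing the worked example:

```python
sensitivity, specificity = 0.95, 0.90
prior = 0.01

# LR+ = sensitivity / false positive rate
lr_pos = sensitivity / (1 - specificity)

# Posterior odds = prior odds × LR+, then convert odds back to a probability
prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * lr_pos
posterior = posterior_odds / (1 + posterior_odds)

print(round(lr_pos, 2))     # → 9.5
print(round(posterior, 4))  # → 0.0876, matching the probability-form calculation
```

An LR+ of 9.5 sits just below the "strong evidence" threshold of 10, which is consistent with a single positive test leaving the posterior under 9% at a 1% base rate.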
It shows how 10,000 people would be classified. Green cells are correct (true positives and true negatives). Red cells are errors (false positives and false negatives). The ratio of TP to (TP+FP) is the PPV.
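The confusion matrix scaled to 10,000 people can be reconstructed directly from prevalence, sensitivity, and specificity — a sketch (function name illustrative):

```python
def confusion_matrix(prior, sensitivity, specificity, n=10_000):
    """Expected counts of TP, FP, FN, TN in a population of n people."""
    sick = n * prior
    healthy = n - sick
    tp = sick * sensitivity        # sick, correctly flagged
    fn = sick - tp                 # sick, missed
    tn = healthy * specificity     # healthy, correctly cleared
    fp = healthy - tn              # healthy, falsely flagged
    return round(tp), round(fp), round(fn), round(tn)

tp, fp, fn, tn = confusion_matrix(0.01, 0.95, 0.90)
print(tp, fp, fn, tn)            # → 95 990 5 8910
print(round(tp / (tp + fp), 4))  # PPV = 95/1085 → 0.0876
```

The 990 false positives dwarfing 95 true positives is the base rate fallacy made visible: the positive predictive value is just TP / (TP + FP).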