Calculate post-test probability from sensitivity, specificity, and prevalence with likelihood ratios, confusion matrix, sequential testing, and PPV/NPV sensitivity analysis.
The post-test probability calculator determines how a diagnostic test result changes the probability of disease. Starting from a pre-test probability (prevalence), it uses sensitivity and specificity to compute the post-test probability after a positive or negative result.
This tool applies Bayes' theorem to medical diagnosis — showing how positive predictive value (PPV) and negative predictive value (NPV) depend critically on prevalence. A test with 95% sensitivity and 95% specificity has a PPV of only 16% when prevalence is 1%, but 95% when prevalence is 50%.
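This prevalence dependence is easy to verify numerically; a minimal sketch (the function name is illustrative):

```python
def ppv(sens, spec, prev):
    """Positive predictive value: P(disease | positive test)."""
    true_pos = sens * prev                 # diseased who test positive
    false_pos = (1 - spec) * (1 - prev)    # healthy who test positive
    return true_pos / (true_pos + false_pos)

print(round(ppv(0.95, 0.95, 0.01), 2))  # 0.16 at 1% prevalence
print(round(ppv(0.95, 0.95, 0.50), 2))  # 0.95 at 50% prevalence
```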
Enter test characteristics and prevalence to see the confusion matrix, likelihood ratios, sequential testing analysis, and a sensitivity table showing how PPV/NPV change across prevalence levels. Before relying on the output, verify it against a known reference case and check that rounding and units match your reporting standards.
Understanding how test results change disease probability is critical for evidence-based medicine. This calculator makes Bayesian diagnostic reasoning accessible — showing exactly why prevalence matters, how sequential testing builds certainty, and which tests are strong enough for clinical decisions.
Essential for medical students, clinicians, epidemiologists, and anyone interpreting diagnostic test results.
LR+ = Sensitivity / (1 − Specificity). LR− = (1 − Sensitivity) / Specificity. Post-test odds = Pre-test odds × LR. PPV = (Sens×Prev) / (Sens×Prev + (1−Spec)×(1−Prev)).
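These formulas transcribe directly into code (a sketch; function names are illustrative):

```python
def lr_pos(sens, spec):
    return sens / (1 - spec)           # LR+ = Sensitivity / (1 - Specificity)

def lr_neg(sens, spec):
    return (1 - sens) / spec           # LR- = (1 - Sensitivity) / Specificity

def post_test_prob(pre_prob, lr):
    odds = pre_prob / (1 - pre_prob)   # probability -> odds
    post_odds = odds * lr              # post-test odds = pre-test odds x LR
    return post_odds / (1 + post_odds)

# Illustrative 90%-sensitive, 95%-specific test at 5% prevalence:
print(round(lr_pos(0.90, 0.95)))              # 18
print(round(post_test_prob(0.05, 18), 3))     # 0.486
```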
Result: PPV = 48.6%, NPV = 99.4%, LR+ = 18
At 5% prevalence, a positive result raises the probability from 5% to 48.6%; a negative result lowers it to 0.55%. An LR+ of 18 marks a strong diagnostic test — each positive result multiplies the disease odds by 18.
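The worked result can be reproduced end to end. The underlying sensitivity and specificity are not stated in the example, so the values below (90%/95%) are assumptions chosen to be consistent with the quoted LR+ of 18:

```python
sens, spec, prev = 0.90, 0.95, 0.05    # assumed: consistent with LR+ = 18

def post_test_prob(pre, lr):
    odds = pre / (1 - pre)
    return odds * lr / (1 + odds * lr)

lr_plus = sens / (1 - spec)            # ~18
lr_minus = (1 - sens) / spec           # ~0.105
print(round(100 * post_test_prob(prev, lr_plus), 1))   # 48.6 (post-test, positive)
print(round(100 * post_test_prob(prev, lr_minus), 2))  # 0.55 (post-test, negative)
```

NPV is the complement of the negative branch: 100 − 0.55 ≈ 99.4%.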
Mass screening for rare diseases suffers from the base rate fallacy. Even with a 99% sensitive and 99% specific test, screening for a disease with 0.1% prevalence produces 10× more false positives than true positives (PPV ≈ 9%). This is why targeted testing based on clinical risk factors is preferred over universal screening.
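The screening arithmetic can be checked with simple counts; a sketch over a hypothetical cohort of 100,000 people:

```python
cohort = 100_000
prev, sens, spec = 0.001, 0.99, 0.99          # 0.1% prevalence, 99%/99% test

diseased = cohort * prev                       # 100 people
true_pos = sens * diseased                     # 99 caught
false_pos = (1 - spec) * (cohort - diseased)   # 999 false alarms

print(round(false_pos / true_pos, 1))                # ~10x more false positives
print(round(true_pos / (true_pos + false_pos), 2))   # PPV ~ 0.09
```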
The ROC curve plots sensitivity vs (1−specificity) across all possible test thresholds. The area under the ROC curve (AUC) summarizes overall test performance. This calculator evaluates a single point on the ROC curve — the chosen threshold that determines the specific sensitivity/specificity trade-off.
Sequential testing (confirm a positive with a second test, counting only double positives) raises specificity with each step but may miss cases. Parallel testing (call positive if either test is positive) maximizes sensitivity at the cost of specificity. The optimal strategy depends on the cost of false positives vs false negatives in the clinical context.
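Under the usual conditional-independence assumption, the two strategies combine test characteristics in opposite ways; a sketch with two illustrative 90%-sensitive, 95%-specific tests:

```python
def serial(sens1, spec1, sens2, spec2):
    """Call positive only if BOTH tests are positive (confirmatory)."""
    return sens1 * sens2, 1 - (1 - spec1) * (1 - spec2)

def parallel(sens1, spec1, sens2, spec2):
    """Call positive if EITHER test is positive."""
    return 1 - (1 - sens1) * (1 - sens2), spec1 * spec2

print(serial(0.90, 0.95, 0.90, 0.95))    # sens ~0.81, spec ~0.9975
print(parallel(0.90, 0.95, 0.90, 0.95))  # sens ~0.99, spec ~0.9025
```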
At 1% prevalence, 99% of people are disease-free. Even a 95% specific test produces false positives in 5% of 99 = ~5 people, while catching 1% of 1 person with disease. Most positives are false — hence low PPV.
Sensitivity asks: of those WITH disease, what fraction tests positive? PPV asks: of those who TEST positive, what fraction has disease? Sensitivity is a test property; PPV depends on prevalence.
A likelihood ratio transforms pre-test odds into post-test odds: convert probability to odds (p/(1−p)), multiply by the LR, then convert back to probability. LR+ applies to positive results, LR− to negative results.
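The odds round-trip in code, with illustrative numbers (a 30% pre-test probability and an LR+ of 5):

```python
def prob_to_odds(p):
    return p / (1 - p)

def odds_to_prob(o):
    return o / (1 + o)

post = odds_to_prob(prob_to_odds(0.30) * 5)  # apply LR+ = 5
print(round(post, 3))  # 0.682
```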
Yes! The same framework applies to any binary classifier: spam detection, quality control, fraud detection. Sensitivity = recall, PPV = precision in machine learning terminology.
If the tests are truly independent (different methodologies), sequential testing multiplies likelihood ratios. Two positive results with LR+ = 10 give combined LR+ = 100. If using the same test twice, independence may not hold.
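Multiplying LRs for independent tests, sketched with an illustrative 5% pre-test probability:

```python
def post_prob(pre, lr):
    odds = pre / (1 - pre)
    return odds * lr / (1 + odds * lr)

lr = 10.0
print(round(post_prob(0.05, lr), 3))       # one positive: 0.345
print(round(post_prob(0.05, lr * lr), 3))  # two independent positives: 0.84
```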
A graphical tool connecting pre-test probability, likelihood ratio, and post-test probability on three scales. Drawing a line from pre-test through LR gives post-test probability. This calculator provides the same information numerically.