Post-Test Probability Calculator

Calculate post-test probability from sensitivity, specificity, and prevalence with likelihood ratios, confusion matrix, sequential testing, and PPV/NPV sensitivity analysis.

About the Post-Test Probability Calculator

The post-test probability calculator determines how a diagnostic test result changes the probability of disease. Starting from a pre-test probability (prevalence), it uses sensitivity and specificity to compute the post-test probability after a positive or negative result.

This tool applies Bayes' theorem to medical diagnosis — showing how positive predictive value (PPV) and negative predictive value (NPV) depend critically on prevalence. A test with 95% sensitivity and 95% specificity has a PPV of only 16% when prevalence is 1%, but 95% when prevalence is 50%.
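The prevalence dependence described above can be checked with a minimal Python sketch of the PPV formula (a direct application of Bayes' theorem, using the 95%/95% test characteristics from the text):

```python
def ppv(sens, spec, prev):
    """Positive predictive value: P(disease | positive test)."""
    tp = sens * prev              # true-positive mass
    fp = (1 - spec) * (1 - prev)  # false-positive mass
    return tp / (tp + fp)

for prev in (0.01, 0.05, 0.25, 0.50):
    print(f"prevalence {prev:.0%}: PPV {ppv(0.95, 0.95, prev):.0%}")
```

Running this reproduces the contrast in the paragraph: PPV climbs from 16% at 1% prevalence to 95% at 50% prevalence.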

Enter test characteristics and prevalence to see the confusion matrix, likelihood ratios, sequential testing analysis, and a sensitivity table showing how PPV/NPV change across different prevalence levels. Before relying on the output, verify it against a known reference case and check that rounding and units match your reporting standards.

Why Use This Post-Test Probability Calculator?

Understanding how test results change disease probability is critical for evidence-based medicine. This calculator makes Bayesian diagnostic reasoning accessible — showing exactly why prevalence matters, how sequential testing builds certainty, and which tests are strong enough for clinical decisions.

Essential for medical students, clinicians, epidemiologists, and anyone interpreting diagnostic test results.

How to Use This Calculator

  1. Enter the test's sensitivity (true positive rate) as a percentage.
  2. Enter the test's specificity (true negative rate) as a percentage.
  3. Enter the pre-test probability or disease prevalence.
  4. Set the number of sequential tests to see how repeated positives increase certainty.
  5. Review the confusion matrix per 10,000 people to understand false positive/negative counts.
  6. Study the PPV/NPV vs prevalence table to see prevalence effects.
  7. Use presets for common medical tests.

Formula

LR+ = Sensitivity / (1 − Specificity)
LR− = (1 − Sensitivity) / Specificity
Post-test odds = Pre-test odds × LR
PPV = (Sens × Prev) / (Sens × Prev + (1 − Spec) × (1 − Prev))
NPV = (Spec × (1 − Prev)) / (Spec × (1 − Prev) + (1 − Sens) × Prev)

Example Calculation

Inputs: Sensitivity 90%, Specificity 95%, Prevalence 5%. Result: PPV = 48.6%, NPV = 99.4%, LR+ = 18

At 5% prevalence, a positive result raises the probability of disease from 5% to 48.6%, while a negative result lowers it to 0.55%. An LR+ of 18 indicates a strong diagnostic test: each positive result multiplies the disease odds by 18.
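The example can be verified with a short Python sketch of the formulas, assuming a test with 90% sensitivity and 95% specificity at 5% prevalence (values consistent with the stated LR+, PPV, and NPV):

```python
def diagnostics(sens, spec, prev):
    """LR+, LR-, PPV, NPV from sensitivity, specificity, and prevalence."""
    lr_pos = sens / (1 - spec)
    lr_neg = (1 - sens) / spec
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return lr_pos, lr_neg, ppv, npv

lr_pos, lr_neg, ppv, npv = diagnostics(0.90, 0.95, 0.05)
print(f"LR+ = {lr_pos:.0f}, PPV = {ppv:.1%}, NPV = {npv:.1%}")
# → LR+ = 18, PPV = 48.6%, NPV = 99.4%
```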

Tips & Best Practices

The Base Rate Fallacy in Screening

Mass screening for rare diseases suffers from the base rate fallacy. Even with a 99% sensitive and 99% specific test, screening for a disease with 0.1% prevalence produces 10× more false positives than true positives (PPV ≈ 9%). This is why targeted testing based on clinical risk factors is preferred over universal screening.
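The base-rate arithmetic can be made concrete with a sketch over a hypothetical cohort of 100,000 people (the cohort size is an assumption for illustration; the 99%/99% test and 0.1% prevalence come from the text):

```python
# Hypothetical screening cohort: 100,000 people, 99% sens/spec, 0.1% prevalence
n, sens, spec, prev = 100_000, 0.99, 0.99, 0.001
diseased = n * prev            # 100 people with disease
healthy = n - diseased         # 99,900 without
tp = sens * diseased           # 99 true positives
fp = (1 - spec) * healthy      # 999 false positives
print(f"TP={tp:.0f}  FP={fp:.0f}  FP/TP={fp/tp:.1f}  PPV={tp/(tp+fp):.1%}")
# → TP=99  FP=999  FP/TP=10.1  PPV=9.0%
```

False positives outnumber true positives roughly 10 to 1, exactly as the paragraph states.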

ROC Curves and Optimal Thresholds

The ROC curve plots sensitivity vs (1−specificity) across all possible test thresholds. The area under the ROC curve (AUC) summarizes overall test performance. This calculator evaluates a single point on the ROC curve — the chosen threshold that determines the specific sensitivity/specificity trade-off.

Sequential and Parallel Testing Strategies

Sequential testing (call positive only if a repeat test confirms the first positive) increases specificity with each step but may miss cases. Parallel testing (call positive if either test is positive) maximizes sensitivity at the cost of specificity. The optimal strategy depends on the cost of false positives vs false negatives in the clinical context.
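Under an independence assumption, the combined characteristics of the two strategies follow from basic probability. A minimal sketch (the two 90%/95% tests are illustrative values, not from the text):

```python
def serial_and(s1, sp1, s2, sp2):
    """Call positive only if BOTH tests are positive (sequential strategy)."""
    return s1 * s2, 1 - (1 - sp1) * (1 - sp2)

def parallel_or(s1, sp1, s2, sp2):
    """Call positive if EITHER test is positive (parallel strategy)."""
    return 1 - (1 - s1) * (1 - s2), sp1 * sp2

# Two independent tests, each 90% sensitive / 95% specific (illustrative)
print(serial_and(0.90, 0.95, 0.90, 0.95))   # sensitivity falls, specificity rises
print(parallel_or(0.90, 0.95, 0.90, 0.95))  # sensitivity rises, specificity falls
```

Serial testing yields roughly 81% sensitivity but 99.75% specificity; parallel testing yields 99% sensitivity but 90.25% specificity — the trade-off described above.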

Frequently Asked Questions

Why is PPV so low when prevalence is low?

At 1% prevalence, 99 of every 100 people are disease-free. Even a 95% specific test produces false positives in 5% of those 99 ≈ 5 people, while a sensitive test catches roughly the 1 person who actually has the disease. Most positives are false — hence the low PPV.

What's the difference between sensitivity and PPV?

Sensitivity asks: of those WITH disease, what fraction tests positive? PPV asks: of those who TEST positive, what fraction has disease? Sensitivity is a test property; PPV depends on prevalence.

How do likelihood ratios work?

LR transforms pre-test odds to post-test odds. Convert probability to odds (p/(1−p)), multiply by LR, convert back. LR+ applies to positive results, LR− to negative results.
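The three-step conversion described above (probability → odds, multiply by LR, odds → probability) can be sketched in a few lines of Python:

```python
def post_test_probability(p, lr):
    """Probability → odds, multiply by the likelihood ratio, odds → probability."""
    odds = p / (1 - p)
    post_odds = odds * lr
    return post_odds / (1 + post_odds)

# Positive result: 5% pre-test probability, LR+ = 18 (the example's values)
print(post_test_probability(0.05, 18))   # ≈ 0.486
```

The same function handles negative results: pass LR− instead of LR+ to get the post-test probability after a negative test.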

Can I use this for non-medical tests?

Yes! The same framework applies to any binary classifier: spam detection, quality control, fraud detection. Sensitivity = recall, PPV = precision in machine learning terminology.

What if I run the same test twice?

If the tests are truly independent (different methodologies), sequential testing multiplies likelihood ratios. Two positive results with LR+ = 10 give combined LR+ = 100. If using the same test twice, independence may not hold.
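Multiplying likelihood ratios for independent tests can be sketched as follows (the 5% pre-test probability and LR+ = 10 are illustrative values matching the paragraph):

```python
# Two independent positive results, each with LR+ = 10
pretest = 0.05
combined_lr = 10 * 10               # independent LRs multiply: 100
odds = pretest / (1 - pretest)      # pre-test odds ≈ 0.053
post_odds = odds * combined_lr
post = post_odds / (1 + post_odds)  # back to probability
print(round(post, 2))               # → 0.84
```

Two moderately strong positives lift a 5% pre-test probability to about 84% — provided the independence assumption holds.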

What is the Fagan nomogram?

A graphical tool connecting pre-test probability, likelihood ratio, and post-test probability on three scales. Drawing a line from pre-test through LR gives post-test probability. This calculator provides the same information numerically.

Related Pages