Calculate specificity (true negative rate) from TP, FP, FN, TN. Includes false positive rate, confidence interval, prevalence-adjusted PPV table, and diagnostic metrics.
Specificity is a diagnostic test's ability to correctly identify negative cases — those without the condition. A highly specific test produces few false positives, making it excellent for "ruling in" a diagnosis (SpPIn: Specificity Positive rules In). Specificity is calculated as TN/(TN+FP), the proportion of true negatives among all actual negatives.
This calculator computes specificity and its confidence interval from a confusion matrix, along with the false positive rate (1 − specificity), and shows how poor specificity at low prevalence can make PPV unacceptably low. The prevalence-impact table demonstrates how many false positives per true positive you can expect at different disease rates.
Specificity analysis is critical in clinical diagnostics, drug screening, security detection, industrial quality control, and any binary classification system where false alarms have significant consequences. Use the preset examples to load common values instantly, or type in custom inputs to see results in real time. The output updates as you type, making it practical to compare different scenarios without resetting the page.
False positives have real costs: unnecessary follow-up testing, psychological distress from false positive cancer screens, wrongful drug test accusations, and wasted resources on false security alerts. This calculator quantifies these risks by computing the FPR, the expected false positives per 1,000 tested, and prevalence-adjusted predictive values. This tool is designed for quick, accurate results without manual computation. Whether you are a student working through coursework, a professional verifying a result, or an educator preparing examples, accurate answers are always just a few keystrokes away.
Specificity (True Negative Rate): TNR = TN / (TN + FP)
False Positive Rate: FPR = FP / (FP + TN) = 1 − Specificity
95% CI for Specificity: TNR ± 1.96 × √(TNR(1−TNR) / (TN+FP))
Prevalence-adjusted PPV: PPV = (Sens × Prev) / (Sens × Prev + FPR × (1−Prev))
FP per TP ratio: FPR × (1 − Prev) / (Sens × Prev)
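The formulas above can be sketched as one small function. This is an illustrative implementation, not the calculator's own code; the function and parameter names are my own, and the confidence interval is the simple Wald interval given above.

```python
import math

def specificity_metrics(tp, fp, fn, tn, prevalence=0.01):
    """Compute specificity, FPR, a 95% Wald CI, and prevalence-adjusted PPV."""
    tnr = tn / (tn + fp)                      # specificity (true negative rate)
    fpr = fp / (fp + tn)                      # false positive rate = 1 - TNR
    # 95% Wald confidence interval for specificity, clipped to [0, 1]
    half_width = 1.96 * math.sqrt(tnr * (1 - tnr) / (tn + fp))
    ci = (max(0.0, tnr - half_width), min(1.0, tnr + half_width))
    sens = tp / (tp + fn)                     # sensitivity, needed for PPV
    ppv = (sens * prevalence) / (sens * prevalence + fpr * (1 - prevalence))
    fp_per_tp = fpr * (1 - prevalence) / (sens * prevalence)
    return {"specificity": tnr, "fpr": fpr, "ci95": ci,
            "ppv": ppv, "fp_per_tp": fp_per_tp}

# Example confusion matrix: 898 TN and 2 FP among 900 negatives
print(specificity_metrics(tp=90, fp=2, fn=10, tn=898))
```

Note that PPV and the FP-per-TP ratio depend on sensitivity and prevalence, not just on the confusion matrix's negative column, which is why the function takes a prevalence argument.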
Result: Specificity = 99.78%
With 898 true negatives and only 2 false positives among 900 negative individuals, specificity is 99.78%. The FPR is just 0.22%, meaning approximately 2.2 false positives per 1,000 healthy people tested. Even at a prevalence as low as 1%, the PPV would be 80.4% (assuming 90% sensitivity).
Consider screening a million people for a disease with 0.1% prevalence using a 95% sensitivity, 99% specificity test. Expected results: 950 true positives, 50 false negatives, 9,990 false positives, 989,010 true negatives. PPV = 950/(950+9,990) = 8.7%. Over 90% of positive results are false! This is why ultra-high specificity (>99.9%) is required for effective mass screening of rare conditions.
A powerful approach uses sequential testing. The first test has high sensitivity (catches nearly all cases); the second has high specificity (eliminates false positives). Since a healthy person is falsely flagged only if both tests return positive, if Test 1 has 99% sensitivity and 90% specificity, and Test 2 has 95% specificity, the combined specificity is 1 − (1 − 90%) × (1 − 95%) = 1 − 10% × 5% = 99.5%, dramatically reducing false positives.
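A minimal sketch of the two-stage calculation, assuming only Test-1 positives are sent to Test 2. Test 2's sensitivity is not given in the text, so the 95% value below is an illustrative assumption used to show the sensitivity cost of the second stage.

```python
def combined_sequential(spec1, spec2, sens1, sens2):
    """Two-stage testing where only Test-1 positives proceed to Test 2.
    A healthy person is a final false positive only if BOTH tests flag them,
    so combined FPR = (1 - spec1) * (1 - spec2).
    A diseased person is detected only if BOTH tests flag them,
    so combined sensitivity = sens1 * sens2."""
    combined_spec = 1 - (1 - spec1) * (1 - spec2)
    combined_sens = sens1 * sens2
    return combined_spec, combined_sens

# sens2=0.95 is an assumed value for illustration, not from the text
spec, sens = combined_sequential(spec1=0.90, spec2=0.95, sens1=0.99, sens2=0.95)
print(f"combined specificity = {spec:.1%}, combined sensitivity = {sens:.1%}")
```

The trade-off is visible in the output: specificity jumps to 99.5%, but combined sensitivity falls to the product of the two stages' sensitivities.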
In ML classification, specificity equals the true negative rate and is particularly important for imbalanced datasets. The ROC curve plots sensitivity (true positive rate) against 1−specificity (false positive rate) across all classification thresholds, providing a threshold-independent assessment of model performance.
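An ROC curve is just the set of (1 − specificity, sensitivity) pairs obtained by sweeping the classification threshold. The toy scores and labels below are invented purely for illustration; in practice you would use a library routine such as scikit-learn's roc_curve.

```python
def roc_points(scores, labels):
    """Sweep thresholds over the scores and return (FPR, TPR) pairs.
    labels: 1 = positive (condition present), 0 = negative."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for thr in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 0)
        points.append((fp / neg, tp / pos))   # (1 - specificity, sensitivity)
    return points

# Hypothetical classifier scores for six subjects
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]
print(roc_points(scores, labels))
```

Each lowering of the threshold can only increase both counts, so the curve moves up and to the right, ending at (1, 1) where everyone tests positive.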
Specificity measures the proportion of actual negatives that are correctly identified by the test. A specificity of 99% means that among 100 healthy people, 99 will correctly test negative and 1 will have a false positive result.
When screening large populations for a rare condition, even small false positive rates generate many false alarms because the vast majority of those tested are healthy. High specificity minimizes unnecessary follow-up testing, anxiety, and costs.
Context-dependent. For confirmatory diagnostic tests: >99%. For screening tests: >95%. For general classification: >90%. The required specificity depends on the cost of a false positive relative to a false negative.
The false positive rate (1 − specificity) is analogous to the Type I error rate (α) in hypothesis testing — it's the probability of incorrectly rejecting a true null hypothesis (declaring disease when none exists). Understanding this concept helps you apply the calculator correctly and interpret the results with confidence.
Generally, there's a trade-off, traced out by the ROC curve: raising the diagnostic threshold increases specificity but lowers sensitivity. Improving both simultaneously requires fundamentally better test technology, though two-stage testing can raise overall specificity.
Specificity = P(test negative | no disease) — a test property. NPV = P(no disease | test negative) — depends on prevalence. Specificity is constant; NPV increases in low-prevalence populations.
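The distinction can be made concrete with Bayes' rule. In this sketch (function name and the 90%/95% test characteristics are illustrative), specificity stays fixed while NPV is recomputed at several prevalences.

```python
def npv(sens, spec, prev):
    """NPV = P(no disease | negative test), via Bayes' rule:
    true negatives / (true negatives + false negatives) among test-negatives."""
    true_neg = spec * (1 - prev)
    false_neg = (1 - sens) * prev
    return true_neg / (true_neg + false_neg)

# Specificity is fixed at 95%, yet NPV shifts with prevalence:
for prev in (0.001, 0.01, 0.10, 0.50):
    print(f"prevalence {prev:>5.1%}: NPV = {npv(0.90, 0.95, prev):.2%}")
```

The output confirms the claim: NPV is near 100% when almost everyone tested is healthy, and falls as prevalence rises, even though the test's specificity never changes.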