Calculate static analysis finding rate and false positive rate from SAST scan results. Track true vs false positive trends over time.
Static Application Security Testing (SAST) tools scan source code to find vulnerabilities before runtime. While essential for shift-left security, SAST tools are notorious for high false positive rates that can erode developer trust and slow remediation. Tracking the ratio of true to false positives across scans is crucial for tuning tools, measuring effectiveness, and maintaining developer confidence.
This calculator helps you analyze SAST scan results by computing the finding rate (findings per scan), false positive rate, true positive rate, and actionable finding density. Enter your total scan results and confirmed false positives to see how effective your SAST configuration is and where tuning is needed.
Measured consistently across scans, these figures give security and engineering leaders an evidence base for decisions about tool selection, rule tuning, and remediation priorities, and they reveal whether scanner accuracy is improving or degrading over time.
A high false positive rate wastes developer time triaging non-issues and causes alert fatigue, leading teams to ignore real findings. Tracking these metrics helps you tune SAST rules, demonstrate security testing ROI, and maintain a productive relationship between security and development teams. Accurate numbers also make it easier to justify the security program in reviews and roadmap discussions with engineering leadership.
Finding Rate = Total Findings / Number of Scans. False Positive Rate = False Positives / Total Findings × 100. True Positive Rate = (Total Findings − False Positives) / Total Findings × 100. Actionable Findings = Total Findings − False Positives.
Result: FP Rate: 40% | 26.7 findings/scan | 192 true positives
From 12 scans producing 320 total findings, 128 were confirmed false positives (40% FP rate). This leaves 192 actionable findings (16 per scan). A 40% FP rate is typical for uncustomized SAST tools but should be reduced through rule tuning to below 20%.
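The formulas and the worked example above can be sketched as a small helper; the function name and return structure here are illustrative, not part of any specific tool:

```python
def sast_metrics(total_findings, false_positives, scans):
    """Compute SAST effectiveness metrics from scan results."""
    true_positives = total_findings - false_positives
    return {
        "finding_rate": total_findings / scans,             # findings per scan
        "fp_rate": false_positives / total_findings * 100,  # percent
        "tp_rate": true_positives / total_findings * 100,   # percent
        "actionable": true_positives,                       # true positives
    }

# Worked example from above: 12 scans, 320 findings, 128 confirmed false positives
m = sast_metrics(320, 128, 12)
print(f"FP Rate: {m['fp_rate']:.0f}% | {m['finding_rate']:.1f} findings/scan "
      f"| {m['actionable']} true positives")
# → FP Rate: 40% | 26.7 findings/scan | 192 true positives
```

Dividing the 192 actionable findings by 12 scans gives the 16 actionable findings per scan quoted above.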
Beyond false positive rate, track: detection rate (the percentage of known vulnerabilities the tool finds), fix rate (the percentage of findings that are remediated), mean time to remediate, and finding recurrence rate.
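Two of these complementary metrics are simple ratios; a minimal sketch, assuming you track a set of known (seeded or previously confirmed) vulnerabilities and remediation counts (the input values below are made up for illustration):

```python
def detection_rate(found_known, total_known):
    """Percentage of known vulnerabilities the tool actually flags."""
    return found_known / total_known * 100

def fix_rate(remediated, actionable):
    """Percentage of actionable (true positive) findings that were remediated."""
    return remediated / actionable * 100

# Hypothetical example: tool flags 18 of 20 seeded vulns; 150 of 192 TPs fixed
print(f"Detection rate: {detection_rate(18, 20):.0f}%")  # Detection rate: 90%
print(f"Fix rate: {fix_rate(150, 192):.1f}%")            # Fix rate: 78.1%
```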
Effective SAST tuning is iterative: run scans, review findings, confirm true/false positives, adjust rules, and measure the impact. Focus on eliminating high-volume false positive patterns first for maximum efficiency gain.
SAST adoption depends on developer experience. Provide findings in the IDE, include fix guidance, and never block builds on false positives. A developer-friendly SAST workflow dramatically improves remediation rates.
Run incremental SAST on pull requests (fast, focused feedback) and full scans on main branch merges (comprehensive coverage). Gate merges on critical/high findings only to avoid blocking development velocity.
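A severity-based merge gate like the one described can be sketched as follows; the finding structure and field names are assumptions for illustration, not a real SAST tool's API:

```python
# Severities that should block a merge; lower severities are reported
# but never gate, so noise doesn't stall development velocity.
BLOCKING_SEVERITIES = {"critical", "high"}

def should_block_merge(findings):
    """Return True if any finding is at a blocking severity level."""
    return any(f["severity"] in BLOCKING_SEVERITIES for f in findings)

findings = [
    {"id": "XSS-1", "severity": "high"},
    {"id": "LINT-9", "severity": "low"},
]
print(should_block_merge(findings))  # True — the high-severity finding gates
```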
Below 20% is considered good. Well-tuned commercial SAST tools achieve 10–15%. Open source tools may have 30–50% without customization. Any rate above 50% typically causes developers to stop reviewing SAST results.
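The benchmark bands above can be encoded as a simple assessment helper (the band labels are paraphrases of the thresholds stated in the text):

```python
def assess_fp_rate(fp_rate_pct):
    """Map a false positive rate (percent) to the benchmark bands above."""
    if fp_rate_pct > 50:
        return "critical: developers likely ignore results"
    if fp_rate_pct >= 20:
        return "needs tuning"
    if fp_rate_pct > 15:
        return "good"
    return "well-tuned"

print(assess_fp_rate(40))  # needs tuning
print(assess_fp_rate(12))  # well-tuned
```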
Customize rules for your frameworks and libraries, suppress known false positive patterns with verification, configure language-specific rules, exclude generated code and test files, and use SAST tools that support flow-sensitive analysis.
Ideally on every commit or pull request for incremental analysis, and full scans weekly or per release. CI/CD integration ensures consistent scanning without manual effort. Incremental scans are faster and more developer-friendly.
Multiple tools increase coverage but also increase false positives and operational complexity. One well-tuned primary tool is usually more effective than multiple poorly configured tools. Add a second tool only if your primary tool has known blind spots.
SAST analyzes source code without executing it (white-box). DAST tests the running application from outside (black-box). SAST finds coding errors early; DAST finds runtime and configuration issues. Both are complementary and recommended.
Track: cost of findings discovered by SAST vs. cost if found in production (10–100x multiplier), reduction in production security incidents, developer time saved by early detection, and compliance requirements met through automated scanning.
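The cost-avoidance piece of this ROI story can be estimated as a back-of-the-envelope calculation; both the per-finding fix cost ($200) and the default 10x multiplier (the conservative end of the 10–100x range above) are assumptions you should replace with your own data:

```python
def sast_cost_avoided(findings_fixed, cost_in_dev, prod_multiplier=10):
    """Estimate cost avoided by fixing findings in development rather than
    production, given a production-cost multiplier (assumed 10x here)."""
    return findings_fixed * cost_in_dev * (prod_multiplier - 1)

# Example: 192 true positives fixed at an assumed $200 each in development
print(f"${sast_cost_avoided(192, 200):,} avoided")  # $345,600 avoided
```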