Emily Patterson’s latest Medium article, “A Product Person’s Guide to the Application Security Market: SAST 2025,” offers a timely reality check on the state of SAST. She notes that the SAST market is “completely saturated,” and most vendors are fighting for business by touting low false positive rates. This industry-wide obsession with false positive reduction has become a red herring—a distraction from what actually matters in application security.
In practice, the real issue isn’t just how quiet your scanner is; it’s how accurate it is at finding the vulnerabilities that truly matter.
A tool that proudly reports “zero false positives” but silently misses critical bugs is far more dangerous than one that occasionally cries wolf.
If your SAST tool gives you a green light while serious flaws lurk in the code, those missed issues create a false sense of security. The cost of a missed vulnerability (a false negative) in production—think data breaches, account takeovers—vastly outweighs the annoyance of a handful of false alarms. In other words, focusing solely on reducing noise can blind us to the real goal: catching all the important vulnerabilities.
This is why accuracy (and thoroughness) should be the North Star metric for product and security leaders evaluating AppSec tools. It’s time to reframe the conversation. Instead of asking, “How do we eliminate false positives?” we should be asking, “Is my SAST engine truly understanding my code and catching the real risks?”
Modern applications are full of complex logic and context-specific gotchas that simple pattern-matching tools often overlook. A few examples (with a short illustrative sketch after the list):
- authentication bypasses hidden in logic flow
- sensitive data exposure through indirect pathways
- SSRF vulnerabilities requiring understanding of network calls
- authorization checks missing in certain contexts
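To make the last of those concrete, here’s a minimal, hypothetical sketch in Python (the names and data are ours, purely illustrative, not taken from any tested app) of an authorization check that exists in the code but is skipped on one path. There’s no signature to grep for; you have to follow the control flow:

```python
# Hypothetical example: an authorization check that is missing in one context.
# Every name and value here is illustrative.
from dataclasses import dataclass

@dataclass
class User:
    id: int
    role: str  # "user" or "admin"

@dataclass
class Report:
    owner_id: int
    body: str

REPORTS = {1: Report(owner_id=42, body="quarterly numbers for the whole org")}

def get_report(user: User, report_id: int, fmt: str = "full") -> str:
    report = REPORTS[report_id]  # looked up by ID; the ownership check lives below

    if fmt == "summary":
        # "Fast path" added in a later change: it returns data BEFORE the
        # authorization check ever runs. No regex flags this; a reviewer has
        # to follow the control flow to see the bypass.
        return report.body[:20]

    if report.owner_id != user.id and user.role != "admin":
        raise PermissionError("not your report")
    return report.body

# Any authenticated user can read someone else's report via the summary path:
print(get_report(User(id=7, role="user"), 1, fmt="summary"))
```

A grep-style rule can match a dangerous call it already knows about; it can’t notice that a check is present in the file but unreachable on the path an attacker will actually take.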
Traditional static analysis tools that rely on known patterns and rules (like Snyk Code, GitHub CodeQL, SonarQube, or Semgrep) tend to excel at finding the “usual suspects” (e.g., an obvious SQL injection or XSS) but struggle with vulnerabilities that don’t fit a known signature. As one of our recent tests showed, one popular scanner even passed a pull request with ZERO warnings even though it contained multiple critical flaws. That scenario would have developers celebrating a dangerously insecure build.
DryRun Security flips this paradigm by prioritizing contextual accuracy over simplistic pattern matching. Instead of relying on regexes or a limited set of rules, our AI-powered engine understands the context and intent of code changes. It performs contextual security analysis, reasoning about how pieces of code interact, to detect issues that other tools miss.
In the words of Emily Patterson, DryRun Security's AI code reviews “feel like they could replace a traditional SAST tool easily.”
We agree! Our own head-to-head comparisons back this up: DryRun consistently caught critical vulnerabilities that all the big-name tools missed. For example, DryRun was able to identify that multiple error messages in a login flow amounted to a user enumeration weakness, and that using a certain API without proper checks led to an authorization bypass: nuanced flaws that static analyzers oblivious to context would never flag.
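As a hedged illustration of that first finding (a minimal sketch of our own, not code from the applications we tested; a real handler would also hash passwords), here’s how divergent login error messages become a user enumeration weakness:

```python
# Hypothetical login handler illustrating user enumeration.
# Names, data, and the plaintext password are purely for illustration.
USERS = {"alice": "correct horse battery staple"}

def login(username: str, password: str) -> str:
    if username not in USERS:
        return "Unknown username"    # leaks that the account doesn't exist
    if USERS[username] != password:
        return "Incorrect password"  # leaks that the account DOES exist
    return "Welcome!"

# An attacker can tell the two failure cases apart and harvest valid usernames:
print(login("alice", "wrong"))  # "Incorrect password" -> 'alice' exists
print(login("mallory", "x"))    # "Unknown username"   -> no such account

# The safer pattern collapses both failures into one generic message:
def login_safe(username: str, password: str) -> str:
    if USERS.get(username) != password:
        return "Invalid username or password"
    return "Welcome!"
```

Each message is harmless on its own; the weakness only emerges by comparing the two responses, which is exactly the kind of cross-cutting reasoning a per-line pattern match never performs.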
We’ve documented these findings across several language ecosystems, putting DryRun Security up against traditional SAST scanners in real open-source applications:
In a RailsGoat test app, DryRun found every critical vulnerability (6/6), while other tools (Snyk, CodeQL, SonarQube, and others) missed most of them; some caught none at all. Crucial issues like SSRF and logic-based auth bypasses were caught only by DryRun’s context-aware analysis, highlighting how purely pattern-based tools left dangerous blind spots.
DryRun again caught 100% of the high-impact flaws in this Django project analysis, including subtle IDOR (access control) issues and weak crypto usage that others failed to identify. Two of the competing tools confidently reported “No issues found,” which would have been a disastrous false assurance. Pattern-driven scanners did okay on the obvious stuff, but they fell short on anything requiring deeper understanding of the code’s intent. DryRun Security flagged those business logic flaws and security oversights that went completely under the radar for the rest.
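To show why “weak crypto usage” can take context to spot (a hypothetical Python sketch of ours, not code from the Django app in that comparison): Python’s `random` module is fine almost everywhere, so a rule that flags every use of it would bury teams in noise, yet in this particular context, minting a password-reset token, it’s a genuine weakness:

```python
# Hypothetical password-reset token generation; names are illustrative.
import random
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def make_reset_token_weak() -> str:
    # random is a deterministic PRNG (Mersenne Twister); its output can be
    # predicted after observing enough values, so these tokens are guessable.
    # Flagging every use of `random` would be hopelessly noisy; the problem
    # only exists because this value gates account recovery.
    return "".join(random.choice(ALPHABET) for _ in range(32))

def make_reset_token_safe() -> str:
    # secrets draws from the OS CSPRNG and is intended for exactly this job.
    return secrets.token_urlsafe(32)

print(make_reset_token_weak())
print(make_reset_token_safe())
```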
In a head-to-head on an ASP.NET Core vulnerable API, DryRun detected 6 out of 6 critical issues, whereas a popular enterprise SAST tool found none. SQL injection, XSS, SSRF, authorization bypass, user enumeration, hardcoded credentials: you name it, DryRun Security caught it, while the other tools missed most of these categories or skipped them entirely. One legacy tool’s default configuration didn’t flag even the obvious SQL injection, providing no help at all while giving a dangerous sense of safety. The takeaway was clear: contextual security analysis beats pattern matching, hands down. DryRun Security behaved like a seasoned security code reviewer, versus the simplistic text/pattern searches of the other tools.
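That pattern-vs-context gap is easy to see with just one layer of indirection. Here’s a small hypothetical sketch (written in Python for consistency with the earlier examples, not in C#) where the dangerous string building happens far from the `execute()` call, so a purely textual rule never connects the two:

```python
# Hypothetical example of SQL injection hidden behind a helper function.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'a@example.com')")

def build_filter(column: str, value: str) -> str:
    # The string interpolation lives here, in a different function from the
    # execute() call, so a rule matching "execute(... + ...)" never fires.
    return f"SELECT * FROM users WHERE {column} = '{value}'"

def find_user(value: str):
    return conn.execute(build_filter("name", value)).fetchall()

print(find_user("alice"))        # normal use
print(find_user("' OR '1'='1"))  # crafted value escapes the quoting: dumps every row

# The safe version binds parameters instead of interpolating them:
def find_user_safe(value: str):
    return conn.execute("SELECT * FROM users WHERE name = ?", (value,)).fetchall()

print(find_user_safe("' OR '1'='1"))  # returns [] (no injection)
```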
(For those interested in the nitty-gritty, you can read the full breakdowns on our blog for Ruby on Rails, Python/Django, and C#. Each tells a similar story of DryRun vs. Snyk, CodeQL, Semgrep, and SonarQube.)
The Bottom Line
Product and security leaders: don’t be distracted by false-positive mania. It’s a red herring. What counts is whether your application security tooling is actually catching the things that could get you breached.
As Emily Patterson wisely advises, if you’re shopping for SAST in 2025, look for solutions with advanced AI baked in.
Otherwise you risk using a commodity tool that might look quiet but isn’t really doing the job.
DryRun Security’s results demonstrate that you can have both thorough coverage and low noise: by focusing on real, contextual vulnerabilities, we ensure critical issues aren’t missed while keeping developer trust (no one wants to wade through thousands of irrelevant findings). In essence, we strive to maximize true positives and meaningful findings, not just minimize false positives at all costs.
Oh, and one more thing… 📣 We’re not stopping at Ruby, Python, and C#. Tomorrow, we’re dropping our Java Spring edition of the SAST showdown, where DryRun goes head-to-head with Sonar, Snyk, Semgrep, and CodeQL on a Java Spring codebase. If you’re curious whether this accuracy gap persists in the Java world (hint: it does), stay tuned for that blog post. It’s going to be a continuation of this discussion, and we’re excited to share the results!
It’s time to change the narrative in AppSec: less about bragging rights for “low false positives,” more about confidently catching the vulnerabilities that matter most.
If you’re ready to join the Contextual Security Analysis revolution and find real risks that your traditional SAST can’t, schedule some time with us and we’d be happy to walk through a demo with you. If you’d like to learn more about Contextual Security Analysis, download “A Guide On Contextual Security Analysis.”