| Tool | Accuracy of Findings | Detects Non-Pattern-Based Issues? | Coverage of SAST Findings | Speed of Scanning | Usability & Dev Experience |
| --- | --- | --- | --- | --- | --- |
| DryRun Security | Very high – caught multiple critical issues missed by others | Yes – context-based analysis, logic flaws & SSRF | Broad coverage of standard vulns, logic flaws, and extendable | Near real-time PR feedback | |
| Snyk Code | High on well-known patterns (SQLi, XSS), but misses other categories | Limited – AI-based, focuses on recognized vulnerabilities | Good coverage of standard vulns; may miss SSRF or advanced auth logic issues | Fast, often near PR speed | Decent GitHub integration, but rules are a black box |
| GitHub Advanced Security (CodeQL) | Very high precision for known queries, low false positives | Partial – strong dataflow for known issues, needs custom queries | Good for SQLi and XSS, but logic flaws require advanced CodeQL experience | Moderate to slow (GitHub Action based) | Requires CodeQL expertise for custom logic |
| Semgrep | Medium, but there is a good community for adding rules | Primarily pattern-based with limited dataflow | Decent coverage with the right rules; can still miss advanced logic or SSRF | Fast scans | Has custom rules, but dev teams must maintain them |
| SonarQube | Low – misses serious issues in our testing | Limited – mostly pattern-based, code quality oriented | Basic coverage for standard vulns; many hotspots require manual review | Moderate, usually in CI | Dashboard-based approach, can pass “quality gate” despite real vulns |
| Vulnerability Class | Snyk (partial) | GitHub (CodeQL) (partial) | Semgrep | SonarQube | DryRun Security |
| --- | --- | --- | --- | --- | --- |
| SQL Injection* | | | | | |
| Cross-Site Scripting (XSS) | | | | | |
| SSRF | | | | | |
| Auth Flaw / IDOR | | | | | |
| User Enumeration | | | | | |
| Hardcoded Token | | | | | |
| Tool | Accuracy of Findings | Detects Non-Pattern-Based Issues? | Coverage of C# Vulnerabilities | Scan Speed | Developer Experience |
| --- | --- | --- | --- | --- | --- |
| DryRun Security | Very high – caught all critical flaws missed by others | Yes – context-based analysis finds logic errors, auth flaws, etc. | Broad coverage of OWASP Top 10 vulns plus business logic issues | Near real-time (PR comment within seconds) | Clear single PR comment with detailed insights; no config or custom scripts needed |
| Snyk Code | High on known patterns (SQLi, XSS), but misses logic/flow bugs | Limited – focuses on recognizable vulnerability patterns | Good for standard vulns; may miss SSRF or auth logic issues | Fast (integrates into PR checks) | Decent GitHub integration, but rules are a black box (no easy customization) |
| GitHub Advanced Security (CodeQL) | Low – missed everything except SQL injection | Mostly pattern-based | Low – only discovered SQL injection | Slowest of all, but finished in 1 minute | Concise annotation with a suggested fix and optional auto-remediation |
| Semgrep | Medium – finds common issues with community rules, some misses | Primarily pattern-based, limited data flow analysis | Decent coverage with the right rules; misses advanced logic flaws | Very fast (runs as lightweight CI) | Custom rules possible, but require maintenance and security expertise |
| SonarQube | Low – missed serious issues in our testing | Mostly pattern-based (code quality focus) | Basic coverage for known vulns; many issues flagged as “hotspots” require manual review | Moderate (runs in CI/CD pipeline) | Results in dashboard; risk of false sense of security if quality gate passes despite vulnerabilities |
| Vulnerability Class | Snyk Code | GitHub Advanced Security (CodeQL) | Semgrep | SonarQube | DryRun Security |
| --- | --- | --- | --- | --- | --- |
| SQL Injection (SQLi) | | | | | |
| Cross-Site Scripting (XSS) | | | | | |
| Server-Side Request Forgery (SSRF) | | | | | |
| Auth Logic/IDOR | | | | | |
| User Enumeration | | | | | |
| Hardcoded Credentials | | | | | |
| Vulnerability | DryRun Security | Semgrep | GitHub CodeQL | SonarQube | Snyk Code |
| --- | --- | --- | --- | --- | --- |
| 1. Remote Code Execution via Unsafe Deserialization | | | | | |
| 2. Code Injection via eval() Usage | | | | | |
| 3. SQL Injection in a Raw Database Query | | | | | |
| 4. Weak Encryption (AES ECB Mode) | | | | | |
| 5. Broken Access Control / Logic Flaw in Authentication | | | | | |
| Total Found | 5/5 | 3/5 | 1/5 | 1/5 | 0/5 |
Contextual Security Analysis
April 2, 2025

False Positive Reduction is a Red Herring—Accuracy Is King in Application Security

Emily Patterson’s latest Medium article, “A Product Person’s Guide to the Application Security Market: SAST 2025,” offers a timely reality check on the state of SAST. She notes that the SAST market is “completely saturated,” and most vendors are fighting for business by touting low false positive rates. This industry-wide obsession with false positive reduction has become a red herring—a distraction from what actually matters in application security.

In practice, the real issue isn’t just how quiet your scanner is; it’s how accurate it is at finding the vulnerabilities that truly matter. 

A tool that proudly reports “zero false positives” but silently misses critical bugs is far more dangerous than one that occasionally cries wolf. 

If your SAST tool gives you a green light while serious flaws lurk in the code, those missed issues create a false sense of security. The cost of a missed vulnerability (a false negative) in production—think data breaches, account takeovers—vastly outweighs the annoyance of a handful of false alarms. In other words, focusing solely on reducing noise can blind us to the real goal: catching all the important vulnerabilities.

This is why accuracy (and thoroughness) should be the North Star metric for product and security leaders evaluating AppSec tools. It’s time to reframe the conversation. Instead of asking, “How do we eliminate false positives?” we should be asking, “Is my SAST engine truly understanding my code and catching the real risks?”

Modern applications are full of complex logic and context-specific gotchas that simple pattern-matching tools often overlook. A few examples: 

  • authentication bypasses hidden in logic flow 
  • sensitive data exposure through indirect pathways 
  • SSRF vulnerabilities requiring understanding of network calls
  • authorization checks missing in certain contexts 
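The last item on that list is the easiest to make concrete. Here is a minimal Python sketch (the data store and function names are hypothetical): the vulnerable version matches no known bad signature, so a pattern-based scanner sees nothing wrong, yet any authenticated user can read any other user’s document.

```python
# Hypothetical document store keyed by id; in a real app this would be a
# database table with an owner column.
DOCUMENTS = {
    1: {"owner": "alice", "body": "alice's notes"},
    2: {"owner": "bob", "body": "bob's notes"},
}

def get_document_vulnerable(current_user, doc_id):
    # Authenticates the caller but never checks ownership: a logic flaw
    # (IDOR) with no suspicious-looking code for a pattern matcher to flag.
    doc = DOCUMENTS.get(doc_id)
    return doc["body"] if doc else None

def get_document_fixed(current_user, doc_id):
    # Context-aware fix: the lookup is scoped to the caller's identity.
    doc = DOCUMENTS.get(doc_id)
    if doc is None or doc["owner"] != current_user:
        return None
    return doc["body"]
```

Nothing in the vulnerable function is textually “dangerous”; only an analysis that understands what the code is supposed to enforce can call it a bug.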

Traditional static analysis tools that rely on known patterns and rules (like Snyk Code, GitHub CodeQL, SonarQube, or Semgrep) tend to excel at finding the “usual suspects” (e.g. an obvious SQL injection or XSS) but struggle with vulnerabilities that don’t fit a known signature. As one of our recent tests showed, one popular scanner even passed a pull request with ZERO warnings despite it containing multiple critical flaws. This scenario would have developers celebrating a dangerously insecure build.

DryRun Security takes an approach that flips this paradigm by prioritizing contextual accuracy over simplistic pattern matching. Instead of relying on regexes or a limited set of rules, our AI-powered engine understands the context and intent of code changes. It performs contextual security analysis and understands how pieces of code interact—to detect issues that others miss.

In the words of Emily Patterson, DryRun Security's AI code reviews “feel like they could replace a traditional SAST tool easily.” 

We agree! Our own head-to-head comparisons back this up: DryRun consistently caught critical vulnerabilities that all the big-name tools missed. For example, DryRun identified that multiple error messages in a login flow amounted to a user enumeration weakness, and that using a certain API without proper checks led to an authorization bypass—nuanced flaws that static analyzers oblivious to context would never flag.
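The user enumeration weakness described above can be illustrated with a small, hypothetical Python login flow: the vulnerable version leaks whether an account exists through its error messages, while the fix returns one generic message for both failure modes.

```python
# Toy credential store for illustration only.
USERS = {"alice": "s3cret"}

def login_vulnerable(username, password):
    if username not in USERS:
        return "unknown user"      # leaks that the account does not exist
    if USERS[username] != password:
        return "wrong password"    # leaks that the account does exist
    return "ok"

def login_fixed(username, password):
    if username in USERS and USERS[username] == password:
        return "ok"
    # Both failure modes are indistinguishable to the caller.
    return "invalid credentials"
```

Each branch in the vulnerable version is individually unremarkable; the weakness only emerges from comparing the messages across branches, which is exactly the kind of context a signature-based rule never sees.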

We’ve documented these findings across several language ecosystems, putting DryRun Security up against traditional SAST scanners in real open-source applications:

Ruby on Rails 

In a RailsGoat test app, DryRun found every critical vulnerability (6/6) while others like Snyk, CodeQL, and SonarQube missed most (some caught as few as 0 of 6). Crucial issues like SSRF and logic-based auth bypasses were only caught by DryRun’s context-aware analysis, highlighting how purely pattern-based tools left dangerous blind spots.
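The SSRF class called out here can be sketched in Python (used for consistency across these examples; the allow-list and function name are hypothetical): a fetch-any-URL helper becomes SSRF-prone the moment it accepts internal or non-HTTP targets, so the defensive version validates the parsed URL before any request is made.

```python
from urllib.parse import urlparse

# Hypothetical allow-list of hosts the app legitimately talks to.
ALLOWED_HOSTS = {"api.example.com"}

def is_safe_url(url):
    # Reject anything that is not plain HTTP(S) to an approved host,
    # which blocks file://, cloud metadata endpoints, and internal IPs.
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and parsed.hostname in ALLOWED_HOSTS
```

Recognizing that a particular user-controlled string eventually becomes an outbound request target requires following the data through the app, which is why this class so often slips past pattern matching.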

Python/Django 

DryRun again caught 100% of the high-impact flaws in this Django project analysis, including subtle IDOR (access control) issues and weak crypto usage that others failed to identify. Two of the competing tools confidently reported “No issues found,” which would have been a disastrous false assurance. Pattern-driven scanners did okay on the obvious stuff, but they fell short on anything requiring deeper understanding of the code’s intent. DryRun Security flagged those business logic flaws and security oversights that went completely under the radar for the rest.
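A raw-database-query injection like the one in this test can be sketched with Python’s stdlib sqlite3 (the schema and data here are made up): string-built SQL is injectable, while the parameterized version keeps attacker input as data.

```python
import sqlite3

# In-memory toy database standing in for the app's real one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('admin', 1)")

def find_user_vulnerable(name):
    # Attacker-controlled input is concatenated straight into the SQL text.
    query = f"SELECT name FROM users WHERE name = '{name}'"
    return [row[0] for row in conn.execute(query)]

def find_user_fixed(name):
    # Placeholder binding keeps the input as a value, never as SQL.
    return [row[0] for row in conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,))]
```

A classic `' OR '1'='1` payload dumps every row from the vulnerable function and nothing from the fixed one.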

C# (.NET)

In a head-to-head on an ASP.NET Core vulnerable API, DryRun detected 6 out of 6 critical issues, whereas a popular enterprise SAST tool found 0. SQL injection, XSS, SSRF, authZ bypass, user enumeration, hardcoded credentials—you name it, DryRun Security caught it—while others either missed the majority or skipped them entirely. One legacy tool’s default config didn’t flag even the obvious SQL injection, essentially providing no help at all and a dangerous sense of safety. The takeaway was clear: contextual security analysis beats pattern matching, hands down. DryRun Security behaved like a seasoned security code reviewer versus the simplistic text/pattern searches of the other tools.
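Of the C# findings, the hardcoded-credentials class is the simplest to illustrate. A Python sketch (the token value and names are invented for this example) shows why a secret baked into source is a finding on its own, and what the environment-based alternative looks like.

```python
import os

# Hardcoded secret: visible in source and in version-control history.
API_TOKEN = "sk-live-123456"

def get_token_vulnerable():
    return API_TOKEN

def get_token_fixed():
    # Pull the secret from the environment and fail loudly if it is
    # not provisioned, rather than shipping it in the codebase.
    token = os.environ.get("API_TOKEN")
    if token is None:
        raise RuntimeError("API_TOKEN not set")
    return token
```

Rotating a leaked hardcoded token requires a code change and redeploy; an environment-sourced one can be rotated out-of-band.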

(For those interested in the nitty-gritty, you can read the full breakdowns on our blog for Ruby on Rails, Python/Django, and C#. Each tells a similar story of DryRun vs. Snyk, CodeQL, Semgrep, and SonarQube.)

The Bottom Line

Product and security leaders: don’t be distracted by false-positive mania. It’s a red herring. What counts is whether your application security tooling is actually catching the things that could get you breached.

As Emily Patterson wisely advises, if you’re shopping for SAST in 2025, look for solutions with advanced AI baked in.

Otherwise, you risk using a commodity tool that might look quiet but isn’t really doing the job.

DryRun Security’s results demonstrate that you can have both thorough coverage and low noise: by focusing on real, contextual vulnerabilities, we ensure critical issues aren’t missed while keeping developer trust (no one wants to wade through thousands of irrelevant findings). In essence, we strive to maximize true positives and meaningful findings, not just minimize false positives at all costs.

Oh, and one more thing… 📣 We’re not stopping at Ruby, Python, and C#. Tomorrow, we’re dropping our Java Spring edition of the SAST showdown, where DryRun goes head-to-head with Sonar, Snyk, Semgrep, and CodeQL on a Java Spring codebase. If you’re curious whether this accuracy gap persists in the Java world (hint: it does), stay tuned for that blog post. It’s going to be a continuation of this discussion, and we’re excited to share the results!

It’s time to change the narrative in AppSec: less about bragging rights for “low false positives,” more about confidently catching the vulnerabilities that matter most.

If you’re ready to join the Contextual Security Analysis revolution and find real risks that your traditional SAST can’t, schedule some time with us and we’d be happy to walk through a demo with you. If you’d like to learn more about Contextual Security Analysis, download “A Guide On Contextual Security Analysis.”

And follow us on LinkedIn or X for the latest updates.