“Quality software is secure software. So if you’re a craftsman and you do care, you do want your code to be secure.”
I recently sat down with Cole Cornford, host of "Secured," a podcast for software security enthusiasts, to talk about my experiences in the world of AppSec and to share what insights I can into the evolving challenges and solutions in application security. In my role as the CTO of DryRun Security, I am in a very fortunate position to spend my entire day thinking about the challenges we face both now and in the future.
One of the items we discussed was the increasing need to have a technology-agnostic methodology for assessing risk in applications due to the exponential growth in technology options available to developers in recent years.
I explained that a consultant or practitioner might be working on a Go application one week, a Node.js app the next, and a Spring application after that. In fact, it’s possible to see multiple technology stacks in a single assessment, given dependencies on other web services. The consultant or practitioner often has incredibly limited time to perform a security assessment, and during that window it’s not uncommon for them to spend part of it learning a new framework. This is an important and necessary step because these frameworks and tech stacks carry a lot of nuances.
My co-host of the Absolute AppSec podcast, Seth Law, and I developed an agnostic approach to assessing application security risks for this very reason. Our experiences as practitioners and consultants helped us hone and refine this methodology, and we’ve since trained hundreds of developers, practitioners, managers, pentesters, and consultants, with many success stories. One major contributing factor to the success of this approach is contextual analysis.
Essentially, contextual analysis is the process of getting to know the application through various bits of information that, when compiled, equate to a “big picture” view of the application’s risk posture. That view then shows you which bits to prioritize when performing reviews and gives the best guidance for securing the app. To find this context and paint this portrait, you study different data points to understand its composition and purpose. It’s a repeatable system that can help you perform a comprehensive and timely code review.
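As a rough illustration, the idea of compiling data points into a prioritized “big picture” can be sketched in a few lines of Python. Everything here (the `ContextSignal` class, the weights, the area names) is hypothetical and illustrative only, not part of the actual methodology:

```python
# Hypothetical sketch of contextual analysis: compile simple data points
# about an application into a prioritized review list. The class, function,
# and area names below are illustrative, not a real implementation.
from dataclasses import dataclass


@dataclass
class ContextSignal:
    area: str    # part of the app the signal describes
    weight: int  # how strongly it raises review priority


def prioritize_review(signals):
    """Sum signal weights per area and return areas, riskiest first."""
    totals = {}
    for s in signals:
        totals[s.area] = totals.get(s.area, 0) + s.weight
    return sorted(totals, key=totals.get, reverse=True)


# Example data points gathered while "getting to know" the application.
signals = [
    ContextSignal("auth", 5),         # custom session handling found
    ContextSignal("payments", 4),     # handles cardholder data
    ContextSignal("auth", 3),         # recent churn in login code
    ContextSignal("static-pages", 1), # low-risk marketing pages
]

print(prioritize_review(signals))  # riskiest areas first
```

The point is not the scoring scheme (which is made up) but the shape of the process: many small observations, aggregated, tell you where to spend your limited review time.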
The conventional approach of focusing solely on the OWASP Top 10 vulnerabilities isn’t always effective in addressing real-world challenges. So we emphasize understanding application context, composition, and purpose rather than fixating on a predefined checklist of vulnerabilities. What we look for should change based on the factors that make up “context.” This approach enables practitioners to comprehensively assess an application’s security posture across diverse tech stacks, and to do so in a timely manner.
The Challenges of Application Security Education
During my conversation with Cole, I expressed my belief that, by nature, security tooling is always behind the curve. Security engineers follow the trends that software developers set: developers are the ones who choose the latest standards, adopt the newest protocols, and deploy the newest technology options. Naturally and inherently, security folks are always going to be behind, always adapting.
Another point I made is that training developers to do anything outside of what needs to be done for a particular sprint or release is difficult in general. That might sound harsh, but it isn’t meant as criticism of developers. On the contrary, I believe this is simply the reality of the time constraints and business demands placed on them.
Even with those time constraints and business demands, it has been my experience that roughly 8–10% of developers are REALLY into security training, retain value from it, and tend to transition into security champion roles. Security champions are necessary because the ratio of security engineers to developers is heavily lopsided, and security people must offload some responsibilities because the volume is simply impossible to handle alone. It’s a scale issue. Even the best-funded AppSec programs are incredibly small in proportion to their developer counterparts.
What typically happens next is that the security champion is given some kind of security tooling, configured either by the security team or, in rare cases, by the trained developer themselves, so that they receive warnings they then have to research and/or ask the security team for help with. I believe this approach makes a lot of sense logically.
At the same time, I think we as an industry have learned that this approach poses a number of challenges in practice. While there have certainly been success stories with security champion programs, in my experience they largely miss the mark for a number of reasons, including (but not limited to):
- Asking developers to perform an already difficult and niche skill set like application security as what amounts to a “side job.”
- Expecting application security tooling built for security experts to be useful and usable by developers (a large ask).
- Developers juggle many competing priorities for their time and attention, which makes it extremely difficult to fit security in. (Once again, in my experience the developers who care about the security of their applications, the security champions, are also the most in-demand developers.)
The Future with AI
Cole agreed that the current approach is not working as intended and posited that developers need things that are front-of-mind and relevant to what they’re doing at that point in time. Both Cole and I discussed our mutual excitement about the potential that AI has to reshape AppSec education and practices.
Another item I wanted to highlight: I mentioned a number of open-source options for quickly getting started with AI to optimize your workflows. LLMs are incredibly powerful when you harness the fine-tuning and customization options now available, and every day brings massive leaps in AI technology. My goal was, and is, to get folks started with something that demonstrates this.
Sure, it might be considered a quick and dirty way to start experimenting with the power of AI, but that seems like a net positive. I believe you should do this because LLMs will be on the list of technology options we have to assess for security risk. Getting a handle on them early is important, and it may well optimize your workflows. You may even discover a new uber-cool way to use LLMs for AppSec that nobody has thought of yet!
As we move forward, I am very excited to see how AI continues to transform the landscape of application security and how we can harness its power to build more secure software.
What’s next?
Here at DryRun Security, we want to be part of that transformation. We use the first-to-market Contextual Security Analysis (CSA) approach. CSA uses all the pieces of context gathered as developers are writing code (e.g. codepath, functions, author, language) to make contextually aware assertions in near real-time. We couple CSA with AI to short-circuit the long feedback loops common in security and reduce the noise that legacy scanners generate. We believe this is the future of the Application Security industry.
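To make the idea concrete, here is a minimal, hypothetical sketch of what a contextually aware assertion over a code change might look like. The input fields mirror the context pieces mentioned above (codepath, functions, author, language), but the rule names and logic are illustrative only, not DryRun Security’s actual implementation:

```python
# Hypothetical sketch: evaluate simple context rules against a code change.
# The rules and field names are illustrative, not a real product's logic.
def assess_change(change):
    """Return human-readable flags raised by simple context rules."""
    flags = []
    # Rule: changes to auth code by someone outside the owning team
    # deserve a closer look.
    if "auth" in change["codepath"] and change["author"] not in change["codeowners"]:
        flags.append("auth code modified by a non-owner")
    # Rule: touching deserialization logic is a common risk signal.
    if any(name.startswith("deserialize") for name in change["functions"]):
        flags.append("deserialization logic touched")
    return flags


# Example context gathered from a single (made-up) pull request.
change = {
    "codepath": "app/auth/session.py",
    "functions": ["deserialize_token", "refresh_session"],
    "author": "newdev",
    "language": "python",
    "codeowners": ["alice", "bob"],
}

print(assess_change(change))
```

The value of this style of check is that it fires on the combination of signals (who changed what, and where) rather than on a pattern match alone, which is what keeps the feedback relevant and the noise low.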
If you found this interesting and want to learn more about how context can help you improve application security, visit https://dryrun.security/csa and download “A Guide on Contextual Security Analysis.”
We’re also opening up new spots on our private beta so you can try DryRun Security for yourself. Sign up here to get added to the beta list.