AI in Security
If you're anything like me, trying to keep up with the latest trends in technology, you've probably noticed the buzz around AI's role in security. It's being touted as the Swiss Army knife for prevention, detection, remediation - frankly, for just about everything in the cybersecurity realm.
Security vendors have been marketing machine learning and AI for a long time, but this recent surge in interest suggests we're at the peak of a significant transformation. Every security leader is looking to incorporate xGPT-like capabilities into their arsenal. Yet even with the impressive capabilities of Large Language Models (LLMs), a pressing question emerges: are we truly seeing a return on investment from these expensive tools? Let's circle back to this question later.
Rewind a few years, and cybersecurity seemed a lot simpler. Our biggest concern was computer viruses, which, while annoying, could generally be handled with a good antivirus solution and a bit of common sense. Fast forward to today, and the landscape has drastically changed. Cyber threats and breaches occur at a pace most of us struggle to keep up with. They're more targeted, sophisticated, and even the giants of the tech world, with their enormous security budgets, aren't immune.
Integrating AI into our security strategies has shifted from a luxury to an absolute necessity. In this blog, I aim to shed light on how AI can enhance existing security tools, particularly in application security, though its potential applications are vast.
AI in Software Security
For those of us who've been in the software development trenches, it's clear that security is not just another item on the checklist. It's a fundamental aspect that must be integrated into every stage of the Software Development Life Cycle (SDLC) – a sentiment any ambitious security professional would echo.
Traditionally, security was an afterthought, addressed only in the final stages of the SDLC. We've all been there: racing against deadlines only to discover a host of security issues that need fixing, delaying the launch and causing more than a few sleepless nights.
The concept of "shifting left" - integrating security early and often throughout the SDLC - has gained traction. Though the idea originated in software testing, it aligns with the Secure-by-Design principle and has proven incredibly beneficial. It's no wonder many government agencies are advocating for this approach.
Let's talk through a few use cases.
1. Elevating Code Analysis
The rise of code-generation models capable of writing code in multiple languages brings concerns about the security of that code to the forefront, especially around third-party libraries that may no longer be maintained. When this code is scanned, SAST/SCA tools often flag numerous vulnerabilities and traditionally leave developers with little more than the raw scan details to act upon. For package vulnerabilities, the lack of specific information about which library functions are actually called, or how to address CVEs in transitive dependencies, often leads to whitelisting as a quick fix and backlogging the ticket for later, though tools like Semgrep have shown improvement in this area.
With AI's help, this scenario is changing. We can now pinpoint the specific API calls a codebase makes into a library, enabling more informed decisions about whether a fix is necessary or whether whitelisting is sufficient. This empowers developers, not just security teams, to make those decisions, marking a significant shift in how vulnerabilities are addressed. Some companies have been promising AI-integrated SAST for a couple of years, but only recently have we begun to see that potential realized. Nearly every code scanning company has now integrated AI into its workflows, not just for detection but for actually fixing identified vulnerabilities. Although scan times have initially increased, they are expected to decrease as the technology matures.
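To make the reachability idea concrete, here's a minimal sketch of the kind of check an AI-assisted (or even plain static-analysis) workflow could run before deciding to whitelist a flagged dependency. The package and function names are hypothetical, invented purely for illustration.

```python
# Minimal reachability check: does our code actually call the flagged API?
# The package and function names below are hypothetical, not a real advisory.
import ast
import pathlib

VULNERABLE_PACKAGE = "legacy_xml"            # hypothetical flagged dependency
VULNERABLE_FUNCTIONS = {"parse_untrusted"}   # hypothetical vulnerable entry point


def find_vulnerable_calls(project_root: str) -> list[str]:
    """Walk every .py file and report call sites that touch the flagged API."""
    hits = []
    for path in pathlib.Path(project_root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
        except (SyntaxError, UnicodeDecodeError):
            continue  # skip files we can't parse
        for node in ast.walk(tree):
            if not isinstance(node, ast.Call):
                continue
            func = node.func
            # Matches calls of the form legacy_xml.parse_untrusted(...)
            if (
                isinstance(func, ast.Attribute)
                and func.attr in VULNERABLE_FUNCTIONS
                and isinstance(func.value, ast.Name)
                and func.value.id == VULNERABLE_PACKAGE
            ):
                hits.append(f"{path}:{node.lineno}")
    return hits


if __name__ == "__main__":
    calls = find_vulnerable_calls(".")
    if calls:
        print("Flagged API is reachable from our code:")
        print("\n".join(calls))
    else:
        print("No direct calls found; whitelisting may be defensible.")
```

An AI-assisted scanner obviously goes much further than this toy example - resolving aliased imports, transitive dependencies, and dynamic dispatch - but the decision it supports is the same: fix what is reachable, defer what is not.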
2. Breathing Life into DAST
Some security practitioners argue that DAST has become ineffective - that it's mostly dead. Does that mean DAST findings no longer matter? The notion is that the manual effort and configuration time DAST requires slow down the modern software shipping process, leaving the tooling underutilized. However, this doesn't mean DAST is obsolete. It remains a staple in manual penetration testing, with no better alternative for identifying certain vulnerabilities like Remote Code Execution (RCE). The growth of AI in this space could be a game-changer, with AI-powered DAST tools capable of generating dynamic test cases based on new threats. The ability to prioritize vulnerabilities based on DAST findings could revolutionize how we address critical security issues.
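As a rough illustration of the "dynamic test case" idea, here's a small sketch that asks an LLM to propose probing payloads for a single endpoint. The endpoint description is made up, and the OpenAI client is just one possible backend; a real AI-powered DAST tool would replay the generated requests against a staging environment and observe the responses.

```python
# Sketch: asking an LLM to draft DAST payloads for one endpoint.
# The endpoint below is invented; any LLM backend could stand in for this one.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ENDPOINT_DESCRIPTION = """
POST /api/v1/orders
Accepts JSON: {"item_id": int, "quantity": int, "coupon": str}
Returns 201 on success.
"""

prompt = (
    "You are a security tester. Given this HTTP endpoint description, "
    "propose five request bodies that probe for injection, type confusion, "
    "and business-logic abuse. Return them as a JSON list.\n\n"
    + ENDPOINT_DESCRIPTION
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

# A real harness would parse and replay these against a test deployment;
# printing the suggestions is enough for this sketch.
print(response.choices[0].message.content)
```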
3. Threat Modeling
Threat modeling, a personal favorite activity in the security domain, is akin to playing detective with our planned applications. However, the process is often unstructured and time-consuming, relying on interviews with product teams, architects, DevOps, and the OGs to gather tribal knowledge. This manual and complex process, while beneficial, struggles to keep pace with product development, limiting its application to only a few projects.
AI could significantly aid this process, extracting information from design documents, mapping out data flows, and identifying potential design flaws. It could also align architecture and requirements with policy or compliance needs, aiding developers in building more resilient systems.
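To make that less abstract, here's a minimal sketch of the kind of structured output an AI assistant could extract from a design document, followed by a simple STRIDE-per-element pass over each data flow. The system components, flows, and trust boundaries are invented for illustration.

```python
# Sketch: structured data flows (as an AI assistant might extract them from a
# design doc) run through a STRIDE-per-element lookup. All names are invented.
from dataclasses import dataclass

STRIDE_BY_ELEMENT = {
    "process": ["Spoofing", "Tampering", "Repudiation",
                "Information Disclosure", "Denial of Service",
                "Elevation of Privilege"],
    "data_store": ["Tampering", "Information Disclosure", "Denial of Service"],
    "data_flow": ["Tampering", "Information Disclosure", "Denial of Service"],
}


@dataclass
class DataFlow:
    source: str
    destination: str
    element_type: str
    crosses_trust_boundary: bool


# Imagine an LLM produced these flows from the design document.
flows = [
    DataFlow("browser", "api-gateway", "data_flow", True),
    DataFlow("api-gateway", "orders-service", "data_flow", False),
    DataFlow("orders-service", "orders-db", "data_store", False),
]

for flow in flows:
    threats = STRIDE_BY_ELEMENT[flow.element_type]
    priority = "review first" if flow.crosses_trust_boundary else "standard"
    print(f"{flow.source} -> {flow.destination} ({priority}): {', '.join(threats)}")
```

The value isn't in the lookup table, which is trivial; it's in having the flows extracted and kept current automatically instead of reconstructed through rounds of interviews.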
4. Vulnerability Prioritization
Every engineer has faced the dilemma of prioritizing vulnerabilities based on severity. Traditional tools often leave us relying on generic severity ratings, leading to a backlog of high-priority tickets. AI can change this by adding context and assessing the probability of exploitation, giving security teams and developers more confidence in which vulnerability to pick first, and freeing security teams to focus on more strategic tasks rather than getting bogged down in ticket triage.
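Here's a toy sketch of what that contextual prioritization could look like: blending a finding's severity with an exploit-likelihood signal (think EPSS-style probabilities) and how exposed the affected asset is. The weights, CVE IDs, and numbers are all made up for illustration, not a recommended formula.

```python
# Sketch: ranking findings by blending severity, exploit likelihood, and exposure.
# The weights and the example findings are illustrative only.
findings = [
    {"id": "CVE-AAAA-0001", "cvss": 9.8, "exploit_prob": 0.02, "internet_facing": False},
    {"id": "CVE-AAAA-0002", "cvss": 7.5, "exploit_prob": 0.85, "internet_facing": True},
    {"id": "CVE-AAAA-0003", "cvss": 5.3, "exploit_prob": 0.40, "internet_facing": True},
]


def priority_score(finding: dict) -> float:
    # Normalize CVSS to 0-1, then weight exploitability and exposure more
    # heavily than raw severity, since that's where generic ratings fall short.
    severity = finding["cvss"] / 10.0
    exposure = 1.0 if finding["internet_facing"] else 0.3
    return 0.3 * severity + 0.5 * finding["exploit_prob"] + 0.2 * exposure


for finding in sorted(findings, key=priority_score, reverse=True):
    print(f"{finding['id']}: score={priority_score(finding):.2f}")
```

With this kind of ranking, the lower-severity but actively exploited, internet-facing finding floats above the critical-but-unreachable one, which is exactly the context generic ratings miss.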
Sounds Interesting, But...
While AI presents numerous opportunities for enhancing security practices, it's not without its challenges:
1. Integration and Operational Challenges
Integrating AI into our existing security setups can feel like trying to solve a puzzle box that keeps changing its design. We've all got a mix of tools and systems we've been wiring together over the years. Throwing AI into that mix adds another layer of complexity to an already intricate problem. And let's not forget the skills needed to build these tools or interpret AI-generated insights; that opens up a whole new gap within teams, and the learning curve is steep. But hey, who doesn't love a good challenge?
2. False Positives and False Negatives
It's the classic security tooling problem: too many alerts, and we start whitelisting or tuning them out. Do enough of that, and false negatives slip through unnoticed. Striking the right balance will remain more art than science. Tuning AI to be neither too paranoid nor too lax will still be a problem, because it will still be judged by security teams on the quantity and quality of its alerts. A manual review remains necessary to validate the findings.
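A quick, hedged illustration of that trade-off: as you raise the confidence threshold for surfacing AI-generated findings, false positives drop but false negatives creep in. The scores and labels below are invented purely to show the shape of the problem.

```python
# Sketch: how moving the alert-confidence threshold trades false positives
# against false negatives. Scores and ground-truth labels are invented.
scores = [0.95, 0.90, 0.82, 0.75, 0.60, 0.55, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    0,    1,    0,    1,    0,    0,    1,    0]  # 1 = real issue

for threshold in (0.3, 0.5, 0.7):
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    print(f"threshold={threshold}: precision={precision:.2f}, recall={recall:.2f}")
```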
3. LLMs themselves
There are too many of them out there now. There's a saying: "LLMs are only as good as their training data." The effectiveness of these models really depends on the quality of the data they've been trained on. If the training data isn't diverse, rich, or accurate, the models won't perform as expected, leading to less reliable outcomes. So, as we explore the growing landscape of LLMs, it's crucial to pay attention to the quality of their training data and weigh accuracy, not just novelty.
Conclusion
Overall, I believe AI is a powerful tool that can be used to improve product security. Using it responsibly and staying aware of the risks and challenges involved is just as critical. Still, the benefits of using AI outweigh the risks (unpopular opinion).
Are we truly seeing a return on investment from these expensive tools? Yes.
Will this replace humans? Nah. It is not going to replace a security expert. Why, you ask? Here's why:
The potential for AI to transform cybersecurity is immense. From automating routine tasks to detecting threats with speed and accuracy, AI can significantly help you and give you a few extra powers. But the key to success will be finding the right balance between leveraging AI's capabilities and maintaining the human oversight necessary to ensure security decisions are made wisely (and ethically). That oversight is exactly where the security expert stays in the loop.