Are we too focused on shift-left? What about the right?
A Comprehensive Approach in the Shift-Left Era
As someone who's been in security for a while (sometimes it feels like forever), I've seen my fair share of trends come and go. While "shift-left" is no longer the latest buzzword, it has certainly become a fixture on every security team. The idea of "shifting security left" is to get developers involved earlier, so security is built into the development process from the start.
And don't get me wrong, shift-left is a promising approach with some real benefits if done right. Getting developers security-aware from day one can help squash bugs and vulnerabilities before they're ever committed. Who wouldn't want to catch a glitch before the whole system melts down? I'm a big-time promoter of starting security early. It's like building a house with a strong foundation.
But here's what I've noticed lately - a lot of security teams seem to have latched onto shift-left as the silver bullet, without laying enough of the groundwork on the other side. Shift-left is indeed a critical piece of robust security, BUT it's just one piece. And that's what I want to talk about today.
You see, while shifting left is all well and good, we can't forget the importance of - dare I say it - security on the "right" side too. All the focus on developers and integrating security early on is meaningless if we don't have our backend processes locked down just as tight. It's like baking a cake - you can put in all the love and care mixing the batter, but if the oven's still off, the cake's not gonna bake (haha).
What do I mean by getting our act together on the backend?
A few things:
Testing, testing, testing
No matter how clean the code coming out of dev is, we've gotta have a rigorous testing regimen once it hits production. As we work on shifting left to catch issues earlier, don't forget the value of pounding on our backends with pentests too. APIs are a prime target these days with everything moving to the cloud, so spending some time red-teaming wouldn't be a bad idea. It's tricky work but so insightful - finding flaws before the real attackers do. Plus it's kinda fun seeing who can unravel things the fastest, right?
Automating a lot of these test cases is good, but automation is only as good as its implementation. Just running a scan and assuming you're all clear is a recipe for headaches down the line. Automation can catch lots of common stuff like missing patches, but a human brain is still required to think outside the box. How do you know your forgot-password workflow is solid enough to prevent the hijacking of another user's account? If you can pull that off yourself, you know there's still work to do. Of course, this isn't all about just breaking things. Writing thorough reports showing developers exactly how we did it helps them write even more hardened code in the future, and it's far more effective than traditional security training.
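To make that forgot-password question concrete, here's a minimal sketch of the kind of check a human tester (or a targeted automated test) would run. The class and method names are hypothetical - the point is simply that a reset token must be bound to one account and usable only once, so a stolen token can't hijack someone else's account.

```python
import secrets

# Hypothetical in-memory password-reset flow, for illustration only.
class ResetService:
    def __init__(self):
        self._tokens = {}  # token -> username it was issued for

    def request_reset(self, username: str) -> str:
        token = secrets.token_urlsafe(16)
        self._tokens[token] = username
        return token

    def reset_password(self, username: str, token: str, new_password: str) -> bool:
        # The token must exist AND be bound to the account being reset.
        if self._tokens.get(token) != username:
            return False
        del self._tokens[token]  # single use: burn the token on success
        return True

svc = ResetService()
alice_token = svc.request_reset("alice")

# An attacker who obtains a token must not be able to replay it
# against a different account:
assert svc.reset_password("bob", alice_token, "pwned") is False
# The legitimate owner can use it exactly once:
assert svc.reset_password("alice", alice_token, "n3w-pass") is True
assert svc.reset_password("alice", alice_token, "again") is False
```

If your real reset endpoint fails either of those last two checks, a scanner probably won't tell you - but an attacker will find out.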
Are we doing well on logging?
In the shift-left approach, the primary focus is on preventing security issues before they occur, which is undoubtedly important. However, this proactive stance can lead teams to underestimate reactive but necessary measures like logging. Logging is a whole other side that sometimes feels overlooked or forgotten completely. This can lead to only basic events being recorded, overlooking what the new products being built might actually require. For example, a team might focus on ensuring their code is secure and not realize the need to log user actions across different APIs throughout the user's journey in the app, missing out on key data that could be vital for diagnosing a breach.
Now, turning on logging for everything is going to cost a lot; what's needed is "meaningful logging". Identify your key metrics, and figure out what data you need to tell what's normal from what's a potential red flag.
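As a sketch of what "meaningful logging" can look like in practice: instead of dumping everything, emit structured events for a small set of security-relevant actions, so an investigation can later reconstruct who did what and from where. The event names and fields below are illustrative, not any standard schema.

```python
import json
import logging

logger = logging.getLogger("security")
logger.setLevel(logging.INFO)

def log_event(event: str, **fields) -> str:
    """Emit one structured, machine-parseable security event."""
    record = {"event": event, **fields}
    line = json.dumps(record, sort_keys=True)
    logger.info(line)
    return line

# Log the handful of events you'd need to diagnose a breach,
# with enough context (user, source IP, target) to correlate them:
line = log_event("password_reset_requested",
                 user="alice", source_ip="203.0.113.7")
assert "password_reset_requested" in line
```

Structured JSON lines like these are cheap to ship to a SIEM and easy to query, which is exactly what you want when you're deciding what's normal versus what's a red flag.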
How often do we use our panic button?
Now we are proactive, have compliance checked, and everything's going well - but let's talk real: What do you do when a vulnerability is exploited? How prepared are you for the unexpected events post-launch? You can gauge that by how often you hit the panic button. If it's more than a rare occurrence, it's a clear sign you need to strengthen your responses. That requires a well-documented, robust incident response plan and training for the team to execute it. Without that, we're all too likely to overreact to incidents that should have been manageable. Imagine being on a hike: you have all the fancy gear, but no one has the AllTrails app (or a map) downloaded, and now there's no internet. That might make for a good adventure on a hike, but it wouldn't fly when we're talking about safeguarding our applications. A well-prepared plan helps you handle things with calm and efficiency instead of chaos.
Striking the Right Balance
While thinking about "shift-left" is a great idea, we've got to keep an eye on the boring but necessary things too. So, how do we keep it well-rounded? With a balanced approach:
(These are only a few ideas)
Think Full Circle:
Security is not just a one-off task. It goes round and round, from the moment we dream up new software until it's out there doing its thing, while we keep updating the defense strategy along the way.
Emphasize Continuous Monitoring and Response:
While the shift-left approach embeds security early in the development process, continuous monitoring and response ensure that any issues not caught in the earlier stages are identified and responded to as soon as they appear, whether in later stages or post-deployment. Even with the most thorough security practices in development, identifying anomalies is still important. Continuous monitoring acts as a safety net, catching anomalies and threats that slip through initial defenses.
Incorporate Risk Management:
Integrating risk management with the shift-left approach ensures that security efforts are aligned with the organization's most critical assets and threat vectors. By understanding what is most at risk and where the most significant threats lie, organizations can tailor their early security efforts to be most effective. For instance, if an application handles sensitive financial data, the shift-left efforts would prioritize encryption, access controls, and data integrity checks from the outset. Risk management provides the strategic context for these decisions and helps teams to understand what to work on and when.
Have a Game Plan:
Incidents can and will occur, even with an arsenal of tools, even with perfect DevSecOps practice. Having a plan for how to act when things go south, and how to communicate the issue to other teams to prevent it from happening again, is essential - and a sign of a mature security team. The plan should integrate well with the shift-left approach: a feedback loop that communicates incidents back to the dev teams so the lessons become best practices moving forward, for a more resilient security posture.
So there you have it - the right balance. In summary: by all means, keep shifting left! But as you do, make sure to hold down the fort on the other side too. A balanced approach that covers both left and right is what really closes all the doors to the bad guys.
Keep things simple, and secure.