Security Is Just Engineering Tech Debt (And That's a Good Thing)
Breaking the Illusion That Security Is Anything But Software Quality
Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or any other entities with which I am affiliated.
After many years working across multiple domains of security, I’ve come to a realization that tends to rattle some of my peers: security isn’t special. It isn’t magic. It isn’t a separate, mystical discipline that lives in its own sacred space, governed by arcane checklists and mysterious risk scores. It’s just engineering work. Sometimes that work gets done properly. Most of the time, it doesn’t. And when it doesn’t, it piles up like any other form of technical debt—waiting to explode in your face when you can least afford it.
I know that sounds reductive to some, but it’s actually freeing. Because if security is just engineering work, the solution isn’t complicated. You don’t need a WIZard in a hoodie. You don’t need another vendor dashboard. You just need to treat it like you treat every other part of your engineering backlog: something you identify, prioritize, and fix like any other defect or unfinished feature.
The Great Security Misconception
For too long, we've treated security as this mysterious, specialized domain that somehow exists apart from "normal" engineering. We've built entire organizational structures around this misconception - security teams vs. engineering teams, security backlogs vs. engineering backlogs, security reviews vs. code reviews.
But here's the uncomfortable truth: this separation is artificial and counterproductive. It's downright harmful.
When I look at what we call "security work," I see engineering work that simply hasn't been done yet. That vulnerability scan finding? It's just an engineering defect. That missing access control? It's just an incomplete feature. That outdated library? It's just neglected maintenance. That "critical P1 security ticket" is just a bug with marketing.
And let's talk about those ridiculous "risk scores" we slap on security tickets. CVSS 9.8! Critical severity! But why don't we have similar dramatic scoring for reliability issues? When a service is flaky and drops 5% of customer transactions, we don't assign it a "RELIABILITY SCORE: CATASTROPHIC" label. We just fix it because it's broken. Security defects deserve the same treatment - they're just bugs that need fixing, not special creatures requiring exotic handling.
Why Security Became a Silo
So how did we end up here? The history isn’t pretty. Part of it is ego. Security people wanted to feel special. Yes, I said it. We built mystique around our profession because it felt good to be the specialized experts with arcane knowledge. We cultivated an image of "defenders of the realm", “stewards of knowledge” too complex for engineers to grasp.

Compliance made it worse and created artificial demand for separation. Regulations like SOX, PCI-DSS, and HIPAA emphasized "separation of duties" and specialized security roles. Companies responded by creating isolated security teams to check compliance boxes rather than integrating security into engineering processes.

Vendors were only too happy to reinforce this model. There's an entire industry built around security tools, consulting, and services that thrives on security's perceived specialness. If security became "just engineering," billions in specialized security spending might be redirected to general engineering quality. Over time, we institutionalized a skills gap—security people who couldn’t code, engineers who didn’t understand security—and built processes to keep those gaps safely siloed.
And when those silos inevitably started slowing things down, we did what most organizations do—we created another band-aid: security champions programs. Let me be clear—I’ve built them. I’ve run them. But the very fact that we need them is a symptom of the problem, not a solution. We created security champions because security wasn’t already seen as part of engineering’s day-to-day job. We had to appoint special people to “advocate” for security, because otherwise, it simply wouldn’t get prioritized.
But think about this for a second—have you ever seen a Reliability Champion? Have you ever seen someone appointed to run around reminding teams to care about latency or observability? Of course not. Those things are already baked into how we build systems. Every engineer understands that uptime, failure handling, and monitoring are part of building production-grade software. Nobody needs a special title to advocate for that.
And yet, somehow, security still needs to wave its own flag. We don’t organize SRE Awareness Weeks, but somehow security gets its own “Security Awareness Month”—as if remembering not to ship insecure software is something to celebrate once a year, instead of something we should expect every single day.
The fact that security still needs “champions” to remind teams to validate inputs or protect customer data doesn’t signal maturity—it signals just how far we still have to go. It’s not a win. It’s a band-aid on a culture that still treats security as extra work, not a fundamental part of the software.
The result is exactly what you’d expect: security lives in its own backlog, with its own language, its own tools, and its own set of people constantly begging for attention. And because it lives apart from the daily flow of engineering work, it almost always loses that fight for attention.
The Greatest Hits of Bad Engineering Rationalizations
Teams don’t ignore security because they don’t care. They ignore it because they convince themselves it’s safe to do so. You’ve heard the excuses. You might have even made them.
There’s the classic, “It’s behind authentication, so we’re safe,” as if anyone with an account isn’t a potential attacker. Or the crowd favorite, “Our cloud provider handles that,” when in reality, you still own the application, the data, and every insecure config you’ve deployed. Some teams love to say, “We’re too small to be a target,” which makes them even more attractive to attackers looking for an easy payday. And of course, the all-time number one hit: “We’ll deal with it after launch.”
You know you’ve got a security debt problem when your engineers have to go look at a separate security-team JIRA board that barely ever gets opened. You know you’ve got a problem when this year’s pentest report contains the same findings as last year’s. You really know you’ve got a problem when your security incidents go through an entirely different process, with entirely different playbooks and entirely different incident reporting templates than your engineering incidents, staffed by teams who don’t even speak the same language or use the same tools.
Security Work Is Just Tomorrow's Engineering Backlog
Let me share a story. Imagine a company with a quarterly "security review" where the security team would descend upon engineering teams like auditors from the IRS. They would find the same issues, again and again:
Unvalidated inputs (JIRA tickets tagged as SEC-1234)
Missing access controls (always marked "CRITICAL" in red)
Unencrypted data stores (assigned a CVSS score of 8.9)
Hardcoded credentials (escalated to the CIO because "COMPLIANCE RISK!")
Each time, these would go into the "security backlog" - a special purgatory where issues were ranked by mysterious "risk scores" and prioritized alongside feature work. They would have special "sprint planning" sessions where the security team would beg for engineer time, pleading with statements like "this is a P0 finding from our pen test!"
Meanwhile, when the site went down because of poor error handling or bad cache invalidation, nobody needed a special "reliability score" to prioritize the fix. It just got done because it was obviously broken.
Preventing Tomorrow's Engineering Debt Today
Consider threat modeling - a practice security folks have been evangelizing forever. What is threat modeling, really? It's preventative engineering.
When we threat model a new feature, we shouldn't treat it as security work. It's engineering design work that considers adversarial use cases alongside legitimate ones, thinking about how the system might fail when used in ways its creators didn't intend.
This isn't fundamentally different from considering how a system might fail under load, or when a dependency goes down, or when a user makes a mistake. It's just good engineering that accounts for a broader set of scenarios.
Let me give you a concrete example. A team I worked with was building a new file-sharing feature. During threat modeling, we identified that files might contain malware, users might upload extremely large files that could cause storage issues, and the system needed controls to prevent unauthorized access to shared files.
Traditional thinking would label these as "security concerns" (to be addressed later). The security team would create SEC-4567 (CRITICAL: Malware Upload), SEC-4568 (HIGH: Storage DoS), and SEC-4569 (CRITICAL: Horizontal Privilege Escalation). These tickets would go into the kanban board and then languish until the next security review where the team would get berated for not addressing them.
But strip away the security theater, and what do you have? Just engineering requirements that need to be built:
File type restrictions are a feature
File size limits are a feature
Proper access controls are a feature
These should have been ENG-1234, ENG-1235, and ENG-1236 from the start - just regular engineering tasks in the sprint backlog.
By identifying these needs early through threat modeling, we prevented future security-specific debt. The team built these controls as part of the initial implementation, rather than retrofitting them later at 10x the cost. But most importantly, we stopped treating them as exotic "security features" and just made them ordinary engineering tickets.
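To make that concrete, here's a minimal sketch of what those three tickets look like when they're just ordinary application code written during the initial implementation. The names, extensions, and limits below are illustrative assumptions, not the actual team's implementation:

```python
import os

# Hypothetical values standing in for the real product requirements.
ALLOWED_EXTENSIONS = {".pdf", ".png", ".jpg", ".docx"}   # ENG-1234: file type restrictions
MAX_FILE_SIZE_BYTES = 25 * 1024 * 1024                   # ENG-1235: file size limit

def validate_upload(filename: str, size_bytes: int) -> None:
    """Reject uploads that don't meet the feature's requirements."""
    _, ext = os.path.splitext(filename.lower())
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(f"File type {ext!r} is not allowed")
    if size_bytes > MAX_FILE_SIZE_BYTES:
        raise ValueError("File exceeds the maximum allowed size")

def can_access_file(requesting_user: str, owner: str, shared_with: set[str]) -> bool:
    """ENG-1236: access control is just another business rule."""
    return requesting_user == owner or requesting_user in shared_with
```

Nothing here requires a security specialist. It's the same kind of validation and authorization logic engineers write every day; the only difference is that it got written up front instead of after a pentest.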
Detection Rules Are Just Instrumentation
Another example is detection rules. In many organizations, security teams write detection rules that live in a SIEM or other monitoring system, separate from application telemetry and monitoring. These detections get funneled into a completely different workflow system, staffed by different teams, and tracked with different metrics.
But what is a security detection rule, really? It's just instrumentation of business processes and application behavior.
For instance, to detect potentially fraudulent transactions, the security team writes correlation rules in Splunk to identify suspicious patterns like:
Multiple failed login attempts followed by a successful login and immediate funds transfer
Transactions occurring from unusual locations
Transaction amounts just under reporting thresholds
When these fire, they go to the SOC, who does initial triage and then escalates to engineering via a frustrating chain of Slack channels, emails, and ticketing system transfers. There are interminable "handoff meetings" where security explains the issue to engineering (who inevitably pushes back with "that's a false positive").
But these aren't "security detections" - they're business logic validations that should have been built into the application from the start, covering application behavior the SOC analysts weren't even aware of until now. The detection rules are just compensating controls for incomplete engineering work. They should be regular application logs feeding into the same alerting infrastructure that monitors for performance issues or crashes.
When you reframe these as business requirements rather than security controls, engineering teams start building them directly into the application. Failed login throttling and location-based risk scoring become product features rather than after-the-fact security band-aids. And when something suspicious happens, the same on-call engineer who responds to system and application outages gets the alert, because guess what? It's all just system behavior.
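To illustrate, here's a rough sketch of what a couple of those detections can look like when they live inside the application itself rather than in a SIEM. The thresholds, logger name, and function names are assumptions for the example, not a prescribed design:

```python
import logging
import time
from collections import defaultdict, deque

# Emitted through the same logging pipeline every other application event uses.
logger = logging.getLogger("app.transactions")

FAILED_LOGIN_WINDOW_SECONDS = 300   # illustrative threshold
FAILED_LOGIN_THRESHOLD = 5          # illustrative threshold

_failed_logins: dict[str, deque] = defaultdict(deque)

def record_failed_login(user_id: str) -> None:
    """Track repeated failed logins in-app instead of reconstructing them later in a SIEM."""
    now = time.time()
    attempts = _failed_logins[user_id]
    attempts.append(now)
    # Drop attempts that fall outside the rolling window.
    while attempts and now - attempts[0] > FAILED_LOGIN_WINDOW_SECONDS:
        attempts.popleft()
    if len(attempts) >= FAILED_LOGIN_THRESHOLD:
        logger.warning("excessive_failed_logins user=%s count=%d", user_id, len(attempts))

def record_transfer(user_id: str, amount: float, reporting_threshold: float = 10_000.0) -> None:
    """Flag transfers just under the reporting threshold as part of normal business logic."""
    if 0.9 * reporting_threshold <= amount < reporting_threshold:
        logger.warning("transfer_near_reporting_threshold user=%s amount=%.2f", user_id, amount)
```

The point isn't these specific rules; it's that the signal is generated where the behavior happens and flows into the same alerting the on-call engineer already watches.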
The Later You Address It, The More It Costs
Like all tech debt, security issues become exponentially more expensive to fix the longer you wait.
A missing input validation that takes 15 minutes to implement during initial development might take hours to retrofit once the code is in production, especially if it's been unwittingly relied upon by other components. And if that validation gap leads to a data breach? Now you're looking at potentially millions in costs, incident response processes, post-mortem meetings, root cause analyses, and security improvement plans that will consume your entire engineering org for months.
But what about dependency vulnerabilities like Log4Shell? Aren’t those different? Nope. They’re just bugs in third-party code—no different from any other bad dependency.
The teams that patched quickly weren’t better at security. They were better at engineering. They had solid dependency management, clear component inventories, and confidence in their testing.
The ones who struggled weren’t failing at security—they were failing at basic software hygiene. Their tech debt just happened to show up wearing a security label.
And that’s why Log4Shell keeps showing up in every vendor demo. Not because it was impossible to fix, but because security followed a separate, broken process, disconnected from how the rest of engineering operates.
How to Think About Security as Engineering Work
So how do we operationalize this perspective? Here are some practical shifts:
1. Stop separating security requirements from functional requirements
Security characteristics are functional characteristics. Authentication, authorization, input validation, output encoding - these aren't separate from your application's function; they define how your application functions. There should be no such thing as a "security requirement" - only product requirements that include security characteristics.
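One practical way this shows up is in the test suite: the authorization rule gets asserted the same way any other product behavior does, by the same engineers, in the same file. A minimal, self-contained sketch, where InvoiceStore is a hypothetical stand-in for your real service layer:

```python
import pytest

class InvoiceStore:
    """Toy service layer: the authorization check is just product behavior."""
    def __init__(self) -> None:
        self._invoices: dict[str, dict] = {}

    def create(self, invoice_id: str, owner: str, amount: int) -> None:
        self._invoices[invoice_id] = {"owner": owner, "amount": amount}

    def get(self, invoice_id: str, as_user: str) -> dict:
        invoice = self._invoices[invoice_id]
        if invoice["owner"] != as_user:
            raise PermissionError("not your invoice")
        return invoice

def test_owner_can_read_their_invoice():
    store = InvoiceStore()
    store.create("inv-1", owner="alice", amount=100)
    assert store.get("inv-1", as_user="alice")["amount"] == 100

def test_other_users_cannot_read_the_invoice():
    store = InvoiceStore()
    store.create("inv-1", owner="alice", amount=100)
    with pytest.raises(PermissionError):
        store.get("inv-1", as_user="bob")
```

There's no separate "security test suite" here; the access-control requirement is just another requirement with a test.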
2. Make threat modeling a normal design activity
Just as you wouldn't design a system without considering performance or reliability, don't design one without considering security. This isn't a separate "security review" - it's just thorough engineering design. Your architecture decision records should include security considerations by default, not as a special addendum.
3. Build security instrumentation into your observability stack
Your application's security telemetry shouldn't live in a separate system managed by a separate team. It should be part of your unified observability approach, giving you a complete picture of your system's behavior. There should be no separate security dashboards, just dashboards that include security signals alongside performance, reliability, and business metrics.
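As a sketch of what that can look like in practice, here security counters sit next to the latency and error metrics in the same metrics pipeline. Using prometheus_client is just one illustrative choice; any metrics library your services already use works the same way:

```python
from prometheus_client import Counter, Histogram

# The signals engineers already watch...
REQUEST_LATENCY = Histogram("http_request_latency_seconds", "Request latency", ["endpoint"])
REQUEST_ERRORS = Counter("http_request_errors_total", "HTTP 5xx responses", ["endpoint"])

# ...and the security signals, emitted into the very same pipeline and dashboards.
AUTH_FAILURES = Counter("auth_failures_total", "Failed authentication attempts", ["endpoint"])
ACCESS_DENIED = Counter("access_denied_total", "Authorization rejections", ["endpoint", "resource"])

def on_login_failure(endpoint: str) -> None:
    AUTH_FAILURES.labels(endpoint=endpoint).inc()

def on_access_denied(endpoint: str, resource: str) -> None:
    ACCESS_DENIED.labels(endpoint=endpoint, resource=resource).inc()
```

A spike in access_denied_total shows up on the same dashboard, with the same alerting rules and the same on-call rotation, as a spike in error rates.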
4. Measure security debt like you measure other tech debt
Track security issues using the same mechanisms you use to track other forms of technical debt. Measure their impact on velocity, reliability, and maintenance costs. Stop using exotic risk scoring frameworks that nobody understands. If you must have severity levels, use the same ones you use for regular bugs: P0-P4 or whatever your organization's standard is.
5. Stop following a separate IR process for "security incidents"
When there's a breach or vulnerability exploit, don't label it a "security incident" - it's just an incident. Run the same incident response process you'd use for an outage or data corruption issue. Use the same post-mortem template, the same severity classifications, and the same remediation tracking.
The Payoff
When you stop treating security as something separate, something special, security actually gets better.
It gets built into the normal engineering workflow, not bolted on later.
It shows up in design decisions, not after launch.
It lives in the same observability stack, not in a siloed SOC.
Most importantly, the walls between security and engineering start to dissolve. Security folks stop being the “auditors at the gate” and start being engineering partners. And engineers stop waiting for “security sign-off” and start designing with security by default.
That’s what real shift-left looks like.
Not another plugin in your IDE.
Not another scanner in your pipeline.
It’s engineering owning security from the start.
And when organizations make that shift, they don’t just get better security—they ship better software, faster. Because good security is just good engineering.
And tech debt is tech debt, whether you label it "security" or not.
The only question is whether you’ll pay it down now, or later—with interest.
A Call to Arms: Dismantle the Security Silo
So here's my challenge to both sides:
To security professionals: Stop pretending you're special. You're not wizards or warriors - you're engineers who happen to focus on a particular class of problems. Drop the FUD, the specialized jargon, the "sky is falling" rhetoric. Start speaking the language of engineering: requirements, defects, bugs, test coverage. Be engineers first, security specialists second.
To engineering leaders: Stop treating security as someone else's problem. Security work is engineering work, full stop. It belongs in your backlog, your sprints, your code reviews, your design docs, and your architecture decisions. If you're tracking technical debt but not security debt, you're not really tracking all the debt.
Security isn't magic. It's not compliance. It's not a checklist. It's just engineering. And the sooner we all accept that, the sooner we can start building more secure systems without all the drama, silos, and mutual finger-pointing that has characterized our industry for far too long.
I'm not suggesting we don't need security expertise - we absolutely do. But we need that expertise embedded in our engineering processes, not segregated into its own special kingdom with special language, special tools, and special workflows.
Because every second we spend maintaining artificial boundaries between security and engineering is a second we're not spending on actually fixing the problems. I've come to believe that the best security programs don't look like security programs at all. They look like engineering excellence programs that naturally incorporate security as one aspect of quality.