Who Owns Your Security When AI Writes Your Code?
Every AI provider disclaims liability for generated code. The model forgot to use your existing security implementation. That's your problem now. We think that's worth talking about.
The liability gap
Read the Terms of Service for any AI coding assistant. Somewhere in there — usually section 7 or 8, past the part where they explain how they'll use your data — you'll find a version of this:
"You are solely responsible for your use of the Output and for ensuring it complies with applicable laws and regulations. We do not warrant that Output will be accurate, complete, or free of errors."
This isn't unreasonable on its face. No software comes with a guarantee of perfection. But consider what's actually happening: a company is selling you a tool that writes code. The code it writes might be insecure. If that insecure code ships to production, causes a breach, or exposes user data — that's on you. The company already protected itself when you clicked "I agree."
This is the standard arrangement. It's how most of our industry works. Companies accept the benefits of AI adoption — faster development cycles, reduced engineering costs, new revenue from API access — while the liability for what that AI actually produces lands entirely on the user.
Implied responsibility
This isn't unique to AI. It's how every layer of our society operates.
Companies accept the benefits of being part of an assumed social contract — access to markets, educated workers, public infrastructure, consumer trust — but when it comes to paying the actual cost of the things that go wrong, they've already insulated themselves. Terms of Service. End User License Agreements. Liability limitations. Arbitration clauses.
The responsibility is always implied, never actual. Society operates on the assumption that the institutions within it will bear some share of the consequences their products create. But the legal structure ensures they don't have to.
In AI-assisted development, this gap is especially sharp. The model is doing real work — writing functions, building features, modifying security-sensitive code. But the entity that built, trained, and sold the model bears no responsibility for what it produces. The developer using it does. And increasingly, that developer is trusting the model to handle things they don't fully understand themselves.
The model forgot
Here's what actually happened to us. We were building Saga's security layer — an elaborate system for sandboxing commands, validating inputs, managing auth tokens. The model we were working with forgot to use it.
Not maliciously. Not because it was a bad model. It just didn't check what already existed. It generated new code for a problem we'd already solved, but the new code didn't have the protections the existing code had. A hardened implementation was right there in the codebase. The model reinvented the wheel — and the new wheel didn't have brakes.
This is the default behavior. AI coding assistants generate from their training distribution — fast, generic, prototype-grade. They don't search your repository for existing implementations. They don't check if you already built a safer version of what they're about to write. They don't carry context about your codebase's security conventions from one session to the next.
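To make the failure mode concrete, here's a minimal sketch in Python. The names are invented for illustration (the source describes a Rust codebase, and `run_sandboxed` and `ALLOWED_COMMANDS` are hypothetical, not Saga's actual API); the point is the shape of the mistake, not the specific code:

```python
import shlex
import subprocess

# What the codebase already had: a hardened helper that validates
# commands before running them. (Hypothetical illustration only.)
ALLOWED_COMMANDS = {"git", "ls", "cat"}

def run_sandboxed(command: str) -> subprocess.CompletedProcess:
    argv = shlex.split(command)  # no shell, no injection via quoting tricks
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"command not allowed: {argv[:1]}")
    return subprocess.run(argv, capture_output=True, text=True, timeout=30)

# What a model tends to generate instead: functionally similar, but it
# routes input straight through a shell -- the wheel without brakes.
def run_generated(command: str) -> subprocess.CompletedProcess:
    return subprocess.run(command, shell=True, capture_output=True, text=True)
```

Both functions "run a command." Only one of them checks anything first, and a model generating from its training distribution has no reason to know the first one exists.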
And when that behavior creates a vulnerability, the Terms of Service have already told you: that's your problem.
Who should own the security analysis?
The typical answer is: use a third-party security scanner. Snyk. SonarQube. Semgrep. These are good tools. They find real vulnerabilities.
But they're also services. They run in someone else's cloud. Your code is sent to their servers for analysis. Your vulnerability reports live on their infrastructure. If the company changes pricing, gets acquired, deprecates your plan, or suffers a breach — your security analysis goes with it.
More fundamentally: these tools don't know what your codebase already does. They flag patterns. They don't say "you already have a hardened version of this in src/auth/middleware.rs — use that." They find problems. They don't understand your solutions.
A different approach
We built something we call WYC — Where's Your Condom. The name is deliberately irreverent. You don't forget protection that has a name like that.
WYC is a security review pipeline that runs locally, on your machine, in the background. When you save files, when an AI agent finishes writing code, when you're about to commit — it scans. But it does something most scanners don't: before it flags anything, it searches your codebase for what you've already built.
Found a potential command injection? WYC checks if you already have a CommandBuilder with input validation. If you do, the finding includes that context: "Your codebase already handles this safely in saga_core/src/tool/sandbox.rs. Use that."
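For illustration, a finding carrying that kind of context might look like the following. The field names here are assumptions for the sketch; the source doesn't document WYC's actual schema:

```python
# Hypothetical shape of a WYC finding -- field names are invented for
# illustration; the real schema may differ.
finding = {
    "rule": "command-injection",
    "file": "scripts/deploy.py",
    "line": 42,
    "severity": "high",
    "existing_solution": {
        "path": "saga_core/src/tool/sandbox.rs",
        "note": "Your codebase already handles this safely here. Use that.",
    },
}

def render_context(f: dict) -> str:
    """Turn a finding into the hint a model would receive."""
    sol = f.get("existing_solution")
    if sol:
        return f"{f['rule']} at {f['file']}:{f['line']} -- prefer {sol['path']}"
    return f"{f['rule']} at {f['file']}:{f['line']}"
```

The `existing_solution` field is the part most scanners don't have: it points back at your own code instead of at a generic remediation page.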
This context goes everywhere. Every model that works through Saga — whether it's a local Llama instance, Claude Code in your terminal, or GPT in a chat window — gets the same security context. The model doesn't need to be smart about your codebase's security. The context makes it smart.
Files on your hard drive
Here's the technical decision that matters most: WYC stores everything as JSON files on your local disk.
Not a database on someone's server. Not an API that requires authentication. Not a service that can be deprecated. Files.
Any tool that can read a JSON file can access your security analysis. Claude Code reads it through MCP tools. A local model reads it directly from disk. A script you write in Python can parse it. If Saga doesn't exist tomorrow, the files are still there.
This is a deliberate architectural choice. We don't want to be another dependency between the developer and their own security data. The filesystem is the interface. The data belongs to the user.
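Because the analysis is just JSON on disk, consuming it takes a few lines of any language. A sketch, assuming findings live as `*.json` files under a directory like `~/.saga/wyc/` (the path and the `severity`/`rule` fields are assumptions, not documented specifics):

```python
import json
from pathlib import Path

def load_findings(root: Path) -> list[dict]:
    """Read every finding file under `root`.
    No API, no auth token, no service: any tool that reads JSON can do this."""
    return [json.loads(p.read_text()) for p in sorted(root.glob("*.json"))]

if __name__ == "__main__":
    store = Path.home() / ".saga" / "wyc"  # assumed location for the sketch
    if store.exists():
        for f in load_findings(store):
            if f.get("severity") == "high":
                print(f.get("rule"), f.get("file"))
```

If the vendor disappears tomorrow, this script still runs.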
Why this won't come up otherwise
This conversation — who bears the cost when AI-generated code is insecure — is not one the industry is incentivized to have. The current arrangement works for every company involved:
- AI providers sell model access, disclaim liability for output
- Security scanning companies sell cloud services for the code those models produce
- The developer pays for both, and bears the liability for what ships
Nobody in that chain has a business reason to ask: "should the person using these tools actually own their security analysis?" The question disrupts every revenue model in the stack.
So the topic stays invisible. Not because it's not important. Because the current system benefits every incumbent. Implied responsibility — the assumption that someone is looking out for the user — is a feature of the business model, not a bug.
What we're building toward
WYC is one piece of a larger idea: your AI workspace should be sovereign. The data lives on your machine. The analysis happens locally. Models are guests — useful, powerful guests — but they don't own the house.
When a model writes code for you, the security review that catches what it missed should be yours. Not a subscription. Not a cloud report. Not a feature that disappears when you cancel a plan. Files on your hard drive, in a format any tool can read, that survive anything the industry does next.
That's not a technical limitation. That's a design principle.
WYC ships with Saga. Local-first security review, provider-agnostic context delivery, files on your disk. Download Saga or read more about the security review system.