Defense in depth.
On your machine.
AI models are powerful guests. Saga makes sure they stay guests. Every layer — filesystem, process, data, access control — is designed so that no model, no agent, and no provider can reach what isn't theirs.
Sandboxed by default
Every AI agent operates inside a virtual filesystem. There is no path to your system files, your SSH keys, or another project's data. The boundary isn't a policy — it's the architecture.
Virtual Filesystem Zones
How it's enforced
1. Path validation: Every path must start with /saga/. Traversal attempts (..) are rejected before any I/O.
2. Caller identity: User, MCP agent, and system operations have separate permission tiers. Agents cannot write to the context zone.
3. Serialization guard: Paths are validated again during JSON serialization. A malformed path can't survive a round-trip.
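The first of those rules can be sketched in a few lines of Python. The /saga/ prefix and the rejection of traversal components come from the text above; the function name, exception type, and everything else here are illustrative, not Saga's actual API:

```python
from pathlib import PurePosixPath

ZONE_ROOT = PurePosixPath("/saga")

def validate_path(raw: str) -> PurePosixPath:
    """Reject anything outside the virtual /saga/ tree before any I/O happens."""
    path = PurePosixPath(raw)
    # Rule: every path must be absolute and rooted at /saga/.
    if not path.is_absolute() or path.parts[:2] != ("/", "saga"):
        raise PermissionError(f"outside virtual filesystem: {raw}")
    # Rule: traversal components are rejected outright, never resolved.
    if ".." in path.parts:
        raise PermissionError(f"traversal attempt: {raw}")
    return path

validate_path("/saga/projects/demo/src/main.rs")   # ok
# validate_path("/saga/../etc/passwd")             # raises PermissionError
# validate_path("/home/user/.ssh/id_ed25519")      # raises PermissionError
```

The key design choice is rejecting ".." instead of resolving it: a path that even mentions traversal never reaches the filesystem layer.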
Three layers of project isolation
Each project is its own world. A compromised agent in one project cannot see, touch, or infer anything about another. Three independent isolation layers ensure that even if one fails, the others hold.
Process isolation (Layer 1)
Every terminal tab is a separate PTY process with its own shell, environment, working directory, and crash recovery log. Processes don't share memory. One tab crashing doesn't affect another.
Data isolation (Layer 2)
Each project gets its own filesystem tree, database partition, and tab collection. Deleting a project cascades through every table. Symlinks into project directories are rejected.
Access control (Layer 3)
Four-layer write defense: pattern denylist, marker file detection, project root jail, and read-only fallback. When an agent requests access outside its boundary, a permission modal asks you directly.
Local history. No git required.
Every change is content-addressed and snapshotted locally. If a model overwrites your work, if a process crashes, if you just want to go back — the history is on your disk, deduplicated, and instant.
Content-addressed with blake3
Identical content is stored once, no matter how many snapshots reference it. Deduplication is automatic.
Temporal validation
Every snapshot is witness-hashed. You can prove what your code looked like at any point in time — cryptographically.
Configurable retention
Keep everything, or keep the last N snapshots. Your disk space, your rules.
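The dedup-plus-witness idea can be illustrated with a toy store. The text names blake3; since Python's standard library doesn't ship blake3, hashlib.blake2b stands in here, and the class and method names are invented for the sketch:

```python
import hashlib

class SnapshotStore:
    """Toy content-addressed store: identical blobs hash to the same key
    and are stored once, however many snapshots reference them."""

    def __init__(self):
        self.blobs: dict[str, bytes] = {}          # content hash -> content
        self.snapshots: list[dict[str, str]] = []  # each maps file path -> blob hash

    def _hash(self, content: bytes) -> str:
        # Stand-in for blake3; same role, different stdlib primitive.
        return hashlib.blake2b(content, digest_size=32).hexdigest()

    def snapshot(self, files: dict[str, bytes]) -> dict[str, str]:
        manifest = {}
        for path, content in files.items():
            key = self._hash(content)
            self.blobs.setdefault(key, content)  # identical content stored once
            manifest[path] = key
        self.snapshots.append(manifest)
        return manifest

    def witness(self) -> str:
        """Chain the manifests so any past state is pinned by the final hash
        (a rough sketch of the 'witness-hashed' idea, not Saga's scheme)."""
        h = b""
        for manifest in self.snapshots:
            record = repr(sorted(manifest.items())).encode()
            h = hashlib.blake2b(h + record, digest_size=32).digest()
        return h.hex()

store = SnapshotStore()
store.snapshot({"a.py": b"print(1)", "b.py": b"print(1)"})
store.snapshot({"a.py": b"print(2)", "b.py": b"print(1)"})
# Four file references across two snapshots, but only two unique blobs stored.
```

Because snapshots store hashes rather than copies, keeping "everything" is far cheaper than it sounds: unchanged files cost one manifest entry, not another copy of the file.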
What gets captured
- + File state before and after agent writes
- + Agent execution state and conversation checkpoints
- + Context graph state at each decision point
- + Artifacts touched and tokens generated per task
- x Nothing leaves your machine. Ever.
The model forgot. That's your problem.
AI coding assistants generate fast, generic, prototype-grade code. They don't check if your codebase already has a hardened version of what they're about to reinvent. When that creates a vulnerability, the Terms of Service say it's on you.
What providers promise
- x "Output may not be accurate" — buried in ToS
- x "You are responsible for your use of the output"
- x No liability for generated code quality
- x Security scanning? Use a third-party service
What Saga does
- + Sandboxes every agent in a virtual filesystem
- + Isolates projects at process, data, and access layers
- + Snapshots your work before any agent writes
- + Reviews code for what the model missed
Security review that knows your codebase
Most scanners flag patterns. Saga's security review checks what your codebase already handles — and tells every model to use what you built instead of reinventing it.
Background scanning
When you save files, commit code, or an AI agent finishes writing — the security pipeline runs automatically. No action needed.
Codebase awareness
Before flagging anything, the scanner checks what your codebase already does. If you have a hardened auth middleware, it doesn't tell you to write another one. It tells models to use the one you built.
Inbox delivery
Findings arrive in a dedicated inbox with the actual code, what your codebase already handles, and why it matters. Discuss any finding with any AI model directly.
Universal context
Every AI tool you use through Saga — local models, Claude, GPT, terminal agents — automatically gets your security context. The model doesn't need to be smart about security. The context makes it smart.
Every model is a guest in your house
Security data is stored as files on your machine. Any tool that can read a file can access them. No API keys. No subscriptions. No vendor lock-in.
Terminal agents
Claude Code, Codex CLI, Gemini CLI — all access findings through MCP tools.
Chat conversations
Discuss findings with any model. Security context is automatically injected.
Local models
Running Llama or Qwen locally? They read the same findings. Zero internet required.
Your data survives everything
App crashes. API outages. Provider shutdowns. Price changes. None of it touches your data. It's files on your hard drive, in formats any tool can read.
Stop outsourcing your code security
The model that writes your code shouldn't also be the only thing checking it. Own your security. Own your data. Own your workspace.