When you click "Connect Supabase" inside BYOB, something deceptively complex happens in the background. Within seconds, your AI agent can run database migrations, generate TypeScript types, push schema changes, and wire up your SvelteKit frontend — all without you configuring a single environment variable by hand.
This post is the full engineering story behind that experience. We'll cover the OAuth handshake, how credentials are encrypted and stored, the zero-knowledge pattern that keeps your secrets away from the AI model, the runtime environment lifecycle, and the signed URL service that handles private file access.
The 30-Second Overview
Before diving into details, here's the complete flow from clicking "Connect Supabase" to an AI-driven migration running against your live database:
The key design principle threading through every component: the AI model never sees your credentials. It invokes a tool called execute_supabase_cli, the backend injects your tokens into the shell environment, and the model only sees command output — never the SUPABASE_ACCESS_TOKEN itself.
Part 1: The OAuth Handshake
The integration starts with a standard OAuth 2.0 authorization code flow, but with a few specific design choices worth calling out.
Scope and CSRF Protection
BYOB requests the all scope from Supabase's OAuth server, granting permission to manage database schemas, run CLI operations, and access project settings.
Before redirecting the user, the backend generates a 32-byte cryptographically random state parameter stored in a temporary dictionary. When the OAuth callback arrives, the state is validated. If it doesn't match, the request is rejected before any token exchange happens — this prevents CSRF attacks at the authentication boundary.
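A minimal sketch of that state-parameter dance, assuming an in-memory dict as the temporary store (the names `_pending_states`, `begin_oauth`, and `validate_callback` are illustrative, not BYOB's actual identifiers):

```python
import secrets

# Hypothetical in-memory store mapping pending OAuth states to users.
_pending_states: dict[str, str] = {}

def begin_oauth(user_id: str) -> str:
    # 32 cryptographically random bytes, URL-safe encoded, bound to the
    # user who started the flow.
    state = secrets.token_urlsafe(32)
    _pending_states[state] = user_id
    return state

def validate_callback(state: str) -> str:
    # Pop so each state is single-use; an unknown state is rejected
    # before any token exchange happens.
    user_id = _pending_states.pop(state, None)
    if user_id is None:
        raise PermissionError("OAuth state mismatch - possible CSRF")
    return user_id
```

Popping the state on first use also means a replayed callback fails, even with a previously valid state.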
Project Discovery
Most OAuth integrations get tokens and stop. BYOB takes one extra step: after the code exchange, it calls the Supabase Management API to list all projects owned by the authenticated user.
This lets the user pick exactly which Supabase project gets linked to their BYOB workspace. A developer might have a production database, a staging database, and several side projects. They should be explicit about which one the AI agent can touch.
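The discovery step can be sketched like this; the HTTP call is injected so the example stays offline, and `list_projects`/`pick_project` are illustrative names rather than BYOB's actual functions:

```python
from typing import Callable

def list_projects(access_token: str,
                  http_get: Callable[[str, dict], list]) -> list:
    # The Supabase Management API lists projects the token's user can
    # administer; `http_get` stands in for a real HTTP client.
    return http_get(
        "https://api.supabase.com/v1/projects",
        {"Authorization": f"Bearer {access_token}"},
    )

def pick_project(projects: list, ref: str) -> dict:
    # The user chooses exactly one project; an unknown ref is an error,
    # never a silent default.
    for p in projects:
        if p.get("id") == ref:
            return p
    raise LookupError(f"no project with ref {ref}")
```

Refusing to default to the first project is the point: linking the wrong database to an AI agent should be impossible by accident.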
Part 2: Credential Encryption
Once the user selects their project, BYOB has access tokens that need to be stored safely. All tokens are encrypted using AES-256-GCM before being written to the database.
The wire format is: base64(nonce + ciphertext). A fresh 12-byte nonce is generated for every encryption using os.urandom(12), so every stored token has a unique nonce even if the underlying secret is identical.
@staticmethod
def _encrypt_env_content(plaintext: str, hex_key: str) -> str:
    key = bytes.fromhex(hex_key)
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)  # Fresh nonce every time
    ciphertext = aesgcm.encrypt(nonce, plaintext.encode("utf-8"), None)
    return base64.b64encode(nonce + ciphertext).decode("utf-8")
Decryption splits the base64 blob at byte 12 to recover the nonce, then decrypts with the same key. The same encryption scheme covers .env files in the project runner, so credentials at rest are always encrypted end-to-end.
Lazy Token Refresh
OAuth access tokens expire. Rather than running a background scheduler to refresh tokens proactively, BYOB uses a lazy refresh strategy: expiry is checked every time an integration is accessed.
The tradeoff: the first request after expiry is slightly slower due to a network roundtrip. But it eliminates background scheduler complexity and the race condition where a scheduled refresh runs while a request is already in flight.
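The lazy strategy fits in a few lines. This sketch assumes an integration record with `access_token`, `refresh_token`, and `expires_at` fields, and takes the refresh call as a parameter; all names are illustrative:

```python
import time

def get_access_token(integration: dict, refresh_fn) -> str:
    # Refresh slightly early (60s margin) so a token never expires
    # mid-request; otherwise just return the cached token.
    if integration["expires_at"] - time.time() < 60:
        fresh = refresh_fn(integration["refresh_token"])
        integration["access_token"] = fresh["access_token"]
        integration["refresh_token"] = fresh["refresh_token"]
        integration["expires_at"] = time.time() + fresh["expires_in"]
    return integration["access_token"]
```

Because the check runs inside the access path, there is no scheduler to coordinate with and no window where a background job and a live request both hold a refresh in flight.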
Part 3: The Zero-Knowledge Pattern
This is the most architecturally interesting piece. How does the AI agent run supabase db push against your database without the model ever seeing your SUPABASE_ACCESS_TOKEN?
The Tool Abstraction
The AI model has access to a tool called execute_supabase_cli. From the model's perspective, it accepts a single parameter: command. The system prompt explicitly instructs the model: "Authentication and access_token are ALL injected automatically. You MUST NEVER ask the user for credentials."
When the model calls this tool:
{
  "name": "execute_supabase_cli",
  "arguments": { "command": "db push" }
}
It expects back only command output — table names, migration status, success or error messages.
The Injection Layer
What actually happens when that tool call arrives at the Air backend:
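In sketch form, assuming a handler split into an invocation builder and a runner (the function names are illustrative, not BYOB's actual code):

```python
import os
import shlex
import subprocess

def build_cli_invocation(command: str, access_token: str):
    # The model-supplied command becomes argv; the token goes ONLY into
    # the child process environment, never into the arguments.
    argv = ["supabase", *shlex.split(command)]
    env = {**os.environ, "SUPABASE_ACCESS_TOKEN": access_token}
    return argv, env

def execute_supabase_cli(command: str, access_token: str) -> str:
    argv, env = build_cli_invocation(command, access_token)
    result = subprocess.run(
        argv, env=env, capture_output=True, text=True, timeout=300
    )
    # Only stdout/stderr flow back to the model; the env dict stays
    # server-side and is never serialized into the response.
    return result.stdout + result.stderr
```

The separation makes the security property easy to state: nothing that reaches the model's context ever contained the token.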
The model receives migration confirmations and error messages. It never receives the token. Even if a prompt injection attack tried to get the model to print its environment variables, there would be nothing to print — the token lives in the runner's process environment, not in any model-accessible context.
Why This Matters
Consider a naive implementation that gives the AI model direct token access and lets it invoke the CLI itself. This works, but means:
- Your SUPABASE_ACCESS_TOKEN appears in the model's context window
- It gets saved to chat history
- A status message might read: Running SUPABASE_ACCESS_TOKEN=eyJhb... supabase db push
- A sufficiently clever prompt injection could extract it
Part 4: The Runtime Environment Lifecycle
When a BYOB project boots, it needs Supabase credentials available as environment variables before any user code runs. Here's the bootstrap sequence:
The restore_env_from_d1 function fetches all stored secrets from Cloudflare D1 (BYOB's edge database), decrypts them using a per-project Fernet key, and writes them to the project's .env file. This runs before the SvelteKit dev server initializes.
Per-Project Key Isolation
Each project gets its own Fernet encryption key stored encrypted in the main database. If one project's key were somehow compromised, it would only affect that project's secrets — not a global key protecting everything.
The D1 storage layer encrypts individual secret values with this per-project key before storage:
encrypted_value = project_fernet.encrypt(value.encode()).decode()
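The isolation pattern is classic envelope encryption. A sketch, with illustrative function names and a stand-in `master` key for whatever protects the main database:

```python
from cryptography.fernet import Fernet

def provision_project_key(master: Fernet) -> bytes:
    # Each project gets its own fresh Fernet key; only the encrypted
    # form is ever written to the main database.
    project_key = Fernet.generate_key()
    return master.encrypt(project_key)

def load_project_key(master: Fernet, stored: bytes) -> Fernet:
    # Unwrap the per-project key at runtime, just before it's needed.
    return Fernet(master.decrypt(stored))
```

Compromising one unwrapped project key exposes one project's secrets; every other project's blobs remain opaque without the master.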
SvelteKit Environment Compatibility
The provisioned variables use SvelteKit's PUBLIC_ prefix convention. When PUBLIC_SUPABASE_URL and PUBLIC_SUPABASE_ANON_KEY land in .env, they're immediately usable from $env/static/public with zero manual configuration:
import { PUBLIC_SUPABASE_URL, PUBLIC_SUPABASE_ANON_KEY } from '$env/static/public';
export const supabase = createClient(PUBLIC_SUPABASE_URL, PUBLIC_SUPABASE_ANON_KEY);
Connect Supabase once. Your SvelteKit app has working credentials. No .env editing required.
Part 5: The AI Agent's Database Workflow
With infrastructure in place, here's the end-to-end flow when a user asks the AI to make schema changes.
The Migration Workflow
The agent follows a strict sequence for schema changes that mirrors professional database development:
The db pull step is critical. Without pulling the existing schema first, the model might generate migrations that conflict with existing tables or miss foreign key relationships. Pulling gives the model ground truth about what's actually in the database before writing anything new.
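The sequence above can be pinned down as an ordered list; the subcommand strings follow standard Supabase CLI usage, and the migration name plus the `next_step` helper are illustrative:

```python
# The agent's schema-change sequence: ground truth first, types last.
MIGRATION_STEPS = [
    "db pull",                     # 1. pull the live schema before writing anything
    "migration new add_profiles",  # 2. create a new migration file (name is an example)
    "db push",                     # 3. apply the migration to the live database
    "gen types typescript",        # 4. regenerate TS types to match the new schema
]

def next_step(completed: int) -> str:
    # Strictly sequential: step N is only reachable after steps 0..N-1.
    return MIGRATION_STEPS[completed]
```

Encoding the order as data rather than prose makes the invariant checkable: pull always precedes push, and type generation always follows a successful push.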
RLS Policy Generation
When creating tables containing user data, the agent automatically generates Row Level Security policies:
ALTER TABLE user_profiles ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Users can view own profile" ON user_profiles FOR SELECT
USING ((SELECT auth.uid()) = user_id);
CREATE POLICY "Users can insert their profile" ON user_profiles FOR INSERT
WITH CHECK ((SELECT auth.uid()) = user_id);
Note the (SELECT auth.uid()) pattern rather than bare auth.uid(). This wraps the auth call in a subquery, causing Postgres to cache the result per query rather than evaluating it per row. For tables with many rows, this is a meaningful performance gain that the agent applies automatically.
Part 6: Signed URL Service
BYOB stores user-uploaded files in private Supabase Storage buckets. The FreshUrl service handles generating short-lived access URLs.
The Caching Strategy
URLs are cached in memory by storage path, so repeated requests for the same asset within a session skip the API call entirely. The security model is defense-in-depth: even if a signed URL were intercepted, it expires in 24 hours. The underlying files in private buckets are never accessible without a valid signature.
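A sketch of that cache, with the signing call injected so the example runs offline; the class name echoes the service but the implementation is assumed, not quoted:

```python
import time
from typing import Callable

class FreshUrlCache:
    def __init__(self, sign: Callable[[str, int], str], ttl: int = 24 * 3600):
        # `sign` stands in for the Supabase Storage signed-URL call.
        self._sign = sign
        self._ttl = ttl
        self._cache: dict[str, tuple[str, float]] = {}  # path -> (url, expiry)

    def get(self, path: str) -> str:
        hit = self._cache.get(path)
        # Re-sign slightly before expiry (60s margin) so callers never
        # receive a URL that dies mid-download.
        if hit and hit[1] - time.time() > 60:
            return hit[0]
        url = self._sign(path, self._ttl)
        self._cache[path] = (url, time.time() + self._ttl)
        return url
```

Within a session, repeated renders of the same asset cost one dictionary lookup instead of one API roundtrip each.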
Part 7: The D1 Sync Layer
BYOB uses Cloudflare D1 as a secondary storage layer for environment variables that need to survive container restarts. Project containers are ephemeral — a restart resets filesystem state, which means .env files written during one container lifecycle disappear in the next.
The D1 sync solves this with a write-through pattern:
_sync_to_d1 is called automatically after any .env modification, so D1 always reflects the latest state. The sync deliberately excludes VITE_EXPORT_API_KEY — that's a platform-internal key re-injected on each boot, not something that belongs in user-managed storage.
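The write-through rule fits in a few lines. In this sketch, `d1_write` stands in for the actual D1 call and `set_env_var` is an illustrative name:

```python
# Platform-internal keys re-injected on boot; never synced to D1.
EXCLUDED_FROM_SYNC = {"VITE_EXPORT_API_KEY"}

def set_env_var(env: dict, name: str, value: str, d1_write) -> None:
    # Write-through: mutate local .env state, then mirror to D1 in the
    # same operation so the two can never drift.
    env[name] = value
    if name not in EXCLUDED_FROM_SYNC:
        d1_write(name, value)
```

Bundling the local write and the sync into one code path is what makes "D1 always reflects the latest state" a structural guarantee rather than a convention.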
What This Enables
The architecture enables something that would take significant manual work to set up. A developer describes a data model in plain language, and within minutes has:
- A live Postgres table with correct column types and constraints
- Row Level Security policies protecting user data
- TypeScript types matching the schema, immediately usable in SvelteKit
- Environment variables configured and ready in the running app
- A Supabase client initialized and usable from any component
The goal was an integration that works so smoothly it feels obvious — and making things feel obvious is usually the hardest engineering problem.
Connect your Supabase project