2026-02-28

Why I Built Kill Switches Into My AI Before I Built a Landing Page

By Damara Carlentine

How 25 years of Fortune 500 AI taught me that the first feature of any AI product isn't generation — it's the ability to stop.

The Lesson From 14,000 McDonald's Restaurants

When you build AI that runs across 14,000 locations, you learn something fast: the most important feature isn't what the AI does when it works. It's what happens when it doesn't.

I spent years building McHire, the AI hiring system that McDonald's deployed nationally. And the question that kept me up at night was never 'can it screen candidates faster?' It was 'what happens when it breaks at 2am in Tampa on a holiday weekend?'

The answer had to be: someone can stop it instantly, prove it stopped, and show exactly what happened.

That lesson shaped everything about how I built Boots Proposals.

What I Built Before the Landing Page

When I started building Boots Proposals — an AI-powered system that generates complete proposals, contracts, and reports in 60 seconds — most founders would have started with the homepage, the pricing page, or the onboarding flow.

I started with three kill switches.

Level 1 — Global: One setting that instantly shuts down all AI generation across the entire platform. Every organization, every user, every job. Done.

Level 2 — Organization: A per-tenant switch that pauses AI for a specific customer without affecting anyone else. Useful for billing issues, abuse response, or when a customer asks us to stop.

Level 3 — Job: A per-generation cancel that lets a user (or our system) stop a specific proposal mid-generation. The user clicked 'generate' and changed their mind? It stops.

The AI worker checks all three levels before processing every single job. If any switch is active, the job doesn't run. Period. If your AI tool doesn't have a kill switch, it has a prayer. And prayers don't pass enterprise security reviews.
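To make the three-level check concrete, here is a minimal sketch in Python. The names (`KillSwitchState`, `is_blocked`) and the in-memory sets are illustrative assumptions, not the actual Boots Proposals implementation, which would read switch state from the database.

```python
from dataclasses import dataclass, field

@dataclass
class KillSwitchState:
    """Hypothetical snapshot of the three kill-switch levels."""
    global_stop: bool = False                           # Level 1: platform-wide halt
    paused_orgs: set = field(default_factory=set)       # Level 2: per-tenant pauses
    cancelled_jobs: set = field(default_factory=set)    # Level 3: per-generation cancels

    def is_blocked(self, org_id: str, job_id: str) -> bool:
        """Return True if any of the three levels forbids running this job."""
        if self.global_stop:
            return True
        if org_id in self.paused_orgs:
            return True
        if job_id in self.cancelled_jobs:
            return True
        return False

switches = KillSwitchState()
switches.paused_orgs.add("org-42")
print(switches.is_blocked("org-42", "job-1"))  # True: this tenant is paused
print(switches.is_blocked("org-7", "job-1"))   # False: no switch applies
```

The worker would run a check like this immediately before each job; if `is_blocked` returns True, the job is skipped rather than generated.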

Kill Switches Without Audit Logs Are Theater

Here's what I learned from enterprise procurement: having a kill switch is necessary but not sufficient. The follow-up question is always 'prove it.' Prove that when you stopped AI generation, it actually stopped. Prove who activated the switch. Prove when it happened. Prove which jobs were affected. Prove it wasn't re-enabled without authorization.

That's why every kill switch activation in Boots Proposals writes to an audit log. Not a log file that rotates and disappears. A database table with structured, queryable events.

And it's not just kill switches. Every significant action in the system gets logged:

Every AI generation — requested, completed, failed, retried — with token counts and duration
Every login and signup — including failed attempts with IP addresses for brute-force detection
Every billing event — subscription changes, payment successes and failures, cancellation reasons
Every kill switch activation and deactivation — with actor identity, scope level, and reason

Each event records who did it (user, system, worker, or Stripe), the IP address, the browser, and a session ID that ties related actions together.
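As a sketch of what a structured, queryable audit event might look like, here is a small Python example. The field names (`event_type`, `actor`, `session_id`, and so on) mirror the fields described above but are assumptions, not the actual Boots Proposals schema.

```python
import datetime
import json

def audit_event(event_type, actor, ip, user_agent, session_id, **details):
    """Build one structured audit event, ready to insert as a database row."""
    return {
        "event_type": event_type,      # e.g. "kill_switch.activated"
        "actor": actor,                # user, system, worker, or Stripe
        "ip_address": ip,
        "user_agent": user_agent,
        "session_id": session_id,      # ties related actions together
        "occurred_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "details": details,            # scope level, reason, token counts, ...
    }

event = audit_event(
    "kill_switch.activated",
    actor="user:admin",
    ip="203.0.113.7",
    user_agent="Mozilla/5.0",
    session_id="sess-123",
    level="organization",
    reason="customer request",
)
print(json.dumps(event, indent=2))
```

Because each event is a flat, typed record rather than free-form log text, questions like 'who activated the switch, and when?' become simple queries instead of grep sessions.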

Your Data Is Not My Other Customer's Data

Multi-tenant SaaS means multiple customers share the same infrastructure. The question every enterprise buyer asks is: 'how do I know my data is separate from everyone else's?'

Most AI tools handle this at the application layer — meaning the code checks which user is logged in and filters the results. That works until someone writes a bug. Then data leaks.

Boots Proposals handles isolation at the database engine layer using PostgreSQL Row Level Security. Every query is automatically filtered by the user's organization before the application code even sees it. Even if I wrote a bug that forgot to filter by org_id, the database would still block the query.

And we don't just claim it works. We have 31 automated tests that prove it:

Organization A cannot SELECT Organization B's proposals, contracts, clients, reports, templates, or services
Organization A cannot UPDATE or DELETE Organization B's data
Organization A cannot INSERT data into Organization B's namespace by manipulating the org_id field

These tests run on every deployment. If a code change breaks tenant isolation, it gets caught before it reaches production.
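For readers unfamiliar with Row Level Security, a policy of this shape is what enforces the isolation at the database engine. The table and column names (`proposals`, `org_id`) and the `app.current_org` session setting are illustrative assumptions, not the actual Boots Proposals schema.

```sql
-- Illustrative RLS setup: every query on "proposals" is filtered by the
-- caller's organization before application code sees any rows.
ALTER TABLE proposals ENABLE ROW LEVEL SECURITY;
ALTER TABLE proposals FORCE ROW LEVEL SECURITY;  -- applies even to the table owner

CREATE POLICY tenant_isolation ON proposals
    USING (org_id = current_setting('app.current_org')::uuid)        -- SELECT/UPDATE/DELETE
    WITH CHECK (org_id = current_setting('app.current_org')::uuid);  -- INSERT/UPDATE writes
```

With a policy like this in place, a query that forgets its `WHERE org_id = ...` clause simply returns no rows from other tenants, and an INSERT that spoofs another tenant's org_id is rejected by the `WITH CHECK` clause.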

Why I Published the Full Architecture

I've sat in enough security review meetings to know what happens when a vendor says 'we take security seriously.' The procurement team rolls their eyes, the CISO asks for evidence, and the vendor scrambles to produce documentation they should have built months ago.

So I published the whole thing. Publicly. On our website.

The full security architecture page shows every table in the audit log, every event we capture, every vendor in our infrastructure trust chain and their compliance posture, and exactly how our kill switch hierarchy works. Not a privacy policy written by lawyers. Not a 'trust center' with marketing copy. The actual technical architecture, in plain English, with enough detail that a security team can evaluate it without scheduling a call.

If your AI tool can't explain how it protects your data in plain English — not legal English, not marketing English — it's not enterprise-ready. It's a wrapper.

What This Means If You're Evaluating AI Tools

Whether you're a freelancer, a consultant, or a small business owner evaluating AI proposal tools, here are the questions you should be asking:

'Can you stop the AI?' — If the answer isn't specific (what level, how fast, who can do it), it's a no.
'Can you prove my data is separate from other customers?' — If they say 'we use encryption,' that's not an answer. Encryption protects data in transit and at rest. Tenant isolation protects data from other tenants.
'What do you log?' — If they can't list specific events with specific fields, they're not logging anything useful.
'Can I see the architecture?' — If the answer is 'we'll share that after you sign an NDA,' ask yourself what they're hiding.

Small business owners deserve the same transparency that Fortune 500 companies demand. You shouldn't have to be McDonald's to know how your AI tool protects your data.

Read the Full Security Architecture

The complete security architecture for Boots Proposals — including tenant isolation details, audit log schema, kill switch hierarchy, vendor trust chain, and access control model — is published at bootsagentai.com/security.

If you're a freelancer or consultant tired of AI tools that can't answer basic security questions, Boots Proposals is now accepting pilot members at $29/month.

About Boots On The Ground AI

Founded by Damara Gonzalez, Boots On The Ground AI builds practical AI solutions for small businesses in the Chicago suburbs. With 25+ years of Fortune 500 product management experience — including building AI hiring systems deployed across 14,000+ McDonald's locations — we bring enterprise-grade thinking to small business problems.

Aurora, Illinois | damara@bootsagentai.com | bootsagentai.com
