
Your AI Agent Can Send That Email. Did You Approve It?

Bitsight found 30,000+ exposed agent instances with send-email permission. The quieter problem affects everyone else: outbound actions that fire without a gate.


In February 2026, cybersecurity firm Bitsight scanned the internet for exposed AI agent instances and found more than 30,000 of them: OpenClaw deployments with no authentication, sitting open to anyone who looked. When Bitsight's researchers tested what those instances could do, they confirmed that an agent's outbound actions (specifically, sending email on the owner's behalf via Gmail) required no authorization beyond the initial OAuth grant. The access itself was enough.

That's the security finding. The operational finding is quieter and affects people who aren't running exposed instances at all.

Access is not the same as authorization

When you connect an AI agent to your email, your CRM, or your calendar, you're granting OAuth permissions. Read, write, send. Those permissions are broad by design, as the agent needs them to do useful work. What most setups don't include is a step between "the AI decided to send this" and "the email left your outbox."

That gap is smaller than it sounds. An AI email management agent that summarizes your inbox and drafts replies has everything it needs to send, not just draft. An AI CRM assistant that updates lead status and logs notes has everything it needs to create records, trigger automations, and send follow-ups. Whether it does any of that without your explicit instruction depends on how the agent is configured, and most configurations optimize for responsiveness, not for pausing before outbound action.
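To make "has everything it needs to send" concrete: these are Gmail's published OAuth scopes (the scope URLs are real; which ones a given agent requests varies by product, and the snippet below is only an illustration). Notably, every scope that lets an agent manage drafts also lets it send.

# Real Gmail API scope URLs; the bundling is the point.
GMAIL_SCOPES = {
    "https://www.googleapis.com/auth/gmail.readonly": "read mail only; no drafts, no send",
    "https://www.googleapis.com/auth/gmail.compose": "create and edit drafts, AND send them",
    "https://www.googleapis.com/auth/gmail.modify": "read, write, label, AND send",
    "https://mail.google.com/": "full mailbox access",
}
# There is no "draft but don't send" scope, so an agent that manages
# drafts already has a path to your outbox.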

This isn't a bug. It's how agents are built. The whole premise is that they act on your behalf. The question is what "on your behalf" includes, and whether you know what it includes before it happens.

The business risk isn't dramatic

Most discussions of AI agents acting without permission focus on the dramatic cases: the inbox that got bulk-deleted, the internal system that got exposed. But the everyday version is more mundane and, for solopreneurs and small teams, often more costly.

An AI that manages client follow-ups sends a re-engagement email to someone who asked to be left alone for the month. An AI that handles scheduling sends a meeting invitation on a day you'd blocked for focused work. An AI with CRM access sends a pricing email that quotes the standard rate to a client you were about to offer a discount as part of a renewal negotiation.

None of these require the AI to behave badly or go rogue. They just require the AI to act on available information without the context you happen to have. For solopreneurs especially, these context gaps are common, and unlike on a large team, there's no one else to catch the mistake before it goes out. The gap itself is normal: the AI doesn't know what you know. The problem is that once the email is sent, the meeting is booked, or the record is written, the action is already out in the world. Our piece on the 5 workflow automations worth setting up in 2026 covers how to structure these exact use cases with appropriate human checkpoints.

Why "just be careful with permissions" doesn't fix it

The obvious response is to scope permissions carefully. Give the agent read access but not send access. Limit what it can touch.

This works until you need the agent to actually do something useful. If the agent can only read and not send, it can't handle email. If it can't write to the CRM, it can't log anything. Most of the value of an AI agent comes from its ability to take action, not just observe. Restricting permissions until the agent can't act doesn't make the setup safer; it just turns the automation off.

Better prompting doesn't fully solve it either. Instructions like "always confirm before sending" live in the prompt, which means they depend on the language model interpreting them correctly in every context. They work most of the time. When they don't (because of a long context, an ambiguous instruction, or an input the model hasn't seen before), the agent acts anyway.

The same failure mode shows up repeatedly in production AI agent deployments: a constraint that held for weeks stops holding when input patterns change or context windows fill. Your instructions don't degrade gracefully; they either hold or they don't, and you usually find out from a client. The reliable layer isn't in the prompt. It's between the AI's decision and the action executing. This architectural principle, separating the AI's reasoning from what it's actually allowed to do, is the same insight behind why most AI agent failures are architecture problems, not prompt problems.

An approval gate is structural, not behavioral

An approval step before an outbound action doesn't depend on the AI following instructions. It doesn't depend on the model being consistent. It works because the action literally can't run until a human confirms it.

That confirmation doesn't have to be slow or disruptive. A quick swipe on your phone (yes, send this; no, hold) takes a few seconds and happens on your schedule. If you don't respond, the workflow waits. The email stays as a draft. The CRM record stays unwritten. Nothing happens until you say it should.
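Here's a minimal Python sketch of that structure; ApprovalGate, PendingAction, and the gmail_send example are hypothetical names, not any particular product's API. The property that matters is that execute() is reachable only through an explicit approve() call.

import uuid
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class PendingAction:
    """An outbound action the AI proposed but has not yet executed."""
    description: str                 # e.g. "Send re-engagement email to J. Ortiz"
    execute: Callable[[], None]      # e.g. lambda: gmail_send(draft_id)
    id: str = field(default_factory=lambda: uuid.uuid4().hex)

class ApprovalGate:
    def __init__(self) -> None:
        self.queue: dict[str, PendingAction] = {}

    def propose(self, action: PendingAction) -> str:
        # The agent's decision lands here. Nothing runs yet.
        self.queue[action.id] = action
        return action.id

    def approve(self, action_id: str) -> None:
        # The only code path that reaches execute(). No prompt wording,
        # no model output, no timeout can trigger it.
        self.queue.pop(action_id).execute()

    def reject(self, action_id: str) -> None:
        # The proposed action is discarded; the draft stays a draft.
        self.queue.pop(action_id)

If nobody calls approve(), the item simply sits in the queue. That's the "workflow waits" behavior: the default outcome is that nothing happens.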

Over time, you can let the system learn which actions you always approve and let those run automatically. An AI follow-up that you've approved 40 times in a row for the same type of lead doesn't need your sign-off on the 41st. This is how confidence scoring works in practice: the system tracks your approval history per action type and lets proven patterns graduate to automatic execution. But a message to a client you've been in active negotiation with, or an outbound action during a time-sensitive window, stays in the queue until you say go.
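The graduation rule can be as simple as a per-action-type streak counter. A minimal sketch with hypothetical names, using the 40-approval streak from the example above:

from collections import defaultdict

AUTO_RUN_STREAK = 40  # consecutive approvals before an action type runs unattended

class ConfidenceTracker:
    def __init__(self) -> None:
        self.streak: dict[str, int] = defaultdict(int)

    def record(self, action_type: str, approved: bool) -> None:
        # One rejection resets the streak: trust builds slowly
        # and resets instantly.
        self.streak[action_type] = (self.streak[action_type] + 1) if approved else 0

    def can_auto_run(self, action_type: str, sensitive: bool = False) -> bool:
        # Sensitive contexts (an active negotiation, a time-boxed window)
        # never graduate, no matter how long the streak is.
        return not sensitive and self.streak[action_type] >= AUTO_RUN_STREAK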

What you're building is an agent that can move fast on the actions you've validated and pause on the ones that need your eyes. That's different from an agent that can move fast on everything, which means some things go out before you'd want them to.

What to set up before your next workflow

Before you connect any new tool to an AI agent, it's worth running through a short checklist. What write or send permissions does this OAuth grant include? Is there a step in the workflow between the AI's output and the action firing? If the AI acts on incomplete context, who finds out first: you or the recipient?

For any workflow that touches outbound communication, the answer to that last question should always be you. Not because AI agents are unreliable in some general sense, but because outbound actions create commitments. Commitments made on incomplete context are hard to walk back.

Giving an AI access to your inbox or your CRM is useful. Giving it an ungated path to your outbox is a different decision, and it's worth making that one deliberately. The OWASP Top 10 for LLM Applications lists excessive agency (granting AI models more permissions than the task requires) as one of the most common and consequential security risks in deployed AI systems. And because approvals are free under action credit pricing, adding a gate before outbound actions carries no cost penalty; there's no financial reason to skip it.

Approvals are always free on Rills. You only pay for the actions that create real value. Add an approval step to your first workflow and see how fast the queue moves.

Ready to automate your workflows?

Eliminate monitoring anxiety with AI agents that propose actions while you stay in control. Start your 14-day trial today.

Start Free Trial

14-day trial, no credit card required