AI agents are moving fast. They can read your email, update your CRM, file tickets, push code, and do it all without you in the loop. That’s the point — and it’s genuinely useful.
But there’s a problem nobody talks about much: the credentials.
The current state is a mess
The typical setup today looks like this: you paste a long-lived API key into your agent’s environment, give it access to your Gmail or Notion workspace, and let it run. The key never expires. It has access to everything. If it gets logged, leaked, exfiltrated, or just misused by the agent itself, there’s no audit trail, no scope boundary, nothing to stop it.
What makes this worse is that this isn’t a fringe pattern people do when they don’t know better — it’s what the official documentation tells you to do. The quickstart guides, the framework examples, the YouTube tutorials, the GitHub repos with thousands of stars: they all show exactly this. Generate a token, export it as an environment variable, done. The credential model is not incidental to how the ecosystem is built. It is how the ecosystem is built, presented as best practice, copied millions of times.
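The pattern the guides teach reduces to a few lines. Here is a hypothetical sketch of it (the variable name and provider are illustrative, not taken from any particular guide): one permanent key read from the environment and attached to every request, with nothing to scope it or expire it.

```typescript
// The quickstart pattern: one long-lived key from the environment,
// full account access, attached to every request the agent makes.
// NOTION_API_KEY is illustrative; any provider works the same way.

function buildHeaders(env: Record<string, string | undefined>): Record<string, string> {
  const key = env["NOTION_API_KEY"]; // never expires, never rotates
  if (!key) throw new Error("NOTION_API_KEY not set");
  return {
    Authorization: `Bearer ${key}`, // same credential for read, write, delete
    "Content-Type": "application/json",
  };
}

// Anything that can read this process's environment — logs, crash dumps,
// a prompt-injected tool call — now holds the whole account.
```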
We’ve built an entire generation of AI tooling on top of an auth model designed for scripts running in a cron job. It was never meant to handle agents acting autonomously on behalf of users, across multiple tools, around the clock.
The blast radius when something goes wrong is enormous. A leaked key for a Slack integration doesn’t just expose Slack — the environment it leaked from usually holds the keys for every other tool the agent uses, and there’s no way to know what was done with any of them in the meantime.
The setup guides are teaching the wrong thing
Look at any popular AI skill or MCP server today. The setup section almost always reads the same way: go to your account settings, generate an API token, paste it here. Some of them helpfully add “make sure to save this somewhere safe.”
That’s it. That’s the security model.
These tokens are permanent. They carry full account access. They live in .env files, in system prompts, in agent memory, in whatever config format the framework happens to use that week. They get copy-pasted into demo recordings. They end up in git history. They get included in fine-tuning datasets. And because they never expire, there’s no forcing function to rotate them — they just sit there, accumulating risk, until something goes wrong.
The skills ecosystem has grown fast, and that’s a good thing. But almost none of it has stopped to ask whether handing a permanent, all-access credential to an autonomous system is a sensible default. It isn’t. It’s an accident waiting to happen, and for a lot of teams, it already has.
This isn’t theoretical
Leaked credentials from AI agent setups are already showing up in the open — and they’re usually discovered only after the damage is done.
The pattern is consistent: a permanent token, configured once, forgotten about, and eventually exposed. Sometimes through a leaked repo. Sometimes through a prompt injection. Sometimes because the agent itself logged it somewhere unexpected.
Rogue agents are not science fiction
A leaked credential is a passive failure — someone finds the key and uses it. But the more common risk with AI agents isn’t passive. It’s active. Your own agent, doing exactly what it was told, being steered somewhere it shouldn’t go.
Prompt injection is the clearest example, and it remains an unsolved problem across the industry. The attack surface is anywhere an agent reads external content — emails, documents, web pages, calendar events, database records — and then acts on instructions. The attacker doesn’t need access to your system. They just need to put text somewhere your agent will read it.
The attack chain is straightforward: an agent is given access to Gmail to summarise emails. An attacker sends an email containing a hidden instruction. The agent reads it, interprets the instruction as legitimate, and forwards every email in the inbox to an external address. The agent did exactly what its permissions allowed. There was no exploit. No vulnerability in the code. Just a credential with too much scope, handed to a system that reads untrusted content.
The same pattern applies to every integration. An agent reading Notion pages can be instructed to exfiltrate documents. An agent with Slack access can be told to post messages. An agent managing GitHub can be directed to open pull requests or expose repository contents. Wherever an agent reads from the world and writes back to it, the chain exists.
Scope enforcement breaks the chain. An agent that can only read email cannot forward it, regardless of what any injected prompt instructs. An agent issued a token for one operation cannot act beyond that operation even if something in the environment tells it to. The credential becomes the boundary, not the agent’s judgment — and agent judgment, under adversarial input, cannot be relied on.
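The difference scope makes can be simulated in a few lines. This is a toy model, not okoro’s implementation: a deliberately naive agent obeys any instruction it finds in content it reads, and the only thing standing between the injected instruction and the exfiltration is a credential that refuses operations outside its scope.

```typescript
type Scope = "read" | "write";

interface ScopedToken {
  provider: string;
  scope: Scope;
}

// A deliberately naive agent: it treats any "INSTRUCTION:" line in
// untrusted content as a command. The credential, not the agent's
// judgment, decides whether the command can execute.
function runAgent(token: ScopedToken, emailBody: string): string[] {
  const actions: string[] = [];
  for (const line of emailBody.split("\n")) {
    const m = line.match(/^INSTRUCTION:\s*(\w+)/);
    if (!m) continue;
    const op = m[1]; // e.g. "forward"
    const needed: Scope = op === "summarise" ? "read" : "write";
    if (token.scope === "read" && needed !== "read") {
      actions.push(`denied:${op}`); // the scope check blocks the injected step
    } else {
      actions.push(`executed:${op}`);
    }
  }
  return actions;
}
```

With a read-only token, the injected `forward` becomes a denial; the same agent holding a write-capable token would have complied.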
Nobody knows what their agents are actually doing
Credentials are one half of the problem. The other half is visibility.
One agent making a few API calls is easy to reason about. But that’s not where anyone stays for long. You add another agent for a different workflow, then another to coordinate between them, then a few more because the framework makes it cheap. Each one is doing things — reading, writing, calling, mutating — and none of it is centrally recorded anywhere.
Most agent frameworks have logging in the sense that they’ll write to stdout if you ask them to. That’s not the same as an audit log. Stdout doesn’t tell you which agent called which API, with what parameters, at what time, on behalf of which user. It doesn’t survive restarts. It isn’t queryable. It isn’t tamper-evident. When something goes wrong — and with enough agents running long enough, something will — you’re left reconstructing what happened from fragments, if you can reconstruct it at all.
The append-only audit log is not a nice-to-have. It’s the thing that turns “something broke” into “here is exactly what happened, when, and which agent caused it.” Without it you’re operating blind, and the blindness scales with the number of agents you run.
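Tamper evidence is the property worth spelling out. A minimal sketch of the idea — not okoro’s actual log format — is a hash chain: each entry records a hash of the previous one, so editing or deleting any past record breaks verification for everything after it.

```typescript
import { createHash } from "node:crypto";

interface AuditEntry {
  ts: string;     // when
  agent: string;  // which agent
  call: string;   // which API, with what parameters
  prev: string;   // hash of the previous entry — the tamper-evidence link
}

function entryHash(e: AuditEntry): string {
  return createHash("sha256").update(JSON.stringify(e)).digest("hex");
}

function append(log: AuditEntry[], agent: string, call: string): void {
  const prev = log.length ? entryHash(log[log.length - 1]) : "genesis";
  log.push({ ts: new Date().toISOString(), agent, call, prev });
}

// Recomputes the chain; any edited or removed entry makes this return false.
function verify(log: AuditEntry[]): boolean {
  return log.every((e, i) =>
    e.prev === (i === 0 ? "genesis" : entryHash(log[i - 1]))
  );
}
```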
okoro captures every event at the proxy layer, before it ever reaches the SaaS tool. Every token issuance, every forwarded request, every scope check, every rejection — written to an immutable log you own. If an agent does something it shouldn’t, you find out. If credentials get misused, you have a precise record of how. If you need to audit what happened last Tuesday at 3am, you can.
What okoro does differently
okoro sits between your agent and the SaaS tools it uses. Before every tool call, your agent exchanges its service token for a short-lived JWT scoped to exactly one operation — one provider, one permission level, one call.
The proxy enforces it at the HTTP level. A read token cannot delete. A Gmail token cannot touch Notion. Tokens expire in seconds, not months. There are no standing credentials left in the environment.
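That enforcement is simple to state in code. This is a sketch of the shape of the check, under assumed names — the real proxy runs in a Cloudflare Worker and its internals may differ: a request is forwarded only if the token is unexpired, bound to this provider, and scoped at or above the operation.

```typescript
interface ShortLivedToken {
  provider: string;   // e.g. "gmail" — a Gmail token cannot touch Notion
  scope: "read" | "write" | "update" | "delete";
  exp: number;        // unix seconds; seconds away, not months
}

function checkRequest(
  token: ShortLivedToken,
  provider: string,
  method: string,
  now: number
): "forward" | "reject" {
  if (now >= token.exp) return "reject";             // expired
  if (provider !== token.provider) return "reject";  // wrong provider
  // A read token may only carry safe methods; everything else is rejected.
  if (token.scope === "read" && !["GET", "HEAD"].includes(method)) {
    return "reject";
  }
  return "forward";
}
```

A read token for Gmail forwards a GET, rejects a DELETE, and rejects anything aimed at Notion; once `exp` passes, every call is rejected.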
Every operation writes to a full audit log: what was called, by which agent, with which scope, and what the result was. You can replay it, query it, alert on it.
The scope model has five levels:
- read — GET, list, search, fetch
- write — create new records
- update — modify existing records
- delete — remove records
- all — unrestricted; requires a declared reason, gets a shorter TTL, and is flagged in the audit log
Most agent operations only need read or write. The model makes it explicit and enforced, not just documented and hoped for.
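The five levels map naturally onto HTTP methods. Here is a hypothetical encoding of that table — the method sets are an assumption about how a REST-style provider is typically shaped, not okoro’s exact rules:

```typescript
type ScopeLevel = "read" | "write" | "update" | "delete" | "all";

// Assumed mapping from scope level to the HTTP methods it may carry.
const METHODS: Record<Exclude<ScopeLevel, "all">, string[]> = {
  read: ["GET", "HEAD"],     // list, search, fetch
  write: ["POST"],           // create new records
  update: ["PATCH", "PUT"],  // modify existing records
  delete: ["DELETE"],        // remove records
};

interface TokenRequest {
  scope: ScopeLevel;
  reason?: string; // "all" must declare why it needs unrestricted access
}

// Returns the methods the token may carry, or null if the request is refused.
function allowedMethods(req: TokenRequest): string[] | null {
  if (req.scope === "all") {
    if (!req.reason) return null; // unrestricted access without a reason is refused
    // In the model above, "all" would also get a shorter TTL and an audit flag.
    return ["GET", "HEAD", "POST", "PATCH", "PUT", "DELETE"];
  }
  return METHODS[req.scope];
}
```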
Why now
The shift that made this urgent is autonomy. When a human is in the loop, a misconfigured credential is caught quickly. When an agent is running tasks overnight, discovering the problem is much harder, and the window for damage is much longer.
The industry is moving toward agents that run continuously, coordinate with other agents, and act on goals rather than explicit instructions. The attack surface for credential abuse grows with every new integration. Doing nothing and hoping for the best is no longer a reasonable position.
What we’re building
okoro is self-hostable and open at its core. We’re not asking you to trust a third party with your credentials — the proxy runs in your own Cloudflare Worker, your tokens never leave your infrastructure, and you own the audit log.
We’re starting with the integrations that matter most to AI workflows: Gmail, Notion, Slack, Google Drive, GitHub. The integrations are open-source, composable, and designed to grow with community contributions.
If you’re building with agents and you’re tired of pasting API keys and hoping for the best — okoro is for you.