Securing the New Digital Workers
0x41434f
My work across IT, security, privacy, and compliance gives me a clear view of how companies actually function. I see the tools people struggle with, the constant flow of data between systems, and the ongoing effort to keep things efficient and secure.
Like many others, I’ve been captivated by the rapid progress of AI, especially with large language models. Their potential to smooth out some of the rough edges of internal operations is hard to ignore. That’s what led me to start building MinuteWork, an AI copilot designed to reduce the friction of internal support. Imagine asking a question about HR policy or requesting IT help directly inside Slack or a web chat, and an AI agent understands, finds the answer, or even completes the task by connecting securely to internal systems. That’s the goal. I want internal support to feel effortless.
As I began prototyping, the scale of the average company’s software environment became more obvious. Okta’s 2025 Businesses at Work report confirmed what I was already seeing. The average company uses 101 different SaaS applications. In the US, that number jumps to 114. Startups might use around 42, but large enterprises often deal with over 240. While suites like Microsoft 365 and Google Workspace are common, everything else varies a lot. Startups tend to prefer Slack or Figma. Large enterprises use tools like Workday or ServiceNow. And it doesn’t stop at licensed software. Employees often bring in free tools on their own just to stay productive. This shadow IT adds even more sprawl. Just figuring out which employee uses which app is a challenge for IT and security teams. It usually takes specialized tools because the old methods like network scans or device agents don’t go far enough.
Now imagine adding AI agents like MinuteWork into this mix. These agents will need to connect to tools using Application Programming Interfaces (APIs). The IT agent might need to work with Jira or ServiceNow. The HR agent could query Workday. The finance agent might pull expense data from Concur. Eventually, you’ll want a security agent interacting with SIEM tools or endpoint managers, and a cloud ops agent dealing with AWS or Azure. These agents are not just logging in. They’re autonomous systems making thousands of API calls, managing tokens, and performing real actions inside core systems. This is a whole new layer on top of existing SaaS sprawl. I call it agent sprawl.
I once had to trace an integration key from a bot that had supposedly been turned off six months earlier. It was still active and had write access. These things linger.
This brings my security background right back to the center. It also reminded me of something Andrej Karpathy said about how LLMs spread. He pointed out that LLMs give huge benefits to individuals, but large organizations are slower to adopt them. From my experience, he’s right.
- The models are versatile but shallow. One bad hallucination can cause real damage.
- Integrating them into companies means dealing with legacy systems, compliance, security rules, legal reviews, and more.
- Organizations resist change. Bureaucracy slows everything down.
And slow, uneven adoption makes the security question even more pressing: how do we manage hundreds or even thousands of agent connections in an environment this complex?
The basics of identity and access management still apply. We need strong authentication, which for agents means managing API keys and OAuth tokens safely. We need strict authorization based on least privilege. Your HR agent should never be able to access IT systems. We need logging and monitoring focused on programmatic access, not just human logins. And we need access reviews that include agents, not just employees.
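As a minimal sketch of that least-privilege principle, here is what a deny-by-default authorization check for agent identities might look like. The agent names and scope strings are illustrative assumptions, not from any real product:

```python
# Hypothetical sketch: deny-by-default, least-privilege authorization
# for agent identities. Agent IDs and scope names are illustrative.

AGENT_SCOPES = {
    "hr-agent": {"workday:read"},
    "it-agent": {"jira:read", "jira:write", "servicenow:read"},
}

def authorize(agent_id: str, requested_scope: str) -> bool:
    """An agent may only use scopes explicitly granted to it; unknown agents get nothing."""
    return requested_scope in AGENT_SCOPES.get(agent_id, set())

# The HR agent can query Workday, but any attempt to touch IT systems fails:
assert authorize("hr-agent", "workday:read")
assert not authorize("hr-agent", "jira:write")
assert not authorize("unknown-agent", "workday:read")
```

The important design choice is the default: an agent or scope that nobody registered gets denied, rather than falling through to some permissive baseline.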
Security tools are already a big spending area for companies. But are the current tools designed to handle non-human identities?
The nature of these agents also introduces new risks. As Anthropic and others have written, these systems have a “universal interface” because they understand natural language. That makes them more flexible, but also potentially easier to manipulate. They have “universal capabilities,” meaning they can be asked to do a lot more than they were built for, including things that might be harmful. If multiple agents share the same foundation model, a single issue could affect many systems. And agents can fail in odd ways, like getting stuck in loops.
If we imagine a future where agents talk to other agents, things get even riskier. One failure could spread across systems fast. That’s why lifecycle management matters. We already struggle to deactivate unused employee accounts or old API keys. Now add agents to the mix. How do we detect when they go stale? How do we revoke all their access? Do we need systems that remind humans to clean up unused agents?
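The staleness problem from my integration-key story is detectable if last-use timestamps are tracked. A rough sketch, assuming we have token records with a `last_used` field and picking an arbitrary 90-day threshold:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: flag agent credentials that have gone stale.
# The token record shape and the 90-day threshold are assumptions.

STALE_AFTER = timedelta(days=90)

def find_stale(tokens: list, now: datetime) -> list:
    """Return IDs of tokens whose last use is older than the staleness threshold."""
    return [t["id"] for t in tokens if now - t["last_used"] > STALE_AFTER]

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
tokens = [
    # A bot "turned off" months ago, with its key still live:
    {"id": "old-bot-key", "last_used": datetime(2024, 11, 1, tzinfo=timezone.utc)},
    {"id": "hr-agent-token", "last_used": datetime(2025, 5, 20, tzinfo=timezone.utc)},
]
assert find_stale(tokens, now) == ["old-bot-key"]
```

Flagging is the easy half; the workflow still needs a human owner to confirm revocation, which is where the governance pieces below come in.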
That’s why it was encouraging to see Workday introduce the idea of an Agent System of Record. Their platform focuses on managing the lifecycle of AI agents, including onboarding, offboarding, permissions, cost control, and compliance. It shows that companies are starting to see agents not just as tools, but as a new kind of digital workforce. One that needs proper governance.
If I were sketching what the industry needs to build next, a kind of Agent Security and Governance Hub, it would look like this:
First, visibility and discovery. We need to find agents, not just the platforms. That means an agent registry and tools that can monitor API gateways, cloud logs, and identity platforms to detect both registered and shadow agents.
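One way to picture that first piece: a registry of known agents, plus a diff against the client IDs actually observed in API gateway logs. Everything here is an illustrative assumption about data shapes, not a real discovery tool:

```python
from dataclasses import dataclass

# Hypothetical sketch: a minimal agent registry, with shadow-agent detection
# by diffing registered client IDs against clients seen in API gateway logs.

@dataclass
class AgentRecord:
    client_id: str
    owner: str          # the human or team accountable for this agent
    description: str

REGISTRY = {
    "minutework-it": AgentRecord("minutework-it", "it-team", "IT support copilot"),
}

def shadow_agents(observed_client_ids: set) -> set:
    """Client IDs making API calls that are absent from the registry."""
    return observed_client_ids - REGISTRY.keys()

# "mystery-bot-7" is calling our APIs but was never registered:
assert shadow_agents({"minutework-it", "mystery-bot-7"}) == {"mystery-bot-7"}
```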
Second, access management. We need a credential vault for agent keys and tokens, integrated with secrets management. Permission analysis tools to check for least privilege. Automated workflows that detect inactivity, trigger reviews, and revoke access when needed.
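The permission-analysis idea reduces to a simple set difference: compare the scopes an agent is granted against the scopes it has actually exercised (say, over 90 days of audit logs), and treat the excess as revocation candidates. A sketch, with made-up scope names:

```python
# Hypothetical sketch: least-privilege analysis. Scopes granted but never
# used are candidates for revocation. Scope names are illustrative.

def excess_scopes(granted: set, used: set) -> set:
    """Scopes the agent holds but has never exercised."""
    return granted - used

granted = {"workday:read", "workday:write", "concur:read"}
used = {"workday:read"}  # e.g. derived from audit logs

# The agent was over-provisioned with write and Concur access it never needed:
assert excess_scopes(granted, used) == {"workday:write", "concur:read"}
```

In practice the "used" set is the hard part to compute reliably, which is why deep audit logging (next) is a prerequisite for this analysis.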
Third, auditing and observability. Deep traceability to connect agent actions with API logs and SaaS logs. Tools that analyze behavior for security events. And a way to feed those insights back into improving policy and detection.
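Traceability, concretely, means being able to join an agent's action log with downstream SaaS audit logs. If both sides carry a shared request ID, the join is straightforward; the log shapes below are assumptions for illustration:

```python
# Hypothetical sketch: correlate agent actions with SaaS audit events via a
# shared request_id. Both log shapes are assumptions.

def correlate(agent_log: list, saas_log: list) -> list:
    """Pair each agent action with the SaaS audit events sharing its request_id."""
    by_request = {}
    for event in saas_log:
        by_request.setdefault(event["request_id"], []).append(event)
    return [
        (action, event)
        for action in agent_log
        for event in by_request.get(action["request_id"], [])
    ]

agent_log = [{"agent": "hr-agent", "action": "lookup_policy", "request_id": "r-1"}]
saas_log = [{"request_id": "r-1", "system": "workday", "api": "GET /policies"}]

pairs = correlate(agent_log, saas_log)
assert len(pairs) == 1 and pairs[0][1]["system"] == "workday"
```

The real engineering work is propagating that request ID across every hop, but once it exists, behavioral analysis and incident response both become tractable.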
Fourth, governance and policy. A policy engine that defines what agents are allowed to do. Clear accountability by linking each agent to a human owner or team. And reminders to follow up on exceptions, reviews, and flagged issues.
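To make the accountability point concrete, a toy policy engine might link every agent to a human owner and surface that owner in every deny decision, so exceptions always have someone to escalate to. Agent names, actions, and the decision strings are all hypothetical:

```python
from dataclasses import dataclass

# Hypothetical sketch: a tiny policy engine. Every agent is linked to an
# accountable human owner; actions are denied unless explicitly allowed.

@dataclass
class Policy:
    agent_id: str
    owner: str              # accountable human or team
    allowed_actions: set

POLICIES = {
    "finance-agent": Policy("finance-agent", "finance-ops", {"concur:read_expenses"}),
}

def decide(agent_id: str, action: str) -> str:
    policy = POLICIES.get(agent_id)
    if policy is None:
        return "deny: unregistered agent"   # no policy, no access
    if action not in policy.allowed_actions:
        return f"deny: not allowed (escalate to {policy.owner})"
    return "allow"

assert decide("finance-agent", "concur:read_expenses") == "allow"
assert decide("finance-agent", "concur:submit_report").startswith("deny")
assert decide("unknown-agent", "anything") == "deny: unregistered agent"
```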
Fifth, connection standards. Managing hundreds of custom API connections is messy. If standards like the Model Context Protocol take off, maybe we can apply governance consistently at that level too.
Building AI agents like MinuteWork with tools like Agno is exciting. The components are clear, the Streamlit test UI is helpful, and the Next.js template makes it easy to ship. But my security mindset won’t let me stop there. These systems are powerful and autonomous, and they’re being added to already complex environments. We need to approach this with care, with a mindset shaped by everything we’ve learned about identity, access, and the messiness of real software stacks.
If you're building with AI agents and need help securing them, my team and I can help.
We offer services to:
- Strategize & Architect: Find high-value use cases, pick the right frameworks (like Agno), and design for scale.
- Build & Secure: Implement robust agents with strong identity, access, and credential controls.
- Integrate & Evaluate: Connect agents to your systems safely, and define evals to keep them reliable and safe.
If you're looking for help grounded in real-world agent development and enterprise security, let’s talk. Our consultation engagements start at $5,000.
We’re excited about this space and want to help organizations navigate it safely.