
By Itamar Apelblat, CEO and Co-Founder, Token Security
Not long ago, AI deployments inside the enterprise meant copilots drafting emails or summarizing documents. Today, AI agents are provisioning infrastructure, answering customer support tickets, triaging alerts, approving transactions, writing production code, and much more. They are no longer passive assistants. They are operators within the enterprise.
For CISOs, this shift creates a familiar but amplified problem: access.
Every AI agent authenticates to systems and services. It uses API keys, OAuth tokens, cloud roles, or service accounts. It reads data, writes configurations, and calls downstream tools. In other words, it behaves exactly like an identity, because it is one.
Yet in many organizations, AI agents are not governed as first-class identities. They inherit the privileges of their creators. They operate under over-scoped service accounts. They are granted broad access just to make sure things work. Once deployed, they often evolve faster than the controls around them.
That is the growing blind spot in AI security.
The first step toward closing it is what we call identity-first security for AI: recognizing that every autonomous agent must be governed, audited, and attested just like a human user or machine workload. That means unique identities, defined roles, clear ownership, lifecycle management, access control, and auditability.
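As a minimal sketch of what such a first-class identity record might look like, the snippet below models the attributes listed above: a unique identity, a defined role, a named owner, a documented mission, and a lifecycle state. All names and fields are illustrative assumptions, not a real product schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from uuid import uuid4


class LifecycleState(Enum):
    """Hypothetical lifecycle states for an agent identity."""
    PROVISIONED = "provisioned"
    ACTIVE = "active"
    SUSPENDED = "suspended"
    RETIRED = "retired"


@dataclass
class AgentIdentity:
    """A first-class identity record for an AI agent (illustrative)."""
    name: str
    role: str                # a defined role, not one inherited from a human
    owner: str               # the accountable human or team
    approved_mission: str    # the documented purpose of the agent
    agent_id: str = field(default_factory=lambda: str(uuid4()))
    state: LifecycleState = LifecycleState.PROVISIONED
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def retire(self) -> None:
        """Lifecycle management: a retired agent must never authenticate again."""
        self.state = LifecycleState.RETIRED
```

Because each agent carries its own `agent_id`, owner, and mission, access decisions and audit trails can reference the agent itself rather than the credentials of whoever built it.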
But here's the hard truth: identity alone is no longer sufficient.
Traditional identity and access management (IAM) answers a simple question: Who is requesting access? In a human-driven world, that was usually enough. Users had roles and job functions. Services had defined scopes. Workflows were relatively predictable.
AI agents change that equation.
They are dynamic by design. They interpret inputs, plan actions, and call tools based on context. An AI agent that starts with the mission of generating a quarterly report might, if prompted or misdirected, attempt to access systems unrelated to reporting. An infrastructure agent designed to remediate vulnerabilities might pivot to modifying configurations in ways that exceed its original scope.
When that happens, identity-based controls won't necessarily stop it.
Traditional IAM assumes determinism. A role is granted because a user or service performs a defined function. The scope of action is predictable.
AI agents break that assumption. Their goal may be fixed, but the path they take to achieve it is fluid. They reason, chain tools together, and explore alternative actions.
Static roles were never designed for actors that decide how to act in real time. If the agent's role allows the action, access is granted, even when the action no longer aligns with the reason the agent was deployed in the first place.
This is where intent-based permissioning becomes essential.
If identity answers who, intent answers why.
Intent-based permissions evaluate whether an agent's declared mission and runtime context justify activating its privileges at that moment. Access is no longer just a static mapping between identity and role. It becomes conditional on purpose.
Consider an AI agent responsible for deploying code. In a traditional model, it would have standing permissions to modify infrastructure. In an intent-aware model, those privileges activate only when the deployment is tied to an approved pipeline event and change request. If the same agent attempts to modify production systems outside that context, the privileges simply don't activate.
The identity hasn't changed, but the intent, and therefore the authorization, has.
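The deployment example above can be sketched as a two-stage authorization check: an identity check (does the role hold the privilege at all?) followed by an intent check (does the runtime context tie the action to an approved pipeline event and change request?). The policy tables and function names here are illustrative assumptions, not a real product API.

```python
# Hypothetical policy tables; entries are illustrative only.
ROLE_PERMISSIONS = {
    "deploy-agent": {"modify_infrastructure"},
}

# For each privileged action, the context signals that must be present
# before the privilege activates.
INTENT_REQUIREMENTS = {
    "modify_infrastructure": {"pipeline_event", "change_request"},
}


def authorize(role: str, action: str, context: dict) -> bool:
    """Grant access only when identity (role) AND intent (context) align."""
    # Identity check: the role must hold this privilege in the first place.
    if action not in ROLE_PERMISSIONS.get(role, set()):
        return False
    # Intent check: every required context signal must be present and truthy.
    required = INTENT_REQUIREMENTS.get(action, set())
    return all(context.get(key) for key in required)
```

With this sketch, a deployment tied to a pipeline event and change request activates the privilege, while the same agent acting outside that context is denied, even though its role and identity are unchanged:

```python
ctx = {"pipeline_event": "build-1842", "change_request": "CHG-2071"}
authorize("deploy-agent", "modify_infrastructure", ctx)   # permitted
authorize("deploy-agent", "modify_infrastructure", {})    # denied: no intent
```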
This combination addresses two of the most common failure modes we're seeing in AI deployments.
First, privilege inheritance. Developers often test agents using their own elevated credentials. Those privileges persist in production environments, creating unnecessary exposure. Treating agents as distinct identities helps eliminate this bleed-through.
Second, mission drift. AI agents can pivot mid-run based on prompts, integrations, or adversarial input. Intent-based controls prevent that pivot from becoming unauthorized access.
For CISOs, the value isn't just tighter control. It's governance that scales.
AI agents interact with thousands of APIs, SaaS platforms, and cloud resources. Trying to manage risk by enumerating every permissible action quickly becomes unmanageable. Policy sprawl increases complexity, and complexity erodes assurance.
An intent-based model simplifies oversight. Governance shifts from managing thousands of discrete action rules to managing defined identity profiles and approved intent boundaries.
Policy reviews focus on whether an agent's mission is appropriate, not whether every individual API call is accounted for in isolation.
Audit trails become more meaningful as well. When an incident occurs, security teams can determine not only which agent performed an action, but what intent profile was active and whether the action aligned with its approved mission.
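An intent-aware audit entry could be as simple as the sketch below: alongside the agent and the action, it records which intent profile was active and whether the action aligned with the approved mission. The field names are illustrative assumptions, not a real log schema.

```python
import json
from datetime import datetime, timezone


def audit_record(agent_id: str, action: str, intent_profile: str,
                 mission_aligned: bool) -> str:
    """Emit an audit entry that ties an action to the intent profile
    active when it ran, not just to the identity that performed it."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "intent_profile": intent_profile,
        "mission_aligned": mission_aligned,
    }
    return json.dumps(entry)
```

During an investigation, filtering such records on `mission_aligned` immediately surfaces the actions that exceeded an agent's approved purpose.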
That level of traceability is increasingly important for regulatory scrutiny and board-level accountability.
The broader issue is this: AI agents are accelerating faster than traditional access control models were designed to handle. They operate at machine speed, adapt to context, and orchestrate across systems in ways that blur the lines between application, user, and automation.
CISOs cannot afford to treat them as just another workload.
The shift to agentic AI systems requires a shift in security thinking. Every AI agent must be treated as an accountable identity. And that identity must be constrained not only by static roles, but by declared purpose and operational context.
The path forward is clear. Inventory your AI agents. Assign them unique, lifecycle-managed identities. Define and document their approved missions. And enforce controls that activate privileges only when identity, intent, and context align.
Autonomy without governance is a major risk. Identity without intent is incomplete.
In the agentic era, knowing who is acting is necessary. Ensuring they are acting for the right reason is what makes agentic AI secure.
If you're securing agentic AI, we'd love to show you a technical demo of Token and hear more about what you're working on.
Sponsored and written by Token Security.
