As agentic AI becomes more common across industries, companies face a new cybersecurity challenge: how to verify and secure systems that operate independently, make decisions on their own, and appear or disappear without human involvement.
Consider a financial firm where an AI agent activates early in the morning to analyse trading data, detect unusual patterns, and prepare reports before the markets open. Within minutes, it connects to several databases, completes its task, and shuts down automatically. This type of autonomous activity is growing rapidly, but it raises serious concerns about identity and trust.
“Many organisations are deploying agentic AI without fully thinking about how to manage the certificates that confirm these systems’ identities,” says Chris Hickman, Chief Security Officer at Keyfactor.
“The scale and speed at which agentic AI functions are far beyond what most companies have ever managed.”
Unlike human users, who log in with passwords or hardware-bound devices, AI agents are ephemeral and adaptable: they can spin up, perform complex jobs, and disappear without any manual authentication.
This fluid nature makes it difficult to manage digital certificates, which are essential for maintaining trusted communication between systems.
Greg Wetmore, Vice President of Product Development at Entrust, explains that AI agents act like both humans and machines.
“When an agent logs into a system or updates data, it behaves like a human user. But when it interacts with APIs or cloud platforms, it looks more like a software component,” he says.
This dual behaviour requires a flexible security model. AI agents need stable certificates that prove their identity and temporary credentials that control what they are allowed to do.
These permissions must be revocable in real time if the system behaves unexpectedly.
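In rough outline, the model Wetmore describes can be sketched in a few lines of Python. Nothing below comes from any vendor's product; the names are invented to show the shape of the idea: a stable identity stands in for the agent's certificate, while a short-lived grant carries its permissions and is re-checked, and therefore revocable, on every access.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative only: names and structures are invented, not any vendor's API.

@dataclass(frozen=True)
class AgentIdentity:
    """Stable identity; in practice backed by an X.509 certificate."""
    agent_id: str
    cert_fingerprint: str  # pins grants to one specific certificate

@dataclass
class AuthorisationGrant:
    """Ephemeral permission, kept separate from identity and revocable at will."""
    identity: AgentIdentity
    scopes: set[str]
    expires_at: datetime

revoked_agents: set[str] = set()  # fed by monitoring in a real deployment

def issue_grant(identity: AgentIdentity, scopes: set[str],
                ttl: timedelta = timedelta(minutes=15)) -> AuthorisationGrant:
    """Short-lived by default, so stale permissions age out on their own."""
    return AuthorisationGrant(identity, scopes,
                              datetime.now(timezone.utc) + ttl)

def is_allowed(grant: AuthorisationGrant, scope: str) -> bool:
    """Re-checks revocation and expiry on every call, enabling real-time cut-off."""
    if grant.identity.agent_id in revoked_agents:
        return False
    if datetime.now(timezone.utc) >= grant.expires_at:
        return False
    return scope in grant.scopes
```

The point of the split is that pulling an agent's permissions takes effect immediately, without ever reissuing its identity certificate.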
The challenge becomes even greater when AI agents begin interacting with each other. Without proper cryptographic controls, one system could impersonate another.
“Once agents start sharing information, certificate management becomes absolutely essential,” Hickman adds.
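Concretely, the standard control here is mutual TLS: each agent presents a certificate issued by the organisation's own certificate authority and refuses connections from peers that cannot do the same. A minimal sketch using Python's standard ssl module follows; the file paths are placeholders, not real artefacts, and in production these certificates would come from the organisation's PKI and be rotated automatically.

```python
import ssl

def agent_server_context() -> ssl.SSLContext:
    """Server side of agent-to-agent traffic: demand and verify a client certificate."""
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ctx.load_cert_chain(certfile="agent_a.pem", keyfile="agent_a.key")
    ctx.load_verify_locations(cafile="internal_ca.pem")
    ctx.verify_mode = ssl.CERT_REQUIRED  # no valid certificate, no connection
    return ctx

def agent_client_context() -> ssl.SSLContext:
    """Client side: present our own certificate and verify the peer's."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                     cafile="internal_ca.pem")
    ctx.load_cert_chain(certfile="agent_b.pem", keyfile="agent_b.key")
    return ctx
```

With both contexts enforced, an agent that cannot prove possession of a CA-issued certificate simply cannot open a channel, which is what closes off impersonation.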
Complicating matters further, three major shifts are hitting cryptography at once: maximum public TLS certificate lifespans are being cut to 47 days, post-quantum algorithms are nearing adoption, and AI automation is multiplying the number of certificates organisations must manage.
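A 47-day lifespan makes manual renewal untenable: expiry has to be discovered and acted on by software, not a calendar reminder. As a rough illustration (the directory layout and two-week renewal window are assumptions), a scheduled job might scan for certificates nearing expiry with the widely used Python cryptography library:

```python
from datetime import datetime, timedelta, timezone
from pathlib import Path

from cryptography import x509

RENEW_WINDOW = timedelta(days=14)  # illustrative threshold

def certs_needing_renewal(cert_dir: str) -> list[str]:
    """Scan a directory of PEM certificates and flag anything nearing expiry."""
    due = []
    for path in Path(cert_dir).glob("*.pem"):
        cert = x509.load_pem_x509_certificate(path.read_bytes())
        expires = cert.not_valid_after_utc  # requires cryptography >= 42
        if expires - datetime.now(timezone.utc) < RENEW_WINDOW:
            due.append(f"{path.name} expires {expires:%Y-%m-%d}")
    return due
```

In a real pipeline the flagged certificates would feed straight into automated reissuance rather than a report.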
“We’re seeing huge changes in cryptography after decades of stability,” Hickman notes. “It’s a lot to handle for many teams.”
Keyfactor’s research reveals that almost half of all organisations have not begun preparing for post-quantum encryption, and many still lack a clearly defined role for managing cryptography.
This lack of governance poses serious risks, especially when certificate management is handled by IT departments without deep security expertise.
Still, experts believe the situation can be managed with existing tools.
“Agentic AI fits well within established security models such as zero trust,” Wetmore explains. “The technology to issue strong identities, enforce policies, and limit access already exists.”
According to Sebastian Weir, AI Practice Leader at IBM UK and Ireland, many companies are now focusing on building security into AI projects from the start.
“While AI development can be up to four times faster, the first version of code often contains many more vulnerabilities. Organisations are learning to consider security early instead of adding it later,” he says.
Financial institutions are among those leading the shift, building identity systems that blend the stability of long-term certificates with the flexibility of short-term authorisations.
Hickman points out that Public Key Infrastructure (PKI) already supports similar scale in IoT environments, managing billions of certificates worldwide.
He adds, “PKI has always been about scale. The same principles can support agentic AI if implemented properly.”
The real focus now, according to experts, should be on governance and orchestration.
“Scalability depends on creating consistent and controllable deployment patterns. Orchestration frameworks and governance layers ensure transparency and auditability,” says Weir.
Poorly managed AI agents can cause significant damage. Some have been known to delete vital data or produce false financial information due to misconfiguration.
This makes it critical for companies to monitor agent behaviour closely and apply zero-trust principles where every interaction is verified.
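In code, zero trust for agents means the authorisation check moves into every call path rather than happening once at login. A deliberately toy Python sketch, with an invented in-memory policy table standing in for a real policy engine:

```python
from functools import wraps

GRANTS = {("trader-agent-7", "reports:write")}  # invented example data

def check_grant(agent_id: str, scope: str) -> bool:
    """Stand-in for a live policy and revocation lookup."""
    return (agent_id, scope) in GRANTS

def requires_scope(scope: str):
    """Decorator that verifies authorisation on every call, never caching a 'yes'."""
    def decorator(func):
        @wraps(func)
        def wrapper(agent_id: str, *args, **kwargs):
            if not check_grant(agent_id, scope):
                raise PermissionError(f"{agent_id} lacks scope {scope!r}")
            return func(agent_id, *args, **kwargs)
        return wrapper
    return decorator

@requires_scope("reports:write")
def publish_report(agent_id: str, report: str) -> None:
    print(f"{agent_id} published {report}")

publish_report("trader-agent-7", "overnight anomaly summary")  # allowed
# publish_report("unknown-agent", "...")  # would raise PermissionError
```

Because the table is consulted on each call, revoking a grant takes effect on the very next action the agent attempts.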
Securing agentic AI does not require reinventing cybersecurity. It requires applying proven methods to a new, fast-moving environment.
“We already know that certificates and PKI work. An AI agent can have one certificate for identity and another for authorisation. The key is in how you manage them,” Hickman concludes.
As businesses accelerate their use of AI, the winners will be those that design trust into their systems from the beginning. By investing in certificate lifecycle management and clear governance, they can ensure that every AI agent operates safely and transparently. Those who ignore this step risk letting their systems act autonomously in the dark, without the trust and control that modern enterprises demand.