
Exabeam Extends Proven Insider Threat Detection to AI Agents with Google Cloud

BROOMFIELD, Colo. & FOSTER CITY, Calif. – September 9, 2025 – At Google Cloud’s Security Innovation Forum, Exabeam, a global leader in intelligence and automation that powers security operations, today announced the integration of Google Agentspace and Google Cloud’s Model Armor telemetry into the New-Scale Security Operations Platform. The integration gives security teams the ability to monitor, detect, and respond to threats from AI agents acting as digital insiders, with visibility into the behavior of autonomous agents that reveals intent, spots drift, and quickly identifies compromise.

Recent findings in the “From Human to Hybrid: How AI and the Analytics Gap are Fueling Insider Risk” study from Exabeam reveal that a vast majority (93%) of organizations worldwide have either experienced or anticipate a rise in insider threats driven by AI, and 64% rank insiders as a higher concern than external threat actors. As AI agents perform tasks on behalf of users, access sensitive data, and make independent decisions, they introduce a new class of insider risk: digital actors operating beyond the scope of traditional monitoring. Just as insider threats have traditionally been classified as malicious, negligent, and compromised, AI agents now bring their own risks: malfunctioning, misaligned, or outright subverted.

SIEM and XDR solutions that are unable to baseline and learn normal behavior lack the intelligence necessary to identify when agents go rogue. As a pioneer in machine learning and behavioral analytics, Exabeam addresses this critical gap by extending its proven capabilities to monitor both human and AI agent activity. By integrating telemetry from Google Agentspace and Google Cloud’s Model Armor into the New-Scale Platform, Exabeam is expanding the boundaries of behavioral analytics and setting a new standard for what modern security platforms must deliver.
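
The baselining approach described here can be illustrated with a minimal sketch: learn what normal activity looks like for each agent and each action, then flag large deviations or never-before-seen actions. The code below is a hypothetical illustration of the general technique, not Exabeam’s implementation; the event fields, thresholds, and agent names are invented.

```python
# Hypothetical sketch of behavioral baselining over agent telemetry.
# Not Exabeam's implementation; fields, names, and thresholds are invented.
from collections import defaultdict
from statistics import mean, stdev

def build_baseline(history):
    """history: iterable of (agent_id, action, daily_count) observations."""
    per_key = defaultdict(list)
    for agent_id, action, count in history:
        per_key[(agent_id, action)].append(count)
    baseline = {}
    for key, counts in per_key.items():
        mu = mean(counts)
        sigma = stdev(counts) if len(counts) > 1 else 0.0
        baseline[key] = (mu, sigma)
    return baseline

def score_event(baseline, agent_id, action, count, z_threshold=3.0):
    """Return a finding when observed activity deviates from the learned baseline."""
    key = (agent_id, action)
    if key not in baseline:
        return f"{agent_id}: never-before-seen action '{action}'"
    mu, sigma = baseline[key]
    if sigma == 0.0:
        return None if count == mu else f"{agent_id}: '{action}' broke a constant baseline of {mu}"
    z = (count - mu) / sigma
    return f"{agent_id}: '{action}' is {z:.1f} std devs above baseline" if z > z_threshold else None

history = [("agent-42", "read_customer_record", c) for c in (10, 12, 9, 11, 10)]
history += [("agent-42", "export_table", c) for c in (1, 0, 1, 1, 0)]

baseline = build_baseline(history)
print(score_event(baseline, "agent-42", "export_table", 40))   # large spike: flagged
print(score_event(baseline, "agent-42", "delete_backup", 1))   # unseen action: flagged
```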

“This is a natural evolution of our leadership in insider threat detection and behavioral analytics,” said Steve Wilson, Chief AI and Product Officer at Exabeam. “Exabeam solutions are inherently designed to deliver behavioral analytics at scale. Security operations teams don’t need another tool — they need deeper insight into both human and AI agent behavior, delivered through a platform they already trust. We’re giving security teams the clarity, context, and control they need to secure the new class of insider threats.”

The company’s latest innovation, Exabeam Nova, is central to this, serving as the intelligence layer that enables security teams to interpret and act on agent behavior with confidence. Exabeam Nova delivers explainable, prioritized threat insights by analyzing the intent and execution patterns of AI agents in real time. This capability allows analysts to move beyond surface-level alerts and understand the context behind agent actions — whether they represent legitimate automation or potential misuse. By operationalizing telemetry from Google Agentspace and Google Cloud’s Model Armor in the New-Scale Platform, Exabeam Nova equips security teams to defend against the next generation of insider threats with clarity and precision.

“AI agents are quickly changing how business gets done, and that means security must evolve at the same rate,” said Chris O’Malley, CEO at Exabeam. “This is a pivotal moment for the cybersecurity industry. By extending our behavioral analytics to AI agents, Exabeam is once again leading the way in insider threat detection. We’re giving security teams the visibility and control they need to protect the integrity of their operations in an AI-driven world.”

“As businesses integrate AI into their core operations, they face a new set of security challenges,” said Vineet Bhan, Director of Security and Identity Partnerships at Google Cloud. “Our partnership with Exabeam is important to addressing this, giving customers the advanced tools needed to protect their data, maintain control, and innovate confidently in the era of AI.”

By unifying visibility across both human and AI-driven activity, Exabeam empowers security teams to detect, assess, and respond to insider threats in all their forms. This advancement sets a new benchmark for enterprise security, ensuring organizations can confidently embrace AI while maintaining control, integrity, and trust.

AI and the Rise of Service-as-a-Service: Why Products Are Becoming Invisible

The software world is undergoing a fundamental shift. Thanks to AI, product development has become faster, easier, and more scalable than ever before. Tools like Cursor and Lovable—along with countless “co-pilot” clones—have turned coding into prompt engineering, dramatically reducing development time and enhancing productivity. 

This boom has naturally caught the attention of venture capitalists. Funding for software companies hit $80 billion in Q1 2025, with investors eager to back niche SaaS solutions that follow the familiar playbook: identify a pain point, build a narrow tool, and scale aggressively. Y Combinator’s recent cohort was full of “Cursor for X” startups, reflecting the prevailing appetite for micro-products. 

But beneath this surge of point solutions lies a deeper transformation: the shift from product-led growth to outcome-driven service delivery. This evolution isn’t just about branding—it’s a structural redefinition of how software creates and delivers value. Historically, the SaaS revolution gave rise to subscription-based models, but the tools themselves remained hands-on. For example, when Adobe moved Creative Suite to the cloud, the billing changed—not the user experience. Users still needed to operate the software. SaaS, in that sense, was product-heavy and service-light. 

Now, AI is dissolving the product layer itself. The software is still there, but it’s receding into the background. The real value lies in what it does, not how it’s used. Glide co-founder Gautam Ajjarapu captures this perfectly: “The product gets us in the door, but what keeps us there is delivering results.” Take Glide’s AI for banks. It began as a tool to streamline onboarding but quickly evolved into something more transformative. Banks now rely on Glide to improve retention, automate workflows, and enhance customer outcomes. 

The interface is still a product, but the substance is service. The same trend is visible across leading AI startups. Zendesk markets “automated customer service,” where AI handles tickets end-to-end. Amplitude’s AI agents now generate product insights and implement changes. These offerings blur the line between tool and outcome—more service than software.

This shift is grounded in economic logic. Services account for over 70% of U.S. GDP, and Nobel laureate Bengt Holmström’s contract theory helps explain why: businesses ultimately want results, not just tools. They don’t want a CRM—they want more sales. They don’t want analytics—they want better decisions. With agentic AI, it’s now possible to deliver on that promise. Instead of selling a dashboard, companies can sell growth. Instead of building an LMS, they offer complete onboarding services powered by AI agents.

This evolution is especially relevant in sectors like healthcare. Corti’s CEO Andreas Cleve emphasizes that doctors don’t want more interfaces—they want more time. AI that saves time becomes invisible, and its value lies in what it enables, not how it looks.

The implication is clear: software is becoming outcome-first. Users care less about tools and more about what those tools accomplish. Many companies—Glean, ElevenLabs, Corpora—are already moving toward this model, delivering answers, brand voices, or research synthesis rather than just access. This isn’t the death of the product—it’s its natural evolution. The best AI companies are becoming “services in a product wrapper,” where software is the delivery mechanism, but the value lies in what gets done. 

For builders, the question is no longer how to scale a product. It’s how to scale outcomes. The companies that succeed in this new era will be those that understand: users don’t want features—they want results. Call it what you want—AI-as-a-service, agentic delivery, or outcome-led software. But the trend is unmistakable. Service-as-a-Service isn’t just the next step for SaaS. It may be the future of software itself.

Personal AI Agents Could Become Digital Advocates in an AI-Dominated World

As generative AI agents proliferate, a new concept is gaining traction: AI entities that act as loyal digital advocates, protecting individuals from overwhelming technological complexity, misinformation, and data exploitation. Experts suggest these personal AI companions could function similarly to service animals—trained not just to assist, but to guard user interests in an AI-saturated world. From scam detection to helping navigate automated marketing and opaque algorithms, these agents would act as user-first shields. 

At a recent Imagination in Action panel, Consumer Reports’ Ginny Fahs explained, “As companies embed AI deeper into commerce, it becomes harder for consumers to identify fair offers or make informed decisions. An AI that prioritizes users’ interests can build trust and help transition toward a more transparent digital economy.” The idea is rooted in giving users agency and control in a system where most AI is built to serve businesses. Panelists—including experts like Dazza Greenwood, Amir Sarhangi, and Tobin South—discussed how loyal, trustworthy AI advocates could reshape personal data rights, online trust, and legal accountability. 

Greenwood drew parallels to early internet-era reforms such as e-signatures and automated contracts, suggesting a similar legal evolution is needed now to govern AI agents. South added that AI agents must be “loyal by design,” ensuring they act within legal frameworks and always prioritize the user. Sarhangi introduced the concept of “Know Your Agent” (KYA), which promotes transparency by tracking the digital footprint of an AI: with unique agent wallets and activity histories, bad actors could be identified and held accountable.

Fahs described a tool called “Permission Slip,” which automates user requests like data deletion. This form of AI advocacy predates current generative models, but it shows how user-authorized agents could manage privacy at scale. Agents could also learn from collective behavior. For instance, an AI noting a negative review of a product could share that experience with other agents, building an automated form of word-of-mouth.
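
A “Know Your Agent” record of the kind described above, a unique agent identity paired with an auditable activity history, could be sketched roughly as follows. This is purely illustrative; the field names and hash-chaining scheme are assumptions, not part of any published KYA or NANDA specification.

```python
# Hypothetical sketch of a "Know Your Agent" activity ledger.
# Field names and the hash-chaining scheme are illustrative assumptions.
import hashlib
import json
import time

class AgentLedger:
    """Append-only, tamper-evident record of one agent's actions."""

    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.entries = []

    def record(self, action, target):
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {
            "agent_id": self.agent_id,
            "action": action,
            "target": target,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        # Hash covers all fields above, chaining each entry to the previous one.
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)

    def verify(self):
        """Recompute the hash chain; any edit to past entries breaks it."""
        prev_hash = "genesis"
        for entry in self.entries:
            expected = dict(entry)
            stored_hash = expected.pop("hash")
            if expected["prev_hash"] != prev_hash:
                return False
            recomputed = hashlib.sha256(
                json.dumps(expected, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != stored_hash:
                return False
            prev_hash = stored_hash
        return True

ledger = AgentLedger("shopping-agent-007")
ledger.record("request_data_deletion", "retailer.example.com")
ledger.record("flag_negative_review", "gadget-x")
print(ledger.verify())  # True while the history is untampered
```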

This concept, said panel moderator Sandy Pentland, mirrors how Consumer Reports aggregates user feedback to identify reliable products. South emphasized that cryptographic tools could ensure safe data-sharing without blindly trusting tech giants. He also referenced NANDA, a decentralized protocol from MIT that aims to enable trustworthy AI infrastructure. Still, implementing AI agents raises usability questions. “We want agents to understand nuanced permissions without constantly asking users to approve every action,” Fahs said. 

Getting this right will be crucial to user adoption. Pentland noted that current AI models struggle to align with individual preferences. “An effective agent must represent you—not a demographic group, but your unique values,” he said. Greenwood believes that’s now possible: “We finally have the tools to build AI agents with fiduciary responsibilities.” In closing, South stressed that the real bottleneck isn’t AI capability but structuring and contextualizing information properly. “If you want AI to truly act on your behalf, we must design systems that help it understand you.” 

As AI becomes deeply embedded in daily life, building personalized, privacy-conscious agents may be the key to ensuring technology serves people—not the other way around.

Cisco Introduces New Tools to Protect Networks from Rogue AI Agents

As artificial intelligence (AI) becomes more advanced, it also creates new risks for cybersecurity. AI agents—programs that can make decisions and act on their own—are now being used in harmful ways. Some are launched by cybercriminals or even unhappy employees, while others may simply malfunction and cause damage. Cisco, a well-known technology company, has introduced new security solutions aimed at stopping these unpredictable AI agents before they can cause serious harm inside company networks.


The Growing Threat of AI in Cybersecurity

Traditional cybersecurity methods, such as firewalls and access controls, were originally designed to block viruses and unauthorized users. However, these defenses may not be strong enough to deal with intelligent AI agents that can move within networks, find weak spots, and spread quickly. Attackers now have the ability to launch AI-powered threats that are faster, more complex, and cheaper to operate. This creates a huge challenge for cybersecurity teams who are already stretched thin.


Cisco’s Zero Trust Approach

To address this, Cisco is focusing on a security method called Zero Trust. The basic idea behind Zero Trust is that no one and nothing inside a network should be automatically trusted. Every user, device, and application must be verified every time they try to access something new. Imagine a house where every room has its own lock, and just because you entered one room doesn't mean you can walk freely into the next. This layered security helps block the movement of malicious AI agents.

Cisco’s Universal Zero Trust Network Access (ZTNA) applies this approach across the entire network. It covers everything from employee devices to Internet of Things (IoT) gadgets that are often less secure. Cisco’s system also uses AI-powered insights to monitor activity and quickly detect anything unusual.
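
In practice, the zero-trust principle means each request is evaluated on its own, with no standing trust carried over from earlier access. The sketch below is a generic, hypothetical policy check, not Cisco’s ZTNA implementation; the identities, resources, and rules are invented for illustration.

```python
# Generic zero-trust access check: every request is evaluated on its own,
# regardless of what the caller was previously allowed to do.
# Hypothetical illustration, not Cisco ZTNA code.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    identity: str           # verified user, device, or AI agent identity
    device_compliant: bool  # e.g., patched OS, endpoint agent present
    resource: str           # what is being accessed
    action: str             # read, write, admin, ...

# Per-resource policy: which identities may perform which actions.
POLICY = {
    "payroll-db": {"read": {"hr-analyst"}, "write": {"payroll-service"}},
    "build-server": {"read": {"ci-agent", "developer"}, "write": {"ci-agent"}},
}

def authorize(req: AccessRequest) -> bool:
    """Deny by default; allow only if device posture and policy both check out."""
    if not req.device_compliant:
        return False
    allowed = POLICY.get(req.resource, {}).get(req.action, set())
    return req.identity in allowed

# Each request is checked independently; earlier approvals grant nothing.
print(authorize(AccessRequest("ci-agent", True, "build-server", "write")))  # True
print(authorize(AccessRequest("ci-agent", True, "payroll-db", "read")))     # False
```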


Building Stronger Defenses

Cisco is also introducing a Hybrid Mesh Firewall, which is not just a single device but a network-wide security system. It is designed to protect companies across different environments, whether their data is stored on-site or in the cloud.

To make identity checks easier and more reliable, Cisco is updating its Duo Identity and Access Management (IAM) service. This tool will help confirm that the right people and devices are accessing the right resources, with features like passwordless logins and location-based verification. Cisco has been improving this service since acquiring Duo Security in 2018.
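
As a rough illustration of the kind of contextual checks described here, combining a passwordless credential check with location-based verification, the snippet below sketches the idea. It is a hypothetical example, not Duo’s API or policy engine.

```python
# Hypothetical sketch of location-aware, passwordless identity verification.
# Users, countries, and outcomes are invented for illustration.
ALLOWED_COUNTRIES = {"alice": {"US"}, "build-bot": {"US", "DE"}}

def verify_login(user: str, passkey_ok: bool, country: str) -> str:
    """Combine a passwordless (passkey) check with a location check."""
    if not passkey_ok:
        return "deny"        # cryptographic credential failed
    if country not in ALLOWED_COUNTRIES.get(user, set()):
        return "step-up"     # unusual location: require extra verification
    return "allow"

print(verify_login("alice", True, "US"))   # allow
print(verify_login("alice", True, "RU"))   # step-up
print(verify_login("alice", False, "US"))  # deny
```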


New Firewalls for High-Speed Data

In addition to its Zero Trust solutions, Cisco is launching two new firewall models: the Secure Firewall 6100 Series and the Secure Firewall 200 Series. These firewalls are built for modern data centers that handle large amounts of information, especially those using AI. The 6100 series, for example, can process high-speed data traffic while taking up minimal physical space.

Cisco’s latest security solutions are designed to help organizations stay ahead in the fight against rapidly evolving AI-powered threats.