Researchers Reproduce Anthropic-Style AI Vulnerability Findings Using Public Models at Low Cost

New research suggests that the ability to discover software vulnerabilities using artificial intelligence is becoming both inexpensive and widely accessible, raising concerns that advanced cyber capabilities may be spreading faster than anticipated.

A study by Vidoc Security demonstrates that vulnerability discovery techniques similar to those highlighted in Anthropic’s recent “Mythos” work can be reproduced using publicly available AI models. By leveraging GPT-5.4 and Claude Opus 4.6 within an open-source framework called opencode, researchers were able to replicate key findings for under $30 per scan, without access to Anthropic’s internal systems or restricted programs.

Anthropic had earlier positioned its Mythos research as highly sensitive, limiting access to a small group of major organizations and prompting concern across policy and financial circles. Reports indicated that senior figures, including Scott Bessent and Jerome Powell, discussed the implications alongside leading financial executives. The term “vulnpocalypse” resurfaced in cybersecurity discussions, reflecting fears of large-scale AI-driven exploitation.

The Vidoc team sought to test whether such capabilities were truly restricted. Using patched vulnerability examples referenced in Anthropic’s public materials, they examined issues affecting a file-sharing protocol, a security-focused operating system’s networking components, widely used video-processing software, and cryptographic libraries used for identity verification online.

Across three independent runs, both models successfully reproduced two of the documented vulnerability cases each time. Claude Opus 4.6 also independently rediscovered a flaw in OpenBSD in all three attempts, while GPT-5.4 failed to identify that specific issue. In other instances, including vulnerabilities tied to FFmpeg and wolfSSL, the systems correctly identified relevant code regions but did not fully determine the root cause.

The methodology closely mirrored workflows described by Anthropic. Instead of relying on a single prompt, the system first analyzed entire codebases, divided them into smaller segments, and ran parallel detection processes. These processes filtered meaningful signals from noise and cross-checked findings across files. Importantly, the selection of code segments was automated through earlier planning steps, rather than manually guided.
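
The workflow they describe maps naturally onto a small orchestration script. Below is a minimal sketch of that segment-and-fan-out pattern, not Vidoc's actual tooling: the `query_model` stub, the segment size, and the prompt wording are all illustrative assumptions.

```python
# Sketch of the multi-stage scan described above; all names are illustrative.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

SEGMENT_LINES = 400  # size of each code segment handed to the model

def query_model(prompt: str) -> str:
    """Placeholder for a hosted-model API call; replace with a real
    completion request. Returns the model's analysis as text."""
    return ""  # stub so the sketch runs end to end

def segment_file(path: Path):
    lines = path.read_text(errors="ignore").splitlines()
    for start in range(0, len(lines), SEGMENT_LINES):
        yield path, start, "\n".join(lines[start:start + SEGMENT_LINES])

def analyze_segment(args):
    path, start, code = args
    finding = query_model(
        f"Identify memory-safety or logic flaws in this code "
        f"from {path} (starting line {start}):\n{code}"
    )
    return (path, start, finding) if finding.strip() else None

def scan(repo: str):
    segments = [s for f in Path(repo).rglob("*.c") for s in segment_file(f)]
    # Fan out detection in parallel, then keep only non-empty signals;
    # a second model pass could cross-check candidates across files.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return [r for r in pool.map(analyze_segment, segments) if r]

if __name__ == "__main__":
    print(scan("."))
```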

Despite these results, the study underlines a clear distinction. Anthropic’s system reportedly went beyond identifying vulnerabilities by constructing detailed exploit pathways, such as chaining code fragments across multiple network packets to achieve full remote control of a system. The public models, while capable of locating weaknesses, did not reach that level of execution.

According to researcher Dawid Moczadło, this marks a turning point in cybersecurity economics. The most resource-intensive part of the process, identifying credible vulnerability signals, is becoming accessible to anyone with standard API access. However, validating those findings and converting them into reliable security insights or exploit strategies remains significantly more complex.

Anthropic itself has acknowledged that traditional benchmarks like Cybench are no longer sufficient to measure modern AI cyber capabilities, noting that its Mythos system exceeded those standards. The company estimated that comparable capabilities could become widespread within six to eighteen months.

The Vidoc findings suggest that, at least for vulnerability discovery, this transition may already be underway. By publishing their methodology, prompts, and results, the researchers highlight how open tools and commercially available models can replicate parts of workflows once considered highly restricted.

For organizations, the implications are significant. As AI reduces the cost and effort required to uncover software flaws, defenders may need to adopt continuous monitoring, faster remediation cycles, and deeper behavioral analysis. The challenge is no longer just identifying vulnerabilities, but managing the scale and speed at which they can now be discovered.

Salesforce’s New “Headless 360” Lets AI Agents Run Its Platform

Salesforce has introduced what it describes as the most significant architectural overhaul in its 27-year history, launching a new initiative called “Headless 360.” The update is designed to allow artificial intelligence agents to control and operate the company’s entire platform without requiring a traditional graphical interface such as a dashboard or browser.

The announcement was made during the company’s annual TDX developer conference in San Francisco, where Salesforce revealed that it is releasing more than 100 new developer tools and capabilities. These tools immediately enable AI systems to interact directly with Salesforce environments. The move reflects a deeper shift in enterprise software, where the rise of intelligent agents capable of reasoning and executing tasks is forcing companies to rethink whether conventional user interfaces are still necessary.

Salesforce’s answer to that question is direct: instead of designing software primarily for human interaction, the platform is now being rebuilt so that machines can access and operate it programmatically. According to the company, this transformation began over two years ago with a strategic decision to expose all internal capabilities rather than keeping them hidden behind user interfaces.

This shift is taking place during a period of uncertainty in the broader software industry. Concerns that advanced AI models developed by companies like OpenAI and Anthropic could disrupt traditional software business models have already impacted market performance. Industry indicators, including software-focused exchange-traded funds, have declined substantially, reflecting investor anxiety about the long-term relevance of existing SaaS platforms.

Senior leadership at Salesforce has indicated that the new architecture is based on practical challenges observed while deploying AI systems across enterprise clients. According to internal insights, building an AI agent is only the initial step. Organizations also face ongoing challenges related to development workflows, system reliability, updates, and long-term maintenance.

To address these challenges, Headless 360 is structured around three foundational pillars.

The first pillar focuses on development flexibility. Salesforce has introduced more than 60 tools based on Model Context Protocol, along with over 30 pre-configured coding capabilities. These allow external AI coding agents, including systems such as Claude Code, Cursor, Codex, and Windsurf, to gain direct, real-time access to a company’s Salesforce environment. This includes data, workflows, and underlying business logic. Developers are no longer required to use Salesforce’s own integrated development environment and can instead operate from any terminal or external setup.
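
To make the Model Context Protocol piece concrete, here is a toy MCP server in the style of the protocol's open-source Python SDK (`pip install mcp`). The `query_records` tool and its canned data are hypothetical; Salesforce's actual MCP tools have not been published in this form.

```python
# Illustrative only: a minimal MCP server exposing one CRM-flavored tool.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-demo")

@mcp.tool()
def query_records(object_name: str, limit: int = 5) -> list[dict]:
    """Return records for a CRM object; a real server would translate
    this into an authenticated API or query-language call."""
    fake_db = {"Account": [{"Id": f"001{i:03d}", "Name": f"Acme {i}"}
                           for i in range(10)]}
    return fake_db.get(object_name, [])[:limit]

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio so coding agents can attach to it
```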

In addition, Salesforce has upgraded its native development environment, Agentforce Vibes 2.0, by introducing an “open agent harness.” This system supports multiple agent frameworks, including those from OpenAI and Anthropic, and dynamically adjusts capabilities depending on which AI model is being used. The platform also supports multiple models simultaneously, including advanced systems like Claude Sonnet and GPT-5, while maintaining full awareness of the organization’s data from the start.

A notable technical enhancement is the introduction of native React support. During demonstrations, developers created a fully functional application using React instead of Salesforce’s traditional Lightning framework. The application connected to Salesforce data through GraphQL while still inheriting built-in security controls. This significantly expands front-end flexibility for developers.

The second pillar focuses on deployment. Salesforce has introduced an “experience layer” that separates how an AI agent functions from how it is presented to users. This allows developers to design an experience once and deploy it across multiple platforms, including Slack, mobile applications, Microsoft Teams, ChatGPT, Claude, Gemini, and other compatible environments. Importantly, this can be done without rewriting code for each platform. The approach represents a change from requiring users to enter Salesforce interfaces to delivering Salesforce-powered experiences directly within existing workflows.

The third pillar addresses trust, control, and scalability. Salesforce has introduced a comprehensive set of tools that manage the entire lifecycle of AI agents. These include systems for testing, evaluation, monitoring, and experimentation. A central component is “Agent Script,” a new programming language designed to combine structured, rule-based logic with the flexible reasoning capabilities of AI models. It allows organizations to define which parts of a process must follow strict rules and which parts can rely on AI-driven decision-making.

Additional tools include a Testing Center that identifies logical errors and policy violations before deployment, custom evaluation systems that define performance standards, and an A/B testing interface that allows multiple agent versions to run simultaneously under real-world conditions.

One of the key technical challenges addressed by Salesforce is the difference between probabilistic and deterministic systems. AI agents do not always produce identical results, which can create instability in enterprise environments where consistency is critical. Early adopters reported that once agents were deployed, even small modifications could lead to unpredictable outcomes, forcing teams to repeat extensive testing processes.

Agent Script was developed to solve this problem by introducing a structured framework. It defines agent behavior as a state machine, where certain steps are fixed and controlled while others allow flexible reasoning. This approach ensures both reliability and adaptability.
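
The idea of mixing fixed states with bounded AI reasoning can be illustrated in ordinary Python. The sketch below is a conceptual analogue only, not Agent Script itself, and every name in it is invented for illustration.

```python
# Conceptual analogue of the state-machine idea: deterministic states gate
# the flow, and a single state delegates to a (stubbed) model call.
def llm_reason(prompt: str) -> str:
    # Placeholder for a model call; the flexible, probabilistic step.
    return "suggest a 10% goodwill refund"

def handle_refund(case):
    state = "validate"
    while state != "done":
        if state == "validate":                 # deterministic rule
            state = "reason" if case["amount"] <= 500 else "escalate"
        elif state == "reason":                 # AI-driven step, bounded
            case["proposal"] = llm_reason(f"Resolve complaint: {case['text']}")
            state = "approve"
        elif state == "approve":                # deterministic guardrail
            state = "done" if "refund" in case["proposal"] else "escalate"
        elif state == "escalate":
            case["proposal"] = "route to human agent"
            state = "done"
    return case

print(handle_refund({"amount": 120, "text": "Order arrived damaged"}))
```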

Salesforce also distinguishes between two types of AI system architectures. Customer-facing agents, such as those used in sales or support, require strict control to ensure they follow predefined rules and maintain brand consistency. These operate within structured workflows. In contrast, employee-facing agents are designed to operate more freely, exploring multiple paths and refining their outputs dynamically before presenting results. Both systems operate on a unified underlying architecture, allowing organizations to manage them without maintaining separate platforms.

The company is also expanding its ecosystem. It now supports integration with a wide range of AI models, including those from Google and other providers. A new marketplace brings together thousands of applications and tools, supported by a $50 million initiative aimed at encouraging further development.

At the same time, Salesforce is taking a flexible approach to emerging technical standards such as Model Context Protocol. Rather than relying on a single method, the company is offering APIs, command-line interfaces, and protocol-based integrations simultaneously to remain adaptable as the industry evolves.

A real-world example shared during the announcement demonstrated how one company built an AI-powered customer service agent in just 12 days. The system now handles approximately half of customer interactions, improving efficiency while reducing operational costs.

Finally, Salesforce is also changing its business model. The company is shifting away from traditional per-user pricing toward a consumption-based approach, reflecting a future where AI agents, rather than human users, perform the majority of work within enterprise systems.

The transformation is strategic as much as technical. Instead of resisting the rise of AI, Salesforce is restructuring its platform to align with it, betting that its existing data infrastructure, enterprise integrations, and accumulated operational logic will continue to provide value even as software becomes increasingly autonomous.

New Chaos Malware Variant Expands to Cloud Targets, Introduces Proxy Capability

A newly observed version of the Chaos malware is now targeting poorly secured cloud environments, marking a notable shift in how this threat is being deployed and scaled.

According to analysis by Darktrace, the malware is increasingly exploiting misconfigured cloud systems, moving beyond its earlier focus on routers and edge devices. This change suggests that attackers are adapting to the growing reliance on cloud infrastructure, where configuration errors can expose critical services.

Chaos was first identified in September 2022 by Lumen Black Lotus Labs. At the time, it was described as a cross-platform threat capable of infecting both Windows and Linux machines. Its functionality included executing remote shell commands, deploying additional malicious modules, spreading across systems by brute-forcing SSH credentials, mining cryptocurrency, and launching distributed denial-of-service attacks using protocols such as HTTP, TLS, TCP, UDP, and WebSocket.

Researchers believe Chaos developed from an earlier DDoS-focused malware strain known as Kaiji, which specifically targeted exposed Docker instances. While the exact operators behind Chaos remain unidentified, the presence of Chinese-language elements in the code and the use of infrastructure linked to China suggest a possible connection to threat actors from that region.

Darktrace detected the latest variant within its honeypot network, specifically on a deliberately misconfigured Hadoop deployment that allowed remote code execution. The attack began with an HTTP request sent to the Hadoop service to initiate the creation of a new application.

That application contained a sequence of shell commands designed to download a Chaos binary from an attacker-controlled domain, identified as “pan.tenire[.]com.” The commands then modified the file’s permissions using “chmod 777,” allowing full access to all users, before executing the binary and deleting it from the system to reduce forensic evidence.
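
From a defender's perspective, the precondition for this attack is a ResourceManager that answers unauthenticated REST calls. A minimal check for that exposure might look like the following sketch, where the host address is a placeholder and 8088 is YARN's default ResourceManager web port.

```python
# Defensive sketch: does a Hadoop YARN ResourceManager answer anonymous
# REST requests? That is the condition the attackers abused.
import requests

def yarn_exposed(host: str, port: int = 8088) -> bool:
    try:
        r = requests.get(f"http://{host}:{port}/ws/v1/cluster/info", timeout=5)
    except requests.RequestException:
        return False
    # An anonymous 200 with cluster metadata means anyone can go on to call
    # the application-submission endpoints used in this campaign.
    return r.status_code == 200 and "clusterInfo" in r.text

if __name__ == "__main__":
    print(yarn_exposed("10.0.0.12"))  # hypothetical internal address
```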

Notably, the same domain had previously been linked to a phishing operation conducted by the cybercrime group Silver Fox. That campaign, referred to as Operation Silk Lure by Seqrite Labs in October 2025, was used to distribute decoy documents and ValleyRAT malware, suggesting infrastructure reuse across campaigns.

The newly identified sample is a 64-bit ELF binary that has been reworked and updated. While it retains much of its original functionality, several features have been removed. In particular, capabilities for spreading via SSH and exploiting router vulnerabilities are no longer present.

In their place, the malware now incorporates a SOCKS proxy feature. This allows compromised systems to relay network traffic, effectively masking the origin of malicious activity and making detection and mitigation more difficult for defenders.

Darktrace also noted that components previously associated with Kaiji have been modified, indicating that the malware has likely been rewritten or significantly refactored rather than simply reused.

The addition of proxy functionality points to a broader monetization strategy. Beyond cryptocurrency mining and DDoS-for-hire operations, attackers may now leverage infected systems to provide anonymized traffic routing or other illicit services, reflecting increasing competition within cybercriminal ecosystems.

This shift aligns with a wider trend observed in other botnets, such as AISURU, where proxy services are becoming a central feature. As a result, the threat infrastructure is expanding beyond traditional service disruption to include more complex abuse scenarios.

Security experts emphasize that misconfigured cloud services, including platforms like Hadoop and Docker, remain a critical risk factor. Without proper access controls, attackers can exploit these systems to gain initial entry and deploy malware with minimal resistance.

The continued evolution of Chaos underlines how threat actors are persistently enhancing their tools to expand botnet capabilities. It also reinforces the need for continuous security monitoring, as changes in how APIs and services function may not always appear as direct vulnerabilities but can significantly increase exposure.

Organizations are advised to regularly audit configurations, restrict unnecessary access, and monitor for unusual behavior to mitigate the risks posed by increasingly adaptive malware threats.

Hackers Use Fake Oura AI Server to Spread StealC Malware

Cybersecurity analysts have uncovered a fresh wave of malicious activity involving the SmartLoader malware framework. In this campaign, attackers circulated a compromised version of an Oura Model Context Protocol server in order to deploy a data-stealing program known as StealC.

Researchers from Straiker’s AI Research team, also referred to as STAR Labs, reported that the perpetrators replicated a legitimate Oura MCP server. This genuine tool is designed to connect artificial intelligence assistants with health metrics collected from the Oura Ring through Oura’s official API. To make their fraudulent version appear authentic, the attackers built a network of fabricated GitHub forks and staged contributor activity, creating the illusion of a credible open-source project.

The ultimate objective was to use the altered MCP server as a delivery vehicle for StealC. Once installed, StealC is capable of harvesting usernames, saved browser passwords, cryptocurrency wallet information, and other valuable credentials from infected systems.

SmartLoader itself was initially documented by OALABS Research in early 2024. It functions as a loader, meaning it prepares and installs additional malicious components after gaining a foothold. Previous investigations showed that SmartLoader was commonly distributed through deceptive GitHub repositories that relied on AI-generated descriptions and branding to appear legitimate.

In March 2025, Trend Micro published findings explaining that these repositories frequently masqueraded as gaming cheats, cracked software tools, or cryptocurrency utilities. Victims were enticed with promises of free premium functionality and encouraged to download compressed ZIP files, which ultimately executed SmartLoader on their devices.

Straiker’s latest analysis reveals an evolution of that tactic. Instead of merely posting suspicious repositories, the threat actors established multiple counterfeit GitHub profiles and interconnected projects that hosted weaponized MCP servers. They then submitted the malicious server to a recognized MCP registry called MCP Market. According to the researchers, the listing remains visible within the MCP directory, increasing the risk that developers searching for integration tools may encounter it.

By infiltrating trusted directories and leveraging reputable platforms such as GitHub, the attackers exploited the inherent trust developers place in established ecosystems. Unlike rapid, high-volume malware campaigns, this operation progressed slowly. Straiker noted that the group spent months cultivating legitimacy before activating the malicious payload, demonstrating a calculated effort to gain access to valuable developer environments.

The staged operation unfolded in four key phases. First, at least five fabricated GitHub accounts, identified as YuzeHao2023, punkpeye, dvlan26, halamji, and yzhao112, were created to generate convincing forks of the authentic Oura MCP project. Second, a separate repository containing the harmful payload was introduced under another account named SiddhiBagul. Third, these fabricated accounts were listed as contributors to reinforce the appearance of collaboration, while the original project author was intentionally omitted. Finally, the altered MCP server was submitted to MCP Market for broader visibility.

If downloaded and executed, the malicious package runs an obfuscated Lua script. This script installs SmartLoader, which then deploys StealC. The campaign signals a shift from targeting individuals seeking pirated content to focusing on developers, whose systems often store API keys, cloud credentials, cryptocurrency wallets, and access to production infrastructure. Stolen information could facilitate subsequent intrusions into larger networks.

To mitigate the threat, organizations are advised to catalogue all installed MCP servers, implement formal security reviews before adopting such tools, confirm the authenticity and source of repositories, and monitor network traffic for unusual outbound communications or persistence behavior.
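
A first pass at the "confirm the authenticity and source of repositories" step can be automated against GitHub's public REST API. The sketch below pulls a few basic trust signals; the thresholds a team applies, and the example repository path, are assumptions.

```python
# Sketch of a provenance check before installing an MCP server: query the
# public GitHub REST API for basic trust signals.
import requests

def repo_signals(owner: str, repo: str) -> dict:
    r = requests.get(f"https://api.github.com/repos/{owner}/{repo}", timeout=10)
    r.raise_for_status()
    d = r.json()
    return {
        "is_fork": d.get("fork", False),            # forks of a real project
        "parent": (d.get("parent") or {}).get("full_name"),
        "created_at": d.get("created_at"),          # very new repos are riskier
        "stars": d.get("stargazers_count", 0),
        "open_issues": d.get("open_issues_count", 0),
    }

if __name__ == "__main__":
    print(repo_signals("example-org", "oura-mcp-server"))  # hypothetical repo
```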

Straiker concluded that the incident exposes weaknesses in how companies vet emerging AI development tools. The attackers capitalized on outdated trust assumptions applied to a rapidly expanding attack surface, underscoring the need for stricter validation practices in modern development environments.

Instagram Denies Breach Allegations After Claims of 17 Million User Records Circulating Online

Instagram has firmly denied claims of a new data breach following reports that personal details linked to more than 17 million accounts are being shared across online forums. The company stated that its internal systems were not compromised and that user accounts remain secure.

The clarification comes after concerns emerged around a technical flaw that allowed unknown actors to repeatedly trigger password reset emails for Instagram users. Meta, Instagram’s parent company, confirmed that this issue has been fixed. According to the company, the flaw did not provide access to accounts or expose passwords. Users who received unexpected reset emails were advised to ignore them, as no action is required.

Public attention intensified after cybersecurity alerts suggested that a large dataset allegedly connected to Instagram accounts had been released online. The data, which was reportedly shared without charge on several hacking forums, was claimed to have been collected through an unverified Instagram API vulnerability dating back to 2024.

The dataset is said to include information from over 17 million profiles. The exposed details reportedly vary by record and include usernames, internal account IDs, names, email addresses, phone numbers, and, in some cases, physical addresses. Analysis of the data shows that not all records contain complete personal details, with some entries listing only basic identifiers such as a username and account ID.

Researchers discussing the incident on social media platforms have suggested that the data may not be recent. Some claim it could originate from an older scraping incident, possibly dating back to 2022. However, no technical evidence has been publicly provided to support these claims. Meta has also stated that it has no record of Instagram API breaches occurring in either 2022 or 2024.

Instagram has previously dealt with scraping-related incidents. In one earlier case, a vulnerability allowed attackers to collect and sell personal information associated with millions of accounts. Due to this history, cybersecurity experts believe the newly surfaced dataset could be a collection of older information gathered from multiple sources over several years, rather than the result of a newly discovered vulnerability.

Attempts to verify the origin of the data have so far been unsuccessful. The individual responsible for releasing the dataset did not respond to requests seeking clarification on when or how the information was obtained.

At present, there is no confirmation that this situation represents a new breach of Instagram’s systems. No evidence has been provided to demonstrate that the data was extracted through a recently exploited flaw, and Meta maintains that there has been no unauthorized access to its infrastructure.

While passwords are not included in the leaked information, users are still urged to remain cautious. Such datasets are often used in phishing emails, scam messages, and social engineering attacks designed to trick individuals into revealing additional information.

Users who receive password reset emails or login codes they did not request should delete them and take no further action. Enabling two-factor authentication is strongly recommended, as it provides an added layer of security against unauthorized access attempts.


Gainsight Breach Spread into Salesforce Environments; Scope Under Investigation

An ongoing security incident at Gainsight's customer-management platform has raised fresh alarms about how deeply third-party integrations can affect cloud environments. The breach centers on compromised OAuth tokens associated with Gainsight's Salesforce connectors, and it remains unclear how many organizations were affected or what types of information were accessed.

Salesforce was the first to flag suspicious activity originating from Gainsight's connected applications. As a precautionary measure, Salesforce revoked all associated access tokens and temporarily disabled the affected integrations. The company also released detailed indicators of compromise, timelines of malicious activity, and guidance urging customers to review authentication logs and API usage within their own environments.

Gainsight later confirmed that unauthorized parties misused certain OAuth tokens linked to its Salesforce-connected app. According to its leadership, only a small number of customers have so far reported confirmed data impact. However, several independent security teams, including Google's Threat Intelligence Group, reported signs that the intrusion may have reached far more Salesforce instances than initially acknowledged. These differing numbers are not unusual: supply-chain incidents often reveal their full extent only after weeks of log analysis and correlation.

At this time, investigators understand the attack as a case of token abuse rather than a failure of Salesforce's underlying platform. OAuth tokens are long-lived credentials that let approved applications make API calls on behalf of customers. Once attackers obtain them, they can access CRM records through legitimate channels, which makes detection far more challenging. Because this approach lets intruders bypass common login checks, Salesforce has focused on log review and token rotation as immediate priorities.

To enhance visibility, Gainsight has engaged Mandiant to conduct a forensic investigation into the incident. The investigation covers historical logs, token behavior, connector activity, and cross-platform data flows to map the attacker's movements and determine whether other services were impacted. As a precautionary measure, Gainsight has also worked with platforms including HubSpot, Zendesk, and Gong to temporarily revoke related tokens until investigators can confirm they are safe to restore.

The incident resembles other attacks this year in which Salesforce integrations were used to siphon customer records without exploiting any direct vulnerability in Salesforce itself. The repeated pattern illustrates a structural challenge: organizations may secure their main cloud platform rigorously, but one compromised integration can open a path to wider unauthorized access.

For customers, the recommended steps are straightforward: monitor Salesforce authentication and API logs for anomalous access patterns; invalidate or rotate existing OAuth tokens; reduce third-party app permissions to the bare minimum; and, where possible, apply IP restrictions or allowlists to limit the sources from which API calls can be made.
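
As one concrete example of the token-rotation step, Salesforce exposes a standard OAuth revocation endpoint. The sketch below shows the shape of such a call; the host and token value are placeholders, and full remediation would also rotate refresh tokens and re-issue credentials.

```python
# Minimal sketch: revoke a suspect OAuth token via Salesforce's standard
# revocation endpoint. Host and token are placeholders.
import requests

def revoke_salesforce_token(instance: str, token: str) -> bool:
    resp = requests.post(
        f"https://{instance}/services/oauth2/revoke",
        data={"token": token},
        timeout=10,
    )
    return resp.status_code == 200  # 200 means the token is now invalid

if __name__ == "__main__":
    print(revoke_salesforce_token("login.salesforce.com", "suspect-token-value"))
```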

Both companies say they will provide further updates and support customers who have been affected by the issue. The incident serves as yet another reminder that in modern cloud ecosystems, the security of one vendor often depends on the security practices of everyone in its integration chain.



65% of Top AI Companies Leak Secrets on GitHub

Leading AI companies continue to face significant cybersecurity challenges, particularly in protecting sensitive information, as highlighted in recent research from Wiz. The study focused on the Forbes top 50 AI firms, revealing that 65% of them were found to be leaking verified secrets—such as API keys, tokens, and credentials—on public GitHub repositories. 

These leaks often occurred in places not easily accessible to standard security scanners, including deleted forks, developer repositories, and GitHub gists, indicating a deeper and more persistent problem than surface-level exposure. Wiz's approach to uncovering these leaks involved a framework called "Depth, Perimeter, and Coverage." Depth allowed researchers to look beyond just the main repositories, reaching into less visible parts of the codebase. 

Perimeter expanded the search to contributors and organization members, recognizing that individuals could inadvertently upload company-related secrets to their own public spaces. Coverage ensured that new types of secrets, such as those used by AI-specific platforms like Tavily, Langchain, Cohere, and Pinecone, were included in the scan, which many traditional tools overlook.
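
A stripped-down version of that kind of scanning is easy to picture. The sketch below walks a directory tree with a few rough, illustrative regexes; Wiz's production rules are far more extensive, and the patterns here are simplifications.

```python
# Deliberately simplified secret scanner in the spirit of the "Coverage"
# idea: extend detection to provider-specific token formats.
import re
from pathlib import Path

PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic_api_key": re.compile(
        r"api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]", re.IGNORECASE),
}

def scan_tree(root: str):
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pat in PATTERNS.items():
            for m in pat.finditer(text):
                # Truncate the match so the scanner never re-leaks a secret.
                hits.append((str(path), name, m.group(0)[:12] + "..."))
    return hits

if __name__ == "__main__":
    for hit in scan_tree("."):
        print(hit)
```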

The findings show that despite being leaders in cutting-edge technology, these AI companies have not adequately addressed basic security hygiene. The researchers disclosed the discovered leaks to the affected organisations, but nearly half of these notifications either failed to reach the intended recipients, were ignored, or received no actionable response, underscoring the lack of dedicated channels for vulnerability disclosure.

Security Tips 

Wiz recommends several essential security measures for all organisations, regardless of size. First, deploying robust secret scanning should be a mandatory practice to proactively identify and remove sensitive information from codebases. Second, companies should prioritise the detection of their own unique secret formats, especially if they are new or specific to their operations. Engaging vendors and the open source community to support the detection of these formats is also advised.

Finally, establishing a clear and accessible disclosure protocol is crucial. Having a dedicated channel for reporting vulnerabilities and leaks enables faster remediation and better coordination between researchers and organisations, minimising potential damage from exposure. The research serves as a stark reminder that even the most advanced companies must not overlook fundamental cybersecurity practices to safeguard sensitive data and maintain trust in the rapidly evolving AI landscape.

Ernst & Young Exposes 4TB Database Backup Online, Leaking Company Secrets

Ernst & Young (EY), one of the world’s largest accounting firms, reportedly left a massive 4TB SQL database backup exposed online, containing highly sensitive company secrets and credentials accessible to anyone who knew where to find it. 

The backup, in the form of a .BAK file, contained not only schema and stored procedures but also application secrets, API keys, session tokens, user credentials, cached authentication tokens, and service account passwords. Security researchers from Neo Security discovered this alarming exposure during routine tooling work, verifying that the file was indeed publicly accessible.
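
Verifying that kind of exposure typically requires nothing more than an anonymous request. A hedged sketch of such a check follows; the URL is a placeholder, and responsible researchers confirm reachability without downloading the data itself.

```python
# Sketch: confirm a file is world-readable with an unauthenticated HEAD
# request. Never download data you are not authorised to access.
import requests

def is_publicly_accessible(url: str) -> bool:
    r = requests.head(url, allow_redirects=True, timeout=10)
    # A 200 on an anonymous request indicates the object is world-readable.
    return r.status_code == 200

if __name__ == "__main__":
    print(is_publicly_accessible("https://storage.example.com/backups/prod.bak"))
```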

The researchers emphasized that an exposed database backup like this is equivalent to releasing the master blueprints and keys to a vault, noting that such exposure could lead to catastrophic consequences, including large-scale breaches and ransomware attacks. Due to legal and ethical concerns, the researchers did not download the backup in full, but they warned that any skilled threat actor could have already accessed the data, potentially leading to severe security fallout.

Upon discovering the issue, Neo Security promptly alerted EY, who were praised for their professional and prompt response; the company did not deflect, show defensiveness, or issue legal threats, but instead acknowledged the risk and began triaging the problem. Despite the quick engagement, EY took a full week to remediate the issue, which is considered a significant delay given the urgency and potential for malicious exploitation in such security incidents.

The breach highlights the dangers of misconfigured cloud storage and the need for organizations, especially those handling sensitive data, to rigorously audit and secure their backups and databases. The exposure of such a large database could have resulted in the theft of proprietary information, customer data, and even facilitated coordinated cyberattacks on EY and its clients.

Experts urge companies to assume that any publicly accessible database backup may have already been compromised, as even a brief window of exposure can be enough for malicious actors to exploit the data. The incident underscores the importance of robust security practices, regular audits, and rapid incident response protocols to minimize the risk and impact of data breaches.

This incident serves as a cautionary tale for organizations to take extra precautions in securing all forms of sensitive data, especially those stored in backups, and to act swiftly to remediate publicly exposed databases.

The Strategic Imperatives of Agentic AI Security

Agentic artificial intelligence is emerging as a transformative force in cybersecurity, fundamentally changing how digital threats are perceived and handled. Unlike conventional AI systems that operate within predefined parameters, agentic AI systems can make autonomous decisions, interacting dynamically with digital tools, complex environments, other AI agents, and even sensitive data sets.

This shift marks a new paradigm in which AI is not only supporting decision-making but also initiating and executing actions independently in pursuit of its objectives. While this evolution brings significant opportunities for innovation, such as automated threat detection, intelligent incident response, and adaptive defence strategies, it also introduces some of the field's most difficult challenges.

The same capabilities that make agentic AI powerful for defenders can be exploited by adversaries. If autonomous agents are compromised or misaligned with their objectives, they can act at scale with a speed and unpredictability that render traditional defence mechanisms inadequate. As organisations increasingly embed agentic AI into their operations, they must adopt a dual security posture: leveraging the strengths of agentic AI to enhance their security frameworks while preparing for the threats it poses.

This demands a strategic rethink of cybersecurity principles around robust oversight, alignment protocols, and adaptive resilience mechanisms, so that the autonomy of AI agents is matched by equally sophisticated controls. In this era of AI-driven autonomy, securing agentic systems is more than a technical requirement; it is a strategic imperative.

The development lifecycle of agentic AI comprises several interdependent phases that ensure the system is not only intelligent and autonomous but also aligned with organisational goals and operational needs. This structured progression helps make agents effective, reliable, and ethically sound across a wide variety of use cases.

The first critical phase, Problem Definition and Requirement Analysis, lays the foundation for everything that follows. In this phase, organisations must articulate a clear, strategic understanding of the problem space the AI agent is meant to address.

That means setting clear business objectives, defining the specific tasks the agent is required to perform, and assessing operational constraints such as infrastructure availability, regulatory obligations, and ethical considerations. A thorough requirements analysis streamlines system design, minimises scope creep, and avoids costly revisions in later stages of deployment.

This phase also helps stakeholders align the agent's technical capabilities with real-world needs so that it can deliver measurable results. The next phase, Data Collection and Preparation, is arguably the most critical component of the lifecycle: whatever kind of agentic AI is being built, the system's intelligence is directly shaped by the quality and comprehensiveness of the data it is trained on.

In this stage, relevant datasets are gathered from internal and trusted external sources, then meticulously cleaned, indexed, and transformed to ensure consistency and usability. Advanced preprocessing techniques, such as augmentation, normalisation, and class balancing, are applied to reduce biases and mitigate model failures.

Building a high-quality, representative dataset early on is essential if the agent is to function effectively across varied circumstances and edge cases. Together, these phases form the backbone of agentic AI development, grounding the system in real business needs and in data that is dependable, ethical, and actionable. Organisations that invest in thorough upfront analysis and meticulous data preparation stand a significantly better chance of deploying agentic AI solutions that are scalable, secure, and aligned with long-term strategic goals.

The risks posed by an agentic AI system go beyond technical failures; they are deeply systemic in nature. Agentic AI is not a passive system that executes rules; it is an active system that makes decisions, takes action, and adapts as it learns. That dynamic autonomy is powerful, but it also introduces complexity and unpredictability that make failures harder to detect until significant damage has been done.

Unlike traditional software, agentic AI systems operate independently and can evolve their behaviour over time. OWASP's Top Ten for LLM Applications (2025) highlights how agents can be manipulated into misusing tools or storing deceptive information that undermines users' security. Without rigorous monitoring, the very features that give agents their power can become a source of danger.

Corrupted data can penetrate an agent's memory, so that future decisions are influenced by falsehoods. Over time these errors compound, leading to cascading hallucinations in which the system repeatedly generates credible but inaccurate outputs that reinforce and validate one another, making the deception increasingly difficult to detect.

Agentic systems are also susceptible to more traditional forms of exploitation, such as privilege escalation, in which an agent impersonates a user or gains access to restricted functions without permission. In extreme scenarios, agents may even override their constraints, intentionally or unintentionally pursuing goals that do not align with the user's or the organisation's objectives. Detecting such deceptive behaviour is difficult, both ethically and operationally. Resource exhaustion is another pressing concern.

Agents can be overloaded with excessive task queues that exhaust memory, computing bandwidth, or third-party API quotas, whether by accident or through deliberate attack. Such failures do not merely degrade performance; they can trigger critical system failures, particularly in real-time environments. The situation is worse still when agents are deployed on lightweight or experimental multi-agent control platforms (MCPs) that lack essential features such as logging, user authentication, or third-party validation mechanisms.

In those conditions, tracking decision paths or identifying the root cause of failures becomes difficult or impossible, leaving security teams blind to the system's internal behaviour as well as to external threats. As agentic AI continues to integrate into high-stakes environments, its systemic vulnerabilities must be treated as a core design consideration rather than a peripheral concern.

Ensuring that agents act in a transparent, traceable, and ethical manner is essential, not only for safety but also for building the long-term trust that enterprise adoption requires. Several core functions give agentic AI systems their agency: the capacity to make autonomous decisions, behave adaptively, and pursue long-term goals. Chief among them is autonomy, meaning agents operate without constant human oversight.

Agents perceive their environment through data streams or sensors, evaluate contextual factors, and execute actions consistent with predefined objectives. Autonomous warehouse robots that adjust their paths in real time without human input illustrate this situational awareness and self-regulation. Unlike reactive AI systems that respond to isolated prompts, agentic systems are designed to pursue complex, sometimes long-term goals without human intervention.

Guided by explicit or implicit instructions and reward signals, these agents can break a high-level task, such as organising a travel itinerary, into actionable subgoals that are adjusted dynamically as new information arrives. To formulate step-by-step strategies, agents rely on planner-executor architectures and techniques such as chain-of-thought prompting or ReAct.

These plans may employ graph-based search algorithms or simulate multiple future scenarios to optimise outcomes. Reasoning further enhances an agent's ability to assess alternatives, weigh trade-offs, and apply logical inference; large language models frequently serve as the reasoning engine, decomposing tasks and supporting multi-step problem-solving.
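
A toy ReAct-style loop makes the planner-executor pattern concrete. In the sketch below the model call is a stub and the single tool is trivial; a real agent would call an LLM, use a richer toolset, and typically persist the scratchpad to memory.

```python
# Toy ReAct loop: the model proposes Actions, tools return Observations,
# and the growing scratchpad is fed back until the model says Finish.
def llm(prompt: str) -> str:
    # Stub standing in for a language-model call.
    if "Observation:" in prompt:
        return "Finish[the observation answers the task]"
    return "Action: search[weather Paris]"

TOOLS = {"search": lambda q: f"stub result for '{q}'"}

def react(task: str, max_steps: int = 5) -> str:
    scratchpad = f"Task: {task}"
    for _ in range(max_steps):
        step = llm(scratchpad)            # Thought/Action proposed by the model
        if step.startswith("Finish"):
            return scratchpad
        name, arg = step.split("[", 1)
        name = name.replace("Action: ", "").strip()
        observation = TOOLS[name](arg.rstrip("]"))
        scratchpad += f"\n{step}\nObservation: {observation}"  # feed back
    return scratchpad

print(react("What is the weather in Paris?"))
```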

The final core capability, memory, provides continuity. Drawing on previous interactions, results, and context, often stored in vector databases, agents refine their behaviour over time, learning from experience and avoiding redundant actions. Securing an agentic AI system, however, requires more than incremental changes to existing security protocols; it demands a complete rethink of operational and governance models. A system capable of autonomous decision-making and adaptive behaviour must be treated as an enterprise entity in its own right.

Like any influential digital actor, AI agents require rigorous scrutiny, continuous validation, and enforceable safeguards throughout their lifecycle. A robust security posture starts with controlling non-human identities: strong authentication mechanisms, combined with behavioural profiling and anomaly detection, are needed to identify and neutralise impersonation or spoofing attempts before damage occurs.

Identity cannot stay static in dynamic systems; it must evolve with the agent's behaviour and role in its environment. Securing retrieval-augmented generation (RAG) systems at the source is equally important: organisations need to enforce rigorous access policies over knowledge repositories, examine embedding spaces for adversarial interference, and continually evaluate similarity-matching methods to prevent data leaks and unintended model manipulation.

Automated red teaming is essential for identifying emerging threats, not just before deployment but continuously thereafter. It involves adversarial testing and stress simulations designed to expose behavioural anomalies, misalignment with intended goals, and configuration weaknesses in real time. Comprehensive governance frameworks are likewise imperative for generative and agentic AI to succeed.

Under such governance, agent behaviour must be codified in enforceable policies, runtime oversight must be enabled, and detailed, tamper-evident logs must be maintained for auditing and lifecycle tracking. The shift toward agentic AI is more than a technological evolution; it represents a profound change in how decisions are made, delegated, and monitored. Adoption of these systems often outpaces the ability of traditional security infrastructures to adapt.

Without meaningful oversight, clearly defined responsibilities, and strict controls, AI agents could exacerbate risk, inadvertently or maliciously, rather than delivering on their promise. Organisations must therefore ensure that agents operate within well-defined boundaries, under continuous observation, aligned with organisational intent, and held to the same standards as human decision-makers.

The benefits of agentic AI are enormous, but so are the risks. To be truly transformative, these systems must be more than intelligent; they must be trustworthy and transparent, governed by rules as precise and robust as those they help enforce.

AI as a Key Solution for Mitigating API Cybersecurity Threats

Artificial Intelligence (AI) is fundamentally changing the cybersecurity landscape, enabling organizations to mitigate vulnerabilities more effectively. While AI has improved the speed and scale at which threats can be detected and addressed, it has also introduced complexities that demand a hybrid approach to security management, one that combines traditional security frameworks with human oversight of AI-driven tooling.

One of the biggest challenges AI presents is the expansion of the attack surface for Application Programming Interfaces (APIs). The proliferation of AI-powered systems raises questions about API resilience as threats grow more sophisticated, and the integration of AI-driven functionality into APIs has heightened security concerns and the need for robust defensive strategies.

The security implications of AI extend beyond APIs to the foundations of Machine Learning (ML) applications and large language models. Many of these models are trained on highly sensitive datasets, raising concerns about privacy, integrity, and potential exploitation. Improperly handled training data can lead to unauthorized access, data poisoning, and model manipulation, each of which compounds the overall vulnerability.

At the same time, AI is prompting security teams to refine their threat-modeling strategies even as it poses new challenges. Using AI's analytical capabilities, organizations can enhance predictive capabilities, automate risk assessments, and implement smarter security frameworks that adapt to a changing environment. This evolution pushes security professionals toward a proactive and adaptive approach to reducing potential threats.

Using AI effectively while safeguarding digital assets requires an integrated approach that combines traditional security mechanisms with AI-driven solutions, ensuring an effective synergy between automation and human oversight. Enterprises must foster a comprehensive security posture that integrates both legacy and emerging technologies to stay resilient in a changing threat landscape. Deploying AI in cybersecurity, however, demands a well-organized, strategic approach rather than ad hoc adoption.

Building a robust and adaptive cybersecurity ecosystem means addressing API vulnerabilities, strengthening training-data security, and refining threat-modeling practices. APIs are a central component of modern digital applications, enabling seamless data exchange between systems. Their widespread adoption, however, has made them prime targets for cyber threats, exposing organizations to significant risks such as data breaches, financial losses, and service disruptions.

AI platforms and tools, such as OpenAI, Google's DeepMind, and IBM's Watson, have significantly contributed to advancements in several technological fields over the years. These innovations have revolutionized natural language processing, machine learning, and autonomous systems, leading to a wide range of applications in critical areas such as healthcare, finance, and business. Consequently, organizations worldwide are turning to artificial intelligence to maximize operational efficiency, simplify processes, and unlock new growth opportunities. 

While artificial intelligence is catalyzing progress, it also introduces potential security risks: cybercriminals can manipulate and weaponize the very technologies driving industry forward. AI thus cuts both ways. AI-driven security systems can proactively identify, predict, and mitigate threats with extraordinary accuracy, yet adversaries can use the same technologies to mount highly advanced cyberattacks, such as phishing schemes and ransomware.

As AI continues to advance, its role in cybersecurity is becoming more complex and dynamic. Organizations need to take proactive measures against AI-enabled attacks, implementing robust frameworks that harness AI's defensive capabilities while mitigating its vulnerabilities. Developing AI technologies ethically and responsibly will be crucial for a secure digital ecosystem that fosters innovation without compromising cybersecurity.

Application Programming Interfaces (APIs) are a fundamental component of 21st-century digital ecosystems, enabling seamless interactions across industries such as mobile banking, e-commerce, and enterprise software. Their widespread adoption also makes them a prime target for attackers. Successful breaches can result in data compromise, financial losses, and operational disruptions that pose significant challenges to businesses and consumers alike.

Pratik Shah, F5 Networks' Managing Director for India and SAARC, highlighted that APIs are an integral part of today's digital landscape. AIM reports that APIs account for nearly 90% of worldwide web traffic and that the number of public APIs has grown 460% over the past decade. This rapid proliferation has exposed organisations to a wide array of cyber risks, including broken authentication, injection attacks, and server-side request forgery. According to Shah, the robustness of India's API infrastructure significantly influences the country's ambitions to become a global leader in the digital economy.

“APIs are the backbone of our digital economy, interconnecting key sectors such as finance, healthcare, e-commerce, and government services,” Shah remarked. He noted that during the first half of 2024, the Indian Computer Emergency Response Team (CERT-In) reported a 62% increase in API-targeted attacks. These incidents go beyond technical breaches; they represent substantial economic risks that threaten data integrity, business continuity, and consumer trust.

Beyond compromising sensitive information, these incidents have undermined business continuity and eroded consumer confidence. APIs will remain at the heart of digital transformation, so robust security measures will be critical to mitigating potential threats and protecting organisational integrity.


Indusface recently published research on API security that underscores the seriousness of API-related threats. The report records a 68% increase in attacks on APIs compared with traditional websites, and a 94% quarter-over-quarter increase in Distributed Denial-of-Service (DDoS) attacks on APIs, an astounding 1,600% higher than website-based DDoS attacks.

Bot-driven attacks on APIs also rose by 39%, emphasizing the need for robust security measures that protect these vital digital assets. Meanwhile, AI is transforming cloud security by enhancing threat detection, automating responses, and providing predictive insights to mitigate cyber risks.

Several cloud providers, including Google Cloud, Microsoft, and Amazon Web Services, employ artificial intelligence-driven solutions for monitoring security events, detecting anomalies, and preventing cyberattacks.

These solutions include Chronicle, Microsoft Defender for Cloud, and Amazon GuardDuty. They come with challenges, including false positives, adversarial AI attacks, high implementation costs, and data-privacy concerns, but they remain important tools.
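
Stripped of the machine learning that production services apply, the core anomaly-detection idea can be shown in a few lines: flag clients whose behaviour deviates sharply from their own baseline. The data and threshold below are invented for illustration.

```python
# Minimal anomaly-detection sketch: flag API clients whose latest request
# rate deviates sharply from their own historical baseline.
from statistics import mean, stdev

def anomalous_clients(counts: dict[str, list[int]], threshold: float = 3.0):
    flagged = []
    for client, series in counts.items():
        if len(series) < 3:
            continue
        baseline, spread = mean(series[:-1]), stdev(series[:-1])
        latest = series[-1]
        if spread and (latest - baseline) / spread > threshold:
            flagged.append((client, latest))
    return flagged

requests_per_hour = {
    "svc-billing": [120, 130, 125, 118, 122, 900],   # sudden spike
    "svc-search":  [300, 310, 295, 305, 298, 301],
}
print(anomalous_clients(requests_per_hour))
```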

Despite these limitations, advances in self-learning AI models, security automation, and quantum computing are expected to expand AI's role in cybersecurity. Businesses can deploy AI-powered security solutions to safeguard cloud environments against evolving threats.

Weak Cloud Credentials Behind Most Cyber Attacks: Google Cloud Report

A recent Google Cloud report has identified a troubling trend: nearly half of all cloud-related attacks in late 2024 stemmed from weak or missing account credentials, seriously endangering businesses and giving attackers easy access to sensitive systems.


What the Report Found

The Threat Horizons Report, which was produced by Google's security experts, looked into cyberattacks on cloud accounts. The study found that the primary method of access was poor credential management, such as weak passwords or lack of multi-factor authentication (MFA). These weak spots comprised nearly 50% of all incidents Google Cloud analyzed.

Misconfigured cloud services were another factor, accounting for more than a third of all attacks. The report also noted a worrying rise in attacks on application programming interfaces (APIs) and user interfaces, which made up around 20% of incidents. Together, these figures point to several areas where cloud security is falling short.


How Weak Credentials Cause Big Problems

Weak credentials do not just unlock the door for attackers; they enable widespread damage. In April 2024, for instance, over 160 Snowflake accounts were breached due to poor password practices. High-profile companies impacted included AT&T, Advance Auto Parts, and Pure Storage, with some suffering massive data leaks.

Attackers are also seeking out overprivileged service accounts, that is, accounts granted far more permissions than they need. These make it easier for intruders to move deeper into a network and harm multiple systems within an organization. Google found that more than 60 percent of attacker actions after initial access involve attempts to move laterally within systems.

The report warns that a single stolen password can trigger a chain reaction. Hackers can use it to take control of apps, access critical data, and even bypass security systems like MFA. This allows them to establish trust and carry out more sophisticated attacks, such as tricking employees with fake messages.


How Businesses Can Stay Safe

To prevent such attacks, organizations should focus on proper security practices. Google Cloud suggests using multi-factor authentication, limiting excessive permissions, and fixing misconfigurations in cloud systems. These steps will limit the damage caused by stolen credentials and prevent attackers from digging deeper.
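As a rough illustration of the "limit excessive permissions" advice, the sketch below uses the standard gcloud CLI to pull a project's IAM policy and flag service accounts bound to broad roles. It is a minimal sketch, not a complete audit: the project ID and the set of roles treated as "broad" are placeholder assumptions to adapt to your own environment.

```python
import json
import subprocess

# Roles that grant far more access than most service accounts need.
# This set, like the project ID below, is an illustrative placeholder.
BROAD_ROLES = {"roles/owner", "roles/editor"}

def audit_service_accounts(project_id: str) -> list[tuple[str, str]]:
    """Flag service accounts bound to overly broad IAM roles.

    Uses the standard `gcloud projects get-iam-policy` command, so the
    machine running this must have an authenticated gcloud CLI.
    """
    raw = subprocess.check_output(
        ["gcloud", "projects", "get-iam-policy", project_id, "--format=json"]
    )
    policy = json.loads(raw)

    findings = []
    for binding in policy.get("bindings", []):
        if binding["role"] not in BROAD_ROLES:
            continue
        for member in binding.get("members", []):
            # Service accounts appear as "serviceAccount:<email>" in bindings.
            if member.startswith("serviceAccount:"):
                findings.append((member, binding["role"]))
    return findings

if __name__ == "__main__":
    for member, role in audit_service_accounts("my-project-id"):
        print(f"Over-privileged: {member} holds {role}")
```

Anything this flags is a candidate for a narrower, purpose-specific role, shrinking the blast radius if the account's credentials are ever stolen.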

This report is a reminder that weak passwords and poor security habits are not just small mistakes; they can lead to serious consequences for businesses everywhere.


Fake Invoices Spread Through DocuSign’s API in New Scam

 



Cyber thieves are abusing DocuSign's Envelopes API to send fake invoices that look legitimate, complete with branding that impersonates well-known companies such as Norton and PayPal. Because these messages are sent from a verified domain, DocuSign's own, they bypass traditional email security filters and arrive without being flagged as malicious.

How It Works

DocuSign is an electronic signature service used to send, sign, and manage documents digitally. Its Envelopes API, part of the eSignature platform, lets customers send out, sign, and track document requests automatically. Attackers, however, have found a way to exploit this API: by purchasing legitimate paid DocuSign accounts, they gain access to the platform's templates and custom-branding features, letting them create fake invoices that are almost indistinguishable from official ones sent by established companies.

The scammers use the "Envelopes: create" endpoint to send enormous volumes of fake invoices to long recipient lists. The charges listed are usually realistic, which makes the bills appear more legitimate. Attackers instruct the recipient to e-sign the document, then use the signed invoice to demand payment; in other cases, they forward the "signed" document directly to a finance department to complete the scam. A sketch of the API call involved appears below.
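For context, this is a minimal sketch of what a legitimate "Envelopes: create" request looks like against the DocuSign eSignature REST API (v2.1). The base URL, account ID, and access token are placeholders; a real integration obtains them through DocuSign's OAuth flow.

```python
import requests

# Placeholders: a real integration obtains these via DocuSign OAuth.
BASE_URL = "https://demo.docusign.net/restapi/v2.1"
ACCOUNT_ID = "<account-id>"
ACCESS_TOKEN = "<oauth-access-token>"

def create_envelope(pdf_base64: str, name: str, email: str) -> str:
    """Send one document out for signature; returns the new envelope's ID."""
    envelope = {
        "emailSubject": "Invoice for your review",
        "documents": [{
            "documentBase64": pdf_base64,  # the invoice PDF, base64-encoded
            "name": "invoice.pdf",
            "fileExtension": "pdf",
            "documentId": "1",
        }],
        "recipients": {
            "signers": [{
                "email": email,
                "name": name,
                "recipientId": "1",
            }]
        },
        "status": "sent",  # "sent" dispatches the signing email immediately
    }
    resp = requests.post(
        f"{BASE_URL}/accounts/{ACCOUNT_ID}/envelopes",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json=envelope,
    )
    resp.raise_for_status()
    return resp.json()["envelopeId"]
```

Because the resulting email is generated and delivered by DocuSign itself, nothing about it looks forged; the abuse amounts to looping a call like this over a large recipient list from a paid account.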


Mass Abuse of the DocuSign Platform

According to the security research firm Wallarm, this type of abuse has been ongoing for some time. The company noted that the mass exploitation has been surfacing in online forums, where DocuSign customers have complained about constant spam and phishing emails originating from the DocuSign domain. "I'm suddenly receiving multiple phishing emails per week from docusign.net, and there doesn't seem to be an obvious way to report it," complained one user.

The volume of these complaints suggests that the abuse is happening at a very large scale, and that the fake invoices are almost certainly being distributed with automation tools rather than by hand.

Wallarm has reported the abuse to DocuSign, but it is not yet clear what steps, if any, the company is taking to resolve the issue.


Challenges in Safeguarding APIs Against Abuse

The widespread abuse of the DocuSign Envelopes API shows how open access can compromise the security of API endpoints. Although the service is intended for verified businesses, attackers simply buy valid accounts and turn the API's features to malicious ends. Nor is DocuSign unique: several other companies have seen their APIs abused in similar ways. Hackers have used APIs to validate millions of phone numbers associated with Authy accounts, to scrape information about millions of Dell customers, and to match millions of Trello accounts with email addresses, among other incidents.

The DocuSign case shows why platforms that expose access to sensitive tools need stronger protections. As API-based attacks become more widespread, firms like DocuSign may be forced to watch more closely how paid accounts, which give users full access to the platform's tools, are actually being used, and to tighten controls against misuse of their products.


CrossBarking Exploit in Opera Browser Exposes Users to Extensive Risks

 

A new browser vulnerability called CrossBarking has been identified, affecting Opera users through “private” APIs that were meant only for select trusted sites. Browser APIs bridge websites with functionalities like storage, performance, and geolocation to enhance user experience. Most APIs are widely accessible and reviewed, but private ones are reserved for preferred applications. Researchers at Guardio found that these Opera-specific APIs were vulnerable to exploitation, especially if a malicious Chrome extension gained access. Guardio’s demonstration showed that once a hacker gained access to these private APIs through a Chrome extension — easily installable by Opera users — they could run powerful scripts in a user’s browser context. 
The malicious extension was initially disguised as a harmless tool, adding pictures of puppies to web pages. 

However, it also contained scripts capable of extensive interference with Opera settings. Guardio used this approach to hijack the settingsPrivate API, which allowed them to reroute a victim’s DNS settings through a malicious server, providing the attacker with extensive visibility into the user’s browsing activities. With control over the DNS settings, they could manipulate browser content and even redirect users to phishing pages, making the potential for misuse significant. Guardio emphasized that getting malicious extensions through Chrome’s review process is relatively easier than with Opera’s, which undergoes a more intensive manual review. 

The researchers, therefore, leveraged Chrome’s automated, less stringent review process to create a proof-of-concept attack on Opera users. CrossBarking’s implications go beyond Opera, underscoring the complex relationship between browser functionality and security. Opera took steps to mitigate this vulnerability by blocking scripts from running on private domains, a strategy that Chrome itself uses. However, they have retained the private APIs, acknowledging that managing security with third-party apps and maintaining functionality is a delicate balance. 

Opera’s decision to address the CrossBarking vulnerability by restricting script access to domains with private API access offers a practical, though partial, solution. This approach minimizes the risk of malicious code running within these domains, but it does not fully eliminate potential exposure. Guardio’s research emphasizes the need for Opera, and similar browsers, to reevaluate their approach to third-party extension compatibility and the risks associated with cross-browser API permissions.


This vulnerability also underscores a broader industry challenge: balancing user functionality with security. While private APIs are integral to offering customized features, they open potential entry points for attackers when not adequately protected. Opera’s reliance on responsible disclosure practices with cybersecurity firms is a step forward. However, ongoing vigilance and a proactive stance toward enhancing browser security are essential as threats continue to evolve, particularly in a landscape where third-party extensions can easily be overlooked as potential risks.


In response, Opera has collaborated closely with researchers and relies on responsible vulnerability disclosures from third-party security firms like Guardio to address any potential risks preemptively. Security professionals highlight that browser developers should consider the full ecosystem, assessing how interactions across apps and extensions might introduce vulnerabilities.

The Impact of Google’s Manifest V3 on Chrome Extensions

 

Google’s Manifest V3 rules have generated a lot of discussion, primarily because users fear they will make ad blockers such as uBlock Origin obsolete. This concern stems from the fact that uBlock Origin is heavily used and has been affected by the changes. However, it’s crucial to understand that the new rules don’t outright disable ad blockers, though they may impact some functionality. The purpose of Manifest V3 is to enhance the security and privacy of Chrome extensions. A significant part of this is limiting remote code execution within extensions, a measure meant to prevent malicious activities that could lead to data breaches.

This stems from incidents like DataSpii, where extensions harvested sensitive user data including tax returns and financial information. Google’s Manifest V3 aims to prevent such abuses by introducing stricter rules on the code that can run within extensions. For developers, this means adapting to new APIs, most notably the replacement of the blocking webRequest API with the more limited declarativeNetRequest API, which restricts the network-filtering work extensions used to perform themselves. While these changes are designed to increase user security, they require extension developers to modify how their tools work. Ad blockers like uBlock Origin can still function, but some users may need to manually enable or adjust settings to get them working effectively under Manifest V3.
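To make the shift concrete: under Manifest V3, the filtering an ad blocker performs is declared up front as static rules that the browser evaluates, rather than by extension code inspecting each request. The sketch below shows the shape of one declarativeNetRequest rule, written as a Python dict purely for readability (in a real extension it would be JSON in a rules file referenced from manifest.json); the blocked domain is a placeholder.

```python
# The shape of a Manifest V3 declarativeNetRequest rule, expressed as a
# Python dict for readability. In an actual extension this is JSON in a
# static rules file that manifest.json points to.
block_ad_server_rule = {
    "id": 1,
    "priority": 1,
    "action": {"type": "block"},
    "condition": {
        # "ads.example.com" is a placeholder, not a real ad network.
        "urlFilter": "||ads.example.com^",
        "resourceTypes": ["script", "image", "xmlhttprequest"],
    },
}
```

It is this move from arbitrary blocking code to pre-declared rules that curtails some of the dynamic filtering older ad blockers relied on.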

Although many users believe the update is intended to undermine ad blockers (especially since Google’s main revenue comes from ads), the truth is more nuanced. Google maintains that the changes are intended to bolster security, though skepticism remains high. Users can still run ad blockers such as uBlock Origin or switch to alternatives like uBlock Origin Lite, which complies with the new regulations. Additionally, users can choose other browsers, such as Firefox, that do not impose the same restrictions and still run extensions under their older, more flexible frameworks. While Manifest V3 introduces hurdles, it doesn’t spell the end for ad blockers. The changes force developers to ensure that their tools follow stricter security protocols, but this could ultimately lead to safer browsing experiences.

If some extensions stop working, alternatives or updates are available to address the gaps. For now, users can continue to enjoy ad-free browsing with the right tools and settings, though they should remain vigilant in managing and updating their extensions. To further protect themselves, users are advised to explore additional options such as using privacy-focused extensions like Privacy Badger or Ghostery. For more tech-savvy individuals, setting up hardware-based ad-blocking solutions like Pi-Hole can offer more comprehensive protection. A virtual private network (VPN) with built-in ad-blocking capabilities is another effective solution. Ultimately, while Manifest V3 may introduce limitations, it’s far from the end of ad-blocking extensions. 

Developers are adapting, and users still have a variety of tools to block intrusive ads and enhance their browsing experience. Keeping ad blockers up to date and understanding how to manage extensions is key to ensuring a smooth transition into Google’s new extension framework.

Why Non-Human Identities Are the New Cybersecurity Nightmare







In April, business intelligence company Sisense fell victim to a critical security breach that exposed the risks of poorly managed non-human identities (NHIs). The hackers accessed a company GitLab repository that contained hardcoded SSH keys, API credentials, and access tokens. The incident laid bare just how indispensable NHIs have become in modern digital ecosystems, and how dangerous it is to manage them badly.

Unlike human users, NHIs, such as service accounts, cloud instances, APIs, and IoT devices, manage data flows and automate processes. With NHIs now far outnumbering human users in most enterprise networks, securing them is crucial to preventing cyberattacks and ensuring business continuity.

The Threat of Non-Human Identities

With thousands or even millions of NHIs in use within a single organisation, it is no wonder cybercriminals are turning their attention to them. These digital identities are typically less well understood and less well protected than human accounts, which makes them easy targets. Data breaches involving NHIs have already become more widespread, especially as companies expand their use of cloud infrastructure and automation.

Healthcare and finance face especially high stakes because of their strict compliance regimes. Being found in violation of standards such as the Health Insurance Portability and Accountability Act (HIPAA) or the Payment Card Industry Data Security Standard (PCI DSS) can bring fines, reputational damage, and a loss of customer trust.

Why Secure NHIs?

As digital ecosystems grow ever more complex, the security of NHIs becomes all the more important. Companies are moving toward a "zero-trust" security model, in which no user, human or non-human, is trusted by default and every access request must be verified. This model has proved especially effective in decentralised networks with large numbers of NHIs.

Locking down NHIs lets organisations control sensitive data, reduce unauthorised access, and stay compliant with regulation. As the Sisense breach shows, poorly managed NHIs quickly become a gateway for cybercriminals.

Best Practices in Managing NHI

To secure non-human identities, organisations should adopt the following best practices:


1. Continuous Discovery and Inventory
Automated processes should maintain a live inventory of every NHI across the network, capturing each identity's owner, permissions, usage patterns, and associated risks. This live catalogue improves control over, and monitoring of, these digital identities; a sketch of what one record might hold follows.
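As a minimal sketch, assuming automated discovery jobs keep it current, one entry in such a catalogue might look like this; the field names and types are illustrative, not a standard schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class NonHumanIdentity:
    """One entry in a live NHI inventory (field names are illustrative)."""
    identity_id: str                   # e.g. a service-account email or key ID
    kind: str                          # "service_account", "api_key", "iot_device", ...
    owner: str                         # the team accountable for this identity
    permissions: list[str] = field(default_factory=list)
    last_used: datetime | None = None  # None means never observed in use
    risk_notes: list[str] = field(default_factory=list)

# The inventory itself is a lookup table kept current by discovery jobs.
inventory: dict[str, NonHumanIdentity] = {}
```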


2. Risk-Based Approach
Not all NHIs are equal. Some can access highly sensitive information, while others perform only routine tasks. Companies should build a risk-scoring system that weighs what each NHI can access, how sensitive that access is, and the impact if the identity were compromised, along the lines of the toy example below.
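A toy version of such a scoring system might look like the following; the sensitive scopes, weights, and 90-day staleness threshold are invented for illustration.

```python
# Placeholder names for scopes considered sensitive in this example.
SENSITIVE_SCOPES = {"prod-database", "payment-api", "customer-pii"}

def risk_score(permissions: list[str], days_since_last_use: int | None) -> int:
    """Toy scoring: broad access, sensitive access, and staleness all add risk."""
    score = 5 * len(permissions)                        # breadth of access
    score += 10 * sum(p in SENSITIVE_SCOPES for p in permissions)
    if days_since_last_use is None or days_since_last_use > 90:
        score += 20                                     # stale or never used
    return score

# A stale identity with one sensitive scope outranks a busy low-privilege one.
print(risk_score(["prod-database", "build-logs"], days_since_last_use=120))  # 40
print(risk_score(["build-logs"], days_since_last_use=1))                     # 5
```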

3. Incident Response Action Plan
Security attention can then be allocated to the identities with the highest risk scores. Organisations should maintain a structured incident response plan for NHIs, with pre-defined playbooks for breaches involving non-human identities. These playbooks should outline the containment, mitigation, and resolution phases of an incident, as well as the communication protocols with all stakeholders.

4. NHI Education Program
A good education program limits the security risks associated with NHIs. Developers should be trained in secure coding practices, including the dangers of hardcoded credentials, and operations teams in properly rotating and monitoring NHIs. Regular training keeps all employees aware of best practices.


5. Automated Lifecycle Management
NHIs should be instantiated, updated, and retired automatically, so that security policies are enforced at every stage of the identity lifecycle. This eliminates the human errors that leave unused or misconfigured NHIs behind for attackers to exploit. A simplified sketch of such lifecycle logic appears below.
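The sketch below shows, under illustrative policy thresholds (90-day rotation, 180-day idle retirement), the decision an automated lifecycle job might make for each credential.

```python
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE = timedelta(days=90)   # illustrative rotation policy
MAX_IDLE = timedelta(days=180)     # retire identities idle this long

def lifecycle_action(created: datetime, last_used: datetime | None) -> str:
    """Decide what an automated lifecycle job should do with one credential."""
    now = datetime.now(timezone.utc)
    if last_used is None or now - last_used > MAX_IDLE:
        return "retire"            # unused identity: revoke and remove it
    if now - created > MAX_KEY_AGE:
        return "rotate"            # credential too old: issue a new secret
    return "keep"

# A key created 100 days ago but used yesterday is due for rotation.
created = datetime.now(timezone.utc) - timedelta(days=100)
used = datetime.now(timezone.utc) - timedelta(days=1)
print(lifecycle_action(created, used))  # -> "rotate"
```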


6. Non-Human Identity Detection and Response (NHIDR)
NHIDR tools establish baseline behaviour patterns for NHIs and detect anomalies that could indicate a breach. With these tools, organisations can monitor NHI activity and respond quickly to suspicious behaviour, preventing further compromise; the sketch below shows the comparison step in miniature.
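In miniature, and assuming a toy baseline of allowed source IPs and endpoints, the detection step can be as simple as the comparison below; real NHIDR products model far richer signals.

```python
# Toy baseline: which source IPs and endpoints each identity normally uses.
# Real NHIDR products model far richer signals (volume, timing, permission
# changes), but the comparison step looks broadly like this.
BASELINE = {
    "billing-svc": {
        "source_ips": {"10.0.1.5"},
        "endpoints": {"/invoices", "/reports"},
    },
}

def is_anomalous(identity: str, source_ip: str, endpoint: str) -> bool:
    profile = BASELINE.get(identity)
    if profile is None:
        return True  # an identity with no baseline is itself suspicious
    return (source_ip not in profile["source_ips"]
            or endpoint not in profile["endpoints"])

# The same service account calling from an unfamiliar IP trips the detector.
print(is_anomalous("billing-svc", "203.0.113.9", "/invoices"))  # -> True
```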


7. Change Approval Workflow
Changes to NHIs, such as modified permissions or transfers between systems, should pass through a change-approval workflow before they take effect. Security and IT teams must assess and approve each change so that no unnecessary risk is introduced.

8. Exposure Monitoring and Rapid Response
Organisations must monitor for exposed NHIs, identifying and resolving vulnerabilities quickly. Automated monitoring solutions can detect exposed credentials or compromised APIs, raise alerts, and trigger incident response procedures before a malicious actor can act; a toy example of such a scanner follows.
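The sketch below is a toy regex-based scanner for the kinds of hardcoded secrets implicated in the Sisense breach; the patterns are illustrative, and production tools such as gitleaks or trufflehog use far larger rule sets plus entropy analysis.

```python
import re
from pathlib import Path

# Toy patterns for common hardcoded secrets. Production scanners use far
# larger rule sets plus entropy analysis to cut false negatives.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA|OPENSSH) PRIVATE KEY-----"),
    "generic_api_key": re.compile(
        r"api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]", re.IGNORECASE
    ),
}

def scan_repo(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, rule name) for each suspected secret."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            lines = path.read_text(errors="ignore").splitlines()
        except OSError:
            continue
        for lineno, line in enumerate(lines, start=1):
            for rule, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    hits.append((str(path), lineno, rule))
    return hits

for hit in scan_repo("."):
    print(hit)
```

Run against a repository like the one breached at Sisense, a scanner of this kind would have flagged the hardcoded SSH keys and API credentials before an attacker found them.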

The Business Case for NHI Management

Investing in proper NHI management produces large, long-term benefits. Companies can prevent data breaches that cost an average of $4.45 million per incident, protecting the bottom line. Streamlined NHI processes also free up scarce IT resources, letting security teams redirect their efforts toward strategic initiatives.

In industries with heavy compliance burdens, such as healthcare and finance, investment in NHI management often pays for itself through improved regulatory compliance. With a sound NHI management system in place, organisations can innovate more safely, knowing their digital identities are protected.

As businesses rely ever more on automation and the cloud, their security will rest on solid, well-rounded NHI management. A strong approach to NHI management goes a long way toward preventing security breaches and ensuring industry compliance. Such a posture not only protects data but positions the organisation as a long-term winner in a fast-changing digital world.