
The Strategic Imperatives of Agentic AI Security

In cybersecurity, agentic artificial intelligence is emerging as a transformative force, fundamentally changing how digital threats are perceived and handled. Unlike conventional AI systems that operate within predefined parameters, agentic AI systems can make autonomous decisions, interacting dynamically with digital tools, complex environments, other AI agents, and even sensitive data sets.

This shift marks a new paradigm in which AI is not only supporting decision-making but also initiating and executing actions independently in pursuit of its objectives. While this evolution brings significant opportunities for innovation, such as automated threat detection, intelligent incident response, and adaptive defence strategies, it also poses some of the field's most difficult challenges.

As powerful as agentic AI is for defenders, the same capabilities can be exploited by adversaries. If autonomous agents are compromised or misaligned with their objectives, they can act at scale with a speed and unpredictability that render traditional defence mechanisms inadequate. As organisations increasingly embed agentic AI into their operations, enterprises must adopt a dual security posture.

They need to harness the strengths of agentic AI to enhance their security frameworks while also preparing for the threats it poses. This demands a strategic rethink of cybersecurity principles around robust oversight, alignment protocols, and adaptive resilience mechanisms, so that the autonomy of AI agents is paired with controls of matching sophistication. In this new era of AI-driven autonomy, securing agentic systems has become more than a technical requirement.

It is a strategic imperative as well. The development lifecycle of agentic AI comprises several interdependent phases that together ensure the system is not only intelligent and autonomous but also aligned with organisational goals and operational needs. This structured progression makes agents more effective, reliable, and ethically sound across a wide variety of use cases.

The first critical phase, Problem Definition and Requirement Analysis, lays the foundation for all subsequent effort. In this phase, organisations must articulate a clear and strategic understanding of the problem space the AI agent is meant to address.

This means setting clear business objectives, defining the specific tasks the agent is required to perform, and assessing operational constraints such as infrastructure availability, regulatory obligations, and ethical considerations. A thorough requirements analysis streamlines system design, minimises scope creep, and avoids costly revisions in the later stages of deployment.

Additionally, this phase helps stakeholders align the AI agent's technical capabilities with real-world needs, enabling it to deliver measurable results. Next comes the Data Collection and Preparation phase, arguably the most vital component of the lifecycle. Whatever kind of agentic AI is being built, the system's intelligence is directly shaped by the quality and comprehensiveness of the data it is trained on.

At this stage, relevant datasets are collected from internal and trusted external sources. These datasets are meticulously cleaned, indexed, and transformed to ensure consistency and usability. To further strengthen model robustness, advanced preprocessing techniques such as augmentation, normalisation, and class balancing are employed to reduce biases and mitigate model failures.
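As a concrete illustration, the sketch below shows what two of those preprocessing steps, normalisation and class balancing, can look like with scikit-learn; the tiny dataset is fabricated purely for the example.

# Minimal preprocessing sketch: normalisation and class balancing.
# The data is a fabricated example with a binary label.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.utils.class_weight import compute_class_weight

X = np.array([[1200.0, 3.1], [400.0, 0.9], [980.0, 2.7], [150.0, 0.4]])
y = np.array([1, 0, 1, 0])

# Normalisation: zero mean, unit variance per feature.
X_scaled = StandardScaler().fit_transform(X)

# Class balancing: weight classes inversely to their frequency so a
# minority class is not drowned out during training.
weights = compute_class_weight(class_weight="balanced", classes=np.unique(y), y=y)
print(X_scaled, dict(zip(np.unique(y), weights)))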

A high-quality, representative dataset should be established as early as possible so the AI agent can function effectively across a variety of circumstances and edge cases. Together, these phases form the backbone of agentic AI development, grounding the system in real business needs and backing it with data that is dependable, ethical, and actionable. Organisations that invest in thorough upfront analysis and meticulous data preparation have a significantly greater chance of deploying agentic AI solutions that are scalable, secure, and aligned with long-term strategic goals.

The risks an agentic AI system poses are more than technical failures; they are deeply systemic in nature. Agentic AI is not a passive system that executes rules; it is an active system that makes decisions, takes action, and adapts as it learns. Although dynamic autonomy is powerful, it also introduces a degree of complexity and unpredictability that makes failures harder to detect until significant damage has been sustained.

Agentic AI systems differ from traditional software in that they operate independently and can evolve their behaviour over time, growing steadily more complex. OWASP's Top Ten for LLM Applications (2025) highlights how agents can be manipulated into misusing tools or storing deceptive information that undermines users' security. If not rigorously monitored, this very autonomy can become a source of danger.

In such situations, corrupted data can penetrate an agent's memory, so that future decisions are influenced by falsehoods. Over time, these errors may compound into cascading hallucinations, in which the system repeatedly generates credible but inaccurate outputs that reinforce and validate one another, making the deception increasingly difficult to detect.

Furthermore, agentic systems are susceptible to more traditional forms of exploitation, such as privilege escalation, in which an agent may impersonate a user or gain access to restricted functions without permission. In extreme scenarios, agents may even override their constraints, intentionally or unintentionally pursuing goals that do not align with the user's or organisation's interests. Addressing such deceptive behaviour is challenging, both ethically and operationally. Resource exhaustion is another pressing concern.

Agents can be overloaded by excessive task queues, which can exhaust memory, computing bandwidth, or third-party API quotas, whether through accident or malicious attack. When these problems occur, they not only degrade performance but can trigger critical system failures, particularly in real-time environments. The situation is worse still when agents are deployed on lightweight or experimental multi-agent control platforms (MCPs) that lack essential features such as logging, user authentication, or third-party validation mechanisms.
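One common guardrail against this resource-exhaustion failure mode is simple admission control in front of the agent's task queue. The sketch below is a minimal token-bucket limiter; it is illustrative rather than drawn from any particular agent framework, and the rates are arbitrary.

# Minimal token-bucket limiter for agent task admission (illustrative).
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec          # tokens refilled per second
        self.capacity = capacity          # burst ceiling
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate_per_sec=5, capacity=10)
task_queue = [f"task-{i}" for i in range(100)]
accepted = [t for t in task_queue if bucket.allow()]
print(f"accepted {len(accepted)} of {len(task_queue)} tasks")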

When agents run on such minimally instrumented platforms, tracking decision paths or identifying the root cause of failures becomes difficult or impossible, leaving security teams blind to the agents' internal behaviour as well as to external threats. As agentic AI continues to integrate into high-stakes environments, its systemic vulnerabilities must be treated as a core design consideration rather than a peripheral concern.

Ensuring that agents act in a transparent, traceable, and ethical manner is essential not only for safety but also for building the long-term trust needed for enterprise adoption. Several core functions give agentic AI systems their agency, enabling them to make autonomous decisions, behave adaptively, and pursue long-term goals. The essence of agentic intelligence is autonomy: agents operate without constant human oversight.

They perceive their environment through data streams or sensors, evaluate contextual factors, and execute actions in keeping with predefined objectives. Autonomous warehouse robots that adjust their paths in real time without human input, for example, demonstrate both situational awareness and self-regulation. Unlike reactive AI systems designed to respond to isolated prompts, agentic AI systems pursue complex, sometimes long-term goals without human intervention.

Guided by explicit instructions or reward signals, these agents can break down high-level tasks, such as organising a travel itinerary, into actionable subgoals that are dynamically adjusted as new information becomes available. To formulate step-by-step strategies, agents use planner-executor architectures and techniques such as chain-of-thought prompting or ReAct.
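To make this concrete, below is a minimal, schematic ReAct-style loop. The scripted call_llm stub and the "tool|argument" action format are hypothetical stand-ins, not any real framework's API; a production agent would add validation, logging, and guardrails around every step.

# Schematic ReAct-style agent loop (illustrative only).
_SCRIPTED_STEPS = iter([
    "search|agentic AI security",       # step 1: the "model" picks a tool
    "FINISH: summarised findings",      # step 2: the "model" answers
])

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM call: replays a canned script.
    return next(_SCRIPTED_STEPS)

TOOLS = {
    "search": lambda q: f"results for {q!r}",   # stub tool
}

def react_agent(goal: str, max_steps: int = 5) -> str:
    transcript = f"Goal: {goal}\n"
    for _ in range(max_steps):
        # The model interleaves a thought with an action, ReAct-style.
        step = call_llm(transcript + "Next thought and action?")
        if step.startswith("FINISH:"):
            return step.removeprefix("FINISH:").strip()
        tool, _sep, arg = step.partition("|")    # assumed "tool|argument" format
        handler = TOOLS.get(tool.strip(), lambda a: "unknown tool")
        transcript += f"Action: {step}\nObservation: {handler(arg.strip())}\n"
    return "step budget exhausted"

print(react_agent("summarise agentic AI security risks"))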

To optimise outcomes, the resulting plans may draw on graph-based search algorithms or simulate multiple future scenarios. Reasoning further enhances an agent's ability to assess alternatives, weigh trade-offs, and apply logical inference. Large language models are often used as the reasoning engine, allowing tasks to be decomposed and multi-step problem-solving to be supported. The final core function, memory, provides continuity.

By drawing on previous interactions, results, and context, often through vector databases, agents can refine their behaviour over time, learning from experience and avoiding redundant actions. Securing an agentic AI system requires more than incremental changes to existing security protocols; it demands a complete rethink of operational and governance models. A system capable of autonomous decision-making and adaptive behaviour must be treated as an enterprise entity in its own right.

Rigorous scrutiny, continuous validation, and enforceable safeguards are needed throughout the lifecycle of any influential digital actor, AI agents included. A robust security posture starts with controlling non-human identities. This means implementing strong authentication mechanisms alongside behavioural profiling and anomaly detection, so that impersonation and spoofing attempts are identified and neutralised before damage occurs.

Identity cannot stay static in dynamic systems; it must change with the agent's behaviour and role in the environment. The importance of securing retrieval-augmented generation (RAG) systems at the source cannot be overstated. Organisations need to enforce rigorous access policies over knowledge repositories, examine embedding spaces for adversarial interference, and continually evaluate the effectiveness of similarity-matching methods to avoid unintended data leaks or model manipulation.
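A minimal sketch of what "rigorous access policies over knowledge repositories" can mean in practice: retrieval filters candidates by the caller's entitlements before similarity matching runs, so restricted content never enters the candidate set. The embedding function, documents, and access labels are all hypothetical.

# ACL-filtered retrieval sketch for a RAG pipeline (illustrative).
import numpy as np

DOCS = [
    {"text": "public pricing FAQ",       "acl": {"everyone"}, "vec": np.array([0.9, 0.1])},
    {"text": "internal incident report", "acl": {"sec-team"}, "vec": np.array([0.2, 0.8])},
]

def embed(query: str) -> np.ndarray:
    # Placeholder; a real system would call an embedding model.
    return np.array([0.3, 0.7])

def retrieve(query: str, caller_groups: set[str], k: int = 1):
    # Enforce access control BEFORE similarity matching, so restricted
    # content never reaches this caller's candidate set.
    candidates = [d for d in DOCS if d["acl"] & caller_groups]
    q = embed(query)
    scored = sorted(
        candidates,
        key=lambda d: float(q @ d["vec"]) / (np.linalg.norm(q) * np.linalg.norm(d["vec"])),
        reverse=True,
    )
    return [d["text"] for d in scored[:k]]

print(retrieve("latest incident", caller_groups={"everyone"}))  # internal doc never surfaces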

Automated red teaming is essential to identifying and mitigating emerging threats, not just before deployment but continuously. It involves adversarial testing and stress simulations designed to expose behavioural anomalies, misalignment with intended goals, and configuration weaknesses in real time. Comprehensive governance frameworks must also be established to ensure the success of generative and agentic AI.

As part of this process, agent behaviour must be codified in enforceable policies, runtime oversight must be enabled, and detailed, tamper-evident logs must be maintained for auditing and lifecycle tracking. The shift towards agentic AI is more than a technological evolution; it represents a profound change in how decisions are made, delegated, and monitored. Adoption of these systems often outpaces the ability of traditional security infrastructures to adapt.

Without meaningful oversight, clearly defined responsibilities, and strict controls, AI agents could inadvertently or maliciously exacerbate risk rather than deliver on their promise. Organisations need to ensure that agents operate within well-defined boundaries, under continuous observation, aligned with organisational intent, and held to the same standards as human decision-makers.

The benefits of agentic AI are enormous, but so are the risks. To be truly transformative, these systems must be not just intelligent but also trustworthy and transparent, governed by rules as precise and robust as those they help enforce.

AI as a Key Solution for Mitigating API Cybersecurity Threats

Artificial Intelligence (AI) is continuously evolving and fundamentally changing the cybersecurity landscape, enabling organizations to mitigate vulnerabilities more effectively. While AI has improved the speed and scale at which threats can be detected and responded to, it has also introduced a range of complexities that necessitate a hybrid approach to security management.

Such an approach must combine traditional security frameworks with both human and AI-driven interventions. One of the biggest challenges AI presents is the expansion of the attack surface for Application Programming Interfaces (APIs). The proliferation of AI-powered systems raises questions about API resilience as threats grow increasingly sophisticated. As AI-driven functionality is integrated into APIs, security concerns have multiplied, driving the need for robust defensive strategies.

The security implications of the technology extend beyond APIs to the very foundation of Machine Learning (ML) applications and large language models. Many of these models are trained on highly sensitive datasets, raising concerns about privacy, integrity, and potential exploitation. When training data is handled improperly, unauthorized access, data poisoning, and model manipulation can all follow, further widening the security gap.

At the same time, artificial intelligence is leading security teams to refine their threat-modeling strategies even as it poses new security challenges. Using AI's analytical capabilities, organizations can enhance their predictive capabilities, automate risk assessments, and implement smarter security frameworks that adapt to a changing environment. This evolution pushes security professionals toward a proactive, adaptive approach to reducing potential threats.

Using artificial intelligence effectively while safeguarding digital assets requires an integrated approach that combines traditional security mechanisms with AI-driven solutions, ensuring an effective synergy between automation and human oversight. Enterprises must foster a comprehensive security posture that integrates both legacy and emerging technologies to stay resilient in the face of a changing threat landscape. While AI is an excellent tool for cybersecurity, its deployment demands a strategic, well-organized approach.

Building a robust and adaptive cybersecurity ecosystem requires addressing API vulnerabilities, strengthening training-data security, and refining threat-modeling practices. APIs are a major part of modern digital applications, enabling seamless data exchange between systems. However, their widespread adoption has also made them prime targets for cyber threats, exposing organizations to significant risks such as data breaches, financial losses, and service disruptions.

AI platforms and tools, such as OpenAI, Google's DeepMind, and IBM's Watson, have significantly contributed to advancements in several technological fields over the years. These innovations have revolutionized natural language processing, machine learning, and autonomous systems, leading to a wide range of applications in critical areas such as healthcare, finance, and business. Consequently, organizations worldwide are turning to artificial intelligence to maximize operational efficiency, simplify processes, and unlock new growth opportunities. 

While artificial intelligence is catalyzing progress, it also introduces potential security risks. Cybercriminals can manipulate the very technologies that enable industries, using them to orchestrate sophisticated cyber threats. AI is thus double-edged: AI-driven security systems can proactively identify, predict, and mitigate threats with extraordinary accuracy, yet adversaries can weaponize the same technologies to create highly advanced attacks such as phishing schemes and ransomware.

As AI continues to grow, its role in cybersecurity is becoming more complex and dynamic. Organizations need to take proactive measures against AI-enabled attacks by implementing robust frameworks that harness AI's defensive capabilities while mitigating its vulnerabilities. For a secure digital ecosystem that fosters innovation without compromising cybersecurity, AI technologies must be developed ethically and responsibly.

Application Programming Interfaces (APIs) are fundamental components of 21st-century digital ecosystems, enabling seamless interactions across industries such as mobile banking, e-commerce, and enterprise solutions. Their widespread adoption also makes them a prime target for cyber-attackers. Successful breaches can lead to data compromise, financial losses, and operational disruptions that pose significant challenges to businesses and consumers alike.

Pratik Shah, F5 Networks' Managing Director for India and SAARC, highlighted that APIs are an integral part of today's digital landscape. AIM reports that APIs account for nearly 90% of worldwide web traffic and that the number of public APIs has grown 460% over the past decade. Despite this rapid proliferation, APIs remain exposed to a wide array of cyber risks, including broken authentication, injection attacks, and server-side request forgery. According to Shah, the robustness of Indian API infrastructure significantly influences India's ambitions to become a global leader in the digital industry.

“APIs are the backbone of our digital economy, interconnecting key sectors such as finance, healthcare, e-commerce, and government services,” Shah remarked. He noted that during the first half of 2024, the Indian Computer Emergency Response Team (CERT-In) reported a 62% increase in API-targeted attacks. These incidents go beyond technical breaches; they represent substantial economic risks that threaten data integrity, business continuity, and consumer trust.

Beyond compromising sensitive information, these incidents have undermined business continuity and eroded consumer confidence. APIs will remain at the heart of digital transformation, which makes robust security measures critical to mitigating potential threats and protecting organisational integrity.


Indusface recently published an article on API security that underscores the seriousness of API-related threats. The report records 68% more attacks on APIs than on traditional websites. Distributed Denial-of-Service (DDoS) attacks on APIs rose 94% over the previous quarter, an astounding 1,600% more than website-based DDoS attacks.

Bot-driven attacks on APIs also increased by 39%, emphasizing the need for robust security measures to protect these vital digital assets. Meanwhile, Artificial Intelligence is transforming cloud security by enhancing threat detection, automating responses, and providing predictive insights that mitigate cyber risks.

Major cloud providers, including Google Cloud, Microsoft, and Amazon Web Services, employ AI-driven solutions to monitor security events, detect anomalies, and prevent cyberattacks.

These solutions include Google Chronicle, Microsoft Defender for Cloud, and Amazon GuardDuty. Challenges remain, however, including false positives, adversarial AI attacks, high implementation costs, and data-privacy concerns.

Despite these limitations, advances in self-learning AI models, security automation, and quantum computing are expected to elevate AI's role in cybersecurity even further. Businesses should deploy AI-powered security solutions to safeguard their cloud environments against evolving threats.

Weak Cloud Credentials Behind Most Cyber Attacks: Google Cloud Report

A recent Google Cloud report has identified a troubling trend: nearly half of all cloud-related attacks in late 2024 stemmed from weak or missing account credentials, seriously endangering businesses and giving attackers easy access to sensitive systems.


What the Report Found

The Threat Horizons Report, produced by Google's security experts, examined cyberattacks on cloud accounts. It found that poor credential management, such as weak passwords or the absence of multi-factor authentication (MFA), was the primary method of access, accounting for nearly 50% of all incidents Google Cloud analyzed.

Misconfigured cloud services were another factor, implicated in more than a third of all attacks. The report also noted a worrying volume of attacks on application programming interfaces (APIs) and user interfaces, at around 20% of incidents. Together, these findings point to several areas where cloud security is still left wanting.


How Weak Credentials Cause Big Problems

Weak credentials do not just unlock the door for attackers; they let them cause widespread destruction. In April 2024, for instance, over 160 Snowflake accounts were breached due to poor password practices. High-profile companies impacted included AT&T, Advance Auto Parts, and Pure Storage, and the incidents involved massive data leaks.

Attackers also hunt for accounts with excessive permissions, known as overprivileged service accounts. These make it even easier for hackers to push deeper into a network, harming multiple systems within an organization. Google concluded that more than 60 percent of all post-compromise attacker actions involve attempts to move laterally within systems.

The report warns that a single stolen password can trigger a chain reaction. Hackers can use it to take control of apps, access critical data, and even bypass security systems like MFA. This allows them to establish trust and carry out more sophisticated attacks, such as tricking employees with fake messages.


How Businesses Can Stay Safe

To prevent such attacks, organizations should focus on proper security practices. Google Cloud suggests using multi-factor authentication, limiting excessive permissions, and fixing misconfigurations in cloud systems. These steps will limit the damage caused by stolen credentials and prevent attackers from digging deeper.
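As one concrete, hedged illustration of "limiting excessive permissions": the sketch below uses the google-api-python-client to pull a project's IAM policy and flag service accounts bound to broad roles. The project ID is a placeholder, Application Default Credentials are assumed, and a real audit would cover far more roles and resources.

# Sketch: flag overprivileged service accounts in a GCP project's IAM policy.
# Illustrative only, not a complete audit tool.
from googleapiclient import discovery

BROAD_ROLES = {"roles/owner", "roles/editor"}   # rarely appropriate for service accounts

def find_overprivileged(project_id: str):
    crm = discovery.build("cloudresourcemanager", "v1")
    policy = crm.projects().getIamPolicy(resource=project_id, body={}).execute()
    findings = []
    for binding in policy.get("bindings", []):
        if binding["role"] in BROAD_ROLES:
            findings += [m for m in binding["members"] if m.startswith("serviceAccount:")]
    return findings

# Example (hypothetical project id):
# print(find_overprivileged("my-production-project"))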

This report is a reminder that weak passwords and poor security habits are not just small mistakes; they can lead to serious consequences for businesses everywhere.


Fake Invoices Spread Through DocuSign’s API in New Scam

Cyber thieves are abusing DocuSign's Envelopes API to send convincing fake invoices, complete with the branding of well-known companies such as Norton and PayPal. Because these messages are sent from a verified domain, DocuSign's own, they slip past traditional email security checks and arrive undetected.

How It Works

DocuSign is an electronic signing service that lets users send, sign, and manage documents digitally. Through the Envelopes API in its eSignature system, document requests can be sent, signed, and tracked entirely automatically. Attackers, however, discovered they could take advantage of this API: by setting up legitimate, paid DocuSign accounts, they gain access to the platform's templates and branding features. With these, they can create fake invoices that are almost indistinguishable from official ones issued by established companies.

These scammers use the "Envelopes: create" function to send enormous volumes of fake bills to huge recipient lists. The charges on the bills are usually realistic enough to appear legitimate. Attackers instruct the user to "sign" the document, then use the signed document to request payment. In other cases, attackers forward the "signed" documents directly to a finance department to complete the scam.
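For defensive context, "Envelopes: create" is an ordinary REST call. The sketch below shows roughly what a single request looks like against DocuSign's eSignature API; the sandbox host, token, account ID, and document contents are placeholders. The takeaway is how little code mass distribution requires once an account exists, which is why monitoring outbound envelope volume matters.

# Rough shape of a DocuSign eSignature "Envelopes: create" request,
# shown for defensive context. Host, token, and IDs are placeholders.
import requests

BASE = "https://demo.docusign.net/restapi/v2.1"   # developer sandbox host
ACCOUNT_ID = "<account-id>"
TOKEN = "<oauth-access-token>"

envelope = {
    "emailSubject": "Invoice for your recent order",
    "documents": [{
        "documentBase64": "<base64-encoded PDF>",
        "name": "invoice.pdf", "fileExtension": "pdf", "documentId": "1",
    }],
    "recipients": {"signers": [{
        "email": "recipient@example.com", "name": "Recipient",
        "recipientId": "1", "routingOrder": "1",
    }]},
    "status": "sent",   # "sent" dispatches the envelope immediately
}

resp = requests.post(
    f"{BASE}/accounts/{ACCOUNT_ID}/envelopes",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=envelope,
)
print(resp.status_code)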


Mass Abuse of the DocuSign Platform

According to security research firm Wallarm, this type of abuse has been ongoing for some time. The company noted that the scale of the exploitation is visible on DocuSign's customer forums, where users have posted complaints about constant spam and phishing emails from the DocuSign domain. "I'm suddenly receiving multiple phishing emails per week from docusign.net, and there doesn't seem to be an obvious way to report it," complained one user.

These complaints suggest abuse on such a large scale that the attackers' distribution of false invoices is very probably automated rather than done by hand.

Wallarm has reported the abuse to DocuSign, but it is not yet clear what steps, if any, DocuSign is taking to resolve the issue.


Challenges in Safeguarding APIs Against Abuse

The widespread abuse of the DocuSign Envelopes API shows how open access can compromise the security of API endpoints. Although the service is intended for verified businesses, attackers simply buy valid accounts and turn the API's features to malicious ends. Nor is DocuSign an isolated case: several other companies have seen their APIs abused in similar ways. Hackers have used APIs to validate millions of phone numbers associated with Authy accounts, to scrape information about millions of Dell customers, to match millions of Trello accounts with email addresses, and more.

The DocuSign case shows how platform abuse justifies stronger protections for digital services that expose access to sensitive tools. With API-based attacks now so widespread, firms like DocuSign may be forced to become more watchful and to tighten controls on the misuse of their products, particularly paid accounts in which users have full access to the available tools.


CrossBarking Exploit in Opera Browser Exposes Users to Extensive Risks

A new browser vulnerability called CrossBarking has been identified, affecting Opera users through “private” APIs that were meant only for select trusted sites. Browser APIs bridge websites with functionalities like storage, performance, and geolocation to enhance user experience. Most APIs are widely accessible and reviewed, but private ones are reserved for preferred applications. Researchers at Guardio found that these Opera-specific APIs were vulnerable to exploitation, especially if a malicious Chrome extension gained access. Guardio’s demonstration showed that once a hacker gained access to these private APIs through a Chrome extension — easily installable by Opera users — they could run powerful scripts in a user’s browser context. 
The malicious extension was initially disguised as a harmless tool, adding pictures of puppies to web pages. 

However, it also contained scripts capable of extensive interference with Opera settings. Guardio used this approach to hijack the settingsPrivate API, which allowed them to reroute a victim’s DNS settings through a malicious server, providing the attacker with extensive visibility into the user’s browsing activities. With control over the DNS settings, they could manipulate browser content and even redirect users to phishing pages, making the potential for misuse significant. Guardio emphasized that getting malicious extensions through Chrome’s review process is relatively easier than with Opera’s, which undergoes a more intensive manual review. 

The researchers, therefore, leveraged Chrome’s automated, less stringent review process to create a proof-of-concept attack on Opera users. CrossBarking’s implications go beyond Opera, underscoring the complex relationship between browser functionality and security. Opera took steps to mitigate this vulnerability by blocking scripts from running on private domains, a strategy that Chrome itself uses. However, they have retained the private APIs, acknowledging that managing security with third-party apps and maintaining functionality is a delicate balance. 

Opera’s decision to address the CrossBarking vulnerability by restricting script access to domains with private API access offers a practical, though partial, solution. This approach minimizes the risk of malicious code running within these domains, but it does not fully eliminate potential exposure. Guardio’s research emphasizes the need for Opera, and similar browsers, to reevaluate their approach to third-party extension compatibility and the risks associated with cross-browser API permissions.


This vulnerability also underscores a broader industry challenge: balancing user functionality with security. While private APIs are integral to offering customized features, they open potential entry points for attackers when not adequately protected. Opera’s reliance on responsible disclosure practices with cybersecurity firms is a step forward. However, ongoing vigilance and a proactive stance toward enhancing browser security are essential as threats continue to evolve, particularly in a landscape where third-party extensions can easily be overlooked as potential risks.


In response, Opera has collaborated closely with researchers and relies on responsible vulnerability disclosures from third-party security firms like Guardio to address any potential risks preemptively. Security professionals highlight that browser developers should consider the full ecosystem, assessing how interactions across apps and extensions might introduce vulnerabilities.

The Impact of Google’s Manifest V3 on Chrome Extensions

Google’s Manifest V3 rules have generated a lot of discussion, primarily because users fear they will make ad blockers, such as uBlock Origin, obsolete. This concern stems from the fact that uBlock Origin is heavily used and has been affected by these changes. However, it’s crucial to understand that the new rules don’t outright disable ad blockers, though they may impact some functionality. The purpose of Manifest V3 is to enhance the security and privacy of Chrome extensions. A significant part of this is limiting remote code execution within extensions, a measure meant to prevent malicious activities that could lead to data breaches.

This stems from incidents like DataSpii, where extensions harvested sensitive user data including tax returns and financial information. Google’s Manifest V3 aims to prevent such vulnerabilities by introducing stricter regulations on the code that can be used within extensions. For developers, this means adapting to new APIs, notably the WebRequest API, which has been altered to restrict certain network activities that extensions used to perform. While these changes are designed to increase user security, they require extension developers to modify how their tools work. Ad blockers like uBlock Origin can still function, but some users may need to manually enable or adjust settings to get them working effectively under Manifest V3.
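As a concrete illustration of the new model, network blocking under Manifest V3 is expressed as static declarativeNetRequest rules rather than arbitrary webRequest callbacks. The snippet below shows the shape of one such blocking rule, written as a Python literal purely for illustration; in an actual extension it would live in a JSON rules file referenced from manifest.json, and the ad domain is a placeholder.

# Shape of a Manifest V3 declarativeNetRequest blocking rule, shown as a
# Python literal for illustration; in an extension this is JSON on disk.
block_rule = {
    "id": 1,
    "priority": 1,
    "action": {"type": "block"},
    "condition": {
        "urlFilter": "||ads.example.com^",          # placeholder ad domain
        "resourceTypes": ["script", "image", "xmlhttprequest"],
    },
}
print(block_rule)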

Although many users believe that the update is intended to undermine ad blockers—especially since Google’s main revenue comes from ads—the truth is more nuanced. Google maintains that the changes are intended to bolster security, though skepticism remains high. Users can still run ad blockers such as uBlock Origin or switch to alternatives like uBlock Origin Lite, which complies with the new regulations. Additionally, users can choose other browsers like Firefox that do not have the same restrictions and can still run extensions under their older, more flexible frameworks. While Manifest V3 introduces hurdles, it doesn’t spell the end for ad blockers. The changes force developers to ensure that their tools follow stricter security protocols, but this could ultimately lead to safer browsing experiences.

If some extensions stop working, alternatives or updates are available to address the gaps. For now, users can continue to enjoy ad-free browsing with the right tools and settings, though they should remain vigilant in managing and updating their extensions. To further protect themselves, users are advised to explore additional options such as using privacy-focused extensions like Privacy Badger or Ghostery. For more tech-savvy individuals, setting up hardware-based ad-blocking solutions like Pi-Hole can offer more comprehensive protection. A virtual private network (VPN) with built-in ad-blocking capabilities is another effective solution. Ultimately, while Manifest V3 may introduce limitations, it’s far from the end of ad-blocking extensions. 

Developers are adapting, and users still have a variety of tools to block intrusive ads and enhance their browsing experience. Keeping ad blockers up to date and understanding how to manage extensions is key to ensuring a smooth transition into Google’s new extension framework.

Why Non-Human Identities Are the New Cybersecurity Nightmare

In April, business-intelligence company Sisense fell victim to a critical security breach that exposed a core vulnerability in managing non-human identities (NHIs). Hackers accessed the company's GitLab repository, which contained hardcoded SSH keys, API credentials, and access tokens. The incident opened the book on how indispensable NHIs have become in modern digital ecosystems, and why securing them is a must.

Unlike human users, NHIs, such as service accounts, cloud instances, APIs, and IoT devices, manage data flows and automate processes. With NHIs now far outnumbering human users in most enterprise networks, their security is crucial to preventing cyberattacks and ensuring business continuity.
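The kind of exposure behind the Sisense breach is detectable with even rudimentary scanning. Below is a minimal sketch that searches a repository checkout for two common credential patterns; the patterns are illustrative only, and a real pipeline would rely on a dedicated secret-detection tool in CI.

# Minimal secret-scanning sketch: flag likely hardcoded credentials.
# Patterns are illustrative; production scanning needs a dedicated tool.
import re
from pathlib import Path

PATTERNS = {
    "private key": re.compile(r"-----BEGIN (?:RSA|OPENSSH|EC) PRIVATE KEY-----"),
    "generic api key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"),
}

def scan(repo_root: str):
    for path in Path(repo_root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in PATTERNS.items():
            if pattern.search(text):
                print(f"{path}: possible {label}")

scan(".")   # scan the current checkout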

The Threat of Non-Human Identities

With thousands or even millions of NHIs in use within an organisation, it is no wonder cybercriminals are turning their attention to them. Digital identities are typically less well understood and less protected than human ones, which makes them an easy target. Data breaches involving NHIs are already becoming more widespread, especially as companies expand their use of cloud infrastructure and automation.

Healthcare and finance are particularly exposed because these industries operate under strict compliance regimes. Violating standards such as the Health Insurance Portability and Accountability Act (HIPAA) or the Payment Card Industry Data Security Standard (PCI DSS) can mean fines, reputational damage, and a loss of customer trust.

Why Secure NHIs?

As digital ecosystems grow ever more complex, the security of NHIs becomes all the more important. Companies are moving toward a "zero-trust" security model, in which no user, human or non-human, is trusted by default and every access request must be verified. This concept has proven especially effective in decentralised networks with large numbers of NHIs.
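As a minimal illustration of the "verify every request" principle, the sketch below checks each access request against an explicit allow-list and defaults to deny; the identities, actions, and policy table are hypothetical.

# Minimal zero-trust check: every request is verified against policy,
# whether the caller is human or non-human. Policy is illustrative.
POLICY = {
    ("svc-etl", "warehouse:read"):  True,
    ("svc-etl", "warehouse:admin"): False,
    ("alice",   "warehouse:read"):  True,
}

def authorize(identity: str, action: str) -> bool:
    # Default-deny: anything not explicitly granted is refused.
    return POLICY.get((identity, action), False)

for request in [("svc-etl", "warehouse:read"), ("svc-etl", "warehouse:admin")]:
    print(request, "->", "allow" if authorize(*request) else "deny")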

Locking down NHIs lets organisations control sensitive data, reduce unauthorised access, and stay compliant with regulation. As the Sisense case shows, poorly managed NHIs quickly become a gateway for cybercriminals.

Best Practices in Managing NHI

To secure non-human identities, organisations should adopt the following best practices:


1. Continuous Discovery and Inventory
Automated processes should maintain a live inventory of every NHI across the network, capturing details of its owner, permissions, usage patterns, and associated risks. This live catalogue strengthens control and monitoring over these digital identities.


2. Risk-Based Approach
Not all NHIs are the same. Some have access to highly sensitive information, while others only perform routine tasks. Companies should operate a risk-scoring system that weighs what each NHI can access, how sensitive that access is, and the impact if it were compromised.

3. Incident Response Action Plan
Security effort can then be allocated to the identities with the highest risk scores. Organisations should maintain a structured incident response plan tailored to NHIs, with pre-defined playbooks for breaches involving non-human identities. These playbooks should set out the stages of containment, mitigation, and resolution, as well as the communication protocols with stakeholders.

4. NHI Education Program
A good education program limits the security risks associated with NHIs. Developers should be trained in secure coding practices, including the dangers of hardcoded credentials, and operations teams in the proper rotation and monitoring of NHIs. Regular training keeps all employees aware of best practices.


5. Automated Lifecycle Management
NHIs should be instantiated, updated, and retired automatically, so that security policies are enforced at every stage of the identity lifecycle. This eliminates the human errors that leave unused or misconfigured NHIs open to exploitation.


6. Non-Human Identity Detection and Response (NHIDR)
NHIDR tools establish baseline behaviour patterns for NHIs and detect anomalies that could indicate a breach. With these tools, organisations can monitor NHI activity and respond quickly to suspicious behaviour, preventing further compromise (a minimal sketch of this baselining approach follows this list).


7. Change Approval Workflow
A change-approval workflow should be in place before changes to NHIs, such as permission updates or transfers between systems, take effect. Security and IT teams must assess and approve each change so that no unnecessary risk is introduced.

8. Exposure Monitoring and Rapid Response
Organisations must monitor NHIs for exposure, identifying and resolving vulnerabilities quickly. Automated monitoring solutions can find exposed credentials or compromised APIs, raise alerts, and initiate incident response procedures before a malicious actor can act.
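As referenced in item 6, the sketch below shows one way behavioural baselining can work: learn each identity's historical mean and spread for hourly API-call volume, then flag the latest hour when it deviates sharply. The data and threshold are illustrative.

# Minimal NHIDR-style baseline: flag NHIs whose hourly call volume
# deviates sharply from their own history. Data and threshold are invented.
from statistics import mean, stdev

history = {   # hourly API-call counts per non-human identity
    "svc-billing":   [110, 95, 102, 99, 105, 101],
    "svc-reporting": [20, 22, 19, 21, 23, 400],   # last hour spikes
}

def anomalies(history: dict[str, list[int]], z_threshold: float = 3.0):
    flagged = []
    for nhi, counts in history.items():
        baseline, spread = mean(counts[:-1]), stdev(counts[:-1])
        latest = counts[-1]
        if spread and abs(latest - baseline) / spread > z_threshold:
            flagged.append((nhi, latest, round(baseline, 1)))
    return flagged

print(anomalies(history))   # -> [('svc-reporting', 400, 21.0)]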

The Business Case for NHI Management

Investment in proper NHI management produces large, long-term benefits. Companies can prevent data breaches that cost an average of $4.45 million per incident, protecting the bottom line. Streamlined NHI processes also free up precious IT resources, redirecting security teams' effort toward strategic initiatives.

For heavily regulated industries such as healthcare and finance, NHI management investment often pays for itself through improved regulatory compliance. With a sound NHI management system, organisations can innovate more safely, knowing their digital identities are protected.

As businesses rely more and more on automation and the cloud, that reliance will rest on solid, well-rounded NHI management. A good approach to NHI management goes a long way toward preventing security breaches and ensuring industry compliance, and such a posture not only protects data but positions the organisation as a long-term winner in a fast-changing digital world.


Club Penguin Fans Target Disney Server, Exposing 2.5 GB of Internal Data

Club Penguin fans reportedly hacked a Disney Confluence server to collect information about their favourite game but ended up with 2.5 GB of internal corporate data instead. 

Club Penguin, which ran from 2005 until 2017, was a massively multiplayer online game (MMO) set in a virtual world where users could play games, take part in activities, and chat with one another. The game was produced by New Horizon Interactive, which Disney later purchased.

Although Club Penguin officially closed in 2017 and was replaced by Club Penguin Island, the original game lives on through private servers hosted by fans and independent developers. Despite Disney's opposition, most prominently against the 'Club Penguin Rewritten' replica, whose operators were arrested, private servers with thousands of players continue to exist today.

Earlier this week, an anonymous user posted a link to "Internal Club Penguin PDFs" on the 4Chan message board, with the simple statement, "I no longer need these:).” 

The link leads to a 415 MB collection of 137 PDFs containing old internal Club Penguin material, such as correspondence, design schematics, documentation, and character sheets. All of this data is at least seven years old, making it mainly of interest to fans of the game.

BleepingComputer has since learned that the Club Penguin files are only a small part of a much larger data set stolen from Disney's Confluence server, which houses documentation for the business, software, and IT initiatives Disney uses internally.

According to the source, Disney's Confluence servers were compromised using previously leaked credentials. The threat actors were initially looking for Club Penguin data but ended up collecting 2.5 GB of material on Disney's corporate strategies, advertising plans, Disney+, internal developer tools, commercial projects, and infrastructure.

The data includes documentation on a wide range of initiatives and projects, as well as information on internal developer tools Helios and Communicore, which were not previously made public.