
Critical OpenClaw Flaws Allow Persistent Access and Credential Abuse


 

OpenClaw, a self-hosted AI agent runtime that has seen rapid enterprise adoption, introduces a new class of security exposure: dynamically executed content, external skill integrations, and cloud-based authentication mechanisms converge without adequate defensive controls.

Unlike conventional applications built on fixed execution logic, OpenClaw accepts untrusted inputs, retrieves and executes third-party code modules, and interacts with connected environments using assigned credentials, extending the trust boundary far beyond the application layer itself. According to security researchers, this architectural flexibility, combined with the recently disclosed ClawJacked exploitation technique, exposes critical weaknesses in authentication handling and token protection within browser-based cloud development environments.

It has been demonstrated that malicious web content can exploit active developer sessions to extract sensitive access tokens, thereby granting attackers unauthorized access to source repositories, cloud infrastructures, and privileged enterprise resources. Increasingly, organizations are integrating cloud-native development platforms into their engineering workflows. This disclosure highlights concerns regarding privilege scoping, identity isolation, and other security aspects associated with autonomous AI-powered runtime environments.

Cyera researchers identified a coordinated vulnerability chain, collectively known as the "Claw Chain," demonstrating how multiple vulnerabilities within OpenClaw can be combined to compromise a system, gain unauthorized access to data, and escalate privileges across affected systems.

In particular, two vulnerabilities, CVE-2026-44113 and CVE-2026-44112, involve time-of-check/time-of-use (TOCTOU) race conditions in the OpenShell managed sandbox backend that could allow attackers to circumvent sandbox enforcement and interact with files outside of the mounted root.
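To make the TOCTOU pattern concrete, the sketch below simulates the race sequentially on a POSIX system: a sandbox validates a path, an attacker swaps in a symlink between the check and the use, and the subsequent open follows the link outside the sandbox root. The function and file names are hypothetical illustrations, not OpenClaw's actual implementation.

```python
import os
import tempfile

def is_inside_sandbox(path: str, root: str) -> bool:
    """Check step: resolve symlinks and verify the path stays under root."""
    return os.path.realpath(path).startswith(os.path.realpath(root) + os.sep)

root = tempfile.mkdtemp()            # stand-in for the sandbox root
outside = tempfile.mkdtemp()         # attacker-controlled directory
with open(os.path.join(outside, "secret.txt"), "w") as f:
    f.write("credentials")

link = os.path.join(root, "data.txt")
with open(link, "w") as f:           # benign file at check time
    f.write("harmless")

assert is_inside_sandbox(link, root)                    # 1. check passes

os.remove(link)                                         # 2. attacker wins the race:
os.symlink(os.path.join(outside, "secret.txt"), link)   #    swap in a symlink

with open(link) as f:                # 3. use follows the symlink outside root
    leaked = f.read()
print(leaked)                        # -> credentials
```

Closing the window requires resolving the path atomically at open time, for example via Linux's `openat2` with `RESOLVE_BENEATH`, rather than validating it in a separate step.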

The first issue permits arbitrary write operations that can lead to configuration changes, backdoor installation, and long-term control over compromised hosts; the second provides a pathway for disclosure of system artifacts, credentials, and sensitive internal data through unauthorized file reads.

Researchers also disclosed CVE-2026-44115, a vulnerability resulting from an incomplete denylist implementation that allows adversaries to conceal shell expansion tokens in heredoc payloads and execute commands that bypass runtime restrictions. 
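The sketch below illustrates why this class of denylist fails. It models a hypothetical naive filter, not OpenClaw's actual restriction logic, that scans only the command line for shell expansion tokens and treats heredoc bodies as inert data, even though an unquoted heredoc delimiter means the shell will expand those tokens when the command runs.

```python
import re

# Hypothetical naive denylist: block common shell expansion tokens, but
# only inspect the first line of the command. Heredoc bodies are skipped
# on the (wrong) assumption that they are plain data.
DENYLIST = [r"\$\(", r"`", r"\$\{"]

def command_allowed(cmd: str) -> bool:
    first_line = cmd.splitlines()[0]
    return not any(re.search(tok, first_line) for tok in DENYLIST)

# Direct substitution on the command line is caught:
assert not command_allowed("echo $(whoami)")

# But with an unquoted delimiter (<<EOF rather than <<'EOF'), the shell
# DOES expand $(...) inside the heredoc body, so hiding the token there
# slips past the filter:
payload = "cat <<EOF\n$(whoami)\nEOF"
assert command_allowed(payload)      # filter approves it anyway
print("denylist bypassed")
```

The durable fix is an allowlist or a full shell parse of the payload, since denylists rarely anticipate every context in which the shell performs expansion.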

A fourth vulnerability, CVE-2026-44118, introduces an improper access control condition in which non-owner loopback clients can impersonate privileged users to manipulate gateway configurations, alter scheduled cron operations, and gain greater control of execution environments. Collectively, these flaws demonstrate how insufficient isolation, weak privilege boundaries, and inadequate runtime validation in modern AI agent infrastructure can turn seemingly isolated weaknesses into a full compromise chain capable of sustaining stealthy, persistent access.

OpenClaw's permissive architecture and rapid adoption have transformed it from a niche automation framework into a widely deployed AI-driven orchestration environment, further amplifying the security implications.

In late 2025, Austrian engineer Peter Steinberger released a public version of the project that gained wide traction because of its ability to provide custom automation outside of tightly controlled commercial ecosystems. Rather than relying on vendor-defined integrations, the OpenClaw assistant allows users to develop, modify, and distribute executable "skills."

The result is a large repository containing thousands of community-developed automation scenarios without central management, categorization, or security validation. Its "self-hackability" design, in which configurations, memory stores, and executable logic live in user-modifiable local Markdown-based structures, has attracted both developer interest and growing scrutiny from security researchers concerned about the absence of hardened trust boundaries.

These concerns escalated when hundreds of OpenClaw administrative interfaces were discovered accessible over the internet without authentication. Investigations revealed that improperly configured reverse proxies could forward external traffic through localhost-trusted channels, causing the platform to mistakenly treat remote requests as privileged local connections.

Security researcher Jamieson O'Reilly demonstrated the severity of the issue by gaining access to sensitive assets such as credentials for Anthropic APIs, Telegram bot tokens, Slack environments, and archived conversations. Further research revealed that prompt injection attacks could be used to manipulate the agent to perform unintended behavior by embedding malicious instructions in emails, files, or web content processed by the underlying large language model. 

In one such scenario, Matvey Kukuy delivered crafted email payloads that coerced the bot into disclosing private cryptographic keys from the host environment after it was instructed to review inbox contents. Several independent experiments have demonstrated that the system can disclose confidential email data, expose the contents of home directories via automated shell commands, and search local storage automatically after receiving psychologically manipulative prompts.

In aggregate, these incidents illustrate an industry-wide concern: autonomous AI agents operating with wide filesystem visibility, persistent memory, and delegated execution privileges may be highly susceptible to indirect command manipulation when deployed without strict authentication controls, runtime isolation, and contextual validation.

Although no publicly verified link ties exploitation of the OpenClaw vulnerabilities to a known advanced persistent threat group, security analysts note that the operational characteristics of the attack align with tradecraft commonly used in credential theft, browser hijacking, and adversary-in-the-middle intrusion campaigns.

Analysts have drawn parallels to MITRE ATT&CK techniques T1185 (browser session hijacking) and T1557 (adversary-in-the-middle), both of which are frequently used in targeted attacks against enterprise authentication systems and cloud environments. Given the availability of public proof-of-concept exploit methods and the relatively low complexity required to weaponize these flaws, there is growing concern that financially motivated threat actors and state-aligned operators may incorporate the technique into broader intrusion toolsets.

All versions of OpenClaw and Clawdbot prior to 2026.2.2 are affected. Researchers stated that the updated version restricts unauthorized WebSocket interactions and enforces authentication checks on the exposed /cdp interface, which previously permitted unsafe assumptions about local trust.

While deploying immediate patches, security teams are advised to monitor for suspicious localhost WebSocket activity, unauthorized browser extension behaviors, and outbound communication attempts via ws://127.0.0.1:17892/cdp or infrastructure controlled by known attackers.
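A minimal triage sketch for the first of those checks might look like the following: flag any process with a connection to the local port named in the advisory. The connection records here are synthetic sample data; in practice they would come from `ss -tnp`, netstat output, or an EDR agent, and the process names are invented.

```python
SUSPECT_PORT = 17892          # the ws://127.0.0.1:17892/cdp endpoint

def flag_suspect(conns):
    """conns: iterable of (pid, process_name, remote_ip, remote_port).
    Returns the processes talking to the suspect local port."""
    return [
        (pid, name)
        for pid, name, ip, port in conns
        if ip == "127.0.0.1" and port == SUSPECT_PORT
    ]

# Synthetic snapshot of host connections for illustration:
sample = [
    (4312, "chrome", "142.250.72.14", 443),
    (5177, "unknown-helper", "127.0.0.1", 17892),   # worth investigating
    (6023, "sshd", "10.0.0.9", 22),
]
print(flag_suspect(sample))   # -> [(5177, 'unknown-helper')]
```

Localhost traffic rarely appears in network-perimeter telemetry, which is why this class of check has to run on the host itself.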

When rapid patching is an operational challenge, experts recommend that the OpenClaw browser extension be temporarily disabled, that host-level firewall restrictions be enforced around local WebSocket services, and that browser session telemetry and endpoint indicators of compromise be continuously reviewed to determine if there has been an unauthorized persistence of credentials or credential interception. 

OpenClaw's vulnerability chain reflects a broader security reckoning in the rapidly expanding AI agent ecosystem, where convenience-driven automation is outpacing the defensive safeguards designed to contain it. As autonomous assistants gain access to developer environments, authentication tokens, local storage, messaging platforms, and cloud infrastructure, the traditional boundaries between trusted execution and untrusted input are eroding.

Platforms with the ability to self-modify, delegate command execution, and persist contextual memory present significant security risks that are fundamentally different from conventional software, particularly when deployed with excessive privileges and inadequate isolation during runtime. 

Although OpenClaw's vulnerabilities can be mitigated by patching, access restrictions, and stronger authentication enforcement, the incident underscores the larger industry concern that AI-driven operational tools may soon become high-value targets for both cybercriminals and advanced intrusion groups.

These findings serve as a reminder that, as organizations adopt autonomous AI systems, security architecture, privilege segmentation, and continuous monitoring must no longer be overlooked.

Cybersecurity Can No Longer Be Left to IT Teams Alone, Experts Warn

 



As cyber attacks continue to grow in frequency and complexity, organizations are facing increasing pressure to rethink who should be responsible for protecting their systems, operations, and sensitive data. Security experts say cybersecurity is no longer simply an IT issue. Instead, it has become a business-wide responsibility that requires involvement from leadership teams, employees, and external security partners alike.

The discussion comes at a time when cyber threats are affecting organizations at an alarming scale. According to the UK Government’s Cyber Security Breaches Survey 2025/2026, 43% of businesses and 28% of charities reported experiencing cybersecurity breaches or attacks during the past year. The numbers were considerably higher among medium-sized businesses, where 65% faced incidents, and large enterprises, where the figure rose to 69%. High-income charities were also heavily targeted, with 34% reporting attacks.

Phishing continued to dominate as the most common threat. The survey found that 93% of affected businesses and 95% of impacted charities encountered phishing-related attacks. These scams often involve deceptive emails, fake websites, fraudulent login portals, or impersonation attempts designed to steal credentials and sensitive information. Other cyber threats, including malware infections and digital impersonation schemes, also remain a persistent concern for organizations.

The financial damage linked to cybercrime is equally significant. Research associated with cybersecurity company ESET estimated that cyber attacks cost UK businesses nearly £64 billion annually, highlighting the growing economic impact of digital threats.

With risks continuing to escalate, many organizations are reassessing who should oversee cybersecurity strategy and decision-making. Experts say there is no universal model, as responsibility often depends on a company’s size, structure, industry requirements, and risk exposure.

In smaller businesses, cybersecurity duties are frequently managed by IT managers or internal technology teams. However, industry specialists warn that relying solely on technical departments may create gaps between security planning and broader business objectives. As organizations expand, many experts believe cybersecurity leadership should move closer to executive management.

Durgan Cooper, director at CETSAT, emphasized that cybersecurity accountability should ultimately rest with senior leadership or board-level executives. According to Cooper, effective protection requires coordination between technical teams, company leadership, and third-party partners while ensuring that security priorities align with organizational goals.

Within larger enterprises, cybersecurity responsibilities are commonly led by Chief Information Security Officers, often working alongside Chief Information Officers and other senior executives. Spencer Summons, founder of Opliciti, stated that organizations need cybersecurity leaders capable of understanding evolving threats, communicating risks clearly to boards, and integrating security into long-term business planning. He also noted that sectors such as healthcare and finance face additional regulatory pressure that makes executive oversight even more important.

Cybersecurity professionals increasingly stress that protecting organizations cannot remain the responsibility of a single department. Matthew Riley, European Head of Information Security at Sharp Europe, recommended that businesses establish clear governance frameworks defining who is responsible for different security tasks. Many companies now rely on systems such as RACI matrices, which identify who is responsible, accountable, consulted, and informed during cybersecurity operations and incident response.
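A RACI matrix is easy to picture as a small data structure, as in the sketch below. The tasks, roles, and assignments are invented for illustration; the one structural rule worth encoding is that each task needs exactly one Accountable owner and at least one Responsible party.

```python
# Toy RACI matrix: task -> {role: code}, where codes are
# R (Responsible), A (Accountable), C (Consulted), I (Informed).
raci = {
    "patch management":  {"IT Ops": "R", "CISO": "A", "Legal": "I", "Board": "I"},
    "incident response": {"SOC": "R", "CISO": "A", "Legal": "C", "Board": "I"},
    "vendor assessment": {"Procurement": "R", "CISO": "A", "IT Ops": "C", "Board": "I"},
}

def validate(matrix):
    """Enforce the core RACI invariant: one Accountable, some Responsible."""
    for task, roles in matrix.items():
        codes = list(roles.values())
        assert codes.count("A") == 1, f"{task}: needs exactly one Accountable"
        assert "R" in codes, f"{task}: needs a Responsible party"

validate(raci)
print("RACI matrix is well-formed")
```

Writing the matrix down this explicitly is largely the point: ambiguity about who is Accountable is exactly the gap that slows decision-making during an incident.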

Experts caution that assigning cybersecurity entirely to IT departments may leave important business risks overlooked. At the same time, distributing responsibility too broadly can weaken accountability and slow decision-making during critical incidents. Instead, many specialists advocate a shared-responsibility culture where cybersecurity awareness is integrated across the entire organization.

The growing intensity of cyber attacks has also increased pressure on cybersecurity professionals themselves. Security teams are now managing ransomware campaigns, phishing attacks, supply chain compromises, and AI-assisted threats at an unprecedented pace, often with limited staffing and resources. Experts say spreading cybersecurity awareness and responsibilities throughout the organization can help reduce burnout while improving overall resilience.

Thom Langford, EMEA Chief Technology Officer at Rapid7, argued that cybersecurity must become part of every business function rather than remaining isolated within security teams. According to Langford, organizations are more resilient when employees across all levels actively participate in protecting systems and identifying suspicious activity.

Industry leaders also believe executive involvement plays a decisive role in cybersecurity effectiveness. Specialists from Qualys noted that Chief Information Security Officers should ideally report directly to CEOs or boards rather than operating solely under IT leadership. This structure helps organizations approach cybersecurity as a broader business risk issue instead of treating it purely as a technical challenge.

Alongside internal leadership, many businesses are increasingly turning to external cybersecurity providers for additional expertise and support. Outsourcing security operations can help companies address skill shortages and resource limitations, but experts warn that organizations must still maintain strategic oversight. Businesses are advised to conduct thorough vendor assessments, establish strong service-level agreements, and continuously monitor external providers to reduce operational risks.

Security specialists say outsourcing works most effectively when external consultants collaborate closely with internal teams instead of replacing them entirely. Maintaining internal visibility and control remains critical for ensuring cybersecurity strategies stay aligned with company objectives.

As cyber threats continue growing, experts increasingly agree that cybersecurity ownership cannot rest with one person alone. Effective security strategies require executive accountability, technical expertise, employee participation, and continuous collaboration across departments and external partners. Organizations that treat cybersecurity as a company-wide responsibility rather than a siloed IT function are likely to be better prepared for the growing challenges of the modern digital threat environment.

Indian Banks Step Up IT Spending Over AI Security Fears

 

Public sector banks are preparing to spend more on technology because a new wave of AI-driven cyber risk is making their existing systems look vulnerable. The main concern is Anthropic’s Claude Mythos, which has raised alarms for its ability to identify software weaknesses and potentially help attackers exploit them. 

Indian banks are being pushed to treat IT spending as a survival need, not just an operating cost. Senior bank executives have said they will raise budgets this financial year, with a large share going into cybersecurity, stronger defenses, and monitoring tools to reduce exposure to attacks. 

The issue is especially serious because banks depend on legacy systems that run critical operations in real time. One successful breach can ripple across payments, forex, clearing, depositories, and other linked financial networks, making the whole sector more exposed than a single institution might appear on its own.

The concern grew after Anthropic’s tests suggested Mythos could perform advanced cybersecurity and hacking-related tasks at a level that outpaced humans in some cases. Reports also noted that the model found thousands of high-severity vulnerabilities, which made regulators and bank leaders worry that similar tools could shorten the time between discovering a flaw and weaponizing it. 

In response, the government formed a panel under SBI Chairman C S Setty to study the risks and recommend safeguards. Finance Minister Nirmala Sitharaman has also urged banks to take pre-emptive measures, while institutions are expected to coordinate in the coming weeks to identify weak points and decide where additional investment is needed.

Axon Police Taser and Body Camera Bluetooth Flaw Raises Officer Tracking Concerns

 

Australian police may unknowingly be exposing their live locations through Bluetooth-enabled devices made by Axon. Researchers discovered that body cameras and tasers used across the country broadcast signals without modern privacy protections, potentially allowing anyone nearby to detect and track officers in real time. 

Unlike smartphones that randomize Bluetooth MAC addresses to prevent tracking, Axon devices reportedly use static identifiers. This means simple apps or laptops can detect nearby police equipment and reveal device details, coordinates, and movement patterns. 
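The tracking logic this enables is simple enough to sketch. Below, synthetic advertisement sightings are grouped by address: a phone that randomizes its MAC shows up as many short-lived addresses, while a device with a static identifier persists across the whole observation window and is therefore trackable. All timestamps and addresses are invented sample data, not captures from Axon hardware.

```python
from collections import defaultdict

sightings = [                      # (minute_seen, mac_address)
    (0, "AA:BB:CC:00:11:22"), (30, "AA:BB:CC:00:11:22"),
    (60, "AA:BB:CC:00:11:22"), (90, "AA:BB:CC:00:11:22"),  # static device
    (0, "5E:12:9F:41:07:3C"),      # phone rotating its address
    (20, "7A:88:02:DD:56:10"),
    (45, "4F:61:C3:0B:99:A2"),
]

# Record the first and last time each address was observed.
first_last = defaultdict(lambda: [float("inf"), float("-inf")])
for t, mac in sightings:
    span = first_last[mac]
    span[0], span[1] = min(span[0], t), max(span[1], t)

# Addresses visible longer than a typical rotation interval (~15 min
# on modern phones) can be followed over time.
trackable = [mac for mac, (a, b) in first_last.items() if b - a > 15]
print(trackable)                   # -> ['AA:BB:CC:00:11:22']
```

This is also why the flaw is hard to patch in software alone: as long as the radio broadcasts a fixed identifier, any passive scanner can correlate sightings this way.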

A security researcher demonstrated the issue in Melbourne using publicly available Android software capable of identifying Axon devices. Custom tools reportedly extended the tracking range to nearly 400 meters, raising concerns for undercover officers, tactical teams, and police returning home after shifts. 

Experts warn criminal groups could deploy low-cost Bluetooth scanners across neighborhoods to monitor police activity, detect raids, or map officer movement in real time. The flaw has reportedly been known since 2024, when warnings were sent to police agencies, ministers, federal authorities, and national security offices urging immediate action. 

Internal reviews within Victoria Police reportedly acknowledged the threat and recommended protections for covert units. However, after discussions with Axon, the issue was later downgraded internally. Victoria Police later stated there had been no confirmed cases of officers being tracked through the devices. Police agencies across New South Wales, Queensland, Western Australia, South Australia, Tasmania, the Northern Territory, and the Australian Federal Police were also informed of the vulnerability. 

Most declined to explain whether officers were warned or if safeguards had been introduced. Researchers believe the flaw stems from hardware design rather than software alone, making simple patches unlikely to fully resolve the problem. Fixing it may require redesigning core system components entirely. 

Axon has acknowledged on its security pages that its cameras emit detectable Bluetooth and Wi-Fi signals and advises customers to consider operational risks before deployment in sensitive situations. Critics argue these warnings remain buried in technical documentation instead of being clearly communicated to frontline officers. 

The issue highlights growing concerns about modern policing’s dependence on connected technology. As law enforcement increasingly relies on wireless devices, AI systems, and cloud-based tools, small cybersecurity flaws can quickly become serious operational and physical safety risks.

AI Chatbot Training Raises Growing Privacy and Data Security Concerns

 

Most conversations with AI chatbots carry hidden layers behind simple replies. While providing answers, some firms quietly collect these exchanges to refine their machine learning models, so personal thoughts, job-related facts, or private topics can slip into the data pools shaping tomorrow's algorithms. Experts studying digital privacy point out that people rarely notice how freely they share in routine chatbot conversations. Most chatbots rely on what experts call a large language model.

Through exposure to massive volumes of text - pulled from sites, online discussions, video transcripts, published works, and similar open resources - these models grow sharper. Exposure shapes their ability to spot trends, suggest fitting answers, and produce dialogue resembling natural speech. As their learning material expands, so does their skill in managing complex questions and forming thorough outputs. Wider input often means smoother interactions. 

Still, public text is not the only material that fills these models. Input from everyday app users now supplies just as much raw data to the tech firms building artificial intelligence. Each message entered into a conversational program may later be saved, studied, and applied to sharpen how future versions respond. Often that process runs by default, pausing only if someone actively adjusts their preferences or opts out when given the chance. Worries about digital privacy keep rising.

Talking to artificial intelligence systems means sharing intimate details - things like medical issues, money problems, mental health, job conflicts, legal questions, or relationship secrets. Even though firms say data gets stripped of identities prior to being used in machine learning, skeptics point out people must rely on assurances they can’t personally check. 

Some data marked as private today might lose that status later. Experts who study system safety often point out how new tools or pattern-matching tricks could link disguised inputs to real people down the line. Talks involving personal topics kept inside artificial intelligence platforms can thus pose hidden exposure dangers years after they happen. Most jobs now involve some form of digital tool interaction. 

As staff turn to AI assistants for tasks like interpreting files, generating scripts, organizing data tables, composing summaries, or solving tech glitches, risks grow quietly. Information meant to stay inside - such as sensitive project notes, client histories, budget figures, unique program logic, compliance paperwork, or strategic plans - can slip out without warning. When typed into an assistant interface, those fragments might linger in remote servers, later shaping how the system responds to others. Hidden patterns emerge where private inputs feed public outputs. 

One concern among privacy experts involves possible legal risks for firms in tightly controlled sectors. When companies send sensitive details - like internal strategies or customer records - to artificial intelligence tools without caution, trouble might follow. Problems may emerge later, such as failing to meet confidentiality duties or drawing attention from oversight authorities. These exposures stem not from malice but from routine actions taken too quickly. 

Because reliance on AI helpers keeps rising, people and companies must reconsider what details they hand over to chatbots. Speedy answers tend to push aside careful thinking, particularly when automated aids respond quickly with helpful outcomes. Still, specialists insist grasping how these learning models are built matters greatly - especially for shielding private data and corporate secrets amid expanding artificial intelligence use.

Maryland’s New Grocery Pricing Rules Leave Critics Unconvinced


 

Despite the increasing acceptance of algorithmic pricing systems in today's retail ecosystem, Maryland has taken action to establish the first statewide legal ban on grocery pricing that incorporates consumer surveillance data. 

By signing House Bill 895 into law on April 28, 2026, Governor Wes Moore established a regulatory framework restricting how food retailers and third-party delivery platforms may use personal data to influence consumer costs.

The Act is formally titled the Protection From Predatory Pricing Act. Specifically, this legislation addresses the use of artificial intelligence-driven pricing engines and behavioral analytics that may adjust prices according to factors such as purchase history, browser activity, geographical location, and demographic traits. 

The law, framed by state officials as an effective consumer protection measure against profit optimization powered by data, prohibits large food retailers, qualified delivery service providers, and others operating stores over 15,000 square feet from imposing higher prices on consumers based upon individual data signals. Supporters see the measure as a significant step in responding to the increasing commercialization of consumer data, but critics claim that the measure’s limited scope and enforcement structures may significantly erode its practical significance.

The Maryland approach is being closely examined as a possible template for pricing regulation in the future by policymakers and industry stakeholders throughout the United States. The debate is centered on the increasing use of surveillance-based dynamic pricing systems that continuously adjust product costs based on an analysis of the consumer’s digital footprint as well as their purchasing patterns, geographic location, and demographics. These models may result in completely different prices for the same grocery item if two shoppers purchase the item within minutes of each other. The results are determined by algorithms that analyze shoppers' perceived purchase tolerance.

Consumer advocates and competition analysts contend that such practices shift pricing strategy away from traditional market factors and toward individualized revenue extraction, enabling businesses to identify and charge the highest amount that a specific customer is statistically likely to accept.
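A toy model makes the mechanism concrete: the same shelf item produces different totals depending on an inferred willingness-to-pay score. All signals, weights, and prices below are invented for illustration and are not drawn from any real retailer's system.

```python
BASE_PRICE = 4.00   # shelf price of one grocery item, in dollars

def personalized_price(base, profile):
    """Adjust the shelf price by a score inferred from behavioral signals."""
    score = (
        0.10 * profile.get("premium_purchases", 0)     # buys premium brands
        + 0.05 * profile.get("low_price_sensitivity", 0)
        - 0.08 * profile.get("coupon_usage", 0)        # hunts for discounts
    )
    return round(base * (1 + score), 2)

shopper_a = {"premium_purchases": 1, "low_price_sensitivity": 1}
shopper_b = {"coupon_usage": 1}

print(personalized_price(BASE_PRICE, shopper_a))  # -> 4.6
print(personalized_price(BASE_PRICE, shopper_b))  # -> 3.68
```

Maryland's statute targets exactly this kind of adjustment from individual data signals, which is also why critics worry that segment-level variants of the same arithmetic may fall outside its reach.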

Although Maryland's legislation is tailored specifically to the grocery sector, federal regulators, including the Federal Trade Commission, have previously identified similar pricing mechanisms across retail categories including apparel, cosmetics, home improvement products, and consumer goods.

Several advocacy groups argue the impact is even more significant in food retail, where pricing volatility directly affects household affordability and access to essentials. After committee-level debates over enforcement language and consumer protection standards, the legislation quickly gained momentum, culminating in Senate approval on March 23, 2026, followed by final House concurrence after several weeks of sustained industry lobbying.

By signing HB 895 on April 28, Governor Wes Moore made Maryland the first state to prohibit discriminatory surveillance-driven grocery pricing practices. As the state's Attorney General prepares interpretive guidance later this summer, retailers and third-party delivery platforms will have a five-month compliance window before the statute takes effect on October 1, 2026.

While the legislation has received broad bipartisan support, the accelerated legislative process has left unresolved compliance and evidentiary questions that industry stakeholders are now seeking to clarify. In Maryland, enforcement authority is primarily delegated to the Maryland Consumer Protection Division and the Attorney General, where violations can be prosecuted as unfair and deceptive trade practices subject to civil penalties of up to $10,000 per violation, with repeat offenses subject to double fines. 

Furthermore, the law provides that individuals may face misdemeanor penalties, including imprisonment for up to a year and a fine of up to $1,000. It also gives businesses accused of violations 45 days to remedy the alleged misconduct before formal enforcement, which critics claim could substantially lessen its deterrent effect.

Because private rights of action are limited to narrow labor-related circumstances, early legal interpretations are expected to be shaped primarily by state-led enforcement actions that determine whether algorithmic pricing decisions draw on protected categories of personal information.

Regulatory specialists anticipate that the forthcoming guidance will clarify the evidentiary standards necessary to establish data-driven pricing manipulation, particularly when it involves opaque artificial intelligence systems and automated pricing engines. For retailers with mature compliance programs, financial penalties are likely to remain manageable. However, legal observers note that reputational damage, regulatory scrutiny, and the erosion of consumer trust may ultimately prove more consequential than statutory fines.

Labor unions, consumer advocacy organizations, and digital rights analysts have intensified the debate over Maryland's surveillance pricing law, arguing that the legislation contains significant operational gaps that retailers could exploit through sophisticated pricing strategies. The United Food and Commercial Workers International Union has already launched public awareness campaigns, including a 30-second advertisement illustrating how algorithmic pricing systems could reshape grocery shopping based on predictions of consumer behavior.

The advocacy groups maintain that despite the statute's significant legal precedent, the exemptions and enforcement structure may ultimately permit the continuation of many forms of data-driven price discrimination. Before the bill was enacted, Consumer Reports researchers had warned lawmakers about the bill's weaknesses, arguing that it lacks a clear baseline price standard against which discriminatory pricing could be measured.

Policy analysts have suggested that this omission creates a situation where nearly any fluctuating price could be framed as a promotional discount rather than a targeted surcharge. Criticism has also focused on the law's narrow restriction of individualized pricing while still permitting hyper-segmented models that sort consumers into highly specific groups based on demographics or behavioral characteristics. There is growing consensus among consumer advocates that pricing aimed at narrowly defined groups - such as elderly individuals living alone in limited retail markets - can produce outcomes similar to direct targeting of individual consumers.

The broad exemptions granted to loyalty programs, membership pricing structures, subscription-based purchases, and recurring service models are also being criticized as providing retailers with alternative mechanisms for deploying surveillance-based pricing systems that would not technically violate the law. 

Maryland's legislation has sparked widespread national interest, as at least a dozen states, including New York, New Jersey, and Illinois, are considering similar restrictions on algorithmic price personalization. According to consumer rights advocates, the Maryland experience serves as an early regulatory stress test that may guide how future state legislatures address the intersection of artificial intelligence, behavioral analytics, and retail pricing governance. 

Some critics of the current framework, such as consumer advocate Oyefeso, contend that it risks legitimizing more extensive surveillance-based pricing practices by signaling to retailers that some forms of algorithmic personalization remain legal. Supporters of stronger reforms, however, believe the legislation may be revisited in subsequent sessions as lawmakers grapple with the practical realities of enforcing transparency and accountability in increasingly opaque AI-driven pricing environments. 

Maryland's regulation of surveillance pricing marks a significant shift in the broader debate about how artificial intelligence, consumer data, and algorithmic commerce should be governed in essential retail markets. Critics argue that the law's exemptions, cure periods, and enforcement limitations may reduce its immediate effectiveness; even so, the legislation has already set a national precedent by forcing policymakers, retailers, and technology companies to confront the ethical and regulatory implications of data-driven price personalization. 

Maryland's framework may serve as both a cautionary example and a basis for future policies relating to the protection of consumers from algorithmic pricing as more states consider similar measures and consumer scrutiny over algorithmic pricing increases. 

A growing number of grocery retailers and delivery platforms have become aware that pricing systems that use behavioral analytics and artificial intelligence will no longer be exempt from regulatory oversight, particularly when affordability, transparency, and public trust are at stake.

India’s Cybersecurity Workforce Struggles to Keep Pace as AI and Cloud Systems Expand

 



India’s fast-growing digital economy is creating an urgent demand for cybersecurity professionals, but companies across the country are finding it increasingly difficult to hire people with the technical expertise required to secure modern systems.

A new study released by the Data Security Council of India and SANS Institute found that businesses are facing a serious shortage of skilled cybersecurity workers as technologies such as artificial intelligence, cloud computing, and API-driven infrastructure become more deeply integrated into daily operations.

According to the Indian Cyber Security Skilling Landscape Report 2025–26, nearly 73 per cent of enterprises and 68 per cent of service providers said there is a limited supply of qualified cybersecurity professionals in the country. The report suggests that organisations are struggling to build teams capable of handling increasingly advanced cyber risks at a time when companies are rapidly digitising services, storing more information online, and adopting AI-powered tools.

The hiring process itself is also becoming slower. Around 84 per cent of organisations surveyed said cybersecurity positions often remain vacant for one to six months before suitable candidates are found. This delay reflects a growing mismatch between industry expectations and the skills available in the job market.

Researchers noted that many applicants entering the cybersecurity workforce lack practical exposure to real-world security environments. Around 63 per cent of enterprises and 59 per cent of service providers said candidates often do not possess sufficient hands-on technical experience. Employers are no longer only looking for basic security knowledge. Companies increasingly require professionals who understand multiple areas at once, including cloud infrastructure, application security, digital identity systems, and access management technologies. Nearly 58 per cent of enterprises and 60 per cent of providers admitted they are struggling to find candidates with this type of cross-functional expertise.

The report connects this shortage to the changing structure of enterprise technology systems. Many organisations are moving away from traditional on-premise setups and shifting toward cloud-native environments, interconnected APIs, and AI-supported operations. As businesses automate more routine tasks, demand is gradually moving away from entry-level operational positions and toward specialised cybersecurity roles that require analytical thinking, threat detection capabilities, and advanced technical decision-making.

Artificial intelligence is now becoming one of the largest drivers of cybersecurity hiring demand. Around 83 per cent of organisations surveyed described AI and generative AI security skills as essential for future operations, while 78 per cent reported strong demand for AI security engineers. The findings also show that nearly 62 per cent of enterprises are already running active AI or generative AI projects, which experts say can create additional security risks if systems are not properly monitored and protected.

As companies deploy AI systems, the attack surface for cybercriminals also expands. Security teams are now expected to defend AI models, protect sensitive datasets, monitor automated systems for manipulation, and secure APIs connecting multiple digital services. Industry experts have repeatedly warned that many organisations are adopting AI tools faster than they are building security frameworks around them.

Some cybersecurity positions remain especially difficult to fill. The report found that almost half of service providers and nearly 40 per cent of enterprises are struggling to recruit security architects, professionals responsible for designing secure digital infrastructure and long-term defence strategies. Demand is also increasing for specialists in operational technology and industrial control system security, commonly known as OT/ICS security. These professionals help protect critical infrastructure such as manufacturing facilities, power systems, transportation networks, and industrial operations from cyberattacks.

At the same time, companies are facing growing retention problems. Around 70 per cent of service providers and 42 per cent of enterprises said employees are frequently leaving for competitors offering better salaries and career opportunities. Limited access to advanced training and upskilling programs is also contributing to workforce attrition across the sector.

The findings point to a larger issue facing the cybersecurity industry globally: technology is evolving faster than workforce development. Experts believe companies, educational institutions, and training organisations may need to work more closely together to create industry-focused learning pathways that prepare professionals for modern cyber threats instead of relying heavily on theoretical instruction alone.

With India continuing to expand digital public infrastructure, cloud adoption, fintech services, AI development, and connected industrial systems, cybersecurity professionals are expected to play a central role in protecting sensitive information, maintaining operational stability, and preserving trust in digital platforms.

Ransomware Attacks Reach All Time High, Leaked Over 2.6 Billion Records

 

A recent analysis of 2025 cybercrime data disclosed that ransomware victims rose sharply, up 45% on the previous year. Yet something more dangerous lurks behind that headline figure: attackers' growing reliance on stolen credentials as their primary entry point. Regardless of the platforms you use or the accounts you are trying to protect, it is high time to start paying attention to password security. 

State of Cybercrime 2026 report


The report from KELA identified over 2.86 billion compromised credentials, including passwords, session cookies, and other data that can be used to bypass 2FA. Notably, authentication services and business cloud platforms accounted for over 30% of the data leaked in 2025.

The analysis also revealed that credential-stealing infostealer malware is indifferent to which OS you are running: “infections on macOS devices increased from fewer than 1,000 cases in 2024 to more than 70,000 in 2025, a 7,000% increase,” the report said.

Expert advice


Security experts writing in Forbes have warned users about the risks of infostealer malware countless times, covering everything from FBI operations aimed at shutting down cybercrime gangs to millions of Gmail passwords surfacing in leaked infostealer logs. Despite that attention, the KELA analysis shows the risk continues, and the damage is increasing year after year.

About infostealer


KELA defines the malware as software “designed to exfiltrate sensitive data from compromised machines, including login credentials, authentication tokens, and other critical account information.” What is more troublesome is the near-universal availability of malware-as-a-service operations on the dark web: the barrier to entry has not merely been lowered but kicked wide open, for expert and amateur threat actors alike.

Data compromise in billions

In 2025, KELA observed around “3.9 million unique machines infected with infostealer malware globally, which collectively yielded 347.5 million compromised credentials.” The grand total reaches 2.86 billion hacked credentials across all sources: infostealer log databases and dark web criminal marketplaces.

Tricks used by infostealers:


Phishing-as-a-Service operations deliver AI-generated, tailored scams over email and messaging apps, frequently getting around MFA. In so-called "hack your own password" attacks, users are duped into manually running scripts in order to circumvent conventional security measures.

Malicious advertisements and poisoned search results promote trojanized software, increasing the risk of infection. In supply chain attacks, poisoned packages and DevTools impersonation target high-privilege credentials. Compromised browser extension updates enable form-grabbing and cookie theft. Fake software updates and pirated apps remain effective as well.

OpenAI Codex Bug Leads to GitHub Token Breach

 

In March 2026, researchers from BeyondTrust showed that a crafted GitHub branch name was enough to steal Codex’s OAuth token in cleartext. OpenAI classified the bug as “Critical P1”. Soon after, Anthropic’s Claude Code source code leaked into the public npm registry, and Adversa researchers found that Claude Code silently ignored its own deny rules once a prompt chained more than 50 subcommands.

Malicious code in AI agents

These were not isolated vulnerabilities. They were the latest in a nine-month run of disclosures in which six research teams revealed exploits against Copilot, Vertex AI, Codex, and Claude Code. Every exploit followed the same strategy: an AI agent held a credential, performed an action, and authenticated to a production system without any human session backing the request.

The attack surface was first showcased at Black Hat USA 2025, where experts hacked ChatGPT, Microsoft Copilot Studio, Gemini, Cursor, and many more, live on stage, with zero clicks. Nine months later, threat actors were abusing those same credentials.

How a branch name in Codex compromised GitHub


Researchers at BeyondTrust found that Codex cloned repositories using a GitHub OAuth token embedded in the git remote URL. During cloning, the branch name was passed unsanitized into a setup script, allowing attacker-controlled data to reach the shell. A backtick subshell and a semicolon were enough to turn a branch name into an exfiltration payload.
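BeyondTrust did not publish the vulnerable code, so the following minimal Python sketch is a hypothetical reconstruction of the bug class, not OpenAI's actual implementation. It shows how interpolating an attacker-controlled branch name into a shell command string lets an embedded backtick subshell execute (a harmless `echo` stands in for the real token-exfiltration payload), while passing the same name as a list argument keeps it inert:

```python
import subprocess

# Attacker-controlled branch name containing a backtick subshell (illustrative;
# the real payload exfiltrated the GitHub OAuth token instead of echoing).
branch = "main`echo INJECTED`"

# Unsafe: the branch name is interpolated into a shell command string, so
# /bin/sh evaluates the backticks and the embedded command runs.
unsafe = subprocess.run(f"echo checking out {branch}", shell=True,
                        capture_output=True, text=True).stdout.strip()
print(unsafe)  # checking out mainINJECTED

# Safe: arguments are passed as a list, so no shell ever parses the branch
# name and the backticks remain literal text.
safe = subprocess.run(["echo", f"checking out {branch}"],
                      capture_output=True, text=True).stdout.strip()
print(safe)  # checking out main`echo INJECTED`
```

The fix pattern is the same one OpenAI's patch would have to follow in spirit: treat branch names as data, never as shell syntax.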

About the bug


The vulnerability affects the ChatGPT website, Codex CLI, Codex SDK, and the Codex IDE Extension. All reported issues have since been fixed in collaboration with OpenAI's security team.

This vulnerability allows an attacker to inject arbitrary commands through the GitHub branch name parameter, potentially leading to the theft of a victim's GitHub User Access Token - the same token Codex uses to authenticate with GitHub - through automated techniques.

Vulnerability impact


Because the injection travels through shared repositories, the vulnerability can scale to compromise many users interacting with the same environment or GitHub repository through automated means.

“OpenAI Codex is a cloud-based coding agent, accessible through ChatGPT. It allows users to point the tool toward a codebase and submit tasks through a prompt. Codex then spins up a managed container instance to execute these tasks—such as generating code, answering questions about a codebase, creating pull requests, and performing code reviews against the selected repository,” said BeyondTrust.

Spotify Verified Badge Targets AI Music Confusion as Human Artist Authentication Expands

 

Now appearing beside artist profiles, Spotify’s new “Verified by Spotify” badge uses a green checkmark to highlight real human creators. Only accounts meeting the platform’s internal authenticity checks receive the label. Rather than algorithm-built personas, these profiles represent actual musicians behind the music. The rollout is happening gradually, changing how artists appear in searches, playlists, and recommendations. 

The update arrives as concerns continue growing around AI-generated music flooding streaming services. Spotify says verification depends on signals such as active social media accounts, consistent listener activity, merchandise listings, and live performance schedules - indicators suggesting a genuine person is tied to the profile. 

According to the company, these measures are designed to separate human creators from automated content increasingly appearing online.  Spotify says most artists users actively search for will eventually receive verification. Artists recognized for meaningful contributions to music culture are expected to be prioritized ahead of bulk-uploaded or mass-generated accounts. 

Over the coming weeks, the checkmarks will gradually appear across the platform, with influence and authenticity carrying more weight than upload volume. The move comes as streaming platforms face mounting criticism over how they handle AI-generated tracks. While the badge confirms a profile belongs to a real person, some critics quickly pointed out that it does not indicate whether artificial intelligence was used to help create the music itself. 

Questions around what counts as “real” music continue growing as AI tools become more involved in production. Creator-rights advocate and former AI executive Ed Newton-Rex warned that systems like Spotify’s may unintentionally disadvantage independent musicians who do not tour, sell merchandise, or maintain strong social media visibility. 

Instead, he suggested platforms should directly label AI-generated songs rather than relying solely on artist verification. Experts also note that defining AI involvement in music is increasingly difficult. Professor Nick Collins from Durham University described AI-assisted music creation as a broad spectrum rather than a simple divide between human-made and machine-made work. Many songs now involve software-assisted mixing, mastering, composition, or editing, making it far harder to classify music by origin alone. 

Spotify has faced years of criticism over AI-generated audio. Across forums and online communities, users have repeatedly called for clearer labels showing whether tracks were created by humans or algorithms. Some developers have even built independent tools aimed at detecting and filtering AI-generated songs on the platform. Concerns intensified after projects like The Velvet Sundown attracted large audiences despite having no interviews, live performances, or publicly traceable history. 

The group later described itself as a “synthetic music project” supported by artificial intelligence, fueling debate around transparency in digital music spaces. Spotify’s latest verification effort appears aimed at rebuilding trust while balancing support for evolving AI technologies. The move also reflects a broader trend across digital platforms, where companies are introducing verification systems to distinguish human-created content from synthetic material as AI-generated media becomes harder to identify.

Why Europe Is Rethinking Its Dependence on US Cloud Providers




Concerns around digital sovereignty are rapidly becoming one of the most important debates shaping the future of cloud computing, artificial intelligence, and government technology infrastructure across Europe and the UK.

The discussion recently gained attention after Chi Onwurah, chair of the UK Science, Innovation and Technology Select Committee, criticized Britain’s broader technology strategy and warned about growing dependence on a small group of major US technology companies. Her remarks pointed to reliance on providers such as Microsoft and Amazon Web Services, while also referencing Palantir Technologies because of its involvement in NHS and defence-related contracts. She also raised concerns about foreign-controlled technology supply chains supporting critical public infrastructure.

At the centre of the debate is the meaning of “digital sovereignty,” a term that is increasingly used by governments but often interpreted differently. In practical terms, sovereignty refers to a country maintaining legal authority and control over its citizens’ sensitive data, including where that information is processed, accessed, and governed. Experts argue that sovereign data should only fall under the jurisdiction of the nation to which it belongs, rather than being exposed to foreign legal systems or overseas regulatory reach.

The issue has become especially significant in the era of public cloud computing. Before large-scale cloud adoption, most government and enterprise data was stored and processed inside domestic datacentres, limiting both physical and remote access to national borders. While foreign software vendors occasionally required access for maintenance or support purposes, control over infrastructure largely remained local.

That model changed as governments and businesses increasingly adopted cloud services operated by US-headquartered providers. As organizations shifted toward subscription-based cloud platforms, concerns began emerging over whether sensitive national data could still be considered sovereign if it was processed through globally distributed infrastructure.

Much of the modern sovereignty debate intensified following the Schrems II ruling, a landmark European court decision that challenged how personal data could be transferred outside the EU to countries viewed as having weaker privacy protections. Since then, governments across Europe have pushed for tighter oversight of where data travels and who ultimately controls cloud infrastructure.

Although sovereignty concerns are often framed as a problem tied only to hyperscalers, industry analysts say the challenge is broader. Companies including IBM, Oracle Corporation, and Hewlett Packard Enterprise also face pressure to adapt their cloud and data processing models to meet stricter sovereignty expectations.

The debate has also been intensified by geopolitical tensions. European governments have become increasingly cautious about long-term dependence on foreign-owned digital infrastructure, particularly as cloud computing and artificial intelligence become more deeply connected to defence, healthcare, and public services. Analysts note that data infrastructure is now being viewed similarly to energy or telecommunications infrastructure: strategically important and politically sensitive.

Among the prominent providers, Microsoft was one of the earliest companies to experiment with sovereign cloud initiatives, including a dedicated German version of Microsoft 365. However, that model was eventually discontinued in 2022. Critics argue the company now faces greater difficulties adapting because many of its cloud services operate through highly interconnected global systems spread across more than 100 countries.

Questions around transparency have also created challenges. Reports previously indicated that Microsoft struggled to provide detailed information about certain data flows when requested by the Scottish Police Authority under data protection obligations. Investigative reporting from ProPublica also stated that US authorities encountered similar difficulties while attempting to evaluate Microsoft cloud services under FedRAMP certification requirements for government environments.

Additional scrutiny has emerged around Microsoft’s artificial intelligence infrastructure plans. The company had previously indicated that in-country AI processing capabilities for Copilot services in the UK would arrive by the end of 2025, though timelines have reportedly shifted into 2026. Some European customers are also expected to receive regional AI processing instead of fully sovereign national deployments.

Industry experts increasingly categorize sovereign cloud approaches into multiple levels. One common method involves creating “data boundaries,” where providers attempt to restrict where customer data is stored or processed while still operating under global cloud architectures. Critics argue this model may not fully satisfy stricter interpretations of sovereignty because some operational control can still remain overseas.

A second approach focuses on partnerships with local operators that manage sovereign services regionally. Amazon Web Services has promoted its European Sovereign Cloud initiative using this framework, arguing that the platform aligns with EU regulatory requirements. However, some analysts contend that EU-level governance is not the same as national sovereignty, particularly for non-EU countries such as the UK. Concerns have also been raised over whether US legislation, including the CLOUD Act, could still apply in certain circumstances.

Meanwhile, Google Cloud has attracted attention through its partnership with French defence and technology company Thales Group. Their joint venture, S3NS, is designed around France-specific sovereign infrastructure with air-gapped operations, meaning the systems can function independently without continuously communicating with external global networks for updates or validation checks.

Security specialists consider air-gapped architecture an important benchmark for sovereign cloud environments because it reduces reliance on foreign operational control. Google’s Distributed Cloud Air-Gapped platform is currently viewed by some analysts as one of the more mature sovereign cloud offerings available, despite still lacking some features present in its broader public cloud ecosystem.

The approach has already attracted major defence-related interest. France, NATO members, and the German military have all shown interest in sovereign infrastructure models, while the UK Ministry of Defence recently announced a £400 million contract spanning five years tied to these types of capabilities.

Competing alternatives are still evolving. AWS offers LocalStack-focused options largely aimed at development environments, while Microsoft’s disconnected Azure Local products have faced criticism from some analysts who argue the offerings remain less mature than competing sovereign platforms.

Despite rapid investment, experts say the sovereign cloud market is still in its early stages. Google’s France-based partnership model currently appears to offer one of the clearest examples of locally controlled hyperscale infrastructure, while AWS continues refining its European-focused model and Microsoft works through broader architectural and transparency challenges.

At the same time, the sovereignty movement may create new opportunities for regional cloud providers and domestic technology companies. However, analysts warn that building competitive sovereign infrastructure will require long-term investment, government support, and procurement strategies that allow interoperability between multiple vendors rather than locking public institutions into a single provider.

Many experts believe the future of sovereign technology infrastructure will likely depend on hybrid and partnership-driven models combining hyperscale cloud capabilities with locally managed operations. Supporters of the S3NS approach argue it offers an early blueprint for how global cloud providers and national operators could collaborate while still preserving local control over sensitive data and critical digital systems.

Ransomware Victims Jump 45% in 2025 as Stolen Credentials Fuel Global Cybercrime Surge

 

A newly released cybercrime analysis has revealed a dramatic rise in ransomware activity during 2025, with the number of victims increasing by 45% compared to the previous year. However, cybersecurity experts say the bigger concern lies in the growing dependence on stolen credentials as the main entry point for cyberattacks.

According to the State of Cybercrime 2026 report published by KELA, researchers identified nearly 2.86 billion compromised credentials, including passwords and session cookies capable of bypassing two-factor authentication (2FA). More than 30% of the exposed data originated from business cloud platforms and authentication services throughout 2025.

The report also highlighted a sharp increase in malware infections targeting Apple users. “infections on macOS devices increased from fewer than 1,000 cases in 2024 to more than 70,000 in 2025, a 7,000% increase,” the report confirmed.

Cybersecurity researchers have repeatedly warned about the growing threat posed by infostealer malware. Despite multiple law enforcement crackdowns and investigations into cybercriminal groups operating stolen password databases, the threat landscape continues to worsen year after year.

KELA described infostealer malware as software “designed to exfiltrate sensitive data from compromised machines, including login credentials, authentication tokens, and other critical account information.” The report further noted that the rise of malware-as-a-service platforms has significantly lowered the barrier for cybercriminals, making these tools widely accessible.

Between January 1 and December 31, 2025, KELA stated that it “observed approximately 3.9 million unique machines infected with infostealer malware globally, which collectively yielded 347.5 million compromised credentials.” Across all monitored criminal marketplaces and leaked databases, the total number of compromised credentials tracked reached 2.86 billion.

The report identified several major attack methods commonly used by infostealer operators during 2025:
  • Email and messaging scams powered by AI-generated personalization, often bypassing MFA through Phishing-as-a-Service operations.
  • Social engineering tactics that trick users into manually running malicious scripts, known as “hack your own password” attacks.
  • Malicious advertisements and fake search engine results distributing trojanized software.
  • Supply chain attacks involving poisoned software packages and fake developer tools targeting privileged accounts.
  • Compromised browser extension updates enabling cookie theft and form-grabbing attacks.
  • Pirated applications and counterfeit software updates continuing to spread infections effectively.
Security experts recommend several preventive measures to reduce exposure to these attacks. Users are advised to keep operating systems and software updated only through official sources and avoid clicking links from unsolicited emails or messages, even if they appear legitimate.

Experts also stress the importance of using password managers to prevent password reuse across multiple accounts, limiting the damage caused by a single breach. Enabling two-factor authentication on all supported accounts remains essential, although attackers are increasingly using session-cookie theft to bypass MFA protections.

To strengthen account security further, cybersecurity professionals are encouraging users to adopt passkeys instead of traditional passwords wherever possible. Passkeys offer built-in phishing resistance, are randomly generated, and do not share private authentication keys during sign-ins, making them significantly harder for infostealer malware to compromise.
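For readers who want to check whether a password has already appeared in a breach corpus like the ones KELA tracks, the Have I Been Pwned "Pwned Passwords" service uses a k-anonymity range query: only the first five characters of the password's SHA-1 hash are sent to the server (the real endpoint is `https://api.pwnedpasswords.com/range/<prefix>`), and matching against the returned suffix list happens locally. A minimal Python sketch of the client-side hashing step, with the network call omitted:

```python
import hashlib

def hibp_prefix_suffix(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hash for a k-anonymity range query.

    Only the 5-character prefix is ever transmitted; the server returns all
    known hash suffixes sharing that prefix, and the caller checks for the
    suffix locally, so the full hash (and the password) never leaves the
    machine.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_prefix_suffix("password")
print(prefix)   # 5BAA6  <- the only piece sent over the network
print(suffix)   # compared locally against the returned suffix list
```

With roughly a million possible prefixes over billions of leaked hashes, each query matches many unrelated passwords, which is what keeps the lookup anonymous.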

Purple Team Myth Exposed: Why It's Just Red vs Blue in 2026

 

Many organizations tout their "purple teams" as the pinnacle of cybersecurity collaboration, blending offensive red team tactics with defensive blue team strategies. However, a critical issue persists: these teams often remain siloed, functioning more like red and blue in disguise rather than a true integrated purple force. This misnomer stems from superficial exercises where attackers simulate breaches while defenders watch passively, failing to foster real-time learning or adaptive defenses. 

The problem intensifies in 2026's threat landscape, where exploit windows have shrunk dramatically to just 10 hours on average, demanding rapid response capabilities. Traditional purple teaming, limited to periodic workshops, cannot keep pace with agile adversaries exploiting zero-days and supply chain vulnerabilities. Without genuine fusion, red teams uncover flaws that blue teams log but rarely operationalize, leading to repeated failures during live incidents. This disconnect leaves enterprises exposed, as detections remain unrefined and defenses static. 

At its core, authentic purple teaming requires shared goals, continuous feedback loops, and joint ownership of outcomes, not just shared meeting rooms. Many setups falter here, with red teams prioritizing stealthy attacks over teachable moments and blue teams focusing on alerts without contextual adversary emulation. The result is a performative exercise that boosts resumes but not resilience, ignoring metrics like mean-time-to-respond or coverage of MITRE ATT&CK frameworks. 

To evolve, organizations must shift to autonomous, continuous purple teaming powered by AI agents that simulate attacks, investigate alerts, and map to real-world tactics. This approach validates detections in real-time, bridges the red-blue gap, and scales beyond human bandwidth. Forward-thinking teams are adopting adversarial exposure validation, ensuring defenses evolve proactively rather than reactively. Ultimately, ditching the purple label for hollow collaborations unlocks true synergy, fortifying organizations against 2026's relentless threats. By measuring success through integrated KPIs and embracing automation, security programs can transform from fragmented efforts into unified powerhouses.
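One concrete form of the integrated KPIs described above is to map both red-team emulations and blue-team detections to MITRE ATT&CK technique IDs and score their overlap. The following Python sketch uses hypothetical exercise data purely for illustration:

```python
# Hypothetical purple-team exercise data: ATT&CK technique IDs that the red
# team emulated, and those the blue team's alerts were mapped back to.
emulated = {"T1059", "T1078", "T1486", "T1566"}   # red-team emulations
detected = {"T1059", "T1566"}                     # techniques that fired alerts

# Coverage: fraction of emulated techniques that produced a detection.
coverage = len(emulated & detected) / len(emulated)
# Gaps: techniques the blue team must operationalize before the next cycle.
gaps = sorted(emulated - detected)

print(f"detection coverage: {coverage:.0%}")  # detection coverage: 50%
print(f"gaps to operationalize: {gaps}")      # ['T1078', 'T1486']
```

Tracked per exercise, a metric like this turns purple teaming from a performative workshop into a measurable feedback loop: coverage should trend upward and the gap list should shrink between cycles.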

Apricorn Launches 32TB Encrypted Drive to Strengthen Offline Data Security Against Cyber Threats

 

Security feels stronger when data is encrypted, yet that strength vanishes if login credentials or secret codes fall into the wrong hands. Instead of relying on key material tucked inside the host computer - where spyware and other malicious programs lurk - real protection means keeping those secrets far away from risk. Enter a fresh take from Apricorn: their updated Aegis Padlock DT FIPS line now includes a 32TB model built to lock out the host machine completely. 

This shift sidesteps common traps by handling safeguards directly on the drive itself. Authentication happens right on the device, using keys embedded into the drive's own interface. Rather than typing codes through the host machine, individuals enter their access number straight into the unit. Because of this setup, login details do not pass through the computer’s software layer, lowering risks tied to infected endpoints. 

According to Apricorn, cryptographic operations are managed entirely within the hardware by the company's custom AegisWare firmware, keeping private information separate from vulnerable environments. Isolated encrypted storage remains key to strong cyber defenses, says Apricorn's Kurt Markley, and the device is positioned as part of wider efforts to secure data without connectivity, not merely as a complement to online solutions. 

Access control moves directly onto the hardware itself rather than relying on the host system, a design that avoids the weaknesses attackers often exploit in software-driven methods. Every file saved to the Aegis Padlock DT FIPS is encrypted instantly, and both data and access codes stay encrypted at rest. The firmware is also locked against tampering: Apricorn designed it so that unauthorized updates cannot be installed. 

That wall keeps out threats like BadUSB, which turns ordinary USB gear into a tool for system breaches. Priced close to $2,000, the 32TB model joins lower-capacity encrypted drives in the line. It performs 256-bit AES-XTS encryption entirely in hardware and is validated to FIPS 140-2 Level 2 under NIST's program, meeting strict governmental requirements. Compatibility spans Windows, Linux, macOS, Android, and ChromeOS with no extra software needed, so despite the higher cost, access remains smooth across platforms out of the box. 

Despite limitations in certain setups, the device works reliably where standard software encryption cannot be deployed - think medical scanners, factory machines, isolated storage units, or built-in controllers. Transfer rates reach 5 gigabits per second over a USB 3.2 Gen 1 connection, and vital internal components are shielded by a dense epoxy layer that resists drops, impacts, and deliberate tampering. Built tough, it handles rough conditions without compromising security. 
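As a rough sanity check on that interface figure, the theoretical minimum time to fill the full 32TB at USB 3.2 Gen 1's 5 Gbps line rate can be worked out in a few lines; real-world throughput will be lower once encoding and protocol overhead are accounted for.

```python
# Back-of-the-envelope: time to transfer 32 TB over USB 3.2 Gen 1 (5 Gbps)
# at the raw line rate. Actual throughput is lower after 8b/10b encoding
# and USB protocol overhead, so this is a best-case floor, not a benchmark.
capacity_bytes = 32e12          # 32 TB, decimal terabytes
line_rate_bits_per_s = 5e9      # 5 Gbps signaling rate

seconds = capacity_bytes * 8 / line_rate_bits_per_s
hours = seconds / 3600
print(f"{hours:.1f} hours")     # roughly 14 hours at the theoretical line rate
```

In other words, even at full line rate a complete fill or wipe of the drive is an overnight job, which matters when planning offline backup rotations.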

Even with strong built-in protections, the device cannot block all digital threats. Though separating encryption and login checks from the host machine lowers infection chances, firms have to protect where the drive is kept. Should someone get hold of the unit physically, how it's managed day-to-day matters as much as its coded defenses. Firms relying on this tool must enforce clear rules for where it's stored, who can reach it, and which verified machines link to it. 

Security hardware is gaining traction amid rising digital risks, driven by frequent attacks on weak software defenses and leaked login data. As complex breaches surge, companies are adopting built-in hardware protection rather than relying solely on traditional programs - part of a deeper shift across sectors away from patch-prone applications and toward physical safeguards that reduce exposure.

ClickUp API Key Exposure Leaves Corporate and Government Email Data Public for Over a Year

 

A previously unnoticed weakness in ClickUp's web infrastructure sat undetected for over a year: an API key embedded in its public site left internal records accessible to anyone. Emails tied to businesses and official agencies could be pulled by outside parties, no login required. The gap stemmed not from complex hacking but from a routine coding oversight that slipped through deployment - hardcoded credentials like these often escape review until someone looks closely. Security gaps of this kind stem less from advanced threats and more from everyday lapses repeated across teams. 

Public discussion of the problem began when security analyst Impulsive shared findings showing the leaked credential sat inside a JavaScript file served by ClickUp's site before any login step occurred. Since code delivered to browsers is always visible, grabbing the API key took little effort and allowed direct contact with internal servers. Without any special access, a single basic query allegedly pulled close to a thousand email addresses plus vast numbers of hidden development settings from the system. The research identified 959 employee email addresses in the leaked data, tied to staff at large companies and public institutions across various locations. 

About 3,165 feature flags also turned up in the exposure - visible without restriction. Hidden inside what looks like routine code, these flags might expose how teams test software, plan releases, roll out new tools, or shape future updates. Because of that, malicious actors might mine them to craft deceptive emails, manipulate individuals through tailored messages, or collect insights on rivals’ progress. Surprisingly useful intel often hides where it seems least likely. Early in 2025, news of the exposure surfaced - yet by April 2026, it still hadn’t been fixed, stretching out the time hackers could act. Because access stayed open so long, experts say attackers gained more chances to try breaking in using stolen login details, fake identities, or personalized emails targeting workers linked to the affected websites. 

What happened reflects a wider problem for organizations that depend on cloud-based services. Though easy to avoid, hardcoded credentials remain common in modern development practice, and when secret access tokens land in public repositories, automated bots usually find them fast - sometimes in under sixty seconds. Even low-privilege access codes can lead to large data leaks if internal systems lack strong verification rules. Rotating API keys regularly lowers the exposure window, client-side applications built without embedded secrets withstand attacks better, and strict limits on backend access form another layer of defense. 
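Scanning shipped code for hardcoded credentials like the one described above is straightforward to automate. The sketch below uses a single illustrative regex over a fabricated JavaScript snippet; production scanners such as gitleaks or truffleHog combine hundreds of rules with entropy analysis, so treat this as a minimal demonstration of the idea, not a complete tool.

```python
import re

# Minimal secret-scanning sketch: flag strings in source text that look
# like hardcoded API keys. The single pattern here is illustrative only.
PATTERNS = [
    re.compile(
        r"""(?:api[_-]?key|token|secret)["']?\s*[:=]\s*["']([A-Za-z0-9_\-]{20,})["']""",
        re.IGNORECASE,
    ),
]

def find_secrets(source: str) -> list[str]:
    """Return candidate secret values found in the given source text."""
    hits = []
    for pattern in PATTERNS:
        hits.extend(pattern.findall(source))
    return hits

# Example: a bundled JavaScript snippet with an embedded key (fabricated).
js = 'const config = { apiKey: "AKIAEXAMPLEKEY12345678" };'
print(find_secrets(js))   # the fabricated key is flagged
```

Running a check like this against every artifact that will be served to browsers, as a CI gate, catches exactly the class of oversight that bit ClickUp.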

Protection against phishing gains strength with email authentication tools like DMARC, SPF, and DKIM; constant monitoring catches unusual logins faster; and active threat data streams make exposed domains visible. Security improves not through one fix alone but through steady adjustments across systems. A quiet mistake lingered unseen within ClickUp's infrastructure, exposing data widely before detection - a reminder that when operations move into shared online environments, oversight gaps emerge and careful monitoring becomes essential. Security lapses like this highlight growing pressure on organizations to act earlier, respond smarter, and stay alert longer.
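Checking a domain's DMARC posture is one concrete, scriptable piece of the layered defenses described above. The sketch below only parses a DMARC TXT record string into its tag-value pairs; the record shown is hypothetical, and a real check would first fetch the record from DNS at `_dmarc.<domain>` (for example with dnspython), which this sketch skips.

```python
# Minimal sketch: parse a DMARC TXT record into its tag-value pairs.
# The record below is hypothetical; in practice it would be retrieved
# from DNS at _dmarc.<domain>, a step omitted here for simplicity.

def parse_dmarc(record: str) -> dict[str, str]:
    """Split a DMARC record like 'v=DMARC1; p=reject; ...' into a dict."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com; pct=100"
policy = parse_dmarc(record)
print(policy["p"])   # "reject": receivers should reject mail failing DMARC
```

A policy of `p=none` monitors without enforcing, so auditing for domains still stuck at `none` is a cheap win against spoofed mail.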

Cybersecurity Industry Split Over Impact of Anthropic’s Mythos AI

 

Advanced artificial intelligence systems are rapidly reshaping the cybersecurity industry, but experts remain sharply divided over whether the technology represents a manageable evolution in security research or the beginning of a large-scale vulnerability crisis.

The debate escalated after Anthropic introduced Claude Mythos Preview, an experimental version of its language model that the company says demonstrates unusually strong performance in identifying software vulnerabilities and handling advanced cybersecurity tasks. Concerned about the possible risks of releasing such capabilities broadly, Anthropic restricted access to a limited initiative known as Glasswing, allowing only a select group of organizations to test the system while the security community prepares for the implications.

Since the announcement, discussions across the cybersecurity sector have centered not only on the model’s technical abilities, but also on whether restricting access to it is realistic at all. Reports surfaced this week suggesting unauthorized individuals may already have accessed the Mythos preview, raising concerns that attempts to tightly control the technology may prove ineffective once similar capabilities become reproducible elsewhere.

The industry’s reaction has largely fallen into three competing schools of thought.

One group believes AI-driven vulnerability discovery could overwhelm existing security infrastructure. Supporters of this view warn that highly capable models may dramatically increase the speed at which attackers uncover exploitable weaknesses, potentially leading to widespread cyber incidents before defenders can respond effectively. Analysts aligned with this perspective argue that the cybersecurity ecosystem is already struggling to keep pace with current levels of vulnerability reporting.

A second group has taken a more operational approach, focusing on how organizations can defend themselves if AI-assisted exploit discovery becomes commonplace. This position has been reflected in work published through the Cloud Security Alliance, where hundreds of chief information security officers collaborated on guidance discussing defensive strategies. However, even within this camp, some security professionals have criticized Anthropic’s rollout process, arguing that patch management and vulnerability remediation are far more complex than the company appears to acknowledge.

A third camp remains skeptical of the broader panic surrounding Mythos. Researchers associated with AISLE argued that the model’s capabilities are not entirely unique because similar vulnerability discovery results can already be reproduced using publicly accessible open-weight AI models. In one cited example, researchers reportedly recreated a FreeBSD exploit demonstrated during the Mythos announcement using multiple open models, including systems inexpensive enough to operate at minimal cost. The finding suggests that moderately skilled attackers may already possess access to comparable capabilities independent of Anthropic’s platform.

This debate arrives as the cybersecurity industry is already experiencing a dramatic increase in vulnerability disclosures. The National Institute of Standards and Technology recently adjusted how it processes entries for the National Vulnerability Database after reporting a 263 percent increase in submissions between 2020 and 2025, including a sharp rise within the past year alone. The agency stated that it would prioritize only the most critical Common Vulnerabilities and Exposures entries for enrichment, highlighting how existing human review systems are struggling to scale alongside the growing volume of reported flaws.

Some experts believe artificial intelligence is already contributing to that acceleration, even before systems such as Mythos become widely available.

At the same time, defenders argue that existing security architectures still provide meaningful protection. Anthropic’s own findings reportedly acknowledged that while Mythos could identify vulnerabilities, it was unable to remotely exploit many of them because layered security controls prevented deeper compromise. This concept, commonly referred to as “defense in depth,” relies on multiple overlapping safeguards designed to stop attackers even if one weakness is discovered.

Despite disagreements over the severity of the threat, there is broad consensus that AI-assisted vulnerability discovery will continue advancing. The larger disagreement centers on how the software industry should adapt.

Some researchers argue that attempting to restrict access to advanced models through programs like Glasswing may ultimately fail because comparable capabilities are increasingly emerging in open-source ecosystems. Others believe the long-term answer may resemble principles already established in modern cryptography.

The discussion frequently references the work of 19th-century cryptographer Auguste Kerckhoffs, who argued that secure systems should remain safe even if attackers understand how they operate, except for protected keys or credentials. Over time, cybersecurity researchers have increasingly adopted a similar philosophy in software security, where openly scrutinized systems often become more resilient because flaws are exposed and corrected publicly.

Supporters of this approach believe AI could eventually force the software industry toward more rigorously tested open-source infrastructure. Under such a future, software components would face continuous AI-driven scrutiny before gaining widespread trust. However, experts also caution that this transition would be difficult because many companies still depend on proprietary code to protect intellectual property and maintain competitive advantages.

Another striking concern involves economics. Much of the modern internet depends heavily on open-source software, yet relatively few organizations financially contribute to securing and auditing the projects they rely upon. Although AI models may simplify vulnerability discovery, the computational resources required to run these systems remain expensive. Analysts warn that access to large-scale vulnerability analysis may increasingly depend on who can afford the computing power necessary to operate advanced models.

Some researchers fear this imbalance could create repeating cycles of major cyberattacks followed by emergency patching efforts before the industry temporarily stabilizes again. Recent supply chain attacks affecting widely used software tools have reinforced concerns that large-scale exploitation campaigns may become more frequent as AI-assisted discovery improves.

These shifts could also redefine the cybersecurity market itself. Companies specializing in vulnerability discovery may face mounting pressure as AI automates portions of their work. By contrast, vendors focused on remediation and layered defensive protections may see increased demand as organizations attempt to strengthen prevention measures and respond more rapidly to emerging threats.

For users and organizations heavily dependent on open-source software, the transition period may prove particularly difficult. However, some analysts remain cautiously optimistic that continuous scrutiny from increasingly advanced AI systems could eventually produce stronger and more resilient software ecosystems over the long term.