
Hackers Exploit Claude to Target Multiple Mexican Government Agencies

 


As generative artificial intelligence takes hold, digital innovation is evolving at an unprecedented rate, but it is also quietly reshaping cybercrime. Tools originally designed for research, coding, and problem-solving are now being explored for a variety of less benign purposes as well. 

Recent revelations that threat actors exploited the capabilities of Claude to support a large-scale intrusion targeting Mexican government networks illustrate this in troubling fashion. 

A security researcher at Gambit Security reported that attackers extracted approximately 150 gigabytes of sensitive information from multiple Mexican government agencies, demonstrating how widely accessible artificial intelligence systems can be manipulated to assist sophisticated cyber operations despite their built-in safeguards. 

It has been determined that the intrusion was not limited to passive reconnaissance. The attacker is believed to have used Claude throughout the campaign as an interactive tool for research and development. 

An analysis released by Gambit Security indicates that the activity began in December and continued for approximately a month, during which the chatbot was repeatedly instructed to identify potential vulnerabilities within government networks and to create scripts for exploiting them. 

The same model was also used to outline methods for automating the extraction of sensitive information, effectively turning it into a data-exfiltration assistant. Through a series of carefully structured prompts, the operator gradually wore down the model's built-in safeguards. 

The system reportedly rejected initial requests, but subsequent iterations appear to have bypassed the platform's guardrails and generated increasingly actionable material. The extent of the assistance provided by the model raised particular concern among analysts. 

According to Curtis Simpson, the system produced thousands of analytical outputs which detailed potential attack paths, internal network targets, and credential-related strategies, thereby providing guidance on how to proceed within compromised environments. These outputs were more structured operational guidance for the campaign's human operator than casual responses. 

Anthropic said it opened an internal investigation following the disclosure, disrupted the activity, and permanently banned the accounts associated with the misuse. According to a company representative, its safeguards continue to evolve. 

For example, the latest Claude Opus 4.6 model incorporates additional mechanisms to detect and block similar forms of abuse. At the time of publication, the individuals responsible for the intrusion had not been officially linked to any publicly identified advanced persistent threat group.

Nonetheless, analysts examining the operation noted several similarities with tactics historically associated with espionage campaigns involving Chinese actors. According to intelligence gathered by Gambit Security and corroborated by SecurityAffairs, the tradecraft demonstrated in the operation - particularly its disciplined operational security and systematic reconnaissance - resembles patterns previously observed in state-aligned cyber espionage. 

A separate disclosure from Anthropic confirmed that state-sponsored actors have misused its AI coding tools against dozens of organizations worldwide. Investigators determined that the attackers in this incident relied heavily on AI-assisted workflows to accelerate exploit development, effectively lowering the technical barrier to assembling complex multi-stage intrusion chains while maintaining a high level of operational secrecy. 

Technical analysis indicates that the campaign weaponized Claude Code, using prompt engineering techniques to circumvent the system's built-in security measures. Over 1,000 prompts were submitted to the AI environment, some framed as legitimate bug bounty testing scenarios in order to bypass the ethical restrictions embedded within the model. 

Through this iterative interaction, attackers were reported to have developed customized exploit scripts, lateral movement tooling, and operational playbooks tailored to the architecture of the compromised networks. 

The AI-generated material was then used to carry out successive phases of the intrusion chain, including privilege escalation, credential harvesting, and automated data extraction. According to reports, when restrictions began limiting output from Claude's environment, the operators shifted portions of their workflow to GPT-4.1 to continue developing credential-handling utilities and refining network traversal techniques. 

By chaining outputs from both AI systems, the attackers maintained a largely automated workflow that could quickly adapt to defensive obstacles within the targeted infrastructure. This approach also left behavioural indicators that stood out to investigators during forensic examination.

Among them were unusually large amounts of automated scripting activity, repeated instances of AI-generated code fragments appearing within attack tools, and the presence of AI-aided development processes operating from compromised government infrastructures. 

The intrusion unfolded in a series of stages, beginning with the compromise of systems related to the Mexican tax authority before spreading to other public infrastructure. According to investigators, the attacker then moved through a network of interconnected systems spanning several regional government environments, municipal systems in Mexico City, public utility infrastructure in Monterrey, at least one major financial institution, and the national electoral institute. 

As a result of the operation, approximately 150 gigabytes of sensitive data - including administrative information and individually identifiable information - were exfiltrated from these environments. Mapping the observed activity against the MITRE ATT&CK knowledge base revealed a familiar sequence of intrusion techniques: initial access through valid accounts, lateral movement via remote services, credential acquisition through operating system credential dumping, and large-scale data exfiltration. 

The researchers also observed measures intended to undermine defensive monitoring by interfering with security controls within the targeted environments. 
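For readers who want to map the reported behaviours onto the framework themselves, the sketch below pairs each observation with the standard ATT&CK identifier for the technique it names. The report did not publish exact technique numbers, so the specific IDs are illustrative assumptions rather than a verbatim list from the investigation.

```python
# Illustrative mapping of the behaviours described above to standard
# MITRE ATT&CK identifiers. The exfiltration entry uses the tactic ID,
# since the reporting does not say which exfiltration technique was used.
OBSERVED_BEHAVIOUR_TO_ATTACK = {
    "Initial access via valid accounts":     "T1078 - Valid Accounts",
    "Lateral movement over remote services": "T1021 - Remote Services",
    "Credential dumping from the OS":        "T1003 - OS Credential Dumping",
    "Large-scale data exfiltration":         "TA0010 - Exfiltration (tactic)",
    "Tampering with security controls":      "T1562 - Impair Defenses",
}

if __name__ == "__main__":
    for behaviour, technique in OBSERVED_BEHAVIOUR_TO_ATTACK.items():
        print(f"{behaviour:42s} -> {technique}")
```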

Researchers noted that each of these tactics has been observed in conventional cyberespionage operations; however, the distinctive feature of the campaign was the systematic integration of generative artificial intelligence into the attack process. 

Because reconnaissance, exploit development, and operational planning were automated, the attackers could coordinate complex intrusion chains at a speed and scale not achievable through traditional manual methods. The incident underscores how generative artificial intelligence systems are rapidly becoming a new layer of the cyber threat landscape, one capable of enhancing both defensive and offensive capabilities. 

In response to the threat of AI-aided attacks, security experts recommend that organizations, particularly those operating critical public infrastructure, adapt their defensive strategies accordingly. Recommended measures include strengthening identity and access controls, detecting anomalous automation patterns, and deploying advanced behavioral analytics capable of identifying AI-generated tooling and scripting. 

It is also recommended that AI developers, cybersecurity firms, and government agencies collaborate continuously so that safeguards can be refined to ensure that large language models are not manipulated for malicious purposes. 

It is becoming increasingly important for the cybersecurity community to ensure that innovations in artificial intelligence do not inadvertently become a force multiplier for sophisticated digital intrusions as platforms such as Claude and other generative AI systems continue to evolve.

Fake Google Meet Update Can Give Attackers Control of Your Windows PC

 



Cybersecurity analysts have identified a phishing campaign that can quietly hand control of a Windows computer to attackers after a single click. The scam appears as a routine update notice for Google Meet, but the prompt is fraudulent and redirects victims into a device management system controlled by threat actors.

Unlike many phishing schemes, the technique does not steal passwords, download obvious malware, or display clear warning signs. Instead, the attack relies on convincing users to interact with a page that imitates a standard software update message.


A convincing but fake update message

The deceptive webpage tells visitors they must install the latest version of Meet in order to continue using the service. The design closely resembles a legitimate update notification and uses familiar colors and branding that many users associate with Google products.

However, neither the “Update now” button nor the “Learn more” link connects to any official Google resource. Instead, both activate a special Windows deep link known as ms-device-enrollment:.

This feature is a built-in Windows mechanism designed for corporate environments. IT administrators commonly use it to send employees a link that allows a computer to be enrolled in a company’s device management system with minimal effort. In the attack campaign, the same capability is redirected to infrastructure operated by the attacker.


How the enrollment process begins

Windows enrollment links such as ms-device-enrollment: are commonly used in corporate environments where organizations need to configure large numbers of laptops quickly. The link automatically opens Windows settings and connects the device to an enterprise management server.

Once enrolled, the device becomes part of a management framework that allows administrators to deploy software updates, enforce security policies, and manage system configurations remotely.

Attackers exploit this workflow because users are accustomed to seeing this setup process when joining corporate networks, making it appear legitimate.

When a victim clicks the link, Windows immediately bypasses the browser and opens the operating system’s “Set up a work or school account” dialog. This is the same interface that appears when an organization configures a new employee laptop.

The enrollment request arrives with several fields already filled in. The username displayed is collinsmckleen@sunlife-finance.com, a domain designed to resemble the financial services firm Sun Life Financial. Meanwhile, the server connection is preconfigured to an endpoint hosted at tnrmuv-api.esper[.]cloud, which is part of infrastructure operated by Esper.

The attacker’s objective is not to impersonate the victim’s account perfectly. Instead, the goal is to persuade the user to continue through the legitimate Windows enrollment process. Even if only a small portion of targeted users proceed, that is enough for attackers to gain access to some systems.


What attackers gain after enrollment

If the victim clicks Next and completes the setup wizard, the computer becomes registered with a remote Mobile Device Management (MDM) server.

MDM platforms are commonly used by organizations to manage employee devices. Once a device joins such a system, administrators can remotely install or remove applications, modify operating system settings, access stored files, lock the device, or completely erase its contents.

Because the commands come from a legitimate management platform rather than a malicious program, the operating system performs the actions itself. As a result, there may be no suspicious malware process running on the machine.

The infrastructure used in this campaign relies on Esper, a legitimate enterprise management service that many companies use to control corporate hardware.

Further analysis of the malicious link shows encoded configuration data embedded in the server address. When decoded, the data reveals two identifiers associated with the Esper platform: a blueprint ID that determines which management configuration will be applied and a group ID that specifies the device group the computer will join once enrolled.
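As a rough illustration of that analysis step, the sketch below shows how an analyst might pull an embedded payload out of an enrollment-style link and inspect it. The parameter name, the base64-plus-JSON packaging, and the field names blueprint_id and group_id are assumptions made for the example; the actual campaign link may encode the Esper identifiers differently.

```python
import base64
import json
from urllib.parse import urlsplit, parse_qs

def extract_enrollment_config(link: str) -> dict:
    """Decode a payload assumed to be base64-encoded JSON carried in the link's query string."""
    query = parse_qs(urlsplit(link).query)
    encoded = query.get("accesstoken", [""])[0]   # hypothetical parameter name
    padded = encoded + "=" * (-len(encoded) % 4)  # restore any stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(padded.encode()))

if __name__ == "__main__":
    # Build a purely synthetic sample link; the server and IDs are placeholders.
    payload = base64.urlsafe_b64encode(
        json.dumps({"blueprint_id": "demo-blueprint", "group_id": "demo-group"}).encode()
    ).decode().rstrip("=")
    sample = (
        "ms-device-enrollment:?mode=mdm"
        "&servername=https://example-tenant-api.esper.cloud"
        "&accesstoken=" + payload
    )
    config = extract_enrollment_config(sample)
    print(config["blueprint_id"], config["group_id"])
```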


Abuse of legitimate features

Both the Windows enrollment handler and the Esper management service are functioning exactly as designed. The attacker’s tactic simply redirects these legitimate tools toward unsuspecting users.

Because no malicious software is delivered and no login credentials are requested, the attack can be difficult for security tools to detect. The enrollment prompt displayed to the user is an authentic Windows system dialog rather than a fake webpage. This means typical browser warnings or email filters that look for credential-stealing forms may not flag the activity.

Additionally, the command infrastructure operates on a trusted cloud-based platform, making domain reputation filtering less effective. Security specialists warn that many traditional detection tools are not designed to recognize situations where legitimate operating system features are misused to gain control of a system.

This technique reflects a broader trend in cybercrime. Increasingly, attackers are abandoning conventional malware and instead exploiting built-in operating system capabilities or legitimate cloud services to carry out their operations.


Steps to take if you interacted with the page

Users who believe they may have clicked the fake update prompt should first check whether their device has been enrolled in an unfamiliar management system.

On Windows computers, this can be done by navigating to Settings → Accounts → Access work or school. If an unfamiliar entry appears, particularly one associated with domains such as sunlife-finance or esper, it should be selected and disconnected immediately.
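For those who prefer to check from a script rather than the Settings app, the sketch below enumerates MDM enrollments recorded in the Windows registry. It assumes, as is commonly the case, that enrollments are recorded under HKLM\SOFTWARE\Microsoft\Enrollments and that values such as UPN and DiscoveryServiceFullURL are present where applicable; entries pointing at unfamiliar accounts or servers, like the domains named above, would be the red flag.

```python
import winreg  # standard library, Windows only

ENROLLMENTS_KEY = r"SOFTWARE\Microsoft\Enrollments"

def list_enrollments() -> list[dict]:
    """Return one dict per enrollment subkey, with whichever identifying values exist."""
    results = []
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, ENROLLMENTS_KEY) as root:
        index = 0
        while True:
            try:
                sub_name = winreg.EnumKey(root, index)
            except OSError:
                break  # no more subkeys
            index += 1
            entry = {"id": sub_name}
            with winreg.OpenKey(root, sub_name) as sub:
                for value_name in ("UPN", "DiscoveryServiceFullURL", "ProviderID"):
                    try:
                        entry[value_name], _ = winreg.QueryValueEx(sub, value_name)
                    except OSError:
                        pass  # value not recorded for this enrollment
            results.append(entry)
    return results

if __name__ == "__main__":
    for enrollment in list_enrollments():
        print(enrollment)  # review any UPN or server URL you do not recognise
```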

Anyone who clicked the “Update now” link on the malicious site and proceeded through the enrollment wizard should treat the computer as potentially compromised. Running a current anti-malware scan is recommended to determine whether the management server deployed additional software after enrollment.

For organizations, administrators may also want to review device management policies. Endpoint management platforms such as Microsoft Intune allow companies to restrict which MDM servers corporate devices are permitted to join. Implementing such restrictions can reduce the risk of unauthorized device enrollment in similar attacks.

Security researchers have warned that misuse of device management systems can be particularly dangerous because they grant deep administrative control over enrolled devices.

According to analysts from Gartner, enterprise device management platforms often have privileged system access comparable to local administrators, allowing them to modify system policies, install applications, and control security settings remotely.

When such privileges fall into the wrong hands, attackers can effectively operate the device as if they were legitimate administrators.

Apple Rolls Out Global Age-Verification System to Protect Kids Online

 

Apple has rolled out a new global age-verification system across its platforms, aimed at keeping kids safer online while helping developers comply with tightening child safety laws worldwide. The move targets both app downloads and in‑app experiences, with a particular focus on blocking underage access to adult‑rated content without sacrificing user privacy.

Under the new rules, users in countries such as Brazil, Australia and Singapore will be blocked from downloading apps rated 18+ unless Apple can confirm they are adults. Similar protections are being extended to parts of the United States, where states like Utah and Louisiana are introducing strict online age‑assurance laws, pushing platforms to verify whether users are children, teens or adults before allowing access to certain apps or features. This marks one of Apple’s strongest steps yet to align its App Store with regional regulations on children’s digital safety.

At the heart of the initiative is Apple’s privacy‑focused Declared Age Range API, which lets apps learn a user’s age category instead of their exact birthdate. Developers can use this signal to tailor content, enable or disable features, or trigger parental consent flows for younger users, while never seeing sensitive identity details. Apple says this design is meant to minimize data collection and reduce the risk of intrusive ID checks or third‑party age‑verification databases.

For parents, the age‑verification push builds on Apple’s existing child account system and content restrictions. Parents can already set up child profiles, choose age ranges and apply web content filters, and now those settings can flow through to third‑party apps via the new tools. This means a game, social app or streaming service can automatically recognize that a user is a child or teen and adjust what they can see or do without asking for new personal information.

For developers, Apple is introducing an expanded toolkit that includes the updated Declared Age Range API, new age‑rating properties in StoreKit, and improved server notifications to track compliance. These tools will be essential in regions where apps must prove they are screening out underage users from adult content or obtaining parental consent for significant changes. As more governments pass online safety laws, Apple’s global age‑verification framework is likely to become a key part of how the App Store balances regulatory demands with user privacy.

Conduent Leak: One of the Largest Breaches in the U.S.


Conduent, a business that offers printing, payment, and document processing services to some of the biggest health insurance companies in the nation, has had at least 25 million people's personal information stolen. Addresses, social security numbers, and health information were exposed to ransomware hackers in what some have already dubbed one of the biggest data breaches in American history. 

According to a letter the business posted online, Conduent first learned it was the victim of a "cyber incident" more than a year ago, on January 13, 2025. The breach itself occurred between October 21, 2024, and January 13, 2025, and involved data Conduent held because the company provides services to health plans.

Names, social security numbers, health insurance details, and unspecified medical information were among the data. In its notice, the business stressed that "not every data element was present for every individual," which implies that some individuals may have had their health insurance information taken but not their social security number, or vice versa. 

According to Bleeping Computer, the Safepay ransomware organization claimed responsibility for the attack, which allegedly captured more than 8 gigabytes of data. Conduent stated online, "Presently, we are unaware of any attempted or actual misuse of any information involved in this incident," while it is unclear if Safepay has demanded payment for the information's recovery.

Oregon's consumer protection website put the number of people affected at 10.5 million, although it is unclear how many of those are Oregon residents. According to Wisconsin, the national total is more than 25 million. 

Notifications have also been sent to residents of other states, such as California, Delaware, Massachusetts, New Hampshire, and New Mexico. In Maine, one of the states with comparatively small numbers, just 374 people's data was compromised, according to the state's attorney general. Conduent, a New Jersey-based company, did not reply to emails on Tuesday inquiring about the full extent of the incident and what victims could do about it.

Conduent is providing free credit monitoring and identity restoration services through Epiq to certain individuals, but those affected must join before April 30, 2026, according to a letter given to victims in California.

Age Verification Laws for Social Media Raise Privacy Concerns and Enforcement Challenges

 

Across nations, governments push tighter rules limiting young users’ access to social media. Because of worries over endless scrolling, disturbing material online, or growing emotional struggles in teens, officials demand change. Minimum entry ages - often 13 or 16 - are now common in draft laws shaping platform duties. While debates continue, one thing holds: unrestricted teenage access faces mounting resistance. 

Still, putting such policies into practice stirs up both technological hurdles and concerns about personal privacy. To make sure people are old enough, services need proof - yet proving age typically means gathering private details. Meanwhile, current regulations push firms to keep data collection minimal. That tension forms what specialists call an “age-verification trap,” where tighter control over access can weaken safeguards meant to protect individual information. 

While many rules about age limits demand that services make "reasonable efforts" to block young users, clear guidance on checking someone's actual age is almost never included. Firms tend to fill this gap by leaning on just two methods. The first is identity verification, which requires people to prove their age using official ID or online identity tools. 

Although more reliable, keeping such data creates worries over privacy breaches: handling vast collections of private details increases exposure to cyber threats, and security weakens when too much sensitive material gathers in one place. The second method is age estimation. By watching how someone uses a device, or by analyzing video selfies with face-scanning technology, systems try to judge a person's age without asking for ID. 

Still, since these outcomes depend on likelihoods instead of confirmed proof, doubt remains part of the process. Some big tech firms now run these kinds of tools. While Meta applies face-based age checks on Instagram in select regions - asking certain users to send brief video clips if they seem underage - TikTok examines openly shared videos to guess how old someone might be. 

Elsewhere, Google and its platform YouTube lean on activity patterns; yet when doubt remains, they can ask for official identification or payment details. These steps aim at confirming ages without relying solely on stated information. Mistakes happen within these systems. Though meant to protect, they occasionally misidentify adults as children - leading to sudden account access issues. 

At times, underage individuals slip through gaps, using borrowed IDs or setting up more than one profile. Restrictions fail when shared credentials enter the picture. A single appeal can expose personal details when systems retain proof materials past their immediate need. Stored face scans, ID photos, or validation logs may linger just to satisfy legal checks. These files attract digital intrusions simply by existing. Every extra day they remain increases the chance of breach. 

Where identity infrastructure is weak, the difficulty grows. Biometrics might step in when official systems fall short. Oversight tends to be sparse, even as outside verifiers take on bigger roles. Still, shielding kids on the web without losing grip on private information is far from simple. When authorities roll out tighter rules for confirming age, the tools built to follow these laws could change how identities and personal details move through digital spaces.

Rising Cyber Threats Linked to Ongoing Middle East Conflict


A geopolitical crisis has historically been fought on physical battlefields, but its effects are seldom confined to borders in the modern threat landscape. While tensions are swirling across the Middle East as a result of the United States' military operations in Iran and Tehran's retaliatory actions, a parallel surge of activity is being witnessed in the digital world. 


There is increasing concern among security analysts as well as government cyber agencies about how geopolitical instability provides fertile ground for cybercriminals and state-aligned actors. In order to manipulate public curiosity, exploit fear, and conceal malicious campaigns, attackers have utilized this rapidly evolving situation as a convenient narrative.

As soon as the escalation began, researchers began tracking a growing ecosystem of cyber infrastructure based on conflict that lures unsuspecting users into fraudulent websites, phishing scams, and malware downloads. 

In many cases, what appears to be breaking news or urgent updates about a crisis hides carefully designed traps meant to infiltrate corporations, collect credentials, or spread malicious software designed to steal data. 

Due to this, the conflict's digital shadow has expanded beyond the immediate region, raising concerns among cybersecurity professionals that opportunistic attacks may become increasingly targeted against individuals and organizations worldwide. 

The intensification of hostilities in late February 2026, when the United States and Israel are said to have conducted coordinated airstrikes against multiple Iranian facilities, has further compounded the escalation of cyber threats. 

Security analysts have identified a pattern in which cyber activity closely follows developments on the ground, as the strikes and subsequent retaliatory actions have reverberated across several Middle Eastern nations. 

According to researchers, digital operations played a supporting role long before the first missiles were deployed. Iran's command-and-control infrastructure was disrupted by coordinated electronic warfare tactics and large-scale distributed denial-of-service campaigns, which temporarily reduced national internet connectivity to a fraction of its usual capacity and could have complicated real-time military coordination. 

It is clear from such incidents that cyber capabilities are becoming increasingly integrated into broader strategic operations, influencing the circumstances under which conventional military engagements occur. However, analysts note that the cyber dimension of the conflict cannot be limited to state-directed operations alone. 

As a result, it is widely expected that Iranian digital response will follow an asymmetric model, with loosely aligned or ideologically sympathetic groups operating outside its borders typically executing these actions. They vary considerably in capability, but their activities often involve defacing websites, leaking data, and launching disruptive attacks intended to generate publicity in addition to operational damage. 

A team tracking online channels associated with hacktivist communities observed hundreds of cyberattack claims within days of the escalation, many of which were shared via propaganda outlets and messaging platforms aligned with geopolitical agendas. 

In spite of the fact that not all claims reflect a verified breach, the rapid dissemination of such announcements can create confusion, inflate perceived impact, and press targeted organizations into responding before technical verification is possible. It is becoming increasingly clear that the target list is expanding beyond political disruption. 

Cybersecurity monitoring indicates that activities related to the conflict extend beyond Israel to the Gulf states, Jordan, Cyprus, and American organizations based abroad. Financially motivated ransomware operators and threat groups have attempted to frame attacks against Israeli and Western-linked entities as politically aligned actions rather than criminal ones.

This blending of ideological messaging with traditional cybercrime tactics is gradually blurring the distinction between state-aligned disruption and financially motivated extortion. Moreover, security teams have warned that opportunistic actors are leveraging geopolitical tensions as a narrative hook for phishing and fraud operations. 

Travel-related scams increasingly target individuals stranded or traveling within the region, while credential harvesting campaigns focus on diplomats, journalists, humanitarian organizations, and defense contractors. Growing interest in industrial and operational technology environments has raised further alarm. 

Early cyber activity linked to the conflict consisted primarily of defacements and distributed denial-of-service attacks against public websites. More recent threat intelligence reports, however, indicate attempts to probe systems linked to programmable logic controllers and other industrial control components. 

If substantiated, this shift would represent a substantial escalation in both technical ambition and potential impact. Energy facilities, utilities, and other critical infrastructure operators throughout the Middle East and Gulf region should therefore reevaluate the resilience of their operational networks, particularly where information technology connects to industrial control systems. 

Together, these developments suggest a broad range of potential cyber activity, including high-volume DDoS campaigns that target government portals as well as targeted spear-phishing activities that seek credentials from diplomats, media organizations, and defense contractors. 

A number of analysts have warned that ransomware incidents can be politicized, hack-and-leak operations will target military-linked entities, and destructive malware may be used to disable government systems. 

Alongside these technical threats, influence campaigns and fabricated breach claims circulated through social media are expected to play a parallel role in shaping public perception. Because both verified attacks and exaggerated narratives can produce real-world consequences, enhancing situational awareness and improving defensive monitoring are becoming integral to organizational risk management. 

It is also evident from the broader regional context why geopolitical escalation often results in heightened cyber security risks in the Middle East. Over the past decade, countries across the region have taken steps to transform public services, financial systems, telecommunications infrastructure, and energy operations through large-scale digital transformation initiatives. 

Particularly, Gulf Cooperation Council members have led these efforts. In addition to strengthening economic diversification and technological capacity, these efforts have increased the digital attack surface available to threat actors at the same time.

Monitoring of cybercrime in the Gulf indicates a growing volume of traditional criminal activity targeting both private and state institutions. In recent years, financial fraud campaigns, ransomware attacks, and politically motivated web defacements have disrupted industries ranging from banking to telecommunications. 

Several high-profile incidents in recent years have involved breaches of financial institutions and mobile banking platforms, while ransomware groups have increasingly targeted large regional service providers. These campaigns have grown in both frequency and sophistication, reflecting the increasing strategic value of the region's interconnected digital infrastructure. 

In addition, the threat environment is not limited to conventional cybercrime. Researchers continue to report advanced persistent threat groups conducting cyberespionage operations against government agencies, defense organizations, and energy infrastructure throughout the region. 

Many of these campaigns are widely believed to be associated with states and geopolitical rivalries, with particular attention on actors linked to Iran following earlier cyber incidents against its nuclear facilities. 

Activities attributed to these actors have included the deployment of destructive malware, covert surveillance campaigns, and data destruction attacks aimed at disrupting critical infrastructure, often without a clear indication of whether the underlying motive is political disruption or financial gain. 

Consequently, attribution efforts have been complicated by the convergence of these motives, resulting in the increasing overlap between cyber espionage, sabotage, and criminal activity. Cybersecurity dynamics are also influenced by the political and social significance of the digital space within the region.

Digital platforms, data flows, and communication infrastructure are frequently regulated by Middle Eastern governments as a matter of national stability and regime security. Consequently, social media and messaging platforms have evolved into contested environments where state institutions, activists, extremist organizations, and influence networks compete to shape narratives. 

In times of conflict or political instability, this competition can take the form of distributed denial-of-service attacks, coordinated disinformation campaigns, doxxing operations, and claims of data breaches aimed at putting pressure on political opponents or influencing public opinion. 

With the increasing use of artificial intelligence tools for creating synthetic media, automating propaganda, or manipulating information flows, it has become harder for organizations to maintain reliable situational awareness during emergencies. The integration of artificial intelligence and autonomous technologies into military and security operations across the region adds a further emerging dimension. 

New cybersecurity vulnerabilities are inevitable as governments and non-state actors experiment with AI-enabled surveillance, targeting, and operational coordination systems. When such systems depend on complex software supply chains or foreign technological expertise, those dependencies become potential entry points for intrusion, manipulation, and espionage. 

According to security specialists, interference with these technologies could have consequences beyond the theft of data, impacting battlefield decision-making, operational reliability, or strategic control over sensitive defense capabilities, among other things. 

Institutions are not the only ones to face such risks. Technology-facilitated abuse has become increasingly problematic for vulnerable communities as it intersects with personal safety concerns and digital rights. 

A number of places in the region have experienced an increase in the spread of manipulated images and deepfake content as a result of technology-facilitated abuse, including impersonation schemes and sextortion. Many victims experience significant social stigma or legal barriers when seeking assistance, which can discourage them from reporting and allow perpetrators to operate with relative impunity. 

In combination, these trends illustrate that cybersecurity is not limited to protecting networks or infrastructure in the Middle East. A complex intersection of national security, information control, technological competition, and social vulnerability has resulted in a situation where the region is particularly vulnerable to cyber activity arising from geopolitical tensions.

Cyberattacks Shift Tactics as Hackers Exploit User Behavior and AI, Experts Warn

 

Cybersecurity threats are evolving rapidly, forcing businesses to rethink how they approach digital security. Experts say modern cyberattacks are no longer focused solely on breaking technical defenses but are increasingly designed to exploit everyday user behavior. 
 
According to industry observers, files downloaded by employees have become a common entry point for cybercriminals. Items such as invoices, installers, documents, and productivity tools are often downloaded without careful verification, creating opportunities for attackers. 

“The Downloads folder has quietly become one of the hottest pieces of real estate for cybercriminals,” said Sanket Atal, senior vice president of engineering and country head at OpenText India. 

“Attackers are not trying to break cryptography anymore. They’re hijacking habits.” Research cited by the company indicates that more than one third of consumer malware infections are first detected in the Downloads directory. 

Security specialists say this reflects a broader shift in how cyberattacks are designed, with attackers relying more on social engineering and multi-stage malware. Atal said malicious files frequently appear harmless when first opened. “These files often look completely harmless at first,” he said. 

“They only later pull in ransomware components or credential-stealing payloads. It is a multi-stage approach that is very difficult to catch with signature-based tools.” Experts say the rise in such attacks is also linked to the growing industrialization of cybercrime. 

Modern ransomware groups and information-stealing operations increasingly operate like structured businesses that continuously test and refine their methods. “Ransomware-as-a-service groups and info-stealer operators are constantly refining their lures,” Atal said. 

“They are comfortable using SEO-poisoned websites, fake update prompts, and even ‘productivity tools’ to get users to download something that looks normal.” India’s rapidly expanding digital ecosystem has made it an attractive target for attackers. 

The combination of millions of new internet users, the widespread use of personal devices for work, and the overlap between personal and professional computing environments increases exposure to risk. 

“When a poisoned file lands in a Downloads folder on a personal device, it can easily become an entry point into enterprise systems,” Atal said. “Especially when that same device is used for banking, office work, and email.” Artificial intelligence is further changing the threat landscape. 

Generative AI tools can now produce convincing phishing messages that mimic corporate communication styles and reference real projects. “AI has removed the traditional visual cues people relied on to spot scams,” Atal said. 

“Generative models now write in perfect business language, reuse an organisation’s tone, and reference real projects scraped from public sources.” Security analysts say deepfake technology is also being used to manipulate business processes. 

Synthetic video calls and cloned voices have been used to approve financial transactions in some cases. Another emerging pattern is the rise of malware-free intrusions, where attackers rely on stolen credentials or legitimate remote access tools instead of traditional malicious software. 

“We’re also seeing a rise in malware-free intrusions,” Atal said. “Attackers use stolen credentials and legitimate remote access tools. Nothing matches a known signature, yet the breach is very real.” Experts say these developments are forcing organizations to shift their security strategies. 

Instead of focusing solely on scanning files and attachments, security teams are increasingly monitoring behavior patterns across users, devices, and systems. “The first shift is moving from content to behaviour,” Atal said. 

“Instead of just scanning attachments, organisations need to focus on whether a user or service account is behaving consistently with historical and peer norms.” Security specialists also emphasize the importance of integrating identity verification with threat detection systems. 
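As a toy illustration of that shift from content to behaviour, the sketch below flags an account whose daily activity jumps far outside its own recent baseline. The 3-sigma threshold and the download-count metric are arbitrary choices for the example, not parameters recommended by OpenText or any particular product.

```python
from statistics import mean, stdev

def is_anomalous(daily_event_counts: list[int], todays_count: int,
                 sigma_threshold: float = 3.0) -> bool:
    """Compare today's activity for one account against its own recent history."""
    if len(daily_event_counts) < 2:
        return False  # not enough history to form a baseline
    baseline_mean = mean(daily_event_counts)
    baseline_stdev = stdev(daily_event_counts) or 1.0  # avoid divide-by-zero
    z_score = (todays_count - baseline_mean) / baseline_stdev
    return z_score > sigma_threshold

if __name__ == "__main__":
    history = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5]   # downloads per day for one account
    print(is_anomalous(history, 6))    # False: consistent with the baseline
    print(is_anomalous(history, 40))   # True: an order-of-magnitude jump
```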

When phishing messages become difficult to distinguish from legitimate communication, identity context becomes a key factor in identifying suspicious activity. In addition, companies are beginning to rely on artificial intelligence for defensive purposes. 

Automated systems can help security teams manage the growing volume of alerts by identifying patterns and highlighting potential threats more quickly. “Security teams are overwhelmed by alerts,” Atal said. 

“AI-based triage is essential to reduce noise, correlate weak signals, and generate plain-language narratives so analysts can act faster.” Despite increased awareness of cybersecurity threats, several misconceptions persist. 

Many organizations assume that the most serious cyberattacks originate from sophisticated state-backed actors. “One big myth is that serious attacks only come from exotic nation-state actors,” Atal said. “The truth is, most breaches begin with everyday issues such as phishing, malicious downloads, weak passwords, or cloud misconfigurations.” 

Another misconception is that smaller organizations are less likely to be targeted. However, experts say attackers often focus on industries with weaker security controls, including healthcare providers, hospitality companies, and smaller financial institutions. 

Cybersecurity specialists also warn that many attacks no longer rely on traditional malware. Techniques such as identity-based attacks, business email compromise, and misuse of legitimate administrative tools often bypass standard antivirus defenses. “Identity-based attacks, business email compromise, and abuse of legitimate tools often never trigger traditional antivirus,” Atal said. 

“The starting point can be any user, device, or partner that has access to data.” Industry leaders say the challenge is compounded by the fact that many cybersecurity systems were designed for a different technological environment. 

Vinayak Godse, chief executive of the Data Security Council of India, said existing security frameworks were built before the widespread adoption of digital services and artificial intelligence. 

“In the digitalisation space, we are creating tremendous experiences, productivity gains, and new possibilities,” Godse said. “But the security frameworks we have in place were designed for an older paradigm.” He added that attackers today are capable of identifying and exploiting even a single vulnerability in complex digital systems. 

“The current attack ecosystem can identify and exploit even one vulnerability out of millions, or even billions,” Godse said. Experts say the erosion of traditional network boundaries has further complicated security efforts. Remote work, cloud computing, software-as-a-service platforms, and third-party integrations mean that sensitive systems can now be accessed from a wide range of devices and locations. 

“A user on a personal phone, accessing a SaaS application from home Wi-Fi, is still inside your risk perimeter,” Atal said. As a result, organizations are increasingly focusing on continuous verification and context-aware monitoring rather than relying solely on perimeter defenses. 

According to Atal, the effectiveness of AI-driven security tools ultimately depends on the quality of underlying data. If data sources are fragmented or poorly labeled, even advanced analytics systems may struggle to detect threats. 
 
“Every advanced AI-driven security use case boils down to whether you can see your data and whether you can trust it,” he said. Security experts say that integrating identity signals, access patterns, and data sensitivity into unified monitoring systems can help organizations identify suspicious activity more effectively. 

“When data, identity, and threat signals are unified, security teams can see a connected narrative,” Atal said. “A login, a download, and a data access event stop being isolated alerts and start telling a story.” 

 
Despite advances in technology, experts say human behavior remains a critical factor in cybersecurity. 

“In today’s cyber landscape, the front line is no longer the firewall,” Atal said. “It is the file you choose to open and the behaviour that follows.”

New Copilot Setting May Access Activity From Other Microsoft Services. Here’s How Users Can Disable It

 



A recently noticed configuration inside Microsoft Copilot may allow the AI tool to reference activity from several other Microsoft platforms, prompting renewed discussion around data privacy and AI personalization. The option, which appears within Copilot’s settings, enables the assistant to use information connected to services such as Bing, MSN, and the Microsoft Edge browser. Users who are uncomfortable with this level of integration can switch the feature off.

Like many modern artificial intelligence systems, Copilot attempts to improve the usefulness of its responses by understanding more about the person interacting with it. The assistant normally does this by remembering past conversations and storing certain details that users intentionally share during chats. These stored elements help the AI maintain context across multiple interactions and generate responses that feel more tailored.

However, a specific configuration called “Microsoft usage data” expands that capability. According to reporting first highlighted by the technology outlet Windows Latest, this setting allows Copilot to reference information associated with other Microsoft services a user has interacted with. The option appears within the assistant’s Memory controls and is available through both the Copilot website and its mobile applications. Observers believe the setting was introduced recently as part of Microsoft’s effort to strengthen personalization features in its AI tools.

The Memory feature in Copilot is designed to help the assistant retain useful context. Through this system, the AI can recall earlier conversations, remember instructions or factual information shared by users, and potentially reference certain account-linked activity from other Microsoft products. The idea is that by understanding more about a user’s interests or previous discussions, the assistant can provide more relevant answers.

In practice, such capabilities can be helpful. For instance, a user who discussed a topic with Copilot previously may want to continue that conversation later without repeating the entire background. Similarly, individuals seeking guidance about personal or professional matters may receive more relevant suggestions if the assistant has some awareness of their preferences or circumstances.

Despite the convenience, the feature also raises questions about privacy. Some users may be concerned that allowing an AI assistant to accumulate information from multiple services could expose more personal data than expected. Others may want to know how that information is used beyond personalizing conversations.

Microsoft addresses these concerns in its official Copilot documentation. In its frequently asked questions section, the company states that user conversations are processed only for limited purposes described in its privacy policies. According to Microsoft, this information may be used to evaluate Copilot’s performance, troubleshoot operational issues, identify software bugs, prevent misuse of the service, and improve the overall quality of the product.

The company also says that conversations are not used to train AI models by default. Model training is controlled through a separate configuration, which users can choose to disable if they do not want their interactions contributing to AI development.

Microsoft further clarifies that Copilot’s personalization settings do not determine whether a user receives targeted advertisements. Advertising preferences are managed through a different option available in the Microsoft account privacy dashboard. Users who want to stop personalized advertising must adjust the Personalized ads and offers setting separately.

Even with these explanations, privacy concerns remain understandable, particularly because Microsoft documentation indicates that Copilot’s personalization features may already be activated automatically in some cases. When reviewing the settings on a personal device, these options were found to be switched on. Users who prefer not to allow Copilot to access broader usage data may therefore wish to disable them.

Checking these settings is straightforward. Users can open Copilot through its website or mobile application and ensure they are signed in with their Microsoft account. On the web interface, selecting the account name at the bottom of the left-hand panel opens the Settings menu, where the Memory section can be accessed. In the mobile application, the same controls are available through the side navigation menu by tapping the account name and choosing Memory.

Inside the Memory settings, users will see a general control labeled “Personalization and memory.” Two additional options appear beneath it: “Facts you’ve shared,” which stores information provided directly during conversations, and “Microsoft usage data,” which allows Copilot to reference activity from other Microsoft services.

To limit this behavior, users can switch off the Microsoft usage data toggle. They may also disable the broader Personalization and memory option if they prefer that the AI assistant does not retain contextual information about their interactions. Copilot also provides a “Delete all memory” function that removes all stored data from the system. If individual personal details have been recorded, they can be reviewed and deleted through the editing option next to “Facts you’ve shared.”

Security and privacy experts generally advise caution when sharing information with AI assistants, even when personalization features remain enabled. Sensitive or confidential details should not be entered into conversations. Microsoft itself recommends avoiding the disclosure of certain types of highly personal data, including information related to health conditions or sexual orientation.

The broader development reflects a growing trend in the technology industry. As AI assistants become integrated across multiple platforms and services, companies are increasingly using cross-service data to make these tools more helpful and personalized. While this approach can improve convenience and usability, it also underscores the need for transparent privacy controls so users remain aware of how their information is being used and can adjust those settings when necessary.

Mental Health Apps With Millions of Downloads Filled With Security Vulnerabilities


Mental health apps may have flaws

Various mental health mobile applications with millions of downloads on Google Play have security flaws that could leak users’ personal medical data.

Researchers found more than 85 medium- and high-severity vulnerabilities in a single app, which could be abused to compromise users' therapy data and privacy. 

Some of the products are AI companions built to help people dealing with anxiety, clinical depression, bipolar disorder, and stress. 

Six of the ten studied applications claimed that user chats are private and securely encrypted on the vendor's servers. 

Oversecured CEO Sergey Toshin said that “Mental health data carries unique risks. On the dark web, therapy records sell for $1,000 or more per record, far more than credit card numbers.”

More than 1500 security vulnerabilities reported 

Experts scanned ten mobile applications promoted as tools that help with mental health issues, and found 1,575 security flaws: 938 low-severity, 538 medium-severity, and 54 rated high-severity. 

No critical issues were found, but a few of the flaws could be leveraged to steal login credentials, perform HTML injection, locate the user, or spoof notifications. 

Experts used the Oversecured scanner to analyse the APK files of the mental health apps for known flaw patterns in different categories. 

Using Intent.parseUri() on an externally controlled string, one treatment app with over a million downloads launches the generated messaging object (intent) without verifying the target component. 

This makes it possible for an attacker to compel the application to launch any internal activity, even if it isn't meant for external access.

Oversecured said, “Since these internal activities often handle authentication tokens and session data, exploitation could give an attacker access to a user’s therapy records.”

Another problem is local data storage that grants read access to every app on the device, which, depending on what is saved, can expose therapy details such as Cognitive Behavioural Therapy (CBT) exercises, session notes, and therapy entries. Experts also found plaintext configuration data and backend API endpoints inside the APK resources. 
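Because an APK is an ordinary ZIP archive, a very small script is enough to reproduce the last of those checks - spotting plaintext endpoints left in a package's resources. The sketch below is a rough string-matching heuristic for auditing one's own builds, not a reimplementation of the Oversecured scanner, and it will miss strings stored only in compiled binary resources.

```python
import re
import zipfile

URL_PATTERN = re.compile(rb"https?://[A-Za-z0-9._/\-]+")

def find_plaintext_endpoints(apk_path: str) -> set[bytes]:
    """Scan resource, asset, and XML entries of an APK for URL-looking byte strings."""
    endpoints: set[bytes] = set()
    with zipfile.ZipFile(apk_path) as apk:
        for name in apk.namelist():
            if name.startswith(("res/", "assets/")) or name.endswith(".xml"):
                endpoints.update(URL_PATTERN.findall(apk.read(name)))
    return endpoints

if __name__ == "__main__":
    for url in sorted(find_plaintext_endpoints("app-release.apk")):  # path is illustrative
        print(url.decode(errors="replace"))
```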

 “These apps collect and store some of the most sensitive personal data in mobile: therapy session transcripts, mood logs, medication schedules, self-harm indicators, and in some cases, information protected under HIPAA,” Oversecured said.

AI-Powered Cybercrime Hits 600+ FortiGate Firewalls Across 55 Countries, AWS Warns

 

Cybercriminals using readily available generative AI tools managed to breach more than 600 internet-facing FortiGate firewalls across 55 countries within a little over a month, according to a recent incident analysis released by Amazon Web Services (AWS).

The operation, active between mid-January and mid-February, did not rely on sophisticated zero-day vulnerabilities. Instead, attackers automated large-scale attempts to access exposed systems by rapidly testing weak or reused credentials—essentially the digital equivalent of trying every unlocked door, but at high speed with the assistance of AI.

AWS investigators believe the operation was carried out by a financially motivated Russian-speaking group. The attackers scanned for publicly accessible FortiGate management interfaces, attempted to log in using commonly reused passwords, and once successful, extracted configuration files that provided detailed insight into the victims’ network environments.

According to AWS’s security team, the threat actors leveraged multiple commercially available AI tools to produce attack playbooks, scripts, and operational documentation. This allowed a relatively small or less technically advanced group to conduct a campaign that would typically require greater manpower and development effort. Analysts also discovered traces of AI-generated code and planning materials on compromised systems, indicating that AI tools were used extensively throughout the operation rather than just for occasional scripting tasks.

"The volume and variety of custom tooling would typically indicate a well-resourced development team," said CJ Moses, CISO at Amazon. "Instead, a single actor or very small group generated this entire toolkit through AI-assisted development."

After gaining access to the firewalls, the attackers retrieved configuration data containing administrator and VPN credentials, network architecture information, and firewall policies. Armed with these details, they attempted deeper intrusions by targeting directory services such as Active Directory, harvesting credentials, and exploring options for lateral movement across compromised networks. Backup infrastructure, including servers running Veeam, was also targeted during the intrusions.

AWS researchers noted that although the tools used in the campaign were functional, they appeared somewhat crude. The scripts showed basic parsing methods and repetitive comments often associated with machine-generated drafts. Despite their imperfections, the tools proved effective enough for large-scale automated attacks. When systems proved difficult to compromise, the attackers often abandoned them and shifted focus to easier targets, suggesting that their strategy prioritized volume over precision.

The affected organizations were spread across several regions, including Europe, Asia, Africa, and Latin America. The activity did not appear to focus on a single sector or country, indicating opportunistic targeting. However, investigators observed clusters of incidents suggesting that some breaches may have provided access to managed service providers or shared infrastructure, potentially increasing the scale of downstream exposure.

AWS emphasized that many of the compromises could have been avoided with standard cybersecurity practices. Preventing management interfaces from being publicly accessible, implementing multi-factor authentication, and avoiding password reuse would have significantly reduced the attackers’ chances of success.
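A basic sanity check along those lines is to probe the firewall management addresses from a vantage point outside the corporate network and confirm that nothing answers. The Python sketch below is illustrative only; the hostnames and ports are placeholders rather than real FortiGate defaults:

import socket

# Placeholder management targets to test from outside the corporate network.
MANAGEMENT_TARGETS = [
    ("fw1.example.com", 443),
    ("fw1.example.com", 10443),
]

def is_reachable(host, port, timeout=3.0):
    # True if a TCP connection can be opened to the management interface.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in MANAGEMENT_TARGETS:
    status = "EXPOSED to the internet" if is_reachable(host, port) else "not reachable"
    print(f"{host}:{port} -> {status}")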

The report comes shortly after Google cautioned that cybercriminal groups are increasingly integrating generative AI technologies—including tools such as Gemini AI—into their operations. These technologies are being used for tasks such as reconnaissance, target profiling, phishing campaign creation, and malware development.


Researchers Find Critical Zero-Day Vulnerabilities in Foxit and Apryse PDF Platforms

 

PDF files are often seen as simple digital documents, but recent research shows they have evolved into complex software environments that can expose corporate systems to cyber risks. Modern PDF tools now function more like application platforms than basic viewers, potentially giving attackers pathways into private networks. 

A study by Novee Security examined two widely used platforms, Foxit and Apryse. Released on February 18, 2026, the report identified 13 categories of vulnerabilities and 16 potential attack paths that could allow systems to be compromised. 

Researchers say these issues are more than minor bugs. Some zero-day flaws could allow attackers to run commands on backend servers or take over user accounts without needing to compromise a browser or operating system. To find the vulnerabilities, analysts first identified common patterns that signal security weaknesses. These patterns were then used to train an AI system that scanned large volumes of code much faster than manual review alone. 
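The report does not publish the detection patterns themselves, but the general approach of pre-filtering code for known risky constructs before a slower review pass is straightforward to illustrate. The Python sketch below is a simplified stand-in, with generic patterns and a hypothetical source tree:

import re
from pathlib import Path

# Generic examples of risky constructs; real rule sets would be far richer.
SINK_PATTERNS = {
    "dynamic eval": re.compile(r"\beval\s*\("),
    "HTML injection sink": re.compile(r"\.innerHTML\s*="),
    "remote config fetch": re.compile(r"\bfetch\s*\(\s*['\"]https?://"),
}

def prefilter(root):
    # Return (file, line number, label) for every line matching a risky pattern.
    hits = []
    for path in Path(root).rglob("*.js"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for label, pattern in SINK_PATTERNS.items():
                if pattern.search(line):
                    hits.append((str(path), lineno, label))
    return hits

for file, lineno, label in prefilter("./webviewer-src"):  # hypothetical source tree
    print(f"{file}:{lineno}: possible {label}")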

By combining human insight with automated analysis, the system detected several high-impact issues that conventional scanning tools might miss. One major flaw appeared in Foxit’s digital signature server, which verifies electronically signed documents. Some of the most serious findings involve one-click exploits, where simply opening a document or loading a link can trigger malicious activity. CVE-2025-70402 and CVE-2025-70400 affect Apryse WebViewer, which trusted remote configuration files without proper validation, enabling attackers to run malicious scripts. 
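The underlying defect class is straightforward to picture: a viewer willing to load configuration from any URL an attacker supplies. A hedged Python sketch of the missing check, with a hypothetical allowlist and loader (the vendors' actual fixes have not been published in this form), might look like this:

from urllib.parse import urlparse

TRUSTED_CONFIG_ORIGINS = {"https://docs.example.com"}  # hypothetical trusted origin

def is_trusted_config_url(url):
    # Accept only HTTPS URLs whose origin is on the explicit allowlist.
    parsed = urlparse(url)
    origin = f"{parsed.scheme}://{parsed.netloc}"
    return parsed.scheme == "https" and origin in TRUSTED_CONFIG_ORIGINS

def load_viewer_config(url):
    if not is_trusted_config_url(url):
        raise ValueError(f"refusing to load viewer config from untrusted origin: {url}")
    # Only after the origin check would the file be fetched and parsed.
    return {}

print(is_trusted_config_url("https://docs.example.com/viewer.json"))   # True
print(is_trusted_config_url("https://attacker.example.net/cfg.json"))  # False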

Another flaw, CVE-2025-70401, showed that malicious code could be hidden in the “Author” field of a PDF comment and executed when a user interacts with it. Researchers also identified CVE-2025-66500, which affects Foxit browser plugins. In this case, manipulated messages could trick the plugin into running harmful scripts within the application. Testing further showed that certain weaknesses could allow attackers to send a simple request that triggers command execution on a server, granting unauthorized access to parts of the system. 
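The "Author" field issue is a classic injection-through-metadata pattern: a string the application treats as trusted display text is actually attacker-controlled. The Python sketch below illustrates the general defense, escaping untrusted metadata before it reaches an HTML-based viewer UI; it is a generic pattern, not the vendors' patch:

import html

def render_comment_header(author, subject):
    # Treat annotation metadata as untrusted input and escape it before display.
    safe_author = html.escape(author, quote=True)
    safe_subject = html.escape(subject, quote=True)
    return f"<div class='comment'><b>{safe_author}</b>: {safe_subject}</div>"

# A hostile author field is rendered as inert text rather than executed.
print(render_comment_header("<img src=x onerror=alert(1)>", "Review note"))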

These vulnerabilities highlight how small interactions or overlooked behaviors can lead to significant security risks. Experts say the core problem lies in how modern PDF platforms are built. Many now rely on web technologies such as iframes and server-side processing, yet organizations still treat PDF files as harmless static documents. This mismatch can create “trust boundary” failures where software accepts external data without sufficient validation. 

Both vendors were notified before the research was published, and the vulnerabilities were assigned official CVE identifiers to support patching efforts. The findings highlight how document-processing systems—often overlooked in security planning—can become complex attack surfaces if not properly secured.

ECB Tightens Oversight of Banks’ Growing AI Sector Risks

 

The European Central Bank is intensifying its oversight of how eurozone lenders finance the fast‑growing artificial intelligence ecosystem, reflecting concern that the boom in data‑centre and AI‑related infrastructure could hide pockets of credit and concentration risk.

In recent weeks, the ECB has sent targeted requests to a select group of major European banks, asking for granular data on their loans and other exposures to AI‑linked activities such as data‑centre construction, vendor financing and large project‑finance structures. Supervisors want to map where credit is clustering around a small set of hyperscalers, cloud providers and specialized hardware suppliers, amid global estimates of trillions of dollars in planned AI‑related capital spending. Officials stress this is a diagnostic exercise rather than an immediate step toward higher capital charges, but it marks a shift from general discussion to hands‑on information gathering.

The push comes as European banks race to harness AI inside their own operations, from credit scoring and fraud detection to automating back‑office tasks and enhancing customer service. Supervisors acknowledge that these technologies promise sizeable efficiency gains and new revenue opportunities, yet warn that many institutions still lack mature governance for AI models, including robust data‑quality controls, explainability, and clear accountability for automated decisions. The ECB has repeatedly argued that AI adoption must be matched by stronger risk‑management frameworks and continuous human oversight over model life cycles.

Regulators are also increasingly uneasy about systemic dependencies created by the dominance of a handful of mostly non‑EU AI and cloud providers. Heavy reliance on these external platforms raises concerns about operational resilience, data protection, and geopolitical risk that could spill over into financial stability if disruptions occur. At the same time, the ECB’s broader financial‑stability assessments have highlighted stretched valuations in some AI‑linked equities, warning that a sharp correction could transmit stress into bank balance sheets through both direct exposures and wider market channels. 

For now, supervisors frame their AI‑sector review as part of a wider effort to “encourage innovation while managing risks,” aligning prudential expectations with Europe’s new AI Act and digital‑operational‑resilience rules. Banks are being nudged to tighten contract terms, strengthen model‑validation teams and improve documentation before scaling AI‑driven business lines. The message from Frankfurt is that AI remains welcome as a driver of competitiveness in European finance—but only if lenders can demonstrate they understand, measure and contain the new concentrations of credit, market and operational risk that accompany the technology’s rapid rise.

Optimizely Reports Data Breach Linked to Sophisticated Vishing Incident


 

Sitting at the crossroads of technology, marketing intelligence, and vast stores of corporate data, digital experience platforms are becoming increasingly attractive targets for cybercriminals seeking an entry point into enterprise infrastructure.

Optimizely recently revealed that a security incident was initiated not by sophisticated malware, but by a carefully orchestrated social engineering scheme. Attackers linked to the threat group ShinyHunters used a voice-phishing tactic in early February 2026 to deceive a company employee and gain unauthorized access to parts of the company's internal environment. 

Investigators determined that the attackers were able to extract limited business contact information from internal resources even though the intrusion was contained before it could reach sensitive customer databases or critical operational systems. 

The episode shows that even mature technology companies remain vulnerable to manipulation-based attacks that bypass technical defenses and target the human layer of security. 

Optimizely, a leading provider of digital experience infrastructure, develops tools that assist organizations in managing web properties, conducting marketing experiments, and refining online customer journeys based on data. 

Among its many capabilities are A/B experimentation frameworks, enterprise-grade content management systems, and integrated ecommerce tools that are designed to assist businesses in improving conversion performance and audience engagement across a variety of digital channels. 

Over 10,000 organizations worldwide use the company's technology stack, including H&M, PayPal, Toyota, Nike, and Salesforce, among others. A number of customers have recently received notifications detailing this incident. According to the company, the attackers gained access through what it described as a "sophisticated voice-phishing attack" on February 11. 

The internal investigation indicates that although the threat actors penetrated a limited segment of the corporate environment, the intrusion did not result in privilege escalation, and no malicious payloads or malware were deployed within the network. 

As a result, the breach remained constrained to a narrow scope, supporting the company's assessment that the attackers had limited access and were unable to reach sensitive customer and operational data. Researchers have identified the intrusion as the work of the threat actor collective ShinyHunters, a financially motivated group involved in cybercrime since at least 2020. 

It is well known for orchestrating high-visibility data theft operations and subsequently distributing or monetizing compromised databases through dark web forums and underground marketplaces. A great deal of its campaign effort has been directed toward technology and telecommunications organizations, areas where internal access to corporate databases and partner information can prove to be very useful. 

According to analysts, the group has demonstrated a high degree of flexibility in its intrusion techniques, combining credential-based attacks such as credential stuffing with increasingly persuasive social engineering, including voice-based deception schemes, to achieve its objectives. 

Although the precise geographical origins of the actors remain unknown, their operational footprint spans multiple regions, reflecting a focus on monetizing corporate information or using stolen data to exert reputational and financial pressure on targeted organizations. In the immediate case, organizations connected to the affected environment appear to have been exposed only to basic business contact information, not sensitive customer information. 

Cybersecurity specialists caution, however, that even seemingly routine information can provide a foothold for follow-on attacks. By using contact directories, email addresses, and professional identifiers, attackers may be able to craft convincing phishing emails or conduct additional social engineering attempts in order to gather credentials or financial information. 

In addition to enabling spam operations, this type of data can facilitate fraudulent outreach that impersonates trusted partners or internal employees. As a precaution, security experts recommend that employees and partners remain alert to unexpected communications, independently verify the legitimacy of telephone calls or email requests, and maintain multi-factor authentication on all corporate accounts. 

A proactive approach to security hygiene and open communication with affected stakeholders are widely regarded as essential to minimizing the impact of incidents of this nature on an organization's operations and reputation. 

Optimizely did not disclose the exact number of customers whose information may have been exposed; however, it indicated in its breach notification that the activity closely resembles that of a loosely connected network of attackers known for persistent social engineering campaigns. 

According to the firm, communications received during the incident reflected patterns commonly associated with groups that use voice phishing to manipulate employees into providing access to corporate systems. 

That operational style is commonly attributed to ShinyHunters, which has been linked to a series of recent breaches affecting major online platforms and consumer brands, including Canada Goose, Panera Bread, Betterment, SoundCloud, Pornhub, Figure, and Match Group, which operates Tinder, Hinge, Meetic, Match.com, and OkCupid, among others. 

Not every incident has been tied to a single coordinated campaign, but numerous victims have reported successful intrusions stemming from voice-phishing operations designed to compromise enterprise single sign-on environments. 

Attackers have reportedly impersonated internal IT support staff and contacted employees directly, steering them toward counterfeit authentication portals that mimic legitimate corporate logins. Through these interactions, the attackers obtained account credentials and one-time multi-factor authentication codes, allowing them to bypass standard access controls. The techniques continue to evolve, with threat actors using device-code phishing to obtain authentication tokens tied to enterprise identity services by exploiting the legitimate OAuth device authorization flow. 
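For context, device-code phishing abuses the legitimate OAuth 2.0 device authorization grant (RFC 8628): the attacker initiates the flow, then persuades the victim to enter the short user code on the provider's genuine verification page, after which the identity provider issues the token to whoever started the flow. The simplified Python sketch below shows the legitimate flow itself, with placeholder endpoints and client ID; it is illustrative and not tied to any specific identity provider:

import json
import time
import urllib.error
import urllib.parse
import urllib.request

DEVICE_ENDPOINT = "https://login.example.com/oauth2/devicecode"  # placeholder endpoint
TOKEN_ENDPOINT = "https://login.example.com/oauth2/token"        # placeholder endpoint
CLIENT_ID = "example-client-id"                                  # placeholder client

def post_form(url, fields):
    # POST a form-encoded body and return the JSON response; OAuth errors arrive as 4xx JSON.
    data = urllib.parse.urlencode(fields).encode()
    try:
        with urllib.request.urlopen(urllib.request.Request(url, data=data)) as resp:
            return json.loads(resp.read())
    except urllib.error.HTTPError as err:
        return json.loads(err.read())

# Step 1: obtain a device_code plus a short user_code for the user to type in.
grant = post_form(DEVICE_ENDPOINT, {"client_id": CLIENT_ID, "scope": "openid profile"})
print("Visit", grant["verification_uri"], "and enter code", grant["user_code"])

# Step 2: poll the token endpoint; the token is issued to whoever started the flow.
while True:
    time.sleep(grant.get("interval", 5))
    token = post_form(TOKEN_ENDPOINT, {
        "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        "device_code": grant["device_code"],
        "client_id": CLIENT_ID,
    })
    if token.get("error") == "authorization_pending":
        continue
    print("access_token issued" if "access_token" in token else f"flow ended: {token.get('error')}")
    break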

Once a single sign-on account has been compromised, attackers can pivot among integrated corporate applications and cloud-based platforms. That access may extend to enterprise tools such as Microsoft Entra ID, Microsoft 365, Google Workspace, Salesforce, Zendesk, Dropbox, SAP, Slack, Adobe, and Atlassian, enabling an intruder to move laterally across connected services and collect additional corporate information once an initial foothold has been established. 

Ultimately, this incident serves as a reminder that technical safeguards alone are rarely sufficient to stop determined social engineering campaigns. Attackers routinely exploit human trust and routine operational processes to breach organizations with otherwise mature security architectures. 

Security professionals advise strengthening identity-verification procedures for internal support interactions, regularly discussing voice-based fraud with employees, and implementing strong monitoring around single sign-on activity and unusual authentication requests. 

Measures such as conditional access policies, strict enforcement of multi-factor authentication, and rapid incident response protocols can greatly limit what an attacker can do once an initial attempt has been made. 
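On the monitoring side, even a simple review of single sign-on audit logs can surface the patterns described above. The Python sketch below assumes a hypothetical JSON-lines audit export and flags device-code grants along with first-time sign-in countries per account; production deployments would normally rely on the identity provider's built-in risk tooling:

import json
from collections import defaultdict

def flag_suspicious(events_path):
    # Flag device-code grants and sign-ins from countries not previously seen for that user.
    seen_countries = defaultdict(set)
    flagged = []
    with open(events_path) as fh:
        for line in fh:
            event = json.loads(line)  # one JSON event per line (assumed log format)
            user = event["user"]
            country = event.get("country", "unknown")
            if event.get("grant_type") == "device_code":
                flagged.append((user, "device-code grant"))
            elif seen_countries[user] and country not in seen_countries[user]:
                flagged.append((user, f"first sign-in from {country}"))
            seen_countries[user].add(country)
    return flagged

for user, reason in flag_suspicious("sso_audit.jsonl"):  # hypothetical audit export
    print(user, "-", reason)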

The development of voice-driven deception tactics is continuing to prompt companies across the technology sector to prioritize social engineering resilience as a core component of enterprise cybersecurity strategy, rather than as a peripheral issue.

OpenAI’s Codex Security Flags Over 10,000 High-Risk Vulnerabilities in Code Scan

 



Artificial intelligence is increasingly being used to help developers identify security weaknesses in software, and a new tool from OpenAI reflects that shift.

The company has introduced Codex Security, an automated security assistant designed to examine software projects, detect vulnerabilities, confirm whether they can actually be exploited, and recommend ways to fix them.

The feature is currently being released as a research preview and can be accessed through the Codex interface by users subscribed to ChatGPT Pro, Enterprise, Business, and Edu plans. OpenAI said customers will be able to use the capability without cost during its first month of availability.

According to the company, the system studies how a codebase functions as a whole before attempting to locate security flaws. By building a detailed understanding of how the software operates, the tool aims to detect complicated vulnerabilities that may escape conventional automated scanners while filtering out minor or irrelevant issues that can overwhelm security teams.

The technology is an advancement of Aardvark, an internal project that entered private testing in October 2025 to help development and security teams locate and resolve weaknesses across large collections of source code.

During the last month of beta testing, Codex Security examined more than 1.2 million individual code commits across publicly accessible repositories. The analysis identified 792 critical vulnerabilities and 10,561 issues classified as high severity.

Several well-known open-source projects were affected, including OpenSSH, GnuTLS, GOGS, Thorium, libssh, PHP, and Chromium.

Some of the identified weaknesses were assigned official vulnerability identifiers. These included CVE-2026-24881 and CVE-2026-24882 linked to GnuPG, CVE-2025-32988 and CVE-2025-32989 affecting GnuTLS, and CVE-2025-64175 along with CVE-2026-25242 associated with GOGS. In the Thorium browser project, researchers also reported seven separate issues ranging from CVE-2025-35430 through CVE-2025-35436.

OpenAI explained that the system relies on advanced reasoning capabilities from its latest AI models together with automated verification techniques. This combination is intended to reduce the number of incorrect alerts while producing remediation guidance that developers can apply directly.

Repeated scans of the same repositories during testing also showed measurable improvements in accuracy. The company reported that the number of false alarms declined by more than 50 percent while the precision of vulnerability detection increased.

The platform operates through a multi-step process. It begins by examining a repository in order to understand the structure of the application and map areas where security risks are most likely to appear. From this analysis, the system produces an editable threat model describing the software’s behavior and potential attack surfaces.

Using that model as a reference point, the tool searches for weaknesses and evaluates how serious they could be in real-world scenarios. Suspected vulnerabilities are then executed in a sandbox environment to determine whether they can actually be exploited.

When configured with a project-specific runtime environment, the system can test potential vulnerabilities directly against a functioning version of the software. In some cases it can also generate proof-of-concept exploits, allowing security teams to confirm the problem before deploying a fix.

Once validation is complete, the tool suggests code changes designed to address the weakness while preserving the original behavior of the application. This approach is intended to reduce the risk that security patches introduce new software defects.
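OpenAI has not published the internals of this pipeline, but the staged shape it describes (threat model first, candidate findings next, sandbox validation, then a suggested fix) can be sketched abstractly. The Python stub below is purely illustrative and does not correspond to any Codex Security API:

from dataclasses import dataclass

@dataclass
class Finding:
    location: str
    description: str
    reproduced: bool = False
    suggested_fix: str = ""

def build_threat_model(repo_path):
    # Stage 1: map entry points and trust boundaries (stubbed for illustration).
    return {"repo": repo_path, "entry_points": ["http handlers", "file parsers"]}

def hunt_for_weaknesses(threat_model):
    # Stage 2: in a real system, model-driven analysis would run against the threat model.
    return [Finding("parser.c:214", "possible out-of-bounds read")]

def validate_in_sandbox(finding):
    # Stage 3: attempt to reproduce the issue against a running copy of the code.
    finding.reproduced = True  # stubbed result
    return finding

def suggest_fix(finding):
    # Stage 4: propose a patch that addresses the flaw without changing behavior.
    finding.suggested_fix = "add a bounds check before the read"
    return finding

model = build_threat_model("./example-repo")
for finding in hunt_for_weaknesses(model):
    finding = suggest_fix(validate_in_sandbox(finding))
    if finding.reproduced:
        print(f"{finding.location}: {finding.description} -> {finding.suggested_fix}")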

The launch of Codex Security follows the introduction of Claude Code Security by Anthropic, another system that analyzes software repositories to uncover vulnerabilities and propose remediation steps.

The emergence of these tools reflects a broader trend within cybersecurity: using artificial intelligence to review vast amounts of software code, detect vulnerabilities earlier in the development cycle, and assist developers in securing critical digital infrastructure.