How to Spot and Avoid LinkedIn Scams: A Complete Guide to Staying Safe Online

 

Most people trust LinkedIn for connecting careers, finding jobs, or growing businesses - yet that very trust opens doors for fraudsters. Because profiles often reveal detailed backgrounds, attackers pull facts straight from bios to craft believable tricks. Spotting odd requests or sudden offers helps block risks before they grow. Awareness matters, especially when messages seem too eager or oddly timed. 

Most people come across false job listings on LinkedIn at some point. Fake recruiter accounts tend to advertise positions offering large salaries, little work, fast placement, or overseas moves. Often, these deals turn out poorly once applicants get asked for private details or required to cover costs like setup fees, instruction modules, or tools. A different but frequent method relies on deceptive messages that mimic real notifications from the platform - these contain harmful web addresses meant to capture account passwords and access codes. 

One way attackers operate now involves tailored tactics, including spear-phishing. Studying someone's online activity helps them design messages appearing genuine and familiar. Sometimes these interactions shift from LinkedIn to apps such as WhatsApp or Telegram, avoiding detection more easily. Moving communication elsewhere raises serious concerns - it typically precedes deeper manipulation. Another trend gaining ground includes scams based on fake investments or romantic connections; here, confidence grows slowly until false money offers appear, frequently tied to digital currency. Watch out for certain red flags when using professional platforms. 

When messages push you to act fast, promise big rewards, or ask for private data, stay cautious. A profile showing few contacts, missing background, or odd job timelines might not be genuine. Confirm who you're dealing with by checking corporate sites - this basic move often gets ignored. Start smart - shielding your online presence begins with straightforward habits. Click only trusted links, since risky ones open doors to trouble. Two-step login adds a layer of safety, making breaches harder. Strong passwords matter; reusing them weakens protection. 
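The "strong passwords" habit above can be made concrete. The sketch below is a minimal Python illustration of generating a random, high-entropy password with the standard-library `secrets` module; the length and character set are illustrative choices, not a recommendation of any specific policy.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password mixing letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    # secrets draws from the OS CSPRNG, unlike the predictable `random` module
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

A password manager makes such unique-per-site passwords practical, and pairing them with two-step login covers the case where one still leaks.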

Staying inside LinkedIn messages helps keep exchanges secure. Sharing less personal detail lowers exposure quietly. Privacy controls fine-tune who sees what - adjust them often. Safety grows when small steps add up behind the scenes. Right away, cut contact if something feels off - then alert LinkedIn about the account. 

When financial data might be exposed, changing passwords fast becomes key, while also warning your bank without delay. Even as the platform expands, threats rise at the same pace, which means staying alert matters more than any tool. Awareness acts quietly but powerfully, standing between safety and harm.

Residential Proxies Evade IP Reputation Checks in 78% of 4 Billion Sessions

 

Residential proxy networks are now evading IP‑reputation‑based security controls in a majority of malicious sessions, greatly undercutting a core pillar of network defense. A recent analysis by cybersecurity intelligence firm GreyNoise found that residential‑proxy‑routed traffic escaped IP‑reputation checks in 78% of roughly 4 billion malicious sessions over a three‑month window. Attackers rely on ordinary home and mobile‑network IP addresses passed through these proxies, making it hard for defenders to distinguish malicious scans from legitimate user traffic.

How residential proxies work 

Residential proxies route traffic through real‑world consumer devices—home routers, mobile phones, and small‑business connections—owned by ordinary users or enrolled into third‑party bandwidth‑sharing schemes. Many of these IPs are short‑lived, appearing only once or twice in attacker logs before being rotated, which prevents reputation feeds from cataloging them in time. About 89.7% of the residential IPs involved in attacks are active for under a month, with only small fractions persisting beyond two or three months.

The main problem is that IP reputation typically tags long‑running or heavily abused addresses, yet most residential proxy IPs are highly transient and geographically scattered. GreyNoise’s data shows the attacking residential IPs come from 683 different ISPs, blending with normal customer traffic and diluting any clear “bad‑IP” signal. Because attackers mainly use these proxies for low‑volume network scanning and reconnaissance instead of direct exploits, traffic patterns look benign at the network layer, letting 78% of such sessions slip past reputation‑based filters.

The study points to China, India, and Brazil as major sources of residential‑proxy traffic, with usage patterns that mirror human behavior, such as a noticeable drop in activity at night. GreyNoise identifies two main ecosystems behind these proxies: IoT botnets and compromised consumer devices whose installed software—such as free VPNs and ad‑blocking apps—secretly sells the device’s bandwidth. SDKs embedded in these apps enroll consenting or unaware users into proxy networks that monetize idle home‑network capacity.

Implications and future defenses 

The high evasion rate means relying solely on IP reputation is no longer sufficient for detecting threats routed through residential proxies. GreyNoise recommends shifting toward behavior‑based detection, including tracking sequential probing from rotating residential IPs, blocking unsupported enterprise protocols from ISP‑facing networks, and persistently fingerprinting devices even when their IP changes. Security teams will need layered analytics—combining session‑level behavior, device profiles, and protocol anomalies—to stay effective as attackers continue to exploit the camouflage of residential‑proxy infrastructure.
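As a rough illustration of that behavior-based approach, the sketch below groups scan events by a per-device fingerprint (for example a TLS fingerprint) so probing activity stays linked even as the source IP rotates. The function name, event format, and thresholds are illustrative assumptions, not GreyNoise's actual methodology.

```python
from collections import defaultdict

def flag_rotating_scanners(events, min_ips=5, min_ports=10):
    """Flag fingerprints that probe many ports from many rotating IPs.

    events: iterable of (fingerprint, source_ip, dest_port) tuples, where
    the fingerprint is any per-device signal that survives IP rotation.
    Thresholds are illustrative, not tuned values.
    """
    ips = defaultdict(set)
    ports = defaultdict(set)
    for fp, ip, port in events:
        ips[fp].add(ip)
        ports[fp].add(port)
    # A single user rarely touches many ports from many addresses;
    # a scanner behind a rotating residential proxy does.
    return {fp for fp in ips
            if len(ips[fp]) >= min_ips and len(ports[fp]) >= min_ports}
```

The point of the design is that the "bad" signal lives in the session behavior, not in the reputation of any single short-lived IP.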

TruffleHog Abused in European Commission Breach That Leaked Data of 30 EU Entities


The Cybersecurity Service for the Union's institutions, bodies, offices and agencies (CERT-EU) has linked the European Commission cloud breach to the TeamPCP gang. The breach exposed data belonging to the Commission and at least 29 other Union entities.

The breach

The Commission disclosed the attack on March 27, when Bleeping Computer confirmed the breach of the European Union’s primary executive body.

Recently, the European Commission informed CERT-EU about the breach, noting that its Cybersecurity Operations team had not been alerted to an API exploit, a possible account compromise, or any malicious network traffic until March 24.

TeamPCP's attack tactic

In March, TeamPCP exploited a compromised AWS API key (obtained in the Trivy supply-chain breach) to gain control of permissions across different Commission AWS accounts.

After that, the gang deployed TruffleHog, an open-source secret scanner, to hunt for additional credentials, then added a new access key to an existing user to evade detection before carrying out further reconnaissance and data theft.
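The persistence trick described here, adding a second access key to an existing IAM user, is detectable. Below is a hedged Python sketch of such an audit over pre-fetched key records; in a real environment the records would come from the AWS IAM API (e.g. the ListAccessKeys action), and the function name and threshold are illustrative assumptions.

```python
from datetime import datetime, timedelta

def find_suspect_keys(key_records, now, max_age_days=7):
    """Flag users that hold more than one active access key where the
    newest key was created recently - the persistence pattern above.

    key_records: dict mapping user name -> list of (created, status)
    pairs, as would be assembled from IAM ListAccessKeys responses.
    """
    suspects = []
    for user, keys in key_records.items():
        active = [created for created, status in keys if status == "Active"]
        # A freshly minted second key on a long-lived user is unusual
        # and worth a human look, even if it is sometimes legitimate.
        if len(active) > 1 and max(active) > now - timedelta(days=max_age_days):
            suspects.append(user)
    return suspects
```

Flagged users are candidates for review, not proof of compromise; key rotations during normal operations produce the same shape briefly.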

In the past, TeamPCP has been known for supply-chain attacks targeting developer package registries and code-hosting platforms such as npm, Docker Hub, PyPI, and GitHub. The gang also attacked the LiteLLM PyPI package in a campaign that affected tens of thousands of devices via its “TeamPCP Cloud Stealer” data-stealing malware.

ShinyHunters' role

Later, the data extortion gang ShinyHunters posted the stolen data on its dark web leak site as a 90 GB archive of documents (around 340 GB uncompressed), including email addresses, contacts, and email content.

According to the CERT-EU analysis, the hackers stole tens of thousands of documents; the leak affects around 42 internal European Commission clients and at least 29 other Union entities.

"The threat actor used the compromised AWS secret to exfiltrate data from the affected cloud environment. The exfiltrated data relates to websites hosted for up to 71 clients of the Europa web hosting service: 42 internal clients of the European Commission, and at least 29 other Union entities,” CERT-EU said. Regarding the dataset, CERT-EU said it also contained “at least 51,992 files related to outbound email communications, totalling 2.22 GB. The majority of these are automated notifications with little to no content. However, 'bounce-back' notifications, which are responses to incoming messages from users, may contain the original user-submitted content, posing a risk of personal data exposure."

The impact

No websites were taken offline or altered as a result of this attack, and no lateral movement to other Commission AWS accounts has been found, according to CERT-EU.

Although it would probably take "a considerable amount of time" to analyze the exfiltrated databases and information, the Commission has informed the appropriate data protection authorities and is in direct contact with the impacted organizations.

After learning that a mobile device management platform used to oversee employees' devices had been compromised, the European Commission revealed another data breach in February.

Hims & Hers Discloses Cyberattack Impacting Customer Support Infrastructure


 

The integrity of digital systems has become inextricably linked to patient trust in an industry where discretion is not only expected but fundamental. Telehealth providers, by design, sit at the intersection of convenience and confidentiality, handling deeply personal disclosures ranging from routine wellness concerns to highly sensitive conditions. 

As these platforms scale rapidly and rely increasingly on third-party services for customer interactions, their security posture extends far beyond their own infrastructure. External integrations, however operationally efficient, introduce a new layer of vulnerability, expanding the attack surface in ways often not apparent until an incident has occurred. 

That risk has now materialized for Hims & Hers, which is notifying customers of a breach involving its customer support environment. The incident originated not in the organization's core medical systems but in its third-party customer service platform, which handles user queries and support tickets - an often overlooked repository of user-submitted information. 

The company initiated a preliminary investigation on February 5, which revealed unauthorized access to support tickets between February 4 and February 7. A comprehensive review of those tickets, concluded on March 3, confirmed that they contained personal information. In a filing with the Office of the California Attorney General, the company disclosed that an unidentified threat actor gained access to "certain tickets sent to our customer service team," affecting a limited number of users. 

The company has not fully disclosed the scope of exposed data, but acknowledges that names, contact information, and additional user-provided details were likely accessed. Some of these details are redacted in the filing. Hims & Hers stated that no medical records or direct doctor-patient communications were compromised. 

Nevertheless, the nature of the exposed data underscores a broader concern for telehealth ecosystems. Support tickets frequently contain contextual clues: symptoms described in plain language, product inquiries tied to specific conditions, or follow-ups that implicitly reveal treatment journeys. 

When a platform offers services spanning hair loss, erectile dysfunction, mental health, skincare, and weight management, even limited identifiers can carry unintended sensitivity. The breach thus highlights a critical reality of healthcare-related digital services: operational information and deeply personal information are far more closely linked than they appear. The full extent of the exposure remains unclear. 

The company has not yet confirmed the number of individuals affected. California's data breach notification framework mandates disclosure when 500 or more residents are involved, a threshold that often signals an event of higher materiality. A company spokesperson, Jake Martin, stated that the intrusion resulted from a social engineering attack, suggesting the attackers manipulated internal personnel to gain unauthorized access rather than exploiting a purely technical vulnerability. 

Despite follow-up inquiries, the company did not provide a granular breakdown of the information accessed, though reporting indicates the compromised dataset primarily consisted of customer names and email addresses. Notably, the organization has not disclosed whether it received direct communication from the threat actors, such as extortion or ransom demands, leaving open the question of the attackers' intent and post-compromise activity.

The ambiguity is indicative of a wider and increasingly familiar threat landscape trend characterized by customer support and ticketing environments emerging as highly valued targets for adversaries motivated by financial gain. 

In addition to being information-rich, these systems are often less fortified than core transactional or clinical systems because they aggregate user-submitted data in less structured formats. This incident also aligns with a growing number of breaches involving similar infrastructure: in 2025, Discord disclosed that a compromise of its customer service ticketing system exposed the sensitive identity documents, including government-issued identifications submitted for age verification, of approximately 70,000 users. 

These cases reflect a critical shift in attacker focus: rather than confronting hardened primary systems directly, adversaries increasingly compromise peripheral service layers, particularly those managed by third parties, as entry points to highly sensitive data. 

Keeping in line with industry practice, Hims & Hers is now providing complimentary credit monitoring to affected customers for a period of 12 months. These measures provide a minimum level of financial oversight, but they do little to mitigate the more immediate and sophisticated risk of targeted social engineering. 

Specifically, the exposure of support ticket data opens the door to highly contextual phishing campaigns, in which threat actors use authentic user interactions, such as prescription-related queries or treatment discussions, to craft messages far more convincing than generic fraud attempts. These tactics succeed by exploiting trust through personalization rather than by directly breaching financial systems. 

The security analyst community has consistently warned that even small amounts of health-related context can be used to weaponize datasets for coercion, fraud, and reputational damage. It is unclear whether such misuse has taken place in this case, but it remains plausible. If sensitive treatment or condition information is linked to identifiable contact information, it can be used in extortion schemes or deceptive outreach campaigns to obtain more information.

It is noteworthy that this emerging threat model aligns with prior Federal Bureau of Investigation advisories, which have documented cases in which adversaries impersonated insurance companies, claims investigators, or healthcare representatives to obtain medical records and financial information. Against this backdrop, affected individuals are encouraged to adopt a more defensive posture that goes beyond passive monitoring. 

In particular, users are advised to be cautious when responding to unsolicited communications referencing specific treatments, past support interactions, or account activity, as well as verifying any requests for information through official, trusted communication channels before engaging with embedded links or attachments in unexpected messages. 

Proactive measures, such as monitoring for data exposure across illicit marketplaces, can further enhance situational awareness. Tools such as Malwarebytes Digital Footprint Scanner, which tracks the circulation of credentials and personal information, may surface downstream misuse early, allowing individuals to act before such information is actively exploited.


Incidents such as this underscore the need for digital health providers to redesign their security strategies beyond traditional system boundaries. As the Hims & Hers case demonstrates, a healthcare platform's resilience increasingly depends on the governance of third-party integrations, employee awareness, and visibility into data flows across support ecosystems. 

In order to protect themselves against social engineering threats in the future, organizations operating in this field will need to adopt a layered security posture integrating continuous monitoring, stricter access controls, and targeted training. 

Users, in turn, must stay cautious and informed, recognizing that even limited data exposures can be exploited by sophisticated attack chains. As the threat landscape evolves, it is evident that safeguarding healthcare data is not limited to clinical systems but extends to every interface that creates, shares, or stores personal information.

AMD Announces Plan to Acquire Intel in Unprecedented Industry Turn

 




Advanced Micro Devices has revealed plans to acquire long-time rival Intel Corporation, marking a dramatic reversal in one of the most enduring rivalries in the semiconductor industry.

The proposed transaction, structured entirely as a stock-based deal, signals a major shift in industry power. Once viewed as the underdog, AMD has now surpassed Intel in market valuation, and the acquisition would further cement that transition.

For over four decades, the relationship between the two companies has been defined by competition, imitation, legal disputes, and strategic overlap. AMD historically operated in Intel’s shadow, often positioning itself as a secondary supplier while attempting to challenge its dominance. In recent years, however, AMD has strengthened its position across multiple computing segments and improved investor confidence, while Intel has faced setbacks.

Intel’s struggles have included delays in manufacturing advancements, inconsistent product execution, and repeated strategic adjustments. These challenges have contributed to a broader shift in market perception, allowing AMD to close the gap and eventually move ahead in key areas.

The idea of AMD acquiring Intel would have seemed highly unlikely just a few years ago, given Intel’s long-standing dominance as the central force in the personal computing ecosystem. The potential merger now reflects how drastically that balance has changed.

If completed, integrating the two companies could present organizational and cultural challenges, given their long history as direct competitors. Leadership from AMD indicated that the combined entity could accelerate product development timelines, streamline user experience, and maintain a level of internal competition despite operating under one structure.

In its response, Intel stated that the agreement could enhance shareholder value while providing its engineering teams with clearer direction and stronger operational support to rebuild competitive product offerings.

Industry analysts are still assessing the broader implications. Historically, Intel’s scale and manufacturing capabilities positioned it at the center of the computing market, while AMD functioned as a challenger that introduced competitive pressure. That dynamic has shifted as AMD expanded its presence in servers, desktops, and mobile computing, while Intel’s recovery efforts remain ongoing.

Several practical questions remain unresolved. These include how branding will be handled, whether both product lines will continue independently, and how regulators will evaluate the consolidation of two primary x86 architecture competitors under a single entity.

Sources familiar with the matter suggest AMD may adopt a structure that retains both brands in the near term. One internal concept reportedly frames Intel as a legacy-focused division, reflecting its historical significance while redefining its position within the organization.

Investor reaction has ranged from surprise to cautious optimism. Some market participants see the potential for operational efficiency and reduced rivalry, while others are concerned that combining the two companies could limit competition in the x86 processor market.

From a regulatory perspective, the deal is likely to face scrutiny due to the potential concentration of market power. The long-standing competition between AMD and Intel has historically driven innovation and pricing balance, and its reduction could reshape industry dynamics.

The announcement comes at a time when the semiconductor sector is undergoing rapid transformation, driven by demand for artificial intelligence, high-performance computing, and evolving global supply chains. Both companies have been investing heavily in these areas, alongside competitors such as NVIDIA Corporation.

At present, the timeline for completion remains subject to regulatory approvals and further review. While the companies have indicated confidence in moving forward, the scale and implications of the deal mean that its outcome will be closely watched across the industry.

GlassWorm Malware Campaign Attacks Developer IDEs, Steals Data


About GlassWorm campaign 

Cybersecurity experts have discovered a new wave of the ongoing GlassWorm campaign, which now uses a Zig-compiled dropper built to secretly compromise every integrated development environment (IDE) on a developer's system. 

The tactic was found in an Open VSX extension called "specstudio.code-wakatime-activity-tracker", which masqueraded as WakaTime, a popular tool that tracks the time programmers spend in their IDE. The extension has since been removed and can no longer be downloaded. 

Attack tactic 

GlassWorm has used the same natively compiled code in previous attacks. Here, however, the binary is not the payload itself: it serves as a covert loader for the GlassWorm dropper, which can silently compromise any other IDEs present on the device. 

The recently discovered Microsoft Visual Studio Code (VS Code) extension is a near-exact replica of the legitimate tool.

If the system is running Apple macOS, the extension installs a universal Mach-O binary called "mac.node"; on Windows computers, it installs a binary called "win.node".

Execution 

These binaries are Node.js native addons: Zig-compiled shared libraries that load straight into Node's runtime and run outside the JavaScript sandbox with complete operating-system-level access.

Once loaded, the binary's main objective is to find every IDE on the system that supports VS Code extensions. This includes not only Microsoft VS Code and VS Code Insiders but also forks like VSCodium and Positron, and AI-powered coding tools such as Cursor and Windsurf.

Malicious code installation 

Once this is achieved, the binary installs an infected VS Code extension (.VSIX) from a hacker-owned GitHub account. The extension, known as “floktokbok.autoimport”, imitates “steoates.autoimport”, an authentic extension with over 5 million downloads on the official Visual Studio Marketplace.

The downloaded .VSIX file is then written to a secondary path and silently deployed into each IDE via the editor's command-line installer. 
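Developers worried about this campaign can check their own editors. The Python sketch below shells out to an editor's CLI (VS Code and its forks support `--list-extensions`) and compares the result against the malicious extension IDs named in this article; the helper names themselves are illustrative.

```python
import subprocess

# Extension IDs reported in this campaign (article-sourced blocklist).
MALICIOUS_IDS = {
    "floktokbok.autoimport",
    "specstudio.code-wakatime-activity-tracker",
}

def flag_extensions(installed, blocklist=MALICIOUS_IDS):
    """Return installed extension IDs that appear on the blocklist."""
    return sorted(set(installed) & blocklist)

def audit_editor(cli="code"):
    """List extensions via an editor's CLI ("code", "codium", "cursor", ...)
    and check them. Returns None if that editor is not installed."""
    try:
        out = subprocess.run([cli, "--list-extensions"],
                             capture_output=True, text=True, check=True)
    except (FileNotFoundError, subprocess.CalledProcessError):
        return None
    return flag_extensions(out.stdout.split())
```

Because the dropper targets every VS Code-compatible editor, a meaningful audit would repeat `audit_editor` for each fork's CLI, not just `code`.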

In the second stage, the VS Code extension works as a dropper that skips deployment on Russian devices, interacts with the Solana blockchain, collects personal data, and deploys a remote access trojan (RAT). In the final stage, the RAT installs a data-stealing Google Chrome extension. 

“The campaign has expanded repeatedly since then, compromising hundreds of projects across GitHub, npm, and VS Code, and most recently delivering a persistent RAT through a fake Chrome extension that logged keystrokes and dumped session cookies. The group keeps iterating, and they just made a meaningful jump,” cybersecurity firm Aikido reported. 

Salesforce Unveils AI-Powered Slack Overhaul with 30 Game-Changing Features

 

Salesforce has unveiled a transformative AI overhaul for its Slack platform, introducing 30 new features designed to elevate it from a mere messaging tool to a comprehensive AI-powered workflow engine. Announced by CEO Marc Benioff at a San Francisco event in late March 2026, this update builds on Slack's acquisition five years ago, which has driven two-and-a-half times revenue growth across a million businesses. The changes position Slack at the heart of Salesforce's AI-centric strategy, aiming to automate repetitive tasks and boost enterprise productivity. 

Central to the makeover is an enhanced Slackbot, now boasting agentic capabilities far beyond basic queries. Following a January 2026 update that enabled it to draft emails, schedule meetings, and scan inboxes, the new features introduce reusable AI skills. Users can define custom tasks—like generating a project budget—that Slackbot executes across contexts by pulling data from channels, connected apps, and external sources. These skills come pre-built in a library but allow personalization, slashing manual effort dramatically. 

For instance, commanding Slackbot to "create a budget for the team retreat" triggers it to aggregate expenses from Slack threads, integrate CRM data, draft a plan, and auto-schedule a review meeting with relevant stakeholders based on their roles. This seamless automation extends to Slackbot acting as an MCP client, interfacing with external tools like Salesforce's Agentforce platform from 2024. It routes queries intelligently to the optimal agent or app, minimizing human oversight. 

Meeting management sees significant upgrades too, with Slackbot now transcribing huddles, generating summaries, and extracting action items. Missed details? A quick ask delivers a personalized recap, including your assigned tasks. The bot's reach expands beyond Slack, monitoring desktop activities such as calendars, deals, conversations, and habits to offer proactive suggestions—like drafting follow-ups. Privacy controls let users tweak permissions, ensuring data access aligns with comfort levels. 

These 30 features, rolling out gradually over coming months, underscore Salesforce's vision to embed AI deeply into daily work. Early tests report up to 20 hours weekly productivity gains, powered partly by models like Anthropic’s Claude. Slack evolves into a versatile hub where communication, automation, and decision-making converge, potentially redefining enterprise tools. As businesses grapple with AI integration, this Slack revamp highlights both promise and challenges—like dependency on vendor ecosystems and data governance. For teams already in Salesforce's orbit, it promises efficiency; for others, it signals a competitive push in AI-driven collaboration. The update arrives amid rapid tech shifts, urging companies to adapt swiftly.

Windows 11 Faces Rising Threats from AI Malware and Critical Security Flaws

 

Pressure on Windows 11 security grows - driven by emerging AI-powered malware alongside unpatched flaws threatening companies and everyday users alike. The pace of change in digital threats becomes clearer through recent incidents, especially within large organizational networks. DeepLoad sits at the heart of recent cybersecurity worries. This particular threat skips typical download tactics altogether. 

Instead of dropping files, it operates without any - earning its "fileless" label. Users themselves become part of the breach process. By following deceptive prompts, they run benign-looking instructions in system utilities such as Command Prompt. Once executed, those inputs quietly trigger malicious activity behind the scenes. Since nothing gets written to disk, standard virus scanners often miss what's happening. 

Detection becomes difficult when there’s no file footprint to flag. After running, the malware stays active by embedding itself into system processes while reaching out to remote servers through standard Windows tools. Because it targets confidential information like passwords, its presence poses serious risks inside business environments. What makes it harder to detect is how it blends malicious activity with normal operating routines. Security teams may overlook it during routine checks due to this camouflage technique. 
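Because there is no file to scan, defenders typically hunt this pattern by inspecting process command lines instead. The Python sketch below is a deliberately simplified heuristic: the indicator patterns are illustrative assumptions, and real endpoint detection relies on far richer telemetry than string matching.

```python
import re

# Illustrative indicators of command lines abusing built-in Windows tools;
# each pattern targets a known living-off-the-land technique.
SUSPICIOUS_PATTERNS = [
    re.compile(r"powershell.*-enc(odedcommand)?\b", re.I),  # base64-encoded payload
    re.compile(r"mshta\s+https?://", re.I),                 # remote HTA execution
    re.compile(r"rundll32.*javascript:", re.I),             # script abuse via rundll32
]

def score_command_line(cmdline: str) -> int:
    """Count how many suspicious indicators a process command line matches."""
    return sum(bool(p.search(cmdline)) for p in SUSPICIOUS_PATTERNS)
```

A nonzero score does not prove infection; administrators legitimately use these tools, which is exactly the camouflage the article describes.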

Artificial intelligence makes existing threats more dangerous. Because AI-driven malware adjusts on the fly, it slips past standard detection systems. As a result, security tools struggle to keep up. With each change the malware makes, response times shrink. The gap between finding a flaw and facing an attack grows narrower by the hour. Meanwhile, security patches have been rolled out by Microsoft to fix numerous high-risk weaknesses. 

Affected are various business-focused builds of Windows 11 - both recent iterations and extended support variants. One major concern involves defects within the Routing and Remote Access Service (RRAS), where exploitation might let threat actors run harmful software from a distance. Full administrative access to compromised machines becomes possible through these gaps. Not just isolated systems feel the impact. 

In the most recent Patch Tuesday, Microsoft fixed over eighty security gaps in its programs - problems hiding even inside tools such as Excel and Outlook. Opening an attachment wasn’t always needed; sometimes just viewing it could activate harmful code, showing how dangerous these weaknesses really are. Experts warn that even emerging AI tools, such as Microsoft Copilot, could introduce new risks if not properly secured, particularly when sensitive data is handled automatically. 

Though companies face the most attacks, regular individuals can still be affected. When new patches arrive, it helps to apply them without delay - timing often matters more than assumed. Opening unknown scripts carries risk; many breaches begin there. Unexpected requests, especially those demanding immediate steps, deserve extra skepticism. 

Change is shaping a new kind of digital danger - cleverer, slyer, built to exploit how people act just as much as system flaws. One moment it mimics trust; the next, it slips through unnoticed.

Hidden Android Malware Capable of Controlling Devices Raises Security Concerns


 

Smartphones have become increasingly important as repositories of identity, finances, and daily communications. The identification of a new Android malware strain, recently flagged by the National Cybercrime Threat Analytics Unit and ominously dubbed "God Mode", is indicative of a worrying escalation in mobile security threats. 

As opposed to conventional scams that employ visible deception or user interaction, this variant is designed to persist silently, enabling attackers to gain an unsettling degree of control without prompting immediate suspicion. 

The name of the program is not accidental; it reflects its ability to assume a wide range of permissions and surveillance capabilities once deployed, reducing users to the position of unaware bystanders. Notably, this development coincides with an increase in sophisticated malware campaigns throughout India, where cybercriminals increasingly exploit the perceived legitimacy of digital services by mimicking official government platforms. 

Often deployed through widely used messaging channels, these operations take advantage of urgency and limited verification by utilizing carefully orchestrated social engineering tactics, resulting in a seamless illusion of authenticity that has already led to widespread identity theft and financial fraud. In view of these concerns, researchers have identified a threat class that is more deeply ingrained into the Android operating system.

The Oblivion Remote Access Trojan, observed recently, signals the shift from surface-level compromise to systemic invasion. Based on reports, the malware is being distributed through subscription-based distribution models and is designed to operate across a broad range of Android devices running versions 8 through 16.

According to Certo's analysis, the toolkit is not simply a standalone payload, but rather a structured package with a configurable builder that enables operators to create malicious applications resembling legitimate ones. Complementing it, a dropper mechanism mimics routine system update prompts, a tactic that blends seamlessly with user expectations and greatly increases the likelihood of execution. 

Kaspersky has found parallel evidence linking this activity to a strain they call "Keenadu," discovered during deeper investigations into firmware-level threats that resembled the earlier Triada threat. It is noteworthy that this variant is persistent: instead of being installed solely by the user, it has been observed embedded within the device firmware itself, indicating a compromise within the supply chain. 

The researchers claim that a tainted dependency introduced during firmware development allowed the malware to be integrated into the core system environment, where it persists. Upon attaching to Android’s Zygote process, the malicious code replicates across all running applications on the device, resulting in widespread and difficult-to-detect control. Because affected devices may reach end users already compromised, manufacturers may be unaware of the intrusion before their products are distributed, which has significant consequences. 

There is a deceptively simple entry point into the infection chain associated with such threats: a link or application file is delivered via messaging platforms under the guise of legitimate notifications, often posing as bank alerts, service updates, or time-sensitive announcements. As soon as the application is executed, it strategically requests access to the Accessibility Service, an Android feature intended to make devices more usable for people with disabilities. 

In the context described above, this permission is systematically abused to establish extensive control over device operations. With this level of access, the malware can monitor on-screen activity, intercept text communications, and perform autonomous user interactions, including capturing one-time passwords, navigating applications, and authorizing transactions without explicit user awareness. 

In most observed cases, the initial payload is distributed as an APK file via widely used communication channels such as instant messaging platforms, where it appears as a routine application or system update. Because of this outward appearance, the malware often escapes suspicion and is more likely to succeed during installation.

The malicious process embeds itself within the device and is designed to maintain persistence and stealth. By avoiding visibility within the standard application interface, it evades casual detection while silently operating in the background. The degree of risk introduced by this level of compromise is substantial. 

By accessing sensitive inputs such as OTPs, personal messages, and contact databases, the malware effectively bypasses conventional authentication procedures. Further, its ability to initiate or redirect calls, overlay fraudulent interfaces over legitimate banking applications, and simulate genuine user behavior enables sophisticated financial exploitation and data exfiltration. 

Additionally, the threat has low visibility; the lack of overt indicators, combined with its ability to avoid basic scrutiny, makes it difficult for users to become aware of a breach until tangible damage, financial or otherwise, has already occurred. Because the vulnerability does not uniformly impact all Android devices, assessing exposure becomes an important first step against this backdrop. 

According to current findings, the risk is primarily confined to smartphones equipped with MediaTek system-on-chip architectures, while devices powered by Qualcomm Snapdragon or Google Tensor appear unaffected. 

Users can verify their device's status by checking its exact model in system settings and referencing its hardware specifications in manufacturer documentation. If a MediaTek chipset is identified, it becomes more urgent to ensure that the latest security patches are applied as soon as possible. 

While a fix has reportedly been issued at the chipset level, its effectiveness depends on timely distribution by individual device manufacturers, making prompt system updates a decisive factor in preventing exposure. Beyond identification and patching, a broader defensive posture requires a combination of technical safeguards and user discipline. 

Security applications cannot directly address firmware-level vulnerabilities, but they still play an important role in detecting secondary payloads, such as spyware or malicious applications, which may be deployed following a compromise. It is also important to minimize sensitive data stored locally on devices, particularly credentials, recovery keys, and financial information that could be exposed if an attacker gains access. This case also highlights the importance of physical security, as certain exploit vectors may require direct device access, making unattended or improperly handled devices potentially vulnerable. 

Additionally, complementary measures such as robust screen locks, shorter auto-lock intervals, and multi-factor authentication across critical accounts add essential layers of resistance against unauthorised activity. Encrypted password managers reduce credential exposure, while device-level controls such as USB-restricted mode, where available, limit data transfer while a device is locked. 

These measures do not remove the underlying vulnerability, but they establish a layered security framework that significantly reduces the likelihood and impact of real-world exploitation. Such deeply embedded Android threats highlight a significant shift in the mobile security landscape, where risks are no longer restricted to user-level interactions but extend to the underlying architecture of the device itself. 

As this technology evolves, users and manufacturers alike need to remain vigilant and informed, emphasizing proactive security hygiene, timely software maintenance, and careful examination of digital interactions. As threat actors continue to refine their methods, resilience will be determined not by any single safeguard but by layered, adaptive defense strategies that anticipate compromise and limit its impact.

Microsoft Releases AI Upgrades, Launches Copilot Cowork to Early Access Customers


In an effort to enhance its AI offering and increase adoption, Microsoft (MSFT.O) recently introduced new features in its Copilot research assistant that would enable users to employ various AI models concurrently within the same workflow.

Instead of relying on a single model, Copilot's Researcher agent can now pull outputs from both OpenAI's GPT and Anthropic's Claude models for each response, thanks to a new feature called "Critique."

According to Microsoft, Claude will check the quality and correctness of the response before GPT provides it to the user. In the future, the company hopes to make that workflow bidirectional so that GPT may also evaluate Claude's responses.
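Microsoft has not published how this workflow is implemented; as a rough conceptual sketch, a generate-then-review pipeline can be expressed with stand-in functions (every name here is a hypothetical placeholder, not a real API):

```python
from typing import Callable

def critique_pipeline(prompt: str,
                      generate: Callable[[str], str],
                      review: Callable[[str, str], str]) -> str:
    """Draft with one model, review with another, and only surface the
    draft when the reviewer approves it. The callables are placeholders;
    the real Copilot "Critique" workflow is not public."""
    draft = generate(prompt)
    verdict = review(prompt, draft)
    return draft if verdict == "approve" else f"[needs revision] {draft}"

# Toy stand-ins that illustrate the control flow, not real model calls.
gpt_stub = lambda prompt: f"answer to: {prompt}"
claude_stub = lambda prompt, draft: "approve" if prompt in draft else "reject"
```

The design point is separation of duties: the generating model never grades its own work, which is one common way teams try to contain hallucinations.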

"Having different models from ​different vendors in Copilot is highly attractive - but we're taking this to the next level, where customers actually get the benefits of the models working together," Nicole Herskowitz, VP of Copilot and  Microsoft, said to Reuters. 

The multi-model strategy will assist in increasing productivity and quality for customers by accelerating user workflow, controlling AI hallucinations, which occur when systems give incorrect information, and producing more dependable outputs.

Additionally, Microsoft is introducing a feature called "Council" that will let users compare results from various AI models side by side. The updates coincide with Microsoft expanding access to its new Copilot Cowork agentic AI tool for members of its "Frontier" program, which gives users early access to some of its most recent AI innovations.

According to Jared Spataro, who leads Microsoft's AI-at-Work efforts, “We work only in a cloud environment, and we work only on behalf of the user. So you know exactly what information it (Copilot Cowork) has access to.”

On Monday, the company's stock increased by almost 1%. However, as investor confidence in AI declines, the stock is poised for its worst quarter since the global financial crisis of 2008, with a nearly 25% decline.

Microsoft capitalized on the increasing demand for autonomous AI agents earlier this month by releasing Copilot Cowork, a solution based on Anthropic's popular Claude Cowork product, in testing mode.

In the face of fierce competition from rivals like Google (GOOGL.O) and its Gemini assistant, as well as autonomous agents like Claude Cowork, the Windows maker has been rushing to enhance its Copilot assistant to promote greater usage.

Quantum Computing Could Threaten Bitcoin Security Sooner Than Expected, Study Finds

 



New research suggests the cryptocurrency industry may have less time than anticipated to prepare for the risks posed by quantum computing, with potential implications for Bitcoin, Ethereum, and other major digital assets.

A whitepaper released on March 31 by researchers at Google indicates that breaking the cryptographic systems securing these networks may require fewer than 500,000 physical qubits on a superconducting quantum computer. This marks a sharp reduction from earlier estimates, which placed the requirement in the millions.

The study brings together contributors from both academia and industry, including Justin Drake of the Ethereum Foundation and Dan Boneh, alongside Google Quantum AI researchers led by Ryan Babbush and Hartmut Neven. The research was also shared with U.S. government agencies prior to publication, with input from organizations such as Coinbase and the Ethereum Foundation.

At present, no quantum system is capable of carrying out such an attack. Google’s most advanced processor, Willow, operates with 105 qubits. However, researchers warn that the gap between current hardware and attack-capable machines is narrowing. Drake has estimated at least a 10% probability that a quantum computer could extract a private key from a public key by 2032.

The concern centers on how cryptocurrencies are secured. Bitcoin relies on a mathematical problem known as the Elliptic Curve Discrete Logarithm Problem, which is considered practically unsolvable using classical computers. However, Peter Shor demonstrated that quantum algorithms could solve this problem far more efficiently, potentially allowing attackers to recover private keys, forge signatures, and access funds.
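The Elliptic Curve Discrete Logarithm Problem is the elliptic-curve analogue of a simpler question: given g, p, and h = g^x mod p, find x. A toy brute-force sketch shows why classical attackers are stuck (the tiny modular-arithmetic group here is a stand-in for the vastly larger curve group Bitcoin uses):

```python
def brute_force_dlog(g: int, h: int, p: int) -> int:
    """Find x with g**x % p == h by exhaustive search. Classical
    attackers can do little better than this kind of search, scaled to
    astronomically large groups; Shor's algorithm finds x in polynomial
    time on a quantum computer."""
    acc = 1
    for x in range(p):
        if acc == h:
            return x
        acc = (acc * g) % p
    raise ValueError("no discrete log found")

# Toy demo: recover the "private key" x = 13 from the public value 5**13 % 47.
assert brute_force_dlog(5, pow(5, 13, 47), 47) == 13
```

With a 256-bit group, the loop above would need on the order of 2^256 iterations, which is why the problem is considered classically unsolvable.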

Importantly, this threat does not extend to Bitcoin mining, which relies on the SHA-256 algorithm. Experts suggest that using quantum computing to meaningfully disrupt mining remains decades away. Instead, the vulnerability lies in signature schemes such as ECDSA and Schnorr, both based on the secp256k1 curve.

The research outlines three potential attack scenarios. “On-spend” attacks target transactions in progress, where an attacker could intercept a transaction, derive the private key, and submit a fraudulent replacement before confirmation. With Bitcoin’s average block time of 10 minutes, the study estimates such an attack could be executed in roughly nine minutes using optimized quantum systems, with parallel processing increasing success rates. Faster blockchains such as Ethereum and Solana offer narrower windows but are not entirely immune.
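The on-spend scenario is effectively a race: key extraction must finish inside the confirmation window, and running several quantum machines in parallel raises the odds that at least one attempt does. A small probability sketch of that compounding effect (the per-run success probability below is an invented illustration, not a figure from the paper):

```python
def on_spend_success_prob(p_single: float, parallel_runs: int) -> float:
    """Chance that at least one of n independent extraction attempts
    beats the confirmation window: 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p_single) ** parallel_runs

# With ~9 minutes of quantum work against a ~10-minute block interval,
# each attempt has a thin margin; parallelism compounds modest per-run odds.
single = 0.3                                   # assumed per-run success probability
combined = on_spend_success_prob(single, 4)    # roughly 0.76
```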

“At-rest” attacks focus on wallets with already exposed public keys, such as reused or inactive addresses, where attackers have significantly more time. A third category, “on-setup” attacks, involves exploiting protocol-level parameters. While Bitcoin appears resistant to this method, certain Ethereum features and privacy tools like Tornado Cash may face higher exposure.

Technically, the researchers developed quantum circuits requiring fewer than 1,500 logical qubits and tens of millions of computational operations, translating to under 500,000 physical qubits under current assumptions. This is a substantial improvement over earlier estimates, such as a 2023 study that suggested around 9 million qubits would be needed. More optimistic models could reduce this further, though they depend on hardware capabilities not yet demonstrated.
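The jump from roughly 1,500 logical qubits to roughly 500,000 physical ones comes from error-correction overhead. Under a surface-code assumption of about 2d² physical qubits per logical qubit, a code distance around 13 reproduces the paper's scale (both the distance and the overhead formula are illustrative assumptions, not values taken from the study):

```python
def physical_qubits(logical_qubits: int, code_distance: int) -> int:
    """Rough surface-code estimate: about 2 * d**2 physical qubits
    (data plus ancilla) per logical qubit."""
    return logical_qubits * 2 * code_distance ** 2

# 1,500 logical qubits at distance 13 land near the 500,000 mark.
estimate = physical_qubits(1500, 13)   # 507,000
```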

In an unusual move, the team did not publish the full attack design. Instead, they used a zero-knowledge proof generated through the SP1 zero-knowledge virtual machine to validate their findings without exposing sensitive details. This approach, rarely used in quantum research, allows independent verification while limiting misuse.

The findings arrive as both industry and governments begin preparing for a post-quantum future. The National Security Agency has called for quantum-resistant systems by 2030, while Google has set a 2029 target for transitioning its own infrastructure. Ethereum has been actively working toward similar goals, aiming for a full migration within the same timeframe. Bitcoin, however, faces slower progress due to its decentralized governance model, where major upgrades can take years to implement.

Early mitigation efforts are underway. A recent Bitcoin proposal introduces new address formats designed to obscure public keys and support future quantum-resistant signatures. However, a full transition away from current cryptographic systems has not yet been finalized.

For now, users are advised to take precautionary steps. Moving funds to new addresses, avoiding address reuse, and monitoring updates from wallet providers can reduce exposure, particularly for long-term holdings. While the threat is not immediate, researchers emphasize that preparation must begin well in advance, as advances in quantum computing continue to accelerate.

Anthropic's Claude Code Leak: 500K Lines Exposed

 

On March 31, 2026, Anthropic, the safety-focused AI company behind Claude, accidentally leaked over 500,000 lines of proprietary source code for its Claude Code tool through a public npm package update. This incident, the second such breach in a year, exposed nearly 2,000 TypeScript files via a mistakenly included debugging file in version 2.1.88, which linked to a publicly accessible zip archive on Anthropic's Cloudflare storage. Security researcher Chaofan Shou quickly spotted the error, sparking rapid mirroring on GitHub, where repositories amassed thousands of stars before takedowns. 

The leak revealed Claude Code's full architecture, including 44 feature flags for unreleased capabilities like a "persistent assistant" that runs in the background even when users are inactive. Other hidden gems included session review for performance improvement across conversations, remote control from mobile devices, and a roadmap toward longer autonomous tasks, enhanced memory, and multi-agent collaboration. Developers also uncovered internal tools, prompts, and even a "pet system" codenamed Buddy with species and rarity tiers, hinting at gamified enterprise features. 

Anthropic swiftly responded, calling it "human error" in a release packaging issue, not a security breach, with no sensitive data exposed. The company issued over 8,000 DMCA takedown requests to platforms like GitHub, removing thousands of forks within days. Claude Code creator Boris Cherny confirmed a skipped manual deploy step caused the mishap, and Anthropic pledged process improvements to prevent recurrence. 

This incident underscores vulnerabilities in AI firms' deployment pipelines, especially for a lab positioning itself as security-conscious amid IPO preparations. Competitors now gain insights into production-grade AI coding agents, potentially accelerating their own developments in agent orchestration and tools. While unlikely to derail Anthropic's $340 billion valuation, it highlights how securing AI systems rivals defending against AI-powered threats. 

Ultimately, the Claude Code leak serves as a stark reminder for the AI industry to fortify internal safeguards as innovations race ahead. It boosts hype around Anthropic's capabilities while exposing the human element in high-stakes tech releases. As external developers reverse-engineer remnants, the focus shifts to ethical use and robust verification in open-source ecosystems.

Axios Supply Chain Attack Exposes npm Security Gaps with Token-Based Compromise

 

A breach in the Axios library - one of many relied upon in modern web development - has exposed flaws that linger beneath surface-level fixes. Through stolen access, hackers slipped harmful updates into what users assumed was safe code. This event underscores how fragile trust can be, even when systems claim stronger defenses. Progress in verifying packages and securing logins appears incomplete, given such exploits still succeed. Confidence in tools like those hosted on npm remains shaken by failures that feel both avoidable and familiar. 

Reports from Huntress and Wiz show that hackers accessed a lead developer's long-lived npm token. Through this entry point, altered builds of Axios emerged - versions laced with hidden code deploying a cross-platform remote-access tool. Not limited to one environment, the harmful update reached machines running macOS, Windows, or Linux. Lasting just under three hours, the rogue releases stayed active online until taken down. 

Axios ranks among the top tools in JavaScript, downloaded more than a hundred million times each week and found in roughly eight out of ten cloud setups. Moments after the tainted update went live, malware started spreading fast; Huntress later verified infection on 135 machines while the vulnerability was active. Hidden within a third-party addition, plain-crypto-js slipped into Axios's environment without touching its main codebase - not through direct changes, but via a concealed payload activated after installation. 

Running quietly once set up, it triggered deployment of a remote access tool on developers’ systems. Built to avoid notice, the malicious code erased itself under certain conditions. Altered components were restored automatically, masking traces left behind. One reason this breach stands out lies in its method - evading defenses thought secure. Even after adopting standard safeguards like OIDC for verified publishing and robust supply chain models, outdated tools remained active. 
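Because the payload ran via an install-time hook rather than the library's own code, one practical check is auditing dependencies for lifecycle scripts before installing. A minimal sketch of that audit (a complement to, not a substitute for, tools like `npm audit` and lockfile review):

```python
import json

# Lifecycle hooks that npm runs automatically at install time; they are
# a common vehicle for supply-chain payloads.
INSTALL_HOOKS = {"preinstall", "install", "postinstall"}

def flag_install_scripts(package_json_text: str) -> dict:
    """Return install-time lifecycle scripts declared in a package.json
    document, for manual review."""
    manifest = json.loads(package_json_text)
    scripts = manifest.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in INSTALL_HOOKS}
```

Running installs with scripts disabled (`npm install --ignore-scripts`) and reviewing anything this kind of check surfaces is a common hardening step.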

A leftover npm access key opened the door despite stronger systems being in place. Where two login paths existed, preference went to the original token, rendering recent upgrades useless under that condition. This is now the third significant breach of the npm supply chain in just a few months, after events such as the Shai-Hulud incident. 

Each time, hackers used compromised maintainer login details to gain access, revealing a recurring weakness across the system. Though security professionals highlight benefits of measures like multi-factor verification and origin monitoring, these fail to block every threat when login data is exposed. 

With growing pressure, companies must examine third-party dependencies, apply tighter rules on software setup, and phase out outdated access methods. When trust rests on open-source tools, weaknesses in how credentials are handled can still invite breaches. A single event shows flaws aren’t always in the code itself - sometimes they hide where access is managed.

Arbitrary File Write Bug in Gigabyte Control Center Sparks Security Alerts


 

It is becoming increasingly apparent that trusted system utilities are embedded with persistent security risks, as GIGABYTE Control Center, a widely deployed Windows-based management tool that is packaged with select devices, has been put under scrutiny following the disclosure of a critical security flaw. 

Inadvertently, the software designed to give users centralized control over essential hardware functions exposed a potential pathway for threat actors to alter system behavior on a fundamental level. Although the vulnerability has since been addressed, it could potentially be exploited to execute unauthorized code, write arbitrary files, and disrupt system availability through denial-of-service. 

Since the utility is deeply entwined with device operations and installed by default on GIGABYTE systems, the vulnerability has significant implications for individual users as well as enterprises, making timely patch deployment and system hardening increasingly important. The affected software is GIGABYTE Control Center, which comes pre-installed on GIGABYTE laptops and supported motherboards, serving as a central point of configuration and oversight for the entire system.

Integrated with Windows, it provides a comprehensive set of operational controls for monitoring and managing hardware, adjusting thermal and fan curves, optimizing performance, customizing RGB lighting, and installing driver and firmware updates. 

The broad access to underlying system functions, which is intended to enhance user convenience, amplifies the potential impact of any vulnerabilities in the system. There is a particular concern regarding an integrated "pairing" feature designed to facilitate communication between host systems and external devices or services over a network. 

When enabled in versions of Control Center up to and including 25.07.21.01, this function significantly expands the application's interaction surface. Thus, it introduces a vulnerability that can be exploited under specific circumstances, increasing the attack surface of affected systems by creating a network-exposed vector. It is this feature that makes it an important focal point when assessing the overall risk profile associated with the vulnerability because it is linked to elevated system privileges and network-enabled communication. 

Additional technical analysis ties the issue to CVE-2026-4415, rated 9.2 under the CVSS 4.0 framework and identified within the pairing mechanism of GIGABYTE Control Center versions 25.07.21.01 and earlier. The flaw stems from insufficient safeguards in how the application handles network-initiated interactions; David Sprüngli is credited with discovering the vulnerability. 

The pairing feature provides an opportunity for unauthenticated remote actors to write arbitrary files across the system's file structure when it is active. With the utility's elevated privileges and close integration with system processes, such access is potentially useful for the execution of remote code, escalation of privileges, or disruption of system availability. 
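An arbitrary-file-write flaw of this kind usually means attacker-supplied paths escape the directory a feature intended to write into. The standard defensive guard is resolving the final path and checking containment; a minimal Python sketch of the idea (a generic illustration, not GIGABYTE's actual code):

```python
import os

def is_contained(base_dir: str, requested_path: str) -> bool:
    """Reject writes that resolve outside base_dir, e.g. via '..'
    components or an absolute path smuggled in as a filename."""
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, requested_path))
    return os.path.commonpath([base, target]) == base
```

Resolving with `realpath` before comparing is the key step: it collapses `..` segments and symlinks so the check sees the path the OS would actually write to.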

A particularly concerning aspect of the vulnerability is its ability to bypass conventional trust boundaries, effectively turning a legitimate management feature into a potential attack vector. GIGABYTE has released a new version of Control Center, 25.12.10.01, which introduces corrections across multiple functional layers, including download handling routines, message validation processes, and command-level encryption. In combination, these enhancements mitigate the risks associated with the exposed pairing interface. 

According to the company's advisory, users should update immediately and obtain the patched version only through official software distribution channels, reducing the risk of compromised or tampered installers. Such incidents reinforce the importance of treating vendor-supplied utilities the same way we'd treat any externally sourced software, especially when they run with elevated privileges and have network access. 

Organizations and individual users alike should adopt a proactive patch management strategy, audit pre-installed applications regularly, and disable features not specifically required, such as remote pairing. Implementing multiple security controls, including endpoint monitoring, network segmentation, and strict access policies, can significantly reduce exposure to similar threats. 

As the integration of hardware ecosystems and software-driven management layers grows increasingly complex, maintaining vigilance over these trusted components is crucial to preserving the integrity of the overall system.

New Chaos Malware Variant Expands to Cloud Targets, Introduces Proxy Capability

 



A newly observed version of the Chaos malware is now targeting poorly secured cloud environments, indicating a defining shift in how this threat is being deployed and scaled.

According to analysis by Darktrace, the malware is increasingly exploiting misconfigured cloud systems, moving beyond its earlier focus on routers and edge devices. This change suggests that attackers are adapting to the growing reliance on cloud infrastructure, where configuration errors can expose critical services.

Chaos was first identified in September 2022 by Lumen Black Lotus Labs. At the time, it was described as a cross-platform threat capable of infecting both Windows and Linux machines. Its functionality included executing remote shell commands, deploying additional malicious modules, spreading across systems by brute-forcing SSH credentials, mining cryptocurrency, and launching distributed denial-of-service attacks using protocols such as HTTP, TLS, TCP, UDP, and WebSocket.

Researchers believe Chaos developed from an earlier DDoS-focused malware strain known as Kaiji, which specifically targeted exposed Docker instances. While the exact operators behind Chaos remain unidentified, the presence of Chinese-language elements in the code and the use of infrastructure linked to China suggest a possible connection to threat actors from that region.

Darktrace detected the latest variant within its honeypot network, specifically on a deliberately misconfigured Hadoop deployment that allowed remote code execution. The attack began with an HTTP request sent to the Hadoop service to initiate the creation of a new application.

That application contained a sequence of shell commands designed to download a Chaos binary from an attacker-controlled domain, identified as “pan.tenire[.]com.” The commands then modified the file’s permissions using “chmod 777,” allowing full access to all users, before executing the binary and deleting it from the system to reduce forensic evidence.
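The command sequence described above (fetch, loosen permissions, execute, delete) is distinctive enough to flag in command logs. A simplified ordered-pattern matcher sketches the idea (the regexes are illustrations, not production IOC rules):

```python
import re

# Ordered stages of the download-and-execute chain; each must appear
# after the previous one within a single logged command string.
STAGES = [
    re.compile(r"\b(wget|curl)\b.*https?://"),  # fetch a remote binary
    re.compile(r"\bchmod\s+777\b"),             # grant full access to all users
    re.compile(r"\brm\s"),                      # delete it to hinder forensics
]

def matches_chain(command: str) -> bool:
    """True when every stage appears, in order, in the command string."""
    pos = 0
    for stage in STAGES:
        m = stage.search(command, pos)
        if m is None:
            return False
        pos = m.end()
    return True
```

Requiring the stages in order (rather than matching each anywhere) cuts down on false positives from legitimate scripts that happen to use one of the commands.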

Notably, the same domain had previously been linked to a phishing operation conducted by the cybercrime group Silver Fox. That campaign, referred to as Operation Silk Lure by Seqrite Labs in October 2025, was used to distribute decoy documents and ValleyRAT malware, suggesting infrastructure reuse across campaigns.

The newly identified sample is a 64-bit ELF binary that has been reworked and updated. While it retains much of its original functionality, several features have been removed. In particular, capabilities for spreading via SSH and exploiting router vulnerabilities are no longer present.

In their place, the malware now incorporates a SOCKS proxy feature. This allows compromised systems to relay network traffic, effectively masking the origin of malicious activity and making detection and mitigation more difficult for defenders.
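A SOCKS relay turns each infected host into a generic traffic forwarder, and the protocol itself is tiny. For a sense of what such traffic looks like on the wire, here is a sketch that parses the initial SOCKS5 client greeting as defined in RFC 1928:

```python
def parse_socks5_greeting(data: bytes):
    """Parse the SOCKS5 client greeting: a version byte (0x05), a
    method count, then that many authentication-method bytes.
    Returns (version, methods); raises ValueError on malformed input."""
    if len(data) < 2:
        raise ValueError("truncated greeting")
    version, nmethods = data[0], data[1]
    if version != 5:
        raise ValueError("not SOCKS5")
    methods = list(data[2:2 + nmethods])
    if len(methods) != nmethods:
        raise ValueError("truncated method list")
    return version, methods

# b"\x05\x01\x00": SOCKS5, one method offered, 0x00 = no authentication.
```

Defenders sometimes watch for exactly this handshake byte pattern on unexpected ports as a sign a host has been turned into a proxy node.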

Darktrace also noted that components previously associated with Kaiji have been modified, indicating that the malware has likely been rewritten or significantly refactored rather than simply reused.

The addition of proxy functionality points to a broader monetization strategy. Beyond cryptocurrency mining and DDoS-for-hire operations, attackers may now leverage infected systems to provide anonymized traffic routing or other illicit services, reflecting increasing competition within cybercriminal ecosystems.

This shift aligns with a wider trend observed in other botnets, such as AISURU, where proxy services are becoming a central feature. As a result, the threat infrastructure is expanding beyond traditional service disruption to include more complex abuse scenarios.

Security experts emphasize that misconfigured cloud services, including platforms like Hadoop and Docker, remain a critical risk factor. Without proper access controls, attackers can exploit these systems to gain initial entry and deploy malware with minimal resistance.

The continued evolution of Chaos underlines how threat actors are persistently enhancing their tools to expand botnet capabilities. It also reinforces the need for continuous security monitoring, as changes in how APIs and services function may not always appear as direct vulnerabilities but can exponentially increase exposure.

Organizations are advised to regularly audit configurations, restrict unnecessary access, and monitor for unusual behavior to mitigate the risks posed by increasingly adaptive malware threats.

Apple Reinforces Digital Privacy for Users Without Restricting Law Enforcement Oversight


 

The company has long positioned its privacy architecture as a defining aspect of its ecosystem, marketing privacy not merely as a feature but as a fundamental right built into its products. However, recent disclosures emerging from US legal proceedings suggest that these privacy boundaries are neither absolute nor impermeable, revealing a more nuanced reality.

Under scrutiny is the "Hide My Email" function, a tool designed to hide users' real email addresses from third-party apps and websites. Despite its success in minimizing commercial tracking and unsolicited exposure, recent legal revelations indicate that this layer of anonymity can be reversed under lawful authority.

Moreover, the development highlights the important distinction between consumer privacy assurances and judicial obligations imposed on technology companies, reframing the feature as conditional anonymity: a controlled filter operating within clearly defined legal limits rather than a cloak of invisibility.

Subsequent disclosures from investigative proceedings provide additional insight into how this conditional anonymity works in practice. Apple received a request from federal authorities, including the Federal Bureau of Investigation, for subscriber information related to a threatening communication directed at Alexis Wilkins, who has been reported to be associated with FBI Director Kash Patel.

According to the warrant application, Apple was able to correlate the anonymized "Hide My Email" alias to a specific user account, providing subscriber identification details along with a wider dataset that contained over a hundred additional aliases created under the same profile. Homeland Security Investigations took a similar approach in an alleged identity fraud investigation, in which multiple masked email identities were linked back to underlying Apple accounts, allowing investigators to consolidate disparate digital footprints into a single attribution framework.

Collectively, these examples reveal an important structural aspect of Apple's ecosystem: while certain layers of iCloud services are protected by end-to-end encryption, a portion of account and communication information remains accessible under valid legal process. Subscriber information, including names, billing credentials, and associated identifiers, sits within a compliance boundary rather than a cryptographic one, and is not protected by end-to-end encryption.

The delineation reinforces an issue of broader significance to the industry: conventional email infrastructure was built without pervasive encryption safeguards, making its users inherently subject to lawful interception. It is against this backdrop that privacy-conscious individuals are increasingly turning to platforms such as Signal, which offer default end-to-end encryption and minimal data retention.

Apple has not responded directly to these developments, although the disclosures have prompted a review of how privacy assurances are communicated and understood in environments that are both technologically advanced and legally constrained. The disclosures also come against the backdrop of a sustained increase in government access requests to major technology providers.

According to Apple's transparency data, it processed more than 13,000 such requests for customer information during the first half of 2025, with email-related records contributing significantly to account attribution, threat analysis, and criminal investigations due to their evidentiary value. Nevertheless, this dynamic is not limited to Apple's ecosystem.

Similar constraints exist among providers such as Google and Microsoft, where legacy email protocols - architected in an era before modern encryption standards - continue to limit the amount of privacy protection inherent within their systems. Although niche services such as Proton have attempted to address this issue by implementing end-to-end encryption by design, their adoption remains marginal relative to the global email user base, which underscores the persistence of structurally exposed communication channels within this environment. 

Apple's position is especially interesting in light of the divergence between its privacy-oriented messaging and the technical realities of its email infrastructure. Hide My Email demonstrably reduces exposure to commercial tracking and data aggregation, but it does not alter the underlying compliance model governing lawful data access.

The distinction has reignited an ongoing policy debate around encryption, a controversy Apple has previously encountered with iMessage and other Apple services. Regulators and law enforcement agencies contend that inaccessible communications impede legitimate investigations, and extending comparable end-to-end encryption to iCloud Mail could create renewed friction.

In contrast, privacy advocates contend that any lowering of encryption standards introduces systemic security risks. For now, email privacy remains a compromise governed by both legal frameworks and engineering decisions.

Users seeking stronger privacy commonly rely on specialized encryption platforms, but these present usability constraints and interoperability challenges with the larger email ecosystem. The recent federal requests underline an important distinction: privacy controls designed to limit commercial visibility of user data do not automatically restrict government access.

Apple's products operate within this boundary, balancing user expectations with statutory obligations. However, a considerable gap remains between perception and operational reality, and it calls for reevaluation. It is unclear whether the company will extend its end-to-end encryption model to email services, particularly given the political and regulatory implications of such a shift.

These developments make clear that privacy is not a binary guarantee but a layered construct, shaped by both technical design and legal jurisdiction. Organizations and individuals alike should therefore reassess their threat models, distinguishing clearly between protections for sensitive communications and protections against commercial data exposure.

In cases where confidentiality is paramount, standard email services may be insufficient, necessitating selective adoption of stronger encryption techniques, secure communication channels, and disciplined data handling procedures. Because privacy features operate within clearly defined, and often misunderstood, boundaries, informed usage remains the most reliable safeguard.

How Duck.ai Offers Better Privacy Compared to Commercial Chatbots


Better privacy with DuckDuckGo's AI bot

Privacy issues have always concerned users and business organizations, and with the rapid adoption of AI, the threats are rising. DuckDuckGo's Duck.ai chatbot is benefiting from these concerns.

The latest report from Similarweb revealed that traffic to Duck.ai grew rapidly last month: the service recorded 11.1 million visits in February 2026, roughly 300% more than in January.

Duck.ai's sudden traffic jump

The statistics seem small compared with the most popular chatbots, such as ChatGPT, Claude, or Gemini.

Similarweb estimates that ChatGPT recorded 5.4 billion visits in February 2026, and Google’s Gemini recorded 2.1 billion, whereas Claude recorded 290.3 million. 

For DuckDuckGo, the numbers are a good sign: the bot only launched in beta in 2025 and has already shown a sharp rise in visits.

The DuckDuckGo browser is known for its privacy, and the company aims to apply the same principle to its AI bot. Duck.ai doesn't run a bespoke LLM; it uses frontier models from Meta, Anthropic, and OpenAI, but it doesn't expose your IP address or personal data to them.
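Conceptually, an anonymizing relay of this kind strips identifying metadata before forwarding a request to the upstream model provider. The sketch below is a simplified illustration of that idea, not DuckDuckGo's actual implementation; the header list and function name are assumptions.

```python
# Headers that can identify or track a user; a hypothetical anonymizing
# proxy removes them so the upstream provider sees only the proxy.
IDENTIFYING_HEADERS = {
    "cookie", "x-forwarded-for", "x-real-ip",
    "authorization", "user-agent", "referer",
}

def strip_identifying_headers(headers: dict[str, str]) -> dict[str, str]:
    """Return a copy of the request headers with identifying fields
    removed (case-insensitive match), keeping everything else intact."""
    return {k: v for k, v in headers.items()
            if k.lower() not in IDENTIFYING_HEADERS}
```

In this model, the provider receives the prompt and the proxy's own network address, while the user's IP, cookies, and browser fingerprint never leave the relay.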

Duck.ai's privacy policy reads, "In addition, we have agreements in place with all model providers that further limit how they can use data from these anonymous requests, including not using Prompts and Outputs to develop or improve their models, as well as deleting all information received once it is no longer necessary to provide Outputs (at most within 30 days, with limited exceptions for safety and legal compliance)."

Why Duck.ai is popular now

What is the reason for this sudden surge? The bot has two advantages over individual commercial bots like ChatGPT and Gemini: it offers an option to toggle between multiple models, and it provides stronger privacy. The privacy aspect is what sets it apart. Users on Reddit have praised Duck.ai, with one person noting it's "way better than Google's," referring to Gemini.

Privacy concerns in AI bots

In March, Anthropic rejected requests from the Department of Defense to apply its technology to mass surveillance and weapons. The DoD retaliated by terminating the contract, and soon after, OpenAI stepped in.

The incident stirred controversy around privacy concerns and ethical AI use, and it helps explain why users may prefer chatbots like Duck.ai that safeguard user data from both governments and big tech.