
GlassWorm Malware Campaign Attacks Developer IDEs, Steals Data


About GlassWorm campaign 

Cybersecurity experts have discovered another incident of the ongoing GlassWorm campaign, which uses a new Zig dropper that's built to secretly compromise all integrated development environments (IDEs) on a developer's system. 

The tactic was found in an Open VSX extension called "specstudio.code-wakatime-activity-tracker", which masqueraded as WakaTime, a popular tool that tracks how much time programmers spend in their IDE. The extension can no longer be downloaded. 

Attack tactic 

In previous attacks, GlassWorm shipped the same native compiled code in its extensions. Here, instead of serving as the payload directly, the binary acts as a covert delivery layer for the GlassWorm dropper, which can silently compromise every other IDE present on the device. 

The recently discovered Microsoft Visual Studio Code (VS Code) extension is a near-replica of the legitimate one.

The extension installs a universal Mach-O binary called "mac.node" if the system is running Apple macOS, and a binary called "win.node" on Windows computers.

Execution 

Both binaries are Node.js native addons: compiled shared libraries written in Zig that load straight into Node's runtime and run outside the JavaScript sandbox with complete operating-system-level access.

Finding every IDE on the system that supports VS Code extensions is the binary's main objective once it has been loaded. This includes forks like VSCodium, Positron, and other AI-powered coding tools like Cursor and Windsurf, in addition to Microsoft VS Code and VS Code Insiders.

Malicious code installation 

Once this is achieved, the binary installs an infected VS Code extension (.vsix) from an attacker-owned GitHub account. The extension, known as “floktokbok.autoimport”, imitates “steoates.autoimport”, an authentic extension with over 5 million downloads on the official Visual Studio Marketplace.

The installed .vsix file is then written to a secondary path and silently deployed into each IDE via that editor's CLI installer. 
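The sideloading step works because VS Code and its forks share a common CLI flag for installing extensions. The sketch below is a hypothetical Python reconstruction of the flow, not the dropper's actual code; the CLI binary names are the usual defaults and should be treated as assumptions.

```python
import shutil

# Common CLI binary names for VS Code and its forks (assumed defaults).
IDE_CLIS = ["code", "code-insiders", "codium", "cursor", "windsurf", "positron"]

def plan_installs(vsix_path):
    # For each IDE CLI found on PATH, the dropper would effectively run:
    #   <cli> --install-extension <vsix_path>
    # which sideloads the .vsix into that editor without the marketplace.
    return [[cli, "--install-extension", vsix_path]
            for cli in IDE_CLIS if shutil.which(cli)]
```

Defenders can use the same enumeration in reverse: the presence of many such CLIs on one machine is exactly the attack surface the binary walks through.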

The second-stage VS Code extension works as a dropper that skips deployment on Russian devices, interacts with the Solana blockchain, harvests personal data, and deploys a remote access trojan (RAT). In the final stage, the RAT installs a data-stealing Google Chrome extension. 

“The campaign has expanded repeatedly since then, compromising hundreds of projects across GitHub, npm, and VS Code, and most recently delivering a persistent RAT through a fake Chrome extension that logged keystrokes and dumped session cookies. The group keeps iterating, and they just made a meaningful jump,” cybersecurity firm Aikido reported. 

Salesforce Unveils AI-Powered Slack Overhaul with 30 Game-Changing Features

 

Salesforce has unveiled a transformative AI overhaul for its Slack platform, introducing 30 new features designed to elevate it from a mere messaging tool to a comprehensive AI-powered workflow engine. Announced by CEO Marc Benioff at a San Francisco event in late March 2026, this update builds on Slack's acquisition five years ago, which has driven two-and-a-half times revenue growth across a million businesses. The changes position Slack at the heart of Salesforce's AI-centric strategy, aiming to automate repetitive tasks and boost enterprise productivity. 

Central to the makeover is an enhanced Slackbot, now boasting agentic capabilities far beyond basic queries. Following a January 2026 update that enabled it to draft emails, schedule meetings, and scan inboxes, the new features introduce reusable AI skills. Users can define custom tasks—like generating a project budget—that Slackbot executes across contexts by pulling data from channels, connected apps, and external sources. These skills come pre-built in a library but allow personalization, slashing manual effort dramatically. 

For instance, commanding Slackbot to "create a budget for the team retreat" triggers it to aggregate expenses from Slack threads, integrate CRM data, draft a plan, and auto-schedule a review meeting with relevant stakeholders based on their roles. This seamless automation extends to Slackbot acting as an MCP client, interfacing with external tools like Salesforce's Agentforce platform from 2024. It routes queries intelligently to the optimal agent or app, minimizing human oversight. 

Meeting management sees significant upgrades too, with Slackbot now transcribing huddles, generating summaries, and extracting action items. Missed details? A quick ask delivers a personalized recap, including your assigned tasks. The bot's reach expands beyond Slack, monitoring desktop activities such as calendars, deals, conversations, and habits to offer proactive suggestions—like drafting follow-ups. Privacy controls let users tweak permissions, ensuring data access aligns with comfort levels. 

These 30 features, rolling out gradually over coming months, underscore Salesforce's vision to embed AI deeply into daily work. Early tests report up to 20 hours weekly productivity gains, powered partly by models like Anthropic’s Claude. Slack evolves into a versatile hub where communication, automation, and decision-making converge, potentially redefining enterprise tools. As businesses grapple with AI integration, this Slack revamp highlights both promise and challenges—like dependency on vendor ecosystems and data governance. For teams already in Salesforce's orbit, it promises efficiency; for others, it signals a competitive push in AI-driven collaboration. The update arrives amid rapid tech shifts, urging companies to adapt swiftly.

Quantum Computing Could Threaten Bitcoin Security Sooner Than Expected, Study Finds

 



New research suggests the cryptocurrency industry may have less time than anticipated to prepare for the risks posed by quantum computing, with potential implications for Bitcoin, Ethereum, and other major digital assets.

A whitepaper released on March 31 by researchers at Google indicates that breaking the cryptographic systems securing these networks may require fewer than 500,000 physical qubits on a superconducting quantum computer. This marks a sharp reduction from earlier estimates, which placed the requirement in the millions.

The study brings together contributors from both academia and industry, including Justin Drake of the Ethereum Foundation and Dan Boneh, alongside Google Quantum AI researchers led by Ryan Babbush and Hartmut Neven. The research was also shared with U.S. government agencies prior to publication, with input from organizations such as Coinbase and the Ethereum Foundation.

At present, no quantum system is capable of carrying out such an attack. Google’s most advanced processor, Willow, operates with 105 qubits. However, researchers warn that the gap between current hardware and attack-capable machines is narrowing. Drake has estimated at least a 10% probability that a quantum computer could extract a private key from a public key by 2032.

The concern centers on how cryptocurrencies are secured. Bitcoin relies on a mathematical problem known as the Elliptic Curve Discrete Logarithm Problem, which is considered practically unsolvable using classical computers. However, Peter Shor demonstrated that quantum algorithms could solve this problem far more efficiently, potentially allowing attackers to recover private keys, forge signatures, and access funds.

Importantly, this threat does not extend to Bitcoin mining, which relies on the SHA-256 algorithm. Experts suggest that using quantum computing to meaningfully disrupt mining remains decades away. Instead, the vulnerability lies in signature schemes such as ECDSA and Schnorr, both based on the secp256k1 elliptic curve.

The research outlines three potential attack scenarios. “On-spend” attacks target transactions in progress, where an attacker could intercept a transaction, derive the private key, and submit a fraudulent replacement before confirmation. With Bitcoin’s average block time of 10 minutes, the study estimates such an attack could be executed in roughly nine minutes using optimized quantum systems, with parallel processing increasing success rates. Faster blockchains such as Ethereum and Solana offer narrower windows but are not entirely immune.

“At-rest” attacks focus on wallets with already exposed public keys, such as reused or inactive addresses, where attackers have significantly more time. A third category, “on-setup” attacks, involves exploiting protocol-level parameters. While Bitcoin appears resistant to this method, certain Ethereum features and privacy tools like Tornado Cash may face higher exposure.
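Why address reuse matters can be seen from how an address commits to a key. The simplified Python sketch below (real Bitcoin uses SHA-256 followed by RIPEMD-160 plus Base58/Bech32 encoding, all omitted here) illustrates that an address exposes only a one-way hash of the public key; the key itself appears on-chain when the address spends, which is the window the “on-spend” and “at-rest” attacks exploit.

```python
import hashlib

def address_commitment(pubkey: bytes) -> str:
    # Simplified stand-in for Bitcoin address derivation: a plain SHA-256
    # hash. The essential property survives the simplification -- the
    # address is a one-way commitment, so a quantum attacker gets nothing
    # to work with until the public key is revealed by a spend.
    return hashlib.sha256(pubkey).hexdigest()

# A hypothetical compressed public key (33 bytes: prefix byte + x-coordinate).
pubkey = bytes([0x02]) + bytes(32)
addr = address_commitment(pubkey)
```

This is why moving funds to fresh addresses and avoiding reuse, as the study recommends, shrinks the “at-rest” exposure: unspent, never-reused addresses never reveal their public keys.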

Technically, the researchers developed quantum circuits requiring fewer than 1,500 logical qubits and tens of millions of computational operations, translating to under 500,000 physical qubits under current assumptions. This is a substantial improvement over earlier estimates, such as a 2023 study that suggested around 9 million qubits would be needed. More optimistic models could reduce this further, though they depend on hardware capabilities not yet demonstrated.

In an unusual move, the team did not publish the full attack design. Instead, they used a zero-knowledge proof generated through the SP1 zero-knowledge virtual machine to validate their findings without exposing sensitive details. This approach, rarely used in quantum research, allows independent verification while limiting misuse.

The findings arrive as both industry and governments begin preparing for a post-quantum future. The National Security Agency has called for quantum-resistant systems by 2030, while Google has set a 2029 target for transitioning its own infrastructure. Ethereum has been actively working toward similar goals, aiming for a full migration within the same timeframe. Bitcoin, however, faces slower progress due to its decentralized governance model, where major upgrades can take years to implement.

Early mitigation efforts are underway. A recent Bitcoin proposal introduces new address formats designed to obscure public keys and support future quantum-resistant signatures. However, a full transition away from current cryptographic systems has not yet been finalized.

For now, users are advised to take precautionary steps. Moving funds to new addresses, avoiding address reuse, and monitoring updates from wallet providers can reduce exposure, particularly for long-term holdings. While the threat is not immediate, researchers emphasize that preparation must begin well in advance, as advances in quantum computing continue to accelerate.

How Duck.ai Offers Better Privacy Compared to Commercial Chatbots


Better privacy with DuckDuckGo's AI bot

Privacy issues have long troubled users and business organizations, and with the rapid adoption of AI, the threats are rising. DuckDuckGo’s Duck.ai chatbot is benefiting from those concerns.

The latest report from Similarweb revealed that traffic to Duck.ai rose rapidly last month, reaching 11.1 million visits in February 2026, roughly 300% more than in January. 

Duck.ai's sudden traffic jump

The statistics seem small when compared with the most popular chatbots such as ChatGPT, Claude, or Gemini. 

Similarweb estimates that ChatGPT recorded 5.4 billion visits in February 2026, and Google’s Gemini recorded 2.1 billion, whereas Claude recorded 290.3 million. 

For DuckDuckGo, the numbers are a good sign: the bot launched in beta in 2025 and has since shown a sharp rise in visits. 

The DuckDuckGo browser is known for its privacy, and the company aims to apply the same principle to its AI bot. Duck.ai doesn't run a bespoke LLM; it uses frontier models from Meta, Anthropic, and OpenAI, but it doesn't expose your IP address or personal data. 

Duck.ai's privacy policy reads, "In addition, we have agreements in place with all model providers that further limit how they can use data from these anonymous requests, including not using Prompts and Outputs to develop or improve their models, as well as deleting all information received once it is no longer necessary to provide Outputs (at most within 30 days, with limited exceptions for safety and legal compliance).”

Duck.ai is famous now

What is the reason for this sudden surge? The bot has two advantages over individual commercial bots like ChatGPT and Gemini: an option to toggle between multiple models, and stronger privacy protections. The privacy aspect sets it apart. Users on Reddit have praised Duck.ai, with one person noting "it's way better than Google's," referring to Gemini. 

Privacy concerns in AI bots

In March, Anthropic rejected Department of Defense requests to apply its technology to mass surveillance and weapons. The DoD retaliated by terminating the contract, and OpenAI soon stepped in. 

The incident stirred controversy around privacy concerns and ethical AI use, and helps explain why users may prefer chatbots like Duck.ai that safeguard user data from both the government and big tech. 

Claude Mythos 5: Trillion-Parameter AI Powerhouse Unveiled

 

Anthropic has launched Claude Mythos 5, a groundbreaking AI model boasting 10 trillion parameters, positioning it as a leader in advanced artificial intelligence capabilities. This massive scale enables superior performance in demanding fields like cybersecurity, coding, and academic reasoning, surpassing many competitors in handling complex, high-stakes tasks. 

Alongside it, the mid-tier Capabara model offers efficient versatility, bridging the gap between flagship power and practical deployment, with Anthropic emphasizing a phased rollout for ethical safety. Claude Mythos 5 excels in precision and adaptability, making it ideal for cybersecurity threat detection and intricate software development where accuracy is paramount. In academic reasoning, it tackles multifaceted problems that require deep logical inference, outpacing previous models in benchmark tests. 

Anthropic's commitment to responsible AI ensures these tools minimize risks like misuse, aligning innovation with accountability in real-world applications. Complementing Anthropic's releases, GLM 5.1 emerges as a key open-source milestone, excelling in instruction-following and multi-step workflows for automation tasks. Though not the fastest, its reliability fosters community-driven innovation, providing accessible alternatives to proprietary systems for developers worldwide. This model democratizes AI progress, enabling collaborative advancements without the barriers of closed ecosystems. 

Google DeepMind's Gemini 3.1 advances real-time multimodal processing for voice and vision, enhancing latency and quality in sectors like healthcare and autonomous systems. OpenAI's revamped Codeex platform introduces plug-in ecosystems with pre-built workflows, streamlining coding and boosting developer productivity. Meanwhile, the ARC AGI 3 Benchmark sets a rigorous standard for agentic reasoning, combating overfitting and driving genuine AI intelligence gains. 

These developments, including Mistral AI’s expressive text-to-speech and Anthropic’s biology-focused Operon, signal AI's transformative potential across industries. From ethical trillion-parameter giants to open benchmarks, they promise efficiency in research, automation, and creative workflows. As AI evolves rapidly, balancing power with safety will shape a future of innovative problem-solving.

Quantum Computing: The Silent Killer of Digital Encryption

 

Quantum computing poses a greater long-term threat to digital security than AI, as it could shatter the encryption underpinning modern systems. While AI grabs headlines for ethical and societal risks, quantum advances quietly erode the foundations of data protection, urging immediate preparation. 

Today's encryption relies on algorithms secure against classical computers but vulnerable to quantum power, potentially cracking codes in minutes that would take supercomputers millennia. Adversaries already pursue "harvest now, decrypt later" strategies, stockpiling encrypted data for future breakthroughs, compromising long-shelf-life secrets like trade intel and health records. This urgency stems from quantum's theoretical ability to solve complex problems via algorithms like Shor's, demanding a shift to post-quantum cryptography today. 

Digital environments exacerbate the danger, blending legacy systems, cloud workloads, and AI agents into opaque networks ripe for lateral attacks. Breaches often exploit seams between SaaS, APIs, and multicloud setups, where visibility into east-west traffic remains limited despite regulations like EU's NIS2 mandating segmentation. AI accelerates risks by enabling autonomous actions across boundaries, turning compromised agents into rapid escalators of privileges. 

Traditional perimeters have vanished in cloud eras, rendering zero-trust policies insufficient without runtime enforcement at the workload level. Organizations need cloud-native security fabrics for continuous visibility and identity-based controls, curbing movement without infrastructure overhauls. Regulators like CISA push for provable zero-trust, highlighting how unmanaged connections form hidden attack paths. 

NIST's 2024 post-quantum standards mark progress, but migrating cryptography alone fortifies a flawed base amid current complexity breaches. True resilience embeds security into network fabrics, auditing paths and enforcing policies proactively against cumulative threats. As quantum converges with AI and cloud, only holistic defenses will safeguard digital trust before crises erupt.

China-based TA416 Targets European Businesses via Phishing Campaigns

Chinese state-sponsored attacks

A China-based threat actor is targeting European government and diplomatic entities; the attacks started in mid-2025, after a two-year period of no targeting in the region. The campaign has been linked to TA416, whose activities overlap with clusters tracked as DarkPeony, Red Lich, RedDelta, SmugX, Vertigo Panda, and UNC6384.

According to Proofpoint, “This TA416 activity included multiple waves of web bug and malware delivery campaigns against diplomatic missions to the European Union and NATO across a range of European countries. Throughout this period, TA416 regularly altered its infection chain, including abusing Cloudflare Turnstile challenge pages, abusing OAuth redirects, and using C# project files, as well as frequently updating its custom PlugX payload."

Multiple attack campaigns

Additionally, TA416 ran multiple campaigns against government and diplomatic organizations in the Middle East after the US-Iran conflict in February 2026, aiming to gather regional intelligence on the conflict.

TA416 also has a history of technical overlaps with a different group, Mustang Panda (UNK_SteadySplit, CerenaKeeper, and Red Ishtar). The combined activity is also tracked under the names Hive0154, Twill Typhoon, Earth Preta, Temp.HEX, Stately Taurus, and HoneyMyte. 

TA416’s attacks use PlugX variants, while the Mustang Panda group has repeatedly deployed tools like COOLCLIENT, TONESHELL, and PUBLOAD. One common trait is the use of DLL side-loading to install malware.

Attack tactic

TA416’s latest campaigns against European entities combine web bug and malware deployment operations. The threat actors use freemail sender accounts for reconnaissance and deliver the PlugX backdoor through harmful archives hosted on Google Drive, Microsoft Azure Blob Storage, and compromised SharePoint instances. The PlugX malware campaigns were most recently documented by Arctic Wolf and StrikeReady in October 2025. 

According to Proofpoint, “A web bug (or tracking pixel) is a tiny invisible object embedded in an email that triggers an HTTP request to a remote server when opened, revealing the recipient's IP address, user agent, and time of access, allowing the threat actor to assess whether the email was opened by the intended target.”
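Proofpoint's description can be illustrated with a minimal tracking-pixel server. This is a generic Python sketch of how any web bug works, not TA416's infrastructure; the handler serves a 1x1 transparent GIF while recording exactly the metadata described above.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# A minimal 1x1 transparent GIF -- the classic web-bug payload.
PIXEL = bytes.fromhex(
    "47494638396101000100800000000000ffffff21f9040100000000"
    "2c00000000010001000002024401003b"
)

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The request alone leaks the opener's IP address, user agent,
        # and time of access -- the "was the email opened?" signal.
        print(self.client_address[0], self.headers.get("User-Agent"), self.path)
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

    def log_message(self, *args):  # silence default stderr access log
        pass
```

A unique query string per recipient (e.g. `/t.gif?id=...`) is what lets the sender tie each hit to a specific target.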

The TA416 attacks in December last year leveraged third-party Microsoft Entra ID cloud apps to redirect victims to the download of harmful archives. Phishing emails in this campaign link to Microsoft’s authentic OAuth authorization flow, which, once opened, forwards the user to a hacker-controlled domain that installs PlugX.

According to experts, "When the MSBuild executable is run, it searches the current directory for a project file and automatically builds it."

Attackers Exploit Critical Flaw to Breach 766 Next.js Hosts and Steal Data


Credential-stealing operation

A massive credential-harvesting campaign was found abusing the React2Shell flaw as an initial infection vector to steal database credentials, shell command history, Amazon Web Services (AWS) secrets, and GitHub and Stripe API keys. 

Cisco Talos has linked the campaign to a threat cluster tracked as UAT-10608. At least 766 hosts across multiple geographic regions and cloud providers have been compromised as part of the operation. 

About the attack vector

According to experts, “Post-compromise, UAT-10608 leverages automated scripts for extracting and exfiltrating credentials from a variety of applications, which are then posted to its command-and-control (C2). The C2 hosts a web-based graphical user interface (GUI) titled 'NEXUS Listener' that can be used to view stolen information and gain analytical insights using precompiled statistics on credentials harvested and hosts compromised.”

Who are the victims?

The campaign targets Next.js instances vulnerable to CVE-2025-55182 (CVSS score: 10.0), a critical flaw in React Server Components and the Next.js App Router that can enable remote code execution. Attackers use it to gain access and then deploy the NEXUS Listener collection framework.

This is achieved by a dropper that runs a multi-phase harvesting script, which collects various details from the victim system: 

SSH private keys and authorized_keys

JSON-parsed keys and authorized_keys

Kubernetes service account tokens

Environment variables

API keys

Docker container configurations 

Running processes

IAM role-associated temporary credentials
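A defensive counterpart to one of these harvesting steps is to audit which environment variable names look secret-bearing. The helper below is a hypothetical Python sketch; the name pattern is an assumption of mine, not part of the campaign.

```python
import os
import re

# Names matching this pattern are the category of environment data the
# campaign's harvesting script goes after (pattern is illustrative).
SECRET_PATTERN = re.compile(r"(KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL)", re.I)

def suspicious_env_vars(env=None):
    # Return a sorted list of variable names that look secret-bearing.
    env = os.environ if env is None else env
    return sorted(name for name in env if SECRET_PATTERN.search(name))

sample = {"AWS_SECRET_ACCESS_KEY": "x", "PATH": "/usr/bin"}
print(suspicious_env_vars(sample))  # → ['AWS_SECRET_ACCESS_KEY']
```

Running such a check on exposed Next.js hosts shows at a glance what an attacker with RCE would have been able to read.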

Attack motive

The spread of victims and the indiscriminate targeting pattern are consistent with automated scanning. The centerpiece of the framework is a password-protected application that exposes all stolen data to the operator through a graphical user interface with search functions for browsing the information. The current NEXUS Listener version is V3, indicating the tool has gone through several iterations.

Talos managed to obtain data from one NEXUS Listener instance. It held API keys linked to Stripe; AI platforms such as Anthropic, OpenAI, and NVIDIA NIM; communication services such as Brevo and SendGrid; webhook secrets; Telegram bot tokens; GitLab and GitHub tokens; app secrets; and database connection strings. 

Why Email Aliases Are Important for Every User


Email spam has long been an annoyance of the digital world. Email providers have gotten better at managing overflowing inboxes cluttered with distractions and unwanted mail, from hyperbolic promotions to attempts to steal user data. 

But the problem has not disappeared completely, and users still run into it. To address the issue, users can turn to email aliases. 

About email alias 

An email alias is an alternative email address that allows you to receive mail without sharing your real address. The alias reroutes all incoming mail to your primary account.

Types of email aliases 

Plus addressing: for organizing mail efficiently, you add a + symbol and a category tag to your address; you can also add rules to your mail client to filter messages by source. 

Provider aliases: mainly used by organizations to have dedicated addresses for departments, while all mail goes to the same inbox. 

Masked/forwarding aliases: these are aimed at privacy. Users don't give out their real email; instead, a random address is generated, and messages sent to it are forwarded to the real inbox. This feature is available with services like Proton Mail. 
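Plus addressing is simple to reason about mechanically. The following Python sketch (a hypothetical helper, not any provider's implementation) shows how a provider can strip the tag to find the delivery mailbox while keeping the tag available for filtering rules.

```python
def canonical_address(addr):
    # Plus addressing: mail to user+tag@example.com is delivered to
    # user@example.com; the tag survives for filtering rules.
    local, _, domain = addr.partition("@")
    base, _, tag = local.partition("+")
    return "%s@%s" % (base, domain), (tag or None)

print(canonical_address("alice+newsletter@example.com"))
# → ('alice@example.com', 'newsletter')
```

A filter rule can then key on the tag: anything arriving with the tag you gave one merchant but from a different sender is a strong sign your address was shared or leaked.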

How it protects our privacy 

Email aliases are helpful for organizing your inbox and can be effective for business contacts. But the main benefit is protecting your privacy. 

There are several strategies to accomplish this, but the primary one is to minimize how often your real email address appears online. Your aliases can be removed at any moment, even if copies remain visible where you shared them. The more aliases you use, the more difficult it is to identify your real core email address. 

Because it keeps your address hidden from spammers, marketers, and phishing efforts, you will have more privacy. It is also simpler to determine who has exploited your data. 

Giving email aliases in specific circumstances makes it simpler to find instances when they have been abused. Instead of having to deal with a ton of spam, you can remove an alias as soon as you discover someone is abusing it and start over.

Aliases can be helpful for privacy, but they are not a foolproof way to stay safe online. They do not automatically encrypt emails, nor do they block tracking cookies.

The case of Apple

Court filings revealed that Apple's Hide My Email, a feature intended to protect genuine email addresses, does not keep users anonymous from law enforcement, raising new concerns about privacy.

With this feature, which is accessible to iCloud+ subscribers, users can create arbitrary email aliases so that websites and applications never see their primary address. Apple claims it doesn't read messages; they are simply forwarded. However, recent US cases show a clear limit: Apple was able to connect those anonymous aliases to identifiable accounts in response to legitimate court demands.

Hackers Exploit OpenClaw Bug to Control AI Agent


Cybersecurity experts have discovered a high-severity flaw named “ClawJacked” in the popular AI agent OpenClaw that allowed a malicious website to silently brute-force access to a locally running instance and take control of it. 

Oasis Security found the issue and informed OpenClaw; a fix was then released in version 2026.2.26 on February 26. 

About OpenClaw

OpenClaw is a self-hosted AI tool that recently became popular for allowing AI agents to autonomously execute commands, send texts, and handle tasks across multiple platforms. Oasis Security said the flaw stems from the OpenClaw gateway service binding to localhost and exposing a WebSocket interface. 

Attack tactic 

As cross-origin browser policies do not block WebSocket connections to localhost, a compromised website opened by an OpenClaw user can use JavaScript to silently open a connection to the local gateway and attempt authentication without raising any alarms. 

To stop such attacks, OpenClaw includes rate limiting, but the loopback address (127.0.0.1) is exempted by default so that local CLI sessions are not accidentally locked out. 
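The flawed combination described above, rate limiting with a blanket loopback exemption, can be sketched as follows. This is a hypothetical Python reconstruction of the logic, not OpenClaw's code; the limits are invented for illustration.

```python
import time

class RateLimiter:
    def __init__(self, max_attempts=5, window=60):
        self.max_attempts = max_attempts
        self.window = window          # seconds
        self.attempts = {}            # ip -> list of attempt timestamps

    def allow(self, ip):
        # The exemption that enables brute force: loopback requests are
        # never throttled, so CLI sessions can't be locked out -- but
        # browser JavaScript also reaches the gateway via 127.0.0.1.
        if ip == "127.0.0.1":
            return True
        now = time.time()
        recent = [t for t in self.attempts.get(ip, []) if now - t < self.window]
        recent.append(now)
        self.attempts[ip] = recent
        return len(recent) <= self.max_attempts

rl = RateLimiter()
print(all(rl.allow("127.0.0.1") for _ in range(1000)))   # → True (never throttled)
print([rl.allow("203.0.113.5") for _ in range(6)][-1])   # → False (throttled)
```

Since the browser's WebSocket connection arrives from 127.0.0.1, every password guess takes the unthrottled path, which is what made hundreds of guesses per second possible.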

OpenClaw brute-force to escape security 

Experts discovered that they could brute-force the OpenClaw management password at hundreds of attempts per second without any failed attempts being logged. Once the correct password is guessed, the attacker can silently register as a verified device, because the gateway automatically allows device pairings from localhost without prompting the user. 

“In our lab testing, we achieved a sustained rate of hundreds of password guesses per second from browser JavaScript alone. At that speed, a list of common passwords is exhausted in under a second, and a large dictionary would take only minutes. A human-chosen password doesn't stand a chance,” Oasis said. 

With an authenticated session and admin access, the attacker can then interact directly with the AI platform: identifying connected nodes, dumping credentials, and reading application logs. 

Attacker privileges

According to Oasis, this might enable an attacker to give the agent instructions to perform arbitrary shell commands on paired nodes, exfiltrate files from linked devices, or scan chat history for important information. This would essentially result in a complete workstation compromise that is initiated from a browser tab. 

Oasis provided an example of this attack, demonstrating how the OpenClaw vulnerability could be exploited to steal confidential information. The problem was resolved within a day of Oasis reporting it to OpenClaw, along with technical information and proof-of-concept code.

Experts Warn About AI-Assisted Malware Used For Extortion


AI-based Slopoly malware

Cybersecurity experts have disclosed details about a suspected AI-generated malware named “Slopoly,” used by the threat actor Hive0163 for financial gain. 

IBM X-Force researcher Golo Mühr said, “Although still relatively unspectacular, AI-generated malware such as Slopoly shows how easily threat actors can weaponize AI to develop new malware frameworks in a fraction of the time it used to take,” according to the Hacker News.

Hive0163 malware campaign 

Hive0163's attacks are motivated by extortion via large-scale data theft and ransomware. The gang is linked with various malicious tools like Interlock RAT, NodeSnake, Interlock ransomware, and Junk fiction loader. 

In a ransomware incident discovered in early 2026, the gang was found installing Slopoly during the post-exploitation phase to gain persistent access to the compromised server. 

Slopoly’s discovery can be traced back to a PowerShell script installed in the “C:\ProgramData\Microsoft\Windows\Runtime” folder via a builder. Persistence is established via a scheduled task called “Runtime Broker”. 
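For defenders, the two reported artifacts, the drop folder and the scheduled-task name, make a simple triage check. The Python helper below is hypothetical; only the directory path and task name come from the report, everything else is my own scaffolding.

```python
# Reported indicators (from the article); the task name mimics the
# legitimate Windows "Runtime Broker" process to blend in.
SUSPECT_DIR = r"C:\ProgramData\Microsoft\Windows\Runtime"
SUSPECT_TASK = "Runtime Broker"

def indicators_present(script_paths, task_names):
    # script_paths: file paths found on disk; task_names: scheduled tasks.
    hits = []
    if any(SUSPECT_DIR.lower() in p.lower() for p in script_paths):
        hits.append("persistence directory")
    if SUSPECT_TASK in task_names:
        hits.append("scheduled task")
    return hits
```

In practice the inputs would come from a filesystem walk and the output of `schtasks /query`; matching either indicator warrants a closer look at the host.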

There are signs that the malware was developed with the help of an as-yet-undetermined large language model (LLM), including the presence of extensive comments, logging, error handling, and accurately named variables. 

The comments also describe the script as a "Polymorphic C2 Persistence Client," indicating that it's part of a command-and-control (C2) framework. 

According to Mühr, “The script does not possess any advanced techniques and can hardly be considered polymorphic, since it's unable to modify its own code during execution. The builder may, however, generate new clients with different randomized configuration values and function names, which is standard practice among malware builders.”

The PowerShell script works as a backdoor, transmitting system details to a C2 server. There has been a rise in AI-assisted malware in recent times: Slopoly, PromptSpy, and VoidLink show how hackers are using the tools to speed up malware creation and expand their operations. 

IBM X-Force says the “introduction of AI-generated malware does not pose a new or sophisticated threat from a technical standpoint. It disproportionately enables threat actors by reducing the time an operator needs to develop and execute an attack.”

Perplexity's Comet AI Browser Tricked Into Phishing Scam Within Four Minutes


Agentic browser at risk

Agentic web browsers that use AI tools to autonomously perform tasks across websites for a user can be trained against and fooled into phishing attacks. Attackers exploit the AI browsers’ tendency to narrate their actions and turn that output against the same model to defeat its security checks. 

According to security expert Shaked Chen, “The AI now operates in real time, inside messy and dynamic pages, while continuously requesting information, making decisions, and narrating its actions along the way. Well, 'narrating' is quite an understatement - It blabbers, and way too much!,” the Hacker News reported. “Agentic blabbering” refers to an AI browser displaying what it sees, thinks, and plans to do next, and what it deems safe or a threat. 

Tricking the browsers

By intercepting the traffic between the browser and the AI services on the vendor’s servers and feeding it as input to a Generative Adversarial Network (GAN), the researchers made Perplexity’s Comet AI browser fall prey to a phishing attack within four minutes. 

The research is based on established tactics such as Scamlexity and VibeScamming, which revealed that vibe-coding platforms and AI browsers can be coerced into generating scam pages and performing malicious tasks via prompt injection. 

Attack tactic

The attack surface changes as a result of the AI agent managing tasks without frequent human oversight: a scammer no longer has to trick a user, but instead seeks to deceive the AI model itself. 

Chen said, “If you can observe what the agent flags as suspicious, hesitates on, and more importantly, what it thinks and blabbers about the page, you can use that as a training signal.” Chen added that the “scam evolves until the AI Browser reliably walks into the trap another AI set for it."
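The feedback loop Chen describes can be sketched as follows. Every function here is a hypothetical stand-in (the real attack would drive an actual AI browser and an attacker-side model), but the loop structure, using the agent's narration as a training signal, is the point:

```python
# agent_verdict() stands in for running the target AI browser against a
# candidate page and capturing whether it proceeds, plus its narration.
def agent_verdict(page: str) -> tuple[bool, str]:
    suspicious = "urgent" in page
    narration = "flagged an urgency cue" if suspicious else "looks safe"
    return (not suspicious, narration)

# mutate() stands in for an attacker-side model rewriting the page to
# remove whatever the agent narrated as suspicious.
def mutate(page: str, narration: str) -> str:
    if "urgency" in narration:
        return page.replace("urgent", "friendly reminder")
    return page

def refine(page: str, max_rounds: int = 10) -> str:
    # Iterate offline until the agent accepts the page.
    for _ in range(max_rounds):
        accepted, narration = agent_verdict(page)
        if accepted:
            return page  # the browser now "walks into the trap"
        page = mutate(page, narration)
    return page
```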

End goal?

The aim is to build a “scamming machine” that refines and regenerates a phishing page until the agentic browser accepts the prompts and carries out the attacker's commands, such as entering the victim's passwords on a malicious web page built for refund scams. 

Guardio is concerned about the development, saying that, “This reveals the unfortunate near future we are facing: scams will not just be launched and adjusted in the wild, they will be trained offline, against the exact model millions rely on, until they work flawlessly on first contact.”

Microsoft Report Reveals Hackers Exploit AI In Cyberattacks


According to Microsoft, hackers are increasingly using AI to amplify and scale their cyberattack activity and to lower technical barriers at every stage of an attack. 

Microsoft’s new Threat Intelligence report reveals that threat actors are using genAI tools for various tasks, such as phishing, surveillance, malware building, infrastructure development, and post-hack activity. 

About the report

In various incidents, AI helps to create phishing emails, summarize stolen information, debug malware, translate content, and configure infrastructure. “Microsoft Threat Intelligence has observed that most malicious use of AI today centers on using language models for producing text, code, or media. Threat actors use generative AI to draft phishing lures, translate content, summarize stolen data, generate or debug malware, and scaffold scripts or infrastructure,” the report said. 

"For these uses, AI functions as a force multiplier that reduces technical friction and accelerates execution, while human operators retain control over objectives, targeting, and deployment decisions," warns Microsoft.

AI in cyberattacks 

Microsoft found different hacking gangs using AI in their cyberattacks, such as the North Korean hackers known as Coral Sleet (Storm-1877) and Jasper Sleet (Storm-0287), who use AI in their remote IT worker scams. 

The AI helps create realistic identities, communications, and resumes to land jobs at Western companies and gain access once hired. Microsoft also explained how AI is being exploited in malware development and infrastructure creation: threat actors are using AI coding tools to create and refine malicious code, fix errors, and port malware components to different programming languages. 

The impact

A few malware experiments showed traces of AI-enabled malware that generates scripts or configures behaviour at runtime. Microsoft found Coral Sleet using AI to make fake company sites, manage infrastructure, and troubleshoot their installations. 

When security analysts try to stop the use of AI in these attacks, Microsoft says, hackers turn to jailbreaking techniques to trick AI into creating malicious code or content. 

Besides generative AI use, the report revealed that hackers are experimenting with agentic AI that performs tasks autonomously, though for now AI is used mainly to support decision-making. As IT worker campaigns depend on the exploitation of legitimate access, experts have advised organizations to treat these attacks as insider risks. 

Pakistan-Linked Hackers Use AI to Flood Targets With Malware in India Campaign

 

A Pakistan-aligned hacking group known as Transparent Tribe is using artificial intelligence coding tools to produce large numbers of malware implants in a campaign primarily targeting India, according to new research from cybersecurity firm Bitdefender. 

Security researchers say the activity reflects a shift in how some threat actors are developing malicious software. Instead of focusing on highly advanced malware, the group appears to be generating a large volume of implants written in multiple programming languages and distributed across different infrastructure. 

Researchers said the operation is designed to create a “high-volume, mediocre mass of implants” using less common languages such as Nim, Zig and Crystal while relying on legitimate platforms including Slack, Discord, Supabase and Google Sheets to help evade detection. 

“Rather than a breakthrough in technical sophistication, we are seeing a transition toward AI-assisted malware industrialization that allows the actor to flood target environments with disposable, polyglot binaries,” Bitdefender researchers said in a technical analysis of the campaign. 

The strategy involves creating numerous variations of malware rather than relying on a single sophisticated tool. Bitdefender described the approach as a form of “Distributed Denial of Detection,” where attackers overwhelm security systems with large volumes of different binaries that use various communication protocols and programming languages. 

Researchers say large language models have lowered the barrier for threat actors by allowing them to generate working code in unfamiliar languages or convert existing code into different formats. 

That capability makes it easier to produce large numbers of malware samples with minimal expertise. 

The campaign has primarily targeted Indian government organizations and diplomatic missions abroad. 

Investigators said the attackers also showed interest in Afghan government entities and some private businesses. According to the analysis, the attackers use LinkedIn to identify potential targets before launching phishing campaigns. 

Victims may receive emails containing ZIP archives or ISO images that include malicious Windows shortcut files. In other cases, victims are sent PDF documents that include a “Download Document” button directing them to attacker-controlled websites. 

These websites trigger the download of malicious archives. Once opened, the shortcut file launches PowerShell scripts that run in memory. 

The scripts download a backdoor and enable additional actions inside the compromised system. Researchers said attackers sometimes deploy well-known adversary simulation tools such as Cobalt Strike and Havoc to maintain access. 

Bitdefender identified a wide range of custom tools used in the campaign. These include Warcode, a shellcode loader written in Crystal designed to load a Havoc agent into memory, and NimShellcodeLoader, which deploys a Cobalt Strike beacon. 

Another tool called CreepDropper installs additional malware, including SHEETCREEP, a Go-based information stealer that communicates with command servers through Microsoft Graph API, and MAILCREEP, a backdoor written in C# that uses Google Sheets for command and control. 

Researchers also identified SupaServ, a Rust-based backdoor that communicates through the Supabase platform with Firebase acting as a fallback channel. The code includes Unicode emojis, which researchers said suggests it may have been generated with the help of AI. 

Additional malware used in the campaign includes CrystalShell and ZigShell, backdoors written in Crystal and Zig that can run commands, collect host information and communicate with command servers through platforms such as Slack or Discord. 

Other tools observed in the operation include LuminousStealer, a Rust-based information stealer that exfiltrates files to Firebase and Google Drive, and LuminousCookies, which extracts cookies, passwords and payment information from Chromium-based browsers. 

Bitdefender said the attackers are also using utilities such as BackupSpy to monitor file systems for sensitive data and ZigLoader to decrypt and execute shellcode directly in memory. Despite the large number of tools involved, researchers say the overall quality of the malware is often inconsistent. 

“The transition of APT36 toward vibeware represents a technical regression,” Bitdefender said, referring to the Transparent Tribe group. “While AI-assisted development increases sample volume, the resulting tools are often unstable and riddled with logical errors.” 

Still, the researchers warned that the broader trend could make cyberattacks easier to scale. By combining AI-generated code with trusted cloud services, attackers can hide malicious activity within normal network traffic. 

“We are seeing a convergence of two trends that have been developing for some time: the adoption of exotic programming languages and the abuse of trusted services to hide in legitimate traffic,” the researchers said. 

They added that this combination allows even relatively simple malware to succeed by overwhelming traditional detection systems with sheer volume.

BadPaw Malware Targets Ukrainian Systems


Cybersecurity experts have uncovered a new malware campaign that exploits a Ukrainian email service to build trust with its targets. 

About the campaign 

The operation starts with an email sent from an address hosted on ukr[.]net, a popular Ukrainian provider previously abused by the Russia-based hacking group APT28 in earlier campaigns.

BadPaw malware 

Experts at ClearSky have named the malware “BadPaw.” The campaign starts when a recipient opens a link that pretends to host a ZIP archive. Instead of starting a direct download, the target is redirected to a domain that loads a tracking pixel, letting the threat actor verify engagement. A further redirect delivers the ZIP file. 

The archive appears to contain a standard HTML file, but ClearSky experts revealed that it is actually a disguised HTA application. When executed, the file displays a decoy document related to a Ukrainian government border-crossing request while malicious processes launch in the background. 

Attack tactic 

Before running, the malware checks a Windows Registry key to determine the system's installation date. If the OS is less than ten days old, deployment stops, a tactic that evades the freshly provisioned sandbox environments used by threat analysts. 

If all the conditions are fulfilled, the malware locates the original ZIP file and retrieves extra components. It establishes persistence via a scheduled task that runs a VBS script, which uses steganography to extract hidden executable code from an image file. 
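Steganographic extraction of this kind typically reads payload bits back out of the low-order bits of carrier data. A minimal illustrative sketch in Python, assuming simple least-significant-bit encoding (the campaign's exact scheme is not detailed here):

```python
# Payload bits are hidden in the least significant bit of each carrier
# byte (e.g. raw image pixel data). Illustrative only.

def embed(carrier: bytes, payload: bytes) -> bytes:
    bits = [(b >> i) & 1 for b in payload for i in range(7, -1, -1)]
    assert len(bits) <= len(carrier), "carrier too small"
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite only the LSB
    return bytes(out)

def extract(carrier: bytes, n_bytes: int) -> bytes:
    out = bytearray()
    for i in range(n_bytes):
        byte = 0
        for j in range(8):               # reassemble MSB-first
            byte = (byte << 1) | (carrier[i * 8 + j] & 1)
        out.append(byte)
    return bytes(out)
```

Because only the low bit of each byte changes, the carrier image remains visually indistinguishable, which is what lets such payloads slip past casual inspection.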

Only nine antivirus engines could spot the payload at the time of study. 

Multi-Layered Attack

Once activated under the required conditions, BadPaw connects to a C2 server. 

The following process happens:

Getting a numeric result from the /getcalendar endpoint. 

Gaining access to a landing page called “Telemetry UP!” through /eventmanager. 

Downloading the ASCII-encoded payload data embedded within the HTML. 

In the end, the decrypted data launches a backdoor called "MeowMeowProgram[.]exe," which offers file system control and remote shell access. 
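The report does not spell out the ASCII encoding scheme, but a common variant hides decimal character codes inside an HTML comment. A hypothetical decoder sketch:

```python
import re

# Assumed scheme: whitespace-separated decimal character codes inside
# an HTML comment, e.g. "<!-- 77 90 ... -->". The real campaign's
# encoding may differ.

def extract_payload(html: str) -> bytes:
    match = re.search(r"<!--\s*([\d\s]+?)\s*-->", html)
    if match is None:
        return b""
    codes = match.group(1).split()
    return bytes(int(c) for c in codes)
```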

Four protective layers are built into the MeowMeow backdoor: runtime parameter constraints, .NET Reactor obfuscation, sandbox detection, and monitoring for forensic tools such as Wireshark, Procmon, OllyDbg, and Fiddler.
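The forensic-tool monitoring layer typically amounts to comparing running process names against a blocklist. A simplified Python illustration (the function name is hypothetical; defenders can hunt for the same pattern in samples):

```python
# Case-insensitive blocklist of common analysis tools, matching the
# ones the MeowMeow backdoor is reported to watch for.
FORENSIC_TOOLS = {"wireshark.exe", "procmon.exe", "ollydbg.exe", "fiddler.exe"}

def forensic_tools_present(process_names: list[str]) -> bool:
    # In real malware, process_names would come from enumerating the
    # live process list; here it is passed in for illustration.
    return any(name.lower() in FORENSIC_TOOLS for name in process_names)
```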

If executed with incorrect parameters, the backdoor shows only a benign graphical user interface with a picture of a cat; clicking the "MeowMeow" button merely displays a harmless message.

Microsoft AI Chief: 18 Months to Automate White-Collar Jobs

 

Mustafa Suleyman, CEO of Microsoft AI, has issued a stark warning about the future of white-collar work. In a recent Financial Times interview, he predicted that AI will achieve human-level performance on most professional tasks within 18 months, automating jobs involving computer-based work like accounting, legal analysis, marketing, and project management. This timeline echoes concerns from AI leaders, comparing the shift to the pre-pandemic moment in early 2020 but far more disruptive. Suleyman attributes this to exponential growth in computational power, enabling AI to outperform humans in coding and beyond.

Suleyman's forecast revives 2025 predictions from tech executives. Anthropic's Dario Amodei warned AI could eliminate half of entry-level white-collar jobs, while Ford's Jim Farley foresaw a 50% cut in U.S. white-collar roles. Elon Musk recently suggested artificial general intelligence—AI surpassing human intelligence—could arrive this year. These alarms contrast with CEO silence earlier, likened by The Atlantic to ignoring a shark fin in the water. The drumbeat of disruption is growing louder amid rapid AI advances.

Current AI impact on offices remains limited despite hype. A 2025 Thomson Reuters report shows lawyers and accountants using AI for tasks like document review, yielding only marginal productivity gains without mass displacement. Some studies even indicate setbacks: a METR analysis found AI slowed software developers by 20%. Economic benefits are mostly in Big Tech, with profit margins up over 20% in Q4 2025, while broader indices like the Bloomberg 500 show no change.

Early job losses signal brewing changes. Challenger, Gray & Christmas reported 55,000 AI-related cuts in 2025, including Microsoft's 15,000 layoffs as CEO Satya Nadella pushed to "reimagine" for the AI era. Markets reacted sharply last week with a "SaaSpocalypse" selloff in software stocks after Anthropic and OpenAI launched agentic AI systems mimicking SaaS functions. Investors doubt AI will boost non-tech earnings, per Wall Street consensus.

Suleyman envisions customizable AI transforming every organization. He predicts users will design models like podcasts or blogs, tailored for any job, driving his push for Microsoft "superintelligence" and independent foundation models. As the "most important technology of our time," Suleyman aims to reduce reliance on partners like OpenAI. This could redefine the American Dream, once fueled by MBAs and law degrees, urging urgent preparation for AI's white-collar reckoning.

ClawJack Allows Malicious Sites to Control Local OpenClaw AI Agents


Peter Steinberger created OpenClaw, an AI tool that can be a personal assistant for developers. It immediately became famous and got 100,000 GitHub stars in a week. Even OpenAI founder Sam Altman was impressed, bringing Steinberger on board and calling him a “genius.” However, experts from Oasis Security said that the viral success had hidden threats.

OpenClaw addressed a high-severity security threat that could have been exploited to allow a malicious site to link with a locally running AI agent and take control. According to the Oasis Security report, “Our vulnerability lives in the core system itself – no plugins, no marketplace, no user-installed extensions – just the bare OpenClaw gateway, running exactly as documented.” 

ClawJack scare

The threat was codenamed ClawJack by the experts. CVE-2026-25253 could have become a severe vulnerability chain allowing any site to hijack a person's AI agent. The vulnerability existed in the software's main gateway: because OpenClaw is built to trust connections from the user's own system, it could have given hackers easy access. 

Assuming the threat model

On a developer's laptop, OpenClaw is installed and running. Its gateway, a local WebSocket server, is password-protected and bound to localhost. The attack begins when the developer visits a website controlled by the attacker, reached via social engineering or another method. According to the Oasis report, “Any website you visit can open one to your localhost. Unlike regular HTTP requests, the browser doesn't block these cross-origin connections. So while you're browsing any website, JavaScript running on that page can silently open a connection to your local OpenClaw gateway. The user sees nothing.”

Stealthy Attack Tactic 

The research highlighted a subtle property of WebSockets. Browsers normally enforce the same-origin policy to stop one website from meddling with another origin's resources, but cross-origin WebSocket connections are not blocked by the browser itself; validating where a connection comes from is left to the server. 

The OpenClaw gateway assumed a connection must be safe because it comes from the user's own computer (localhost). That assumption is dangerous: if a developer running OpenClaw visits a malicious website, a hidden script in the page can open a WebSocket connection and interact directly with the AI tool in the background, leaving the user none the wiser.

Hollywood Studios Target AI Video Tool

 

Hollywood studios are intensifying efforts to curb an "ultra-realistic" AI video generator that produces lifelike clips from simple text prompts. The tool, capable of creating scenes like a fist fight between Tom Cruise and Brad Pitt, has sparked alarm in the entertainment industry over potential job losses and intellectual property misuse. Major players are pushing for regulatory action to protect actors and creators from deepfake disruptions.

The controversy erupted after a viral AI-generated video showcased the tool's prowess, depicting high-profile stars in a convincing brawl that stunned viewers worldwide. Creators behind the technology hail it as innovative, but industry insiders fear it could flood markets with unauthorized content, undermining traditional filmmaking. Hollywood executives have rallied, warning that unchecked AI could "transform or destroy" careers they've built over decades.

Prominent voices in the field have voiced deep concerns. One affected professional noted, "So many people I care about are facing the potential loss of careers they cherish. I myself am at risk." He expressed astonishment at the video's professionalism, shifting from initial nonchalance to genuine apprehension about the industry's future. This reflects broader anxieties as AI blurs lines between real and synthetic media.

Studios are now collaborating on legal strategies, targeting the tool's developers and platforms hosting such content. Discussions include lawsuits for copyright infringement and calls for stricter AI guidelines from governments. While the technology promises creative efficiencies, opponents argue it prioritizes speed over ethical safeguards, potentially devaluing human artistry. Recent viral spreads on social media have amplified the urgency, with calls to remove deceptive videos. 

As AI evolves rapidly, Hollywood's standoff highlights a pivotal clash between innovation and preservation. Balancing advancement with protection will define the sector's resilience amid digital transformation. Stakeholders urge immediate intervention to prevent irreversible damage, positioning this as a landmark battle in the AI era.

GitHub Fixes AI Flaw That Could Have Exposed Private Repository Tokens

 



A now-patched security weakness in GitHub Codespaces revealed how artificial intelligence tools embedded in developer environments can be manipulated to expose sensitive credentials. The issue, discovered by cloud security firm Orca Security and named RoguePilot, involved GitHub Copilot, the AI coding assistant integrated into Codespaces. The flaw was responsibly disclosed and later fixed by Microsoft, which owns GitHub.

According to researchers, the attack could begin with a malicious GitHub issue. An attacker could insert concealed instructions within the issue description, specifically crafted to influence Copilot rather than a human reader. When a developer launched a Codespace directly from that issue, Copilot automatically processed the issue text as contextual input. This created an opportunity for hidden instructions to silently control the AI agent operating within the development environment.

Security experts classify this method as indirect or passive prompt injection. In such attacks, harmful instructions are embedded inside content that a large language model later interprets. Because the model treats that content as legitimate context, it may generate unintended responses or perform actions aligned with the attacker’s objective.

Researchers also described RoguePilot as a form of AI-mediated supply chain attack. Instead of exploiting external software libraries, the attacker leverages the AI system integrated into the workflow. GitHub allows Codespaces to be launched from repositories, commits, pull requests, templates, and issues. The exposure occurred specifically when a Codespace was opened from an issue, since Copilot automatically received the issue description as part of its prompt.

The manipulation could be hidden using HTML comment tags, which are invisible in rendered content but still readable by automated systems. Within those hidden segments, an attacker could instruct Copilot to extract the repository’s GITHUB_TOKEN, a credential that provides elevated permissions. In one demonstrated scenario, Copilot could be influenced to check out a specially prepared pull request containing a symbolic link to an internal file. Through techniques such as referencing a remote JSON schema, the AI assistant could read that internal file and transmit the privileged token to an external server.
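Because the injected instructions ride inside HTML comments, a simple pre-filter that strips or flags comments before issue text reaches the model can blunt this specific vector. An illustrative sketch (the function name and behaviour are assumptions, not GitHub's actual fix):

```python
import re

# HTML comments are invisible in rendered issues but still present in
# the raw text handed to the model; DOTALL lets a comment span lines.
HTML_COMMENT = re.compile(r"<!--.*?-->", re.DOTALL)

def sanitize_issue_text(text: str) -> tuple[str, list[str]]:
    # Returns the text with comments removed, plus the hidden spans
    # so they can be logged or surfaced for review.
    hidden = HTML_COMMENT.findall(text)
    cleaned = HTML_COMMENT.sub("", text)
    return cleaned, hidden
```

Stripping comments does not stop prompt injection in visible text, but it removes the attacker's ability to hide instructions from the human reader while still reaching the model.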

The RoguePilot disclosure comes amid broader concerns about AI model alignment. Separate research from Microsoft examined a reinforcement learning method called Group Relative Policy Optimization, or GRPO. While typically used to fine-tune large language models after deployment, researchers found it could also weaken safety safeguards, a process they labeled GRP-Obliteration. Notably, training on even a single mildly problematic prompt was enough to make multiple language models more permissive across harmful categories they had never explicitly encountered.

Additional findings highlight side-channel risks tied to speculative decoding, an optimization technique that allows models to generate multiple candidate tokens simultaneously to improve speed. Researchers found this process could potentially reveal conversation topics or identify user queries with significant accuracy. 

Further concerns were raised by AI security firm HiddenLayer, which documented a technique called ShadowLogic. When applied to agent-based systems, the concept evolves into Agentic ShadowLogic. This approach involves embedding backdoors at the computational graph level of a model, enabling silent modification of tool calls. An attacker could intercept and reroute requests through infrastructure under their control, monitor internal endpoints, and log data flows without disrupting normal user experience.

Meanwhile, Neural Trust demonstrated an image-based jailbreak method known as Semantic Chaining. This attack exploits limited reasoning depth in image-generation models by guiding them through a sequence of individually harmless edits that gradually produce restricted or offensive content. Because each step appears safe in isolation, safety systems may fail to detect the evolving harmful intent.

Researchers have also introduced the term Promptware to describe a new category of malicious inputs designed to function like malware. Instead of exploiting traditional code vulnerabilities, promptware manipulates large language models during inference to carry out stages of a cyberattack lifecycle, including reconnaissance, privilege escalation, persistence, command-and-control communication, lateral movement, and data exfiltration.

Collectively, these findings demonstrate that AI systems embedded in development platforms are becoming a new attack surface. As organizations increasingly rely on intelligent automation, safeguarding the interaction between user input, AI interpretation, and system permissions is critical to preventing misuse within trusted workflows.

India Sees Rising Push for Limits on Children’s Social Media Access

 

A growing conversation around restricting social media access for children under 16 is gaining traction across India, with several state leaders reviewing regulatory models adopted overseas — particularly in Australia.

Ministers from at least two southern states have indicated that they are assessing whether prohibiting minors from using social media could effectively shield children from excessive online exposure.

Adding weight to the debate, the latest Economic Survey — an annual report prepared by a team led by India’s chief economic adviser — suggested that the central government explore age-based controls on children’s social media usage. While the survey does not mandate policy action, its recommendations often influence national discussions.

Australia’s Precedent Sparks Global Debate

Australia recently became the first nation to prohibit most social media platforms for users under 16. The law requires companies to verify users’ ages and deactivate accounts belonging to underage individuals.

The decision drew criticism from tech platforms. As Australia’s internet regulator told the BBC last month, companies responded to the framework "kicking and screaming - very very reluctantly".

Meanwhile, lawmakers in France have approved a bill in the lower house seeking to block social media access for children under 15; the proposal now awaits Senate approval. The United Kingdom is also evaluating similar measures.

In India, LSK Devarayalu of the Telugu Desam Party — which governs Andhra Pradesh and supports Prime Minister Narendra Modi’s federal coalition — introduced a private member’s bill proposing a ban on social media use for children under 16. Although such bills rarely become law, they can influence legislative debate.

Separately, the Andhra Pradesh government has formed a ministerial group to examine international regulatory models. It has also invited major technology firms, including Meta, X, Google and ShareChat, for consultations. The companies have yet to respond publicly.

State IT Minister Nara Lokesh recently wrote on X that children were "slipping into relentless usage" of social media, affecting their attention spans and academic performance.

"We will ensure social media becomes a safer space and reduce its damaging impact - especially for women and children," he added.

In Goa, Tourism and IT Minister Rohan Khaunte confirmed that authorities are studying whether such restrictions could be introduced, promising further details soon.

Similarly, Priyank Kharge, IT Minister of Karnataka — home to Bengaluru, often dubbed India’s Silicon Valley — informed the state assembly that discussions were underway on responsible artificial intelligence and social media use. He referenced a “digital detox” initiative launched in partnership with Meta, involving approximately 300,000 students and 100,000 teachers. However, he did not clarify whether legislative action was being considered.

Enforcement and Legal Hurdles

Experts caution that implementing such bans in India would be legally and technically complex.

Digital rights activist Nikhil Pahwa pointed out that enforcing state-level prohibitions could create jurisdictional conflicts. "While companies can infer users' locations through IP addresses, such systems are often inaccurate. Where state boundaries are very close, you can end up creating conflicts if one state bans social media use and another does not."

He also underscored the broader issue of age verification. "Age verification is not simple. To adhere to such bans, companies would effectively have to verify every individual using every service on the internet," Pahwa told the BBC.

Even in Australia, some minors reportedly bypass restrictions by entering false birth dates to create accounts.

According to Prateek Waghre, head of programmes at the Tech Global Institute, successful enforcement would hinge on platform cooperation.

"In theory, location can be inferred through IP addresses by internet service providers or technology companies, but whether the companies operating such apps would comply, or challenge such directions in court, is not yet clear," he says.

Broader Social Concerns

While lawmakers acknowledge the risks of excessive social media exposure, some analysts argue that a blanket ban may be too narrow a solution.

A recent survey of 1,277 Indian teenagers by a non-profit organisation found that many accounts are created with assistance from family members or friends and are often not tied to personal email addresses. This complicates assumptions of individual ownership central to age-verification systems.

Parents remain divided. Delhi resident Jitender Yadav, father of two young daughters, believes deeper issues are at play.

"Parents themselves fail to give enough time to children and hand them phones to keep them engaged - the problem starts there," he says.

"I am not sure if a social media ban will help. Because unless parents give enough time to their children or learn to keep them creatively engaged, they will always find ways to bypass such bans," he says.

As the discussion unfolds, India faces a complex balancing act — safeguarding children online while navigating legal, technological and social realities.