Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label AI. Show all posts

How Duck.ai Offer Better Privacy Compared to Commercial Chatbots


Better privacy with DuckDuckGo's AI bot

Privacy issues have always bothered users and business organizations. With the rapid adoption of AI, the threats are also rising. DuckDuckGo’s Duck.ai chatbot benefits from this.

The latest report from Similarweb revealed that traffic to Duck.ai increased rapidly last month. The traffic recorded 11.1 million visits in February 2026, 300% more than January. 

Duck.ai's sudden traffic jump

The statistics seem small when compared with the most popular chatbots such as ChatGPT, Claude, or Gemini. 

Similarweb estimates that ChatGPT recorded 5.4 billion visits in February 2026, and Google’s Gemini recorded 2.1 billion, whereas Claude recorded 290.3 million. 

For DuckDuckGo, the numbers show a good sign, as the bot was launched as beta in 2025, and has shown a sharp rise in visits. 

DuckDuckGo browser is known for its privacy, and the company aims to apply the same principle to its AI bot. Duck.ai doesn't run a bespoke LLM, it uses frontier models from Meta, Anthropic, and OpenAI, but it doesn't expose your IP address and personal data. 

Duck.ai's privacy policy reads, "In addition, we have agreements in place with all model providers that further limit how they can use data from these anonymous requests, including not using Prompts and Outputs to develop or improve their models, as well as deleting all information received once it is no longer necessary to provide Outputs (at most within 30 days, with limited exceptions for safety and legal compliance),”

Duck.ai is famous now

What is the reason for this sudden surge? The bot has two advantages over individual commercial bots like ChatGPT and Gemini, it offers an option to toggle between multiple models and better privacy security. The privacy aspect sets it apart. Users on Reddit have praised Duck.ai, one person noting "it's way better than Google's," which means Gemini. 

Privacy concerns in AI bots

In March, Anthropic rejected a few applications of its technology for mass surveillance and weapons submitted by the Department of Defense. The DoD retaliated by breaking the contract. Soon after, OpenAI stepped in. 

The incident stirred controversies around privacy concerns and ethical AI use. This explains why users may prefer chatbots like Duck.ai that safeguard user data from both the government and the big tech. 

Claude Mythos 5: Trillion-Parameter AI Powerhouse Unveiled

 

Anthropic has launched Claude Mythos 5, a groundbreaking AI model boasting 10 trillion parameters, positioning it as a leader in advanced artificial intelligence capabilities. This massive scale enables superior performance in demanding fields like cybersecurity, coding, and academic reasoning, surpassing many competitors in handling complex, high-stakes tasks. 

Alongside it, the mid-tier Capabara model offers efficient versatility, bridging the gap between flagship power and practical deployment, with Anthropic emphasizing a phased rollout for ethical safety. Claude Mythos 5's model excels in precision and adaptability, making it ideal for cybersecurity threat detection and intricate software development where accuracy is paramount. In academic reasoning, it tackles multifaceted problems that require deep logical inference, outpacing previous models in benchmark tests. 

Anthropic's commitment to responsible AI ensures these tools minimize risks like misuse, aligning innovation with accountability in real-world applications. Complementing Anthropic's releases, GLM 5.1 emerges as a key open-source milestone, excelling in instruction-following and multi-step workflows for automation tasks. Though not the fastest, its reliability fosters community-driven innovation, providing accessible alternatives to proprietary systems for developers worldwide. This model democratizes AI progress, enabling collaborative advancements without the barriers of closed ecosystems. 

Google DeepMind's Gemini 3.1 advances real-time multimodal processing for voice and vision, enhancing latency and quality in sectors like healthcare and autonomous systems. OpenAI's revamped Codeex platform introduces plug-in ecosystems with pre-built workflows, streamlining coding and boosting developer productivity. Meanwhile, the ARC AGI 3 Benchmark sets a rigorous standard for agentic reasoning, combating overfitting and driving genuine AI intelligence gains. 

These developments, including Mistral AI’s expressive text-to-speech and Anthropic’s biology-focused Operon, signal AI's transformative potential across industries. From ethical trillion-parameter giants to open benchmarks, they promise efficiency in research, automation, and creative workflows. As AI evolves rapidly, balancing power with safety will shape a future of innovative problem-solving.

Quantum Computing: The Silent Killer of Digital Encryption

 

Quantum computing poses a greater long-term threat to digital security than AI, as it could shatter the encryption underpinning modern systems. While AI grabs headlines for ethical and societal risks, quantum advances quietly erode the foundations of data protection, urging immediate preparation. 

Today's encryption relies on algorithms secure against classical computers but vulnerable to quantum power, potentially cracking codes in minutes that would take supercomputers millennia. Adversaries already pursue "harvest now, decrypt later" strategies, stockpiling encrypted data for future breakthroughs, compromising long-shelf-life secrets like trade intel and health records. This urgency stems from quantum's theoretical ability to solve complex problems via algorithms like Shor's, demanding a shift to post-quantum cryptography today. 

Digital environments exacerbate the danger, blending legacy systems, cloud workloads, and AI agents into opaque networks ripe for lateral attacks. Breaches often exploit seams between SaaS, APIs, and multicloud setups, where visibility into east-west traffic remains limited despite regulations like EU's NIS2 mandating segmentation. AI accelerates risks by enabling autonomous actions across boundaries, turning compromised agents into rapid escalators of privileges. 

Traditional perimeters have vanished in cloud eras, rendering zero-trust policies insufficient without runtime enforcement at the workload level. Organizations need cloud-native security fabrics for continuous visibility and identity-based controls, curbing movement without infrastructure overhauls. Regulators like CISA push for provable zero-trust, highlighting how unmanaged connections form hidden attack paths. 

NIST's 2024 post-quantum standards mark progress, but migrating cryptography alone fortifies a flawed base amid current complexity breaches. True resilience embeds security into network fabrics, auditing paths and enforcing policies proactively against cumulative threats. As quantum converges with AI and cloud, only holistic defenses will safeguard digital trust before crises erupt.

China-based TA416 Targets European Businesses via Phishing Campaigns

Chinese state-sponsored attacks

A China-based hacker is targeting European government and diplomatic entities; the attack started in mid-2025, after a two-year period of no targeting in the region. The campaign has been linked to TA416; the activities coincide with DarkPeony, Red Lich, RedDelta, SmugX, Vertigo Panda, and UNC6384.

According to Proofpoint, “This TA416 activity included multiple waves of web bug and malware delivery campaigns against diplomatic missions to the European Union and NATO across a range of European countries. Throughout this period, TA416 regularly altered its infection chain, including abusing Cloudflare Turnstile challenge pages, abusing OAuth redirects, and using C# project files, as well as frequently updating its custom PlugX payload."

Multiple attack campaigns

Additionally, TA416 organized multiple campaigns against the government and diplomatic organizations in the Middle East after the US-Iran conflict in February 2026. The attack aimed to gather regional intelligence regarding the conflict.

TA416 also has a history of technical overlaps with a different group, Mustang Panda (UNK_SteadySplit, CerenaKeeper, and Red Ishtar). The two gangs are listed as Hive0154, Twill Typhoon, Earth Preta, Temp.HEX, Stately Taurus, and HoneyMyte. 

TA416’s attacks use PlugX variants. The Mustang Panda group continually installed tools like COOLCLIENT, TONESHELL, and PUBLOAD. One common thing is using DLL side-loading to install malware.

Attack tactic

TA416’s latest campaigns against European entities are pushing a mix of web bug and malware deployment operations, while threat actors use freemail sender accounts to do spying and install the PlugX backdoor through harmful archives via Google Drive, Microsoft Azure Blob Storage, and exploited SharePoint incidents. The PlugX malware campaigns were recently found by Arctic Wolf and StrikeReady in October 2025. 

According to Proofpoint, “A web bug (or tracking pixel) is a tiny invisible object embedded in an email that triggers an HTTP request to a remote server when opened, revealing the recipient's IP address, user agent, and time of access, allowing the threat actor to assess whether the email was opened by the intended target.”

The TA416 attacks in December last year leveraged third-party Microsoft Entra ID cloud apps to start redirecting to the download of harmful archives. Phishing emails in this campaign link to Microsoft’s authentic OAuth authorization. Once opened, resends the user to the hacker-controlled domain and installs PlugX.

According to experts, "When the MSBuild executable is run, it searches the current directory for a project file and automatically builds it."

Attackers Exploit Critical Flaw to Breach 766 Next.js Hosts and Steal Data


Credential-stealing operation

A massive credential-harvesting campaign was found abusing the React2Shell flaw as an initial infection vector to steal database credentials, shell command history, Amazon Web Services (AWS) secrets, GitHub, Stripe API keys. 

Cisco Talos has linked the campaign to a threat cluster tracked as UAT-10608. At least 766 hosts around multiple geographic regions and cloud providers have been exploited as part of the operation. 

About the attack vector

According to experts, “Post-compromise, UAT-10608 leverages automated scripts for extracting and exfiltrating credentials from a variety of applications, which are then posted to its command-and-control (C2). The C2 hosts a web-based graphical user interface (GUI) titled 'NEXUS Listener' that can be used to view stolen information and gain analytical insights using precompiled statistics on credentials harvested and hosts compromised.”

Who are the victims?

The campaign targets Next.js instances that are vulnerable to CVE-2025-55182 (CVSS score: 10.0), a severe flaw in React Server Components and Next.js App Router that could enable remote code execution for access, and then deploy the NEXUS Listener collection framework.

This is achieved by a dropper that continues to play a multi-phase harvesting script that stores various details from the victim system. 

SSH private keys and authorized_keys

JSON-parsed keys and authorized_keys

Kubernetes service account tokens

Environment variables

API keys

Docker container configurations 

Running processes

IAM role-associated temporary credentials

Attack motive

The victims and the indiscriminate targeting pattern are consistent with automated scanning. The key thing in the framework is an application (password-protected) that makes all stolen data public to the user through a geographical user interface that has search functions to browse through the information. The present Nexus Listener version is V3, meaning the tool has gone through significant changes.

Talos managed to get data from an unknown NEXUS Listener incident. It had API keys linked with Stripe, AI platforms such as Anthropic, OpenAI, and NVIDIA NIM, communication services such as Brevo and SendGrid, webhook secrets, Telegram bot tokens, GitLab, and GitHub tokens, app secrets, and database connection strings. 

Why Email Aliases Are Important for Every User


Email spam was once annoying in the digital world. Recently, email providers have improved overflowing inboxes, which were sometimes confused with distractions and unwanted mail, such as hyperbolic promotions and efforts to steal user data. 

But the problem has not disappeared completely, as users still face problems sometimes. To address the issue, user can use email aliases. 

About email alias 

Email alias is an alternative email address that allows you to get mails without sharing your address. The alias reroutes all incoming mails to your primary account.

Types of email aliases 

Plus addressing: For organizing mail efficiently, you are a + symbol and a category, you can also add rules to your mail and filter them by source. 

Provider aliases: Mainly used for organizations to have particular emails for sections, while all mails go to the same inbox. 

Masked/forwarding aliases: They are aimed at privacy. Users don't give their real email, instead, a random mail is generated, while the email is sent to your real inbox. This feature is available with services like Proton Mail. 

How it protects our privacy 

Email aliases are helpful for organizing inbox, and can be effective for contacting business. But the main benefit is protecting your privacy. 

There are several strategies to accomplish this, but the primary one is to minimize the amount of time your email is displayed online. Your aliases can be removed at any moment, but they will still be visible and used. The more aliases you use, the more difficult it is to identify your real core email address. 

Because it keeps your address hidden from spammers, marketers, and phishing efforts, you will have more privacy. It is also simpler to determine who has exploited your data. 

Giving email aliases in specific circumstances makes it simpler to find instances when they have been abused. Instead of having to deal with a ton of spam, you can remove an alias as soon as you discover someone is abusing it and start over.

Aliases can be helpful for privacy, but they are not a foolproof way to be safe online. They do not automatically encrypt emails, nor do they cease tracking cookies.

The case of Apple

Court filings revealed that Apple Hide My Email, a function intended to protect genuine email addresses, does not keep users anonymous from law enforcement, raising new concerns about privacy.

With the use of this feature, which is accessible to iCloud+ subscribers, users can create arbitrary email aliases so that websites and applications never see their primary address. Apple claims it doesn't read messages; they are just forwarded. However, recent US cases show a clear limit: Apple was able to connect those anonymous aliases to identifiable accounts in response to legitimate court demands

Hackers Exploit OpenClaw Bug to Control AI Agent


Cybersecurity experts have discovered a high-severity flaw named “ClawJacked” in the famous AI agent OpenClaw that allowed a malicious site bruteforce access silently to a locally running instance and take control. 

Oasis Security found the issue and informed OpenClaw, a fix was then released in version 2026.2.26 on 26th February. 

About OpenClaw

OpenClaw is a self-hosted AI tool that became famous recently for allowing AI agents to autonomously execute commands, send texts, and handle tasks across multiple platforms. Oasis security said that the flaw is caused by the OpenClaw gateway service linking with the localhost and revealing a WebSocket interface. 

Attack tactic 

As cross-origin browser policies do not stop WebSocket connections to a localhost, a compromised website opened by an OpenClaw user can use Javascript to secretly open a connection to the local gateway and try verification without raising any alarms. 

To stop attacks, OpenClaw includes rate limiting. But the loopback address (127.0.0.1) is excused by default. Therefore, local CLI sessions are not accidentally locked out. 

OpenClaw brute-force to escape security 

Experts discovered that they could brute-force the OpenClaw management password at hundreds of attempts per second without any failed attempts being logged. When the correct password is guessed, the hacker can silently register as a verified device, because the gateway autonomously allows device pairings from localhost without needing user info. 

“In our lab testing, we achieved a sustained rate of hundreds of password guesses per second from browser JavaScript alone At that speed, a list of common passwords is exhausted in under a second, and a large dictionary would take only minutes. A human-chosen password doesn't stand a chance,” Oasis said. 

The attacker can now directly interact with the AI platform by identifying connected nodes, stealing credentials, dumping credentials, and reading application logs with an authenticated session and admin access. 

Attacker privileges

According to Oasis, this might enable an attacker to give the agent instructions to perform arbitrary shell commands on paired nodes, exfiltrate files from linked devices, or scan chat history for important information. This would essentially result in a complete workstation compromise that is initiated from a browser tab. 

Oasis provided an example of this attack, demonstrating how the OpenClaw vulnerability could be exploited to steal confidential information. The problem was resolved within a day of Oasis reporting it to OpenClaw, along with technical information and proof-of-concept code.

Experts Warn About AI-assisted Malwares Used For Extortion


AI-based Slopoly malware

Cybersecurity experts have disclosed info about a suspected AI-based malware named “Slopoly” used by threat actor Hive0163 for financial motives. 

IBM X-Force researcher Golo Mühr said, “Although still relatively unspectacular, AI-generated malware such as Slopoly shows how easily threat actors can weaponize AI to develop new malware frameworks in a fraction of the time it used to take,” according to the Hacker News.

Hive0163 malware campaign 

Hive0163's attacks are motivated by extortion via large-scale data theft and ransomware. The gang is linked with various malicious tools like Interlock RAT, NodeSnake, Interlock ransomware, and Junk fiction loader. 

In a ransomware incident found in early 2026, the gang was found installing Slopoly during the post-exploit phase to build access to gain persistent access to the compromised server. 

Slopoly’s detection can be tracked back to PowerShell script that may be installed in the “C:\ProgramData\Microsoft\Windows\Runtime” folder via a builder. Persistence is made via a scheduled task called “Runtime Broker”. 

Experts believe that that malware was made with an LLM as it contains extensive comments, accurately named variables, error handling, and logging. 

There are signs that the malware was developed with the help of an as-yet-undetermined large language model (LLM). This includes the presence of extensive comments, logging, error handling, and accurately named variables. 

The comments also describe the script as a "Polymorphic C2 Persistence Client," indicating that it's part of a command-and-control (C2) framework. 

According to Mühr, “The script does not possess any advanced techniques and can hardly be considered polymorphic, since it's unable to modify its own code during execution. The builder may, however, generate new clients with different randomized configuration values and function names, which is standard practice among malware builders.”

The PowerShell script works as a backdoor comprising system details to a C2 server. There has been a rise in AI-assisted malware in recent times. Slopoly, PromptSpy, and VoidLink show how hackers are using the tool to speed up malware creation and expand their operations. 

IBM X-Force says the “introduction of AI-generated malware does not pose a new or sophisticated threat from a technical standpoint. It disproportionately enables threat actors by reducing the time an operator needs to develop and execute an attack.”

Perplexity's Comet AI Browser Tricked Into Phishing Scam Within Four Minutes


Agentic browser at risk

Agentic web browsers that use AI tools to autonomously do tasks across various websites for a user could be trained and fooled into phishing attacks. Hackers exploit the AI browsers’ tendency to assert their actions and deploy them against the same model to remove security checks. 

According to security expert Shaked Chen, “The AI now operates in real time, inside messy and dynamic pages, while continuously requesting information, making decisions, and narrating its actions along the way. Well, 'narrating' is quite an understatement - It blabbers, and way too much!,” the Hacker News reported. Agentic Blabbering is an AI browser that displays what it sees, thinks, and plans to do next, and what it deems safe or a threat. 

Tricking the browsers

By hacking the traffic between the AI services on the vendor’s servers and putting it as input to a Generative Adversarial Network (GAN), it made Perplexity’s Comet AI browser fall prey to a phishing attack within four minutes. 

The research is based on established tactics such as Scamlexity and VibeScamming, which revealed that vibe-coding platforms and AI browsers can be coerced into generating scam pages and performing malicious tasks via prompt injection. 

Attack tactic

There is a change in the attack surface as a result of the AI agent managing the tasks without frequent human oversight, meaning that a scammer no longer has to trick a user. Instead, it seeks to deceive the AI model itself. 

Chen said, “If you can observe what the agent flags as suspicious, hesitates on, and more importantly, what it thinks and blabbers about the page, you can use that as a training signal.” Chen added that the “scam evolves until the AI Browser reliably walks into the trap another AI set for it."

End goal?

The aim is to make a “scamming machine” that improves and recreates a phishing page until the agentic browser accepts the commands and carries out the hacker’s command, like putting the victim’s passwords on a malicious web page built for refund scams. 

Guardio is concerned about the development, saying that, “This reveals the unfortunate near future we are facing: scams will not just be launched and adjusted in the wild, they will be trained offline, against the exact model millions rely on, until they work flawlessly on first contact.”

Microsoft Report Reveals Hackers Exploit AI In Cyberattacks


According to Microsoft, hackers are increasingly using AI in their work to increase attacks, scale cyberattack activity, and limit technical barriers throughout all aspects of a cyberattack. 

Microsoft’s new Threat Intelligence report reveals that threat actors are using genAI tools for various tasks, such as phishing, surveillance, malware building, infrastructure development, and post-hack activity. 

About the report

In various incidents, AI helps to create phishing emails, summarize stolen information, debug malware, translate content, and configure infrastructure. “Microsoft Threat Intelligence has observed that most malicious use of AI today centers on using language models for producing text, code, or media. Threat actors use generative AI to draft phishing lures, translate content, summarize stolen data, generate or debug malware, and scaffold scripts or infrastructure,” the report said. 

"For these uses, AI functions as a force multiplier that reduces technical friction and accelerates execution, while human operators retain control over objectives, targeting, and deployment decisions,’ warns Microsoft.

AI in cyberattacks 

Microsoft found different hacking gangs using AI in their cyberattacks, such as North Korean hackers known as Coral Sleet (Storm-1877) and Jasper Sleet (Storm-0287), who use the AI in their remote IT worker scams. 

The AI helps to make realistic identities, communications, and resumes to get a job in Western companies and have access once hired. Microsoft also explained how AI is being exploited in malware development and infrastructure creation. Threat actors are using AI coding tools to create and refine malicious code, fix errors, and send malware components to different programming languages. 

The impact

A few malware experiments showed traces of AI-enabled malware that create scripts or configure behaviour at runtime. Microsoft found Coral Sleet using AI to make fake company sites, manage infrastructure, and troubleshoot their installations. 

When security analysts try to stop the use of AI in these attacks, Microsoft says hackers are using jailbreaking techniques to trick AI into creating malicious code or content. 

Besides generative AI use, the report revealed that hackers experiment with agentic AI to do tasks autonomously. The AI is mainly used for decision-making currently. As IT worker campaigns depend on the exploitation of authentic access, experts have advised organizations to address these attacks as insider risks. 

Pakistan-Linked Hackers Use AI to Flood Targets With Malware in India Campaign

 

A Pakistan-aligned hacking group known as Transparent Tribe is using artificial intelligence coding tools to produce large numbers of malware implants in a campaign primarily targeting India, according to new research from cybersecurity firm Bitdefender. 

Security researchers say the activity reflects a shift in how some threat actors are developing malicious software. Instead of focusing on highly advanced malware, the group appears to be generating a large volume of implants written in multiple programming languages and distributed across different infrastructure. 

Researchers said the operation is designed to create a “high-volume, mediocre mass of implants” using less common languages such as Nim, Zig and Crystal while relying on legitimate platforms including Slack, Discord, Supabase and Google Sheets to help evade detection. 

“Rather than a breakthrough in technical sophistication, we are seeing a transition toward AI-assisted malware industrialization that allows the actor to flood target environments with disposable, polyglot binaries,” Bitdefender researchers said in a technical analysis of the campaign. 

The strategy involves creating numerous variations of malware rather than relying on a single sophisticated tool. Bitdefender described the approach as a form of “Distributed Denial of Detection,” where attackers overwhelm security systems with large volumes of different binaries that use various communication protocols and programming languages. 

Researchers say large language models have lowered the barrier for threat actors by allowing them to generate working code in unfamiliar languages or convert existing code into different formats. 

That capability makes it easier to produce large numbers of malware samples with minimal expertise. 

The campaign has primarily targeted Indian government organizations and diplomatic missions abroad. 

Investigators said the attackers also showed interest in Afghan government entities and some private businesses. According to the analysis, the attackers use LinkedIn to identify potential targets before launching phishing campaigns. 

Victims may receive emails containing ZIP archives or ISO images that include malicious Windows shortcut files. In other cases, victims are sent PDF documents that include a “Download Document” button directing them to attacker-controlled websites. 

These websites trigger the download of malicious archives. Once opened, the shortcut file launches PowerShell scripts that run in memory. 

The scripts download a backdoor and enable additional actions inside the compromised system. Researchers said attackers sometimes deploy well-known adversary simulation tools such as Cobalt Strike and Havoc to maintain access. 

Bitdefender identified a wide range of custom tools used in the campaign. These include Warcode, a shellcode loader written in Crystal designed to load a Havoc agent into memory, and NimShellcodeLoader, which deploys a Cobalt Strike beacon. 

Another tool called CreepDropper installs additional malware, including SHEETCREEP, a Go-based information stealer that communicates with command servers through Microsoft Graph API, and MAILCREEP, a backdoor written in C# that uses Google Sheets for command and control. 

Researchers also identified SupaServ, a Rust-based backdoor that communicates through the Supabase platform with Firebase acting as a fallback channel. The code includes Unicode emojis, which researchers said suggests it may have been generated with the help of AI. 

Additional malware used in the campaign includes CrystalShell and ZigShell, backdoors written in Crystal and Zig that can run commands, collect host information and communicate with command servers through platforms such as Slack or Discord. 

Other tools observed in the operation include LuminousStealer, a Rust-based information stealer that exfiltrates files to Firebase and Google Drive, and LuminousCookies, which extracts cookies, passwords and payment information from Chromium-based browsers. 

Bitdefender said the attackers are also using utilities such as BackupSpy to monitor file systems for sensitive data and ZigLoader to decrypt and execute shellcode directly in memory. Despite the large number of tools involved, researchers say the overall quality of the malware is often inconsistent. 

“The transition of APT36 toward vibeware represents a technical regression,” Bitdefender said, referring to the Transparent Tribe group. “While AI-assisted development increases sample volume, the resulting tools are often unstable and riddled with logical errors.” 

Still, the researchers warned that the broader trend could make cyberattacks easier to scale. By combining AI-generated code with trusted cloud services, attackers can hide malicious activity within normal network traffic. 

“We are seeing a convergence of two trends that have been developing for some time the adoption of exotic programming languages and the abuse of trusted services to hide in legitimate traffic,” the researchers said. 

They added that this combination allows even relatively simple malware to succeed by overwhelming traditional detection systems with sheer volume.

BadPaw Malware Targets Uranian Systems


A newly found malware campaign exploiting a Ukrainian email service to build trust has been found by cybersecurity experts. 

About the campaign 

The operation starts with an email sent from an address hosted on ukr[.]net, a famous Ukrainian provider earlier exploited by the Russia based hacking group APT28 in older campaigns.

BadPaw malware 

Experts at ClearSky have termed the malware “BadPaw.” The campaign starts when a receiver opens a link pretending to host a ZIP archive. Instead of starting a direct download, the target is redirected to a domain that installs a tracking pixel, letting the threat actor to verify engagement. Another redirect sends the ZIP file. 

The archive pretends to consist of a standard HTML file, but ClearSky experts revealed that it is actually an HTA app in hiding. When deployed, the file shows a fake document related to a Ukrainian government border crossing request, where malicious processes are launched in the background. 

Attack tactic 

Before starting, the malware verifies a Windows Registry key to set the system's installation date. If the OS is older than ten days, deployment stops, an attack tactic that escapes sandbox traps used by threat analysts. 

If all the conditions are fulfilled, the malware looks for the original ZIP file and retrieves extra components. The malware builds its persistence via a scheduled task that runs a VBS script which deploys steganography to steal hidden executable code from an image file. 

Only nine antivirus engines could spot the payload at the time of study. 

Multi-Layered Attack

After activation within a particular parameter, BadPaw links to a C2 server. 

The following process happens:

Getting a numeric result from the /getcalendar endpoint. 

Gaining access to a landing page called "Telemetry UP!” through /eventmanager. 

Downloading the ASCII-encoded payload information installed within HTML. 

In the end, the decrypted data launches a backdoor called "MeowMeowProgram[.]exe," which offers file system control and remote shell access. 

Four protective layers are included in the MeowMeow backdoor: runtime parameter constraints, obfuscation of the.NET Reactor, sandbox detection, and monitoring for forensic tools like Wireshark, Procmon, Ollydbg, and Fiddler.

Incorrect execution results in a benign graphical user interface with a picture of a cat. The "MeowMeow" button only displays a harmless message when it is clicked.

Microsoft AI Chief: 18 Months to Automate White-Collar Jobs

 

Mustafa Suleyman, CEO of Microsoft AI, has issued a stark warning about the future of white-collar work. In a recent Financial Times interview, he predicted that AI will achieve human-level performance on most professional tasks within 18 months, automating jobs involving computer-based work like accounting, legal analysis, marketing, and project management. This timeline echoes concerns from AI leaders, comparing the shift to the pre-pandemic moment in early 2020 but far more disruptive. Suleyman attributes this to exponential growth in computational power, enabling AI to outperform humans in coding and beyond.

Suleyman's forecast revives 2025 predictions from tech executives. Anthropic's Dario Amodei warned AI could eliminate half of entry-level white-collar jobs, while Ford's Jim Farley foresaw a 50% cut in U.S. white-collar roles. Elon Musk recently suggested artificial general intelligence—AI surpassing human intelligence—could arrive this year. These alarms contrast with CEO silence earlier, likened by The Atlantic to ignoring a shark fin in the water. The drumbeat of disruption is growing louder amid rapid AI advances.

Current AI impact on offices remains limited despite hype. A 2025 Thomson Reuters report shows lawyers and accountants using AI for tasks like document review, yielding only marginal productivity gains without mass displacement. Some studies even indicate setbacks: a METR analysis found AI slowed software developers by 20%. Economic benefits are mostly in Big Tech, with profit margins up over 20% in Q4 2025, while broader indices like the Bloomberg 500 show no change.

Early job losses signal brewing changes. Challenger, Gray & Christmas reported 55,000 AI-related cuts in 2025, including Microsoft's 15,000 layoffs as CEO Satya Nadella pushed to "reimagine" for the AI era. Markets reacted sharply last week with a "SaaSpocalypse" selloff in software stocks after Anthropic and OpenAI launched agentic AI systems mimicking SaaS functions. Investors doubt AI will boost non-tech earnings, per Wall Street consensus.

Suleyman envisions customizable AI transforming every organization. He predicts users will design models like podcasts or blogs, tailored for any job, driving his push for Microsoft "superintelligence" and independent foundation models. As the "most important technology of our time," Suleyman aims to reduce reliance on partners like OpenAI. This could redefine the American Dream, once fueled by MBAs and law degrees, urging urgent preparation for AI's white-collar reckoning.

ClawJack Allows Malicous Sites to Control Local OpenClaw AI Agents


Peter Steinberger created OpenClaw, an AI tool that can be a personal assistant for developers. It immediately became famous and got 100,000 GitHub stars in a week. Even OpenAI founder Sam Altman was impressed, bringing Steinberger on board and calling him a “genius.” However, experts from Oasis Security said that the viral success had hidden threats.

OpenClaw addressed a high-severity security threat that could have been exploited to allow a malicious site to link with a locally running AI agent and take control. According to the Oasis Security report, “Our vulnerability lives in the core system itself – no plugins, no marketplace, no user-installed extensions – just the bare OpenClaw gateway, running exactly as documented.” 

ClawJack scare

The threat was codenamed ClawJacked by the experts. CVE-2026-25253 could have become a severe vulnerability chain that would have allowed any site to hack a person’s AI agent. The vulnerability existed in the main gateway of the software. As OpenClaw is built to trust connections from the user’s system, it could have allowed hackers easy access. 

Assuming the threat model

On a developer's laptop, OpenClaw is installed and operational. Its gateway, a local WebSocket server, is password-protected and connected to localhost. When the developer visits a website that is controlled by the attacker via social engineering or another method, the attack begins. According to the Oasis Report, “Any website you visit can open one to your localhost. Unlike regular HTTP requests, the browser doesn't block these cross-origin connections. So while you're browsing any website, JavaScript running on that page can silently open a connection to your local OpenClaw gateway. The user sees nothing.”

Stealthy Attack Tactic 

The research revealed a smart trick using WebSockets. Generally, your browser is active at preventing different websites from meddling with your local files. But WebSockets are an exception as they are built to stay “always-on” to send data simultaneously. 

The OpenClaw gateway assumed that the connection must be safe because it comes from the user's own computer (localhost). But it is dangerous because if a developer running OpenClaw mistakenly visits a malicious website, a hidden script installed in the webpage can connect via WebSocket and interact directly with the AI tool in the background. The user will be clueless.

Hollywood Studios Target AI Video Tool

 

Hollywood studios are intensifying efforts to curb an "ultra-realistic" AI video generator that produces lifelike clips from simple text prompts. The tool, capable of creating scenes like a fist fight between Tom Cruise and Brad Pitt, has sparked alarm in the entertainment industry over potential job losses and intellectual property misuse. Major players are pushing for regulatory action to protect actors and creators from deepfake disruptions.

The controversy erupted after a viral AI-generated video showcased the tool's prowess, depicting high-profile stars in a convincing brawl that stunned viewers worldwide. Creators behind the technology hail it as innovative, but industry insiders fear it could flood markets with unauthorized content, undermining traditional filmmaking. Hollywood executives have rallied, warning that unchecked AI could "transform or destroy" careers they've built over decades.

Prominent voices in the field have voiced deep concerns. One affected professional noted, "So many people I care about are facing the potential loss of careers they cherish. I myself am at risk." He expressed astonishment at the video's professionalism, shifting from initial nonchalance to genuine apprehension about the industry's future. This reflects broader anxieties as AI blurs lines between real and synthetic media.

Studios are now collaborating on legal strategies, targeting the tool's developers and platforms hosting such content. Discussions include lawsuits for copyright infringement and calls for stricter AI guidelines from governments. While the technology promises creative efficiencies, opponents argue it prioritizes speed over ethical safeguards, potentially devaluing human artistry. Recent viral spreads on social media have amplified the urgency, with calls to remove deceptive videos. 

As AI evolves rapidly, Hollywood's standoff highlights a pivotal clash between innovation and preservation. Balancing advancement with protection will define the sector's resilience amid digital transformation. Stakeholders urge immediate intervention to prevent irreversible damage, positioning this as a landmark battle in the AI era.

GitHub Fixes AI Flaw That Could Have Exposed Private Repository Tokens

 



A now-patched security weakness in GitHub Codespaces revealed how artificial intelligence tools embedded in developer environments can be manipulated to expose sensitive credentials. The issue, discovered by cloud security firm Orca Security and named RoguePilot, involved GitHub Copilot, the AI coding assistant integrated into Codespaces. The flaw was responsibly disclosed and later fixed by Microsoft, which owns GitHub.

According to researchers, the attack could begin with a malicious GitHub issue. An attacker could insert concealed instructions within the issue description, specifically crafted to influence Copilot rather than a human reader. When a developer launched a Codespace directly from that issue, Copilot automatically processed the issue text as contextual input. This created an opportunity for hidden instructions to silently control the AI agent operating within the development environment.

Security experts classify this method as indirect or passive prompt injection. In such attacks, harmful instructions are embedded inside content that a large language model later interprets. Because the model treats that content as legitimate context, it may generate unintended responses or perform actions aligned with the attacker’s objective.

Researchers also described RoguePilot as a form of AI-mediated supply chain attack. Instead of exploiting external software libraries, the attacker leverages the AI system integrated into the workflow. GitHub allows Codespaces to be launched from repositories, commits, pull requests, templates, and issues. The exposure occurred specifically when a Codespace was opened from an issue, since Copilot automatically received the issue description as part of its prompt.

The manipulation could be hidden using HTML comment tags, which are invisible in rendered content but still readable by automated systems. Within those hidden segments, an attacker could instruct Copilot to extract the repository’s GITHUB_TOKEN, a credential that provides elevated permissions. In one demonstrated scenario, Copilot could be influenced to check out a specially prepared pull request containing a symbolic link to an internal file. Through techniques such as referencing a remote JSON schema, the AI assistant could read that internal file and transmit the privileged token to an external server.

The RoguePilot disclosure comes amid broader concerns about AI model alignment. Separate research from Microsoft examined a reinforcement learning method called Group Relative Policy Optimization, or GRPO. While typically used to fine-tune large language models after deployment, researchers found it could also weaken safety safeguards, a process they labeled GRP-Obliteration. Notably, training on even a single mildly problematic prompt was enough to make multiple language models more permissive across harmful categories they had never explicitly encountered.

Additional findings stress upon side-channel risks tied to speculative decoding, an optimization technique that allows models to generate multiple candidate tokens simultaneously to improve speed. Researchers found this process could potentially reveal conversation topics or identify user queries with significant accuracy.

Further concerns were raised by AI security firm HiddenLayer, which documented a technique called ShadowLogic. When applied to agent-based systems, the concept evolves into Agentic ShadowLogic. This approach involves embedding backdoors at the computational graph level of a model, enabling silent modification of tool calls. An attacker could intercept and reroute requests through infrastructure under their control, monitor internal endpoints, and log data flows without disrupting normal user experience.

Meanwhile, Neural Trust demonstrated an image-based jailbreak method known as Semantic Chaining. This attack exploits limited reasoning depth in image-generation models by guiding them through a sequence of individually harmless edits that gradually produce restricted or offensive content. Because each step appears safe in isolation, safety systems may fail to detect the evolving harmful intent.

Researchers have also introduced the term Promptware to describe a new category of malicious inputs designed to function like malware. Instead of exploiting traditional code vulnerabilities, promptware manipulates large language models during inference to carry out stages of a cyberattack lifecycle, including reconnaissance, privilege escalation, persistence, command-and-control communication, lateral movement, and data exfiltration.

Collectively, these findings demonstrate that AI systems embedded in development platforms are becoming a new attack surface. As organizations increasingly rely on intelligent automation, safeguarding the interaction between user input, AI interpretation, and system permissions is critical to preventing misuse within trusted workflows.

India Sees Rising Push for Limits on Children’s Social Media Access

 

A growing conversation around restricting social media access for children under 16 is gaining traction across India, with several state leaders reviewing regulatory models adopted overseas — particularly in Australia.

Ministers from at least two southern states have indicated that they are assessing whether prohibiting minors from using social media could effectively shield children from excessive online exposure.

Adding weight to the debate, the latest Economic Survey — an annual report prepared by a team led by India’s chief economic adviser suggested that the central government explore age-based controls on children’s social media usage. While the survey does not mandate policy action, its recommendations often influence national discussions.

Australia’s Precedent Sparks Global Debate

Australia recently became the first nation to prohibit most social media platforms for users under 16. The law requires companies to verify users’ ages and deactivate accounts belonging to underage individuals.

The decision drew criticism from tech platforms. As Australia’s internet regulator told the BBC last month, companies responded to the framework "kicking and screaming - very very reluctantly".

Meanwhile, lawmakers in France have approved a bill in the lower house seeking to block social media access for children under 15; the proposal now awaits Senate approval. The United Kingdom is also evaluating similar measures.

In India, LSK Devarayalu of the Telugu Desam Party — which governs Andhra Pradesh and supports Prime Minister Narendra Modi’s federal coalition — introduced a private member’s bill proposing a ban on social media use for children under 16. Although such bills rarely become law, they can influence legislative debate.

Separately, the Andhra Pradesh government has formed a ministerial group to examine international regulatory models. It has also invited major technology firms, including Meta, X, Google and ShareChat, for consultations. The companies have yet to respond publicly.

State IT Minister Nara Lokesh recently wrote on X that children were "slipping into relentless usage" of social media, affecting their attention spans and academic performance.

"We will ensure social media becomes a safer space and reduce its damaging impact - especially for women and children," he added.

In Goa, Tourism and IT Minister Rohan Khaunte confirmed that authorities are studying whether such restrictions could be introduced, promising further details soon.

Similarly, Priyank Kharge, IT Minister of Karnataka — home to Bengaluru, often dubbed India’s Silicon Valley — informed the state assembly that discussions were underway on responsible artificial intelligence and social media use. He referenced a “digital detox” initiative launched in partnership with Meta, involving approximately 300,000 students and 100,000 teachers. However, he did not clarify whether legislative action was being considered.

Enforcement and Legal Hurdles

Experts caution that implementing such bans in India would be legally and technically complex.

Digital rights activist Nikhil Pahwa pointed out that enforcing state-level prohibitions could create jurisdictional conflicts. "While companies can infer users' locations through IP addresses, such systems are often inaccurate. Where state boundaries are very close, you can end up creating conflicts if one state bans social media use and another does not."

He also underscored the broader issue of age verification. "Age verification is not simple. To adhere to such bans, companies would effectively have to verify every individual using every service on the internet," Pahwa told the BBC.

Even in Australia, some minors reportedly bypass restrictions by entering false birth dates to create accounts.

According to Prateek Waghre, head of programmes at the Tech Global Institute, successful enforcement would hinge on platform cooperation.

"In theory, location can be inferred through IP addresses by internet service providers or technology companies, but whether the companies operating such apps would comply, or challenge such directions in court, is not yet clear," he says.

Broader Social Concerns

While lawmakers acknowledge the risks of excessive social media exposure, some analysts argue that a blanket ban may be too narrow a solution.

A recent survey of 1,277 Indian teenagers by a non-profit organisation found that many accounts are created with assistance from family members or friends and are often not tied to personal email addresses. This complicates assumptions of individual ownership central to age-verification systems.

Parents remain divided. Delhi resident Jitender Yadav, father of two young daughters, believes deeper issues are at play.

"Parents themselves fail to give enough time to children and hand them phones to keep them engaged - the problem starts there," he says.

"I am not sure if a social media ban will help. Because unless parents give enough time to their children or learn to keep them creatively engaged, they will always find ways to bypass such bans," he says.

As the discussion unfolds, India faces a complex balancing act — safeguarding children online while navigating legal, technological and social realities.

Palo Alto Pulls Back from Linking China to Spying Campaign


Palo Alto Network pulls back

According to two people familiar with the situation, Palo Alto Networks (PANW.O), which opens a new tab, decided against linking China to a global cyberespionage effort that the company revealed last week out of fear that Beijing would retaliate against the cybersecurity business or its clients. 

The reason 

According to the sources, after Reuters first reported last month that Palo Alto was one of roughly 15 U.S. and Israeli cybersecurity companies whose software had been banned by Chinese authorities on national security grounds, Palo Alto's findings that China was linked to the widespread hacking spree were scaled back.

According to the two individuals, a draft report from Palo Alto's Unit 42, the company's threat intelligence division, said that the prolific hackers, known as "TGR-STA-1030," were associated with Beijing. 

About the report 

The report was released on Thursday of last week. Instead, a more vague description of the hacking group as a "state-aligned group that operates out of Asia" was included in the final report. Advanced attacks are notoriously hard to attribute, and cybersecurity specialists frequently argue about who should be held accountable for digital incursions. Palo Alto executives ordered the adjustment because they were worried about the software prohibition and suspected that it would lead to retaliation from Chinese authorities against the company's employees in China or its customers abroad.

China's reply 

The Chinese Embassy in Washington stated that it is against "any kind of cyberattack." Assigning hacks was described as "a complex technical issue" and it was anticipated that "relevant parties will adopt a professional and responsible attitude, basing their characterization of cyber incidents on sufficient evidence, rather than unfounded speculation and accusations'." 

In early 2025, Palo Alto discovered the hacker collective TGR-STA-1030, the report says, opening a new tab. Palo Alto called the extensive operation "The Shadow Campaigns." It claimed that the spies successfully infiltrated government and vital infrastructure institutions in 37 countries and carried out surveillance against almost every nation on the planet.

After reviewing Palo Alto's study, outside experts claimed to have observed comparable activity that they linked to Chinese state-sponsored espionage activities.





Exposed Training Opens the Gap for Crypto Mining in Cloud Enviornments


Purposely flawed training apps are largely used for security education, product demonstrations, and internal testing. Tools like bWAPP, OWASP Juice Shop, and DVWA are built to be unsafe by default, making them useful to learn how common attack tactics work in controlled scenarios. 

The problem is not the applications but how they are used in real-world cloud environments. 

Penetra Labs studied how training and demo apps are being deployed throughout cloud infrastructures and found a recurring pattern: apps made for isolated lab use were mostly found revealed to the public internet, operating within active cloud profiles, and linked to cloud agents with larger access than needed. 

Deployment Patterns analysis 

Pentera Labs found that these apps were often used with default settings, extra permissive cloud roles, and minimal isolation. The research found that alot of these compromised training environments were linked to active cloud agents and escalated roles, allowing attackers to infiltrate the vulnerable apps themselves and also tap into the customer’s larger cloud infrastructure. 

In the contexts, just one exposed training app can work as initial foothold. Once the threat actors are able to exploit linked cloud agents and escalated roles, they are accessible to the original host or application. But they can also interact with different resources in the same cloud environment, raising the scope and potential impact of the compromise. 

As part of the investigation, Pentera Labs verified nearly 2,000 live, exposed training application instances, with close to 60% hosted on customer-managed infrastructure running on AWS, Azure, or GCP.

Proof of active exploitation 

The investigation revealed that the exposed training environments weren't just improperly set up. Pentera Labs found unmistakable proof that attackers were actively taking advantage of this vulnerability in the wild. 

About 20% of cases in the larger dataset of training applications that were made public were discovered to have malicious actor-deployed artifacts, such as webshells, persistence mechanisms, and crypto-mining activity. These artifacts showed that exposed systems had already been compromised and were still being abused. 

The existence of persistence tools and active crypto-mining indicates that exposed training programs are already being widely exploited in addition to being discoverable.

Student Founders Establish Backed Program to Help Peers Build Startups

 



Two students affiliated with Stanford University have raised $2 million to expand an accelerator program designed for entrepreneurs who are still in college or who have recently graduated. The initiative, called Breakthrough Ventures, focuses on helping early-stage founders move from rough ideas to viable businesses by providing capital, guidance, and access to professional networks.

The program was created by Roman Scott, a recent graduate, and Itbaan Nafi, a current master’s student. Their work began with small-scale demo days held at Stanford in 2024, where student teams presented early concepts and received feedback. Interest from participants and observers revealed a clear gap. Many students had promising ideas but lacked practical support, legal guidance, and introductions to investors. The founders then formalized the effort into a structured accelerator and raised funding to scale it.

Breakthrough Ventures aims to address two common obstacles faced by student founders. First, early funding is difficult to access before a product or revenue exists. Second, students often do not have reliable access to mentors and industry networks. The program responds to both challenges through a combination of financial support and hands-on assistance.

Selected teams receive grant funding of up to $10,000 without giving up ownership in their companies. Participants also gain access to legal support and structured mentorship from experienced professionals. The program includes technical resources such as compute credits from technology partners, which can lower early development costs for startups building software or data-driven products. At the end of the program, founders who demonstrate progress may be considered for additional investment of up to $50,000.

The accelerator operates through a hybrid format. Founders participate in a mix of online sessions and in-person meetups, and the program concludes with a demo day at Stanford, where teams present their progress to potential investors and collaborators. This structure is intended to keep participation accessible while still offering in-person exposure to the startup ecosystem.

Over the next three years, the organizers plan to deploy the $2 million fund to support at least 100 student-led companies across areas such as artificial intelligence, healthcare, consumer products, sustainability, and deep technology. By targeting founders at an early stage, the program aims to reduce the friction between having an idea and building a credible company, while promoting responsible, well-supported innovation within the student community.