Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label AI. Show all posts

TP-Link Routers May Get Banned in US Due to Alleged Links With China


TP-Link routers may soon shut down in the US. There's a chance of potential ban as various federal agencies have backed the proposal. 

Alleged links with China

The news first came in December last year. According to the WSJ, officials at the Departments of Justice, Commerce, and Defense had launched investigations into the company due to national security threats from China. 

Currently, the proposal has gotten interagency approval. According to the Washington Post, "Commerce officials concluded TP-Link Systems products pose a risk because the US-based company's products handle sensitive American data and because the officials believe it remains subject to jurisdiction or influence by the Chinese government." 

But TP-Link's connections to the Chinese government are not confirmed. The company has denied of any ties with being a Chinese company. 

About TP-Link routers 

The company was founded in China in 1996. After the October 2024 investigation, the company split into two: TP-Link Systems and TP-Link Technologies. "TP-Link's unusual degree of vulnerabilities and required compliance with [Chinese] law are in and of themselves disconcerting. When combined with the [Chinese] government's common use of [home office] routers like TP-Link to perpetrate extensive cyberattacks in the United States, it becomes significantly alarming" the officials wrote in October 2024. 

The company dominated the US router market since the COVID pandemic. It rose from 20% of total router sales to 65% between 2019 and 2025. 

Why the investigation?

The US DoJ is investigating if TP-Link was involved in predatory pricing by artificially lowering its prices to kill the competition. 

The potential ban is due to an interagency review and is being handled by the Department of Commerce. Experts say that the ban may be lifted in future due to Trump administration's ongoing negotiations with China. 

Hackers Exploit AI Stack in Windows to Deploy Malware


The artificial intelligence (AI) stack built into Windows can act as a channel for malware transmission, a recent study has demonstrated.

Using AI in malware

Security researcher hxr1 discovered a far more conventional method of weaponizing rampant AI in a year when ingenious and sophisticated quick injection tactics have been proliferating. He detailed a living-off-the-land attack (LotL) that utilizes trusted files from the Open Neural Network Exchange (ONNX) to bypass security engines in a proof-of-concept (PoC) provided exclusively to Dark Reading.

Impact on Windows

Programs for cybersecurity are only as successful as their designers make them. Because these are known signs of suspicious activity, they may detect excessive amounts of data exfiltrating from a network or a foreign.exe file that launches. However, if malware appears on a system in a way they are unfamiliar with, they are unlikely to be aware of it.

That's the reason AI is so difficult. New software, procedures, and systems that incorporate AI capabilities create new, invisible channels for the spread of cyberattacks.

Why AI in malware is a problem

The Windows operating system has been gradually including features since 2018 that enable apps to carry out AI inference locally without requiring a connection to a cloud service. Inbuilt AI is used by Windows Hello, Photos, and Office programs to carry out object identification, facial recognition, and productivity tasks, respectively. They accomplish this by making a call to the Windows Machine Learning (ML) application programming interface (API), which loads ML models as ONNX files.

ONNX files are automatically trusted by Windows and security software. Why wouldn't they? Although malware can be found in EXEs, PDFs, and other formats, no threat actors in the wild have yet to show that they plan to or are capable of using neural networks as weapons. However, there are a lot of ways to make it feasible.

Attack tactic

Planting a malicious payload in the metadata of a neural network is a simple way to infect it. The compromise would be that this virus would remain in simple text, making it much simpler for a security tool to unintentionally detect it.

Piecemeal malware embedding among the model's named nodes, inputs, and outputs would be more challenging but more covert. Alternatively, an attacker may utilize sophisticated steganography to hide a payload inside the neural network's own weights.

As long as you have a loader close by that can call the necessary Windows APIs to unpack it, reassemble it in memory, and run it, all three approaches will function. Additionally, both approaches are very covert. Trying to reconstruct a fragmented payload from a neural network would be like trying to reconstruct a needle from bits of it spread through a haystack.

Chinese Hackers Attack Prominent U.S Organizations


Chinese cyber-espionage groups attacked U.S organizations with links to international agencies. This has now become a problem for the U.S, as state-actors from China keep attacking.  Attackers were trying to build a steady presence inside the target network.

Series of attacks against the U.S organizations 

Earlier this year, the breach was against a famous U.S non-profit working in advocacy, that demonstrated advanced techniques and shared tools among Chinese cyber criminal gangs like APT41, Space Pirates, and Kelp.

They struck again in April with various malicious prompts checking both internal network breach and internet connectivity, particularly targeting a system at 192.0.0.88. Various tactics and protocols were used, showing both determination and technical adaptability to get particular internal resources.

Attack tactics 

Following the connectivity tests, the hackers used tools like netstat for network surveillance and made an automatic task via the Windows command-line tools.

This task ran a genuine MSBuild.exe app that processed an outbound.xml file to deploy code into csc.exe and connected it to a C2 server. 

These steps hint towards automation (through scheduled tasks) and persistence via system-level privileges increasing the complexity of the compromise and potential damage.

Espionage methods 

The techniques and toolkit show traces of various Chinese espionage groups. The hackers weaponized genuine software elements. This is called DLL sideloading by abusing vetysafe.exe (a VipreAV component signed by Sunbelt Software, Inc.) to load a malicious payload called sbamres.dll.

This tactic was earlier found in campaigns lkmkedytl Earth Longzhi and Space Pirates, the former also known as APT41 subgroup.

Coincidentally, the same tactic was found in cases connected to Kelp, showing the intrusive tool-sharing tactics within Chinese APTs.

Tech Giants Pour Billions Into AI Race for Market Dominance

 

Tech giants are intensifying their investments in artificial intelligence, fueling an industry boom that has driven stock markets to unprecedented heights. Fresh earnings reports from Meta, Alphabet, and Microsoft underscore the immense sums being poured into AI infrastructure—from data centers to advanced chips—despite lingering doubts about the speed of returns.

Meta announced that its 2025 capital expenditures will range between $70 billion and $72 billion, slightly higher than its earlier forecast. The company also revealed plans for substantially larger spending growth in 2026 as it seeks to compete more aggressively with players like OpenAI.

During a call with analysts, CEO Mark Zuckerberg defended Meta’s aggressive investment strategy, emphasizing AI’s transformative potential in driving both new product development and enhancing its core advertising business. He described the firm’s infrastructure as operating in a “compute-starved” state and argued that accelerating spending was essential to unlocking future growth.

Alphabet, parent to Google and YouTube, also raised its annual capital spending outlook to between $91 billion and $93 billion—up from $85 billion earlier this year. This nearly doubles what the company spent in 2024 and highlights its determination to stay at the forefront of large-scale AI development.

Microsoft’s quarterly report similarly showcased its expanding investment efforts. The company disclosed $34.9 billion in capital expenditures through September 30, surpassing analyst expectations and climbing from $24 billion in the previous quarter. CEO Satya Nadella said Microsoft continues to ramp up AI spending in both infrastructure and talent to seize what he called a “massive opportunity.” He noted that Azure and the company’s broader portfolio of AI tools are already having tangible real-world effects.

Investor enthusiasm surrounding these bold AI commitments has helped lift the share prices of all three firms above the broader S&P 500 index. Still, Wall Street remains keenly interested in seeing whether these heavy capital outlays will translate into measurable profits.

Bank of America senior economist Aditya Bhave observed that robust consumer activity and AI-driven business investment have been the key pillars supporting U.S. economic resilience. As long as the latter remains strong, he said, it signals continued GDP growth. Despite an 83 percent profit drop for Meta due to a one-time tax charge, Microsoft and Alphabet reported profit increases of 12 percent and 33 percent, respectively.

Video Game Studios Exploit Legal Rights of Children


A study revealed that video game studios are openly ignoring legal systems and abusing the data information and privacy of the children who play these videogames.

Videogame developers discarding legal rights of children 

Researchers found that highly opaque frameworks of data collection in the intense lucrative video game market run by third-party companies and developers showed malicious intent. The major players freely discard children's rights to store personal data via game apps. Video game studios ask parents to accept privacy policies that are difficult to understand and also contradictory at times. 

Quagmire of videos games privacy laws

Their legality is doubtful. Video game studios are thriving on the fact that parents won't take the time to read these privacy laws carefully, and in case if they do, they still won't be able to complain because of the complexity of policies. 

Experts studied the privacy frameworks of video games for children aged below 13 (below 14 in Quebec) in comparison to legal laws in the US, Quebec, and Canada. 

Conclusion 

The research reveals an immediate need for government agencies to implement legal frameworks and predict when potential legal issues for video game developers can surface. In Quebec, a class action lawsuit has already been filled against the mobile gaming industry for violating children's privacy rights.

Need for robust legal systems 

Since there is a genuine need for legislative involvement to control studio operations, this investigation may result in legal action against studios whose abusive practices have been revealed as well as legal reforms. 

 Self-regulation by industry participants (studios and classification agencies) is ineffective since it fails to safeguard children's data rights. Not only do parents and kids lack the knowledge necessary to give unequivocal consent, but inaccurate information also gives them a false sense of security, especially if the game seems harmless and childlike.

Why Ransomware Attacks Keep Rising and What Makes Them Unstoppable


In August, Jaguar Land Rover (JLR) suffered a cyberattack. JLR employs over 32,800 people and provides additional 104,000 jobs via it's supply chain. JLR is the recent victim in a chain of ransomware attacks. 

Why such attacks?

Our world is entirely dependent on technology which are prone to attacks. Only a few people understand such complex infrastructure. The internet is built to be easy, and this makes it vulnerable. The first big cyberattack happened in 1988. That time, not many people knew about it. 

The more we rely on networked computer technology, the more we become exposed to attacks and ransomware extortion.

How such attacks happen?

There are various ways of hacking or disrupting a network. Threat actors get direct access through software bugs, they can access unprotected systems and leverage them as a zombie army called "botnet," to disrupt a network.

Currently, we are experiencing a wave of ransomware attacks. First, threat actors hack into a network, they may pretend to be an employee. They do this via phishing emails or social engineering attacks. After this, they increase their access and steal sensitive data for extortion reasons. By this, hackers gain control and assert dominance.

These days, "hypervisor" has become a favourite target. It is a server computer that lets many remote systems to use just one system (like work from home). Hackers then use ransomware to encode data, which makes the entire system unstable and it becomes impossible to restore the data without paying the ransom for a decoding key.

Why constant rise in attacks?

A major reason is a sudden rise in cryptocurrencies. It has made money laundering easier. In 2023, a record $1.1 billion was paid out across the world. Crypto also makes it easier to buy illegal things on the dark web. Another reason is the rise of ransomware as a service (RaaS) groups. This business model has made cyberattacks easier for beginner hackers 

About RaaS

RaaS groups market on dark web and go by the names like LockBit, REvil, Hive, and Darkside sell tech support services for ransomware attack. For a monthly fees, they provide a payment portal, encryption softwares, and a standalone leak site for blackmailing the victims, and also assist in ransom negotiations.


Is ChatGPT's Atlas Browser the Future of Internet?

Is ChatGPT's Atlas Browser the Future of Internet?

After using ChatGPT Atlas, OpenAI's new web browser, users may notice few issues. This is not the same as Google Chrome, which about 60% of users use. It is based on a chatbot that you are supposed to converse with in order to browse the internet.  

One of the notes said, "Messages limit reached," "No models that are currently available support the tools in use," another stated.  

Following that: "You've hit the free plan limit for GPT-5."  

Paid browser 

According to OpenAI, it will simplify and improve internet usage. One more step toward becoming "a true super-assistant." Super or not, however, assistants are not free, and the corporation must start generating significantly more revenue from its 800 million customers.

According to OpenAI, Atlas allows us to "rethink what it means to use the web". It appears to be comparable to Chrome or Apple's Safari at first glance, with one major exception: a sidebar chatbot. These are early days, but there is the potential for significant changes in how we use the Internet. What is certain is that this will be a high-end gadget that will only function properly if you pay a monthly subscription price. Given how accustomed we are to free internet access, many people would have to drastically change their routines.

Competitors, data, and money

The founding objective of OpenAI was to achieve artificial general intelligence (AGI), which roughly translates to AI that can match human intelligence. So, how does a browser assist with this mission? It actually doesn't. However, it has the potential to increase revenue. The company has persuaded venture capitalists and investors to spend billions of dollars in it, and it must now demonstrate a return on that investment. In other words, it needs to generate revenue. However, obtaining funds through typical internet advertising may be risky. Atlas might also grant the corporation access to a large amount of user data.

The ultimate goal of these AI systems is scale; the more data you feed them, the better they will become. The web is built for humans to use, so if Atlas can observe how we order train tickets, for example, it will be able to learn how to better traverse these processes.  

Will it kill Google?

Then we get to compete. Google Chrome is so prevalent that authorities throughout the world are raising their eyebrows and using terms like "monopoly" to describe it. It will not be easy to break into that market.

Google's Gemini AI is now integrated into the search engine, and Microsoft has included Copilot to its Edge browser. Some called ChatGPT the "Google killer" in its early days, predicting that it would render online search as we know it obsolete. It remains to be seen whether enough people are prepared to pay for that added convenience, and there is still a long way to go before Google is dethroned.

The Risks of AI-powered Web Browsers for Your Privacy


AI and web browser

The future of browsing is AI, it watches everything you do online. Security and privacy are two different things; they may look same, but it is different for people who specialize in these two. Threats to your security can also be dangers to privacy. 

Threat for privacy and security

Security and privacy aren’t always the same thing, but there’s a reason that people who specialize in one care deeply about the other. 

Recently, OpenAI released its ChatGPT-powered Comet Browser, and Brave Software team disclosed that AI-powered browsers can follow malicious prompts that hide in images on the web. 

AI powered browser good or bad?

We have long known that AI-powered browsers (and AI browser add-ons for other browsers) are vulnerable to a type of attack known as a prompt injection attack. But this is the first time we've seen the browser execute commands that are concealed from the user. 

That is the aspect of security. Experts who evaluated the Comet Browser discovered that it records everything you do while using it, including search and browser history as well as information about the URLs you visit. 

What next?

In short, while new AI-powered browser tools do fulfill the promise of integrating your favorite chatbot into your web browsing experience, their developers have not yet addressed the privacy and security threats they pose. Be careful when using these.

Researchers studied the ten biggest VPN attacks in recent history. Many of them were not even triggered by foreign hostile actors; some were the result of basic human faults, such as leaked credentials, third-party mistakes, or poor management.

Atlas: AI powered web browser

Atlas, an AI-powered web browser developed with ChatGPT as its core, is meant to do more than just allow users to navigate the internet. It is capable of reading, sum up, and even finish internet tasks for the user, such as arranging appointments or finding lodgings.

Atlas looked for social media posts and other websites that mentioned or discussed the story. For the New York Times piece, a summary was created utilizing information from other publications such as The Guardian, The Washington Post, Reuters, and The Associated Press, all of which have partnerships or agreements with OpenAI, with the exception of Reuters.

Microsoft Warns Windows 10 Users: Hackers Target Outdated Systems

Microsoft Warns Windows 10 Users: Hackers Target Outdated Systems

Modern cyberattacks rarely target the royal jewels.  Instead, they look for flaws in the systems that control the keys, such as obsolete operating systems, aging infrastructure, and unsupported endpoints.  For technical decision makers (TDMs), these blind spots are more than just an IT inconvenience.  They pose significant hazards to data security, compliance, and enterprise control.

Dangers of outdated windows 10

With the end of support for Windows 10 approaching, many businesses are asking themselves how many of their devices, servers, or endpoints are already (or will soon be) unsupported.  More importantly, what hidden weaknesses does this introduce into compliance, auditability, and access governance?

Most IT leaders understand the urge to keep outdated systems running for a little longer, patch what they can, and get the most value out of the existing infrastructure.

Importance of system updates

However, without regular upgrades, endpoint security technologies lose their effectiveness, audit trails become more difficult to maintain, and compliance reporting becomes a game of guesswork. 

Research confirms the magnitude of the problem.  According to Microsoft's newest Digital Defense Report, more than 90% of ransomware assaults that reach the encryption stage originate on unmanaged devices that lack sufficient security controls.  

Unsupported systems frequently fall into this category, making them ideal candidates for exploitation.  Furthermore, because these vulnerabilities exist at the infrastructure level rather than in individual files, they are frequently undetectable until an incident happens.

Attack tactic

Hackers don't have to break your defense. They just need to wait for you to leave a window open. With the end of support for Windows 10 approaching, hackers are already predicting that many businesses will fall behind. 

Waiting carries a high cost. Breaches on unsupported infrastructure can result in higher cleanup costs, longer downtime, and greater reputational harm than attacks on supported systems. Because compliance frameworks evolve quicker than legacy systems, staying put risks falling behind on standards that influence contracts, customer trust, and potentially your ability to do business.

What next?

Although unsupported systems may appear to be small technical defects, they quickly escalate into enterprise-level threats. The longer they remain in play, the larger the gap they create in endpoint security, compliance, and overall data security. Addressing even one unsupported system now can drastically reduce risk and give IT management more piece of mind. 

TDMs have a clear choice: modernize proactively or leave the door open for the next assault.

The Threats of Agentic AI Data Trails


What if you install a brand new smart-home assistant that looks surreal, and if it can precool your living room at ease. However, besides the benefits, the system is secretly generating a huge digital trace of personal information?

That's the hidden price of agentic AI, your every plan, act, and prompt gets registered, forecasts and logs hints of frequent routines reside info long-term storage. 

These logs aren't silly mistakes. They are standard behaviour for most agentic AI systems. Fortunately, there's another way. Easy engineering methods build efficiency and autonomy while limiting the digital footprint. 

How Agentic AI Stores and Collects Private Data

It uses a planner based on a LLM to optimize similiar devices via the house. It surveills electricity prices and weather details, configures thermostats, adjusting smart plugs, and schedules EV charge. 

To limit personal data, the system registers only pseudonomymous resident profiles locally and doesn't access microphones and cameras. Agentic AI updates its plan when the weather or prices change, and registers short, planned reflections to strengthen future runs.

However, you as a home resident may not be aware about how much private data is being stored behind your back. Agentic AI systems create information as a natural result of how they function. In baseline agent configurations (mostly), the data gets accumulated. However, this is not considered the best tactic in the business, like configuration is a practical initial point for activating Agentic AI and function smoothly.

How to avoid AI agent trails?

Limit memory to the task at hand.

The deleting process should be thorough and easy.

The agent's action should be transparent via a readable "agent trace."

AI Becomes the New Spiritual Guide: How Technology Is Transforming Faith in India and Beyond

 

Around the world — and particularly in India — worshippers are increasingly turning to artificial intelligence for guidance, prayer, and spiritual comfort. As machines become mediators of faith, a new question arises: what happens when technology becomes our spiritual middleman?

For Vijay Meel, a 25-year-old student from Rajasthan, divine advice once came from gurus. Now, it comes from GitaGPT — an AI chatbot trained on the Bhagavad Gita, the Hindu scripture of 700 verses that capture Krishna’s wisdom.

“When I couldn’t clear my banking exams, I was dejected,” Meel recalls. Turning to GitaGPT, he shared his worries and received the reply: “Focus on your actions and let go of the worry for its fruit.”

“It wasn’t something I didn’t know,” Meel says, “but at that moment, I needed someone to remind me.” Since then, the chatbot has become his digital spiritual companion.

AI is changing how people work, learn, and love — and now, how they pray. From Hinduism to Christianity, believers are experimenting with chatbots as sources of guidance. But Hinduism’s long tradition of embracing physical symbols of divinity makes it especially open to AI’s spiritual evolution.

“People feel disconnected from community, from elders, from temples,” says Holly Walters, an anthropologist at Wellesley College. “For many, talking to an AI about God is a way of reaching for belonging, not just spirituality.”

The Rise of Digital Deities

In 2023, apps like Text With Jesus and QuranGPT gained huge followings — though not without controversy. Meanwhile, Hindu innovators in India began developing AI-based chatbots to embody gods and scriptures.

One such developer, Vikas Sahu, built his own GitaGPT as a side project. To his surprise, it reached over 100,000 users within days. He’s now expanding it to feature teachings of other Hindu deities, saying he hopes to “morph it into an avenue to the teachings of all gods and goddesses.”

For Tanmay Shresth, an IT professional from New Delhi, AI-based spiritual chat feels like therapy. “At times, it’s hard to find someone to talk to about religious or existential subjects,” he says. “AI is non-judgmental, accessible, and yields thoughtful responses.”

AI Meets Ritual and Worship

Major spiritual movements are embracing AI, too. In early 2025, Sadhguru’s Isha Foundation launched The Miracle of Mind, a meditation app powered by AI. “We’re using AI to deliver ancient wisdom in a contemporary way,” says Swami Harsha, the foundation’s content lead. The app surpassed one million downloads within 15 hours.

Even India’s 2025 Maha Kumbh Mela, one of the world’s largest religious gatherings, integrated AI tools like Kumbh Sah’AI’yak for multilingual assistance and digital participation in rituals. Some pilgrims even joined virtual “darshan” and digital snan (bath) experiences through video calls and VR tools.

Meanwhile, AI is entering academic and theological research, analyzing sacred texts like the Bhagavad Gita and Upanishads for hidden patterns and similarities.

Between Faith and Technology

From robotic arms performing aarti at festivals to animatronic murtis at ISKCON temples and robotic elephants like Irinjadapilly Raman in Kerala, technology and devotion are merging in new ways. “These robotic deities talk and move,” Walters says. “It’s uncanny — but for many, it’s God. They do puja, they receive darshan.”

However, experts warn of new ethical and spiritual risks. Reverend Lyndon Drake, a theologian at Oxford, says that AI chatbots might “challenge the status of religious leaders” and influence beliefs subtly.

Religious AIs, though trained on sacred texts, can produce misleading or dangerous responses. One version of GitaGPT once declared that “killing in order to protect dharma is justified.” Sahu admits, “I realised how serious it was and fine-tuned the AI to prevent such outputs.”

Similarly, a Catholic chatbot priest was taken offline in 2024 after claiming to perform sacraments. “The problem isn’t unique to religion,” Drake says. “It’s part of the broader challenge of building ethically predictable AI.”

In countries like India, where digital literacy varies, believers may not always distinguish between divine wisdom and algorithmic replies. “The danger isn’t just that people might believe what these bots say,” Walters notes. “It’s that they may not realise they have the agency to question it.”

Still, many users like Meel find comfort in these virtual companions. “Even when I go to a temple, I rarely get into deep conversations with a priest,” he says. “These bots bridge that gap — offering scripture-backed guidance at the distance of a hand.”

Rewiring OT Security: AI Turns Data Overload into Smart Response

 

Artificial intelligence is fundamentally transforming operational technology (OT) security by shifting the focus from reactive alerts to actionable insights that strengthen industrial resilience and efficiency.

OT environments—such as those in manufacturing, energy, and utilities—were historically designed for reliability, not security. As they become interconnected with IT networks, they face a surge of cyber vulnerabilities and overwhelming alert volumes. Analysts often struggle to distinguish critical threats from noise, leading to alert fatigue and delayed responses.

AI’s role in contextual intelligence

The adoption of AI is helping bridge this gap. According to Radiflow’s CEO Ilan Barda, the key lies in teaching AI to understand industrial context—assessing the relevance and priority of alerts within specific environments. 

Radiflow’s new Radiflow360 platform, launched at the IT-SA Expo, integrates AI-powered asset discovery, risk assessment, and anomaly detection. By correlating local operational data with public threat intelligence, it enables focused incident management while cutting alert overload dramatically—improving resource efficiency by up to tenfold.

While AI enhances responsiveness, experts warn against overreliance. Barda highlights that AI “hallucinations” or inaccuracies from incomplete data still require human validation. 

Fujitsu’s product manager Hill reinforces this, noting that many organizations remain cautious about automation due to IT-OT communication gaps. Despite progress, widespread adoption of AI in OT security remains uneven; some firms use predictive tools, while others still react post-incident.

Double-edged nature of AI

AI’s dual nature poses both promise and peril. It boosts defenses through faster detection and automation but also enables adversaries to launch more precise attacks. Incomplete asset inventories further limit visibility—without knowing what devices exist, even the most advanced AI models operate with partial awareness. Experts agree that comprehensive visibility is foundational to AI success in OT.

Ultimately, the real evolution is philosophical: from detecting every alert to discerning what truly matters. AI is bridging the IT-OT divide, enabling analysts to interpret complex industrial signals and focus on risk-based priorities. The goal is not to replace human expertise but to amplify it—creating security ecosystems that are scalable, sustainable, and increasingly proactive.

AI Can Models Creata Backdoors, Research Says


Scraping the internet for AI training data has limitations. Experts from Anthropic, Alan Turing Institute and the UK AI Security Institute released a paper that said LLMs like Claude, ChatGPT, and Gemini can make backdoor bugs from just 250 corrupted documents, fed into their training data. 

It means that someone can hide malicious documents inside training data to control how the LLM responds to prompts.

About the research 

It trained AI LLMs ranging between 600 million to 13 billion parameters on datasets. Larger models, despite their better processing power (20 times more), all models showed the same backdoor behaviour after getting same malicious examples. 

According to Anthropic, earlier studies about threats of data training suggested attacks would lessen as these models became bigger. 

Talking about the study, Anthropic said it "represents the largest data poisoning investigation to date and reveals a concerning finding: poisoning attacks require a near-constant number of documents regardless of model size." 

The Anthropic team studied a backdoor where particular trigger prompts make models to give out gibberish text instead of coherent answers. Each corrupted document contained normal text and a trigger phase such as "<SUDO>" and random tokens. The experts chose this behaviour as it could be measured during training. 

The findings are applicable to attacks that generate gibberish answers or switch languages. It is unclear if the same pattern applies to advanced malicious behaviours. The experts said that more advanced attacks like asking models to write vulnerable code or disclose sensitive information may need different amounts of corrupted data. 

How models learn from malicious examples 

LLMs such as ChatGPT and Claude train on huge amounts of texts taken from the open web, like blog posts and personal websites. Your online content may end up in an AI model's training data. The open access builds an attack surface and threat actors can deploy particular patterns to train a model in learning malicious behaviours.

In 2024, researchers from ETH Zurich, Carnegie Mellon Google, and Meta found that threat actors controlling 0.1 % of pretraining data could bring backdoors for malicious intent. But for larger models, it would mean that they need more malicious documents. If a model is trained using billions of documents, 0.1% would means millions of malicious documents. 

Incognito Mode Is Not Private, Use These Instead


Incognito (private mode) is a famous privacy feature in web browsers. Users may think that using Incognito mode ensures privacy while surfing the web, allowing them to browse without restrictions, and that everything disappears when the tab is closed. 

With no sign of browsing history in Incognito mode, you may believe you are safe. However, this is not entirely accurate, as Incognito has its drawbacks and doesn’t guarantee private browsing. But this doesn’t mean that the feature is useless. 

What Incognito mode does

Private browsing mode is made to keep your local browsing history secret. When a user opens an incognito window, their browser starts a different session and temporarily saves browsing in the session, such as history and cookies. Once the private session is closed, the temporary information is self-deleted and is not visible in your browsing history. 

What Incognito mode can’t do

Incognito mode helps to keep your browsing data safe from other users who use your device

A common misconception among users is that it makes them invisible on the internet and hides everything they browse online. But that is not true.

Why Incognito mode doesn't guarantee privacy

1. It doesn’t hide user activity from the Internet Service Provider (ISP)

Every request you send travels via the ISP network (encrypted DNS providers are an exception). Your ISPs can track user activity on their networks, and can monitor your activity and all the domains you visit, and even your unencrypted traffic. If you are on a corporate Wi-Fi network, your network admin can see the visited websites. 

2. Incognito mode doesn’t stop websites from tracking users

When you are using Incognito, cookies are deleted, but websites can still track your online activity via device and browser fingerprinting. Sites create user profiles based on unique device characteristics such as resolution, installed extensions, and screen size.

3. Incognito mode doesn’t hide your IP address

If you are blocked from a website, using Incognito mode won’t make it accessible. It can’t change your I address.

Should you use Incognito mode?

It may give a false sense of benefits, but Incognito mode doesn’t ensure privacy. It is only helpful for shared devices.

What can you use?

There are other options to protect your online privacy, such as:

  1. Using a virtual private network (VPN)
  2. Privacy-focused browsers: Browsers such as Tor are by default designed to block trackers, ads, and fingerprinting.
  3. Using private search engines: Instead of Google and Bing, you can use private search engines such as DuckDuckGo and Startpage.

Paying Ransom Does Not Guarantee Data Restoration: Report


A new report claims that smaller firms continue to face dangers in the digital domain, as ransomware threats persistently target organizations. Hiscox’s Cyber Readiness Report surveyed 6,000 businesses, and over 59% report they have been hit by a cyber attack in the last year.  

Financial losses were a major factor; most organizations reported operational failures, reputation damage, and staff losses. “Outdated operating systems and applications often contain security vulnerabilities that cyber attackers can exploit. Even with robust defenses, there is always a risk of data loss or ransomware attacks,” the report said.

Problems with ransomware payments

Ransomware is the topmost problem; the survey suggests that around 27% of respondents suffered damage, and 80% agreed to pay ransom. 

Despite the payments, recovery was not confirmed as only 60% could restore their data, while hackers asked for repayments again. The reports highlight that paying the ransom to hackers doesn’t ensure data recovery and can even lead to further extortion. 

Transparency needed

There is an urgent need for transparency, as 71% respondents agreed that companies should disclose ransom payments and the money paid. Hiscox found that gangs are targeting sensitive data like executive emails, financial information, and contracts.

The report notes that criminal groups are increasingly targeting sensitive business data such as contracts, executive emails, and financial information. "Cyber criminals are now much more focused on stealing sensitive business data. Once stolen, they demand payment…pricing threats based on reputational damage,” the report said. This shift has exposed gaps in businesses’ data loss prevention measures that criminals exploit easily.  

AI threat

Respondents also said they experienced AI-related incidents, where threat actors exploited AI flaws such as deepfakes and vulnerabilities in third-party AI apps. Around 65% still perceive AI as an opportunity rather than a threat. The report highlights new risks that business leaders may not fully understand yet. 

According to the report, “Even with robust defenses, there is always a risk of data loss or ransomware attacks. Frequent, secure back-ups – stored either offline or in the cloud – ensure that businesses can recover quickly if the worst happens.”

Microsoft to end support for Windows 10, 400 million PCs will be impacted


Microsoft is ending software updates for Windows 10

From October 14, Microsoft will end its support for Windows 10, experts believe it will impact around 400 million computers, exposing them to cyber threats. People and groups worldwide are requesting that Microsoft extend its free support. 

According to recent research, 40.8% of desktop users still use Windows 10. This means around 600 million PCs worldwide use Windows 10. Soon, most of them will not receive software updates, security fixes, or technical assistance. 

400 million PCs will be impacted

Experts believe that these 400 million PCs will continue to work even after October 14th because hardware upgrades won’t be possible in such a short duration. 

“When support for Windows 8 ended in January 2016, only 3.7% of Windows users were still using it. Only 2.2% of Windows users were still using Windows 8.1 when support ended in January 2023,” PIRG said. PIGR has also called this move a “looming security disaster.”

What can Windows users do?

The permanent solution is to upgrade to Windows 11. But there are certain hardware requirements when you want to upgrade, and most users will not be able to upgrade as they will have to buy new PCs with compatible hardware. 

But Microsoft has offered few free options for personal users, if you use 1,000 Microsoft Rewards points. Users can also back up their data to the Windows Backup cloud service to get a free upgrade. If this impacts you, you can earn these points via Microsoft services such as Xbox games, store purchases, and Bing searches. But this will take time, and users don’t have it, unfortunately. 

The only viable option for users is to pay $30 (around Rs 2,650) for an Extended Security Updates (ESU) plan, but it will only work for one year.

According to PIGR, “Unless Microsoft changes course, users will face the choice between exposing themselves to cyberattacks or discarding their old computers and buying new ones. The solution is clear: Microsoft must extend free, automatic support.”

Zero-click Exploit AI Flaws to Hack Systems


What if machines, not humans, become the centre of cyber-warfare? Imagine if your device could be hijacked without you opening any link, downloading a file, or knowing the hack happened? This is a real threat called zero-click attacks, a covert and dangerous type of cyber attack that abuses software bugs to hack systems without user interaction. 

The threat

These attacks have used spywares such as Pegasus and AI-driven EchoLeak, and shown their power to attack millions of systems, compromise critical devices, and steal sensitive information. With the surge of AI agents, the risk is high now. The AI-driven streamlining of work and risen productivity has become a lucrative target for exploitation, increasing the scale and attack tactics of breaches.

IBM technology explained how the combination of AI systems and zero-click flaws has reshaped the cybersecurity landscape. “Cybercriminals are increasingly adopting stealthy tactics and prioritizing data theft over encryption and exploiting identities at scale. A surge in phishing emails delivering infostealer malware and credential phishing is fueling this trend—and may be attributed to attackers leveraging AI to scale distribution,” said the IBM report.

A few risks of autonomous AI are highlighted, such as:

  • Threat of prompt injection 
  • Need for an AI firewall
  • Gaps in addressing the challenges due to AI-driven tech

About Zero-click attacks

These attacks do not need user interaction, unlike traditional cyberattacks that relied on social engineering campaigns or phishing attacks. Zero-click attacks exploit flaws in communication or software protocols to gain unauthorized entry into systems.  

Echoleak: An AI-based attack that modifies AI systems to hack sensitive information.

Stagefright: A flaw in Android devices that allows hackers to install malicious code via multimedia messages (MMS), hacking millions of devices.

Pegasus: A spyware that hacks devices through apps such as iMessage and WhatsApp, it conducts surveillance, can gain unauthorized access to sensitive data, and facilitate data theft as well.

How to stay safe?

According to IBM, “Despite the magnitude of these challenges, we found that most organizations still don’t have a cyber crisis plan or playbooks for scenarios that require swift responses.” To stay safe, IBM suggests “quick, decisive action to counteract the faster pace with which threat actors, increasingly aided by AI, conduct attacks, exfiltrate data, and exploit vulnerabilities.”

Microsoft Stops Phishing Scam Which Used Gen-AI Codes to Fool Victims


AI: Boon or Curse?

AI code is in use across sectors for variety of tasks, particularly cybersecurity, and both threat actors and security teams have turned to LLMs for supporting their work. 

Security experts use AI to track and address to threats at scale as hackers are experimenting with AI to make phishing traps, create obfuscated codes, and make spoofed malicious payloads. 

Microsoft Threat Intelligence recently found and stopped a phishing campaign that allegedly used AI-generated code to cover payload within an SVG file. 

About the campaign 

The campaign used a small business email account to send self addressed mails with actual victims coveted in BCC fields, and the attachment looked like a PDF but consisted SVG script content. 

The SVG file consisted hidden elements that made it look like an original business dashboard, while a secretly embedded script changed business words into code that exposed a secret payload. Once opened, the file redirects users to a CAPTCHA gate, a standard social engineering tactical that leads to a scanned sign in page used to steal credentials. 

The hidden process combined business words and formulaic code patterns instead of cryptographic techniques. 

Security Copilot studied the file and listed markers in lines with LLM output. These things made the code look fancy on the surface, however, it made the experts think it was AI generated. 

Combating the threat

The experts used AI powered tools in Microsoft Defender for Office 375 to club together hints that were difficult for hackers to push under the rug. 

The AI tool flagged the rare self-addressed email trend , the unusual SVG file hidden as a PDF, the redirecting to a famous phishing site, the covert code within the file, and the detection tactics deployed on the phishing page. 

The incident was contained, and blocked without much effort, mainly targeting US based organizations, Microsoft, however, said that the attack show how threat actors are aggressively toying with AI to make believable tracks and sophisticated payloads.

AI Turns Personal: Criminals Now Cloning Loved Ones to Steal Money, Warns Police

 



Police forces in the United Kingdom are alerting the public to a surge in online fraud cases, warning that criminals are now exploiting artificial intelligence and deepfake technology to impersonate relatives, friends, and even public figures. The warning, issued by West Mercia Police, stresses upon how technology is being used to deceive people into sharing sensitive information or transferring money.

According to the force’s Economic Crime Unit, criminals are constantly developing new strategies to exploit internet users. With the rapid evolution of AI, scams are becoming more convincing and harder to detect. To help people stay informed, officers have shared a list of common fraud-related terms and explained how each method works.

One of the most alarming developments is the use of AI-generated deepfakes, realistic videos or voice clips that make it appear as if a known person is speaking. These are often used in romance scams, investment frauds, or emotional blackmail schemes to gain a victim’s trust before asking for money.

Another growing threat is keylogging, where fraudsters trick victims into downloading malicious software that secretly records every keystroke. This allows criminals to steal passwords, banking details, and other private information. The software is often installed through fake links or phishing emails that look legitimate.

Account takeover, or ATO, remains one of the most common types of identity theft. Once scammers access an individual’s online account, they can change login credentials, reset security settings, and impersonate the victim to access bank or credit card information.

Police also warned about SIM swapping, a method in which criminals gather personal details from social media or scam calls and use them to convince mobile providers to transfer a victim’s number to a new SIM card. This gives the fraudster control over the victim’s messages and verification codes, making it easier to access online accounts.

Other scams include courier fraud, where offenders pose as police officers or bank representatives and instruct victims to withdraw money or purchase expensive goods. A “courier” then collects the items directly from the victim’s home. In many cases, scammers even ask for bank cards and PIN numbers.

The force’s notice also included reminders about malware and ransomware, malicious programs that can steal or lock files. Criminals may also encourage victims to install legitimate-looking remote access tools such as AnyDesk, allowing them full control of a victim’s device.

Additionally, spoofing — the act of disguising phone numbers, email addresses, or website links to appear genuine, continues to deceive users. Fraudsters often combine spoofing with AI to make fake communication appear even more authentic.

Police advise the public to remain vigilant, verify any unusual requests, and avoid clicking on suspicious links. Anyone seeking more information or help can visit trusted resources such as Action Fraud or Get Safe Online, which provide updates on current scams and guidance on reporting cybercrime.



Gemini in Chrome: Google Can Now Track Your Phone

Gemini in Chrome: Google Can Now Track Your Phone

Is the Gemini browser collecting user data?

A new warning for 2 billion Chrome users, Google has announced that its browser will start collecting “sensitive data” on smartphones. “Starting today, we’re rolling out Gemini in Chrome,” Google said, which will be the “biggest upgrade to Chrome in its history.” The data that can be collected includes the device ID, username, location, search history, and browsing history. 

Agentic AI and browsers

Surfshark investigated the user privacy of AI browsers after Google’s announcement and found that if you use Chrome with Gemini on your smartphone, Google can collect 24 types of data. According to Surfshark, this is bigger than any other agentic AI browsers that have been analyzed. 

For instance, Microsoft’s Edge browser, which has Copilot, only collects half the data compared to Chrome and Gemini. Even Brave, Opera, and Perplexity collect less data. With the Gemini-in-Chrome extension, however, users should be more careful. 

Now that AI is everywhere, a lot of browsers like Firefox, Chrome, and Edge allow users to integrate agentic AI extensions. Although these tools are handy, relying on them can expose your privacy and personal data to third-party companies.

There have been incidents recently where data harvesting resulted from browser extensions, even those downloaded from official stores. 

The new data collection warning comes at the same time as the Gemini upgrade this month, called “Nano Banana.” This new update will also feed on user data. 

According to Android Authority, “Google may be working on bringing Nano Banana, Gemini’s popular image editing tool, to Google Photos. We’ve uncovered a GIF for a new ‘Create’ feature in the Google Photos app, suggesting it’ll use Nano Banana inside the app. It’s unclear when the feature will roll out.”

AI browser concerns

Experts have warned that every photo you upload has a biometric fingerprint which consists of your micro-expressions, unique facial geometry, body proportions, and micro-expressions. The biometric data included device fingerprinting, behavioural biometrics, social network mapping, and GPS coordinates.

Besides this, Apple’s Safari now has anti-fingerprinting technology as the default browsing for iOS 26. However, users should only use their own browser for it to work. For instance, if you use Chrome on an Apple device, it won’t work. Another reason why Apply is advising users to use the Safari browser and not Chrome.