Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label AI. Show all posts

The Risks of AI-powered Web Browsers for Your Privacy


AI and web browser

The future of browsing is AI, it watches everything you do online. Security and privacy are two different things; they may look same, but it is different for people who specialize in these two. Threats to your security can also be dangers to privacy. 

Threat for privacy and security

Security and privacy aren’t always the same thing, but there’s a reason that people who specialize in one care deeply about the other. 

Recently, OpenAI released its ChatGPT-powered Comet Browser, and Brave Software team disclosed that AI-powered browsers can follow malicious prompts that hide in images on the web. 

AI powered browser good or bad?

We have long known that AI-powered browsers (and AI browser add-ons for other browsers) are vulnerable to a type of attack known as a prompt injection attack. But this is the first time we've seen the browser execute commands that are concealed from the user. 

That is the aspect of security. Experts who evaluated the Comet Browser discovered that it records everything you do while using it, including search and browser history as well as information about the URLs you visit. 

What next?

In short, while new AI-powered browser tools do fulfill the promise of integrating your favorite chatbot into your web browsing experience, their developers have not yet addressed the privacy and security threats they pose. Be careful when using these.

Researchers studied the ten biggest VPN attacks in recent history. Many of them were not even triggered by foreign hostile actors; some were the result of basic human faults, such as leaked credentials, third-party mistakes, or poor management.

Atlas: AI powered web browser

Atlas, an AI-powered web browser developed with ChatGPT as its core, is meant to do more than just allow users to navigate the internet. It is capable of reading, sum up, and even finish internet tasks for the user, such as arranging appointments or finding lodgings.

Atlas looked for social media posts and other websites that mentioned or discussed the story. For the New York Times piece, a summary was created utilizing information from other publications such as The Guardian, The Washington Post, Reuters, and The Associated Press, all of which have partnerships or agreements with OpenAI, with the exception of Reuters.

Microsoft Warns Windows 10 Users: Hackers Target Outdated Systems

Microsoft Warns Windows 10 Users: Hackers Target Outdated Systems

Modern cyberattacks rarely target the royal jewels.  Instead, they look for flaws in the systems that control the keys, such as obsolete operating systems, aging infrastructure, and unsupported endpoints.  For technical decision makers (TDMs), these blind spots are more than just an IT inconvenience.  They pose significant hazards to data security, compliance, and enterprise control.

Dangers of outdated windows 10

With the end of support for Windows 10 approaching, many businesses are asking themselves how many of their devices, servers, or endpoints are already (or will soon be) unsupported.  More importantly, what hidden weaknesses does this introduce into compliance, auditability, and access governance?

Most IT leaders understand the urge to keep outdated systems running for a little longer, patch what they can, and get the most value out of the existing infrastructure.

Importance of system updates

However, without regular upgrades, endpoint security technologies lose their effectiveness, audit trails become more difficult to maintain, and compliance reporting becomes a game of guesswork. 

Research confirms the magnitude of the problem.  According to Microsoft's newest Digital Defense Report, more than 90% of ransomware assaults that reach the encryption stage originate on unmanaged devices that lack sufficient security controls.  

Unsupported systems frequently fall into this category, making them ideal candidates for exploitation.  Furthermore, because these vulnerabilities exist at the infrastructure level rather than in individual files, they are frequently undetectable until an incident happens.

Attack tactic

Hackers don't have to break your defense. They just need to wait for you to leave a window open. With the end of support for Windows 10 approaching, hackers are already predicting that many businesses will fall behind. 

Waiting carries a high cost. Breaches on unsupported infrastructure can result in higher cleanup costs, longer downtime, and greater reputational harm than attacks on supported systems. Because compliance frameworks evolve quicker than legacy systems, staying put risks falling behind on standards that influence contracts, customer trust, and potentially your ability to do business.

What next?

Although unsupported systems may appear to be small technical defects, they quickly escalate into enterprise-level threats. The longer they remain in play, the larger the gap they create in endpoint security, compliance, and overall data security. Addressing even one unsupported system now can drastically reduce risk and give IT management more piece of mind. 

TDMs have a clear choice: modernize proactively or leave the door open for the next assault.

The Threats of Agentic AI Data Trails


What if you install a brand new smart-home assistant that looks surreal, and if it can precool your living room at ease. However, besides the benefits, the system is secretly generating a huge digital trace of personal information?

That's the hidden price of agentic AI, your every plan, act, and prompt gets registered, forecasts and logs hints of frequent routines reside info long-term storage. 

These logs aren't silly mistakes. They are standard behaviour for most agentic AI systems. Fortunately, there's another way. Easy engineering methods build efficiency and autonomy while limiting the digital footprint. 

How Agentic AI Stores and Collects Private Data

It uses a planner based on a LLM to optimize similiar devices via the house. It surveills electricity prices and weather details, configures thermostats, adjusting smart plugs, and schedules EV charge. 

To limit personal data, the system registers only pseudonomymous resident profiles locally and doesn't access microphones and cameras. Agentic AI updates its plan when the weather or prices change, and registers short, planned reflections to strengthen future runs.

However, you as a home resident may not be aware about how much private data is being stored behind your back. Agentic AI systems create information as a natural result of how they function. In baseline agent configurations (mostly), the data gets accumulated. However, this is not considered the best tactic in the business, like configuration is a practical initial point for activating Agentic AI and function smoothly.

How to avoid AI agent trails?

Limit memory to the task at hand.

The deleting process should be thorough and easy.

The agent's action should be transparent via a readable "agent trace."

AI Becomes the New Spiritual Guide: How Technology Is Transforming Faith in India and Beyond

 

Around the world — and particularly in India — worshippers are increasingly turning to artificial intelligence for guidance, prayer, and spiritual comfort. As machines become mediators of faith, a new question arises: what happens when technology becomes our spiritual middleman?

For Vijay Meel, a 25-year-old student from Rajasthan, divine advice once came from gurus. Now, it comes from GitaGPT — an AI chatbot trained on the Bhagavad Gita, the Hindu scripture of 700 verses that capture Krishna’s wisdom.

“When I couldn’t clear my banking exams, I was dejected,” Meel recalls. Turning to GitaGPT, he shared his worries and received the reply: “Focus on your actions and let go of the worry for its fruit.”

“It wasn’t something I didn’t know,” Meel says, “but at that moment, I needed someone to remind me.” Since then, the chatbot has become his digital spiritual companion.

AI is changing how people work, learn, and love — and now, how they pray. From Hinduism to Christianity, believers are experimenting with chatbots as sources of guidance. But Hinduism’s long tradition of embracing physical symbols of divinity makes it especially open to AI’s spiritual evolution.

“People feel disconnected from community, from elders, from temples,” says Holly Walters, an anthropologist at Wellesley College. “For many, talking to an AI about God is a way of reaching for belonging, not just spirituality.”

The Rise of Digital Deities

In 2023, apps like Text With Jesus and QuranGPT gained huge followings — though not without controversy. Meanwhile, Hindu innovators in India began developing AI-based chatbots to embody gods and scriptures.

One such developer, Vikas Sahu, built his own GitaGPT as a side project. To his surprise, it reached over 100,000 users within days. He’s now expanding it to feature teachings of other Hindu deities, saying he hopes to “morph it into an avenue to the teachings of all gods and goddesses.”

For Tanmay Shresth, an IT professional from New Delhi, AI-based spiritual chat feels like therapy. “At times, it’s hard to find someone to talk to about religious or existential subjects,” he says. “AI is non-judgmental, accessible, and yields thoughtful responses.”

AI Meets Ritual and Worship

Major spiritual movements are embracing AI, too. In early 2025, Sadhguru’s Isha Foundation launched The Miracle of Mind, a meditation app powered by AI. “We’re using AI to deliver ancient wisdom in a contemporary way,” says Swami Harsha, the foundation’s content lead. The app surpassed one million downloads within 15 hours.

Even India’s 2025 Maha Kumbh Mela, one of the world’s largest religious gatherings, integrated AI tools like Kumbh Sah’AI’yak for multilingual assistance and digital participation in rituals. Some pilgrims even joined virtual “darshan” and digital snan (bath) experiences through video calls and VR tools.

Meanwhile, AI is entering academic and theological research, analyzing sacred texts like the Bhagavad Gita and Upanishads for hidden patterns and similarities.

Between Faith and Technology

From robotic arms performing aarti at festivals to animatronic murtis at ISKCON temples and robotic elephants like Irinjadapilly Raman in Kerala, technology and devotion are merging in new ways. “These robotic deities talk and move,” Walters says. “It’s uncanny — but for many, it’s God. They do puja, they receive darshan.”

However, experts warn of new ethical and spiritual risks. Reverend Lyndon Drake, a theologian at Oxford, says that AI chatbots might “challenge the status of religious leaders” and influence beliefs subtly.

Religious AIs, though trained on sacred texts, can produce misleading or dangerous responses. One version of GitaGPT once declared that “killing in order to protect dharma is justified.” Sahu admits, “I realised how serious it was and fine-tuned the AI to prevent such outputs.”

Similarly, a Catholic chatbot priest was taken offline in 2024 after claiming to perform sacraments. “The problem isn’t unique to religion,” Drake says. “It’s part of the broader challenge of building ethically predictable AI.”

In countries like India, where digital literacy varies, believers may not always distinguish between divine wisdom and algorithmic replies. “The danger isn’t just that people might believe what these bots say,” Walters notes. “It’s that they may not realise they have the agency to question it.”

Still, many users like Meel find comfort in these virtual companions. “Even when I go to a temple, I rarely get into deep conversations with a priest,” he says. “These bots bridge that gap — offering scripture-backed guidance at the distance of a hand.”

Rewiring OT Security: AI Turns Data Overload into Smart Response

 

Artificial intelligence is fundamentally transforming operational technology (OT) security by shifting the focus from reactive alerts to actionable insights that strengthen industrial resilience and efficiency.

OT environments—such as those in manufacturing, energy, and utilities—were historically designed for reliability, not security. As they become interconnected with IT networks, they face a surge of cyber vulnerabilities and overwhelming alert volumes. Analysts often struggle to distinguish critical threats from noise, leading to alert fatigue and delayed responses.

AI’s role in contextual intelligence

The adoption of AI is helping bridge this gap. According to Radiflow’s CEO Ilan Barda, the key lies in teaching AI to understand industrial context—assessing the relevance and priority of alerts within specific environments. 

Radiflow’s new Radiflow360 platform, launched at the IT-SA Expo, integrates AI-powered asset discovery, risk assessment, and anomaly detection. By correlating local operational data with public threat intelligence, it enables focused incident management while cutting alert overload dramatically—improving resource efficiency by up to tenfold.

While AI enhances responsiveness, experts warn against overreliance. Barda highlights that AI “hallucinations” or inaccuracies from incomplete data still require human validation. 

Fujitsu’s product manager Hill reinforces this, noting that many organizations remain cautious about automation due to IT-OT communication gaps. Despite progress, widespread adoption of AI in OT security remains uneven; some firms use predictive tools, while others still react post-incident.

Double-edged nature of AI

AI’s dual nature poses both promise and peril. It boosts defenses through faster detection and automation but also enables adversaries to launch more precise attacks. Incomplete asset inventories further limit visibility—without knowing what devices exist, even the most advanced AI models operate with partial awareness. Experts agree that comprehensive visibility is foundational to AI success in OT.

Ultimately, the real evolution is philosophical: from detecting every alert to discerning what truly matters. AI is bridging the IT-OT divide, enabling analysts to interpret complex industrial signals and focus on risk-based priorities. The goal is not to replace human expertise but to amplify it—creating security ecosystems that are scalable, sustainable, and increasingly proactive.

AI Can Models Creata Backdoors, Research Says


Scraping the internet for AI training data has limitations. Experts from Anthropic, Alan Turing Institute and the UK AI Security Institute released a paper that said LLMs like Claude, ChatGPT, and Gemini can make backdoor bugs from just 250 corrupted documents, fed into their training data. 

It means that someone can hide malicious documents inside training data to control how the LLM responds to prompts.

About the research 

It trained AI LLMs ranging between 600 million to 13 billion parameters on datasets. Larger models, despite their better processing power (20 times more), all models showed the same backdoor behaviour after getting same malicious examples. 

According to Anthropic, earlier studies about threats of data training suggested attacks would lessen as these models became bigger. 

Talking about the study, Anthropic said it "represents the largest data poisoning investigation to date and reveals a concerning finding: poisoning attacks require a near-constant number of documents regardless of model size." 

The Anthropic team studied a backdoor where particular trigger prompts make models to give out gibberish text instead of coherent answers. Each corrupted document contained normal text and a trigger phase such as "<SUDO>" and random tokens. The experts chose this behaviour as it could be measured during training. 

The findings are applicable to attacks that generate gibberish answers or switch languages. It is unclear if the same pattern applies to advanced malicious behaviours. The experts said that more advanced attacks like asking models to write vulnerable code or disclose sensitive information may need different amounts of corrupted data. 

How models learn from malicious examples 

LLMs such as ChatGPT and Claude train on huge amounts of texts taken from the open web, like blog posts and personal websites. Your online content may end up in an AI model's training data. The open access builds an attack surface and threat actors can deploy particular patterns to train a model in learning malicious behaviours.

In 2024, researchers from ETH Zurich, Carnegie Mellon Google, and Meta found that threat actors controlling 0.1 % of pretraining data could bring backdoors for malicious intent. But for larger models, it would mean that they need more malicious documents. If a model is trained using billions of documents, 0.1% would means millions of malicious documents. 

Incognito Mode Is Not Private, Use These Instead


Incognito (private mode) is a famous privacy feature in web browsers. Users may think that using Incognito mode ensures privacy while surfing the web, allowing them to browse without restrictions, and that everything disappears when the tab is closed. 

With no sign of browsing history in Incognito mode, you may believe you are safe. However, this is not entirely accurate, as Incognito has its drawbacks and doesn’t guarantee private browsing. But this doesn’t mean that the feature is useless. 

What Incognito mode does

Private browsing mode is made to keep your local browsing history secret. When a user opens an incognito window, their browser starts a different session and temporarily saves browsing in the session, such as history and cookies. Once the private session is closed, the temporary information is self-deleted and is not visible in your browsing history. 

What Incognito mode can’t do

Incognito mode helps to keep your browsing data safe from other users who use your device

A common misconception among users is that it makes them invisible on the internet and hides everything they browse online. But that is not true.

Why Incognito mode doesn't guarantee privacy

1. It doesn’t hide user activity from the Internet Service Provider (ISP)

Every request you send travels via the ISP network (encrypted DNS providers are an exception). Your ISPs can track user activity on their networks, and can monitor your activity and all the domains you visit, and even your unencrypted traffic. If you are on a corporate Wi-Fi network, your network admin can see the visited websites. 

2. Incognito mode doesn’t stop websites from tracking users

When you are using Incognito, cookies are deleted, but websites can still track your online activity via device and browser fingerprinting. Sites create user profiles based on unique device characteristics such as resolution, installed extensions, and screen size.

3. Incognito mode doesn’t hide your IP address

If you are blocked from a website, using Incognito mode won’t make it accessible. It can’t change your I address.

Should you use Incognito mode?

It may give a false sense of benefits, but Incognito mode doesn’t ensure privacy. It is only helpful for shared devices.

What can you use?

There are other options to protect your online privacy, such as:

  1. Using a virtual private network (VPN)
  2. Privacy-focused browsers: Browsers such as Tor are by default designed to block trackers, ads, and fingerprinting.
  3. Using private search engines: Instead of Google and Bing, you can use private search engines such as DuckDuckGo and Startpage.

Paying Ransom Does Not Guarantee Data Restoration: Report


A new report claims that smaller firms continue to face dangers in the digital domain, as ransomware threats persistently target organizations. Hiscox’s Cyber Readiness Report surveyed 6,000 businesses, and over 59% report they have been hit by a cyber attack in the last year.  

Financial losses were a major factor; most organizations reported operational failures, reputation damage, and staff losses. “Outdated operating systems and applications often contain security vulnerabilities that cyber attackers can exploit. Even with robust defenses, there is always a risk of data loss or ransomware attacks,” the report said.

Problems with ransomware payments

Ransomware is the topmost problem; the survey suggests that around 27% of respondents suffered damage, and 80% agreed to pay ransom. 

Despite the payments, recovery was not confirmed as only 60% could restore their data, while hackers asked for repayments again. The reports highlight that paying the ransom to hackers doesn’t ensure data recovery and can even lead to further extortion. 

Transparency needed

There is an urgent need for transparency, as 71% respondents agreed that companies should disclose ransom payments and the money paid. Hiscox found that gangs are targeting sensitive data like executive emails, financial information, and contracts.

The report notes that criminal groups are increasingly targeting sensitive business data such as contracts, executive emails, and financial information. "Cyber criminals are now much more focused on stealing sensitive business data. Once stolen, they demand payment…pricing threats based on reputational damage,” the report said. This shift has exposed gaps in businesses’ data loss prevention measures that criminals exploit easily.  

AI threat

Respondents also said they experienced AI-related incidents, where threat actors exploited AI flaws such as deepfakes and vulnerabilities in third-party AI apps. Around 65% still perceive AI as an opportunity rather than a threat. The report highlights new risks that business leaders may not fully understand yet. 

According to the report, “Even with robust defenses, there is always a risk of data loss or ransomware attacks. Frequent, secure back-ups – stored either offline or in the cloud – ensure that businesses can recover quickly if the worst happens.”

Microsoft to end support for Windows 10, 400 million PCs will be impacted


Microsoft is ending software updates for Windows 10

From October 14, Microsoft will end its support for Windows 10, experts believe it will impact around 400 million computers, exposing them to cyber threats. People and groups worldwide are requesting that Microsoft extend its free support. 

According to recent research, 40.8% of desktop users still use Windows 10. This means around 600 million PCs worldwide use Windows 10. Soon, most of them will not receive software updates, security fixes, or technical assistance. 

400 million PCs will be impacted

Experts believe that these 400 million PCs will continue to work even after October 14th because hardware upgrades won’t be possible in such a short duration. 

“When support for Windows 8 ended in January 2016, only 3.7% of Windows users were still using it. Only 2.2% of Windows users were still using Windows 8.1 when support ended in January 2023,” PIRG said. PIGR has also called this move a “looming security disaster.”

What can Windows users do?

The permanent solution is to upgrade to Windows 11. But there are certain hardware requirements when you want to upgrade, and most users will not be able to upgrade as they will have to buy new PCs with compatible hardware. 

But Microsoft has offered few free options for personal users, if you use 1,000 Microsoft Rewards points. Users can also back up their data to the Windows Backup cloud service to get a free upgrade. If this impacts you, you can earn these points via Microsoft services such as Xbox games, store purchases, and Bing searches. But this will take time, and users don’t have it, unfortunately. 

The only viable option for users is to pay $30 (around Rs 2,650) for an Extended Security Updates (ESU) plan, but it will only work for one year.

According to PIGR, “Unless Microsoft changes course, users will face the choice between exposing themselves to cyberattacks or discarding their old computers and buying new ones. The solution is clear: Microsoft must extend free, automatic support.”

Zero-click Exploit AI Flaws to Hack Systems


What if machines, not humans, become the centre of cyber-warfare? Imagine if your device could be hijacked without you opening any link, downloading a file, or knowing the hack happened? This is a real threat called zero-click attacks, a covert and dangerous type of cyber attack that abuses software bugs to hack systems without user interaction. 

The threat

These attacks have used spywares such as Pegasus and AI-driven EchoLeak, and shown their power to attack millions of systems, compromise critical devices, and steal sensitive information. With the surge of AI agents, the risk is high now. The AI-driven streamlining of work and risen productivity has become a lucrative target for exploitation, increasing the scale and attack tactics of breaches.

IBM technology explained how the combination of AI systems and zero-click flaws has reshaped the cybersecurity landscape. “Cybercriminals are increasingly adopting stealthy tactics and prioritizing data theft over encryption and exploiting identities at scale. A surge in phishing emails delivering infostealer malware and credential phishing is fueling this trend—and may be attributed to attackers leveraging AI to scale distribution,” said the IBM report.

A few risks of autonomous AI are highlighted, such as:

  • Threat of prompt injection 
  • Need for an AI firewall
  • Gaps in addressing the challenges due to AI-driven tech

About Zero-click attacks

These attacks do not need user interaction, unlike traditional cyberattacks that relied on social engineering campaigns or phishing attacks. Zero-click attacks exploit flaws in communication or software protocols to gain unauthorized entry into systems.  

Echoleak: An AI-based attack that modifies AI systems to hack sensitive information.

Stagefright: A flaw in Android devices that allows hackers to install malicious code via multimedia messages (MMS), hacking millions of devices.

Pegasus: A spyware that hacks devices through apps such as iMessage and WhatsApp, it conducts surveillance, can gain unauthorized access to sensitive data, and facilitate data theft as well.

How to stay safe?

According to IBM, “Despite the magnitude of these challenges, we found that most organizations still don’t have a cyber crisis plan or playbooks for scenarios that require swift responses.” To stay safe, IBM suggests “quick, decisive action to counteract the faster pace with which threat actors, increasingly aided by AI, conduct attacks, exfiltrate data, and exploit vulnerabilities.”

Microsoft Stops Phishing Scam Which Used Gen-AI Codes to Fool Victims


AI: Boon or Curse?

AI code is in use across sectors for variety of tasks, particularly cybersecurity, and both threat actors and security teams have turned to LLMs for supporting their work. 

Security experts use AI to track and address to threats at scale as hackers are experimenting with AI to make phishing traps, create obfuscated codes, and make spoofed malicious payloads. 

Microsoft Threat Intelligence recently found and stopped a phishing campaign that allegedly used AI-generated code to cover payload within an SVG file. 

About the campaign 

The campaign used a small business email account to send self addressed mails with actual victims coveted in BCC fields, and the attachment looked like a PDF but consisted SVG script content. 

The SVG file consisted hidden elements that made it look like an original business dashboard, while a secretly embedded script changed business words into code that exposed a secret payload. Once opened, the file redirects users to a CAPTCHA gate, a standard social engineering tactical that leads to a scanned sign in page used to steal credentials. 

The hidden process combined business words and formulaic code patterns instead of cryptographic techniques. 

Security Copilot studied the file and listed markers in lines with LLM output. These things made the code look fancy on the surface, however, it made the experts think it was AI generated. 

Combating the threat

The experts used AI powered tools in Microsoft Defender for Office 375 to club together hints that were difficult for hackers to push under the rug. 

The AI tool flagged the rare self-addressed email trend , the unusual SVG file hidden as a PDF, the redirecting to a famous phishing site, the covert code within the file, and the detection tactics deployed on the phishing page. 

The incident was contained, and blocked without much effort, mainly targeting US based organizations, Microsoft, however, said that the attack show how threat actors are aggressively toying with AI to make believable tracks and sophisticated payloads.

AI Turns Personal: Criminals Now Cloning Loved Ones to Steal Money, Warns Police

 



Police forces in the United Kingdom are alerting the public to a surge in online fraud cases, warning that criminals are now exploiting artificial intelligence and deepfake technology to impersonate relatives, friends, and even public figures. The warning, issued by West Mercia Police, stresses upon how technology is being used to deceive people into sharing sensitive information or transferring money.

According to the force’s Economic Crime Unit, criminals are constantly developing new strategies to exploit internet users. With the rapid evolution of AI, scams are becoming more convincing and harder to detect. To help people stay informed, officers have shared a list of common fraud-related terms and explained how each method works.

One of the most alarming developments is the use of AI-generated deepfakes, realistic videos or voice clips that make it appear as if a known person is speaking. These are often used in romance scams, investment frauds, or emotional blackmail schemes to gain a victim’s trust before asking for money.

Another growing threat is keylogging, where fraudsters trick victims into downloading malicious software that secretly records every keystroke. This allows criminals to steal passwords, banking details, and other private information. The software is often installed through fake links or phishing emails that look legitimate.

Account takeover, or ATO, remains one of the most common types of identity theft. Once scammers access an individual’s online account, they can change login credentials, reset security settings, and impersonate the victim to access bank or credit card information.

Police also warned about SIM swapping, a method in which criminals gather personal details from social media or scam calls and use them to convince mobile providers to transfer a victim’s number to a new SIM card. This gives the fraudster control over the victim’s messages and verification codes, making it easier to access online accounts.

Other scams include courier fraud, where offenders pose as police officers or bank representatives and instruct victims to withdraw money or purchase expensive goods. A “courier” then collects the items directly from the victim’s home. In many cases, scammers even ask for bank cards and PIN numbers.

The force’s notice also included reminders about malware and ransomware, malicious programs that can steal or lock files. Criminals may also encourage victims to install legitimate-looking remote access tools such as AnyDesk, allowing them full control of a victim’s device.

Additionally, spoofing — the act of disguising phone numbers, email addresses, or website links to appear genuine, continues to deceive users. Fraudsters often combine spoofing with AI to make fake communication appear even more authentic.

Police advise the public to remain vigilant, verify any unusual requests, and avoid clicking on suspicious links. Anyone seeking more information or help can visit trusted resources such as Action Fraud or Get Safe Online, which provide updates on current scams and guidance on reporting cybercrime.



Gemini in Chrome: Google Can Now Track Your Phone

Gemini in Chrome: Google Can Now Track Your Phone

Is the Gemini browser collecting user data?

A new warning for 2 billion Chrome users, Google has announced that its browser will start collecting “sensitive data” on smartphones. “Starting today, we’re rolling out Gemini in Chrome,” Google said, which will be the “biggest upgrade to Chrome in its history.” The data that can be collected includes the device ID, username, location, search history, and browsing history. 

Agentic AI and browsers

Surfshark investigated the user privacy of AI browsers after Google’s announcement and found that if you use Chrome with Gemini on your smartphone, Google can collect 24 types of data. According to Surfshark, this is bigger than any other agentic AI browsers that have been analyzed. 

For instance, Microsoft’s Edge browser, which has Copilot, only collects half the data compared to Chrome and Gemini. Even Brave, Opera, and Perplexity collect less data. With the Gemini-in-Chrome extension, however, users should be more careful. 

Now that AI is everywhere, a lot of browsers like Firefox, Chrome, and Edge allow users to integrate agentic AI extensions. Although these tools are handy, relying on them can expose your privacy and personal data to third-party companies.

There have been incidents recently where data harvesting resulted from browser extensions, even those downloaded from official stores. 

The new data collection warning comes at the same time as the Gemini upgrade this month, called “Nano Banana.” This new update will also feed on user data. 

According to Android Authority, “Google may be working on bringing Nano Banana, Gemini’s popular image editing tool, to Google Photos. We’ve uncovered a GIF for a new ‘Create’ feature in the Google Photos app, suggesting it’ll use Nano Banana inside the app. It’s unclear when the feature will roll out.”

AI browser concerns

Experts have warned that every photo you upload has a biometric fingerprint which consists of your micro-expressions, unique facial geometry, body proportions, and micro-expressions. The biometric data included device fingerprinting, behavioural biometrics, social network mapping, and GPS coordinates.

Besides this, Apple’s Safari now has anti-fingerprinting technology as the default browsing for iOS 26. However, users should only use their own browser for it to work. For instance, if you use Chrome on an Apple device, it won’t work. Another reason why Apply is advising users to use the Safari browser and not Chrome. 

The Cookie Problem. Should you Accept or Reject?


It is impossible for a user today to surf the internet without cookies, to reject or accept. A pop-up shows in our browser that asks to either “accept all” or “reject all.” In a few cases, a third option allows you to ‘manage preferences’.

The pop-ups can be annoying, and your first reaction is to remove them immediately, and you hit that “accept all” button. But is there anything else you can do?

About cookies

Cookies are small files that are saved by web pages, and they have information for personalizing user experience, particularly for the most visited websites. The cookies may remember your login details, preferred news items, or your shopping preferences based on your browsing history. Cookies also help advertisers target your browsing behaviour via targeted ads. 

Types of cookies

Session cookies: These are for temporary use, like tracking items in your shopping cart. When a browser session is inactive, the cookies are automatically deleted.

Persistent cookies: As the name suggests, these cookies are used for longer periods. For example, saving logging details for accessing emails faster. They can expire from days to years. 

About cookie options

When you are on a website, pop-ups inform you about the “essential cookies” that you can’t opt out of because if you do, you may not be able to use the website's online features, like shopping carts wouldn’t work. But in the settings, you can opt out of “non-essential cookies.”

Three types of non-essential cookies

  1. Functional cookies- Based on browsing experience. (for instance, region or language selection)
  2. Advertising cookies- Third-party cookies, which are used to track user browsing activities. These cookies can be shared with third parties and across domains and platforms that you did not visit.
  3. Analytics cookies- They give details about metrics, such as how visitors use the website

Panama and Vietnam Governments Suffer Cyber Attacks, Data Leaked


Hackers stole government data from organizations in Panama and Vietnam in multiple cyber attacks that surfaced recently.

About the incident

According to Vietnam’s state news outlet, the Cyber Emergency Response Team (VNCERT) confirmed reports of a breach targeting the National Credit Information Center (CIC) that manages credit information for businesses and people, an organization run by the State Bank of Vietnam. 

Personal data leaked

Earlier reports suggested that personal information was exposed due to the attack. VNCERT is now investigating and working with various agencies and Viettel, a state-owned telecom. It said, “Initial verification results show signs of cybercrime attacks and intrusions to steal personal data. The amount of illegally acquired data is still being counted and clarified.”

VNCERT has requested citizens to avoid downloading and sharing stolen data and also threatened legal charges against people who do so.

Who was behind the attack?

The statement has come after threat actors linked to the Shiny Hunters Group and Scattered Spider cybercriminal organization took responsibility for hacking the CIC and stealing around 160 million records. 

Threat actors put up stolen data for sale on the cybercriminal platforms, giving a sneak peek of a sample that included personal information. DataBreaches.net interviewed the hackers, who said they abused a bug in end-of-life software, and didn’t offer a ransom for the stolen information.

CIC told banks that the Shiny Hunters gang was behind the incident, Bloomberg News reported.

The attackers have gained the attention of law enforcement agencies globally for various high-profile attacks in 2025, including various campaigns attacking big enterprises in the insurance, retail, and airline sectors. 

The Finance Ministry of Panama also hit

The Ministry of Economy and Finance in Panam was also hit by a cyber attack, government officials confirmed. “The Ministry of Economy and Finance (MEF) informs the public that today it detected an incident involving malicious software at one of the offices of the Ministry,” they said in a statement. 

The INC ransomware group claimed responsibility for the incident and stole 1.5 terabytes of data, such as emails, budgets, etc., from the ministry.

Hackers Exploit Zero-Day Bug to Install Backdoors and Steal Data


Sitecore bug abused

Threat actors exploited a zero-day bug in legacy Sitecore deployments to install WeepSteel spying malware. 

The bug, tracked as CVE-2025-53690, is a ViewState deserialization flaw caused by the addition of a sample ASP.NET machine key in pre-2017 Sitecore guides. 

A few users reused this key, which allowed hackers who knew about the key to create valid, but infected '_VIEWSTATE' payloads that fooled the server into deserializing and executing them, which led to remote code execution (RCE). 

The vulnerability isn’t a bug in ASP.NET; however, it is a misconfiguration flaw due to the reuse of publicly documented keys that were never intended for production use.

About exploitation

Mandiant experts found the exploit in the wild and said that the threat actors have been exploiting the bug in various multi-stage attacks. Threat actors target the '/sitecore/blocked.Aspx' endpoint, which consists of an unauthorized ViewState field, and get RCE by exploiting CVE-2025-53690. 

The malicious payload threat actors deploy is WeepSteel, a spying backdoor that gets process, system, disk, and network details, hiding its exfiltration as standard ViewState responses. Mendiant experts found the RCE of monitoring commands on compromised systems- tasklist, ipconfig/all, whoami, and netstat-ano. 

Mandiant observed the execution of reconnaissance commands on compromised environments, including whoami, hostname, tasklist, ipconfig /all, and netstat -ano. 

In the next attack stage, the threat actors installed Earthworm (a network tunneling and reverse SOCKS proxy), Dwagent (a remote access tool), and 7-Zip, which is used to make archives of the stolen information. After this, the threat actors increased access privileges by making local administrator accounts ('asp$,' 'sawadmin'), “cached (SAM and SYSTEM hives) credentials dumping, and attempted token impersonating via GoTokenTheft,” Bleeping Computer said. 

Threat actors secured persistence by disabling password expiration, which gave them RDP access and allowed them to register Dwagent as a SYSTEM service. 

“Mandiant recommends following security best practices in ASP.NET, including implementing automated machine key rotation, enabling View State Message Authentication Code (MAC), and encrypting any plaintext secrets within the web.config file,” the company said.

Massive database of 250 million data leaked online for public access


Around a quarter of a billion identity records were left publicly accessible, exposing people located in seven countries- Saudi Arabia, the United Arab Emirates, Canada, Mexico, South Africa, Egypt, and Turkey. 

According to experts from Cybernews, three misconfigured servers, registered in the UAE and Brazil, hosting IP addresses, contained personal information such as “government-level” identity profiles. The leaked data included contact details, dates of birth, ID numbers, and home addresses. 

Cybernews experts who found the leak said the databases seemed to have similarities with the naming conventions and structure, which hinted towards the same source. But they could not identify the actor who was responsible for running the servers. 

“These databases were likely operated by a single party, due to the similar data structures, but there’s no attribution as to who controlled the data, or any hard links proving that these instances belonged to the same party,” they said. 

The leak is particularly concerning for citizens in South Africa, Egypt, and Turkey, as the databases there contained full-spectrum data. 

The leak would have exposed the database to multiple threats, such as phishing campaigns, scams, financial fraud, and abuses.

Currently, the database is not publicly accessible (a good sign). 

This is not the first incident where a massive database holding citizen data (250 million) has been exposed online. Cybernews’ research revealed that the entire Brazilian population might have been impacted by the breach.

Earlier, a misconfigured Elasticsearch instance included the data with details such as sex,  names, dates of birth, and Cadastro de Pessoas Físicas (CPF) numbers. This number is used to identify taxpayers in Brazil. 

Cybercriminals Weaponize AI for Large-Scale Extortion and Ransomware Attacks

 

AI company Anthropic has uncovered alarming evidence that cybercriminals are weaponizing artificial intelligence tools for sophisticated criminal operations. The company's recent investigation revealed three particularly concerning applications of its Claude AI: large-scale extortion campaigns, fraudulent recruitment schemes linked to North Korea, and AI-generated ransomware development. 

Criminal AI applications emerge 

In what Anthropic describes as an "unprecedented" case, hackers utilized Claude to conduct comprehensive reconnaissance across 17 different organizations, systematically gathering usernames and passwords to infiltrate targeted networks.

The AI tool autonomously executed multiple malicious functions, including determining valuable data for exfiltration, calculating ransom demands based on victims' financial capabilities, and crafting threatening language to coerce compliance from targeted companies. 

The investigation also uncovered North Korean operatives employing Claude to create convincing fake personas capable of passing technical coding evaluations during job interviews with major U.S. technology firms. Once successfully hired, these operatives leveraged the AI to fulfill various technical responsibilities on their behalf, potentially gaining access to sensitive corporate systems and information. 

Additionally, Anthropic discovered that individuals with limited technical expertise were using Claude to develop complete ransomware packages, which were subsequently marketed online to other cybercriminals for prices reaching $1,200 per package. 

Defensive AI measures 

Recognizing AI's potential for both offense and defense, ethical security researchers and companies are racing to develop protective applications. XBOW, a prominent player in AI-driven vulnerability discovery, has demonstrated significant success using artificial intelligence to identify software flaws. The company's integration of OpenAI's GPT-5 model resulted in substantial performance improvements, enabling the discovery of "vastly more exploits" than previous methods.

Earlier this year, XBOW's AI-powered systems topped HackerOne's leaderboard for vulnerability identification, highlighting the technology's potential for legitimate security applications. Multiple organizations focused on offensive and defensive strategies are now exploring AI agents to infiltrate corporate networks for defense and intelligence purposes, assisting IT departments in identifying vulnerabilities before malicious actors can exploit them. 

Emerging cybersecurity arms race 

The simultaneous adoption of AI technologies by both cybersecurity defenders and criminal actors has initiated what experts characterize as a new arms race in digital security. This development represents a fundamental shift where AI systems are pitted against each other in an escalating battle between protection and exploitation. 

The race's outcome remains uncertain, but security experts emphasize the critical importance of equipping legitimate defenders with advanced AI tools before they fall into criminal hands. Success in this endeavor could prove instrumental in thwarting the emerging wave of AI-fueled cyberattacks that are becoming increasingly sophisticated and autonomous. 

This evolution marks a significant milestone in cybersecurity, as artificial intelligence transitions from merely advising on attack strategies to actively executing complex criminal operations independently.

Cryptoexchange SwissBorg Suffers $41 Million Theft, Will Reimburse Users


According to SwissBorg, a cryptoexchange platform, $41 million worth of cryptocurrency was stolen from an external wallet used for its SOL earn strategy in a cyberattack that also affected a partner company. The company, which is based in Switzerland, acknowledged the industry reports of the attack but has stressed that the platform was not compromised. 

CEO Cyrus Fazel said that an external finance wallet of a partner was compromised. The incident happened due to hacking of the partner’s API, a process that lets software customers communicate with each other, impacting a single counterparty. It was not a compromise of SwissBorg, the company said on X. 

SwissBorg said that the hack has impacted fewer than 1% of users. “A partner API was compromised, impacting our SOL Earn Program (~193k SOL, <1% of users).  Rest assured, the SwissBorg app remains fully secure and all other funds in Earn programs are 100% safe,” it tweeted. The company said they are looking into the incident with other blockchain security firms. 

All other assets are secure and will compensate for any losses, and user balances in the SwissBorg app are not impacted. SOL Earn redemptions have been stopped as recovery efforts are undergoing. The company has also teamed up with law enforcement agencies to recover the stolen funds. A detailed report will be released after the investigations end. 

The exploit surfaced after a surge in crypto thefts, with more than $2.17 billion already stolen this year. Kiln, the partner company, released its own statement: “SwissBorg and Kiln are investigating an incident that may have involved unauthorized access to a wallet used for staking operations. The incident resulted in Solana funds being improperly removed from the wallet used for staking operations.” 

After the attack, “SwissBorg and Kiln immediately activated an incident response plan, contained the activity, and engaged our security partners,” it said in a blogpost, and that “SwissBorg has paused Solana staking transactions on the platform to ensure no other customers are impacted.”

Fazel posted a video about the incident, informing users that the platform had suffered multiple breaches in the past.

Antrhopic to use your chats with Claude to train its AI


Antrhopic to use your chats with Claude to train its AI

Anthropic announced last week that it will update its terms of service and privacy policy to allow the use of chats for training its AI model “Claude.” Users of all subscription levels- Claude Free, Max, Pro, and Code subscribers- will be impacted by this new update. Anthropic’s new Consumer Terms and Privacy Policy will take effect from September 28, 2025. 

But users who use Claude under licenses such as Work, Team, and Enterprise plans, Claude Education, and Claude Gov will be exempted. Besides this, third-party users who use the Claude API through Google Cloud’s Vertex AI and Amazon Bedrock will also not be affected by the new policy.

If you are a Claude user, you can delay accepting the new policy by choosing ‘not now’, however, after September 28, your user account will be opted in by default to share your chat transcript for training the AI model. 

Why the new policies?

The new policy has come after the genAI boom, thanks to the massive data that has prompted various tech companies to rethink their update policies (although quietly) and update their terms of service. With this, these companies can use your data to train their AI models or give it out to other companies to improve their AI bots. 

"By participating, you’ll help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations. You’ll also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users," Anthropic said.

Concerns around user safety

Earlier this year, in July, Wetransfer, a famous file-sharing platform, fell into controversy when it changed its terms of service agreement, facing immediate backlash from its users and online community. WeTransfer wanted the files uploaded on its platform could be used for improving machine learning models. After the incident, the platform has been trying to fix things by removing “any mention of AI and machine learning from the document,” according to the Indian Express. 

With rising concerns over the use of personal data for training AI models that compromise user privacy, companies are now offering users the option to opt out of data training for AI models.