
AI Models Trained on Incomplete Data Can't Protect Against Threats


In cybersecurity, AI is being hailed as the future of threat hunting. But AI has its hands tied: models are only as good as the data pipelines that feed them, a principle that applies as much to cybersecurity as it does to academic machine learning.

AI-powered threat hunting will only be successful if the data infrastructure is strong too.

Threat hunting powered by AI, automation, or human investigation will only ever be as effective as the data infrastructure it stands on. Security teams sometimes build AI on top of siloed or poorly governed data, and this creates issues later for both the AI and the humans who rely on it. Even sophisticated algorithms can't compensate for inconsistent or incomplete data; AI trained on poor data will produce poor results.

The importance of unified data 

Correlated data anchors the whole operation. It reduces noise and surfaces patterns that manual systems can't.

Correlating and pre-transforming the data makes it easier for LLMs and other AI tools to reason over, and it allows connected components to surface naturally.

The same person may show up under entirely distinct names: as an IAM principal in AWS, a committer in GitHub, and a document owner in Google Workspace. Looking at any one of those signals gives you only a small portion of the truth.

Considered collectively, they give you behavioral clarity. Downloading dozens of items from Google Workspace may look merely odd on its own, but it becomes obviously malicious if the same user also clones dozens of repositories to a personal laptop and spins up a public S3 bucket minutes later.

Finding threats via correlation 

Correlations that previously took hours, or were simply impossible, become instant when data from logs, configurations, code repositories, and identity systems is all housed in one location.

For instance, lateral movement using stolen short-lived credentials frequently passes across multiple systems before being discovered. A compromised developer laptop might assume several IAM roles, launch new instances, and access internal databases. Endpoint logs show the local compromise, but the full extent of the intrusion cannot be demonstrated without IAM and network data.
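
A minimal sketch of this correlation step is shown below, joining events from three systems under one canonical identity. The identity map, event fields, and "three or more systems" threshold are hypothetical illustrations, not a production detection rule.

```python
# Correlate events from AWS, GitHub, and Google Workspace by canonical identity.
from collections import defaultdict

# Map per-system identifiers to one canonical identity (hypothetical values).
identity_map = {
    "arn:aws:iam::123456789012:user/jdoe": "jane.doe",
    "jdoe-gh": "jane.doe",
    "jane.doe@example.com": "jane.doe",
}

events = [
    {"source": "gworkspace", "actor": "jane.doe@example.com", "action": "bulk_download"},
    {"source": "github", "actor": "jdoe-gh", "action": "clone_many_repos"},
    {"source": "aws", "actor": "arn:aws:iam::123456789012:user/jdoe", "action": "make_bucket_public"},
]

# Group events by canonical identity so cross-system patterns surface.
timeline = defaultdict(list)
for e in events:
    who = identity_map.get(e["actor"], e["actor"])
    timeline[who].append((e["source"], e["action"]))

# Flag identities whose suspicious activity spans three or more systems.
for who, actions in timeline.items():
    if len({source for source, _ in actions}) >= 3:
        print(f"Review {who}: {actions}")
```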


How Spyware Steals Your Data Without You Knowing About It


You might not be aware that your smartphone has spyware, which poses a risk to your privacy and personal security. However, what exactly is spyware? 

This type of malware, often presented as a trustworthy mobile application, can steal your data, track your whereabouts, record conversations, monitor your social media activity, take screenshots of your activities, and more. It could end up on your phone via phishing, a phony mobile application, or a once-reliable app that was updated over the air to become an information stealer.

Types of spyware

Nuisanceware

Nuisanceware is frequently bundled with legitimate apps. It modifies your homepage or search engine settings, interrupts your web browsing with pop-ups, and may collect your browsing information to sell to advertising networks and agencies.

Although often classed as malvertising, nuisanceware is typically not harmful or a threat to your fundamental security. Rather, many of these packages focus on generating revenue by persuading users to view or click on advertisements.

Generic mobile spyware

Additionally, there is generic mobile spyware. These strains collect information from the operating system and clipboard, along with potentially valuable items like account credentials or bitcoin wallet data. This spyware isn't always targeted and may be employed in spray-and-pray phishing attempts.

Stalkerware

More advanced spyware is sometimes referred to as stalkerware. This unethical and frequently dangerous software can occasionally be found on desktop computers, but it is more commonly installed on phones.

The infamous Pegasus

Lastly, there is government-grade commercial spyware. One of the best-known examples is Pegasus, which is sold to governments as a tool for law enforcement and counterterrorism.

Pegasus has been discovered on smartphones owned by lawyers, journalists, activists, and political dissidents. Because commercial-grade spyware is expensive and requires careful victim selection and targeting, it is unlikely to affect you unless you belong to a group that governments with ethical dilemmas take a particular interest in.

How to know if spyware is on your phone?

There are signs that you may be the target of a spyware or stalkerware operator.

Receiving strange or unexpected emails or messages on social media could be a sign of a spyware infection attempt. You should remove these without downloading any files or clicking any links.

User Privacy: Is WhatsApp Not Safe to Use?


WhatsApp allegedly collects data

According to recent attacks on WhatsApp, the mega-messenger from Meta is allegedly collecting user data to generate ad revenue. WhatsApp strongly denies these fresh accusations, but it didn't help that a message of its own appeared to imply the same.

The allegations 

The recent attacks have two prominent origins. Few voices carry as far as Elon Musk's, particularly on X, the platform he owns. Musk asserted on the Joe Rogan Experience that "WhatsApp knows enough about what you're texting to know what ads to show you. That is a serious security flaw."

These so-called "hooks for advertising" are typically thought to rely on metadata, which includes information on who messages whom, when, and how frequently, as well as other information from other sources that is included in a user's profile.  

End-to-end encryption 

The message content itself is shielded by end-to-end encryption, which is the default setting for all 3 billion WhatsApp users. Signal's open-source encryption protocol, which the Meta platform adopted and modified for its own use, is the foundation of WhatsApp's security. So, in light of these new attacks, do you suddenly need to stop using WhatsApp?

In reality, WhatsApp's content is completely encrypted. There has never been any proof that Meta, WhatsApp, or anybody else can read the content itself. However, the platform you are utilizing is controlled by Meta, and it is aware of your identity. It does gather information on how you use the platform.  

How user data is used 

Additionally, WhatsApp shares information with Meta so that Meta can "show relevant offers/ads." Signal has a small fraction of WhatsApp's user base, but it does not gather metadata in the same manner. Consider using Signal instead for sensitive content. Steer clear of Telegram, which is not end-to-end encrypted by default, and RCS, which is not yet cross-platform encrypted.

Remember that end-to-end encryption only safeguards your data while it is in transit; it does nothing for the security of your content on the device itself. Anyone who controls your iPhone or Android can read all of your messages, end-to-end encrypted or not.

TP-Link Routers May Get Banned in US Due to Alleged Links With China


TP-Link routers may soon be banned in the US. The chance of a ban is real, as various federal agencies have backed the proposal.

Alleged links with China

The news first came in December last year. According to the WSJ, officials at the Departments of Justice, Commerce, and Defense had launched investigations into the company due to national security threats from China. 

Currently, the proposal has gotten interagency approval. According to the Washington Post, "Commerce officials concluded TP-Link Systems products pose a risk because the US-based company's products handle sensitive American data and because the officials believe it remains subject to jurisdiction or influence by the Chinese government." 

TP-Link's connections to the Chinese government, however, are not confirmed, and the company has denied any ongoing ties to China.

About TP-Link routers 

The company was founded in China in 1996. After the October 2024 investigation, the company split into two: TP-Link Systems and TP-Link Technologies. "TP-Link's unusual degree of vulnerabilities and required compliance with [Chinese] law are in and of themselves disconcerting. When combined with the [Chinese] government's common use of [home office] routers like TP-Link to perpetrate extensive cyberattacks in the United States, it becomes significantly alarming" the officials wrote in October 2024. 

The company has dominated the US router market since the COVID pandemic, rising from 20% of total router sales in 2019 to 65% in 2025.

Why the investigation?

The US DoJ is investigating whether TP-Link engaged in predatory pricing, artificially lowering its prices to kill off the competition.

The potential ban follows an interagency review and is being handled by the Department of Commerce. Experts say it could yet be dropped as part of the Trump administration's ongoing negotiations with China.

Hackers Exploit AI Stack in Windows to Deploy Malware


The artificial intelligence (AI) stack built into Windows can act as a channel for malware transmission, a recent study has demonstrated.

Using AI in malware

In a year when ingenious and sophisticated prompt injection tactics have been proliferating, security researcher hxr1 discovered a far more conventional method of weaponizing ubiquitous AI. In a proof-of-concept (PoC) provided exclusively to Dark Reading, he detailed a living-off-the-land (LotL) attack that utilizes trusted files from the Open Neural Network Exchange (ONNX) to bypass security engines.

Impact on Windows

Cybersecurity programs are only as effective as their designers make them. They may detect excessive amounts of data exfiltrating from a network, or an unfamiliar .exe file launching, because these are known signs of suspicious activity. But if malware arrives on a system in a way they are unfamiliar with, they are unlikely to notice it.

That's what makes AI so difficult to defend. New software, procedures, and systems that incorporate AI capabilities create new, invisible channels for the spread of cyberattacks.

Why AI in malware is a problem

Since 2018, the Windows operating system has gradually added features that enable apps to carry out AI inference locally, without requiring a connection to a cloud service. Windows Hello, Photos, and Office use this built-in AI for facial recognition, object identification, and productivity tasks, respectively. They do so by calling the Windows Machine Learning (ML) application programming interface (API), which loads ML models as ONNX files.

ONNX files are automatically trusted by Windows and security software. Why wouldn't they be? Malware has long been found in EXEs, PDFs, and other formats, but no threat actors in the wild have yet shown that they plan to, or are able to, use neural networks as weapons. There are, however, plenty of ways to make it feasible.

Attack tactic

Planting a malicious payload in the metadata of a neural network is the simplest way to infect it. The trade-off is that the payload would sit there in plain text, making it much easier for a security tool to detect.

Piecemeal malware embedding among the model's named nodes, inputs, and outputs would be more challenging but more covert. Alternatively, an attacker may utilize sophisticated steganography to hide a payload inside the neural network's own weights.

All three approaches will work as long as a loader is nearby that can call the necessary Windows APIs to unpack the payload, reassemble it in memory, and run it. The latter two approaches are also very covert: trying to reconstruct a fragmented payload from a neural network would be like trying to reassemble a needle from bits of it spread through a haystack.
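
On the defensive side, the metadata variant at least leaves an inspectable trail. Below is a minimal defensive sketch, assuming the `onnx` Python package; the file name and size threshold are illustrative assumptions, not a vetted detection rule.

```python
# Flag suspiciously large metadata entries in an ONNX model file.
# Requires the `onnx` package (pip install onnx); "model.onnx" is hypothetical.
import onnx

SUSPICIOUS_BYTES = 4096  # metadata values are normally short, human-readable strings

model = onnx.load("model.onnx")
for prop in model.metadata_props:  # key/value metadata entries in the model
    if len(prop.value) > SUSPICIOUS_BYTES:
        print(f"Unusually large metadata entry '{prop.key}' "
              f"({len(prop.value)} bytes) - inspect for an embedded payload")
```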

Chinese Hackers Attack Prominent U.S. Organizations


Chinese cyber-espionage groups have attacked U.S. organizations with links to international agencies. This has become a persistent problem for the U.S. as Chinese state actors keep up their attacks. The attackers were trying to build a steady presence inside the target network.

Series of attacks against U.S. organizations 

Earlier this year, a breach of a prominent U.S. non-profit working in advocacy demonstrated advanced techniques and tools shared among Chinese cyber-espionage groups such as APT41, Space Pirates, and Kelp.

The attackers struck again in April with various malicious probes checking both internal network access and internet connectivity, particularly targeting a system at 192.0.0.88. A variety of tactics and protocols were used, showing both determination and the technical adaptability needed to reach particular internal resources.

Attack tactics 

Following the connectivity tests, the hackers used tools like netstat for network reconnaissance and created an automated task via the Windows command-line tools.

This task ran a genuine MSBuild.exe application that processed an outbound.xml file to inject code into csc.exe, which then connected to a command-and-control (C2) server.

These steps point to automation (through scheduled tasks) and persistence via system-level privileges, increasing the complexity of the compromise and the potential damage.
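
Defenders can hunt for this pattern by auditing scheduled tasks for living-off-the-land binaries such as MSBuild.exe. Here is a minimal sketch for Windows built on the `schtasks` utility; the "Task To Run" column name assumes an English-language system, and matching a single binary is an illustrative heuristic, not a complete rule.

```python
# List scheduled tasks on Windows and flag any whose action invokes MSBuild.exe.
import csv
import io
import subprocess

out = subprocess.run(
    ["schtasks", "/query", "/fo", "CSV", "/v"],  # verbose CSV listing of all tasks
    capture_output=True, text=True, check=True,
).stdout

for row in csv.DictReader(io.StringIO(out)):
    action = row.get("Task To Run", "")
    if "msbuild.exe" in action.lower():  # a build tool rarely belongs in a scheduled task
        print(f"Suspicious task: {row.get('TaskName')} -> {action}")
```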

Espionage methods 

The techniques and toolkit show traces of various Chinese espionage groups. The hackers weaponized genuine software components, a technique called DLL sideloading, abusing vetysafe.exe (a VipreAV component signed by Sunbelt Software, Inc.) to load a malicious payload named sbamres.dll.

This tactic was earlier seen in campaigns linked to Earth Longzhi and Space Pirates, the former also known as an APT41 subgroup.

Notably, the same tactic was found in cases connected to Kelp, showing the extensive tool-sharing within Chinese APTs.

The Risks of AI-powered Web Browsers for Your Privacy


AI and web browser

The future of browsing is AI, and it watches everything you do online. Threats to your security can also be dangers to your privacy.

Threat for privacy and security

Security and privacy aren’t always the same thing, but there’s a reason that people who specialize in one care deeply about the other. 

Recently, following the launch of Perplexity's Comet browser and OpenAI's ChatGPT-powered Atlas, the Brave Software team disclosed that AI-powered browsers can follow malicious prompts that hide in images on the web.

AI-powered browsers: good or bad?

We have long known that AI-powered browsers (and AI browser add-ons for other browsers) are vulnerable to a type of attack known as a prompt injection attack. But this is the first time we've seen the browser execute commands that are concealed from the user. 

That is the security side. Experts who evaluated the Comet browser also discovered that it records everything you do while using it, including search and browsing history as well as information about the URLs you visit.

What next?

In short, while new AI-powered browser tools do fulfill the promise of integrating your favorite chatbot into your web browsing experience, their developers have not yet addressed the privacy and security threats they pose. Be careful when using these.


Atlas: AI-powered web browser

Atlas, an AI-powered web browser built around ChatGPT, is meant to do more than just let users navigate the internet. It can read, summarize, and even complete online tasks for the user, such as arranging appointments or finding lodging.

Atlas looked for social media posts and other websites that mentioned or discussed the story. For the New York Times piece, it created a summary using information from other publications, including The Guardian, The Washington Post, Reuters, and The Associated Press, all of which except Reuters have partnerships or agreements with OpenAI.

Microsoft Warns Windows 10 Users: Hackers Target Outdated Systems


Modern cyberattacks rarely target the crown jewels directly. Instead, they look for flaws in the systems that guard the keys: obsolete operating systems, aging infrastructure, and unsupported endpoints. For technical decision makers (TDMs), these blind spots are more than an IT inconvenience. They pose significant hazards to data security, compliance, and enterprise control.

Dangers of outdated Windows 10

With the end of support for Windows 10 approaching, many businesses are asking themselves how many of their devices, servers, or endpoints are already (or will soon be) unsupported. More importantly, what hidden weaknesses does this introduce into compliance, auditability, and access governance?

Most IT leaders understand the urge to keep outdated systems running for a little longer, patch what they can, and get the most value out of the existing infrastructure.

Importance of system updates

However, without regular upgrades, endpoint security technologies lose their effectiveness, audit trails become more difficult to maintain, and compliance reporting becomes a game of guesswork. 

Research confirms the magnitude of the problem. According to Microsoft's newest Digital Defense Report, more than 90% of ransomware attacks that reach the encryption stage originate on unmanaged devices that lack sufficient security controls.

Unsupported systems frequently fall into this category, making them ideal candidates for exploitation. Furthermore, because these vulnerabilities exist at the infrastructure level rather than in individual files, they are frequently undetectable until an incident happens.

Attack tactic

Hackers don't have to break your defense. They just need to wait for you to leave a window open. With the end of support for Windows 10 approaching, hackers are already predicting that many businesses will fall behind. 

Waiting carries a high cost. Breaches on unsupported infrastructure can result in higher cleanup costs, longer downtime, and greater reputational harm than attacks on supported systems. Because compliance frameworks evolve quicker than legacy systems, staying put risks falling behind on standards that influence contracts, customer trust, and potentially your ability to do business.

What next?

Although unsupported systems may appear to be small technical defects, they quickly escalate into enterprise-level threats. The longer they remain in play, the larger the gap they create in endpoint security, compliance, and overall data security. Addressing even one unsupported system now can drastically reduce risk and give IT management more peace of mind.

TDMs have a clear choice: modernize proactively or leave the door open for the next assault.

AWS Outage Exposes the Fragility of Centralized Messaging Platforms


A recently recorded outage at Amazon Web Services (AWS) disrupted several major online services worldwide, including privacy-focused communication apps such as Signal. The event has sparked renewed discussion about the risks of depending on centralized systems for critical digital communication.

Signal is known globally for its strong encryption and commitment to privacy. However, its centralized structure means that all its operations rely on servers located within a single jurisdiction and primarily managed by one cloud provider. When that infrastructure fails, the app’s global availability is affected at once. This incident has demonstrated that even highly secure applications can experience disruption if they depend on a single service provider.

According to experts working on decentralized communication technology, this kind of breakdown reveals a fundamental flaw in the way most modern communication apps are built. They argue that centralization makes systems easier to control but also easier to compromise. If the central infrastructure goes offline, every user connected to it is impacted simultaneously.

Developers behind the Matrix protocol, an open-source network for decentralized communication, have long emphasized the need for more resilient systems. They explain that Matrix allows users to communicate without relying entirely on the internet or on a single server. Instead, the protocol enables anyone to host their own server or connect through smaller, distributed networks. This decentralization offers users more control over their data and ensures communication can continue even if a major provider like AWS faces an outage.

The first platform built on Matrix, Element, was launched in 2016 by a UK-based team with the aim of offering encrypted communication for both individuals and institutions. For years, Element’s primary focus was to help governments and organizations secure their communication systems. This focus allowed the project to achieve financial stability while developing sustainable, privacy-preserving technologies.

Now, with growing support and new investments, the developers behind Matrix are working toward expanding the technology for broader public use. Recent funding from European institutions has been directed toward developing peer-to-peer and mesh network communication, which could allow users to exchange messages without relying on centralized servers or continuous internet connectivity. These networks create direct device-to-device links, potentially keeping users connected during internet blackouts or technical failures.

Mesh-based communication is not a new idea. Previous applications like FireChat allowed people to send messages through Bluetooth or Wi-Fi Direct during times when the internet was restricted. The concept gained popularity during civil movements where traditional communication channels were limited. More recently, other developers have experimented with similar models, exploring ways to make decentralized communication more user-friendly and accessible.

While decentralized systems bring clear advantages in terms of resilience and independence, they also face challenges. Running individual servers or maintaining peer-to-peer networks can be complex, requiring technical knowledge that many everyday users might not have. Developers acknowledge that reaching mainstream adoption will depend on simplifying these systems so they work as seamlessly as centralized apps.

Other privacy-focused technology leaders have also noted the implications of the AWS outage. They argue that relying on infrastructure concentrated within a few major U.S. providers poses strategic and privacy risks, especially for regions like Europe that aim to maintain digital autonomy. Building independent, regionally controlled cloud and communication systems is increasingly being seen as a necessary step toward safeguarding user privacy and operational security.

The recent AWS disruption serves as a clear warning. Centralized systems, no matter how secure, remain vulnerable to large-scale failures. As the digital world continues to depend heavily on cloud-based infrastructure, developing decentralized and distributed alternatives may be key to ensuring communication remains secure, private, and resilient in the face of future outages.


Surveillance Pricing: How Technology Decides What You Pay


Imagine walking into your local supermarket to buy a two-litre bottle of milk. You pay $3, but the person ahead of you pays $3.50, and the next shopper pays only $2. While this might sound strange, it reflects a growing practice known as surveillance pricing, where companies use personal data and artificial intelligence (AI) to determine how much each customer should pay. The practice is already widespread, and since we are directly subjected to it, it is worth understanding how it works.


What is surveillance pricing?

Surveillance pricing refers to the use of digital tracking and AI to set individualised prices based on consumer behaviour. By analysing a person’s online activity, shopping habits, and even technical details like their device or location, retailers estimate each customer’s “pain point”, the maximum amount they are likely to pay for a product or service.

A recent report from the U.S. Federal Trade Commission (FTC) highlighted that businesses can collect such information through website pixels, cookies, account registrations, or email sign-ups. These tools allow them to observe browsing time, clicks, scrolling speed, and even mouse movements. Together, these insights reveal how interested a shopper is in a product, how urgent their need may be, and how much they can be charged without hesitation.


Growing concerns about fairness

In mid-2024, Delta Air Lines disclosed that a small percentage of its domestic ticket pricing was already determined using AI, with plans to expand this method to more routes. The revelation led U.S. lawmakers to question whether customer data was being used to charge certain passengers higher fares. Although Delta stated that it does not use AI for “predatory or discriminatory” pricing, the issue drew attention to how such technology could reshape consumer costs.

Former FTC Chair Lina Khan has also warned that some businesses can predict each consumer’s willingness to pay by analysing their digital patterns. This ability, she said, could allow companies to push prices to the upper limit of what individuals can afford, often without their knowledge.


How does it work?

AI-driven pricing systems use vast amounts of data, including login details, purchase history, device type, and location, to classify shoppers by "price sensitivity." The software then tests different price levels to see which one yields the highest profit.
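
To make the mechanism concrete, here is a toy sketch of that price-testing loop. The demand curve, price grid, and visitor count are fabricated purely for illustration; real systems would first segment shoppers by their inferred price sensitivity.

```python
# Toy price test: show visitors random candidate prices, keep the top earner.
import random

def simulated_purchase(price: float) -> bool:
    # Pretend shoppers buy less often as the price rises (fabricated demand curve).
    return random.random() < max(0.0, 1.0 - price / 5.0)

prices = [2.0, 2.5, 3.0, 3.5]
revenue = {p: 0.0 for p in prices}

for _ in range(10_000):  # each simulated visitor sees one candidate price
    p = random.choice(prices)
    if simulated_purchase(p):
        revenue[p] += p

best = max(revenue, key=revenue.get)
print(f"Highest-revenue price: ${best:.2f}")
```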

The FTC’s surveillance pricing study revealed several real-world examples of this practice:

  1. Encouraging hesitant users: A betting website might detect when a visitor is about to leave and display new offers to convince them to stay.
  2. Targeting new buyers: A car dealership might identify first-time buyers and offer them different financing options or deals.
  3. Detecting urgency: A parent choosing fast delivery for baby products may be deemed less price-sensitive and offered fewer discounts.
  4. Withholding offers from loyal customers: Regular shoppers might be excluded from promotions because the system expects them to buy anyway.
  5. Monitoring engagement: If a user watches a product video for longer, the system might interpret it as a sign they are willing to pay more.


Real-world examples and evidence

Ride-hailing platforms have long faced questions about this kind of data-driven pricing. In 2016, Uber’s former head of economic research noted that users with low battery life were more likely to accept surge pricing. A 2023 Belgian newspaper investigation later reported small differences in Uber fares depending on a phone’s battery level. Uber denied that battery status affects fares, saying its prices depend only on driver supply and ride demand.


Is this new?

The concept itself isn’t new. Dynamic pricing has existed for decades, but digital surveillance has made it far more sophisticated. In the early 2000s, Amazon experimented with varying prices for DVDs based on browsing data, sparking backlash from consumers who discovered the differences. Similarly, the UK’s Norwich Union once used satellite tracking for a “Pay As You Drive” car insurance model, which was discontinued after privacy concerns.


The future of pricing

Today’s combination of big data and AI allows retailers to create precise, individualised pricing models that adjust instantly. Experts warn this could undermine fair competition, reduce transparency, and widen inequality between consumers. Regulators like the FTC are now studying these systems closely to understand their impact on market fairness and consumer privacy.

For shoppers, awareness is key. Comparing prices across devices, clearing cookies, and using privacy tools can help reduce personal data tracking. As AI continues to shape how businesses price their products, understanding surveillance pricing is becoming essential to protect both privacy and pocket.


Incognito Mode Is Not Private, Use These Instead


Incognito, or private mode, is a popular privacy feature in web browsers. Users may assume that it guarantees privacy while surfing the web, letting them browse without restrictions, with everything disappearing when the tab is closed.

With no trace of your browsing left in the history, you may believe you are safe. However, this is not entirely accurate: Incognito has its limits and doesn't guarantee private browsing. That doesn't mean the feature is useless, though.

What Incognito mode does

Private browsing mode is designed to keep your local browsing history secret. When a user opens an incognito window, the browser starts a separate session and temporarily stores data such as history and cookies within it. Once the private session is closed, that temporary information is deleted and never appears in your browsing history.

What Incognito mode can’t do

Incognito mode helps keep your browsing data hidden from other people who use your device.

A common misconception among users is that it makes them invisible on the internet and hides everything they browse online. But that is not true.

Why Incognito mode doesn't guarantee privacy

1. It doesn’t hide user activity from the Internet Service Provider (ISP)

Every request you send travels via your ISP's network (encrypted DNS providers are a partial exception). Your ISP can monitor your activity on its network, including every domain you visit and any unencrypted traffic. If you are on a corporate Wi-Fi network, your network admin can likewise see the websites you visit.
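
The "encrypted DNS" exception above refers to protocols such as DNS-over-HTTPS (DoH), which wrap DNS lookups in ordinary HTTPS so the ISP cannot read the queried names. A small illustration using Cloudflare's public DoH resolver and the `requests` package:

```python
# Resolve a domain over DNS-over-HTTPS so the lookup is hidden from the ISP.
import requests

resp = requests.get(
    "https://cloudflare-dns.com/dns-query",       # Cloudflare's public DoH endpoint
    params={"name": "example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=10,
)
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["data"])  # the query and reply travel inside HTTPS
```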

2. Incognito mode doesn’t stop websites from tracking users

When you use Incognito, cookies are deleted at the end of the session, but websites can still track your online activity via device and browser fingerprinting. Sites build user profiles from unique device characteristics such as screen resolution and installed extensions.

3. Incognito mode doesn’t hide your IP address

If you are blocked from a website, using Incognito mode won't make it accessible, because it can't change your IP address.

Should you use Incognito mode?

Incognito mode may give a false sense of privacy, but it doesn't ensure it. It is mainly helpful on shared devices.

What can you use?

There are other options to protect your online privacy, such as:

  1. Using a virtual private network (VPN)
  2. Privacy-focused browsers: Browsers such as Tor are by default designed to block trackers, ads, and fingerprinting.
  3. Using private search engines: Instead of Google and Bing, you can use private search engines such as DuckDuckGo and Startpage.

ICE Uses Fake Cell Towers to Spy on Users

Federal contract to spy

Earlier this year, the US Immigration and Customs Enforcement (ICE) paid $825,000 to a manufacturer of vehicles fitted with law enforcement technology, including fake cellphone towers, known as cell-site simulators, used to surveil phones.

The contract was made with a Maryland-based company called TechOps Specialty Vehicles (TOSV). TOSV signed another contract with ICE for $818,000 last year during the Biden administration. 

The latest federal contract shows how such technologies are being used to support the Trump administration's deportation crackdown.

In September 2025, Forbes discovered an unsealed search warrant that revealed ICE used a cell-site simulator to spy on a person who was allegedly a member of a criminal gang in the US, and was asked to leave the US in 2023.  Forbes also reported on finding a contract for "cell site simulator." 

About cell-site simulators

Cell-site simulators were once called "stingrays." Today they are more commonly known as IMSI catchers, after the International Mobile Subscriber Identity, a unique number used to identify every cellphone user in the world.

These tools mimic a cellphone tower and fool every device in nearby range into connecting to them, allowing law enforcement to identify the real-world location of phone owners. Some cell-site simulators can also intercept texts, internet traffic, and regular calls.

Authorities have been using stingray devices for more than a decade. Their use is controversial, as authorities sometimes deploy them without a warrant.

According to experts, these devices also sweep up innocent people, and their workings are kept secret because the authorities operate under strict non-disclosure agreements. ICE has been infamous for using cell-site simulators: a document revealed in 2020 that ICE used them 466 times between 2017 and 2019.

Paying Ransom Does Not Guarantee Data Restoration: Report


A new report claims that smaller firms continue to face danger in the digital domain, as ransomware persistently targets organizations. Hiscox's Cyber Readiness Report surveyed 6,000 businesses, and over 59% reported being hit by a cyberattack in the last year.

Financial losses were a major factor, and most organizations also reported operational failures, reputational damage, and staff losses. "Outdated operating systems and applications often contain security vulnerabilities that cyber attackers can exploit. Even with robust defenses, there is always a risk of data loss or ransomware attacks," the report said.

Problems with ransomware payments

Ransomware is the top problem: the survey suggests that around 27% of respondents suffered damage, and 80% of those agreed to pay a ransom.

Despite the payments, recovery was not guaranteed: only 60% could restore their data, and some hackers demanded payment again. The report highlights that paying a ransom doesn't ensure data recovery and can even invite further extortion.

Transparency needed

There is an urgent need for transparency, with 71% of respondents agreeing that companies should disclose ransom payments and the amounts paid.

The report notes that criminal groups are increasingly targeting sensitive business data such as contracts, executive emails, and financial information. "Cyber criminals are now much more focused on stealing sensitive business data. Once stolen, they demand payment…pricing threats based on reputational damage,” the report said. This shift has exposed gaps in businesses’ data loss prevention measures that criminals exploit easily.  

AI threat

Respondents also reported AI-related incidents, in which threat actors exploited AI weaknesses such as deepfakes and vulnerabilities in third-party AI apps. Even so, around 65% still perceive AI as an opportunity rather than a threat. The report highlights new risks that business leaders may not yet fully understand.

According to the report, “Even with robust defenses, there is always a risk of data loss or ransomware attacks. Frequent, secure back-ups – stored either offline or in the cloud – ensure that businesses can recover quickly if the worst happens.”

Zero-click Attacks Exploit AI Flaws to Hack Systems


What if machines, not humans, became the centre of cyber-warfare? Imagine your device being hijacked without you opening a link, downloading a file, or even knowing the hack happened. This is the real threat of zero-click attacks, a covert and dangerous type of cyberattack that abuses software bugs to compromise systems without any user interaction.

The threat

These attacks have used spyware such as Pegasus and the AI-driven EchoLeak, demonstrating their power to attack millions of systems, compromise critical devices, and steal sensitive information. With the surge of AI agents, the risk is higher than ever: the AI-driven streamlining of work and boost in productivity have become a lucrative target, expanding the scale and tactics of breaches.

IBM's research explains how the combination of AI systems and zero-click flaws has reshaped the cybersecurity landscape. "Cybercriminals are increasingly adopting stealthy tactics and prioritizing data theft over encryption and exploiting identities at scale. A surge in phishing emails delivering infostealer malware and credential phishing is fueling this trend—and may be attributed to attackers leveraging AI to scale distribution," said the IBM report.

A few risks of autonomous AI are highlighted, such as:

  • Threat of prompt injection 
  • Need for an AI firewall
  • Gaps in addressing the challenges due to AI-driven tech

About Zero-click attacks

Unlike traditional cyberattacks that rely on social engineering or phishing, these attacks need no user interaction. Zero-click attacks exploit flaws in communication software or protocols to gain unauthorized entry into systems.

EchoLeak: An AI-based attack that manipulates AI systems into leaking sensitive information.

Stagefright: A flaw in Android devices that allowed hackers to run malicious code via multimedia messages (MMS), putting millions of devices at risk.

Pegasus: Spyware that compromises devices through apps such as iMessage and WhatsApp; it conducts surveillance, gains unauthorized access to sensitive data, and facilitates data theft.

How to stay safe?

According to IBM, “Despite the magnitude of these challenges, we found that most organizations still don’t have a cyber crisis plan or playbooks for scenarios that require swift responses.” To stay safe, IBM suggests “quick, decisive action to counteract the faster pace with which threat actors, increasingly aided by AI, conduct attacks, exfiltrate data, and exploit vulnerabilities.”

Oura Users Express Concern Over Pentagon Partnership Amid Privacy Debates


Oura, the Finnish company known for its smart health-tracking rings, has recently drawn public attention after announcing a new manufacturing facility in Texas aimed at meeting the needs of the U.S. Department of Defense (DoD). The partnership, which has existed since 2019, became more widely discussed following the August 27 announcement, leading to growing privacy concerns among users.

The company stated that the expansion will allow it to strengthen its U.S. operations and support ongoing defense-related projects. However, the revelation that the DoD is Oura’s largest enterprise customer surprised many users. Online discussions on Reddit and TikTok quickly spread doubts about how user data might be handled under this partnership.

Concerns escalated further when users noticed that Palantir Technologies, a software company known for its government data contracts, was listed as a technology partner in Oura’s enterprise infrastructure. Some users interpreted this connection as a potential risk to personal privacy, particularly those using Oura rings to track reproductive health and menstrual cycles through its integration with the FDA-approved Natural Cycles app.

In response, Oura’s CEO Tom Hale issued a clarification, stating that the partnership does not involve sharing individual user data with the DoD or Palantir. According to the company, the defense platform uses a separate system, and only data from consenting service members can be accessed. Oura emphasized that consumer data and enterprise data are stored and processed independently.

Despite these assurances, some users remain uneasy. Privacy advocates and academics note that health wearables often operate outside strict medical data regulations, leaving gaps in accountability. Andrea Matwyshyn, a professor of law and engineering at Penn State, explained that wearable data can sometimes be repurposed in ways users do not anticipate, such as in insurance or legal contexts.

For many consumers, especially women tracking reproductive health, the issue goes beyond technical safeguards. It reflects growing mistrust of how private companies and governments may collaborate over sensitive biometric data. The discussion also highlights the shifting public attitude toward data privacy, as more users begin to question who can access their most personal information.

Oura maintains that it is committed to protecting user privacy and supporting health monitoring “for all people, including service members.” Still, the controversy serves as a reminder that transparency and accountability remain central to consumer trust in an age where personal data has become one of the most valuable commodities.



Why Businesses Must Act Now to Prepare for a Quantum-Safe Future


As technology advances, quantum computing is no longer a distant concept — it is steadily becoming a real-world capability. While this next-generation innovation promises breakthroughs in fields like medicine and materials science, it also poses a serious threat to cybersecurity. The encryption systems that currently protect global digital infrastructure may not withstand the computing power quantum technology will one day unleash.

Data is now the most valuable strategic resource for any organization. Every financial transaction, business operation, and communication depends on encryption to stay secure. However, once quantum computers reach full capability, they could break the mathematical foundations of most existing encryption systems, exposing sensitive data on a global scale.


The urgency of post-quantum security

Post-Quantum Cryptography (PQC) refers to encryption methods designed to remain secure even against quantum computers. Transitioning to PQC will not be an overnight task. It demands re-engineering of applications, operating systems, and infrastructure that rely on traditional cryptography. Businesses must begin preparing now, because once the threat materializes, it will be too late to react effectively.

Experts warn that quantum computing will likely follow the same trajectory as artificial intelligence. Initially, the technology will be accessible only to a few institutions. Over time, as more companies and researchers enter the field, it will become cheaper and widely available, including to cybercriminals. Preparing early is the only viable defense.


Governments are setting the pace

Several governments and standard-setting bodies have already started addressing the challenge. The United Kingdom’s National Cyber Security Centre (NCSC) has urged organizations to adopt quantum-resistant encryption by 2035. The European Union has launched its Quantum Europe Strategy to coordinate member states toward unified standards. Meanwhile, the U.S. National Institute of Standards and Technology (NIST) has finalized its first set of post-quantum encryption algorithms, which serve as a global reference point for organizations looking to begin their transition.

As these efforts gain momentum, businesses must stay informed about emerging regulations and standards. Compliance will require foresight, investment, and close monitoring of how different jurisdictions adapt their cybersecurity frameworks.

To handle the technical and organizational scale of this shift, companies can establish internal Centers of Excellence (CoEs) dedicated to post-quantum readiness. These teams bring together leaders from across departments (IT, compliance, legal, product development, and procurement) to map vulnerabilities, identify dependencies, and coordinate upgrades.

The CoE model also supports employee training, helping close skill gaps in quantum-related technologies. By testing new encryption algorithms, auditing existing infrastructure, and maintaining company-wide communication, a CoE ensures that no critical process is overlooked.


Industry action has already begun

Leading technology providers have started adopting quantum-safe practices. For example, Red Hat’s Enterprise Linux 10 is among the first operating systems to integrate PQC support, while Kubernetes has begun enabling hybrid encryption methods that combine traditional and quantum-safe algorithms. These developments set a precedent for the rest of the industry, signaling that the shift to PQC is not a theoretical concern but an ongoing transformation.
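
To illustrate the hybrid idea in miniature: derive the session key from both a classical and a post-quantum shared secret, so that breaking one scheme alone is not enough. This is a conceptual sketch assuming the Python `cryptography` package; the ML-KEM secret is stubbed with random bytes because PQC KEM support varies by library and version, so this is an illustration rather than production code.

```python
# Conceptual hybrid key derivation: combine a classical ECDH secret with a
# (stubbed) post-quantum KEM secret, so breaking one scheme alone is not enough.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical X25519 key agreement between two parties.
alice = X25519PrivateKey.generate()
bob = X25519PrivateKey.generate()
classical_secret = alice.exchange(bob.public_key())

# HYPOTHETICAL stand-in for a post-quantum KEM shared secret (e.g., ML-KEM-768).
pq_secret = os.urandom(32)

# Derive one session key from both secrets.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"hybrid-demo",
).derive(classical_secret + pq_secret)
print(session_key.hex())
```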


The time to prepare is now

Transitioning to a quantum-safe infrastructure will take years, involving system audits, software redesigns, and new cryptographic standards. Organizations that begin planning today will be better equipped to protect their data, meet upcoming regulatory demands, and maintain customer trust in the digital economy.

Quantum computing will redefine the boundaries of cybersecurity. The only question is whether organizations will be ready when that day arrives.


Social Event App Partiful Did Not Strip GPS Locations from Photos


Social event planning app Partiful, also known as "Facebook events for hot people," has replaced Facebook as the go-to place for sending party invites. However, like Facebook, Partiful also collects user data. 

Hosts can create online invitations in a retro style, and users can RSVP to events easily. The platform strives to be user-friendly and trendy, which has pushed the app to No. 9 on the Apple App Store; Google called it "the best app" of 2024.

About Partiful

Partiful has recently developed into a Facebook-like social graph: it maps your friends and friends of friends, what you do, where you go, and your contact numbers. When the app became famous, people questioned its origins, alleging that it employed former staff of a data-mining company. Separately, TechCrunch found a privacy flaw: the app was not stripping location data from user-uploaded images, which include public profile pictures.

Metadata in photos

The photos on your phone carry metadata, which includes details such as file size and date of capture. Metadata can also include the type of camera used, its settings, and latitude/longitude coordinates. TechCrunch discovered that anyone could use the developer tools in a web browser to access raw user profile photos from Partiful's back-end database on Google Firebase.
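
You can inspect this kind of embedded location data yourself. Below is a minimal sketch using the Pillow library to read GPS tags from a photo's EXIF metadata; the file name is a hypothetical example.

```python
# Read GPS coordinates, if any, from a photo's EXIF metadata.
# Requires Pillow (pip install Pillow); "party.jpg" is a hypothetical file.
from PIL import Image
from PIL.ExifTags import GPSTAGS

exif = Image.open("party.jpg").getexif()
gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the standard EXIF pointer to the GPS IFD

if gps_ifd:
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}
    print("GPS metadata found:", gps)  # e.g. GPSLatitude, GPSLongitude
else:
    print("No GPS metadata in this file.")
```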

About the bug

The flaw could have been serious, as it could have exposed where a Partiful user's profile photo was taken.

According to TechCrunch, “Some Partiful user profile photos contained highly granular location data that could be used to identify the person’s home or work, particularly in rural areas where individual homes are easier to distinguish on a map.”

It is a common norm for companies hosting user photos and videos to automatically remove metadata on upload to prevent exactly this kind of privacy issue, a practice Partiful now follows.

Is the UK's Digital ID Hacker-Proof?


Experts have warned that our data will never be fully safe as the UK government plans to launch digital IDs for all UK citizens. The move has drawn harsh criticism following a series of recent data breaches that leaked official government contacts, email accounts, staff addresses, and passwords.

Why Digital IDs?

The rollout means that digital identification will become mandatory for right-to-work checks in the UK by the end of this Parliament. According to Prime Minister Keir Starmer, the scheme aims to stop illegal migrants from entering the UK, and he stressed that the IDs will prevent illegal working.

Experts, however, are not optimistic about this, as cyberattacks on critical national infrastructure, public service providers, and high street chains have surged. They have urged the parliament to ensure security and transparency when launching the new ID card scheme. 


Benefits of Digital IDs

David Omand, former UK security and intelligence coordinator and director of GCHQ, said the scheme could offer enormous benefits, but only if it is implemented securely, as state hackers will try to hack and disrupt. 

To prevent this, the system must be built securely, and GCHQ must dedicate time and resources to a robust implementation. The digital IDs would live on smartphones in the GOV.UK Wallet app and be verified against a central database of citizens with the right to live and work in the UK.

Risk with Digital IDs

There is always a risk of stolen data being leaked on the dark web. According to an investigation by Cyjax, more than 1,300 government email-password combinations, addresses, and contact details were accessed by threat actors over the past year. This is what makes the digital ID card risky: citizens' privacy could be compromised.

The UK government, however, has promised that the digital IDs will be built with robust security, protected by state-of-the-art encryption and authentication technology.

According to PM Starmer, the scheme offers citizens various benefits, such as proving their identity online and controlling how their data is shared and with whom.

Lost or Stolen Phone? Here’s How to Protect Your Data and Digital Identity


In this age, losing a phone can feel like losing control over your digital life. Modern smartphones carry far more than contacts and messages — they hold access to emails, bank accounts, calendars, social platforms, medical data, and cloud storage. In the wrong hands, such information can be exploited for financial fraud or identity theft.

Whether your phone is misplaced, stolen, or its whereabouts are unclear, acting quickly is the key to minimizing damage. The following steps outline how to respond immediately and secure your data before it is misused.


1. Track your phone using official recovery tools

Start by calling your number to see if it rings nearby or if someone answers. If not, use your device’s official tracking service. Apple users can access Find My iPhone via iCloud, while Android users can log in to Find My Device.

These built-in tools can display your phone’s current or last known location on a map, play a sound to help locate it, or show a custom message on the lock screen with your contact details. Both services can be used from another phone or a web browser. Avoid third-party tracking apps, which are often unreliable or insecure.


2. Secure your device remotely

If recovery seems unlikely or the phone may be in someone else’s possession, immediately lock it remotely. This prevents unauthorized access to your personal files, communication apps, and stored credentials.

Through iCloud’s “Mark as Lost” or Android’s “Secure Device” option, you can set a new passcode and display a message requesting the finder to contact you. This function also disables features like Apple Pay until the device is unlocked, protecting stored payment credentials.


3. Contact your mobile carrier without delay

Reach out to your mobile service provider to report the missing device. Ask them to suspend your SIM to block calls, texts, and data usage. This prevents unauthorized charges and, more importantly, stops criminals from intercepting two-factor authentication (2FA) messages that could give them access to other accounts.

Request that your carrier blacklist your device’s IMEI number. Once blacklisted, it cannot be used on most networks, even with a new SIM. If you have phone insurance, inquire about replacement or reimbursement options during the same call.


4. File an official police report

While law enforcement may not always track individual devices, filing a report creates an official record that can be used for insurance claims, fraud disputes, or identity theft investigations.

Provide details such as the model, color, IMEI number, and the time and place where it was lost or stolen. The IMEI (International Mobile Equipment Identity) can be found on your phone’s box, carrier account, or purchase receipt.


5. Protect accounts linked to your phone

Once the device is reported missing, shift your focus to securing connected accounts. Start with your primary email, cloud services, and social media platforms, as they often serve as gateways to other logins.

Change passwords immediately, and if available, sign out from all active sessions using the platform’s security settings. Apple, Google, and Microsoft provide account dashboards that allow you to remotely sign out of all devices.

Enable multi-factor authentication (MFA) on critical accounts if you haven’t already. This adds an additional layer of verification that doesn’t rely solely on your phone.

Monitor your accounts closely for unauthorized logins, suspicious purchases, or password reset attempts. These could signal that your data is being exploited.


6. Remove stored payment methods and alert financial institutions

If your phone had digital wallets such as Apple Pay, Google Pay, or other payment apps, remove linked cards immediately. Apple’s Find My will automatically disable Apple Pay when a device is marked as lost, but it’s wise to verify manually.

Android users can visit payments.google.com to remove cards associated with their Google account. Then, contact your bank or card issuer to flag the loss and monitor for fraudulent activity. Quick reporting allows banks to block suspicious charges or freeze affected accounts.


7. Erase your device permanently (only when recovery is impossible)

If all efforts fail and you’re certain the device won’t be recovered, initiate a remote wipe. This deletes all data, settings, and stored media, restoring the device to factory condition.

For iPhones, use the “Erase iPhone” option under Find My. For Androids, use “Erase Device” under Find My Device. Once wiped, you will no longer be able to track the device, but it ensures that your personal data cannot be accessed or resold.


Be proactive, not reactive

While these steps help mitigate damage, preparation remains the best defense. Regularly enable tracking services, back up your data, use strong passwords, and activate device encryption. Avoid storing sensitive files locally when possible and keep your operating system updated for the latest security patches.

Losing a phone is stressful, but being prepared can turn a potential disaster into a controlled situation. With the right precautions and quick action, you can safeguard both your device and your digital identity.



Phishing Campaign Uses Fake PyPI Domain to Steal Login Credentials


Phishing campaign via fake domains

A highly sophisticated phishing campaign has targeted maintainers of packages on the Python Package Index (PyPI), using domain-confusion methods to harvest login credentials from unsuspecting developers. The campaign leverages fake emails designed to mimic authentic PyPI communications and sends recipients to fraudulent domains that imitate genuine PyPI infrastructure.

Campaign tactic

The phishing operation uses meticulously drafted emails that ask users to confirm their email address for "account maintenance and security reasons," warning that accounts will be suspended otherwise.

These fake emails scare users into hasty decisions before they can verify the authenticity of the communication. They redirect victims to the malicious domain pypi-mirror.org, which poses as a genuine PyPI mirror but is not linked to the Python Software Foundation.

Broader scheme 

This phishing campaign is the latest in a series of attacks that have hit PyPI and other open-source repositories recently. Hackers have started rotating domain names to avoid getting caught.

Experts at PyPI said that these campaigns are part of a larger domain-confusion attack to abuse the trust relationship inside the open-source ecosystem.

The campaign combines technical deception with social engineering: when users open the malicious links and log in, their credentials are stolen by the hackers.

Domain confusion

The core of this campaign is domain spoofing. The fake domain uses HTTPS and sophisticated web design to appear legitimate, tricking users who might not pay close attention. The malicious sites mimic PyPI's login page with striking realism, down to the professional logos, form elements, and styling, giving users a seemingly authentic experience.

This level of craftsmanship points to careful planning and resourcing by the threat actors to maximize the campaign's effectiveness.
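
One simple mitigation is an explicit allow-list check on the hostname before entering credentials, since lookalike domains such as pypi-mirror.org fail an exact match. A minimal sketch, assuming the short allow-list below covers the official login hosts:

```python
# Check a URL's hostname against an exact allow-list of official PyPI hosts.
from urllib.parse import urlparse

OFFICIAL_HOSTS = {"pypi.org", "test.pypi.org"}  # assumed allow-list for this sketch

def is_official_pypi(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    return host in OFFICIAL_HOSTS

print(is_official_pypi("https://pypi.org/account/login/"))        # True
print(is_official_pypi("https://pypi-mirror.org/account/login/")) # False: lookalike
```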

How to stay safe?

Users are advised not to open suspicious links and to pay close attention while browsing, especially when entering login details.

“If you have already clicked on the link and provided your credentials, we recommend changing your password on PyPI immediately. Inspect your account's Security History for anything unexpected. Report suspicious activity, such as potential phishing campaigns against PyPI, to security@pypi.org,” PyPI said in the blog post.