Search This Blog

Powered by Blogger.

Blog Archive

Labels

About Me

Showing posts with label new threat attacks. Show all posts

Wi-Fi Signals Can Now Identify You Without Devices or Cameras, Raising New Privacy Fears

 

A new technology developed by researchers at La Sapienza University of Rome could transform how individuals are identified in connected environments and reignite urgent debates over privacy. In a breakthrough that bypasses traditional biometrics, the research team has demonstrated that a person can be re-identified solely based on how their body alters surrounding Wi-Fi signals. 

The method, called WhoFi, leverages the unique way each person’s physical presence disturbs electromagnetic waveforms. Unlike facial recognition, fingerprint scans, or phone-based tracking, WhoFi requires no cameras or wearable devices. 

It can passively track people in any area blanketed by Wi-Fi coverage, making it both powerful and controversial. “As a Wi-Fi signal moves through an environment, it interacts with the objects and people in its path. These interactions subtly change the signal’s characteristics, and those changes carry biometric information,” the researchers explain. 

The team composed of computer scientists Danilo Avola, Daniele Pannone, Dario Montagnini, and Emad Emam used variations in Wi-Fi channel state information (CSI), such as amplitude and phase shifts, to build what they call a person’s "Wi-Fi signature." 

These invisible disturbances are distinct enough to allow for precise re-identification. To prove the concept, the researchers trained a transformer-based deep neural network to distinguish individuals by analyzing how they disrupt signals across different locations. When tested against the NTU-Fi dataset, a standard benchmark for Wi-Fi-based human sensing, WhoFi achieved a re-identification accuracy of up to 95.5%. 

Beyond Biometric Norms Wi-Fi-based human sensing has been in development for years, applied in use cases like motion detection, fall alerts for the elderly, and even through-wall monitoring. In 2020, a similar system dubbed EyeFi achieved 75% accuracy in identifying individuals via signal interaction. 

However, the creators of WhoFi argue that their system offers superior precision and greater environmental adaptability. This advancement opens doors for a host of potential applications from seamless authentication in smart homes and offices to non-invasive surveillance in public spaces. But it also raises the specter of surveillance without consent. 

The Privacy Dilemma 

Because WhoFi requires no explicit action or device on the part of the person being tracked, it introduces ethical and legal complexities. Unlike security cameras, which are visible, or facial recognition systems that often operate in regulated zones, Wi-Fi-based identification could run silently in the background of any networked environment. Privacy advocates warn that such capabilities could be misused, particularly in authoritarian regimes or by private companies seeking to monitor behavior without permission. 

“This kind of passive identification, while technologically impressive, blurs the line between convenience and intrusion,” one digital rights expert noted. “We must ask who controls these systems, and how their use is regulated.” 

The Future of Human Sensing 

As the Internet of Things expands and ambient computing becomes more embedded in daily life, technologies like WhoFi may become standard components of smart infrastructure. While the researchers position their system as more ethical than invasive surveillance tech, no image data, no personal devices required. 

It also challenges conventional ideas of consent and anonymity in public and semi-public spaces. In the hands of responsible actors, WhoFi could enhance security and accessibility. But without strong data governance frameworks, it could just as easily become a tool for constant, invisible monitoring.

How Generative AI is Creating New Classes of Security Threats

 

AI technology is booming, and industries are in a rush to adopt it as quickly as they can. OpenAI's ChatGPT has seen an unprecedented surge in user adoption, quickly becoming one of the most widely used AI platforms. This surge has also led to the widespread integration of generative AI across various platforms, resulting in a significant transformation within the technology landscape. 

The profound impact of AI technology is actively reshaping the threat landscape, presenting notable implications for security. One concerning trend is the exploitation of AI by malicious individuals to amplify the effectiveness of phishing and fraudulent schemes. 

An alarming incident took place when Meta's 65-billion parameter language model was leaked, leading to a heightened risk of advanced and sophisticated phishing attacks. Furthermore, the frequency of prompt injection attacks is increasing daily, posing ongoing challenges for security professionals and necessitating proactive defense measures. 

Many users are unknowingly sharing business-sensitive information with AI/ML-based services, creating challenges for security teams in managing and protecting such data. A notable example is when Samsung engineers inadvertently included proprietary code in ChatGPT while seeking assistance for debugging, leading to the unintended exposure of sensitive information. 

Additionally, a survey conducted by Fishbowl revealed that a significant 68% of individuals using ChatGPT for work purposes chose not to inform their supervisors about it. The speed at which attackers adopt and harness AI technology is likely to outpace defenders, granting them a significant advantage. They will be capable of launching sophisticated AI-powered attacks on a large scale while keeping costs relatively low. 

One area that will see immediate benefits from AI advancements is social engineering attacks, where synthetic text, voice, and images can be utilized. Attacks that previously required manual effort, such as phishing attempts that impersonate legitimate entities like the IRS or real estate agents to trick victims into wiring money, will become automated. 

These technologies will empower attackers to develop more potent malicious code and execute novel, highly effective attacks at scale. For instance, they can rapidly generate polymorphic code for malware, evading detection by signature-based security systems. 

Even notable figures in the field of AI, like Geoffrey Hinton, have expressed concerns about the potential misuse of the technology. Hinton recently acknowledged the difficulty of preventing malicious actors from exploiting AI for harmful purposes, expressing regret for his contribution to its development.