
Raspberry Pi Project Turns Wi-Fi Signals Into Visual Light Displays

Wireless communication surrounds people at all times, even though it cannot be seen. Signals from Wi-Fi routers, Bluetooth devices, and mobile networks constantly travel through homes and cities unless blocked by heavy shielding. A France-based digital artist has developed a way to visually represent this invisible activity using light and low-cost computing hardware.

The creator, Théo Champion, who is also known online as Rootkid, designed an installation called Spectrum Slit. The project captures radio activity from commonly used wireless frequency ranges and converts that data into a visual display. The system focuses specifically on the 2.4 GHz and 5 GHz bands, which are widely used for Wi-Fi connections and short-range wireless communication.

The artwork consists of 64 vertical LED filaments arranged in a straight line. Each filament represents a specific portion of the wireless spectrum. As radio signals are detected, their strength and density determine how brightly each filament lights up. Low signal activity results in faint and scattered illumination, while higher levels of wireless usage produce intense and concentrated light patterns.

According to Champion, quiet network conditions create a subtle glow that reflects the constant but minimal background noise present in urban environments. As wireless traffic increases, the LEDs become brighter and more saturated, forming dense visual bands that indicate heavy digital activity.

A video shared on YouTube shows the construction process and the final output of the installation inside Champion’s Paris apartment. The footage demonstrates a noticeable increase in brightness during evening hours, when nearby residents return home and connect phones, laptops, and other devices to their networks.

Champion explained in an interview that his work is driven by a desire to draw attention to technologies people often ignore, despite their significant influence on daily life. By transforming technical systems into physical experiences, he aims to encourage viewers to reflect on the infrastructure shaping modern society and to appreciate the engineering behind it.

The installation required both time and financial investment. Champion built the system using a HackRF One software-defined radio connected to a Raspberry Pi. The radio device captures surrounding wireless signals, while the Raspberry Pi processes the data and controls the lighting behavior. The software was written in Python, but other components, including the metal enclosure and custom circuit boards, had to be professionally manufactured.
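The processing pipeline described above (capture radio samples, estimate per-band signal power, map that power onto 64 LED filaments) can be sketched in Python. This is an illustrative sketch, not Champion's actual code: the function name, FFT size, and log-scaled normalization are assumptions, and a real build would feed it IQ samples streamed from the HackRF and write the result to the LED driver hardware.

```python
import numpy as np

def spectrum_to_brightness(iq_samples, num_leds=64, fft_size=1024):
    """Map a block of raw IQ samples to per-LED brightness values in 0.0-1.0."""
    # Power spectrum of one captured block, centered with fftshift
    spectrum = np.abs(np.fft.fftshift(np.fft.fft(iq_samples[:fft_size]))) ** 2
    # Aggregate the FFT bins into one bucket per LED filament
    buckets = spectrum.reshape(num_leds, -1).mean(axis=1)
    # Log scaling keeps quiet bands visible as a faint glow rather than darkness
    levels = np.log1p(buckets)
    peak = levels.max()
    return levels / peak if peak > 0 else levels
```

Each call yields one frame of brightness values; looping over successive sample blocks produces the animated display, with busier bands lighting their filaments more intensely.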

He estimates that development involved several weeks of experimentation, followed by a dedicated build phase. The total cost of materials and fabrication was approximately $1,000.

Champion has indicated that Spectrum Slit may be publicly exhibited in the future. He is also known for creating other technology-focused artworks, including interactive installations that explore data privacy, artificial intelligence, and digital systems. He has stated that producing additional units of Spectrum Slit could be possible if requested.

Big Tech’s New Rule: AI Age Checks Are Rolling Out Everywhere

Large online platforms are rapidly shifting to biometric age assurance systems, creating a scenario where users may lose access to their accounts or risk exposing sensitive personal information if automated systems make mistakes.

Online platforms have struggled for decades with how to screen underage users from adult-oriented content. Everything from graphic music tracks on Spotify to violent clips circulating on TikTok has long been available with minimal restrictions.

Recent regulatory pressure has changed this landscape. Laws such as the United Kingdom’s Online Safety Act and new state-level legislation in the United States have pushed companies including Reddit, Spotify, YouTube, and several adult-content distributors to deploy AI-driven age estimation and identity verification technologies. Pornhub’s parent company, Aylo, is also reevaluating whether it can comply with these laws after being blocked in more than a dozen US states.

These new systems require users to hand over highly sensitive personal data. Age estimation relies on analyzing one or more facial photos to infer a user’s age. Verification is more exact, but demands that the user upload a government-issued ID, which is among the most sensitive forms of personal documentation a person can share online.

Both methods depend heavily on automated facial recognition algorithms. The absence of human oversight or robust appeals mechanisms magnifies the consequences when these tools misclassify users. Incorrect age estimation can cut off access to entire categories of content or trigger more severe actions. Similar facial analysis systems have been used for years in law enforcement and in consumer applications such as Google Photos, with well-documented risks and misidentification incidents.

Refusing these checks often comes with penalties. Many services will simply block adult content until verification is completed. Others impose harsher measures. Spotify, for example, warns that accounts may be deactivated or removed altogether if age cannot be confirmed in regions where the platform enforces a minimum age requirement. According to the company, users are given ninety days to complete an ID check before their accounts face deletion.

This shift raises pressing questions about the long-term direction of these age enforcement systems. Companies frequently frame them as child-safety measures, but users are left wondering how long these platforms will protect or delete the biometric data they collect. Corporate promises can be short-lived. Numerous abandoned websites still leak personal data years after shutting down. The 23andMe bankruptcy renewed fears among genetic testing customers about what happens to their information if a company collapses. And even well-intentioned apps can create hazards. A safety-focused dating application called Tea ended up exposing seventy-two thousand users’ selfies and ID photos after a data breach.

Even when companies publicly state that they do not retain facial images or ID scans, risks remain. Discord recently revealed that age verification materials, including seventy thousand IDs, were compromised after a third-party contractor called 5CA was breached.

Platforms assert that user privacy is protected by strong safeguards, but the details often remain vague. When asked how YouTube secures age assurance data, Google offered only a general statement claiming that it employs advanced protections and allows users to adjust their privacy settings or delete data. It did not specify the precise security controls in place.

Spotify has outsourced its age assurance system to Yoti, a digital identity provider. The company states that it does not store facial images or ID scans submitted during verification. Yoti receives the data directly and deletes it immediately after the evaluation, according to Spotify. The platform retains only minimal information about the outcome: the user’s age in years, the method used, and the date the check occurred. Spotify adds that it uses measures such as pseudonymization, encryption, and limited retention policies to prevent unauthorized access. Yoti publicly discloses some technical safeguards, including use of TLS 1.2 by default and TLS 1.3 where supported.
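A minimal-retention outcome record of the kind Spotify describes could look something like the sketch below. This is purely illustrative: the field names and the keyed-hash pseudonymization scheme are assumptions for demonstration, not Spotify's or Yoti's actual design.

```python
import hashlib
import hmac
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class AgeCheckOutcome:
    pseudonym: str    # keyed hash of the user ID, not the ID itself
    age_years: int    # outcome only; no face image or ID scan is retained
    method: str       # e.g. "facial_estimation" or "id_document"
    checked_on: date

def record_outcome(user_id: str, age_years: int, method: str, secret: bytes) -> AgeCheckOutcome:
    # Pseudonymization via keyed HMAC: the hash cannot be reversed or
    # recomputed by anyone who lacks the secret key
    pseudonym = hmac.new(secret, user_id.encode(), hashlib.sha256).hexdigest()
    return AgeCheckOutcome(pseudonym, age_years, method, date.today())
```

The point of the design is that even a full leak of these records would expose only an age, a method, and a date tied to an opaque identifier.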

Privacy specialists argue that these assurances are insufficient. Adam Schwartz, privacy litigation director at the Electronic Frontier Foundation, told PCMag that facial scanning systems represent an inherent threat, regardless of whether they are being used to predict age, identity, or demographic traits. He reiterated the organization’s stance supporting a ban on government deployment of facial recognition and strict regulation for private-sector use.

Schwartz raises several issues. Facial age estimation is imprecise by design, meaning it will inevitably classify some adults as minors and deny them access. Errors in facial analysis also tend to fall disproportionately on specific groups. Misidentification incidents involving people of color and women are well documented. Google Photos once mislabeled a Black software engineer and his friend as animals, underlining systemic flaws in training data and model accuracy. These biases translate directly into unequal treatment when facial scans determine whether someone is allowed to enter a website.

He also warns that widespread facial scanning increases privacy and security risks because faces function as permanent biometric identifiers. Unlike passwords, a person cannot replace their face if it becomes part of a leaked dataset. Schwartz notes that at least one age verification vendor has already suffered a breach, underscoring material vulnerabilities in the system.

Another major problem is the absence of meaningful recourse when AI misjudges a user’s age. Spotify’s approach illustrates the dilemma. If the algorithm flags a user as too young, the company may lock the account, enforce viewing restrictions, or require a government ID upload to correct the error. This places users in a difficult position, forcing them to choose between potentially losing access or surrendering more sensitive data.

Do not upload identity documents unless required, check a platform’s published privacy and retention statements before you comply, and use account recovery channels if you believe an automated decision is wrong. Companies and regulators must do better at reducing vendor exposure, increasing transparency, and ensuring appeals are effective. 

Despite these growing concerns, users continue to find ways around verification tools. Discord users have discovered that uploading photos of fictional characters can bypass facial age checks. Virtual private networks remain a viable method for accessing age-restricted platforms such as YouTube, just as they help users access content that is regionally restricted. Alternative applications like NewPipe offer similar functionality to YouTube without requiring formal age validation, though these tools often lack the refinement and features of mainstream platforms.


New Virus Spreading Through YouTube Puts Windows Users at Risk

A new type of digital threat is quietly spreading online, and it’s mainly affecting people who use Windows computers. This threat, called Neptune RAT, is a kind of harmful software that allows hackers to take over someone’s system from a distance. Once installed, it can collect personal data, spy on the user’s activity, and even lock files for ransom.

What’s especially worrying is how the virus is spreading. It’s being shared through common platforms like YouTube, GitHub, and Telegram. Hackers are offering this tool as part of a paid service, which makes it easier for many cybercriminals to get access to it.


What Makes Neptune RAT So Dangerous?

Neptune RAT is not an ordinary computer virus. It can do many harmful things at once, making it a serious risk to anyone who accidentally installs it.

One of its tricks is swapping digital wallet addresses during cryptocurrency transfers. This means someone could send money thinking it’s going to the right person, but it actually ends up in a hacker’s account.

Another feature allows it to collect usernames and passwords stored on the victim’s device. It targets popular programs and web browsers, which could let hackers break into email accounts, social media, or online banking services.

Even more troubling, Neptune RAT includes a feature that can lock files on the user's system. The attacker can then demand money to unlock them; this is what's known as ransomware.

To make things worse, the virus can turn off built-in security tools like Windows Defender. That makes it much harder to spot or remove. Some versions of the virus even allow hackers to view the victim’s screen while they’re using it, which could lead to serious privacy issues.

If the hacker decides they no longer need the device, the virus can erase all the data, leaving the victim with nothing.


How to Stay Protected

To avoid being affected by this virus, it's important to be careful when clicking on links or downloading files, especially from YouTube, GitHub, or Telegram. Never download anything unless you fully trust the source.

Although antivirus software is helpful, this particular virus can get past many of them. That’s why extra steps are needed, such as:

1. Using different passwords for each account  

2. Saving important files in a secure backup  

3. Avoiding links or downloads from strangers  

4. Enabling extra security features like two-factor authentication

Staying alert and employing good online habits is the best way to avoid falling victim to harmful software like Neptune RAT.
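One concrete habit that supports point 3 above is verifying a download against the checksum its publisher provides. The sketch below shows the general idea in Python; the function names are illustrative, and it only helps when the expected checksum comes from a source you already trust.

```python
import hashlib
import hmac

def sha256_of_file(path, chunk_size=65536):
    """Compute the SHA-256 digest of a file without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_published_checksum(path, expected_hex):
    """Compare the file's digest against the checksum a trusted source published."""
    # compare_digest avoids timing side channels; case-normalize the hex first
    return hmac.compare_digest(sha256_of_file(path), expected_hex.lower())
```

If the comparison fails, the file was corrupted or tampered with in transit and should not be run.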


YouTubers Blackmailed Into Posting SilentCryptominer Malware in Videos


Experts have discovered an advanced malware campaign that exploits the rising popularity of Windows Packet Divert drivers, tools commonly used to bypass internet restrictions.

Malware targets YouTubers 

Hackers are spreading the SilentCryptominer malware disguised as genuine software. It has impacted over 2,000 victims in Russia alone. The attack vector involves tricking YouTubers with large follower bases into spreading malicious links.

“Such software is often distributed in the form of archives with text installation instructions, in which the developers recommend disabling security solutions, citing false positives,” reports Securelist. This helps threat actors by “allowing them to persist in an unprotected system without the risk of detection.”

Innocent YouTubers Turned Into Victims

According to the report, “Most active of all have been schemes for distributing popular stealers, remote access tools (RATs), Trojans that provide hidden remote access, and miners that harness computing power to mine cryptocurrency.” Malware families commonly seen in this distribution scheme include Phemedrone, DCRat, NJRat, and XWorm.

In one incident, a YouTuber with 60,000 subscribers posted videos containing links to infected archives, gaining over 400,000 views. The malicious files were hosted on gitrok[.]com, with the download counter exceeding 40,000.

Blackmail and Malware Distribution

Threat actors have adopted a new distribution tactic: they send copyright strikes to content creators and influencers and threaten to have their channels shut down unless they post videos containing malicious links. This scare tactic exploits the reach of popular YouTubers to distribute malware to a much larger audience.

The infection chain starts with a manipulated start script that launches an additional executable via PowerShell.

According to the Securelist report, the loader (written in Python) is deployed with PyInstaller and fetches the next-stage payload from hardcoded domains. The second-stage loader runs environment checks, adds the “AppData directory to Microsoft Defender exclusions,” and downloads the final payload, SilentCryptoMiner.
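Because the loader hides itself by adding a Defender exclusion, auditing the exclusion list for user-writable paths is one simple detection idea. The sketch below shows only the flagging logic; the marker list is an illustrative assumption, and actually retrieving the exclusion list on Windows would use something like PowerShell's Get-MpPreference.

```python
# Paths that malware favors because ordinary users can write to them
SUSPICIOUS_MARKERS = ("appdata", "temp", "downloads")

def flag_suspicious_exclusions(exclusions):
    """Return exclusion paths that cover user-writable locations malware favors."""
    flagged = []
    for path in exclusions:
        lowered = path.lower()
        if any(marker in lowered for marker in SUSPICIOUS_MARKERS):
            flagged.append(path)
    return flagged
```

An exclusion covering a per-user AppData directory is rarely legitimate and would warrant a closer look on any machine where it appears.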

The infamous SilentCryptoMiner

SilentCryptoMiner is known for mining multiple cryptocurrencies via different algorithms. It uses process hollowing to inject miner code into legitimate processes, keeping it hidden from the user.

The malware can also evade security checks, for example by pausing mining while certain monitoring processes are running and by scanning for indicators of a virtual environment.

Google Fixes YouTube Security Flaw That Exposed User Emails

A critical security vulnerability in YouTube allowed attackers to uncover the email addresses of any account on the platform. Cybersecurity researchers discovered the flaw and reported it to Google, which promptly fixed the issue. While no known attacks exploited the vulnerability, the potential consequences could have been severe, especially for users who rely on anonymity.


How the Vulnerability Worked

The flaw was identified by researchers Brutecat and Nathan, as reported by BleepingComputer. It involved an internal identifier used within Google’s ecosystem, known as the Gaia ID. Every YouTube account has a unique Gaia ID, which links it to Google’s services.

The exploit worked by blocking a YouTube account and then accessing its Gaia ID through the live chat function. Once attackers retrieved this identifier, they found a way to trace it back to the account’s registered email address. This loophole could have exposed the contact details of millions of users without their knowledge.


Google’s Reaction and Fix

Google confirmed that the issue was present from September 2024 to February 2025. Once informed, the company swiftly implemented a fix to prevent further risk. Google assured users that there were no reports of major misuse but acknowledged that the vulnerability had the potential for harm.


Why This Was a Serious Threat

The exposure of email addresses poses various risks, including phishing attempts, hacking threats, and identity theft. This is particularly concerning for individuals who depend on anonymity, such as whistleblowers, journalists, and activists. If their private details were leaked, it could have led to real-world dangers, not just online harassment.

Businesses also faced risks, as malicious actors could have used this flaw to target official YouTube accounts, leading to scams, fraud, or reputational damage.


Lessons and Preventive Measures

The importance of strong security measures and rapid responses to discovered flaws cannot be overstated. Users are encouraged to take precautions, such as enabling two-factor authentication (2FA), using secure passwords, and being cautious of suspicious emails or login attempts.

Tech companies, including Google, must consistently audit security systems and respond quickly to any potential weaknesses.

Although the security flaw was patched before any confirmed incidents occurred, this event serves as a reminder of the omnipresent risks in the digital world. By staying informed and following security best practices, both users and companies can work towards a safer online experience.



Amazon and Audible Face Scrutiny Amid Questionable Content Surge

Amazon's music and audiobook services, Amazon Music and Audible, have been inundated with bogus listings that try to trick customers into clicking on dubious "forex trading" sites, Telegram channels, and suspicious links claiming to offer pirated software. Abusing Spotify playlists and podcasts to promote pirated software, video game cheat codes, spam links, and "warez" websites is likewise becoming increasingly common.

Because Spotify web player pages are indexed by search engines such as Google, threat actors inject targeted keywords and links into the titles and descriptions of playlists and podcasts to boost SEO for their dubious online properties. The playlist names, podcast descriptions, and bogus "episodes" in these listings all encourage listeners to visit external links that may lead to a security breach.

A significant number of threat actors also exploit Google's Looker Studio (formerly Google Data Studio) to boost the search engine ranking of illicit websites that promote spam, torrents, and pirated content. According to BleepingComputer, one method used in these SEO poisoning attacks relies on Google's datastudio.google.com subdomain, which lends apparent credibility to the malicious links.

Aside from mass email spam campaigns, spammers also use Audible podcasts to advertise their illicit activities. No publicly accessible digital platform is immune to spam. What makes the Spotify and Amazon cases interesting is that one would instinctively assume the overhead of podcasting and digital music distribution would deter spammers, pushing them toward lower-hanging fruit such as spammy social media posts or YouTube videos with misleading descriptions.

The most recent instance of this was a Spotify playlist entitled "Sony Vegas Pro 13 Crack...", which seemed to drive traffic to several "free" software sites listed in the title and description of the playlist. Karol Paciorek, a cybersecurity enthusiast who spotted the playlist, explained why Spotify has become a prominent tool for distributing malware: "Spotify's tracks and pages are easily indexed by search engines, making it a popular location for creating malicious links."

Looker Studio (formerly Google Data Studio) is a web-based business intelligence tool from Google that lets users build customizable reports and dashboards to visualize and analyze their data. It has been used, for example, to track and visualize the download counts of open source packages over a period such as four weeks. There are many legitimate business cases for Looker Studio, but like any other web service, it can be misused by malicious actors looking to host questionable content or manipulate search engine rankings for illicit URLs.

Recent SEO poisoning campaigns have targeted keywords related to the U.S. midterm elections, as well as pushing malicious Zoom, TeamViewer, and Visual Studio installers. Before publishing this article, BleepingComputer reached out to Google to ask how it plans to address this abuse.

Firstory, a service launched in 2019, enables podcasters to distribute their shows across the globe and connect with audiences. It publishes podcasts to Spotify, and it acknowledges that spam is an ongoing issue it is increasingly working to curtail.

Spam accounts and misleading content remain persistent challenges for digital platforms, according to Stanley Yu, co-founder of Firstory, in a statement provided to BleepingComputer. Yu emphasized that addressing these issues is an ongoing priority for the company. To tackle the growing threat of unauthorized and spammy content, Firstory has implemented a multifaceted approach. This includes active collaboration with major streaming platforms to detect and remove infringing material swiftly. 

The company has also developed and employed advanced technologies to scan podcast titles and show notes for specific keywords associated with spam, ensuring early identification and mitigation of potential violations. Furthermore, Firstory proactively monitors and blocks suspicious email addresses commonly used by malicious actors to infiltrate and disrupt digital ecosystems. By integrating technology-driven solutions with strategic partnerships, Firstory aims to set a higher standard for content integrity across platforms. 
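The keyword screening described above can be illustrated with a small sketch. This is not Firstory's actual system: the term list, the link-counting signal, and the threshold are all assumptions chosen for demonstration.

```python
import re

# Illustrative terms associated with the spam listings described in the article
SPAM_TERMS = ("crack", "keygen", "free download", "telegram", "warez")

def looks_like_spam(title, show_notes, threshold=2):
    """Flag a podcast listing when enough spam-associated signals appear."""
    text = f"{title} {show_notes}".lower()
    hits = sum(1 for term in SPAM_TERMS if term in text)
    # External links in show notes are a further signal
    hits += len(re.findall(r"https?://", text))
    return hits >= threshold
```

A real deployment would tune the term list and threshold against labeled examples, and route flagged listings to human review rather than removing them automatically.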

The company’s commitment reflects a broader industry imperative to protect users and maintain trust in an ever-expanding digital landscape. As digital platforms evolve, sustained vigilance and innovation will be essential to counter emerging threats and foster a safer, more reliable online environment.

YouTube: A Prime Target for Cybercriminals

As one of today's most popular social media platforms, YouTube frequently attracts cybercriminals who exploit it to run scams and distribute malware. These schemes often involve videos masquerading as tutorials for popular software or ads for cryptocurrency giveaways. In other cases, fraudsters embed malicious links in video descriptions or comments, making them appear as legitimate resources related to the video's content.

The theft of popular YouTube channels elevates these fraudulent campaigns, allowing cybercriminals to reach a vast audience of regular YouTube users. These stolen channels are repurposed to spread various scams and info-stealing malware, often through links to pirated and malware-infected software, movies, and game cheats. For YouTubers, losing access to their accounts can be distressing, leading to significant income loss and lasting reputational damage.

Most YouTube channel takeovers begin with phishing. Attackers create fake websites and send emails that appear to be from YouTube or Google, tricking targets into revealing their login credentials. Often, these emails promise sponsorship or collaboration deals, including attachments or links to supposed terms and conditions.

If accounts lack two-factor authentication (2FA) or if attackers circumvent this extra security measure, the threat becomes even more severe. Since late 2021, YouTube content creators have been required to use 2FA on the Google account associated with their channel. However, in some cases, such as the breach of Linus Tech Tips, attackers bypassed passwords and 2FA codes by stealing session cookies from victims' browsers, allowing them to sidestep additional security checks.

Attackers also use lists of usernames and passwords from past data breaches to hack into existing accounts, exploiting the fact that many people reuse passwords across different sites. Additionally, brute-force attacks, where automated tools try numerous password combinations, can be effective, especially if users have weak or common passwords and neglect 2FA.

Recent Trends and Malware

The AhnLab Security Intelligence Center (ASEC) recently reported an increase in hijacked YouTube channels, including one with 800,000 subscribers, used to distribute malware like RedLine Stealer, Vidar, and Lumma Stealer. According to the ESET Threat Report H2 2023, Lumma Stealer particularly surged in the latter half of last year, targeting crypto wallets, login credentials, and 2FA browser extensions. As noted in the ESET Threat Report H1 2024, these tools remain significant threats, often posing as game cheats or software cracks on YouTube.

In some cases, cybercriminals hijack Google accounts and quickly create and post thousands of videos distributing info-stealing malware. Victims may end up with compromised devices that further jeopardize their accounts on other platforms like Instagram, Facebook, X, Twitch, and Steam.

Staying Safe on YouTube

To protect yourself on YouTube, follow these tips:

  • Use Strong and Unique Login Credentials: Create robust passwords or passphrases and avoid reusing them across multiple sites. Consider using passkeys for added security.
  • Employ Strong 2FA: Use 2FA not just on your Google account, but also on all your accounts. Opt for authentication apps or hardware security keys over SMS-based methods.
  • Be Cautious with Emails and Links: Be wary of emails or messages claiming to be from YouTube or Google, especially if they request personal information or account credentials. Verify the sender's email address and avoid clicking on suspicious links or downloading unknown attachments.
  • Keep Software Updated: Ensure your operating system, browser, and other software are up-to-date to protect against known vulnerabilities.
  • Monitor Account Activity: Regularly check your account for any suspicious actions or login attempts. If you suspect your channel has been compromised, follow Google's guidance.
  • Stay Informed: Keep abreast of the latest cyber threats and scams targeting you online, including on YouTube, to better avoid falling victim.
  • Report and Block Suspicious Content: Report any suspicious or harmful content, comments, links, or users to YouTube and block such users to prevent further contact.
  • Secure Your Devices: Use multi-layered security software across your devices to guard against various threats.
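The first tip above recommends passphrases; a standard way to generate one is to pick random words with a cryptographically secure RNG. The sketch below uses Python's secrets module; the tiny word list is a stand-in, since a real one (such as the EFF Diceware list) contains thousands of words.

```python
import secrets

# Stand-in word list for illustration only; use a large published list in practice
WORDS = ("correct", "horse", "battery", "staple", "orbit", "lantern",
         "maple", "quartz", "river", "saddle", "tundra", "violet")

def make_passphrase(num_words=5, separator="-"):
    """Join words picked uniformly at random by a cryptographically secure RNG."""
    return separator.join(secrets.choice(WORDS) for _ in range(num_words))
```

Each word drawn from a list of N candidates adds log2(N) bits of entropy, so longer lists and more words both strengthen the result.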

Terrorist Tactics: How ISIS Duped Viewers with Fake CNN and Al Jazeera Channels


ISIS, a terrorist organization, allegedly launched fake channels on Google-owned YouTube and on Facebook. Through these feeds, the channels posed as the global news outlets CNN and Al Jazeera. The goal was to gain credibility and ease the spread of ISIS propaganda.

According to research by the Institute for Strategic Dialogue, the operators managed two YouTube channels as well as accounts on Facebook and X (formerly Twitter) under the outlet name 'War and Media'.

The campaign went live in March of this year. False profiles resembling reputable channels were used on Facebook and YouTube to spread propaganda. The videos remained live on YouTube for more than a month; it is unclear when they were taken down from Facebook.

The Deceptive Channels

ISIS operatives set up multiple fake channels on YouTube, each mimicking the branding and style of reputable news outlets. These channels featured professionally edited videos, complete with logos and graphics reminiscent of CNN and Al Jazeera. The content ranged from news updates to opinion pieces, all designed to lend an air of credibility.

Tactics and Objectives

1. Impersonation: By posing as established media organizations, ISIS aimed to deceive viewers into believing that the content was authentic. Unsuspecting users might stumble upon these channels while searching for legitimate news, inadvertently consuming extremist propaganda.

2. Content Variety: The fake channels covered various topics related to ISIS’s global expansion. Videos included recruitment messages, calls for violence, and glorification of terrorist acts. The diversity of content allowed them to reach a broader audience.

3. Evading Moderation: YouTube’s content moderation algorithms struggled to detect these fake channels. The professional production quality and branding made it challenging to distinguish them from genuine news sources. As a result, the channels remained active for over a month before being taken down.

Challenges for Social Media Platforms

  • Algorithmic Blind Spots: Algorithms designed to identify extremist content often fail when faced with sophisticated deception. The reliance on visual cues (such as logos) can be exploited by malicious actors.
  • Speed vs. Accuracy: Platforms must strike a balance between rapid takedowns and accurate content assessment. Delayed action allows harmful content to spread, while hasty removal risks false positives.
  • User Vigilance: Users play a crucial role in reporting suspicious content. However, the resemblance to legitimate news channels makes it difficult for them to discern fake from real.

Why is this harmful for Facebook, X users, and YouTube users?

The creation of phony social media channels impersonating renowned news broadcasters such as CNN and Al Jazeera shows how the terrorist organization's tactics for evading content moderation on social media platforms have evolved.

Unsuspecting users may be drawn in by these "honeypot" efforts, which, according to the research, will only become more sophisticated, making it even harder to restrict the spread of terrorist content online.