A newly launched U.S. mobile carrier is questioning long-standing telecom practices by offering phone service without requiring customers to submit personal identification. The company, Phreeli, presents itself as a privacy-focused alternative in an industry known for extensive data collection.
Phreeli officially launched in early December and describes its service as being built with privacy at its core. Unlike traditional telecom providers that ask for names, residential addresses, birth dates, and other sensitive information, Phreeli limits its requirements to a ZIP code, a chosen username, and a payment method. According to the company, no customer profiles are created or sold, and user data is not shared for advertising or marketing purposes.
Customers can pay using standard payment cards, or opt for cryptocurrency if they wish to reduce traceable financial links. The service operates entirely on a prepaid basis, with no contracts involved. Monthly plans range from lower-cost options for light usage to higher-priced tiers for customers who require more mobile data. The absence of contracts aligns with the company’s approach, as formal agreements typically require verified personal identities.
Rather than building its own cellular infrastructure, Phreeli operates as a Mobile Virtual Network Operator. This means it provides service by leasing network access from an established carrier, in this case T-Mobile. This model allows Phreeli to offer nationwide coverage without owning physical towers or equipment.
Addressing legal concerns, the company states that U.S. law does not require mobile carriers to collect customer names in order to provide service. To manage billing while preserving anonymity, Phreeli says it uses a system that separates payment information from communication data. This setup relies on cryptographic verification to confirm that accounts are active, without linking call records or data usage to identifiable individuals.
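Phreeli has not published the internals of this system, but a minimal sketch of one way billing can be decoupled from usage, using opaque HMAC-signed service tokens (all function and variable names here are hypothetical, not Phreeli's design), looks like this:

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch: the billing side issues opaque service tokens,
# and the network side verifies them without ever seeing payment details.
# This is NOT Phreeli's published design, just one way to decouple
# "who paid" from "which line is active".

SIGNING_KEY = secrets.token_bytes(32)  # shared by issuer and verifier

def issue_token() -> str:
    """Billing service: called after a payment clears. The token carries
    no name, address, or card data, only a random ID and a MAC."""
    token_id = secrets.token_hex(16)
    mac = hmac.new(SIGNING_KEY, token_id.encode(), hashlib.sha256).hexdigest()
    return f"{token_id}.{mac}"

def is_active(token: str) -> bool:
    """Network service: confirms the token was issued by billing,
    without learning anything about the payer."""
    try:
        token_id, mac = token.split(".")
    except ValueError:
        return False
    expected = hmac.new(SIGNING_KEY, token_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)

token = issue_token()          # handed to the subscriber at payment time
print(is_active(token))        # True: the line stays on
print(is_active("bogus.mac"))  # False
```

A production design would likely go further, for example using blind signatures so that even the issuing side cannot link a token back to the payment that bought it.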
The company’s privacy policy notes that information will only be shared when necessary to operate the service or when legally compelled. By limiting the amount of data collected from the start, Phreeli argues that there is little information available even in the event of legal requests.
Phreeli was founded by Nicholas Merrill, who previously operated an internet service provider and became involved in a prolonged legal dispute after challenging a government demand for user information. That experience reportedly influenced the company’s data-minimization philosophy.
While services that prioritize anonymity are often associated with misuse, Phreeli states that it actively monitors for abusive behavior. Accounts involved in robocalling or scams may face restrictions or suspension.
As concerns mount around digital surveillance and commercial data harvesting, Phreeli's launch sets the stage for a broader discussion about privacy in everyday communication. Whether this model gains mainstream adoption remains uncertain, but it introduces a notable shift in how mobile services can be structured in the United States.
The FTC has denied Support King founder Scott Zuckerman's request to cancel the ban on his surveillance business, which also applies to the subsidiaries OneClickMonitor and SpyFone.
The FTC announced the decision in a recent press release; Zuckerman had petitioned the agency to cancel the ban order in July 2025.
In 2021, the FTC banned Zuckerman from "offering, promoting, selling, or advertising any surveillance app, service, or business" and barred him from running any other stalkerware business. Zuckerman was also required to delete all the data stored by SpyFone and to undergo audits verifying the cybersecurity measures at his ventures.
In his petition, Zuckerman said that the FTC mandate has caused monetary losses and made it difficult for him to conduct other businesses, even though Support King is out of business and he now operates only a restaurant while planning other ventures.
The ban stems from a 2018 incident in which a researcher discovered a SpyFone Amazon S3 bucket that left sensitive data, including selfies, chats, texts, contacts, passwords, logins, and audio recordings, exposed on the open internet. The leaked data comprised 44,109 email addresses.
Samuel Levine, then acting director of the FTC's Bureau of Consumer Protection, said at the time, "SpyFone is a brazen brand name for a surveillance business that helped stalkers steal private information." He added that the "stalkerware was hidden from device owners, but was fully exposed to hackers who exploited the company's slipshod security."
According to TechCrunch, Zuckerman went on to run another stalkerware operation after the 2021 order. In 2022, TechCrunch obtained breached data from the stalkerware application SpyTrac.
The data showed that SpyTrac was run by freelance developers with direct links to Support King, apparently in an attempt to evade the FTC ban. The breached data also contained records from SpyFone, which Support King was supposed to have deleted, as well as access keys to the cloud storage of OneClickMonitor, another stalkerware application.
The Indian government is pushing a telecom industry proposal that would compel smartphone companies to enable satellite-based location tracking, kept active around the clock, for surveillance purposes.
Tech giants Samsung, Google, and Apple have opposed the move over privacy concerns. Privacy debates had already stirred in India after the government was forced to withdraw an order mandating that smartphone companies pre-install a state-run cyber-safety application on all devices, with activists and opposition politicians raising concerns about possible spying.
The government has been concerned that agencies do not receive accurate locations when legal requests are sent to telecom companies during investigations. Carriers currently rely on cellular tower data, which provides only an estimated area and can at times be inaccurate.
The Cellular Operators Association of India (COAI), which represents Bharti Airtel and Reliance Jio, suggested that accurate user locations could be provided if the government mandated smartphone firms to keep A-GPS technology, which combines cellular data and satellite signals, switched on.
If implemented, location services would be activated on smartphones with no option to disable them. Samsung, Google, and Apple strongly oppose the proposal, and according to the India Cellular & Electronics Association (ICEA), a lobbying group representing Google and Apple, no comparable location-tracking mandate exists anywhere else in the world.
Reuters reached out to India's IT and home ministries for clarity on the telecom industry's proposal but received no reply. According to digital forensics expert Junade Ali, the "proposal would see phones operate as a dedicated surveillance device."
According to technology experts, A-GPS, which is normally activated only when specific apps are running or emergency calls are made, could give authorities location data accurate enough to track a person to within a meter.
Governments worldwide are constantly looking for new ways to track the movements and data of mobile users; all mobile phones sold in Russia, for example, are mandated to ship with a state-sponsored communications app installed. With 735 million smartphones as of mid-2025, India is the second-largest mobile market in the world.
According to Counterpoint Research, more than 95% of these devices run Google's Android operating system, while the rest run Apple's iOS.
Apple and Google cautioned that their user bases include members of the armed forces, judges, business executives, and journalists, whose devices store sensitive data, and that the proposed location tracking would jeopardize their security.
According to the telecom industry, even the existing, less precise method of location tracking is becoming troublesome because smartphone manufacturers notify users via pop-up messages that their "carrier is trying to access your location."
Brave has started testing a new feature that allows its built-in assistant, Leo, to carry out browsing activities on behalf of the user. The capability is still experimental and is available only in the Nightly edition of the browser, which serves as Brave’s testing environment for early features. Users must turn on the option manually through Brave’s internal settings page before they can try it.
The feature introduces what Brave calls agentic AI browsing. In simple terms, it allows Leo to move through websites, gather information, and complete multi-step tasks without constant user input. Brave says the tool is meant to simplify activities such as researching information across many sites, comparing products online, locating discount codes, and creating summaries of current news. The company describes this trial as its initial effort to merge active AI support with everyday browsing.
Brave has stated openly that this technology comes with serious security concerns. Agentic systems can be manipulated by malicious websites through a method known as prompt injection, which attempts to make the AI behave in unsafe or unintended ways. The company warns that users should not rely on this mode for important decisions or any activity involving sensitive information, especially while it remains in early testing.
To limit these risks, Brave has placed the agent in its own isolated browser profile. This means the AI does not share cookies, saved logins, or browsing data from the user’s main profile. The agent is also blocked from areas that could create additional vulnerabilities. It cannot open the browser’s settings page, visit sites that do not use HTTPS, interact with the Chrome Web Store, or load pages that Brave’s safety system identifies as dangerous. Whenever the agent attempts a task that might expose the user to risk, the browser will display a warning and request the user’s confirmation.
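Brave has not published the agent's internals, but the restrictions described above amount to a pre-navigation gate. A rough sketch of such a gate (the block lists and the risk check are illustrative stand-ins, not Brave's actual code) might look like this:

```python
from urllib.parse import urlparse

# Illustrative sketch of a pre-navigation gate enforcing rules like those
# Brave describes; hosts, schemes, and the danger check are hypothetical.
BLOCKED_HOSTS = {"chromewebstore.google.com"}
BLOCKED_SCHEMES = {"brave", "chrome"}  # e.g. brave://settings

def flagged_as_dangerous(url: str) -> bool:
    """Stand-in for the browser's safety service."""
    return False

def may_navigate(url: str) -> tuple[bool, str]:
    parts = urlparse(url)
    if parts.scheme in BLOCKED_SCHEMES:
        return False, "internal pages are off-limits to the agent"
    if parts.scheme != "https":
        return False, "non-HTTPS sites are blocked"
    if parts.hostname in BLOCKED_HOSTS:
        return False, "extension store is blocked"
    if flagged_as_dangerous(url):
        return False, "flagged as dangerous; ask the user first"
    return True, "ok"

print(may_navigate("http://example.com"))   # (False, 'non-HTTPS sites ...')
print(may_navigate("brave://settings"))     # (False, 'internal pages ...')
print(may_navigate("https://example.com"))  # (True, 'ok')
```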
Brave has added further oversight through what it calls an alignment checker. This is a separate monitoring system that evaluates whether the AI’s actions match what the user intended. Since the checker operates independently, it is less exposed to manipulation that may affect the main agent. Brave also plans to use policy-based restrictions and models trained to resist prompt-injection attempts to strengthen the system’s defenses. According to the company, these protections are designed so that the introduction of AI does not undermine Brave’s existing privacy promises, including its no-logs policy and its blocking of ads and trackers.
Users interested in testing the feature can enable it by installing Brave Nightly and turning on the “Brave’s AI browsing” option from the experimental flags page. Once activated, a new button appears inside Leo’s chat interface that allows users to launch the agentic mode. Brave has asked testers to share feedback and has temporarily increased payments on its HackerOne bug bounty program for security issues connected to AI browsing.
Many people believe they are safe online once they disable cookies, switch on private browsing, or limit app permissions. Yet these steps do not prevent one of the most persistent tracking techniques used today. Modern devices reveal enough technical information for websites to recognise them with surprising accuracy, and users can see this for themselves with a single click using publicly available testing tools.
This practice is known as device fingerprinting. It collects many small and unrelated pieces of information from your phone or computer, such as the type of browser you use, your display size, system settings, language preferences, installed components, and how your device handles certain functions. None of these details identify you directly, but when a large number of them are combined, they create a pattern that is specific to your device. This allows trackers to follow your activity across different sites, even when you try to browse discreetly.
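Conceptually, a fingerprint is just a stable hash over many such attributes. The toy sketch below (attribute names and values are illustrative) shows how bland details combine into a durable identifier, and why randomising a value per session, as some privacy browsers do, breaks the match:

```python
import hashlib
import json
import random

# Toy sketch: combine individually bland attributes into one identifier.
attributes = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "screen": "2560x1440x24",
    "timezone": "Europe/Berlin",
    "languages": ["de-DE", "en-US"],
    "fonts_sample": ["Arial", "Calibri", "Comic Sans MS"],
    "canvas_hash": "9f2b1c",  # how this device renders a test image
}

def fingerprint(attrs: dict) -> str:
    canonical = json.dumps(attrs, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:16]

print(fingerprint(attributes))  # stable across visits, hence trackable

# Anti-fingerprinting browsers randomise noisy attributes each session,
# so the combined hash no longer stays stable:
attributes["canvas_hash"] = f"{random.getrandbits(24):06x}"
print(fingerprint(attributes))  # different every session
```

Real trackers weight dozens of such signals, which is why randomising only one or two of them is rarely enough on its own.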
The risk is not just about being observed. Once a fingerprint becomes associated with a single real-world action, such as logging into an account or visiting a page tied to your identity, that unique pattern can then be connected back to you. From that point onward, any online activity linked to that fingerprint can be tied to the same person. This makes fingerprinting an effective tool for profiling behaviour over long periods of time.
Growing concerns around online anonymity are making this issue more visible. Recent public debates about identity checks, age verification rules, and expanded monitoring of online behaviour have already placed digital privacy under pressure. Fingerprinting adds an additional layer of background tracking that does not rely on traditional cookies and cannot be easily switched off.
This method has also spread far beyond web browsers. Many internet-connected devices, including smart televisions and gaming systems, can reveal similar sets of technical signals that help build a recognisable device profile. As more home electronics become connected, these identifiers grow even harder for users to avoid.
Users can test their own exposure through tools such as the Electronic Frontier Foundation's Cover Your Tracks page. By selecting the option to analyse your browser, you will either receive a notice that your setup looks common or that it appears unique compared with others tested. A unique result means your device stands out strongly among the sample and can likely be recognised again. Another testing platform demonstrates just how many technical signals a website can collect within seconds, listing dozens of attributes that contribute to a fingerprint.
Some browsers attempt to make fingerprinting more difficult by randomising certain data points or limiting access to high-risk identifiers. These protections reduce the accuracy of device recognition, although they cannot completely prevent it. A virtual private network can hide your network address, but it cannot block the internal characteristics that form a fingerprint.
Tracking also happens through mobile apps and background services. Many applications collect usage and technical data, and privacy labels do not always make this clear to users. Studies have shown that complex privacy settings and permission structures often leave people unaware of how much information their devices share.
Users should also be aware of design features that shift them out of protected environments. For example, when performing a search through a mobile browser, some pages include prompts that encourage the user to open a separate application instead of continuing in the browser. These buttons are typically placed near navigation controls, making accidental taps more likely. Moving into a dedicated search app places users in a different data-collection environment, where protections offered by the browser may no longer apply.
While there is no complete way to avoid fingerprinting, users can limit their exposure by choosing browsers with built-in privacy protections, reviewing app permissions frequently, and avoiding unnecessary redirections into external applications. Ultimately, the choice depends on how much value an individual places on privacy, but understanding how this technology works is the first step toward reducing risk.
Large online platforms are rapidly shifting to biometric age assurance systems, creating a scenario where users may lose access to their accounts or risk exposing sensitive personal information if automated systems make mistakes.
Online platforms have struggled for decades with how to keep underage users away from adult-oriented content. Everything from graphic music tracks on Spotify to violent clips circulating on TikTok has long been available with minimal restrictions.
Recent regulatory pressure has changed this landscape. Laws such as the United Kingdom’s Online Safety Act and new state-level legislation in the United States have pushed companies including Reddit, Spotify, YouTube, and several adult-content distributors to deploy AI-driven age estimation and identity verification technologies. Pornhub’s parent company, Aylo, is also reevaluating whether it can comply with these laws after being blocked in more than a dozen US states.
These new systems require users to hand over highly sensitive personal data. Age estimation relies on analyzing one or more facial photos to infer a user’s age. Verification is more exact, but demands that the user upload a government-issued ID, which is among the most sensitive forms of personal documentation a person can share online.
Both methods depend heavily on automated facial recognition algorithms. The absence of human oversight or robust appeals mechanisms magnifies the consequences when these tools misclassify users. Incorrect age estimation can cut off access to entire categories of content or trigger more severe actions. Similar facial analysis systems have been used for years in law enforcement and in consumer applications such as Google Photos, with well-documented risks and misidentification incidents.
Refusing these checks often comes with penalties. Many services will simply block adult content until verification is completed. Others impose harsher measures. Spotify, for example, warns that accounts may be deactivated or removed altogether if age cannot be confirmed in regions where the platform enforces a minimum age requirement. According to the company, users are given ninety days to complete an ID check before their accounts face deletion.
This shift raises pressing questions about the long-term direction of these age enforcement systems. Companies frequently frame them as child-safety measures, but users are left wondering how long these platforms will retain the biometric data they collect, and whether it will ever be deleted. Corporate promises can be short-lived. Numerous abandoned websites still leak personal data years after shutting down. The 23andMe bankruptcy renewed fears among genetic testing customers about what happens to their information if a company collapses. And even well-intentioned apps can create hazards. A safety-focused dating application called Tea ended up exposing seventy-two thousand users' selfies and ID photos after a data breach.
Even when companies publicly state that they do not retain facial images or ID scans, risks remain. Discord recently revealed that age verification materials, including seventy thousand IDs, were compromised after a third-party contractor called 5CA was breached.
Platforms assert that user privacy is protected by strong safeguards, but the details often remain vague. When asked how YouTube secures age assurance data, Google offered only a general statement claiming that it employs advanced protections and allows users to adjust their privacy settings or delete data. It did not specify the precise security controls in place.
Spotify has outsourced its age assurance system to Yoti, a digital identity provider. The company states that it does not store facial images or ID scans submitted during verification. Yoti receives the data directly and deletes it immediately after the evaluation, according to Spotify. The platform retains only minimal information about the outcome: the user’s age in years, the method used, and the date the check occurred. Spotify adds that it uses measures such as pseudonymization, encryption, and limited retention policies to prevent unauthorized access. Yoti publicly discloses some technical safeguards, including use of TLS 1.2 by default and TLS 1.3 where supported.
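What Spotify describes is a data-minimisation pattern: the raw biometric leaves the pipeline immediately and only a small outcome record survives. A rough sketch of such a record follows; the field names are assumptions for illustration, not Spotify's or Yoti's actual schema:

```python
from dataclasses import dataclass
from datetime import date

# Sketch of the data-minimisation pattern Spotify describes: the vendor
# sees the face image or ID, but the platform stores only the outcome.
@dataclass(frozen=True)
class AgeCheckOutcome:
    pseudonymous_user_id: str  # pseudonymised, not the real account key
    age_years: int             # the only biometric-derived value kept
    method: str                # "facial_estimation" or "id_document"
    checked_on: date

outcome = AgeCheckOutcome(
    pseudonymous_user_id="u_7f3a",
    age_years=27,
    method="facial_estimation",
    checked_on=date.today(),
)
# The facial image itself never reaches this system; per Spotify, the
# vendor deletes it immediately after producing the age result.
```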
Privacy specialists argue that these assurances are insufficient. Adam Schwartz, privacy litigation director at the Electronic Frontier Foundation, told PCMag that facial scanning systems represent an inherent threat, regardless of whether they are being used to predict age, identity, or demographic traits. He reiterated the organization’s stance supporting a ban on government deployment of facial recognition and strict regulation for private-sector use.
Schwartz raises several issues. Facial age estimation is imprecise by design, meaning it will inevitably classify some adults as minors and deny them access. Errors in facial analysis also tend to fall disproportionately on specific groups. Misidentification incidents involving people of color and women are well documented. Google Photos once mislabeled a Black software engineer and his friend as animals, underlining systemic flaws in training data and model accuracy. These biases translate directly into unequal treatment when facial scans determine whether someone is allowed to enter a website.
He also warns that widespread facial scanning increases privacy and security risks because faces function as permanent biometric identifiers. Unlike passwords, a person cannot replace their face if it becomes part of a leaked dataset. Schwartz notes that at least one age verification vendor has already suffered a breach, underscoring material vulnerabilities in the system.
Another major problem is the absence of meaningful recourse when AI misjudges a user’s age. Spotify’s approach illustrates the dilemma. If the algorithm flags a user as too young, the company may lock the account, enforce viewing restrictions, or require a government ID upload to correct the error. This places users in a difficult position, forcing them to choose between potentially losing access or surrendering more sensitive data.
In the meantime, do not upload identity documents unless required, check a platform's published privacy and retention statements before complying, and use account recovery channels if you believe an automated decision is wrong. Companies and regulators, for their part, must do better at reducing vendor exposure, increasing transparency, and ensuring appeals are effective.
Despite these growing concerns, users continue to find ways around verification tools. Discord users have discovered that uploading photos of fictional characters can bypass facial age checks. Virtual private networks remain a viable method for accessing age-restricted platforms such as YouTube, just as they help users access content that is regionally restricted. Alternative applications like NewPipe offer similar functionality to YouTube without requiring formal age validation, though these tools often lack the refinement and features of mainstream platforms.
The more we share online, the easier it becomes for attackers to piece together our personal lives. Photos, location tags, daily routines, workplace details, and even casual posts can be combined to create a fairly accurate picture of who we are. Cybercriminals use this information to imitate victims, trick service providers, and craft convincing scams that look genuine. When someone can guess where you spend your time or what services you rely on, they can more easily pretend to be you and manipulate systems meant to protect you. Reducing what you post publicly is one of the simplest steps to lower this risk.
Weak passwords add another layer of vulnerability, but a recent industry assessment has shown that the problem is not only with users. Many of the most visited websites do not enforce strong password requirements. Some platforms do not require long passwords, special characters, or case sensitivity. This leaves accounts easier to break into through automated attacks. Experts recommend that websites adopt stronger password rules, introduce passkey options, and guide users with clear indicators of password strength. Users can improve their own security by relying on password managers, creating long unique passwords, and enabling two-factor authentication wherever possible.
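The recommended rules translate directly into a server-side policy check. The minimal sketch below uses illustrative thresholds, not any particular site's published requirements:

```python
import string

# Minimal sketch of the password rules the assessment recommends sites
# enforce; the exact thresholds here are illustrative choices.
def password_issues(pw: str) -> list[str]:
    issues = []
    if len(pw) < 12:
        issues.append("use at least 12 characters")
    if pw.lower() == pw or pw.upper() == pw:
        issues.append("mix upper- and lower-case letters")
    if not any(c in string.punctuation for c in pw):
        issues.append("add at least one special character")
    if not any(c.isdigit() for c in pw):
        issues.append("add at least one digit")
    return issues  # an empty list means the password passes this policy

print(password_issues("sunshine"))               # fails every rule
print(password_issues("C0rrect-Horse-Battery!")) # []
```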
Concerns about device security are also increasing. Several governments have begun reviewing whether certain networking devices introduce national security risks, especially when the manufacturers are headquartered in countries that have laws allowing state access to data. These investigations have sparked debates over how consumer hardware is produced, how data flows through global supply chains, and whether companies can guarantee independence from government requests. For everyday users, this tension means it is important to select routers and other digital devices that receive regular software updates, publish clear security policies, and have a history of addressing vulnerabilities quickly.
Another rising threat is ransomware. Criminal groups continue to target both individuals and large organisations, encrypting data and demanding payment for recovery. Recent cases involving individuals with cybersecurity backgrounds show how profitable illicit markets can attract even trained professionals. Because attackers now operate with high levels of organisation, users and businesses should maintain offline backups, restrict access within internal networks, and test their response plans in advance.
Privacy concerns are also emerging in the travel sector, where airline data practices are drawing scrutiny. Legal restrictions prevent travel companies from selling passenger information directly to government programs, so several airlines jointly rely on an intermediary that acts as a broker. Reports show that this broker had been distributing data for years but only recently registered itself as a data broker, as legally required. Users can request removal from this data-sharing system by emailing the broker's privacy address and completing identity verification, and they should keep copies of all correspondence and confirmations for reference.
Finally, several governments are exploring digital identity systems that would allow residents to store official identification on their phones. Although convenient, this approach raises significant privacy risks. Digital IDs place sensitive information in one central location, and if the surrounding protections are weak, the data could be misused for tracking or monitoring. Strong legal safeguards, transparent data handling rules, and external audits are essential before such systems are implemented.
Experts warn that centralizing identity raises the stakes of any breach and could enable tracking unless strict limits and user controls are enforced. Policymakers must balance convenience with strong technical and legal protections.
Practical, immediate steps to take:
1. Reduce public posts that reveal routines or precise locations.
2. Use a password manager and unique, long passwords.
3. Turn on two-factor authentication for important accounts.
4. Maintain offline backups and test recovery procedures.
5. Check privacy policies of travel brokers and submit opt-out requests if you want to limit data sharing.
6. Prefer devices with clear update policies and documented security practices.
These measures lower the chance that routine online activity becomes a direct route into your accounts or identity. Overall, users can strengthen their protection by sharing less online, reviewing how their travel data is handled, and staying informed about the implications of digital identification. Small, consistent actions greatly reduce the likelihood of becoming a victim of cyber threats.
Recent attacks on WhatsApp allege that Meta's mega-messenger collects user data to generate ad money. WhatsApp strongly denies these fresh accusations, but it did not help that a message of its own appeared to imply the same.
The recent attacks have two prominent origins, and few critics are as well-known as Elon Musk, particularly when the attack plays out on X, the platform he owns. Musk asserted on the Joe Rogan Experience that "WhatsApp knows enough about what you're texting to know what ads to show you," adding, "That is a serious security flaw."
These so-called "hooks for advertising" are generally thought to rely on metadata: information about who messages whom, when, and how frequently, combined with data from other sources that feeds into a user's profile.
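To see why metadata stays useful to the platform even when content is unreadable, consider what a relay server actually handles. The sketch below uses the `cryptography` package's Fernet for brevity; WhatsApp actually uses the Signal protocol, so this is purely illustrative:

```python
from datetime import datetime, timezone
from cryptography.fernet import Fernet  # pip install cryptography

# Illustrative only: WhatsApp uses the Signal protocol, not Fernet.
# The point is what the server can and cannot see.
key = Fernet.generate_key()  # with E2EE, only the two endpoints hold keys
cipher = Fernet(key)

envelope = {
    # --- metadata: visible to the platform, enough to infer interests ---
    "sender": "+1555010001",
    "recipient": "+1555010002",
    "sent_at": datetime.now(timezone.utc).isoformat(),
    # --- content: opaque ciphertext the platform cannot read ---
    "payload": cipher.encrypt(b"let's look at running shoes this weekend"),
}

print(envelope["payload"][:20], "...")  # gibberish to the server
print(envelope["sender"], "->", envelope["recipient"], envelope["sent_at"])
```

Everything above the payload line is what profiling and ad targeting can draw on, with no need to break the encryption itself.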
The message content itself is shielded by end-to-end encryption, which is the default setting for all 3 billion WhatsApp users. Signal's open-source encryption protocol, which the Meta platform adopted and modified for its own use, is the foundation of WhatsApp's security. So, in light of these new attacks, do you suddenly need to stop using WhatsApp?
In reality, WhatsApp's content is completely encrypted. There has never been any proof that Meta, WhatsApp, or anybody else can read the content itself. However, the platform you are utilizing is controlled by Meta, and it is aware of your identity. It does gather information on how you use the platform.
Additionally, it shares information with Meta so that it can "show relevant offers/ads." Signal has only a small fraction of WhatsApp's user base, but it does not gather metadata in the same manner; consider using Signal instead for sensitive content. Steer clear of Telegram, which is not end-to-end encrypted by default, and of RCS, which is not yet cross-platform encrypted.
Remember that end-to-end encryption only safeguards your data while it is in transit. It has no effect on the security of your content on the device. I can read all of your messages, whether or not they are end-to-end encrypted, if I have control over your iPhone or Android.