Imagine walking into your local supermarket to buy a two-litre bottle of milk. You pay $3, but the person ahead of you pays $3.50, and the next shopper pays only $2. While this might sound strange, it reflects a growing practice known as surveillance pricing, where companies use personal data and artificial intelligence (AI) to determine how much each customer should pay. The practice is already routine, and because every shopper is subject to it, it is worth understanding how it works.
What is surveillance pricing?
Surveillance pricing refers to the use of digital tracking and AI to set individualised prices based on consumer behaviour. By analysing a person’s online activity, shopping habits, and even technical details like their device or location, retailers estimate each customer’s “pain point”, the maximum amount they are likely to pay for a product or service.
A recent report from the U.S. Federal Trade Commission (FTC) highlighted that businesses can collect such information through website pixels, cookies, account registrations, or email sign-ups. These tools allow them to observe browsing time, clicks, scrolling speed, and even mouse movements. Together, these insights reveal how interested a shopper is in a product, how urgent their need may be, and how much they can be charged without hesitation.
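To make this concrete, here is a minimal sketch of how such behavioural signals might be folded into a single "interest score". The signal names, weights, and thresholds are all invented for illustration; real systems are proprietary and far more sophisticated.

```python
# A toy "interest score" built from behavioural signals of the kind the
# FTC report describes. All weights and cut-offs are hypothetical.

from dataclasses import dataclass

@dataclass
class SessionSignals:
    seconds_on_page: float   # browsing time
    product_clicks: int      # clicks on product details
    scroll_speed: float      # pixels/second; slow scrolling suggests reading
    revisits: int            # how often the shopper came back to the item

def interest_score(s: SessionSignals) -> float:
    """Crude weighted score in [0, 1]; higher = more likely to buy."""
    score = 0.0
    score += min(s.seconds_on_page / 300, 1.0) * 0.4       # dwell time
    score += min(s.product_clicks / 10, 1.0) * 0.3          # engagement
    score += (0.1 if s.scroll_speed < 200 else 0.0)         # careful reading
    score += min(s.revisits / 3, 1.0) * 0.2                 # repeated interest
    return score

print(interest_score(SessionSignals(240, 6, 120, 2)))  # ~0.73
```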
Growing concerns about fairness
In mid-2024, Delta Air Lines disclosed that a small percentage of its domestic ticket pricing was already determined using AI, with plans to expand this method to more routes. The revelation led U.S. lawmakers to question whether customer data was being used to charge certain passengers higher fares. Although Delta stated that it does not use AI for “predatory or discriminatory” pricing, the issue drew attention to how such technology could reshape consumer costs.
Former FTC Chair Lina Khan has also warned that some businesses can predict each consumer’s willingness to pay by analysing their digital patterns. This ability, she said, could allow companies to push prices to the upper limit of what individuals can afford, often without their knowledge.
How does it work?
AI-driven pricing systems use vast amounts of data, including login details, purchase history, device type, and location, to classify shoppers by "price sensitivity." The software then tests different price levels to see which one yields the highest profit.
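As a toy illustration of this price testing, the sketch below uses a simple epsilon-greedy strategy to learn which of several candidate prices earns the most revenue from a simulated shopper segment. The prices and the demand model are invented for the example and stand in for the proprietary systems the FTC describes.

```python
# Epsilon-greedy price testing against a simulated shopper segment.
# Candidate prices and the demand curve are made up for illustration.

import random

PRICES = [1.99, 2.49, 2.99, 3.49]      # candidate prices to test
revenue = {p: 0.0 for p in PRICES}     # cumulative revenue per price
trials = {p: 0 for p in PRICES}        # times each price was shown

def simulated_purchase(price: float) -> bool:
    """Stand-in for a real shopper: the higher the price, the lower
    the probability of a purchase."""
    return random.random() < max(0.0, 1.2 - 0.3 * price)

def choose_price(epsilon: float = 0.1) -> float:
    """Mostly exploit the best-earning price so far, occasionally explore."""
    if random.random() < epsilon:
        return random.choice(PRICES)
    return max(PRICES, key=lambda p: revenue[p] / trials[p] if trials[p] else 0.0)

for _ in range(10_000):
    p = choose_price()
    trials[p] += 1
    if simulated_purchase(p):
        revenue[p] += p

best = max(PRICES, key=lambda p: revenue[p] / trials[p] if trials[p] else 0.0)
print(f"most profitable tested price: ${best}")
```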
The FTC’s surveillance pricing study documented several real-world examples of this practice.
Real-world examples and evidence
Ride-hailing platforms have long faced questions about this kind of data-driven pricing. In 2016, Uber’s former head of economic research noted that users with low battery life were more likely to accept surge pricing. A 2023 Belgian newspaper investigation later reported small differences in Uber fares depending on a phone’s battery level. Uber denied that battery status affects fares, saying its prices depend only on driver supply and ride demand.
Is this new?
The concept itself isn’t new. Dynamic pricing has existed for decades, but digital surveillance has made it far more sophisticated. In the early 2000s, Amazon experimented with varying prices for DVDs based on browsing data, sparking backlash from consumers who discovered the differences. Similarly, the UK’s Norwich Union once used satellite tracking for a “Pay As You Drive” car insurance model, which was discontinued after privacy concerns.
The future of pricing
Today’s combination of big data and AI allows retailers to create precise, individualised pricing models that adjust instantly. Experts warn this could undermine fair competition, reduce transparency, and widen inequality between consumers. Regulators like the FTC are now studying these systems closely to understand their impact on market fairness and consumer privacy.
For shoppers, awareness is key. Comparing prices across devices, clearing cookies, and using privacy tools can help reduce personal data tracking. As AI continues to shape how businesses price their products, understanding surveillance pricing is becoming essential for protecting both privacy and the wallet.
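A rough sketch like the one below can automate part of the price-comparison advice: it requests the same product page with different User-Agent headers and a fresh, cookie-free session each time, then compares the advertised price. The URL and the price-extraction pattern are placeholders to adapt to the site in question.

```python
# Compare the price shown to different "devices" for the same product page.
# The URL and the naive price regex are placeholders for this sketch.

import re
import requests

URL = "https://example.com/product/123"   # placeholder product page
USER_AGENTS = {
    "desktop": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "phone":   "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X)",
}

for label, ua in USER_AGENTS.items():
    # A fresh Session per request means no cookies carry over between runs.
    with requests.Session() as session:
        resp = session.get(URL, headers={"User-Agent": ua}, timeout=10)
    match = re.search(r"\$\d+\.\d{2}", resp.text)   # naive price pattern
    print(label, match.group(0) if match else "no price found")
```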
FTC targets data brokers over location tracking
The US Federal Trade Commission (FTC) has filed actions against two US-based data brokers for allegedly engaging in illegal tracking of users' location data. The data was reportedly used to trace individuals in sensitive locations such as hospitals, churches, military bases, and other protected areas. It was then sold for purposes including advertising, political campaigns, immigration enforcement, and government use.
The Georgia-based data broker, Mobilewalla, has been accused of tracking residents of domestic abuse shelters and protestors during the George Floyd demonstrations in 2020. According to the FTC, Mobilewalla allegedly attempted to identify protestors’ racial identities by tracing their smartphones. The company’s actions raise serious privacy and ethical concerns.
The FTC also suspects Gravy Analytics and its subsidiary Venntel of misusing customer location data without consent. Reports indicate they used this data to “unfairly infer health decisions and religious beliefs,” as highlighted by TechCrunch. These actions have drawn criticism for their potential to exploit sensitive personal information.
The FTC revealed that Gravy Analytics collected over 17 billion location signals from more than 1 billion smartphones daily. The data was allegedly sold to federal law enforcement agencies such as the Drug Enforcement Agency (DEA), the Department of Homeland Security (DHS), and the Federal Bureau of Investigation (FBI).
Samuel Levine, Director of the FTC’s Bureau of Consumer Protection, stated, “Surreptitious surveillance by data brokers undermines our civil liberties and puts servicemembers, union workers, religious minorities, and others at risk. This is the FTC’s fourth action this year challenging the sale of sensitive location data, and it’s past time for the industry to get serious about protecting Americans’ privacy.”
As part of two settlements announced by the FTC, Mobilewalla and Gravy Analytics will cease collecting sensitive location data from customers. They are also required to delete the historical data they have amassed about millions of Americans over time.
The settlements mandate that the companies establish a sensitive location data program to identify protected sites and restrict the tracking and disclosure of customer information gathered there. These protected areas include religious organizations, medical facilities, schools, and other sensitive sites.
Additionally, the FTC’s order requires the companies to maintain a supplier assessment program to ensure consumers have provided consent for the collection and use of data that reveals their precise location or mobile device information.
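The orders do not prescribe an implementation, but a sensitive location data program implies filtering of the kind sketched below: discard any location signal that falls within a buffer around a protected site. The coordinates and the 100-metre radius are illustrative assumptions.

```python
# A sketch of a sensitive-location filter: drop location signals that fall
# within a buffer around a protected site. Sites and radius are illustrative.

from math import radians, sin, cos, asin, sqrt

SENSITIVE_SITES = [  # (lat, lon) of e.g. clinics, schools, places of worship
    (40.7128, -74.0060),
    (34.0522, -118.2437),
]
RADIUS_M = 100.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

def is_sensitive(lat, lon):
    return any(haversine_m(lat, lon, s_lat, s_lon) <= RADIUS_M
               for s_lat, s_lon in SENSITIVE_SITES)

signals = [(40.7129, -74.0061), (41.0, -75.0)]
kept = [p for p in signals if not is_sensitive(*p)]
print(kept)   # only the signal far from any protected site survives
```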
OpenAI and copyrighted training data
OpenAI has admitted that developing ChatGPT would not have been feasible without the use of copyrighted content to train its algorithms. It is widely known that artificial intelligence (AI) systems rely heavily on social media content for their development; indeed, AI has become an essential tool for many social media platforms.
QR code scams on the rise
QR codes, the digital jumble of black and white squares frequently used to encode URLs, are everywhere: on restaurant menus, in retail stores, and beyond. But the Federal Trade Commission cautioned on Thursday that they can be dangerous for those who aren't careful.
According to a report by eMarketer, around 94 million US consumers have used a smartphone QR code scanner this year. The number is only increasing, with around 102.6 million users anticipated by 2026.
According to Alvaro Puig, a consumer education specialist with the FTC, QR codes are popular because there are countless ways to use them.
“Unfortunately, scammers hide harmful links in QR codes to steal personal information,” Puig said.
The stolen data can be misused by threat actors in numerous ways. According to a separate FTC report, identity thieves can use a victim's personal data to file fraudulent tax returns in their name and pocket the refunds, drain their bank accounts, charge their credit cards, open new utility accounts, and obtain medical treatment on their health insurance.
In some cases, criminals paste their own QR codes over legitimate ones in places like parking meters, or send codes via text message or email, luring victims into scanning them.
One infamous tactic scammers use is creating a sense of urgency. For example, they might claim that a package could not be delivered and that you need to reschedule, or that you must change your account password because of suspicious activity.
“A scammer’s QR code could take you to a spoofed site that looks real but isn’t,” Puig wrote. “And if you log in to the spoofed site, the scammers could steal any information you enter. Or the QR code could install malware that steals your information before you realize it.”
According to the FTC, a few simple measures can help protect against these scams: inspect the URL before opening it, don't scan a QR code that arrives in an unexpected email or text, and keep your phone's software and account protections up to date.
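The first of those tips, inspecting the URL, can be partly automated. The sketch below applies a few simplistic heuristics to a link decoded from a QR code; the heuristics and the shortener list are examples only, not a substitute for caution.

```python
# Simple red-flag checks for a URL decoded from a QR code. These heuristics
# are illustrative; treat any flagged link as suspect rather than proven safe.

from urllib.parse import urlparse

SHORTENERS = {"bit.ly", "tinyurl.com", "t.co"}   # common link shorteners

def suspicious(url: str) -> list[str]:
    reasons = []
    parsed = urlparse(url)
    if parsed.scheme != "https":
        reasons.append("not HTTPS")
    if parsed.hostname and parsed.hostname in SHORTENERS:
        reasons.append("shortened link hides the real destination")
    if parsed.hostname and parsed.hostname.startswith("xn--"):
        reasons.append("punycode domain may imitate a familiar name")
    if "@" in parsed.netloc:
        reasons.append("embedded credentials can disguise the real host")
    return reasons

print(suspicious("http://bit.ly/3abcde"))
# ['not HTTPS', 'shortened link hides the real destination']
```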
New breach reporting rules for financial institutions
The federal government recently amended the Safeguards Rule, which requires the creation, execution, and upkeep of an extensive security program to protect consumer data at non-banking financial institutions such as payday lenders, auto dealers, and mortgage brokers. Under the amendment, organizations must notify the Federal Trade Commission (FTC) "as soon as possible," and no later than 30 days after discovery, of any security breach involving the information of 500 or more consumers.
Organizations must report to the FTC whenever a malicious or unauthorized party gains access to unencrypted customer data. Encrypted data triggers the same requirement if the hackers also obtained the encryption keys.
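Reduced to code, the trigger works roughly as follows; the function and parameter names are invented for illustration, and the rule text itself is the authority.

```python
# The reporting trigger described above, reduced to a predicate.
# Names are hypothetical; consult the Safeguards Rule before relying on this.

def must_notify_ftc(customers_affected: int,
                    data_encrypted: bool,
                    key_compromised: bool) -> bool:
    """Notification is required for incidents involving the unencrypted
    information of 500 or more consumers; encrypted data counts as
    unencrypted if the encryption key was also acquired."""
    effectively_unencrypted = (not data_encrypted) or key_compromised
    return customers_affected >= 500 and effectively_unencrypted

print(must_notify_ftc(1200, data_encrypted=True, key_compromised=True))   # True
print(must_notify_ftc(1200, data_encrypted=True, key_compromised=False))  # False
```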
The new regulation takes effect 180 days after its publication in the Federal Register, in May 2024.
The FTC further notes that, following the discovery of a security incident, non-banking financial institutions must use the FTC's online portal to report the pertinent information to the commission. A thorough breach report should include the identity and contact details of the reporting institution, the number of consumers affected, a description of the data exposed, the date of the exposure, and the duration of the incident.
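Those required contents map naturally onto a simple data structure, sketched below. The class and field names are hypothetical; the FTC's online portal defines the actual submission format.

```python
# A sketch of the required breach-report fields as a data structure.
# Field names are invented; the FTC portal defines the real format.

from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class BreachReport:
    institution_name: str
    contact_email: str
    customers_affected: int
    data_description: str     # what kinds of information were exposed
    exposure_date: date
    duration_days: int        # how long the incident lasted

report = BreachReport(
    institution_name="Example Lending LLC",
    contact_email="security@example.com",
    customers_affected=1200,
    data_description="names, SSNs, account numbers",
    exposure_date=date(2024, 6, 1),
    duration_days=14,
)
print(json.dumps(asdict(report), default=str, indent=2))
```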
Moreover, the amendment lets firms ask the FTC to withhold public disclosure of a breach if disclosure would jeopardize an ongoing investigation or national security; a law enforcement official may also request an additional 60-day delay before the information is made public.
The FTC's Bureau of Consumer Protection head, Samuel Levine, stressed that businesses that are entrusted with private financial data must be open and honest "if that information has been compromised." These businesses should be given "additional incentive" by the new disclosure obligation to actually protect the data of their customers.
In October 2021, the FTC released revised guidelines to improve data security while also inviting public feedback on a proposed supplemental amendment to the data breach reporting standards. That amendment was ultimately approved by a unanimous 3-0 vote.
FTC investigates ChatGPT
The Federal Trade Commission (FTC) has recently launched an investigation into ChatGPT, the popular language model developed by OpenAI. This move comes as a stark reminder of the growing concerns surrounding the potential pitfalls of artificial intelligence (AI) and the need for stringent regulations to protect consumers. The investigation was initiated in response to potential violations of consumer protection laws, raising important questions about the transparency and accountability of AI technologies.
According to the Washington Post, the FTC's investigation focuses on OpenAI's ChatGPT after it was allegedly involved in instances of providing misleading information to users. The specific incidents leading to the investigation have not been disclosed yet, but the potential consequences of AI systems spreading false or harmful information have raised alarms in both the tech industry and regulatory circles.
As AI technologies become more prevalent in our daily lives, concerns regarding their trustworthiness and accuracy have grown. ChatGPT, with its wide usage in various applications such as customer support, content creation, and online interactions, has emerged as one of the most prominent examples of AI's impact on society. However, incidents of misinformation and biased responses from the AI model have cast doubts on its reliability, leading to the FTC's intervention.
Lina Khan, the Chairwoman of the FTC, highlighted the importance of the investigation, stating, "AI systems have the potential to significantly impact consumers and their decision-making. It is vital that we understand the extent to which these technologies can be trusted and how they may influence individuals' choices."
OpenAI, the organization behind ChatGPT, has acknowledged the FTC's investigation and expressed cooperation with the authorities in a statement reported by Barron's. "We take these allegations seriously and are committed to ensuring the utmost transparency and accountability of our AI systems. We will collaborate fully with the FTC to address any concerns and ensure consumer confidence in our technology," the statement read.
FTC bans stalkerware maker SpyFone
A groundbreaking FTC order banned the stalkerware app SpyFone, along with its parent company Support King and its chief executive Scott Zuckerman, from the surveillance industry. The regulator's five sitting commissioners unanimously approved the order, which also required Support King to delete the phone data it had wrongfully obtained and to inform victims that its software had been covertly installed on their devices.
What is stalkerware?
Stalkerware, or spouseware, refers to apps covertly installed by someone with physical access to a person's phone, often under the guise of family tracking or child monitoring. These apps are designed to stay hidden from the home screen while silently uploading the phone's contents, including text messages, photos, browsing history, and precise location information.
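One common detection approach, sketched below under the assumption of an Android phone with USB debugging enabled, is to list installed packages over adb and flag any that match a watchlist of known stalkerware identifiers. The package names in the watchlist are placeholders; security researchers maintain the real indicator lists.

```python
# Flag installed Android packages that appear on a stalkerware watchlist.
# Requires adb and a connected device; watchlist entries are placeholders.

import subprocess

WATCHLIST = {"com.example.spyapp", "com.example.tracker"}  # hypothetical IDs

def installed_packages() -> set[str]:
    out = subprocess.run(
        ["adb", "shell", "pm", "list", "packages"],
        capture_output=True, text=True, check=True,
    ).stdout
    # each line looks like "package:com.android.chrome"
    return {line.partition(":")[2].strip() for line in out.splitlines() if line}

suspects = installed_packages() & WATCHLIST
print("possible stalkerware:", suspects or "none found")
```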
Ironically, several stalkerware apps, such as KidsGuard, TheTruthSpy, and Xnspy, have security flaws of their own that expose the private data of thousands of people to even greater risk.
SpyFone was among them: its unprotected cloud storage server leaked private information taken from more than 2,000 victims' phones, prompting the FTC investigation and the ensuing ban on Support King and its CEO Zuckerman from offering, distributing, promoting, or otherwise facilitating the sale of spy apps.
TechCrunch has since received further tranches of data, including data from the internal servers of the stalkerware operation SpyTrac, which is run by developers associated with Support King.
Turmoil at Twitter
At Twitter, as we all know by now, a lot is going on. Half of the employees were laid off after Elon Musk took over the business, and several more top executives have quit as Musk implements measures to make Twitter profitable.