
Meta Cleared of Monopoly Charges in FTC Antitrust Case

 

A U.S. federal judge ruled that Meta does not hold a monopoly in the social media market, rejecting the FTC's antitrust lawsuit seeking divestiture of Instagram and WhatsApp. The FTC, joined by multiple states, filed the suit in December 2020, alleging Meta (formerly Facebook) violated Section 2 of the Sherman Act by acquiring Instagram for $1 billion in 2012 and WhatsApp for $19 billion in 2014. 

These acquisitions, the FTC alleged, were part of a "buy-or-bury" strategy to eliminate rivals in "personal social networking services" (PSNS), stifling innovation, increasing ad loads, and weakening privacy. The agency claimed Meta's dominance left consumers with few alternatives, and its narrow market definition excluded platforms like TikTok and YouTube.

Trial and ruling

U.S. District Judge James Boasberg oversaw a seven-week trial ending in May 2025, featuring testimony from Meta CEO Mark Zuckerberg, who highlighted competition from TikTok and YouTube. In an 89-page opinion issued on November 18, 2025, Boasberg ruled that the FTC had failed to prove current monopoly power, noting the social media landscape's rapid evolution through new apps, new features, and AI-generated content. He emphasized that Meta's market share, below 50% and declining in a broader market that includes Snapchat, TikTok, and YouTube, showed no insulation from rivals.

Key arguments and evidence

The FTC presented internal emails suggesting Zuckerberg feared Instagram and WhatsApp as threats, arguing the acquisitions suppressed competition and harmed users through heavier ad loads and weaker privacy. Boasberg dismissed this, finding direct evidence such as supra-competitive profits or price hikes insufficient to prove a monopoly, and rejected the PSNS market definition as outdated given overlapping uses across apps. Meta countered that regulators approved the deals at the time and that forcing divestiture would hurt U.S. innovation.

Implications

Meta hailed the decision as affirming fierce competition and its contributions to growth, and it avoids operational upheaval for the company's 3.54 billion daily users. The FTC expressed disappointment and is reviewing its options; the ruling is a setback for antitrust enforcers amid wins against Google and ongoing cases against Apple and Amazon. Experts view it as reinforcing consumer-focused antitrust analysis in dynamic tech markets.

Surveillance Pricing: How Technology Decides What You Pay




Imagine walking into your local supermarket to buy a two-litre bottle of milk. You pay $3, but the person ahead of you pays $3.50, and the next shopper pays only $2. While this might sound strange, it reflects a growing practice known as surveillance pricing, where companies use personal data and artificial intelligence (AI) to determine how much each customer should pay. The practice is becoming routine, and because shoppers are directly subject to it, it is worth understanding how it works.


What is surveillance pricing?

Surveillance pricing refers to the use of digital tracking and AI to set individualised prices based on consumer behaviour. By analysing a person’s online activity, shopping habits, and even technical details like their device or location, retailers estimate each customer’s “pain point”: the maximum amount they are likely to pay for a product or service.

A recent report from the U.S. Federal Trade Commission (FTC) highlighted that businesses can collect such information through website pixels, cookies, account registrations, or email sign-ups. These tools allow them to observe browsing time, clicks, scrolling speed, and even mouse movements. Together, these insights reveal how interested a shopper is in a product, how urgent their need may be, and how much they can be charged without hesitation.


Growing concerns about fairness

In mid-2024, Delta Air Lines disclosed that a small percentage of its domestic ticket pricing was already determined using AI, with plans to expand this method to more routes. The revelation led U.S. lawmakers to question whether customer data was being used to charge certain passengers higher fares. Although Delta stated that it does not use AI for “predatory or discriminatory” pricing, the issue drew attention to how such technology could reshape consumer costs.

Former FTC Chair Lina Khan has also warned that some businesses can predict each consumer’s willingness to pay by analysing their digital patterns. This ability, she said, could allow companies to push prices to the upper limit of what individuals can afford, often without their knowledge.


How does it work?

AI-driven pricing systems use vast amounts of data, including login details, purchase history, device type, and location, to classify shoppers by “price sensitivity.” The software then tests different price levels to see which one yields the highest profit.
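To make the mechanism concrete, here is a minimal, purely hypothetical Python sketch of the two steps just described: bucketing a shopper by inferred price sensitivity and then testing a few markups to keep the one with the highest expected revenue. The signals, buckets, and conversion rates are invented for illustration and do not describe any real retailer's system.

```python
# Hypothetical sketch of an individualised-pricing engine. All signals,
# segments, and conversion rates are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ShopperSignals:
    device: str            # e.g. "iphone", "android", "desktop"
    views_of_item: int     # how often the shopper returned to the product page
    chose_fast_shipping: bool

def price_sensitivity(s: ShopperSignals) -> str:
    """Classify a shopper into a coarse sensitivity bucket from tracked signals."""
    score = 0
    score += 2 if s.chose_fast_shipping else 0   # urgency suggests low sensitivity
    score += 1 if s.views_of_item >= 3 else 0    # strong interest in the product
    score += 1 if s.device == "iphone" else 0    # device used as a crude income proxy
    return "low" if score >= 3 else "medium" if score >= 1 else "high"

# Assumed probability of purchase for each sensitivity bucket and markup.
PURCHASE_PROB = {
    "low":    {1.0: 0.60, 1.1: 0.58, 1.2: 0.55},
    "medium": {1.0: 0.55, 1.1: 0.45, 1.2: 0.30},
    "high":   {1.0: 0.50, 1.1: 0.25, 1.2: 0.10},
}

def pick_price(base_price: float, bucket: str) -> float:
    """Test each candidate markup and keep the one with the highest expected revenue."""
    best_markup = max(PURCHASE_PROB[bucket],
                      key=lambda m: base_price * m * PURCHASE_PROB[bucket][m])
    return round(base_price * best_markup, 2)

shopper = ShopperSignals(device="iphone", views_of_item=4, chose_fast_shipping=True)
bucket = price_sensitivity(shopper)   # -> "low" (least price-sensitive)
print(pick_price(3.00, bucket))       # the $3 bottle of milk is offered at $3.60
```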

The FTC’s surveillance pricing study revealed several real-world examples of this practice:

  1. Encouraging hesitant users: A betting website might detect when a visitor is about to leave and display new offers to convince them to stay.
  2. Targeting new buyers: A car dealership might identify first-time buyers and offer them different financing options or deals.
  3. Detecting urgency: A parent choosing fast delivery for baby products may be deemed less price-sensitive and offered fewer discounts.
  4. Withholding offers from loyal customers: Regular shoppers might be excluded from promotions because the system expects them to buy anyway.
  5. Monitoring engagement: If a user watches a product video for longer, the system might interpret it as a sign they are willing to pay more.


Real-world examples and evidence

Ride-hailing platforms have long faced questions about this kind of data-driven pricing. In 2016, Uber’s former head of economic research noted that users with low battery life were more likely to accept surge pricing. A 2023 Belgian newspaper investigation later reported small differences in Uber fares depending on a phone’s battery level. Uber denied that battery status affects fares, saying its prices depend only on driver supply and ride demand.


Is this new?

The concept itself isn’t new. Dynamic pricing has existed for decades, but digital surveillance has made it far more sophisticated. In the early 2000s, Amazon experimented with varying prices for DVDs based on browsing data, sparking backlash from consumers who discovered the differences. Similarly, the UK’s Norwich Union once used satellite tracking for a “Pay As You Drive” car insurance model, which was discontinued after privacy concerns.


The future of pricing

Today’s combination of big data and AI allows retailers to create precise, individualised pricing models that adjust instantly. Experts warn this could undermine fair competition, reduce transparency, and widen inequality between consumers. Regulators like the FTC are now studying these systems closely to understand their impact on market fairness and consumer privacy.

For shoppers, awareness is key. Comparing prices across devices, clearing cookies, and using privacy tools can help reduce personal data tracking. As AI continues to shape how businesses price their products, understanding surveillance pricing is becoming essential for protecting both your privacy and your wallet.


FTC Launches Formal Investigation into AI Companion Chatbots

 

The Federal Trade Commission has announced a formal inquiry into companies that develop AI companion chatbots, focusing specifically on how these platforms potentially harm children and teenagers. While not currently tied to regulatory action, the investigation seeks to understand how companies "measure, test, and monitor potentially negative impacts of this technology on children and teens". 

Companies under scrutiny 

Seven major technology companies have been selected for the investigation: Alphabet (Google's parent company), Character Technologies (creator of Character.AI), Meta, Instagram (Meta subsidiary), OpenAI, Snap, and X.AI. These companies are being asked to provide comprehensive information about their AI chatbot operations and safety measures. 

Investigation scope 

The FTC is requesting detailed information across several key areas. Companies must explain how they develop and approve AI characters, including their processes for "monetizing user engagement". Data protection practices are also under examination, particularly how companies safeguard underage users and ensure compliance with the Children's Online Privacy Protection Act Rule.

Motivation and concerns 

Although the FTC hasn't explicitly stated its investigation's motivation, FTC Commissioner Mark Meador referenced troubling reports from The New York Times and Wall Street Journal highlighting "chatbots amplifying suicidal ideation" and engaging in "sexually-themed discussions with underage users". Meador emphasized that if violations are discovered, "the Commission should not hesitate to act to protect the most vulnerable among us". 

Broader regulatory landscape 

This investigation reflects growing regulatory concern about AI's immediate negative impacts on privacy and health, especially as long-term productivity benefits remain uncertain. The FTC's inquiry isn't isolated: the Texas Attorney General has already launched a separate investigation into Character.AI and Meta AI Studio, examining similar concerns about data privacy and chatbots falsely presenting themselves as mental health professionals.

Implications

The investigation represents a significant regulatory response to emerging AI safety concerns, particularly regarding vulnerable populations. As AI companion technology proliferates, this inquiry may establish important precedents for industry oversight and child protection standards in the AI sector.

Disney to Pay $10 Million Fine in FTC Settlement Over Child Data Collection on YouTube

 

Disney has agreed to pay a $10 million penalty to resolve allegations brought by the Federal Trade Commission (FTC) that it unlawfully collected personal data from young viewers on YouTube without securing parental consent. Federal law under the Children’s Online Privacy Protection Act (COPPA) requires parental approval before companies can gather data from children under the age of 13.

The case, filed by the U.S. Department of Justice on behalf of the FTC, accused Disney Worldwide Services Inc. and Disney Entertainment Operations LLC of failing to comply with COPPA by not properly labeling Disney videos on YouTube as “Made for Kids.” This mislabeling allegedly allowed the company to collect children’s data for targeted advertising purposes. 

“This case highlights the FTC’s commitment to upholding COPPA, which ensures that parents, not corporations, control how their children’s personal information is used online,” said FTC Chair Andrew N. Ferguson in a statement. 

As part of the settlement, Disney will pay a $10 million civil penalty and implement stricter mechanisms to notify parents and obtain consent before collecting data from underage users. The company will also be required to establish a panel to review how its YouTube content is designated. According to the FTC, these measures are intended to reshape how Disney manages child-directed content on the platform and to encourage the adoption of age verification technologies. 

The complaint explained that Disney opted to designate its content at the channel level rather than individually marking each video as “Made for Kids” or “Not Made for Kids.” This approach allegedly enabled the collection of data from child-directed videos, which YouTube then used for targeted advertising. Disney reportedly received a share of the ad revenue and, in the process, exposed children to age-inappropriate features such as autoplay.  

The FTC noted that YouTube first introduced mandatory labeling requirements for creators, including Disney, in 2019 following an earlier settlement over COPPA violations. Despite these requirements, Disney allegedly continued mislabeling its content, undermining parental safeguards. 

“The order penalizes Disney’s abuse of parental trust and sets a framework for protecting children online through mandated video review and age assurance technology,” Ferguson added. 

The settlement arrives alongside an unrelated investigation launched earlier this year by the Federal Communications Commission (FCC) into alleged hiring practices at Disney and its subsidiary ABC. While separate, the two cases add to the regulatory pressure the entertainment giant is facing. 

The Disney case underscores growing scrutiny of how major media and technology companies handle children’s privacy online, particularly as regulators push for stronger safeguards in digital environments where young audiences are most active.

FTC Stops Data Brokers from Unlawful User Location Tracking



Data Brokers Accused of Illegal User Tracking

The US Federal Trade Commission (FTC) has filed actions against two US-based data brokers for allegedly engaging in illegal tracking of users' location data. The data was reportedly used to trace individuals in sensitive locations such as hospitals, churches, military bases, and other protected areas. It was then sold for purposes including advertising, political campaigns, immigration enforcement, and government use.

Allegations Against Mobilewalla

Georgia-based data broker Mobilewalla has been accused of tracking residents of domestic abuse shelters and protesters during the George Floyd demonstrations in 2020. According to the FTC, Mobilewalla allegedly attempted to identify protesters’ racial identities by tracing their smartphones. The company’s actions raise serious privacy and ethical concerns.

Accusations Against Gravy Analytics and Venntel

The FTC also accuses Gravy Analytics and its subsidiary Venntel of collecting and using customer location data without consent. Reports indicate they used this data to “unfairly infer health decisions and religious beliefs,” as highlighted by TechCrunch. These actions have drawn criticism for their potential to exploit sensitive personal information.

Unlawful Data Collection Practices

The FTC revealed that Gravy Analytics collected over 17 billion location signals from more than a billion smartphones daily. The data was allegedly sold to federal law enforcement agencies such as the Drug Enforcement Administration (DEA), the Department of Homeland Security (DHS), and the Federal Bureau of Investigation (FBI).

Samuel Levine, Director of the FTC’s Bureau of Consumer Protection, stated, “Surreptitious surveillance by data brokers undermines our civil liberties and puts servicemembers, union workers, religious minorities, and others at risk. This is the FTC’s fourth action this year challenging the sale of sensitive location data, and it’s past time for the industry to get serious about protecting Americans’ privacy.”

FTC's Settlements

As part of two settlements announced by the FTC, Mobilewalla and Gravy Analytics will cease collecting sensitive location data from customers. They are also required to delete the historical data they have amassed about millions of Americans over time.

The settlements mandate that the companies establish a sensitive location data program to identify protected locations and restrict the tracking and disclosure of customer information collected there. These areas include religious organizations, medical facilities, schools, and other sensitive sites.

Additionally, the FTC’s order requires the companies to maintain a supplier assessment program to ensure consumers have provided consent for the collection and use of data that reveals their precise location or mobile device information.
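To give a rough sense of what such a program could look like in practice, here is a minimal, purely hypothetical Python sketch of a filter that drops location pings falling within a buffer around a list of protected sites. The coordinates, radius, and site list are invented for illustration; the FTC orders define the obligation, not this implementation.

```python
# Hypothetical sketch of a "sensitive location" filter for location data.
# Site coordinates and the 500 m buffer are illustrative assumptions.
from math import radians, sin, cos, asin, sqrt

SENSITIVE_SITES = [          # (latitude, longitude) of protected sites (assumed)
    (38.8977, -77.0365),     # e.g. a medical facility
    (40.7128, -74.0060),     # e.g. a place of worship
]
BUFFER_METERS = 500          # assumed exclusion radius around each site

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def keep_ping(lat: float, lon: float) -> bool:
    """Return False when a ping lands inside any protected buffer."""
    return all(haversine_m(lat, lon, s_lat, s_lon) > BUFFER_METERS
               for s_lat, s_lon in SENSITIVE_SITES)

pings = [(38.8975, -77.0360), (38.9100, -77.0400)]   # raw location signals
filtered = [p for p in pings if keep_ping(*p)]       # drop pings near protected sites
print(filtered)                                      # -> [(38.91, -77.04)]
```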

Protect Yourself from Phishing Scams Involving Personal Data and Bitcoin Demands

 

A new phishing scam is emerging, where hackers send threatening emails to people with personal details like images of their homes and addresses. This scam tricks recipients into believing their privacy is compromised, urging them to pay money or Bitcoin to avoid exposure. According to cyber expert Al Iverson, scammers often use public sources like Google Maps and data from previous breaches to craft these threatening messages. He recommends confirming any images on Google Maps and checking email legitimacy to ensure the message isn’t a scam. 

One victim, Jamie Beckland, shared his experience, revealing that the scammers falsely claimed to have video evidence from spyware on his computer. Beckland, like others, was targeted with demands for Bitcoin in exchange for silence. Fortunately, by cross-referencing the address and photo in the email with Google Maps, he realized the threat wasn’t credible. To avoid falling for such scams, it’s critical to scrutinize email addresses and domains. Iverson advises checking SPF, DKIM, and DMARC results, which help verify the sender’s legitimacy. Scammers often spoof email addresses, making them appear familiar, but most don’t actually have access to sensitive data—they’re simply trying to scare people into paying. 
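For readers who want to check those results themselves, the short Python sketch below (standard library only) reads a suspicious message saved as a .eml file and prints its From, Return-Path, and Authentication-Results headers, flagging any SPF, DKIM, or DMARC failure. The file name is an assumption; most webmail providers expose the same verdicts through a "show original" or "view source" option.

```python
# Hypothetical helper for inspecting SPF/DKIM/DMARC verdicts on a suspicious
# email saved as a .eml file. It only reads the headers the receiving mail
# server already added; it does not re-verify signatures itself.
from email import policy
from email.parser import BytesParser

def summarize_auth_results(eml_path: str) -> None:
    with open(eml_path, "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)

    print("From:", msg["From"])                # the address scammers spoof
    print("Return-Path:", msg["Return-Path"])  # often exposes the real sender

    # Providers such as Gmail or Outlook record SPF/DKIM/DMARC verdicts here.
    for header in msg.get_all("Authentication-Results", []):
        print("Authentication-Results:", header)
        for check in ("spf", "dkim", "dmarc"):
            if f"{check}=fail" in str(header).lower():
                print(f"  -> {check.upper()} failed: treat this message as suspect")

summarize_auth_results("suspicious_message.eml")  # file name is an assumption
```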

Zarik Megerdichian, founder of Loop8, strongly warns against clicking any unfamiliar links in these emails, especially those related to payments. Bitcoin and similar transactions are irreversible, making it crucial to avoid engaging with scammers. If you suspect financial information is at risk, Megerdichian advises reporting the incident to the Federal Trade Commission (FTC) and closely monitoring your accounts. Yashin Manraj, CEO of Pvotal Technologies, recommends changing passwords immediately if you suspect your data has been compromised. Moving sensitive accounts to a new email address can provide added protection. He also suggests notifying law enforcement, including the FBI, while ensuring that family members are informed of the scam to prevent further risks.

Lastly, Manraj emphasizes that you should never engage with scammers. Responding to emails only increases your vulnerability, adding your information to target databases. To further protect yourself, isolating your home network, using a VPN, and avoiding public forums for help are essential steps in safeguarding your information from potential future attacks. These phishing scams, though threatening, rely on fear and manipulation. By taking steps to verify email legitimacy, securing your accounts, and staying cautious, you can avoid falling victim to these tactics.

The Rising Threat of Payment Fraud: How It Impacts Businesses and Ways to Counter It

 

Payment fraud continues to be a significant and evolving threat to businesses, undermining their profitability and long-term sustainability. The FBI reports that between 2013 and 2022, companies lost around $50 billion to business email compromise, showing how prevalent this issue is. In 2022 alone, 80% of enterprises faced at least one payment fraud attempt, with 30% of affected businesses unable to recover their losses. These attacks can take various forms, from email interception to more advanced methods like deepfakes and impersonation scams.

Cybercriminals exploit vulnerabilities, manipulating legitimate transactions to steal funds, often without immediate detection. Financial losses from payment fraud can be devastating, impacting a company’s ability to pay suppliers, employees, or even invest in growth opportunities. Investigating such incidents can be time-consuming and costly, further straining resources and leading to operational disruptions. Departments like finance, IT, and legal must shift focus to tackle the issue, slowing down core business activities. For example, time spent addressing fraud issues can cause delays in projects, damage employee morale, and disrupt customer services, affecting overall business performance. 

Beyond financial impact, payment fraud can severely damage a company’s reputation. Customers and partners may lose trust if they feel their financial information isn’t secure, leading to lost sales, canceled contracts, or difficulty attracting new clients. Even a single fraud incident can have long-lasting effects, making it difficult to regain public confidence. Businesses also face legal and regulatory consequences when payment fraud occurs, especially if they have not implemented adequate protective measures. Non-compliance with data protection regulations like the General Data Protection Regulation (GDPR) or penalties from the Federal Trade Commission (FTC) can lead to fines and legal actions, causing additional financial strain. Payment fraud not only disrupts daily operations but also poses a threat to a company’s future. 

End-to-end visibility across payment processes, AI-driven fraud detection systems, and regular security audits are essential to prevent attacks and build resilience. Companies that invest in these technologies and foster a culture of vigilance are more likely to avoid significant losses. Staff training on recognizing potential threats and improving security measures can help businesses stay one step ahead of cybercriminals. Mitigating payment fraud requires a proactive approach, ensuring businesses are prepared to respond effectively if an attack occurs. 

By investing in advanced fraud detection systems, conducting frequent audits, and adopting comprehensive security measures, organizations can minimize risks and safeguard their financial health. This preparation helps prevent financial loss, operational disruption, reputational damage, and legal consequences, thereby ensuring long-term resilience and sustainability in today’s increasingly digital economy.

Social Media Content Fueling AI: How Platforms Are Using Your Data for Training

 

OpenAI has admitted that developing ChatGPT would not have been feasible without the use of copyrighted content to train its algorithms. It is widely known that artificial intelligence (AI) systems heavily rely on social media content for their development. In fact, AI has become an essential tool for many social media platforms.

For instance, LinkedIn is now using its users’ resumes to fine-tune its AI models, while Snapchat has indicated that if users engage with certain AI features, their content might appear in advertisements. Despite this, many users remain unaware that their social media posts and photos are being used to train AI systems.

Social Media: A Prime Resource for AI Training

AI companies aim to make their models as natural and conversational as possible, with social media serving as an ideal training ground. The content generated by users on these platforms offers an extensive and varied source of human interaction. Social media posts reflect everyday speech and provide up-to-date information on global events, which is vital for producing reliable AI systems.

However, it's important to recognize that AI companies are utilizing user-generated content for free. Your vacation pictures, birthday selfies, and personal posts are being exploited for profit. While users can opt out of certain services, the process varies across platforms, and there is no assurance that your content will be fully protected, as third parties may still have access to it.

How Social Platforms Are Using Your Data

Recently, the United States Federal Trade Commission (FTC) revealed that social media platforms are not effectively regulating how they use user data. Major platforms have been found to use personal data for AI training purposes without proper oversight.

For example, LinkedIn has stated that user content can be utilized by the platform or its partners, though they aim to redact or remove personal details from AI training data sets. Users can opt out by navigating to their "Settings and Privacy" under the "Data Privacy" section. However, opting out won’t affect data already collected.

Similarly, the platform formerly known as Twitter, now X, has been using user posts to train its chatbot, Grok. Elon Musk’s social media company has confirmed that its AI startup, xAI, leverages content from X users and their interactions with Grok to enhance the chatbot’s ability to deliver “accurate, relevant, and engaging” responses. The goal is to give the bot a more human-like sense of humor and wit.

To opt out of this, users need to visit the "Data Sharing and Personalization" tab in the "Privacy and Safety" settings. Under the “Grok” section, they can uncheck the box that permits the platform to use their data for AI purposes.

Regardless of the platform, users need to stay vigilant about how their online content may be repurposed by AI companies for training. Always review your privacy settings to ensure you’re informed and protected from unintended data usage by AI technologies.