Concerns around Meta’s AI-enabled smart glasses are intensifying after reports suggested that human reviewers may have accessed sensitive user recordings, raising broader questions about privacy, consent, and data protection.
Online discussions have surged, with users expressing alarm over how much data may be visible to the company. Some individuals on forums have claimed that recorded footage could be manually reviewed to train artificial intelligence systems, while others raised concerns about the use of such devices in sensitive environments like healthcare settings, where patient information could be unintentionally exposed.
What triggered the controversy?
The debate gained momentum following an investigation by Swedish media outlets, which reported that contractors working at external facilities were tasked with reviewing video recordings captured through Ray-Ban Meta Smart Glasses. According to these findings, some of the reviewed material included highly sensitive content.
The issue has since drawn regulatory attention in multiple regions. Authorities in the United Kingdom, including the Information Commissioner's Office, have sought clarification on how such user data is processed. In the United States, the controversy has also led to legal action against Meta Platforms, with allegations that consumers were not adequately informed about the device’s privacy safeguards.
The timing is significant, as smart glasses are rapidly gaining popularity. Legal filings suggest that more than seven million units were sold in 2025 alone. Unlike smartphones, these glasses resemble regular eyewear but can discreetly capture images, audio, and video from the wearer’s perspective, often without others being aware.
Why are experts concerned?
Legal analysts highlight that such practices could conflict with India’s Digital Personal Data Protection Act, 2023 if data involving Indian individuals is collected.
According to legal experts, consent remains a foundational requirement. Any access to recordings involving identifiable individuals must be based on informed approval. If footage is reviewed without the knowledge or permission of those captured, it could constitute a violation of Indian data protection law.
Beyond legality, specialists argue that wearable AI devices introduce a deeper structural issue. Unlike traditional data collection methods, these tools continuously capture real-world environments, making it difficult to define clear boundaries for data usage.
Experts also point out that although Meta includes visible indicators such as LED lights to signal recording, these measures do not fully address how the data of bystanders is processed. There are concerns about the absence of strict limitations on why such data is collected or how much of it is retained.
Additionally, outsourcing the review of user-generated content introduces further complications. Apart from the risk of misuse or unauthorized sharing, there are also ethical concerns regarding the working conditions and psychological impact on individuals tasked with reviewing potentially distressing material.
Cross-border and systemic risks
Another key concern is international data handling. If recordings involving Indian users are accessed by contractors located overseas, companies are still expected to maintain the same standards of security and confidentiality required under Indian regulations.
Experts emphasize that these devices are part of a much larger artificial intelligence ecosystem. Data captured through smart glasses is not simply stored. It may be uploaded to cloud servers, processed by machine learning systems, and in some cases, reviewed by humans to improve system performance. This creates a chain of data handling where highly personal information, including facial features, voices, surroundings, and behavioral patterns, may circulate beyond the user’s direct control.
What is Meta’s response?
Meta has stated that protecting user data remains a priority and that it continues to refine its systems to improve privacy protections. The company has explained that its smart glasses are designed to provide hands-free AI assistance, allowing users to interact with their surroundings more efficiently.
It also acknowledged that, in certain cases, human reviewers may be involved in evaluating shared content to enhance system performance. According to the company, such processes are governed by its privacy policies and include steps intended to safeguard user identity, such as automated filtering techniques like face blurring.
However, reports citing Swedish publications suggest that these safeguards may not always function consistently, with some instances where identifiable details remain visible.
While recording must be actively initiated by the user, either manually or through voice commands, experts note that many users may not fully understand that their captured content could be subject to human review.
The Ripple Effect
This controversy reflects a wider shift in how personal data is generated and processed in the age of AI-driven wearables. Unlike earlier technologies, smart glasses operate in real time and in shared environments, raising complex questions about consent not just for users, but for everyone around them.
As adoption accelerates, regulators worldwide are likely to tighten scrutiny of such devices. The challenge for companies will be to balance innovation with transparent data practices, especially as public awareness around digital privacy continues to rise.
For users, this is a wake-up call not to trust new technology blindly and to recognize that convenience-driven technologies often come with hidden trade-offs, particularly when it comes to control over personal data.
Meta Platforms has confirmed that it will remove support for end-to-end encrypted messaging in Instagram direct messages beginning May 8, 2026. After this date, conversations that previously relied on this encryption feature will no longer be protected by the same privacy mechanism.
According to guidance published in the platform’s support documentation, users whose conversations are affected will receive instructions explaining how to download messages or media files they want to retain. In some situations, individuals may also need to install the latest version of the Instagram application before they can export their chat history.
When asked about the decision, Meta stated that encrypted messaging on Instagram saw limited adoption. The company explained that only a small percentage of users chose to enable end-to-end encryption within Instagram direct messages. Meta also pointed out that people who want encrypted communication can still use the feature on WhatsApp, where end-to-end encryption is already widely used.
How Instagram Encryption Was Introduced
Instagram’s encrypted messaging capability was originally introduced as part of a broader push by Meta to transform its messaging ecosystem. In 2021, Meta CEO Mark Zuckerberg outlined a “privacy-focused” strategy for social networking that aimed to shift communication toward private and secure messaging environments.
Within that initiative, Meta began experimenting with encrypted direct messages on Instagram. However, the feature never became the default setting for users. Instead, it remained an optional capability available only in certain regions and had to be manually activated within specific conversations.
The tool also gained relevance during geopolitical tensions. Shortly after the outbreak of the Russia-Ukraine conflict in early 2022, Meta expanded access to encrypted direct messages for adult users in both Russia and Ukraine. The company said the move was intended to provide safer communication channels during the early phase of the war.
Industry Debate Over Encrypted Messaging
The decision to discontinue Instagram’s encrypted chats comes amid a broader debate in the technology sector about whether strong encryption improves or complicates online safety.
Recently, the social media platform TikTok said it currently has no plans to introduce end-to-end encryption for its messaging system. The company told the BBC that such technology could reduce its ability to monitor harmful activity and protect younger users from abuse.
End-to-end encryption is widely regarded by cybersecurity experts as one of the strongest ways to secure digital communication. When this technology is used, messages are encrypted on the sender’s device and can only be decrypted by the recipient. This means that even the platform hosting the conversation cannot read the message contents during transmission.
Because of this design, encrypted systems can protect users from surveillance, data interception, or unauthorized access by third parties. Many messaging services, including WhatsApp and Signal, rely on similar encryption models to secure billions of conversations globally.
Law Enforcement Concerns
Despite its privacy advantages, encryption has long been controversial among law enforcement agencies and child-safety advocates. Critics argue that encrypted messaging makes it harder for technology companies to detect criminal behavior such as terrorism recruitment or the distribution of child sexual abuse material.
Authorities describe this challenge as the “Going Dark” problem, referring to situations where investigators cannot access message content even when they obtain legal warrants. Policymakers have repeatedly warned that widespread encryption could reduce the ability of platforms to cooperate with criminal investigations.
Internal documents previously reported by Reuters indicated that some Meta executives had raised similar concerns internally. In discussions dating back to 2019, company officials warned that widespread encryption could limit the company’s ability to identify and report illegal activity to law enforcement authorities.
Regulatory Pressure and Future Policy
The global policy debate around encryption is still evolving. The European Commission is expected to release a technology roadmap on encryption later this year. The initiative aims to explore ways to allow lawful access to encrypted data for investigators while preserving cybersecurity protections and civil liberties.
A Changing Messaging Strategy
Meta’s decision to remove encrypted messaging from Instagram highlights the complex trade-offs technology companies face when balancing privacy protections with safety monitoring and regulatory expectations.
While encryption remains a cornerstone of messaging on WhatsApp and has expanded across other platforms, the rollback on Instagram suggests that adoption rates, platform design, and policy pressures can influence whether such security features remain viable.
For Instagram users who relied on encrypted chats, the upcoming change means reviewing conversations before May 2026 and exporting any information they wish to keep before the feature is officially retired.
Meta announced that it has removed more than 150,000 accounts tied to organized scam centers operating in Southeast Asia, describing the move as part of a large international effort to disrupt coordinated online fraud networks.
The enforcement action was carried out with assistance from authorities in several countries. Law enforcement agencies and government partners involved in the operation included officials from Thailand, the United States, the United Kingdom, Canada, South Korea, Japan, Singapore, the Philippines, Australia, New Zealand, and Indonesia. According to Meta, the joint effort resulted in 21 individuals being arrested by the Royal Thai Police.
This latest crackdown builds on an earlier pilot initiative launched in December 2025. During that initial phase, Meta removed approximately 59,000 accounts, Pages, and Groups from its platforms that were connected to similar fraudulent activity. The earlier investigation also led to the issuance of six arrest warrants by authorities.
In a statement explaining the action, Meta said that online scams have grown increasingly complex and organized over recent years. Criminal networks, often operating from countries such as Cambodia, Myanmar, and Laos, have established large scam compounds that function in many ways like organized business operations. These groups typically use structured teams, scripted communication strategies, and digital tools designed to evade detection while targeting victims on a global scale. According to the company, the impact of such scams extends far beyond financial loss, as they can severely disrupt lives and weaken trust in digital communication platforms.
Alongside the enforcement action, Meta also announced several new safety features aimed at helping users identify and avoid scam attempts.
One of these tools introduces new warning messages on Facebook that notify users when they receive communication from accounts that display characteristics commonly linked to fraudulent activity. Another safeguard has been introduced on WhatsApp to address a tactic used by scammers who attempt to persuade users to scan a QR code. If successful, this method can link the attacker’s device to the victim’s WhatsApp account, allowing them to access messages and impersonate the account holder. Meta said its system will now notify users when suspicious device-linking requests are detected.
The company is also expanding scam detection on Messenger. When a conversation with a new contact begins to resemble known fraud patterns, such as questionable job opportunities or requests that appear unusual, the platform may prompt users to share recent messages so that an artificial intelligence system can evaluate whether the interaction matches known scam behavior.
Meta also disclosed broader enforcement statistics related to scams on its platforms. Throughout 2025, the company removed more than 159 million advertisements that violated its policies related to fraud and deception. In addition, it disabled approximately 10.9 million Facebook and Instagram accounts that investigators linked to organized scam centers.
To further address fraudulent activity, the company said it plans to expand its advertiser verification program. The goal of this measure is to increase transparency by confirming the identities of advertisers and reducing the ability of malicious actors to misrepresent themselves while running advertisements.
The announcement comes at a time when governments are intensifying efforts to address online fraud. The UK Government recently introduced a new Online Crime Centre designed to focus specifically on cybercrime, including scams connected to organized fraud operations operating in regions such as Southeast Asia, West Africa, Eastern Europe, India, and China.
The centre will bring together specialists from several sectors, including government agencies, law enforcement, intelligence services, financial institutions, mobile network providers, and major technology companies. The initiative is expected to begin operations next month.
The project forms part of the United Kingdom’s broader Fraud Strategy 2026–2029, a policy framework aimed at strengthening the country’s response to fraud and financial crime. As part of this strategy, authorities plan to use artificial intelligence to detect emerging scam patterns, identify suspicious bank transfers more quickly, and deploy “scam-baiting” chatbots designed to interact with fraudsters in order to gather intelligence.
Officials said the new centre, supported by more than £30 million in funding, will focus on identifying the digital infrastructure used by organized crime groups. This includes tracking fraudulent accounts, websites, and phone numbers used in scam operations. Authorities aim to shut down these resources at scale by blocking scam messages, freezing financial accounts linked to criminal activity, removing fraudulent social media profiles, and disrupting scam networks at their source.
Although advanced spyware attacks do not affect most smartphone users, cybersecurity researchers stress that awareness is essential as these tools continue to spread globally. Even individuals who are not public figures are advised to remain cautious.
In December, hundreds of iPhone and Android users received official threat alerts stating that their devices had been targeted by spyware. Shortly after these notifications, Apple and Google released security patches addressing vulnerabilities that experts believe were exploited to install the malware on a small number of phones.
Spyware poses an extreme risk because it allows attackers to monitor nearly every activity on a smartphone. This includes access to calls, messages, keystrokes, screenshots, notifications, and even encrypted platforms such as WhatsApp and Signal. Despite its intrusive capabilities, spyware is usually deployed in targeted operations against journalists, political figures, activists, and business leaders in sensitive industries.
High-profile cases have demonstrated the seriousness of these attacks. Former Amazon chief executive Jeff Bezos and Hanan Elatr, the wife of murdered Saudi dissident Jamal Khashoggi, were both compromised through Pegasus spyware developed by the NSO Group. These incidents illustrate how personal data can be accessed without user awareness.
Spyware activity remains concentrated within these circles, but researchers suggest its reach may be expanding. In early December, Google issued threat notifications and disclosed findings showing that an exploit chain had been used to silently install Predator spyware. Around the same time, the U.S. Cybersecurity and Infrastructure Security Agency warned that attackers were actively exploiting mobile messaging applications using commercial surveillance tools.
One of the most dangerous techniques involved is known as a zero-click attack. In such cases, a device can be infected without the user clicking a link, opening a message, or downloading a file. According to Malwarebytes researcher Pieter Arntz, once infected, attackers can read messages, track keystrokes, capture screenshots, monitor notifications, and access banking applications. Rocky Cole of iVerify adds that spyware can also extract emails and texts, steal credentials, send messages, and access cloud accounts.
Spyware may also spread through malicious links, fake applications, infected images, browser vulnerabilities, or harmful browser extensions. Recorded Future’s Richard LaTulip notes that recent research into malicious extensions shows how tools that appear harmless can function as surveillance mechanisms. These methods, often associated with nation-state actors, are designed to remain hidden and persistent.
Governments and spyware vendors frequently claim such tools are used only for law enforcement or national security. However, Amnesty International researcher Rebecca White says that journalists, activists, and others have been unlawfully targeted worldwide, with spyware used as a method of repression. Thai activist Niraphorn Onnkhaow was targeted multiple times during pro-democracy protests between 2020 and 2021, eventually withdrawing from activism due to fears her data could be misused.
Detecting spyware is challenging. Devices may show subtle signs such as overheating, performance issues, or unexpected camera or microphone activation. Official threat alerts from Apple, Google, or Meta should be treated seriously. Leaked private information can also indicate compromise.
To reduce risk, Apple offers Lockdown Mode, which limits certain functions to reduce attack surfaces. Apple security executive Ivan Krstić states that widespread iPhone malware has not been observed outside mercenary spyware campaigns. Apple has also introduced Memory Integrity Enforcement, an always-on protection designed to block memory-based exploits.
Google provides Advanced Protection for Android, enhanced in Android 16 with intrusion logging, USB safeguards, and network restrictions.
Experts recommend avoiding unknown links, limiting app installations, keeping devices updated, avoiding sideloading, and restarting phones periodically. However, confirmed infections often require replacing the device entirely. Organizations such as Amnesty International, Access Now, and Reporters Without Borders offer assistance to individuals who believe they have been targeted.
Security specialists advise staying cautious without allowing fear to disrupt normal device use.
Instagram has firmly denied claims of a new data breach following reports that personal details linked to more than 17 million accounts are being shared across online forums. The company stated that its internal systems were not compromised and that user accounts remain secure.
The clarification comes after concerns emerged around a technical flaw that allowed unknown actors to repeatedly trigger password reset emails for Instagram users. Meta, Instagram’s parent company, confirmed that this issue has been fixed. According to the company, the flaw did not provide access to accounts or expose passwords. Users who received unexpected reset emails were advised to ignore them, as no action is required.
Public attention intensified after cybersecurity alerts suggested that a large dataset allegedly connected to Instagram accounts had been released online. The data, which was reportedly shared without charge on several hacking forums, was claimed to have been collected through an unverified Instagram API vulnerability dating back to 2024.
The dataset is said to include information from over 17 million profiles. The exposed details reportedly vary by record and include usernames, internal account IDs, names, email addresses, phone numbers, and, in some cases, physical addresses. Analysis of the data shows that not all records contain complete personal details, with some entries listing only basic identifiers such as a username and account ID.
Researchers discussing the incident on social media platforms have suggested that the data may not be recent. Some claim it could originate from an older scraping incident, possibly dating back to 2022. However, no technical evidence has been publicly provided to support these claims. Meta has also stated that it has no record of Instagram API breaches occurring in either 2022 or 2024.
Instagram has previously dealt with scraping-related incidents. In one earlier case, a vulnerability allowed attackers to collect and sell personal information associated with millions of accounts. Due to this history, cybersecurity experts believe the newly surfaced dataset could be a collection of older information gathered from multiple sources over several years, rather than the result of a newly discovered vulnerability.
Attempts to verify the origin of the data have so far been unsuccessful. The individual responsible for releasing the dataset did not respond to requests seeking clarification on when or how the information was obtained.
At present, there is no confirmation that this situation represents a new breach of Instagram’s systems. No evidence has been provided to demonstrate that the data was extracted through a recently exploited flaw, and Meta maintains that there has been no unauthorized access to its infrastructure.
While passwords are not included in the leaked information, users are still urged to remain cautious. Such datasets are often used in phishing emails, scam messages, and social engineering attacks designed to trick individuals into revealing additional information.
Users who receive password reset emails or login codes they did not request should delete them and take no further action. Enabling two-factor authentication is strongly recommended, as it provides an added layer of security against unauthorized access attempts.
Facebook is testing a new policy that places restrictions on how many external links certain users can include in their posts. The change, which is currently being trialled on a limited basis, introduces a monthly cap on link sharing unless users pay for a subscription.
Some users in the United Kingdom and the United States have received in-app notifications informing them that they will only be allowed to share a small number of links in Facebook posts without payment. To continue sharing links beyond that limit, users are offered a subscription priced at £9.99 per month.
Meta, the company that owns Facebook, has confirmed the test and described it as limited in scope. According to the company, the purpose is to assess whether the option to post a higher volume of link-based content provides additional value to users who choose to subscribe.
Industry observers say the experiment reflects Meta’s broader effort to generate revenue from more areas of its platforms. Social media analyst Matt Navarra said the move signals a shift toward monetising essential platform functions rather than optional extras.
He explained that the test is not primarily about identity verification. Instead, it places practical features that users rely on for visibility and reach behind a paid tier. In his view, Meta is now charging for what he describes as “survival features” rather than premium add-ons.
Meta already offers a paid service called Meta Verified, which provides subscribers on Facebook and Instagram with a blue verification badge, enhanced account support, and safeguards against impersonation. Navarra said that after attaching a price to these services, Meta now appears to be applying a similar approach to content distribution itself.
He noted that this includes the basic ability to direct users away from Facebook to external websites, a function that creators and businesses depend on to grow audiences, drive traffic, and promote services.
Navarra was among those who received a notification about the test. He said he was informed that from 16 December onward, he would only be able to include two links per month in Facebook posts unless he subscribed.
For creators and businesses, he said the message is clear. If Facebook plays a role in their audience growth or traffic strategy, that access may now require payment. He added that while platforms have been moving in this direction for some time, the policy makes it explicit.
The test comes as social media platforms increasingly encourage users to verify their accounts in exchange for added features or improved engagement. Platforms such as LinkedIn have also adopted similar models.
After acquiring Twitter, now known as X, in 2022, Elon Musk restructured the platform’s verification system. Blue verification badges were made available only to paying users, who also received increased visibility in replies and recommendation feeds.
That approach proved controversial and resulted in regulatory scrutiny, including a fine imposed by European authorities in December. Despite the criticism, Meta later introduced a comparable paid verification model.
Meta has also announced plans to introduce a “community notes” system, similar to X, allowing users to flag potentially misleading posts. This follows reductions in traditional moderation and third-party fact-checking efforts.
According to Meta, the link-sharing test applies only to a selected group of users who operate Pages or use Facebook’s professional mode. These tools are widely used by creators and businesses to publish content and analyse audience engagement.
Navarra said the test highlights a difficult reality for creators. He argued that Facebook is becoming less reliable as a source of external traffic and is increasingly steering users away from treating the platform as a traffic engine.
He added that the experiment reinforces a long-standing pattern. Meta, he said, ultimately designs its systems to serve its own priorities first.
According to analysts, tests like this underline the risks of building a business that depends too heavily on a single platform. Changes to access, visibility, or pricing can occur with little warning, leaving creators and businesses vulnerable.
Meta has emphasized that the policy remains a trial. However, the experiment illustrates how social media companies continue to reassess which core functions remain free and which are moving behind paywalls.
Meta has started taking down accounts belonging to Australians under 16 on Instagram, Facebook and Threads, beginning a week before Australia’s new age-restriction law comes into force. The company recently alerted users it believes are between 13 and 15 that their profiles would soon be shut down, and the rollout has now begun.
Current estimates suggest that hundreds of thousands of accounts will be affected across Meta’s platforms. Since Threads operates through Instagram credentials, any underage Instagram account will also lose access to Threads.
Australia’s new policy, which becomes fully active on 10 December, prevents anyone under 16 from holding an account on major social media sites. This law is the first of its kind globally. Platforms that fail to take meaningful action can face penalties reaching up to 49.5 million Australian dollars. The responsibility to monitor and enforce this age limit rests with the companies, not parents or children.
A Meta spokesperson explained that following the new rules will require ongoing adjustments, as compliance involves several layers of technology and review. The company has argued that the government should shift age verification to app stores, where users could verify their age once when downloading an app. Meta claims this would reduce the need for children to repeatedly confirm their age across multiple platforms and may better protect privacy.
Before their accounts are removed, underage users can download and store their photos, videos and messages. Those who believe Meta has made an incorrect assessment can request a review and prove their age by submitting government identification or a short video-based verification.
The new law affects a wide list of services, including Facebook, Instagram, Snapchat, TikTok, Threads, YouTube, X, Reddit, Twitch and Kick. However, platforms designed for younger audiences or tools used primarily for education, such as YouTube Kids, Google Classroom and messaging apps like WhatsApp, are not included. Authorities have also been examining whether children are shifting to lesser-known apps, and companies behind emerging platforms like Lemon8 and Yope have already begun evaluating whether they fall under the new rules.
Government officials have stated that the goal is to reduce children’s exposure to harmful online material, which includes violent content, misogynistic messages, eating disorder promotion, suicide-related material and grooming attempts. A national study reported that the vast majority of children aged 10 to 15 use social media, with many encountering unsafe or damaging content.
Critics, however, warn that age verification tools may misidentify users, create privacy risks or fail to stop determined teenagers from using alternative accounts. Others argue that removing teens from regulated platforms might push them toward unmonitored apps, reducing online safety rather than improving it.
Australian authorities expect challenges in the early weeks of implementation but maintain that the long-term goal is to reduce risks for the youngest generation of online users.
The European Commission has accused Meta Platforms, the parent company of Facebook and Instagram, of violating the European Union’s Digital Services Act (DSA) by making it unnecessarily difficult for users to report illegal online content and challenge moderation decisions.
In its preliminary findings, the Commission said both platforms lack a user-friendly “Notice and Action” system — the mechanism that allows people to flag unlawful material such as child sexual abuse content or terrorist propaganda. Regulators noted that users face multiple steps and confusing options before they can file a report. The Commission also claimed that Meta’s interface relies on “dark patterns”, which are design features that subtly discourage users from completing certain actions, such as submitting reports.
According to the Commission, Meta’s appeal process also falls short of DSA requirements. The current system allegedly prevents users from adding explanations or submitting supporting evidence when disputing a moderation decision. This, the regulator said, limits users’ ability to express why they believe a decision was unfair and weakens the overall transparency of Meta’s content moderation practices.
The European Commission’s findings are not final, and Meta has the opportunity to respond before any enforcement action is taken. If the Commission confirms these violations, it could issue a non-compliance decision, which may result in penalties of up to 6 percent of Meta’s global annual revenue. The Commission may also impose recurring fines until the company aligns its operations with EU law.
Meta, in a public statement, said it “disagrees with any suggestion” that it breached the DSA. The company stated that it has already made several updates to comply with the law, including revisions to content reporting options, appeals procedures, and data access tools.
The European Commission also raised similar concerns about TikTok, saying that both companies have limited researchers’ access to public data on their platforms. The DSA requires large online platforms to provide sufficient data access so independent researchers can analyze potential harms — for example, whether minors are exposed to illegal or harmful content. The Commission’s review concluded that the data-access tools of Facebook, Instagram, and TikTok are burdensome and leave researchers with incomplete or unreliable datasets, which hinders academic and policy research.
TikTok responded that it has provided data to almost 1,000 research teams and remains committed to transparency. However, the company noted that the DSA’s data-sharing obligations sometimes conflict with the General Data Protection Regulation (GDPR), making it difficult to comply with both laws simultaneously. TikTok urged European regulators to offer clarity on how these two frameworks should be balanced.
Beyond Europe, the investigation may strain relations with the United States. American officials have previously criticized the EU for imposing regulatory burdens on U.S.-based tech firms. U.S. FTC Chairman Andrew Ferguson recently warned companies that censoring or modifying content to satisfy foreign governments could violate U.S. law. Former President Donald Trump has also expressed opposition to EU digital rules and even threatened tariffs against countries enforcing them.
For now, the Commission’s investigation continues. If confirmed, the case could set a major precedent for how global social media companies manage user safety, transparency, and accountability under Europe’s strict online governance laws.
Meta’s Instagram, WhatsApp, and Facebook have once again been flagged as the most privacy-violating social media apps. According to Incogni’s Social Media Privacy Ranking report 2025, Meta and TikTok sit at the bottom of the list. Elon Musk’s X (formerly Twitter) has also received poor rankings, though it outperformed Meta in a few categories.
The report analyzed 15 of the most widely used social media platforms globally, measuring them against 14 privacy criteria organized into six different categories: AI data use, user control, ease of access, regulatory transgressions, transparency, and data collection. The research methodology focused on how an average user could understand and control privacy policies.
Discord, Pinterest, and Quora performed best in the 2025 ranking. Discord placed first thanks to its stance against handing over user data for AI model training. Pinterest ranked second on the strength of its user controls and fewer regulatory penalties. Quora came third thanks to its limited collection of user data.
But the Meta platforms were penalized heavily in several categories. Facebook drew penalties for frequent regulatory fines, including GDPR fines in Europe and penalties in the US and other regions. Instagram and WhatsApp received heavy penalties due to policies allowing the collection of sensitive personal data, such as sexual orientation and health information.
X was penalized for vast data collection and past privacy fines, but it still ranked above Meta and TikTok in some categories. X was among the easiest platforms to delete accounts from, and it also provided information to government organizations at a lower rate than other platforms. Yet X allows user data to be used for training AI models, which has hurt its overall privacy score.
“One of the core principles motivating Incogni’s research here is the idea that consent to have personal information gathered and processed has to be properly informed to be valid and meaningful. It’s research like this that arms users with not only the facts but also the tools to inform their choices,” Incogni said in its blog.
Meta has rolled out a fresh set of AI-powered tools aimed at helping advertisers design more engaging and personalized promotional content. These new features include the ability to turn images into short videos, brand-focused image generation, AI-powered chat assistants, and tools that enhance shopping experiences within ads.
One of the standout additions is Meta’s video creation feature, which allows businesses to transform multiple images into animated video clips. These clips can include music and text, making it easier for advertisers to produce dynamic visual content without needing video editing skills. Because the videos are short, they’re less likely to appear distorted or lose quality, a common issue in longer AI-generated videos.
Currently, this feature is being tested with select business partners.
Another tool in development is “Video Highlights,” which uses AI to identify the most important parts of a video. Viewers will be able to jump directly to these key scenes, guided by short phrases and image previews chosen by the system. This can help businesses convey their product value more clearly and keep viewers engaged.
Meta is also enhancing its AI image creation tools. Advertisers will now be able to insert their logos and brand colors directly into the images generated by AI. This ensures that their brand identity stays consistent across all marketing content. Additionally, AI-generated ad text can now reflect the personality or style of the brand, offering a more customized tone in promotions.
Another major update is the introduction of “Business AIs”, specialized chat assistants embedded within ads. These bots are designed to answer common customer questions about a product or service. Available in both text and voice formats, these virtual assistants aim to improve customer interaction by addressing queries instantly and guiding users toward making a purchase.
Meta is also experimenting with new features like clickable call-to-action (CTA) stickers for Stories and Reels ads, and virtual try-on tools that use AI to display clothing on digital models of various body types.
These developments are part of Meta’s broader push to make advertising more efficient through automation. The company’s Advantage+ ad system is already showing results, with Meta reporting a 22% average increase in return on ad spend (ROAS) for brands using this approach. Advantage+ uses AI to analyze user behavior, optimize ad formats, and identify potential customers based on real-time data.
While AI is unlikely to replace human creativity entirely, these tools can simplify the ad creation process and help brands connect with their audiences more effectively.