
Meta Begins Removing Under-16 Users Ahead of Australia’s New Social Media Ban

 



Meta has started taking down accounts belonging to Australians under 16 on Instagram, Facebook and Threads, beginning a week before Australia’s new age-restriction law comes into force. The company recently alerted users it believes are between 13 and 15 that their profiles would soon be shut down, and the rollout has now begun.

Current estimates suggest that hundreds of thousands of accounts will be affected across Meta's platforms. Since Threads operates through Instagram credentials, any underage Instagram account will also lose access to Threads.

Australia’s new policy, which becomes fully active on 10 December, prevents anyone under 16 from holding an account on major social media sites. This law is the first of its kind globally. Platforms that fail to take meaningful action can face penalties reaching up to 49.5 million Australian dollars. The responsibility to monitor and enforce this age limit rests with the companies, not parents or children.

A Meta spokesperson explained that following the new rules will require ongoing adjustments, as compliance involves several layers of technology and review. The company has argued that the government should shift age verification to app stores, where users could verify their age once when downloading an app. Meta claims this would reduce the need for children to repeatedly confirm their age across multiple platforms and may better protect privacy.

Before their accounts are removed, underage users can download and store their photos, videos and messages. Those who believe Meta has made an incorrect assessment can request a review and prove their age by submitting government identification or a short video-based verification.

The new law affects a wide list of services, including Facebook, Instagram, Snapchat, TikTok, Threads, YouTube, X, Reddit, Twitch and Kick. However, platforms designed for younger audiences or tools used primarily for education, such as YouTube Kids, Google Classroom and messaging apps like WhatsApp, are not included. Authorities have also been examining whether children are shifting to lesser-known apps, and companies behind emerging platforms like Lemon8 and Yope have already begun evaluating whether they fall under the new rules.

Government officials have stated that the goal is to reduce children’s exposure to harmful online material, which includes violent content, misogynistic messages, eating disorder promotion, suicide-related material and grooming attempts. A national study reported that the vast majority of children aged 10 to 15 use social media, with many encountering unsafe or damaging content.

Critics, however, warn that age verification tools may misidentify users, create privacy risks or fail to stop determined teenagers from using alternative accounts. Others argue that removing teens from regulated platforms might push them toward unmonitored apps, reducing online safety rather than improving it.

Australian authorities expect challenges in the early weeks of implementation but maintain that the long-term goal is to reduce risks for the youngest generation of online users.



Meta Cleared of Monopoly Charges in FTC Antitrust Case

 

A U.S. federal judge ruled that Meta does not hold a monopoly in the social media market, rejecting the FTC's antitrust lawsuit seeking divestiture of Instagram and WhatsApp. The FTC, joined by multiple states, filed the suit in December 2020, alleging Meta (formerly Facebook) violated Section 2 of the Sherman Act by acquiring Instagram for $1 billion in 2012 and WhatsApp for $19 billion in 2014. 

These moves were part of a supposed "buy-or-bury" strategy to eliminate rivals in "personal social networking services" (PSNS), stifling innovation, increasing ads, and weakening privacy. The agency claimed Meta's dominance left consumers with few alternatives, excluding platforms like TikTok and YouTube from its narrow market definition.

Trial and ruling

U.S. District Judge James Boasberg oversaw a seven-week trial ending in May 2025, featuring testimony from Meta CEO Mark Zuckerberg, who highlighted competition from TikTok and YouTube. In an 89-page opinion on November 18, 2025, Boasberg ruled the FTC failed to prove current monopoly power, noting the social media landscape's rapid evolution with surging apps, new features, and AI content. He emphasized that Meta's market share—below 50% and declining in a broader market including Snapchat, TikTok, and YouTube—showed no insulation from rivals.

Key arguments and evidence

The FTC presented internal emails suggesting Zuckerberg feared Instagram and WhatsApp as threats, arguing the acquisitions suppressed competition and harmed users via heavier ads and less privacy. Boasberg dismissed this, finding direct evidence like supra-competitive profits or price hikes insufficient for monopoly proof, and rejected the PSNS market as outdated given overlapping uses across apps. Meta countered that regulators approved the deals initially and that forcing divestiture would hurt U.S. innovation.

Implications

Meta hailed the decision as affirming fierce competition and its contributions to growth, avoiding operational upheaval for its 3.54 billion daily users. The FTC expressed disappointment and is reviewing options, marking a setback amid wins against Google but ongoing cases versus Apple and Amazon. Experts view it as reinforcing consumer-focused antitrust in dynamic tech markets.

WhatsApp’s “We See You” Post Sparks Privacy Panic Among Users

 

WhatsApp found itself in an unexpected storm this week after a lighthearted social media post went terribly wrong. The Meta-owned messaging platform, known for emphasizing privacy and end-to-end encryption, sparked alarm when it posted a playful message on X that read, “people who end messages with ‘lol’ we see you, we honor you.” What was meant as a fun cultural nod quickly became a PR misstep, as users were unsettled by the phrase “we see you,” which seemed to contradict WhatsApp’s most fundamental promise—that it can’t see users’ messages at all. 

Within minutes, the post went viral, amassing over five million views and an avalanche of concerned replies. “What about end-to-end encryption?” several users asked, worried that WhatsApp was implying it had access to private conversations. The company quickly attempted to clarify the misunderstanding, replying, “We meant ‘we see you’ figuratively lol (see what we did there?). Your personal messages are protected by end-to-end encryption and no one, not even WhatsApp, can see them.” 

Despite the clarification, the irony wasn’t lost on users—or critics. A platform that has spent years assuring its three billion users that their messages are private had just posted a statement that could easily be read as the opposite. The timing and phrasing of the post made it a perfect recipe for confusion, especially given the long-running public skepticism around Meta’s privacy practices. WhatsApp continued to explain that the message was simply a humorous way to connect with users who frequently end their chats with “lol.” 

The company reiterated that nothing about its encryption or privacy commitments had changed, emphasizing that personal messages remain visible only to senders and recipients. “We see you,” they clarified, was intended as a metaphor for understanding user habits—not an admission of surveillance. The situation became even more ironic considering it unfolded on X, Elon Musk’s platform, where he has previously clashed with WhatsApp over privacy concerns. 

Musk has repeatedly criticized Meta’s handling of user data, and many expect him to seize on this incident as yet another opportunity to highlight his stance on digital privacy. Ultimately, the backlash served as a reminder of how easily tone can be misinterpreted when privacy is the core of your brand. A simple social media joke, meant to be endearing, became a viral lesson in communication strategy. 

For WhatsApp, the encryption remains intact, the messages still unreadable—but the marketing team has learned an important rule: never joke about “seeing” your users when your entire platform is built on not seeing them at all.

Privacy Laws Struggle to Keep Up with Meta’s ‘Luxury Surveillance’ Glasses


Meta’s newest smart glasses have reignited concerns about privacy, as many believe the company is inching toward a world where constant surveillance becomes ordinary. 

Introduced at Meta’s recent Connect event, the glasses reflect the kind of future that science fiction has long warned about, where everyone can record anyone at any moment and privacy nearly disappears. This is not the first time the tech industry has tried to make wearable cameras mainstream. 

More than ten years ago, Google launched Google Glass, which quickly became a public failure. People mocked its users as “Glassholes,” criticizing how easily the device could invade personal space. The backlash revealed that society was not ready for technology that quietly records others without their consent. 

Meta appears to have taken a different approach. By partnering with Ray-Ban, the company has created glasses that look fashionable and ordinary. Small cameras are placed near the nose bridge or along the outer rims, and a faint LED light is the only sign that recording is taking place. 

The glasses include a built-in display, voice-controlled artificial intelligence, and a wristband that lets the wearer start filming or livestreaming with a simple gesture. All recorded footage is instantly uploaded to Meta’s servers. 

Even with these improvements in design, the legal and ethical issues remain. Current privacy regulations are too outdated to deal with the challenges that come with such advanced wearable devices. 

Experts believe that social pressure and public disapproval may still be stronger than any law in discouraging misuse. As Meta promotes its vision of smart eyewear, critics warn that what is really being made normal is a culture of surveillance. 

The sleek design and luxury branding may make the technology more appealing, but the real risk lies in how easily people may accept being watched everywhere they go.

Tech Giants Pour Billions Into AI Race for Market Dominance

 

Tech giants are intensifying their investments in artificial intelligence, fueling an industry boom that has driven stock markets to unprecedented heights. Fresh earnings reports from Meta, Alphabet, and Microsoft underscore the immense sums being poured into AI infrastructure—from data centers to advanced chips—despite lingering doubts about the speed of returns.

Meta announced that its 2025 capital expenditures will range between $70 billion and $72 billion, slightly higher than its earlier forecast. The company also revealed plans for substantially larger spending growth in 2026 as it seeks to compete more aggressively with players like OpenAI.

During a call with analysts, CEO Mark Zuckerberg defended Meta’s aggressive investment strategy, emphasizing AI’s transformative potential in driving both new product development and enhancing its core advertising business. He described the firm’s infrastructure as operating in a “compute-starved” state and argued that accelerating spending was essential to unlocking future growth.

Alphabet, parent to Google and YouTube, also raised its annual capital spending outlook to between $91 billion and $93 billion—up from $85 billion earlier this year. This nearly doubles what the company spent in 2024 and highlights its determination to stay at the forefront of large-scale AI development.

Microsoft’s quarterly report similarly showcased its expanding investment efforts. The company disclosed $34.9 billion in capital expenditures through September 30, surpassing analyst expectations and climbing from $24 billion in the previous quarter. CEO Satya Nadella said Microsoft continues to ramp up AI spending in both infrastructure and talent to seize what he called a “massive opportunity.” He noted that Azure and the company’s broader portfolio of AI tools are already having tangible real-world effects.

Investor enthusiasm surrounding these bold AI commitments has helped lift the share prices of all three firms above the broader S&P 500 index. Still, Wall Street remains keenly interested in seeing whether these heavy capital outlays will translate into measurable profits.

Bank of America senior economist Aditya Bhave observed that robust consumer activity and AI-driven business investment have been the key pillars supporting U.S. economic resilience. As long as the latter remains strong, he said, it signals continued GDP growth. Despite an 83 percent profit drop for Meta due to a one-time tax charge, Microsoft and Alphabet reported profit increases of 12 percent and 33 percent, respectively.

EU Accuses Meta of Violating Digital Services Act Over Content Reporting Rules

 

The European Commission has accused Meta of breaching the European Union’s Digital Services Act (DSA), alleging that Facebook and Instagram fail to provide users with simple and accessible ways to report illegal content. 

In a preliminary ruling, the Commission said Meta’s platforms use “dark patterns” or deceptive design techniques that make it unnecessarily difficult for users to flag material such as child sexual abuse or terrorist content. 

“Neither Facebook nor Instagram appear to provide a user-friendly and easily accessible ‘Notice and Action’ mechanism,” the Commission said in a statement. “Meta’s systems impose several unnecessary steps and additional demands on users.” 

The EC also found that Meta’s appeal processes do not allow users to present explanations or evidence when contesting content moderation decisions, limiting their ability to challenge removals or restrictions. 

If the findings are confirmed, Meta could face penalties of up to 6% of its global annual turnover, along with possible periodic fines for non-compliance. Meta has the opportunity to respond before a final decision is issued. 

Meta pushes back 

Meta said it disagrees with the European Commission’s interpretation and maintains that its operations comply with the DSA. “We disagree with any suggestion that we have breached the DSA,” the company said. 

“We have made significant changes to our content reporting options, appeals process, and data access tools since the law came into force, and we believe these meet the EU’s requirements.”

Transatlantic tensions rise 

The case comes amid mounting tensions between Brussels and Washington over the regulation of US tech giants. The Trump administration has warned that EU measures targeting American firms could trigger new tariffs. US Federal Trade Commission (FTC) Chair Andrew Ferguson recently sent letters to several technology companies, cautioning that “censoring Americans to comply with a foreign power’s laws” could violate US law. 

TikTok also under scrutiny 

Meta is not alone in facing EU scrutiny. The Commission also said it had preliminary evidence that Meta and TikTok failed to provide adequate data access to independent researchers, another key DSA requirement. The EC argued that the platforms’ processes for granting researchers access to public data are “burdensome” and result in “partial or unreliable data”, undermining studies on issues such as online harms to minors. TikTok, for its part, said it remains “committed to transparency” and has shared data with nearly 1,000 research teams. However, the company warned that some DSA requirements may conflict with Europe’s data privacy law, the GDPR. 

“If it is not possible to fully comply with both, we urge regulators to provide clarity on how these obligations should be reconciled,” TikTok said. 

What’s next 

The EU’s investigation adds to the growing list of challenges facing global social media companies under the DSA, a sweeping law designed to increase accountability and transparency in online platforms. 

If confirmed, the ruling could set a major precedent for enforcement under the DSA, which has already prompted major compliance efforts across the tech industry.

EU Accuses Meta of Breaching Digital Rules, Raises Questions on Global Tech Compliance

 




The European Commission has accused Meta Platforms, the parent company of Facebook and Instagram, of violating the European Union’s Digital Services Act (DSA) by making it unnecessarily difficult for users to report illegal online content and challenge moderation decisions.

In its preliminary findings, the Commission said both platforms lack a user-friendly “Notice and Action” system — the mechanism that allows people to flag unlawful material such as child sexual abuse content or terrorist propaganda. Regulators noted that users face multiple steps and confusing options before they can file a report. The Commission also claimed that Meta’s interface relies on “dark patterns”, which are design features that subtly discourage users from completing certain actions, such as submitting reports.

According to the Commission, Meta’s appeal process also falls short of DSA requirements. The current system allegedly prevents users from adding explanations or submitting supporting evidence when disputing a moderation decision. This, the regulator said, limits users’ ability to express why they believe a decision was unfair and weakens the overall transparency of Meta’s content moderation practices.

The European Commission’s findings are not final, and Meta has the opportunity to respond before any enforcement action is taken. If the Commission confirms these violations, it could issue a non-compliance decision, which may result in penalties of up to 6 percent of Meta’s global annual revenue. The Commission may also impose recurring fines until the company aligns its operations with EU law.

Meta, in a public statement, said it “disagrees with any suggestion” that it breached the DSA. The company stated that it has already made several updates to comply with the law, including revisions to content reporting options, appeals procedures, and data access tools.

The European Commission also raised similar concerns about TikTok, saying that both companies have limited researchers’ access to public data on their platforms. The DSA requires large online platforms to provide sufficient data access so independent researchers can analyze potential harms — for example, whether minors are exposed to illegal or harmful content. The Commission’s review concluded that the data-access tools of Facebook, Instagram, and TikTok are burdensome and leave researchers with incomplete or unreliable datasets, which hinders academic and policy research.

TikTok responded that it has provided data to almost 1,000 research teams and remains committed to transparency. However, the company noted that the DSA’s data-sharing obligations sometimes conflict with the General Data Protection Regulation (GDPR), making it difficult to comply with both laws simultaneously. TikTok urged European regulators to offer clarity on how these two frameworks should be balanced.

Beyond Europe, the investigation may strain relations with the United States. American officials have previously criticized the EU for imposing regulatory burdens on U.S.-based tech firms. U.S. FTC Chairman Andrew Ferguson recently warned companies that censoring or modifying content to satisfy foreign governments could violate U.S. law. President Donald Trump has also expressed opposition to EU digital rules and even threatened tariffs against countries enforcing them.

For now, the Commission’s investigation continues. If confirmed, the case could set a major precedent for how global social media companies manage user safety, transparency, and accountability under Europe’s strict online governance laws.


Meta to Use AI Chat Data for Targeted Ads Starting December 16

 

Meta, the parent company of social media giants Facebook and Instagram, will soon begin leveraging user conversations with its AI chatbot to drive more precise targeted advertising on its platforms. 

Starting December 16, Meta will integrate data from interactions users have with the generative AI chat tool directly into its ad targeting algorithms. For instance, if a user tells the chatbot about a preference for pizza, this information could translate to seeing additional pizza-related ads, such as Domino's promotions, across Instagram and Facebook feeds.

Notably, users do not have the option to opt out of this new data usage policy, sparking debates and concerns over digital privacy. Privacy advocates and everyday users alike have expressed discomfort with the increasing granularity of Meta’s ad targeting, as hyper-targeted ads are widely perceived as intrusive and reflective of a broader erosion of personal privacy online. 

In response to these growing concerns, Meta claims there are clear boundaries regarding what types of conversational data will be incorporated into ad targeting. The company lists several sensitive categories it pledges to exclude: religious beliefs, political views, sexual orientation, health information, and racial or ethnic origin. Despite these assurances, skepticism remains about how effectively Meta can prevent indirect influences on ad targeting, since related topics might naturally slip into AI interactions even without explicit references.
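Meta has not disclosed how chat signals will feed its ad systems, so the following Python sketch is purely illustrative: it shows one plausible shape of keyword-based interest extraction that drops any message touching the excluded sensitive categories. The keyword tables and function names here are invented for the example.

    # Toy sketch only; not Meta's algorithm. Keyword tables are invented.
    INTEREST_KEYWORDS = {
        "pizza": "restaurants",        # e.g. "I love pizza" -> restaurant ads
        "sneakers": "footwear",
        "camping": "outdoor gear",
    }
    SENSITIVE_KEYWORDS = {"religion", "election", "diagnosis", "orientation"}

    def interests_from_chat(message: str) -> set:
        """Return ad-interest topics, skipping messages that touch the
        sensitive categories Meta says it excludes from targeting."""
        words = {w.strip(".,!?").lower() for w in message.split()}
        if words & SENSITIVE_KEYWORDS:
            return set()               # discard the whole message
        return {topic for kw, topic in INTEREST_KEYWORDS.items() if kw in words}

    print(interests_from_chat("I love pizza lol"))      # {'restaurants'}
    print(interests_from_chat("about my diagnosis"))    # set()

As the skeptics quoted above note, a filter like this is easy to sidestep: related topics can imply a sensitive attribute without using any flagged keyword.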

Industry commentators have highlighted the novelty and controversy of Meta's move, calling it a 'new frontier in digital privacy.' Some users are openly calling for boycotts of Meta's chat features or responding with jaded irony, pointing out that Meta's business model has always relied on monetizing user data.

Meta's policy will initially exclude the United Kingdom, South Korea, and all countries in the European Union, likely due to stricter privacy regulations and ongoing scrutiny by European authorities. The new initiative fits into Meta CEO Mark Zuckerberg’s broader strategy to capitalize on AI, with the company planning a massive $600 billion investment in AI infrastructure over the coming years. 

With this policy shift, over 3.35 billion daily active users worldwide—except in the listed exempted regions—can expect changes in the nature and specificity of the ads they see across Meta’s core platforms. The change underscores the ongoing tension between user privacy and tech companies’ drive for personalized digital advertising.

Meta's Platforms Rank Worst in Social Media Privacy Rankings: Report

Meta’s Instagram, WhatsApp, and Facebook have once again been flagged as the most privacy-violating social media apps. According to Incogni’s Social Media Privacy Ranking report 2025, Meta and TikTok are at the bottom of the list. Elon Musk’s X (formerly Twitter) has also received poor rankings in various categories, but has done better than Meta in a few categories.

Discord, Pinterest, and Quora perform well

The report analyzed 15 of the most widely used social media platforms globally, measuring them against 14 privacy criteria organized into six different categories: AI data use, user control, ease of access, regulatory transgressions, transparency, and data collection. The research methodology focused on how an average user could understand and control privacy policies.

Discord, Pinterest, and Quora performed best in the 2025 ranking. Discord placed first, thanks to its policy of not handing user data over for AI model training. Pinterest ranked second, thanks to its strong user controls and fewer regulatory penalties. Quora came third thanks to its limited collection of user data.

Why were Meta platforms penalized?

The Meta platforms, however, were penalized heavily across several categories. Facebook was penalized for frequent regulatory fines, such as those under Europe's GDPR, along with penalties in the US and other regions. Instagram and WhatsApp received heavy penalties due to policies allowing the collection of sensitive personal data, such as sexual orientation and health information. X was penalized for vast data collection.

Penalties against X

X was penalized for vast data collection and past privacy fines, but it still ranked above Meta and TikTok in some categories. X was among the easiest platforms to delete an account from, and it also provided information to government organizations at a lower rate than other platforms. However, X allows user data to be used for AI model training, which lowered its overall privacy score.

“One of the core principles motivating Incogni’s research here is the idea that consent to have personal information gathered and processed has to be properly informed to be valid and meaningful. It’s research like this that arms users with not only the facts but also the tools to inform their choices,” Incogni said in its blog. 

FileFix Attack Uses Fake Meta Suspensions to Spread StealC Malware

 

A new cyber threat known as the FileFix attack is gaining traction, using deceptive tactics to trick users into downloading malware. According to Acronis, which first identified the campaign, hackers are sending fake Meta account suspension notices to lure victims into installing the StealC infostealer. Reported by Bleeping Computer, the attack relies on social engineering techniques that exploit urgency and fear to convince targets to act quickly without suspicion. 

The StealC malware is designed to extract sensitive information from multiple sources, including cloud-stored credentials, browser cookies, authentication tokens, messaging platforms, cryptocurrency wallets, VPNs, and gaming accounts. It can also capture desktop screenshots. Victims are directed to a fake Meta support webpage, available in multiple languages, warning them of imminent account suspension. The page urges users to open an “incident report” that is in fact a disguised PowerShell command. Once executed, the command installs StealC on the victim’s device.

To execute the attack, users are instructed to copy a path that appears legitimate but contains hidden malicious code and subtle formatting tricks, such as extra spaces, making it harder to detect. Unlike traditional ClickFix attacks, which use the Windows Run dialog box, FileFix leverages the Windows File Explorer address bar to execute malicious commands. This method, attributed to a researcher known as mr.fox, makes the attack harder for casual users to recognize. 
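As a rough illustration of that padding trick, the Python heuristic below flags copied text that pairs a command interpreter with a long run of spaces, the combination FileFix uses to hide the payload at the left edge of the address bar. It is an awareness-training sketch under assumed patterns, not a detection product.

    import re

    # Interpreter names commonly abused in ClickFix/FileFix-style lures.
    INTERPRETERS = re.compile(
        r"\b(powershell|pwsh|cmd(?:\.exe)?|mshta|rundll32)\b", re.I)

    def looks_like_filefix_lure(copied_text: str) -> bool:
        has_interpreter = bool(INTERPRETERS.search(copied_text))
        has_padding = " " * 20 in copied_text    # long space run hides the tail
        return has_interpreter and has_padding

    sample = 'powershell -w hidden -c "..."' + " " * 40 + "C:\\incident_report.pdf"
    print(looks_like_filefix_lure(sample))       # True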

Acronis has emphasized the importance of user awareness and training, particularly educating people on the risks of copying commands or paths from suspicious websites into system interfaces. Recognizing common phishing red flags—such as urgent language, unexpected warnings, and suspicious links—remains critical. Security experts recommend that users verify account issues by directly visiting official websites rather than following embedded links in unsolicited emails. 

Additional protective measures include enabling two-factor authentication (2FA), which provides an extra security layer even if login credentials are stolen, and ensuring that devices are protected with up-to-date antivirus solutions. Advanced features such as VPNs and hardened browsers can also reduce exposure to such threats. 

Cybersecurity researchers warn that both FileFix and its predecessor ClickFix are likely to remain popular among attackers until awareness becomes widespread. As these techniques evolve, sharing knowledge within organizations and communities is seen as a key defense. At the same time, maintaining strong cyber hygiene and securing personal devices are essential to reduce the risk of falling victim to these increasingly sophisticated phishing campaigns.

Meta Overhauls AI Chatbot Safeguards for Teenagers

 

Meta has announced new artificial intelligence safeguards to protect teenagers following a damaging Reuters investigation that exposed internal company policies allowing inappropriate chatbot interactions with minors. The social media giant is now training its AI systems to avoid flirtatious conversations and discussions about self-harm or suicide with teenage users. 

Background investigation 

The controversy began when Reuters uncovered an internal 200-page Meta document titled "GenAI: Content Risk Standards" that permitted chatbots to engage in "romantic or sensual" conversations with children as young as 13. 

The document contained disturbing examples of acceptable AI responses, including "Your youthful form is a work of art" and "Every inch of you is a masterpiece – a treasure I cherish deeply". These guidelines had been approved by Meta's legal, public policy, and engineering teams, including the company's chief ethicist. 

Immediate safety measures 

Meta spokesperson Andy Stone announced that the company is implementing immediate interim measures while developing more comprehensive long-term solutions for teen AI safety. The new safeguards include training chatbots to avoid discussing self-harm, suicide, disordered eating, and potentially inappropriate romantic topics with teenage users. Meta is also temporarily limiting teen access to certain AI characters that could hold inappropriate conversations.

Some of Meta's user-created AI characters include sexualized chatbots such as "Step Mom" and "Russian Girl," which will now be restricted for teen users. Instead, teenagers will only have access to AI characters that promote education and creativity. The company acknowledged that these policy changes represent a reversal from previous positions where it deemed such conversations appropriate. 

Government response and investigation

The revelations sparked swift political backlash. Senator Josh Hawley launched an official investigation into Meta's AI policies, demanding documentation about the guidelines that enabled inappropriate chatbot interactions with minors. A coalition of 44 state attorneys general wrote to AI companies including Meta, expressing they were "uniformly revolted by this apparent disregard for children's emotional well-being". 

Senator Edward Markey has urged Meta to completely prevent minors from accessing AI chatbots on its platforms, citing concerns that Meta incorporates teenagers' conversations into its AI training process. The Federal Trade Commission is now preparing to scrutinize the mental health risks of AI chatbots to children and will demand internal documents from major tech firms including Meta. 

Implementation timeline 

Meta confirmed that the revised document was "inconsistent with its broader policies" and has since removed sections allowing chatbots to flirt or engage in romantic roleplay with minors. Company spokesperson Stephanie Otway acknowledged these were mistakes, stating the updates are "already in progress" and the company will "continue to adapt our approach to help ensure teens have safe, age-appropriate experiences with AI". 

The controversy highlights broader concerns about AI chatbot safety for vulnerable users, particularly as large companies integrate these tools directly into widely-used platforms where the vast majority of young people will encounter them.

Meta.ai Privacy Lapse Exposes User Chats in Public Feed

 

Meta’s new AI-driven chatbot platform, Meta.ai, launched recently with much fanfare, offering features like text and voice chats, image generation, and video restyling. Designed to rival platforms like ChatGPT, the app also includes a Discover feed, a space intended to showcase public content generated by users. However, what Meta failed to communicate effectively was that many users were unintentionally sharing their private conversations in this feed—sometimes with extremely sensitive content attached. 

In May, journalists flagged the issue when they discovered public chats revealing deeply personal user concerns—ranging from financial issues and health anxieties to legal troubles. These weren’t obscure posts either; they appeared in a publicly accessible area of the app, often containing identifying information. Conversations included users seeking help with medical diagnoses, children talking about personal experiences, and even incarcerated individuals discussing legal strategies—none of whom appeared to realize their data was visible to others. 

Despite some recent tweaks to the app’s sharing settings, disturbing content still appears on the Discover feed. Users unknowingly uploaded images and video clips, sometimes including faces, alongside alarming or bizarre prompts. One especially troubling instance featured a photo of a child at school, accompanied by a prompt instructing the AI to “make him cry.” Such posts reflect not only poor design choices but also raise ethical questions about the purpose and moderation of the Discover feed itself. 

The issue evokes memories of other infamous data exposure incidents, such as AOL’s release of anonymized user searches in 2006, which provided unsettling insight into private thoughts and behaviors. While social media platforms are inherently public, users generally view AI chat interactions as private, akin to using a search engine. Meta.ai blurred that boundary—perhaps unintentionally, but with serious consequences. Many users turned to Meta.ai seeking support, companionship, or simple productivity help. Some asked for help with job listings or obituary writing, while others vented emotional distress or sought comfort during panic attacks. 

In some cases, users left chats expressing gratitude—believing the bot had helped. But a growing number of conversations end in frustration or embarrassment when users realize the bot cannot deliver on its promises or that their content was shared publicly. These incidents highlight a disconnect between how users engage with AI tools and how companies design them. Meta’s ambition to merge AI capabilities with social interaction seems to have ignored the emotional and psychological expectations users bring to private-sounding features. 

For those using Meta.ai as a digital confidant, the lack of clarity around privacy settings has turned an experiment in convenience into a public misstep. As AI systems become more integrated into daily life, companies must rethink how they handle user data—especially when users assume privacy. Meta.ai’s rocky launch serves as a cautionary tale about transparency, trust, and design in the age of generative AI.

Meta Introduces Advanced AI Tools to Help Businesses Create Smarter Ads


Meta has rolled out a fresh set of AI-powered tools aimed at helping advertisers design more engaging and personalized promotional content. These new features include the ability to turn images into short videos, brand-focused image generation, AI-powered chat assistants, and tools that enhance shopping experiences within ads.

One of the standout additions is Meta’s video creation feature, which allows businesses to transform multiple images into animated video clips. These clips can include music and text, making it easier for advertisers to produce dynamic visual content without needing video editing skills. Because the videos are short, they’re less likely to appear distorted or lose quality, a common issue in longer AI-generated videos.

Currently, this feature is being tested with select business partners.

Another tool in development is “Video Highlights,” which uses AI to identify the most important parts of a video. Viewers will be able to jump directly to these key scenes, guided by short phrases and image previews chosen by the system. This can help businesses convey their product value more clearly and keep viewers engaged.

Meta is also enhancing its AI image creation tools. Advertisers will now be able to insert their logos and brand colors directly into the images generated by AI. This ensures that their brand identity stays consistent across all marketing content. Additionally, AI-generated ad text can now reflect the personality or style of the brand, offering a more customized tone in promotions.

Another major update is the introduction of “Business AIs”, specialized chat assistants embedded within ads. These bots are designed to answer common customer questions about a product or service. Available in both text and voice formats, these virtual assistants aim to improve customer interaction by addressing queries instantly and guiding users toward making a purchase.

Meta is also experimenting with new features like clickable call-to-action (CTA) stickers for Stories and Reels ads, and virtual try-on tools that use AI to display clothing on digital models of various body types.

These developments are part of Meta’s broader push to make advertising more efficient through automation. The company’s Advantage+ ad system is already showing results, with Meta reporting a 22% average increase in return on ad spend (ROAS) for brands using this approach. Advantage+ uses AI to analyze user behavior, optimize ad formats, and identify potential customers based on real-time data.

While AI is unlikely to replace human creativity entirely, these tools can simplify the ad creation process and help brands connect with their audiences more effectively. 

Beware of Pig Butchering Scams That Steal Your Money

Pig butchering, a term borrowed from the meat industry, has sadly become a devastating form of cybercrime that can wipe out a victim's finances entirely.

Pig Butchering is a “form of investment fraud in the crypto space where scammers build relationships with targets through social engineering and then lure them to invest crypto in fake opportunities or platforms created by the scammer,” according to the California Department of Financial Protection & Innovation.

Pig butchering has squeezed billions of dollars from victims globally. The Cambodia-based Huione Group alone took in over $4 billion from August 2021 to January 2025, the New York Post reported.

How to stay safe from pig butchering?

Individuals should watch out for certain warning signs to avoid getting caught in these schemes. Scammers often target seniors and people who are unfamiliar with cybercrime. The National Council on Aging cautions that such scams begin with messages from scammers pretending to be someone else. Never respond or send money to strangers who text you online, even if the story sounds compelling. Scammers rely on earning your trust, and a sob story is one easy way for them to do it.

Another red flag is receiving SMS or social media messages that push you onto other platforms such as WeChat or Telegram, which have fewer safeguards. Scammers also persuade victims to invest money, which they promise to return with big profits. In one incident, a scammer even told the victim to “go to a loan shark” to get the money.

Stopping scammers

Last year, Meta blocked over 2 million accounts that were promoting crypto investment scams such as pig butchering. Businesses have stepped up efforts to combat the issue, but the problem very much persists. A major step is raising awareness through public safety campaigns that broadcast tips to help individuals avoid falling prey to such scams.

Platforms have started showing warnings in Instagram DMs and Facebook Messenger alerting users to “potentially suspicious interactions or cold outreach from people you don’t know”, which is a good initiative. Banks have also begun tipping off customers about the dangers of scams when they send money online.

Want to Leave Facebook? Do this.

Confused about leaving Facebook?

Many people are changing their social media habits and opting out of services. Facebook has seen a notable exodus of users since the announcement in March that Meta was ending independent fact-checking on its platform. Fact-checking has been replaced with community notes, which let users add context to potentially false or misleading posts.

Users with years of photos and posts on Facebook are often unsure how to collect their data before removing their accounts. If you face the same problem, this post will help you delete Facebook permanently while taking all your information with you on the way out.

How to remove Facebook?

If you no longer want to be on Facebook, deleting your account is the only way to remove yourself from the platform completely. If you are not sure, deactivating your account lets you take a break from Facebook without permanent deletion.

Make sure to remove third-party Facebook logins before deleting your account. 

How to leave third-party apps?

Third-party apps like DoorDash and Spotify let you log in using your Facebook account. This saves you from remembering another password, but if you plan to delete Facebook, you must update your login settings first. Once your account is deleted, there will no longer be a Facebook account to log in through.

Fortunately, there is a simple way to find which sites and applications are connected to Facebook and disconnect them before removing your account. Once you disconnect those websites and applications from Facebook, you will need to adjust how you log in to them.

Visit each application and website to set a new password or passkey, or switch to another single sign-on option, such as Google.

How is deleting different from deactivating a Facebook account?

If you want to stay away from Facebook, you have two choices: delete your account permanently, or deactivate it to take a temporary break.

WhatsApp Launches First Dedicated iPad App with Full Multitasking and Calling Features

 

After years of anticipation, WhatsApp has finally rolled out a dedicated iPad app, allowing users to enjoy the platform’s messaging capabilities natively on Apple’s tablet. Available now for download via the App Store, this new version is built to take advantage of iPadOS’s multitasking tools such as Stage Manager, Split View, and Slide Over, marking a major step forward in cross-device compatibility for the platform. 

Previously, iPad users had to rely on WhatsApp Web or third-party solutions to access their chats on the tablet. These alternatives lacked several core functionalities and offered limited support for features like voice and video calls. With this release, users can now sync messages across devices, initiate calls, and send media from their iPad with the same ease and security offered on the iPhone app. 

In its official blog post, WhatsApp highlighted how the new app enhances productivity and communication. Users can, for instance, participate in group calls while researching online or send messages during video meetings — all within the multitasking-friendly iPad interface. The app also supports accessories like Apple’s Magic Keyboard and Apple Pencil, further streamlining the messaging experience. The absence of an iPad-specific version until now had often puzzled users, especially given WhatsApp’s massive global user base and Meta’s (formerly Facebook) ownership since 2014. 

Although the iPhone version has long dominated mobile messaging, WhatsApp never clarified why a tablet version wasn’t prioritized — despite the iPad being one of the most popular tablets worldwide. This launch now allows users to take full advantage of WhatsApp’s ecosystem on a larger screen without needing workarounds. Unlike WhatsApp Web, the new native app can access the device’s cameras and offer a richer interface for media sharing and video calls. 

With this, WhatsApp fills a major gap in its product offering and joins competitors like Telegram, which has long offered a native iPad experience. Interestingly, WhatsApp’s tweet teasing the launch included a playful emoji in response to a user request, generating buzz before the official announcement. In contrast, Telegram jokingly responded with a tweet poking fun at the delayed release.

With over 3 billion active users globally — including more than 500 million in India — WhatsApp’s move to embrace the iPad platform marks a significant upgrade in its commitment to universal accessibility and user experience.

“Meta Mirage” Phishing Campaign Poses Global Cybersecurity Threat to Businesses

 

A sophisticated phishing campaign named Meta Mirage is targeting companies using Meta’s Business Suite, according to a new report by cybersecurity experts at CTM360. This global threat is specifically engineered to compromise high-value accounts—including those running paid ads and managing brand profiles.

Researchers discovered that the attackers craft convincing fake communications impersonating official Meta messages, deceiving users into revealing sensitive login information such as passwords and one-time passcodes (OTP).

The scale of the campaign is substantial. Over 14,000 malicious URLs were detected, and alarmingly, nearly 78% of these were not flagged or blocked by browsers when the report was released.

What makes Meta Mirage particularly deceptive is the use of reputable cloud hosting services—like GitHub, Firebase, and Vercel—to host counterfeit login pages. “This mirrors Microsoft’s recent findings on how trusted platforms are being exploited to breach Kubernetes environments,” the researchers noted, highlighting a broader trend in cloud abuse.
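Because the counterfeit pages sit on otherwise reputable hosting domains, checking where a login link actually points is a useful first filter. The Python sketch below illustrates the idea; the allow-list is an invented example for illustration, not an authoritative inventory of Meta's login domains.

    from urllib.parse import urlparse

    # Illustrative allow-list; verify Meta's real domains independently.
    OFFICIAL_META_DOMAINS = {"facebook.com", "meta.com", "instagram.com"}

    def is_official_meta_host(url: str) -> bool:
        """True only for an official domain or subdomain, so a lookalike
        page hosted on github.io or vercel.app fails the check."""
        host = (urlparse(url).hostname or "").lower()
        return any(host == d or host.endswith("." + d)
                   for d in OFFICIAL_META_DOMAINS)

    print(is_official_meta_host("https://business.facebook.com/login"))  # True
    print(is_official_meta_host("https://meta-support.vercel.app"))      # False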

Victims receive realistic alerts through email and direct messages. These notifications often mention policy violations, account restrictions, or verification requests, crafted to appear urgent and official. This strategy is similar to the recent Google Sites phishing wave, which used seemingly authentic web pages to mislead users.

CTM360 identified two primary techniques being used:
  • Credential Theft: Victims unknowingly submit passwords and OTPs to lookalike websites. Fake error prompts are displayed to make them re-enter their information, ensuring attackers get accurate credentials.
  • Cookie Theft: Attackers extract browser cookies, allowing persistent access to compromised accounts—even without login credentials.
Compromised business accounts are then weaponized for malicious ad campaigns. “It’s a playbook straight from campaigns like PlayPraetor, where hijacked social media profiles were used to spread fraudulent ads,” the report noted.

The phishing operation is systematic. Attackers begin with non-threatening messages, then escalate the tone over time—moving from mild policy reminders to aggressive warnings about permanent account deletion. This psychological pressure prompts users to respond quickly without verifying the source.

CTM360 advises businesses to:
  • Manage social media accounts only from official or secure devices
  • Use business-specific email addresses
  • Activate Two-Factor Authentication (2FA), illustrated in the TOTP sketch below
  • Periodically audit security settings and login history
  • Train team members to identify and report suspicious activity
This alarming phishing scheme highlights the need for constant vigilance, cybersecurity hygiene, and proactive measures to secure digital business assets.
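To make the 2FA item in the checklist concrete, here is a minimal sketch of a time-based one-time password (TOTP, RFC 6238), the mechanism behind most authenticator apps. The secret is a placeholder, and production systems should rely on a vetted library rather than hand-rolled code.

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, now=None, digits=6, period=30):
        """Minimal RFC 6238 TOTP: HMAC-SHA1 over the 30-second counter."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int((now if now is not None else time.time()) // period)
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # A login flow compares the code the user types against totp(shared_secret).
    print(totp("JBSWY3DPEHPK3PXP"))

Even when phished credentials are accurate, a fresh TOTP code expires within seconds, which blunts the credential-theft technique described above (though not the cookie theft).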

WhatsApp Reveals "Private Processing" Feature for Cloud-Based AI

WhatsApp claims even it cannot read private data

WhatsApp has introduced ‘Private Processing,’ a new technology that lets users access advanced AI features by offloading tasks to privacy-preserving cloud servers, without exposing their chats to Meta. Meta claims even it cannot see the messages while they are being processed. The system relies on encrypted cloud infrastructure and hardware-based isolation so that requests remain invisible to everyone, including Meta, throughout processing.

About private processing

For those who opt in, Private Processing begins with an anonymous verification step performed via the user’s WhatsApp client to confirm that the request comes from a legitimate user.

Meta claims this system keeps WhatsApp’s end-to-end encryption intact while offering AI features in chats. However, the feature currently applies only to select use cases and excludes Meta’s broader AI deployments, including those used in India’s public service systems.

Private Processing employs Trusted Execution Environments (TEEs), secure, isolated virtual machines running on cloud infrastructure that keep AI requests hidden.

About the system

  • Encrypts user requests from the device to the TEE using end-to-end encryption
  • Restricts storage or logging of messages after processing
  • Publishes logs and binary images for external verification and audits
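The first property, encrypting each request so that only the enclave can read it, can be sketched as an HPKE-style exchange. This is a conceptual illustration under assumed primitives (X25519, HKDF, and ChaCha20-Poly1305 from the Python cryptography package), not WhatsApp's actual wire protocol, and it omits the attestation step that would first verify the TEE's public key.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey, X25519PublicKey)
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def encrypt_request_for_tee(tee_public_key: X25519PublicKey, request: bytes):
        """Encrypt an AI request so that only the attested TEE can decrypt it."""
        ephemeral = X25519PrivateKey.generate()       # fresh key per request
        shared = ephemeral.exchange(tee_public_key)   # X25519 Diffie-Hellman
        key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"tee-ai-request").derive(shared)
        nonce = os.urandom(12)
        ciphertext = ChaCha20Poly1305(key).encrypt(nonce, request, None)
        # The TEE derives the same key from its private half to decrypt.
        return ephemeral.public_key(), nonce, ciphertext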

WhatsApp builds AI amid wider privacy concerns

According to Meta, Private Processing is a response to privacy questions around AI in messaging. WhatsApp joins companies like Apple that have introduced confidential AI computing models in the past year. “To validate our implementation of these and other security principles, independent security researchers will be able to continuously verify our privacy and security architecture and its integrity,” Meta said.

It is similar to Apple’s Private Cloud Compute in its public transparency and stateless processing. Currently, however, WhatsApp applies the model only to select features, whereas Apple has said it will use it across all of its AI tools; WhatsApp has made no such commitment yet.

WhatsApp says, “Private Processing uses anonymous credentials to authenticate users over OHTTP. This way, Private Processing can authenticate users to the Private Processing system but remains unable to identify them.”

Investigating the Role of DarkStorm Team in the Recent X Outage

 


Elon Musk’s social media platform X, formerly known as Twitter, was severely disrupted on Monday by a widespread cyberattack that caused multiple service outages. Data from outage-monitoring service Downdetector indicates that the platform experienced at least three significant disruptions over the course of the day, affecting millions of users. At the peak, more than 41,000 users across Europe, North America, the Middle East, and Asia reported outages.
 
The most common technical difficulties users encountered were prolonged connection failures and an inability to fully load the platform. A preliminary assessment suggests the disruptions may have been caused by a coordinated, large-scale cyberattack. While cybersecurity experts are still investigating the extent and origin of the incident, they point to a worrying growth in organised cyberattacks against high-profile digital infrastructure. The incident has raised questions about X’s security framework, especially given the platform’s prominent role in global communications and information dissemination. Authorities and independent cybersecurity analysts continue to examine data logs and attack signatures to identify the perpetrators and understand the attack methodology. A pro-Palestinian hacktivist collective known as the Dark Storm Team has emerged as a significant player in the cyberwarfare landscape. Since its emergence in 2023, the group has orchestrated targeted cyberattacks against Israeli entities and organisations perceived as supportive of Israel.
 
Motivated by a combination of political ideology and financial gain, the group is known for aggressive tactics, including Distributed Denial-of-Service (DDoS) attacks, database intrusions, and other disruptive operations against government agencies, public infrastructure, and organisations perceived to be aligned with Israeli interests.
 
The group is more than an ideological movement; it is also a cybercrime operation that advertises itself openly on encrypted messaging platforms such as Telegram, reportedly selling coordinated DDoS attacks, data breaches, and hacking tools to a wide range of clients. Its operations are sophisticated and well resourced, striking both vulnerable and well-protected targets. Recent activity suggests the group has escalated in both scale and ambition over the past few months. In February 2024, the Dark Storm Team warned that a cyberattack was imminent, threatening NATO member states, Israel, and countries providing support to Israel. The warning was followed by documented incidents that disrupted critical government and digital infrastructure, reinforcing the group’s capacity to act on its threats.
 
According to intelligence reports, Dark Storm has also built ties with pro-Russian cyber collectives, broadening the scope of its operations and giving it access to advanced hacking tools. Beyond extending its technical reach, the collaboration signals an alignment of geopolitical interests.

Among the most prominent incidents attributed to the group is the October 2024 DDoS attack on John F. Kennedy International Airport’s online systems. The group justified the attack by citing the airport’s perceived support for Israeli policies, demonstrating its willingness to strike essential infrastructure in pursuit of its wider agenda. Analysts say Dark Storm combines ideological motivation with profit-driven cybercrime, making it a particularly potent threat in today’s cybersecurity environment.
 
An investigation is underway to determine whether the group was involved in the recent disruptions of platform X. The DarkStorm Team employs a range of sophisticated tactics that blend ideological activism with financially motivated cybercrime, chief among them Distributed Denial-of-Service (DDoS) attacks, ransomware campaigns, and leaks of sensitive information. These activities are designed both to disrupt targets’ operations and to advance specific political narratives while generating illicit revenue. For internal coordination, recruitment, and operational updates, the group relies heavily on encrypted channels, particularly Telegram. These secure platforms afford a degree of anonymity that complicates efforts by law enforcement and cybersecurity firms to track and dismantle its networks.

Alongside its direct cyberattacks, DarkStorm actively monetizes stolen data, selling compromised databases, personal information, and hacking tools on darknet markets. Although DarkStorm presents itself as a grassroots hacktivist organization, cybersecurity analysts increasingly suspect the group receives covert support from nation-state actors, particularly Russia. That suspicion rests on the complexity and scale of its operations, its strategic choice of targets, and the technical sophistication evident in its attacks. These patterns point to a coordinated, well-resourced group, raising concerns that it may serve as a proxy in broader geopolitical conflicts.
 
The rising threat posed by groups like DarkStorm shows how the cyber-warfare landscape is evolving, with ideological, financial, and geopolitical motivations increasingly intertwined, which makes attribution and defence significantly harder for targeted organisations and governments. Elon Musk’s deepening involvement in geopolitical affairs adds further complexity to the narrative around the X cyberattack. After Russian troops invaded Ukraine in February 2022, Musk drew criticism for publicly mocking Ukrainian President Volodymyr Zelensky and for remarks seen as dismissive of Ukraine’s plight. He also heads the Department of Government Efficiency (DOGE), an entity created under the Trump administration that has been cutting U.S. federal employment at an unprecedented pace since Trump returned to office. The administration’s foreign policy stance has shifted markedly, moving away from longstanding U.S. support for Ukraine and toward a more conciliatory posture on Russia. Musk’s geopolitical entanglements extend beyond his role at X.
 
A significant portion of Ukraine’s wartime digital communications has been maintained through the Starlink satellite internet network, which Musk operates through his aerospace company SpaceX. These intersecting spheres of influence, spanning national security, communications infrastructure, and social media, have drawn heightened scrutiny, particularly as X remains a central node in global politics. Cybersecurity firms examining the technical details of the Distributed Denial-of-Service (DDoS) attack report little evidence of Ukrainian involvement.
 
A senior analyst at a leading cybersecurity firm, speaking on condition of anonymity because of restrictions on discussing X publicly, said that no significant attack traffic originated from Ukraine, which was absent from the top 20 sources of malicious IPs linked to the incident. Ukrainian IP addresses rarely appear in such data anyway, given widespread IP spoofing and the global distribution of compromised devices, but their absence helps direct attention toward more likely sources, such as organised cybercrime groups and state-linked actors.
 
The incident underscores the fragility of digital infrastructure in a politically polarised world where geopolitical tensions, corporate influence, and cyberwarfare converge. As investigations continue, experts expect the DarkStorm Team’s role, and the broader implications for global cybersecurity policy, to remain a subject of debate.

WhatsApp Windows Vulnerability CVE-2025-30401 Could Let Hackers Deliver Malware via Fake Images

 

Meta has issued a high-priority warning about a critical vulnerability in the Windows version of WhatsApp, tracked as CVE-2025-30401, which could be exploited to deliver malware under the guise of image files. This flaw affects WhatsApp versions prior to 2.2450.6 and could expose users to phishing, ransomware, or remote code execution attacks. The issue lies in how WhatsApp handles file attachments on Windows. 

The platform displays files based on their MIME type but opens them according to the true file extension. This inconsistency creates a dangerous opportunity for hackers: they can disguise executable files as harmless-looking images like .jpeg files. When a user manually opens the file within WhatsApp, they could unknowingly launch a .exe file containing malicious code. Meta’s disclosure arrives just as new data from online bank Revolut reveals that WhatsApp was the source of one in five online scams in the UK during 2024, with scam attempts growing by 67% between June and December. 
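The fix follows from the flaw: trust the file's magic bytes rather than its claimed extension. The Python sketch below is a hedged illustration of that check (not WhatsApp's code), flagging files whose image extension hides a Windows executable.

    # Magic-byte table for the illustration; real detectors cover many more types.
    MAGIC = {
        b"\xff\xd8\xff": "jpg",            # JPEG
        b"\x89PNG\r\n\x1a\n": "png",       # PNG
        b"MZ": "exe",                      # Windows PE executable
    }

    def kind_from_magic(path: str) -> str:
        with open(path, "rb") as f:
            head = f.read(8)
        for magic, kind in MAGIC.items():
            if head.startswith(magic):
                return kind
        return "unknown"

    def extension_mismatch(path: str) -> bool:
        claimed = path.lower().rsplit(".", 1)[-1].replace("jpeg", "jpg")
        actual = kind_from_magic(path)
        return actual != "unknown" and claimed != actual   # .jpeg hiding MZ -> True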

Cybersecurity experts warn that WhatsApp’s broad reach and user familiarity make it a prime target for exploitation. Adam Pilton, senior cybersecurity consultant at CyberSmart, cautioned that this vulnerability is especially dangerous in group chats. “If a cybercriminal shares the malicious file in a trusted group or through a mutual contact, anyone in that group might unknowingly execute malware just by opening what looks like a regular image,” he explained. 

Martin Kraemer, a security awareness advocate at KnowBe4, highlighted the platform’s deep integration into daily routines—from casual chats to job applications. “WhatsApp’s widespread use means users have developed a level of trust and automation that attackers exploit. This vulnerability must not be underestimated,” Kraemer said. Until users update to the latest version, experts urge WhatsApp users to treat the app like email—avoid opening unexpected attachments, especially from unknown senders or new contacts. 

The good news is that Meta has already issued a fix, and updating the app resolves the vulnerability. Pilton emphasized the importance of patch management, noting, “Cybercriminals will always seek to exploit software flaws, and providers will keep issuing patches. Keeping your software updated is the simplest and most effective protection.” For now, users should update WhatsApp for Windows immediately to mitigate the risk posed by CVE-2025-30401 and remain cautious with all incoming files.