Facebook Tests Paid Access for Sharing Multiple Links

 



Facebook is testing a new policy that places restrictions on how many external links certain users can include in their posts. The change, which is currently being trialled on a limited basis, introduces a monthly cap on link sharing unless users pay for a subscription.

Some users in the United Kingdom and the United States have received in-app notifications informing them that they will only be allowed to share a small number of links in Facebook posts without payment. To continue sharing links beyond that limit, users are offered a subscription priced at £9.99 per month.

Meta, the company that owns Facebook, has confirmed the test and described it as limited in scope. According to the company, the purpose is to assess whether the option to post a higher volume of link-based content provides additional value to users who choose to subscribe.

Industry observers say the experiment reflects Meta’s broader effort to generate revenue from more areas of its platforms. Social media analyst Matt Navarra said the move signals a shift toward monetising essential platform functions rather than optional extras.

He explained that the test is not primarily about identity verification. Instead, it places practical features that users rely on for visibility and reach behind a paid tier. In his view, Meta is now charging for what he describes as “survival features” rather than premium add-ons.

Meta already offers a paid service called Meta Verified, which provides subscribers on Facebook and Instagram with a blue verification badge, enhanced account support, and safeguards against impersonation. Navarra said that after attaching a price to these services, Meta now appears to be applying a similar approach to content distribution itself.

He noted that this includes the basic ability to direct users away from Facebook to external websites, a function that creators and businesses depend on to grow audiences, drive traffic, and promote services.

Navarra was among those who received a notification about the test. He said he was informed that from 16 December onward, he would only be able to include two links per month in Facebook posts unless he subscribed.

For creators and businesses, he said the message is clear. If Facebook plays a role in their audience growth or traffic strategy, that access may now require payment. He added that while platforms have been moving in this direction for some time, the policy makes it explicit.

The test comes as social media platforms increasingly encourage users to verify their accounts in exchange for added features or improved engagement. Platforms such as LinkedIn have also adopted similar models.

After acquiring Twitter, now known as X, in 2022, Elon Musk restructured the platform’s verification system. Blue verification badges were made available only to paying users, who also received increased visibility in replies and recommendation feeds.

That approach proved controversial and resulted in regulatory scrutiny, including a fine imposed by European authorities in December. Despite the criticism, Meta later introduced a comparable paid verification model.

Meta has also announced plans to introduce a “community notes” system, similar to X, allowing users to flag potentially misleading posts. This follows reductions in traditional moderation and third-party fact-checking efforts.

According to Meta, the link-sharing test applies only to a selected group of users who operate Pages or use Facebook’s professional mode. These tools are widely used by creators and businesses to publish content and analyse audience engagement.

Navarra said the test highlights a difficult reality for creators. He argued that Facebook is becoming less reliable as a source of external traffic and is increasingly steering users away from treating the platform as a traffic engine.

He added that the experiment reinforces a long-standing pattern. Meta, he said, ultimately designs its systems to serve its own priorities first.

According to analysts, tests like this underline the risks of building a business that depends too heavily on a single platform. Changes to access, visibility, or pricing can occur with little warning, leaving creators and businesses vulnerable.

Meta has emphasized that the policy remains a trial. However, the experiment illustrates how social media companies continue to reassess which core functions remain free and which are moving behind paywalls.

Meta Begins Removing Under-16 Users Ahead of Australia’s New Social Media Ban

 



Meta has started taking down accounts belonging to Australians under 16 on Instagram, Facebook and Threads, beginning a week before Australia’s new age-restriction law comes into force. The company recently alerted users it believes are between 13 and 15 that their profiles would soon be shut down, and the rollout has now begun.

Current estimates suggest that hundreds of thousands of accounts across Meta’s platforms will be affected. Since Threads operates through Instagram credentials, any underage Instagram account will also lose access to Threads.

Australia’s new policy, which becomes fully active on 10 December, prevents anyone under 16 from holding an account on major social media sites. This law is the first of its kind globally. Platforms that fail to take meaningful action can face penalties reaching up to 49.5 million Australian dollars. The responsibility to monitor and enforce this age limit rests with the companies, not parents or children.

A Meta spokesperson explained that following the new rules will require ongoing adjustments, as compliance involves several layers of technology and review. The company has argued that the government should shift age verification to app stores, where users could verify their age once when downloading an app. Meta claims this would reduce the need for children to repeatedly confirm their age across multiple platforms and may better protect privacy.

Before their accounts are removed, underage users can download and store their photos, videos and messages. Those who believe Meta has made an incorrect assessment can request a review and prove their age by submitting government identification or a short video-based verification.

The new law affects a wide list of services, including Facebook, Instagram, Snapchat, TikTok, Threads, YouTube, X, Reddit, Twitch and Kick. However, platforms designed for younger audiences or tools used primarily for education, such as YouTube Kids, Google Classroom and messaging apps like WhatsApp, are not included. Authorities have also been examining whether children are shifting to lesser-known apps, and companies behind emerging platforms like Lemon8 and Yope have already begun evaluating whether they fall under the new rules.

Government officials have stated that the goal is to reduce children’s exposure to harmful online material, which includes violent content, misogynistic messages, eating disorder promotion, suicide-related material and grooming attempts. A national study reported that the vast majority of children aged 10 to 15 use social media, with many encountering unsafe or damaging content.

Critics, however, warn that age verification tools may misidentify users, create privacy risks or fail to stop determined teenagers from using alternative accounts. Others argue that removing teens from regulated platforms might push them toward unmonitored apps, reducing online safety rather than improving it.

Australian authorities expect challenges in the early weeks of implementation but maintain that the long-term goal is to reduce risks for the youngest generation of online users.



EU Accuses Meta of Breaching Digital Rules, Raises Questions on Global Tech Compliance

 




The European Commission has accused Meta Platforms, the parent company of Facebook and Instagram, of violating the European Union’s Digital Services Act (DSA) by making it unnecessarily difficult for users to report illegal online content and challenge moderation decisions.

In its preliminary findings, the Commission said both platforms lack a user-friendly “Notice and Action” system — the mechanism that allows people to flag unlawful material such as child sexual abuse content or terrorist propaganda. Regulators noted that users face multiple steps and confusing options before they can file a report. The Commission also claimed that Meta’s interface relies on “dark patterns”, which are design features that subtly discourage users from completing certain actions, such as submitting reports.

According to the Commission, Meta’s appeal process also falls short of DSA requirements. The current system allegedly prevents users from adding explanations or submitting supporting evidence when disputing a moderation decision. This, the regulator said, limits users’ ability to express why they believe a decision was unfair and weakens the overall transparency of Meta’s content moderation practices.

The European Commission’s findings are not final, and Meta has the opportunity to respond before any enforcement action is taken. If the Commission confirms these violations, it could issue a non-compliance decision, which may result in penalties of up to 6 percent of Meta’s global annual revenue. The Commission may also impose recurring fines until the company aligns its operations with EU law.

Meta, in a public statement, said it “disagrees with any suggestion” that it breached the DSA. The company stated that it has already made several updates to comply with the law, including revisions to content reporting options, appeals procedures, and data access tools.

The European Commission also raised similar concerns about TikTok, saying that both companies have limited researchers’ access to public data on their platforms. The DSA requires large online platforms to provide sufficient data access so independent researchers can analyze potential harms — for example, whether minors are exposed to illegal or harmful content. The Commission’s review concluded that the data-access tools of Facebook, Instagram, and TikTok are burdensome and leave researchers with incomplete or unreliable datasets, which hinders academic and policy research.

TikTok responded that it has provided data to almost 1,000 research teams and remains committed to transparency. However, the company noted that the DSA’s data-sharing obligations sometimes conflict with the General Data Protection Regulation (GDPR), making it difficult to comply with both laws simultaneously. TikTok urged European regulators to offer clarity on how these two frameworks should be balanced.

Beyond Europe, the investigation may strain relations with the United States. American officials have previously criticized the EU for imposing regulatory burdens on U.S.-based tech firms. U.S. FTC Chairman Andrew Ferguson recently warned companies that censoring or modifying content to satisfy foreign governments could violate U.S. law. Former President Donald Trump has also expressed opposition to EU digital rules and even threatened tariffs against countries enforcing them.

For now, the Commission’s investigation continues. If confirmed, the case could set a major precedent for how global social media companies manage user safety, transparency, and accountability under Europe’s strict online governance laws.


Meta's Platforms Rank Worst in Social Media Privacy Rankings: Report

Meta’s Instagram, WhatsApp, and Facebook have once again been flagged as the most privacy-violating social media apps. According to Incogni’s Social Media Privacy Ranking report 2025, Meta and TikTok sit at the bottom of the list. Elon Musk’s X (formerly Twitter) also received poor rankings in several categories, though it did better than Meta in a few of them.

Discord, Pinterest, and Quora perform well

The report analyzed 15 of the most widely used social media platforms globally, measuring them against 14 privacy criteria organized into six different categories: AI data use, user control, ease of access, regulatory transgressions, transparency, and data collection. The research methodology focused on how an average user could understand and control privacy policies.

Discord, Pinterest, and Quora have done best in the 2025 ranking. Discord is placed first, thanks to its stance against using user data to train AI models. Pinterest ranks second, thanks to its strong user controls and fewer regulatory penalties. Quora came third thanks to its limited collection of user data.

Why were Meta platforms penalized?

The Meta platforms, however, were penalized heavily in several categories. Facebook was penalized for frequent regulatory fines, such as those imposed under Europe’s GDPR, as well as penalties in the US and other regions. Instagram and WhatsApp received heavy penalties due to policies allowing the collection of sensitive personal data, such as sexual orientation and health information.

Penalties against X

X was penalized for its vast data collection and past privacy fines, but it still ranked above Meta and TikTok in some categories. X was among the easiest platforms to delete an account from, and it also provided information to government organizations at a lower rate than other platforms. However, X allows user data to be used for training AI models, which dragged down its overall privacy score.

“One of the core principles motivating Incogni’s research here is the idea that consent to have personal information gathered and processed has to be properly informed to be valid and meaningful. It’s research like this that arms users with not only the facts but also the tools to inform their choices,” Incogni said in its blog. 

Social Event App Partiful Did Not Strip GPS Locations from Photos

 

Social event planning app Partiful, also known as "Facebook events for hot people," has replaced Facebook as the go-to place for sending party invites. However, like Facebook, Partiful also collects user data. 

Hosts can create online invitations in a retro style, and guests can easily RSVP to events. The platform strives to be user-friendly and trendy, which has made it the No. 9 app on the Apple App Store, and Google named it "the best app" of 2024. 

About Partiful

Partiful has recently developed into a Facebook-like social graph; it maps your friends and friends of friends, what you do, where you go, and your contact numbers. When the app became popular, people began questioning its origins, alleging that it was built by former employees of a data-mining company. TechCrunch, however, found that the app was not stripping location data from user-uploaded images, which include public profile pictures. 

Metadata in photos

The photos on your phone carry metadata, which includes details such as file size and date of capture. With both photos and videos, metadata can also record the type of camera used, its settings, and latitude/longitude coordinates. TechCrunch discovered that anyone could use a web browser’s developer tools to access raw user profile photos from Partiful’s back-end database on Google Firebase. 
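As a rough illustration of how much an ordinary photo can reveal, the Python sketch below reads the GPS fields from an image’s EXIF data using the Pillow library. It is a minimal example only, not Partiful’s or TechCrunch’s actual tooling, and the file name is a placeholder.

from PIL import Image
from PIL.ExifTags import GPSTAGS

def gps_from_photo(path):
    # Read the EXIF block and pull out the GPS sub-directory (tag 0x8825).
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(0x8825)
    # Map numeric tag IDs to readable names such as GPSLatitude and GPSLongitude.
    return {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

# "profile_photo.jpg" is a hypothetical file name, not a real Partiful asset.
print(gps_from_photo("profile_photo.jpg"))

If the result comes back non-empty, the photo still carries the coordinates of where it was taken.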

About the bug

The flaw could have been serious, as it could have exposed the location where a Partiful user’s profile photo was taken. 

According to TechCrunch, “Some Partiful user profile photos contained highly granular location data that could be used to identify the person’s home or work, particularly in rural areas where individual homes are easier to distinguish on a map.”

It is standard practice for companies hosting user photos and videos to automatically strip metadata on upload to prevent privacy issues; Partiful had not done so.
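For context, here is a minimal sketch of the kind of metadata stripping described above, again using Pillow: re-saving only the pixel data into a fresh image leaves the EXIF block, including any GPS coordinates, behind. The file names are illustrative, and this is an assumption about how such stripping can be done, not Partiful’s actual pipeline.

from PIL import Image

def strip_metadata(src, dst):
    # Copy only the pixels into a brand-new image so no EXIF/GPS data is carried over.
    with Image.open(src) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst)

# Hypothetical file names for illustration only.
strip_metadata("uploaded_photo.jpg", "uploaded_photo_clean.jpg")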

Hackers Are Spreading Malware Through SVG Images on Facebook


The growing trend of age checks on websites has pushed many people to look for alternative platforms that seem less restricted. But this shift has created an opportunity for cybercriminals, who are now hiding harmful software inside image files that appear harmless.


Why SVG Images Are Risky

Most people are familiar with standard image formats like JPG or PNG. These are fixed pictures with no hidden functionality. SVG, or Scalable Vector Graphics, is different. It is built with a markup language called XML, which can also embed HTML and JavaScript, the same tools used to build websites. This means that, unlike a normal picture, an SVG file can carry instructions that a computer will execute. Hackers are taking advantage of this feature to hide malicious code inside SVG files.
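To make the difference concrete, the short Python sketch below parses an SVG file as XML and lists any embedded script elements or on-event attributes, the kind of active content a JPG or PNG simply cannot contain. It is an illustration only, not Malwarebytes’ detection tooling, and the file name is hypothetical.

import xml.etree.ElementTree as ET

def find_active_content(svg_path):
    # Parse the SVG as XML and collect anything that could execute in a browser.
    findings = []
    for elem in ET.parse(svg_path).iter():
        # Tags carry an XML namespace, e.g. "{http://www.w3.org/2000/svg}script".
        if elem.tag.endswith("script") and elem.text:
            findings.append(elem.text.strip())
        for name, value in elem.attrib.items():
            if name.lower().startswith("on"):  # onload, onclick, ...
                findings.append(name + "=" + value)
    return findings

# "downloaded_image.svg" is a placeholder file name.
print(find_active_content("downloaded_image.svg"))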


How the Scam Works

Security researchers at Malwarebytes recently uncovered a campaign that uses Facebook to spread this threat. Fake adult-themed blog posts are shared on the platform, often using AI-generated celebrity images to lure clicks. Once users interact with these posts, they may be asked to download an SVG image.

At first glance, the file looks like a regular picture. But hidden inside is a script written in JavaScript. The code is heavily disguised so that it looks meaningless, but once opened, it runs secretly in the background. This script connects to other websites and downloads more harmful software.


What the Malware Does

The main malware linked to this scam is called Trojan.JS.Likejack. Once installed, it hijacks the victim’s Facebook account, if the person is already logged in, and automatically “likes” specific posts or pages. These fake likes increase the visibility of the scammers’ content within Facebook’s system, making it appear more popular than it really is. Researchers found that many of these fake pages are built using WordPress and are linked together to boost each other’s reach.


Why It Matters

For the victim, the attack may go unnoticed. There may be no clear signs of infection besides strange activity on their Facebook profile. But the larger impact is that these scams help cybercriminals spread adult material and drive traffic to shady websites without paying for advertising.


A Recurring Tactic

This is not the first time SVG files have been misused. In the past, they have been weaponized in phishing schemes and other online attacks. What makes this campaign stand out is the combination of hidden code, clever disguise, and the use of Facebook’s platform to amplify visibility.

Users should be cautious about clicking on unusual links, especially those promising sensational content. Treat image downloads, particularly SVG files, with the same suspicion as software downloads. If something seems out of place, it is safer not to interact at all.

Beware of Pig Butchering Scams That Steal Your Money


Pig butchering, a term borrowed from the meat trade, has sadly also become the name of a devastating form of cybercrime that can leave victims with total financial losses. 

Pig Butchering is a “form of investment fraud in the crypto space where scammers build relationships with targets through social engineering and then lure them to invest crypto in fake opportunities or platforms created by the scammer,” according to The Department of Financial Protection & Innovation. 

Pig butchering has squeezed billions of dollars from victims globally. The Cambodia-based Huione Group stole over $4 billion from August 2021 to January 2025, the New York Post reported.

How to stay safe from pig butchering?

Individuals should watch out for certain warning signs to avoid getting caught in these schemes. Scammers often target seniors and people who are not well informed about cybercrime. The National Council on Aging cautions that such scams begin with messages from scammers pretending to be someone else. Never respond or send money to strangers who contact you online, even if the story sounds compelling. Scammers rely on earning your trust, and a sob story is one easy way for them to do it. 

Another red flag is receiving SMS or social media messages that push you onto other platforms like WeChat or Telegram, which have fewer safeguards. Scammers also convince users to invest their money, promising to return it with big profits. In one incident, the scammer even asked the victim to “go to a loan shark” to get the money.

Stopping scammers

Last year, Meta blocked over 2 million accounts that were promoting crypto investment scams such as pig butchering. Businesses have increased efforts to combat the issue, but the problem still very much exists. A major step is raising awareness through public safety campaigns that share tips to keep individuals from falling prey to such scams. 

Organizations have now started showing warnings in Instagram DMs and Facebook Messenger about “potentially suspicious interactions or cold outreach from people you don’t know”, which is a good initiative. Banks have also started tipping off customers about the dangers of scams when they send money online. 

Want to Leave Facebook? Do this.


Confused about leaving Facebook?

Many people are changing their social media habits and opting out of various services. Facebook has seen a large exodus of users deserting the platform after Meta announced in March that it was ending independent fact-checking on its platforms. Fact-checking has been replaced with community notes, which let users add context to potentially false or misleading information. 

Users with years of photos and posts on Facebook are often unsure how to collect their data before removing their accounts. If you are in the same situation, this post will help you delete Facebook permanently while taking all your information with you on the way out. 

How to remove Facebook?

For users who no longer want to be on Facebook, deleting the account is the only way to remove yourself from the platform completely. If you are not sure, deactivating your account lets you take a break from Facebook without deleting it. 

Make sure to remove third-party Facebook logins before deleting your account. 

How to disconnect third-party apps?

Third-party apps like DoorDash and Spotify allow you to log in using your Facebook account. This lets you sign in without remembering another password, but if you are planning to delete Facebook, you have to update your login settings. Once your account is deleted, there will no longer be a Facebook account to log in through. 

Fortunately, there is a simple way to find which of your sites and applications are connected to Facebook and disconnect them before removing your account. Once you disconnect other websites and applications from Facebook, you will need to adjust how you log in to them. 

Users should visit each application and website to set a new password or passkey, or to log in via a single sign-on option, such as Google. 

How is deleting different from deactivating a Facebook account?

If you want to stay away from Facebook, you have two choices: delete your account permanently, or deactivate it to disable it temporarily.