
Users Warned to Check This Setting as Meta Faces Privacy Concerns

A new AI experiment from Meta Platforms Inc. is once again blurring the line between innovation and privacy. The tech giant, known for reshaping how billions of people interact online, has reportedly begun testing an artificial intelligence-powered feature that scans users' camera rolls to identify the photos and videos most likely to be shared.

Using generative AI, the new Facebook feature aims to simplify content creation and boost engagement by surfacing relevant images, applying creative edits, and assembling themed visual recaps - effectively turning users' own galleries into curated storyboards.

Digital Trends recently reported that Meta has rolled out the feature, currently on an opt-in basis, to users in the United States and Canada - its latest attempt to keep pace with rivals such as TikTok in a tightening battle for attention. According to the report, the system analyses unshared media directly on users' devices, identifying what the company calls "hidden gems" that would otherwise remain undiscovered.

While the feature is intended to encourage more frequent and visually captivating posts through convenience, it also reignites long-standing debates about data access, user consent, and the increasingly blurred line between personal privacy and algorithmic assistance in the era of social media. In a move that has sparked both curiosity and unease, Meta quietly rolled out new Facebook settings that allow the platform to analyse images stored in users' camera rolls - even those that have never been uploaded or shared online.

Billed as "camera roll sharing suggestions," the feature uses AI to generate personalised recommendations such as travel highlights, thematic albums, and collages from the private photos in a user's camera roll. According to Meta, it operates only with the user's consent and is turned off by default, giving users full control over whether to participate. Emerging reports, however, tell a very different story.

Many users report that the feature is already active in their Facebook app despite having no memory of enabling it, contradicting the claim that it is strictly opt-in. This has left a growing number of people sceptical about data permissions and privacy management, heightening ongoing concerns. Such silent activations point to a broader issue: users can easily overlook background settings that grant extensive access to their personal information.

The privacy advocacy community is therefore urging users to re-examine their Facebook privacy settings and ensure that the app's access to local photo libraries matches their expectations and comfort levels. By tapping Allow on a pop-up labelled "cloud processing," Facebook users effectively agree to Meta's AI Terms of Service, permitting the platform to analyse their stored media - and even facial characteristics - with artificial intelligence.

Once the feature is activated, the user's camera roll is continuously uploaded to Meta's cloud infrastructure, allowing Facebook to surface so-called "hidden gems" and generate AI-driven collages, themed albums, or edits tailored to individual moments. The settings first appeared for select users during testing phases last summer, but they are now gradually rolling out across the platform, buried deep in the app's configuration menus under options such as "personalised creative ideas" and "AI-powered suggestions".

According to Meta, the tool is meant to improve the user experience by generating private, shareable content suggestions from the user's own device. The company insists the suggestions are visible only to the account holder and are not used for targeted advertising; they are based on parameters such as time, location, and the people or objects present. Even so, the quiet rollout has unsettled users who say they never knowingly agreed to the service.

Reports of people finding the feature already switched on, with no memory of granting consent, have renewed concerns about transparency and informed user choice. Privacy advocates say that although the tool may look like a harmless way to simplify creative posting, it reveals a larger and more complex issue: the gradual normalisation of deep access to personal data under the guise of convenience.

As Meta continues to expand its generative AI initiatives, its ability to mine unposted personal images for algorithmic insights lets the company pursue its technological ambitions, often without the clear awareness of its users. As the race to dominate the AI ecosystem intensifies, such features are a reminder of the delicate balance between innovation and individual privacy in the digital age.

As privacy concerns over Meta's data practices intensify, many users are turning to the "Off-Facebook Activity" controls to limit how much personal information the company can collect and use beyond its own apps. Available on both Facebook and Instagram, the feature lets users view, manage, and delete the data that third-party services and websites share with Meta.

In Facebook's Settings & Privacy menu, users can select Off-Facebook Activity under "Your Facebook Information" to see which platforms have transmitted data about them to Meta, clear that history, and disable future tracking. Similar tools are available on Instagram under its Ads and Data & Privacy sections.

Disabling these options prevents Meta from storing and analysing activity that occurs outside its ecosystem - ranging from e-commerce interactions to app usage patterns - reducing ad personalisation and limiting the flow of data between Meta and external platforms.

The company maintains that this information helps improve user experiences and deliver relevant content, but many critics argue the practice infringes on privacy rights. The controversy has also spilled onto social media, where users continue to vent their frustration with Meta's pervasive tracking. In one viral TikTok video with more than half a million views, the creator described disabling the feature as a "small act of defiance," encouraging others to do the same to reclaim control of their digital footprint.

Experts caution that certain permissions required for the platform to function remain active even after tracking is switched off, so complete data isolation remains elusive. Still, privacy advocates maintain that clearing Off-Facebook Activity and blocking future tracking are among the most effective steps users can take to significantly reduce Meta's access to their personal information.

Amid growing concern over Meta's increasingly expansive use of personal data, companies like Proton are positioning themselves as secure alternatives that emphasise transparency and user control. The recent controversy over Meta's smart glasses - criticised for their potential to be turned into facial recognition and surveillance tools - has made calls for stronger safeguards against the abuse of private media all the more urgent.

Unlike many of its peers, Proton advocates a fundamentally different approach: avoiding data collection altogether rather than attempting to manage it after the fact. With Proton Drive, an encrypted cloud-storage service, users can store their camera rolls and private folders without worrying about third parties accessing or harvesting their data. Every file, including its metadata, is encrypted end to end regardless of size, so that no one - not even Proton - can access or analyse users' content.
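
Proton's client internals are not published in this article, but the general technique behind such services is straightforward: encrypt each file on the device with a key the server never sees, and upload only ciphertext. Below is a minimal sketch using the standard Web Crypto API; the function names are hypothetical, and real systems add key wrapping, secure key storage, and authenticated metadata on top:

```typescript
// Minimal client-side (end-to-end) encryption sketch using the Web Crypto API.
// Illustrative only: production E2E systems also wrap keys, authenticate
// metadata, and manage key storage, none of which is shown here.

async function encryptForUpload(plaintext: ArrayBuffer): Promise<{
  ciphertext: ArrayBuffer;
  iv: Uint8Array;
  key: CryptoKey; // stays on the device; the server only ever sees ciphertext
}> {
  const key = await crypto.subtle.generateKey(
    { name: "AES-GCM", length: 256 },
    true,
    ["encrypt", "decrypt"],
  );
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per file
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    plaintext,
  );
  return { ciphertext, iv, key };
}

async function decryptAfterDownload(
  ciphertext: ArrayBuffer,
  iv: Uint8Array,
  key: CryptoKey,
): Promise<ArrayBuffer> {
  return crypto.subtle.decrypt({ name: "AES-GCM", iv }, key, ciphertext);
}
```

Because only the ciphertext and a random nonce ever leave the device, the storage provider can host the data without being able to read it; the decryption key never travels to the server.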

This level of security means that encrypting photographs prevents the extraction of sensitive data, such as geolocation information and patterns that can reveal personal routines and whereabouts. Proton Drive apps for both iOS and Android let users store and retrieve their files anywhere while keeping full control over their privacy. And in contrast to most social media and tech platforms, which monetise user data for advertising or model training, Proton's business model is entirely subscription-based, removing the temptation to exploit users' personal data.
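
The geolocation risk is concrete: most phone cameras embed GPS coordinates in a photo's EXIF metadata, and any party that holds the plaintext file can read them. A quick sketch, using the open-source exifr library as an assumed parser (other EXIF readers work similarly):

```typescript
import exifr from "exifr"; // npm install exifr

// Read the GPS coordinates embedded in an unencrypted photo.
// Any service that receives the plaintext file can do exactly this.
async function whereWasThisTaken(photoPath: string): Promise<void> {
  const gps = await exifr.gps(photoPath); // { latitude, longitude } or undefined
  if (gps) {
    console.log(`Photo taken at ${gps.latitude}, ${gps.longitude}`);
  } else {
    console.log("No GPS metadata found.");
  }
}
```

Once a file is encrypted client-side, as sketched above, this metadata becomes unreadable to the host: it is simply part of the ciphertext.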

The company currently offers a five-gigabyte storage allowance, enough for roughly 1,000 high-resolution images, encouraging users to safeguard their digital memories on a platform that prioritises confidentiality over commercialisation. Privacy advocates see this model as a viable option in an era when technology increasingly clashes with the right to personal security.

As the digital age advances, the line between personalisation and intrusion grows ever more blurred, making it essential for users to take an active role in managing their own data. The ongoing issues surrounding Meta's AI photo analysis, off-platform tracking, and quiet data collection are a stark reminder that convenience often comes at the cost of privacy.

Experts advise that reviewing app permissions, regularly clearing connected data histories, and disabling non-essential tracking features can all significantly reduce unnecessary data exposure. Storing sensitive files in an encrypted cloud service such as Proton Drive adds a safer environment without sacrificing access to them.

The power to safeguard online privacy lies in awareness and action. By staying informed about new app settings, reading consent disclosures carefully, and being selective about the permissions they grant, individuals can regain control of their digital lives.

As artificial intelligence continues to redefine the limits of technology, securing personal information has become more than protection against identity theft; it is a form of digital self-defence that lets users embrace innovation while preserving their basic right to privacy.

Google Backtracks on Cookie Phaseout: What It Means for Users and Advertisers

In a surprising announcement, Google confirmed that it will not be eliminating tracking cookies in Chrome, a decision that affects the browsing experience of some three billion users. The move came as a shock, as the company struggles to balance regulatory demands against its own business interests.

Google’s New Approach

On July 22, Google proposed a new model that allows users to choose between tracking cookies, Google’s Topics API, and a semi-private browsing mode. This consent-driven approach aims to provide users with more control over their online privacy. However, the specifics of this model are still under discussion with regulators. The U.K.’s Competition and Markets Authority (CMA) expressed caution, stating that the implications for consumers and market outcomes need thorough consideration.
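
For context, the Topics API that Google positions as a cookie alternative already ships in Chrome: a page can ask the browser for a few coarse interest topics computed locally from recent browsing history, instead of reading a cross-site identifier. A minimal sketch of how a site might call it (with feature detection, since only Chromium-based browsers currently expose it):

```typescript
// Query Chrome's Topics API: returns coarse interest topics derived
// locally from the user's recent browsing, rather than a cross-site ID.
async function fetchAdTopics(): Promise<void> {
  // Feature-detect: browsingTopics exists only in supporting browsers.
  if (!("browsingTopics" in document)) {
    console.log("Topics API not supported in this browser.");
    return;
  }
  try {
    const topics = await (document as any).browsingTopics();
    // Each entry carries a numeric topic ID from Chrome's public taxonomy,
    // plus taxonomy and model version fields.
    for (const t of topics) {
      console.log(`topic ${t.topic} (taxonomy ${t.taxonomyVersion})`);
    }
  } catch (err) {
    // Throws if blocked by permissions policy or user settings.
    console.log("Topics unavailable:", err);
  }
}
```

Whether users will understand, and consciously choose among, cookies, Topics, and a semi-private mode is exactly the question regulators and privacy advocates are now weighing.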

Privacy Concerns and Industry Reaction

Privacy advocates are concerned that most users will not change their default settings, leaving them vulnerable to tracking. The Electronic Frontier Foundation (EFF) criticised Google’s Privacy Sandbox initiative, which was intended to replace tracking cookies but has faced numerous setbacks. The EFF argues that Google’s latest move prioritises profits over user privacy, contrasting sharply with Apple’s approach. Apple’s Safari browser blocks third-party cookies by default, and its recent ad campaign highlighted the privacy vulnerabilities of Chrome users.

Regulatory and Industry Responses

The CMA and the U.K.’s Information Commissioner expressed disappointment with Google’s decision, emphasising that blocking third-party cookies would have been a positive step for consumer privacy. Meanwhile, the Network Advertising Initiative (NAI) welcomed Google’s decision, suggesting that maintaining third-party cookie support is essential for competition in digital advertising.

The digital advertising industry may face unintended consequences from Google’s shift to a consent-driven privacy model. This approach mirrors Apple’s App Tracking Transparency, which requires user consent for tracking across apps. Although Google’s new model aims to empower users, it could lead to an imbalance in data access, benefiting large platforms like Google and Apple.

Apple vs. Google: A Continuing Saga

Apple’s influence is evident throughout this development. The timing of Apple’s privacy campaign, launched just days before Google’s announcement, underscores the competitive dynamics between the two tech giants. Apple’s App Tracking Transparency has already disrupted Meta’s business model, and Google’s similar approach may further reshape the infrastructure of digital advertising.

Google’s Privacy Sandbox has faced criticism for potentially enabling digital fingerprinting, a concern Apple has raised. Despite Google’s defence of its Topics API, doubts about the effectiveness of its privacy measures persist. As the debate continues, the central issue remains Google’s dual role as both a guardian of user privacy and a major beneficiary of data monetisation.

Google’s decision to retain tracking cookies while exploring a consent-driven model highlights the complex interplay between user privacy, regulatory pressures, and industry interests. The outcome of ongoing discussions with regulators will be crucial in determining the future of web privacy and digital advertising.



Italy Investigates Google for Unfair Practices in Obtaining User Consent for Ad Profiling

Italy’s competition and consumer watchdog has launched an investigation into Google's methods for obtaining user consent to link activity across its various services for ad profiling, suspecting the tech giant of “unfair commercial practices.”

The focus is on how Google secures consent from users in the European Union to link their activities across platforms such as Google Search, YouTube, Chrome, and Maps. This linking allows Google to profile users for targeted advertising, which is a significant revenue source for the company.

In response to the Italian AGCM’s probe, a Google spokesperson told TechCrunch, “We will analyze the details of this case and will work cooperatively with the Authority.”

Since March, Google has been subject to the EU’s Digital Markets Act (DMA), which applies across the European Union, including Italy. The DMA requires designated gatekeepers such as Google, Meta, Apple, Amazon, ByteDance, and Microsoft to obtain explicit consent before processing users' personal data for advertising or combining data across services. The AGCM’s investigation appears to centre on this requirement.

The AGCM stated that Google's request for consent might be a “misleading and aggressive commercial practice.” It noted that the information provided by Google appears inadequate, incomplete, and could influence users' decisions regarding consent.

Interestingly, this investigation by the Italian authority diverges from the usual enforcement led by the European Commission (EC) against gatekeepers. The EC's current DMA investigation of Google does not address consent for linking user data; instead, it focuses on other issues, such as self-preferencing in Google Search and anti-steering in Google Play.

The Italian watchdog appears to be addressing a gap in the EC’s enforcement efforts. A Commission spokesperson acknowledged the AGCM investigation, noting it complements the DMA enforcement work and that gatekeepers must also comply with other relevant EU and national rules, including consumer and data protection laws.

In its press release, the AGCM expressed concerns that Google’s consent request lacks the necessary information for users to make an informed choice. It indicated that Google might not be transparent about the impact of linking user accounts or the extent to which users can limit consent to specific services.

The DMA mandates that consent for linking accounts for advertising purposes must meet the standards of the General Data Protection Regulation (GDPR), requiring consent to be “freely given, specific, informed, and unambiguous.” The GDPR also dictates that consent requests must be clear, accessible, and distinct from other matters.

Typically, data protection authorities enforce GDPR standards, but the DMA’s reference to GDPR consent standards has led to the AGCM scrutinizing Google’s consent flow. The AGCM is not only concerned with the information provided but also with how Google asks for consent, suspecting that the methods used may undermine user choice.

The watchdog believes Google’s consent flow might manipulate users into agreeing to link their accounts, potentially using “dark patterns”—designs that deceive or coerce users. The EU’s increasing regulation of digital platforms aims to address such manipulative tactics.

The DMA's reference to GDPR standards, together with the Digital Services Act (DSA) ban on deceptive design practices, is tightening regulatory scrutiny of these choice flows. Recently, the EU's first preliminary findings under the DSA concluded that the blue-check system on X (formerly Twitter) may constitute an illegal dark pattern.