
UK Mental Health Charities Shared Private Facebook Data for Targeted Ads

 

Some of the largest mental health support organisations in Britain gave Facebook information about private web browsing for its targeted advertising system. 

The data was sent via a monitoring tool embedded in the charities’ websites and included details of the URLs a user visited and the buttons they clicked across content linked to depression, self-harm and eating disorders. 

Additionally, it included information about when visitors viewed pages to access online chat tools and when they clicked links that said "I need help" to request assistance. Some of the pages that triggered data sharing with Facebook were specifically aimed at young people, such as a page for 11 to 18-year-olds offering guidance on coping with suicidal thoughts. 

Details of conversations between charities and users or messages sent via chat tools were not included in the data sent to Facebook during the Observer's analysis. All of the charities emphasised that they took service user privacy very seriously and that such messages were confidential.

However, the data frequently covered browsing that most users would consider private, including button clicks and page views on the websites of the eating disorder charity Beat and the mental health charities Mind, Shout, and Rethink Mental Illness. 

The data was matched to IP addresses, which can typically be used to identify a specific person or household, and, in many cases, to details of the user's Facebook account ID. The tracking tool, known as Meta Pixel, has since been removed from the majority of the charities' websites. 
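To illustrate the mechanism, a pixel-style tracker typically works by having the visitor's browser request a URL on the platform's servers with the page address and event name encoded as query parameters; the visitor's IP address and any platform cookies arrive implicitly with that request, which is how events can later be tied to an account. The sketch below is a loose Python illustration of this pattern; the endpoint and parameter names are invented for the example and are not Meta's actual API:

```python
from urllib.parse import urlencode

def build_pixel_request(base_url, pixel_id, page_url, event_name):
    """Build the kind of tracking URL a pixel-style script requests.

    The receiving server also sees the visitor's IP address and any
    platform cookies sent with the request. Parameter names here are
    illustrative only, not Meta's real ones.
    """
    params = {
        "id": pixel_id,     # site owner's tracking ID
        "ev": event_name,   # e.g. a page view or a button click
        "dl": page_url,     # full URL of the page being viewed
    }
    return f"{base_url}?{urlencode(params)}"

# hypothetical example: the request fired when a visitor opens a help page
req = build_pixel_request(
    "https://tracker.example/collect",
    "123456",
    "https://charity.example/help/i-need-help",
    "PageView",
)
```

The point of the sketch is that the sensitive detail lives in the page URL itself: any page whose address reveals why the visitor is there ("i-need-help") leaks that context to the tracker.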

The information was discovered following an Observer investigation last week that exposed 20 NHS England trusts sharing data with Facebook for targeted advertising. This data included browsing activity across hundreds of websites related to particular medical conditions, appointments, medications, and referral requests.

Facebook says it makes explicit that businesses should not use Meta Pixel to collect or share sensitive data, such as information that could reveal details of a person’s health or data belonging to children. It also says it has filters to weed out sensitive data it receives by mistake. However, prior research has indicated that these filters do not always work, and Facebook itself acknowledges that the system "doesn't catch everything".

The social media giant has been accused of doing too little to oversee what information it is being supplied, and faced questions over why it would allow some entities – such as hospitals or mental health organisations – to send it data in the first place.

There Could Be a Facebook-Cambridge Analytica Scandal Every Day


Today, the way personal data is processed by the ad delivery algorithms of advertising platforms run by tech giants like Meta and Google poses a far more severe threat to the integrity of electoral processes than microtargeting does. The European Parliament's position on the Regulation on Political Advertising, adopted on February 2nd, is a step forward in addressing present and potential threats to personal data, democracy, and fundamental rights. 

Karolina IwaƄska, Digital Civic Space Advisor at the European Center for Not-for-Profit Law (ECNL), and Fernando Hortal Foronda, Digital Policy Officer at the European Partnership for Democracy (EPD), have published a report on the Facebook-Cambridge Analytica scandal.

Facebook-Cambridge Analytica Scandal 

In the 2010s, the British consulting company Cambridge Analytica illicitly obtained the personal information of millions of Facebook users, mostly for the purpose of political advertising. 

This data was gathered via an app named “This Is Your Digital Life,” developed in 2013 by researcher Aleksandr Kogan and his company Global Science Research. The app asked users a series of questions to build psychological profiles of them and, through Facebook's Open Graph platform, also harvested the personal information of those users' Facebook friends. 

The app collected data from nearly 87 million Facebook profiles. Cambridge Analytica used this data to support Ted Cruz's and Donald Trump's presidential campaigns in 2016. The company was later widely accused of meddling in the Brexit referendum, although the official investigation concluded that Cambridge Analytica was not involved "beyond some initial enquiries" and that "no significant breaches" occurred. 

The aftermath of the Scandal 

Microtargeting is still the most frequently used term in the discussion of political ads in the wake of the Facebook-Cambridge Analytica Scandal, and it is seen as the biggest threat that needs to be addressed. 

This is unsurprising, given the eye-catching nature of the scandal, complete with a charismatic whistleblower, shady players and a link to the Brexit referendum. Yet the threat that Europe faces today stems less from political advertisements being targeted by secretive PR firms, political parties, or campaign organizations. 

The underrated protagonists of the story are the automated delivery systems operated by Facebook and Google: rather than anyone manually selecting the targeting criteria, these systems determine precisely who engages with a given political ad, and why. 

Online Political Advertising Markets in Europe 

Ad Targeting 

The online political advertising market in Europe is dominated by two companies: Meta and, to a lesser extent, Google. While promising advertisers that they will not hand over the personal data of potential voters, these companies sell something of much greater value: delivery of ads directly to the individuals most likely to engage with the advertised message. 

The final decision on who sees an ad rests with the platform, not the political party, although the party can help define the potential audience in terms of user interests and demographics. Parties can also upload information gathered elsewhere so the platform can match it against registered users and identify "lookalikes", people who are similar to them. 
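The "lookalike" step described above can be thought of as a similarity search: given an uploaded seed list, the platform finds registered users who resemble the seed users. The toy sketch below uses a crude measure, counting shared interest tags, purely for illustration; real platforms rely on far richer behavioural models whose details are not public:

```python
def find_lookalikes(seed_users, all_users, top_n):
    """Return registered users most similar to the uploaded 'seed' list.

    `seed_users` and `all_users` map user id -> set of interest tags.
    Similarity here is just the count of tags shared with the seed
    pool; the data and metric are invented for illustration.
    """
    seed_interests = set()
    for interests in seed_users.values():
        seed_interests |= interests

    def overlap(user_id):
        return len(all_users[user_id] & seed_interests)

    # rank everyone outside the seed list by overlap with its interests
    candidates = [u for u in all_users if u not in seed_users]
    return sorted(candidates, key=overlap, reverse=True)[:top_n]

seed = {"a": {"politics", "news"}}
all_users = {
    "a": {"politics", "news"},
    "b": {"politics", "sports"},
    "c": {"cooking"},
}
lookalikes = find_lookalikes(seed, all_users, top_n=1)
```

Even this crude version shows why the practice raises privacy concerns: users "b" and "c" never consented to being profiled against the uploaded list, yet they are ranked by it.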

Ad Delivery Algorithms 

Even with precise targeting, the ad budgets of European political campaigns are usually too small for a message to reach everyone in the selected audience. This is where the ad delivery algorithm kicks in. 

In this phase, Facebook and Google choose the users for whom the platform deems the advertisement most "relevant". The decision is made using predictions generated by automated processing of the enormous amounts of personal data that these firms gather about individual users, and about people similar to them, through pervasive tracking on their platforms and on third-party websites. 
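The delivery step described above can be sketched as a scoring problem: the platform predicts how likely each user in the targeted audience is to engage, then serves the ad only to the highest-scoring users the budget can cover. The following toy sketch makes that structure concrete; the feature names and weights are invented for the example and do not reflect any platform's real model:

```python
def deliver_ad(audience, weights, impressions_budget):
    """Rank targeted users by a predicted 'relevance' score and serve
    the ad only to as many top-ranked users as the budget allows.

    `audience` maps user id -> dict of behavioural features; features
    and weights are invented purely for illustration.
    """
    def relevance(features):
        # simple linear score over tracked behavioural signals
        return sum(weights.get(k, 0.0) * v for k, v in features.items())

    ranked = sorted(audience, key=lambda u: relevance(audience[u]),
                    reverse=True)
    return ranked[:impressions_budget]

audience = {
    "u1": {"clicked_party_page": 1, "watched_rally_video": 1},
    "u2": {"clicked_party_page": 0, "watched_rally_video": 1},
    "u3": {"clicked_party_page": 0, "watched_rally_video": 0},
}
weights = {"clicked_party_page": 2.0, "watched_rally_video": 1.0}

# with a budget of two impressions, only the most "relevant" users see the ad
shown = deliver_ad(audience, weights, impressions_budget=2)
```

Note how the budget constraint does the filtering: user "u3", who has shown no prior interest, is simply never served the ad, which is exactly the filter-bubble dynamic discussed below.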

Compared with ad targeting, the automated delivery of political ads is deemed more impactful and more dangerous because of the massive amount of personal data involved. Ad delivery operations are inherently opaque. Moreover, the platforms' machine learning algorithms look for patterns in behavioural data, which occasionally leads to the processing of sensitive data, such as users' health problems. 

What are the Impacts? 

In such cases of political advertising, even when parties try to target and reach diverse audiences, platforms are most likely to show ads to users who already agree with the message and support the given party. This creates a filter bubble for users and, in turn, fragments the public sphere. 

Political parties may also be affected: because platforms push a party's messages mainly to its existing supporters, reaching unconvinced or less politically active users comes at a higher price for the party. 

What are the Steps Taken? 

Rather than limiting the role of algorithms in political advertising, the European Commission's proposed regulation on political advertisements focused on somewhat restricting the processing of sensitive data and improving transparency around the processing of all personal data. 

The text approved by the European Parliament forbids the use of automated ad delivery methods, as well as of inferred and observed personal data, in political advertising. This is a necessary step to safeguard the EU's democratic processes against undue influence, whether from malicious actors or from algorithms tuned to serve big tech's business objectives.