
Twitter's Brussels Staff Sacked by Musk 

After a conflict over how the social network's content should be regulated in the European Union, Elon Musk shut down Twitter's entire Brussels office.

Twitter's relationship with the European Union, which has some of the most robust regulations governing the digital world and is frequently at the forefront of global regulation in the sector, may be strained by the closure of the company's Brussels hub.

Under one rule, platforms like Twitter must remove any content that is prohibited in any of the EU bloc's member states. For instance, tweets aimed at influencing elections or content advocating hate speech would have to be removed in jurisdictions where such communication is prohibited.

Another obligation is that social media sites like Twitter must demonstrate to the European Commission, the executive arm of the EU, that they are making a sufficient effort to stop the spread of content that is not illegal but may be damaging. Disinformation falls under this category. This summer, businesses will need to demonstrate how they are handling such posts.

Musk will need to abide by the GDPR, a set of groundbreaking EU data protection laws that mandate that Twitter have a data protection officer in the EU.

The current proposal forbids the use of algorithms that have been demonstrated to be biased against individuals, which could affect Twitter's face-cropping tools, shown in the past to favor young, slim women.

Twitter might also be obligated to monitor private conversations for grooming or images of child sexual abuse under the EU's Child Sexual Abuse Materials proposal, which is still under discussion in the EU.

In order to comply with the DSA, Twitter will need to put in a lot more work, such as building a system that allows users to flag illegal content with ease and hiring enough moderators to review content in every EU member state.

Twitter won't have to publish a risk analysis until next summer, but it will have to disclose its user count in February, which initiates the commission oversight process.

Two lawsuits that might hold social media corporations accountable for algorithms that promote dangerous or unlawful content are scheduled for hearings before the US Supreme Court. The outcome could fundamentally alter how US businesses moderate content.

CNIL Fines Clearview AI €20 Million for Illegal Use of Facial Recognition Technology

 

France’s data protection authority (CNIL) has imposed a €20 million fine on Clearview AI, the controversial facial recognition firm, for illegally gathering and using data belonging to French residents without their knowledge.

CNIL imposed the maximum financial penalty the company could receive as per GDPR Article 83 and also ordered Clearview AI to stop all data collection activities and delete the data gathered on French citizens or face an additional €100,000 fine per day. 

“Clearview AI had two months to comply with the injunctions formulated in the formal notice and to justify them to the CNIL. However, it did not provide any response to this formal notice,” the CNIL stated. 

“The chair of the CNIL, therefore, decided to refer the matter to the restricted committee, which is in charge of issuing sanctions. On the basis of the information brought to its attention, the restricted committee decided to impose a maximum financial penalty of 20 million euros, according to article 83 of the GDPR.” 

Clearview AI scrapes publicly available images and videos of people from websites and social media platforms and associates them with identities. Using this technique, the company has collected over 20 billion images, which feed a biometric database of facial scans and identities.

Subsequently, the US-based firm sells access to this database to individuals, law enforcement agencies, and multiple organizations around the globe.

In Europe, the General Data Protection Regulation (GDPR) dictates that any data collection must be clearly communicated to the people concerned and, in most cases, requires their consent. Even though Clearview AI does not use leaked data or spy on people, individuals are unaware that their images are being used for identification by Clearview AI's customers.

CNIL's latest decision comes after a two-year investigation initiated in May 2020, when the French authority received complaints from individuals about Clearview's facial recognition software. Another warning about biometric profiling came from the organization Privacy International in May 2021.

According to the CNIL, Clearview AI was guilty of multiple violations of the General Data Protection Regulation (GDPR). The breaches include unlawful processing of personal data (GDPR Article 6), failure to respect individuals' rights (Articles 12, 15, and 17), and lack of cooperation with the data protection authority (Article 31).

The CNIL judgment is the third decision against Clearview's activities, after authorities in Italy and Greece fined the firm in March and July, respectively, for unlawfully gathering biometric data.

Here's How BlackMatter Ransomware is Linked With LockBit 3.0

 

Cybersecurity researchers have discovered similarities between LockBit 3.0, the most recent version of the LockBit ransomware, and BlackMatter.

LockBit 3.0 was released in June 2022, introducing a brand-new leak site and the first bug bounty program run by a ransomware operation, and adding Zcash as a cryptocurrency payment option.

"The encrypted filenames are appended with the extensions 'HLJkNskOq' or '19MqZqZ0s' by the ransomware, and its icon is replaced with a.ico file icon. The ransom note then appears, referencing 'Ilon Musk'and the General Data Protection Regulation of the European Union (GDPR)," researchers from Trend Micro stated.

The ransomware alters the machine's wallpaper once the infection process is finished to alert the user of the attack. While debugging a LockBit 3.0 sample, Trend Micro researchers found that several of its code snippets were lifted from the BlackMatter ransomware.

Identical ransomware techniques

The researchers draw attention to the similarities between the two families' privilege escalation and API harvesting techniques. LockBit 3.0 performs API harvesting by hashing the API names of a DLL and comparing the hashes against a list of the APIs the ransomware requires. The procedure is identical to BlackMatter's: the publicly available script for resolving BlackMatter's hashed API names also works on LockBit 3.0.
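
As an illustration of the idea, here is a minimal sketch of hash-based API resolution in Python. The hash routine LockBit 3.0 and BlackMatter actually use is custom to the malware; the ROR-13-style hash and the tiny export table below are stand-ins for illustration only.

```python
# Minimal sketch of hash-based API resolution (illustrative only; the
# real malware uses its own hash routine and walks real DLL export tables).

def ror13_hash(name: str) -> int:
    """32-bit rotate-right-by-13 string hash, a classic malware idiom."""
    h = 0
    for byte in name.encode("ascii"):
        h = ((h >> 13) | (h << 19)) & 0xFFFFFFFF  # rotate right by 13 bits
        h = (h + byte) & 0xFFFFFFFF
    return h

# The binary carries only the hashes of the APIs it needs, never the names.
WANTED_HASHES = {ror13_hash("VirtualAlloc"), ror13_hash("CreateRemoteThread")}

# Hypothetical export table of a loaded DLL: exported name -> address.
exports = {
    "VirtualAlloc": 0x7FF800001000,
    "CreateRemoteThread": 0x7FF800002000,
    "GetTickCount": 0x7FF800003000,
}

# Hash every exported name and keep the addresses on the wanted list.
resolved = {n: a for n, a in exports.items() if ror13_hash(n) in WANTED_HASHES}
print(resolved)  # GetTickCount is skipped; the two wanted APIs resolve
```

This is also why the renaming script transfers between the two families: if the hash routine and the hash list match, the same lookup resolves the same names.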

The most recent version of LockBit also examines the UI language of the victim machine and avoids infecting machines whose language settings match those of Commonwealth of Independent States (CIS) member states.
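
A minimal sketch of such a locale check, assuming Windows and only the Python standard library; the exact LANGID list the malware consults is not reproduced here, and the three IDs below are merely illustrative examples of CIS locales.

```python
# Illustrative UI-language check (Windows-only). A sample performing this
# check would terminate before encrypting anything on a matching machine.
import ctypes

CIS_LANGIDS = {
    0x0419,  # Russian
    0x0422,  # Ukrainian
    0x0423,  # Belarusian
}

langid = ctypes.windll.kernel32.GetUserDefaultUILanguage()
if langid in CIS_LANGIDS:
    raise SystemExit("CIS UI language detected; aborting")
```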

Both LockBit 3.0 and BlackMatter delete shadow copies through Windows Management Instrumentation (WMI) via COM objects; experts point out that LockBit 2.0, by contrast, deleted them using vssadmin.exe.
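
The distinction matters to defenders: vssadmin.exe and wmic invocations leave telltale command lines, while in-process WMI calls via COM do not. A hedged detection sketch along those lines follows (the patterns are illustrative, not a complete rule set):

```python
# Defender-side sketch: flag the shadow-copy deletion routes discussed
# above from process command-line telemetry. In-process COM calls to WMI
# leave no command line and require dedicated WMI-activity telemetry.
import re

PATTERNS = [
    re.compile(r"vssadmin(\.exe)?\s+delete\s+shadows", re.IGNORECASE),
    re.compile(r"wmic(\.exe)?\s+shadowcopy\s+delete", re.IGNORECASE),
    re.compile(r"Win32_ShadowCopy", re.IGNORECASE),  # scripted WMI queries
]

def flags_shadow_copy_tampering(cmdline: str) -> bool:
    """Return True if a process command line matches a known deletion route."""
    return any(p.search(cmdline) for p in PATTERNS)

print(flags_shadow_copy_tampering("vssadmin.exe Delete Shadows /All /Quiet"))  # True
print(flags_shadow_copy_tampering("wmic shadowcopy delete"))                   # True
```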

The findings coincide with LockBit becoming the most active ransomware-as-a-service (RaaS) operation of 2022, with the Italian Internal Revenue Service (L'Agenzia delle Entrate) being its most recent target.

The ransomware family accounted for 14% of intrusions, second only to Conti at 22%, according to Palo Alto Networks' 2022 Unit 42 Incident Response Report, which is based on 600 cases handled between May 2021 and April 2022.


CNIL Fines SLIMPAY €180,000 for Data Protection Violations

 

SLIMPAY is a licensed payment institution that provides customers with recurring payment options. Based in Paris, the subscription payment services firm was fined €180,000 by the French regulator CNIL after it was discovered that the firm had stored sensitive client data on a publicly accessible server for five years.

The company bills itself as a leader in subscription recurring payments, offering an API and a processing service to handle such payments on behalf of clients such as Unicef, BP, and OVO Energy, to name a few. In 2015 it conducted an internal research project on an anti-fraud mechanism, during which it collected personal data from its client databases for testing purposes. Real data is a useful way to confirm that development code behaves as intended before going live, but when dealing with sensitive data like bank account numbers, extreme caution must be exercised to avoid violating data protection requirements.

In 2020, the CNIL conducted an inquiry into SLIMPAY and discovered a number of security flaws in its handling of customers' personal data. On the basis of these findings, the restricted committee, the CNIL body in charge of issuing fines, concluded that the company had failed to comply with several GDPR requirements. Because the data subjects affected by the incident were spread across several European Union countries, the CNIL cooperated with four other supervisory authorities (Germany, Spain, Italy, and the Netherlands).

THE BREACHES

1. Failure to provide a formal legal framework for processing operations carried out by a processor (Article 28 of the GDPR)

SLIMPAY's agreements with its service providers do not include all of the terms necessary to ensure that these processors agree to process personal data in accordance with the GDPR. 

2. Failure to protect personal data from unauthorized access (Article 32 of the GDPR) 

According to the restricted committee, access to the server was not subject to any security measures, and it could be reached from the Internet between November 2015 and February 2020. The exposed data covered more than 12 million people and included civil status information, postal and e-mail addresses, phone numbers, and bank details (BIC/IBAN).

3. Failure to inform individuals of a personal data breach (Article 34 of the GDPR)

The CNIL determined that the risk associated with the breach should be considered high, given the nature of the personal data, the number of people affected, the possibility of identifying the people affected from the accessible data, and the potential consequences for those concerned. SLIMPAY should therefore have notified the affected individuals of the breach.

Elastic Stack API Security Vulnerability Exposes Customer and System Data

 

The mis-implementation of Elastic Stack, a collection of open-source products that use APIs for crucial data aggregation, search, and analytics capabilities, has resulted in severe vulnerabilities, according to a new analysis. Researchers from Salt Security uncovered flaws that allowed them not only to conduct attacks in which any user could extract sensitive customer and system data, but also to create a denial-of-service condition that would render the system inaccessible.

“Our latest API security research underscores how prevalent and potentially dangerous API vulnerabilities are. Elastic Stack is widely used and secure, but Salt Labs observed the same architectural design mistakes in almost every environment that uses it,” said Roey Eliyahu, co-founder and CEO, Salt Security. “The Elastic Stack API vulnerability can lead to the exposure of sensitive data that can be used to perpetuate serious fraud and abuse, creating substantial business risk.” 

The vulnerability was first detected while the researchers were protecting one of their customers, a large online business-to-consumer platform that provides API-based mobile applications and software as a service to millions of consumers around the world.

 Officials at Salt Security were eager to point out that this isn't a flaw in Elastic Stack itself, but rather a problem with how it's being deployed. According to Salt Security's technical evangelist Michael Isbitski, the vulnerability isn't due to a fault in Elastic's software, but rather to "a common risky implementation set up by users." 

"The lack of awareness around potential misconfigurations, mis-implementations, and cluster exposures is largely a community issue that can be solved only through research and education," Isbitski said. API threats have increased 348% in the last six months, according to the Salt Security State of API Security Report, Q3 2021. The development of business-critical APIs, combined with the advent of exploitable vulnerabilities, reveals the substantial security flaws that occur from the integration of third-party apps and services.

The impact of the Elastic Stack design implementation flaws rises considerably when an attacker chains multiple attacks together, according to Salt Labs researchers. Attackers can exploit the lack of authorization between front-end and back-end services to establish a working user account with basic permission levels, then make educated guesses about the schema of back-end data stores and query for data they aren't authorized to access.
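
A hedged sketch of the corresponding server-side fix: instead of forwarding a client-supplied query to Elasticsearch verbatim, the back end wraps it in a filter pinned to the authenticated account, so guessing the schema of the data store is not enough to reach another tenant's documents. The field name and tenant identifier below are hypothetical.

```python
# Sketch of tenant-scoped query construction using Elasticsearch's bool
# query DSL. The "tenant_id" field is a hypothetical per-document owner tag.

def scoped_query(client_query: dict, tenant_id: str) -> dict:
    """Wrap an untrusted client query so it can only match the caller's data."""
    return {
        "bool": {
            "must": [client_query],
            # The filter clause is added server-side; the client cannot
            # remove or override it.
            "filter": [{"term": {"tenant_id": tenant_id}}],
        }
    }

untrusted = {"match": {"description": "refund"}}  # whatever the client sent
print(scoped_query(untrusted, tenant_id="acct-1234"))
```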

Salt Labs was able to gain access to a large amount of sensitive data, including account numbers and transaction confirmation numbers, as part of its research. Some of the sensitive information was also private and subject to GDPR regulations. Attackers could use this information to access other API-based features, such as the ability to book new services or cancel existing ones.

GDPR privacy law exploited to reveal personal data

About one in four companies revealed personal information to a woman's partner, who had made a bogus demand for the data by citing an EU privacy law.

The security expert contacted dozens of UK and US-based firms to test how they would handle a "right of access" request made in someone else's name.

In each case, he asked for all the data that they held on his fiancee.

In one case, the response included the results of a criminal activity check.

Other replies included credit card information, travel details, account logins and passwords, and the target's full US social security number.

University of Oxford-based researcher James Pavur has presented his findings at the Black Hat conference in Las Vegas.

It is one of the first tests of its kind to exploit the EU's General Data Protection Regulation (GDPR), which came into force in May 2018. The law shortened the time organisations had to respond to data requests, added new types of information they have to provide, and increased the potential penalty for non-compliance.

"Generally if it was an extremely large company - especially tech ones - they tended to do really well," he told the BBC.

"Small companies tended to ignore me.

"But the kind of mid-sized businesses that knew about GDPR, but maybe didn't have much of a specialised process [to handle requests], failed."

He declined to identify the organisations that had mishandled the requests, but said they had included:

- a UK hotel chain that shared a complete record of his partner's overnight stays

- two UK rail companies that provided records of all the journeys she had taken with them over several years

- a US-based educational company that handed over her high school grades, mother's maiden name and the results of a criminal background check survey.

Mr Pavur has, however, named some of the companies that he said had performed well.

Programmer created software to track women in porn videos using face recognition

A Chinese programmer based in Germany created software that uses face-recognition technology to identify women who had appeared in porn videos.

The information about the project was posted on the Chinese social network Weibo. The Twitter user @yiqinfu then tweeted: “A Germany-based Chinese programmer said he and some friends have identified 100k porn actresses from around the world, cross-referencing faces in porn videos with social media profile pictures. The goal is to help others check whether their girlfriends ever acted in those films.”

The project took nearly half a year to complete. The videos were collected from the websites 1024, 91, sex8, PornHub, and xvideos, and altogether the collection amounts to more than 100 terabytes of data.

The faces appearing in these videos were compared with profile pictures from popular social media platforms such as Facebook, Instagram, TikTok, Weibo, and others.

The coder deleted the project and all his data after it emerged that the project violated European privacy law.

However, there is no proof that no similar program exists elsewhere in the world, matching women’s social-media photos with images from porn sites.

According to the programmer, what he did ‘was legal because 1) he hasn't shared any data, 2) he hasn't opened up the database to outside queries, and 3) sex work is legal in Germany, where he's based.’

But this incident has made clear that a program like this is possible and would have awful consequences. “It’s going to kill people,” says Carrie A. Goldberg, an attorney who specializes in sexual privacy violations.

“Some of my most viciously harassed clients have been people who did porn, oftentimes one time in their life and sometimes nonconsensually [because] they were duped into it. Their lives have been ruined because there’s this whole culture of incels that for a hobby expose women who’ve done porn and post about them online and dox them.” 

The European Union’s GDPR privacy law guards against this kind of situation, but people living in other places are not as lucky.