
A ChatGPT Bug Exposes Sensitive User Data

OpenAI's ChatGPT, an artificial intelligence (AI) language model that can produce human-like text, contained a security flaw that enabled the model to unintentionally expose private user information, endangering the privacy of many users. The incident serves as a reminder of the importance of cybersecurity and the need for businesses to protect customer data proactively.

According to a report by Tech Monitor, the ChatGPT bug "allowed researchers to extract personal data from users, including email addresses and phone numbers, as well as reveal the model's training data." This means that not only was users' personal information exposed, but so was the sensitive data used to train the AI model. As a result, the incident raises concerns about the potential misuse of the leaked information.

The ChatGPT bug not only affects individual users but also has wider implications for organizations that rely on AI technology. As noted in a report by India Times, "the breach not only exposes the lack of security protocols at OpenAI, but it also brings forth the question of how safe AI-powered systems are for businesses and consumers."

Furthermore, the incident highlights the importance of adhering to regulations such as the General Data Protection Regulation (GDPR), which aims to protect individuals' personal data in the European Union. The ChatGPT bug violated GDPR regulations by exposing personal data without proper consent.

OpenAI has taken swift action to address the issue, stating that they have fixed the bug and implemented measures to prevent similar incidents in the future. However, the incident serves as a warning to businesses and individuals alike to prioritize cybersecurity measures and to be aware of potential vulnerabilities in AI systems.

As stated by Cyber Security Connect, "ChatGPT may have just blurted out your darkest secrets," emphasizing the need for constant vigilance and proactive measures to safeguard sensitive information. This includes regular updates and patches to address security flaws, as well as utilizing encryption and other security measures to protect data.

The ChatGPT bug highlights the need for ongoing vigilance and preventative measures to protect private data in the era of advanced technology. Prioritizing cybersecurity and staying informed of vulnerabilities is crucial for a safer digital environment as AI systems continue to evolve and play a prominent role in various industries.




ChatGPT: A Potential Risk to Data Privacy


ChatGPT, within two months of its release, seems to have taken the world by storm. The consumer application has reached 100 million active users, making it the fastest-growing product ever. Users are intrigued by the tool's sophisticated capabilities, though apprehensive about its potential to upend numerous industries. 

One of the less discussed consequences in regard to ChatGPT is its privacy risk. Google only yesterday launched Bard, its own conversational AI, and others will undoubtedly follow. Technology firms engaged in AI development have certainly entered a race. 

The issue is that the underlying technology is built entirely on users' personal data. 

300 Billion Words, How Many Are Yours? 

ChatGPT is based on a large language model, which requires enormous amounts of data to operate and improve its functions. The more data the model is trained on, the better it becomes at spotting patterns, predicting what comes next, and producing plausible text. 

OpenAI, the developer of ChatGPT, fed the model some 300 billion words systematically scraped from the internet – books, articles, websites, and posts – which inevitably includes online users' personal information, gathered without their consent. 

Every blog post, product review, or comment on an article that exists or has ever existed online stands a good chance of having been consumed by ChatGPT. 

What is the Issue? 

The data gathered to train ChatGPT is problematic for several reasons. 

First, the data was collected without consent: none of us were ever asked whether OpenAI could use our personal information. This is a clear violation of privacy, especially when the data is sensitive and can be used to identify us, our family members, or our location. 

Even when data is publicly available, its use can compromise what is known as contextual integrity, a cornerstone idea in discussions of privacy law: information about people should not be revealed outside the context in which it was originally produced. 

Moreover, OpenAI offers no procedure for individuals to check whether the company stores their personal information or to request that it be deleted. This right is guaranteed by the European General Data Protection Regulation (GDPR), although whether ChatGPT complies with GDPR requirements is still under debate. 

This “right to be forgotten” is particularly important in cases involving information that is inaccurate or misleading, which seems to be a regular occurrence with ChatGPT. 

Furthermore, the scraped data that ChatGPT was trained on may be confidential or protected by copyright. For instance, the tool replicated the opening few chapters of Joseph Heller's copyrighted book Catch-22. 

Finally, OpenAI did not pay for the internet data it downloaded. The individuals, website owners, and businesses that created it were not compensated. This is especially notable in light of OpenAI's recent US$29 billion valuation, more than double its value in 2021. 

OpenAI has also recently announced ChatGPT Plus, a paid subscription plan that gives users ongoing access to the tool, faster response times, and priority access to new features. This approach is expected to help generate $1 billion in revenue by 2024. 

None of this would have been possible without the usage of ‘our’ data, acquired and utilized without our consent. 

Time to Consider the Issue? 

According to some professionals and experts, ChatGPT is a “tipping point for AI” - the realization of a technological advance that can revolutionize the way we work, learn, write, and even think. 

Despite its potential advantages, we must keep in mind that OpenAI is a private, for-profit organization whose objectives and commercial interests may not always align with the needs of the wider community. 

The privacy hazards associated with ChatGPT should serve as a caution. And as users of an increasing number of AI technologies, we need to exercise extreme caution when deciding what data to provide such tools with.  

Microsoft to Roll Out “Data Boundary” for its EU Customers from Jan 1


According to the latest announcement made by Microsoft Corp on Thursday, its European Union cloud customers will finally be able to process and store parts of their data in the region, starting from January 1. 

This phased rollout of its “EU data boundary” will apparently apply to all of its core cloud services: Azure, Microsoft 365, Dynamics 365, and the Power BI platform. 

Since the EU introduced the General Data Protection Regulation (GDPR) in 2018 to protect user privacy, large businesses have grown increasingly anxious about the international flow of customer data. 

The European Commission, which serves as the executive arm of the EU, is developing proposals to safeguard the privacy of European customers whose data is transferred to the United States. 

"As we dived deeper into this project, we learned that we needed to be taken more phased approach," says Microsoft’s Chief Privacy Officer Julie Brill. “The first phase will be customer data. And then as we move into the next phases, we will be moving logging data, service data and other kind of data into the boundary.” 

The second phase will reportedly be completed by the end of 2023, and the third in 2024, she added. 

Microsoft runs more than a dozen datacenters across European countries, including France, Germany, Spain, and Switzerland. 

For large corporations, data storage has become so vast and so widely distributed across countries that it is now a challenge to understand where their data is stored and whether it complies with regulations like the GDPR. 

"We are creating this solution to make our customers feel more confident and to be able to have clear conversations with their regulators on where their data is being processed as well as stored," says Brill. 

Moreover, Microsoft has previously said that it would challenge government requests for customer data and that it would financially compensate any customer whose data it shared in breach of the GDPR.  

Twitter's Brussels Staff Sacked by Musk 

After a conflict over how the social network's content should be regulated in the Union, Elon Musk shut down Twitter's entire Brussels office.

Twitter's connection with the European Union, which has some of the most robust regulations controlling the digital world and is frequently at the forefront of global regulation in the sector, may be strained by the closing of the company's Brussels center. 

One rule requires platforms like Twitter to remove anything that is prohibited in any of the EU bloc's member states. For instance, tweets seeking to influence elections or content advocating hate speech would need to be removed in jurisdictions where such communication is prohibited. 

Another obligation is that social media sites like Twitter must demonstrate to the European Commission, the executive arm of the EU, that they are making a sufficient effort to stop the spread of content that is not illegal but may be damaging. Disinformation falls under this category. This summer, businesses will need to demonstrate how they are handling such content. 

Musk will need to abide by the GDPR, a set of ground-breaking EU data protection laws that mandate Twitter have a data protection officer in the EU. 

The current proposal forbids the use of algorithms that have been demonstrated to be biased against individuals, which may affect Twitter's face-cropping tools, which have been shown to favor young, slim women.

Twitter might also be obligated to monitor private conversations for grooming or images of child sexual abuse under the EU's Child Sexual Abuse Materials proposal, which is still under discussion in the EU.

In order to comply with the DSA, Twitter will need to put in a lot more effort, such as creating a system that allows users to flag illegal content with ease and hiring enough moderators to examine the content in every EU member state.

Twitter won't have to publish a risk analysis until next summer, but it will have to disclose its user count in February, which initiates the commission oversight process.

Two lawsuits that might hold social media corporations accountable for algorithms that promote dangerous or unlawful content are scheduled for hearings before the US Supreme Court. They could fundamentally alter how US businesses moderate content. 

CNIL Fines Clearview AI 20 million Euros for Illegal Use of Facial Recognition Technology

 

France’s data protection authority (CNIL) has imposed a €20 million fine on Clearview AI, the controversial facial recognition firm, for illegally gathering and using data belonging to French residents without their knowledge. 

CNIL imposed the maximum financial penalty the company could receive as per GDPR Article 83 and also ordered Clearview AI to stop all data collection activities and delete the data gathered on French citizens or face an additional €100,000 fine per day. 

“Clearview AI had two months to comply with the injunctions formulated in the formal notice and to justify them to the CNIL. However, it did not provide any response to this formal notice,” the CNIL stated. 

“The chair of the CNIL, therefore, decided to refer the matter to the restricted committee, which is in charge of issuing sanctions. On the basis of the information brought to its attention, the restricted committee decided to impose a maximum financial penalty of 20 million euros, according to article 83 of the GDPR.” 

Clearview AI scrapes publicly available images and videos of people from websites and social media platforms and associates them with identities. Using this technique, the company has collected over 20 billion images, which feed a biometric database of facial scans and identities. 

Subsequently, the US-based firm sells access to this database to individuals, law enforcement, and multiple organizations around the globe. 

In Europe, the General Data Protection Regulation (GDPR) dictates that any data collection must be clearly communicated to the people concerned and requires their consent. Even if Clearview AI does not use leaked data and does not spy on people, individuals are unaware that their images are being used for identification by Clearview AI customers. 

CNIL's latest decision comes after a two-year investigation initiated in May 2020, when the French authority received complaints from individuals about Clearview facial recognition software. Another warning about biometric profiling came from the Privacy International organization in May 2021. 

According to the CNIL, it found Clearview AI was guilty of multiple violations of the General Data Protection Regulation (GDPR). The breaches include unlawful processing of private data (GDPR Article 6), individuals' rights not being respected (Articles 12, 15, and 17), and lack of cooperation with the data protection authority (Article 31). 

The CNIL judgment is the third decision against Clearview's activities, after authorities in Italy and Greece fined the firm in March and July for unlawfully gathering biometric data.

Here's How BlackMatter Ransomware is Linked With LockBit 3.0

 

Cybersecurity researchers have discovered similarities between LockBit 3.0, the most recent version of the LockBit ransomware, and BlackMatter. 

LockBit 3.0 was released in June 2022, introducing a brand-new leak site and the first ransomware bug bounty program, and adding Zcash as a cryptocurrency payment method.

"The encrypted filenames are appended with the extensions 'HLJkNskOq' or '19MqZqZ0s' by the ransomware, and its icon is replaced with a.ico file icon. The ransom note then appears, referencing 'Ilon Musk'and the General Data Protection Regulation of the European Union (GDPR)," researchers from Trend Micro stated.

The ransomware changes the machine's wallpaper when the infection process is finished to alert the user to the attack. While debugging the LockBit 3.0 sample, Trend Micro researchers found that several of its code snippets were lifted from the BlackMatter ransomware.

Identical ransomware threats

The researchers draw attention to the similarities between the two families' privilege escalation and API harvesting techniques. LockBit 3.0 performs API harvesting by hashing the API names of a DLL and comparing them to a list of the APIs the ransomware requires. The procedure is identical to BlackMatter's: the publicly available script for renaming BlackMatter's hashed APIs also works for LockBit 3.0.
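
For illustration only, here is a minimal sketch of the hash-and-compare idea behind such renaming scripts, which map the hashes found in a sample back to readable API names. The hash routine (a plain 32-bit FNV-1a) and the hard-coded export list are stand-ins, not the actual algorithm or data used by either ransomware family.

```python
# Illustrative hash-based API name resolution, the idea behind the public
# BlackMatter/LockBit renaming scripts. FNV-1a is used here only as a
# stand-in; the real samples use their own hashing routine.

def fnv1a_32(name: bytes) -> int:
    """Hash an exported API name with 32-bit FNV-1a."""
    h = 0x811C9DC5
    for b in name:
        h ^= b
        h = (h * 0x01000193) & 0xFFFFFFFF
    return h

# Hypothetical hashes a sample might carry instead of plain API names.
wanted_hashes = {fnv1a_32(b"VirtualAlloc"), fnv1a_32(b"CreateRemoteThread")}

# Exports of a DLL (hard-coded here; a real script would parse the export table).
exports = [b"VirtualAlloc", b"VirtualFree", b"CreateRemoteThread", b"LoadLibraryA"]

# Resolve each wanted hash back to a readable API name.
resolved = {fnv1a_32(e): e.decode() for e in exports if fnv1a_32(e) in wanted_hashes}
print(resolved)
```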

The most recent version of LockBit also checks the UI language of the victim machine and avoids infecting machines whose language settings correspond to Commonwealth of Independent States (CIS) member states.

Both LockBit 3.0 and BlackMatter delete shadow copies using Windows Management Instrumentation (WMI) via COM objects, whereas, as the experts point out, LockBit 2.0 deleted them using vssadmin.exe.

The findings coincide with LockBit becoming one of the most active ransomware-as-a-service (RaaS) operations of 2022, with the Italian Internal Revenue Service (L'Agenzia delle Entrate) being its most recent target.

The ransomware family was behind 14% of intrusions, second only to Conti at 22%, according to Palo Alto Networks' 2022 Unit 42 Incident Response Report, which is based on 600 cases handled between May 2021 and April 2022.


The CNIL Penalized SLIMPAY €180,000 for Data Violations

 

SLIMPAY is a licensed payment institution that provides customers with recurring payment options. Based in Paris, the subscription payment services firm was fined €180,000 by the French regulator CNIL after it was discovered that the company had stored sensitive client data on a publicly accessible server for five years. 

The company bills itself as a leader in subscription recurring payments, and it offers an API and processing service to handle such payments on behalf of clients such as Unicef, BP, and OVO Energy, to mention a few. It appears to have conducted an internal research project on an anti-fraud mechanism in 2015, during which it collected personal data from its client databases for testing purposes. Real data is a useful way to confirm that development code is operating as intended before going live, but when dealing with sensitive data like bank account numbers, extreme caution must be exercised to avoid violating data protection requirements.
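
As one illustration of the precaution that last point implies, the sketch below pseudonymizes bank account identifiers before production records are copied into a test environment. This is an assumed example, not SLIMPAY's actual process: the field names, the sample IBAN, and the secret handling are all hypothetical.

```python
# Illustrative pseudonymization of an IBAN before production data is reused
# for testing. Field names, the sample IBAN, and key handling are placeholders.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-key-stored-outside-the-test-environment"

def pseudonymize_iban(iban: str) -> str:
    """Replace a real IBAN with a stable, non-reversible token."""
    digest = hmac.new(SECRET_KEY, iban.encode(), hashlib.sha256).hexdigest()
    return "TEST-IBAN-" + digest[:16].upper()

customer_record = {"name": "REDACTED", "iban": "FR7630006000011234567890189"}
customer_record["iban"] = pseudonymize_iban(customer_record["iban"])
print(customer_record)
```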

In 2020, the CNIL conducted an inquiry into SLIMPAY and discovered a number of security flaws in its handling of customers' personal data. On the basis of these findings, the restricted committee - the CNIL body in charge of issuing fines - concluded that the company had failed to comply with several GDPR requirements. Because the data subjects affected by the incident were spread across several European Union countries, the CNIL cooperated with four supervisory authorities (Germany, Spain, Italy, and the Netherlands). 

THE BREACHES 

1.  Failure to comply with the requirement to provide a formal legal foundation for a processor's processing operations (Article 28 of the GDPR)

SLIMPAY's agreements with its service providers do not include all of the terms necessary to ensure that these processors agree to process personal data in accordance with the GDPR. 

2. Failure to protect personal data from unauthorized access (Article 32 of the GDPR) 

Access to the server was not subject to any security controls, according to the restricted committee, and it could be reached from the Internet between November 2015 and February 2020. The civil status information, postal and e-mail addresses, phone numbers, and bank account numbers (BIC/IBAN) of more than 12 million people were exposed. 

3. Failure to inform individuals of a personal data breach (Article 34 of the GDPR) 

The CNIL determined that the risk associated with the breach should be considered high, given the nature of the personal data, the number of people affected, the possibility of identifying the people concerned from the accessible data, and the potential consequences for them. The affected individuals should therefore have been informed of the breach.

Elastic Stack API Security Vulnerability Exposes Customer and System Data

 

According to a new analysis, the mis-implementation of Elastic Stack, a collection of open-source products that use APIs for crucial data aggregation, search, and analytics capabilities, has resulted in severe vulnerabilities. Researchers from Salt Security uncovered flaws that allowed them not only to conduct attacks in which any user could extract sensitive customer and system data, but also to create a denial-of-service condition that would make the system inaccessible. 

“Our latest API security research underscores how prevalent and potentially dangerous API vulnerabilities are. Elastic Stack is widely used and secure, but Salt Labs observed the same architectural design mistakes in almost every environment that uses it,” said Roey Eliyahu, co-founder and CEO, Salt Security. “The Elastic Stack API vulnerability can lead to the exposure of sensitive data that can be used to perpetuate serious fraud and abuse, creating substantial business risk.” 

According to the researchers, the vulnerability was originally detected while they were protecting one of their customers, a large online business-to-consumer platform that provides API-based mobile applications and software as a service to millions of consumers around the world. 

 Officials at Salt Security were eager to point out that this isn't a flaw in Elastic Stack itself, but rather a problem with how it's being deployed. According to Salt Security's technical evangelist Michael Isbitski, the vulnerability isn't due to a fault in Elastic's software, but rather to "a common risky implementation set up by users." 

"The lack of awareness around potential misconfigurations, mis-implementations, and cluster exposures is largely a community issue that can be solved only through research and education," Isbitski said. API threats have increased 348% in the last six months, according to the Salt Security State of API Security Report, Q3 2021. The development of business-critical APIs, combined with the advent of exploitable vulnerabilities, reveals the substantial security flaws that occur from the integration of third-party apps and services.

The impact of the Elastic Stack design and implementation flaws rises considerably when an attacker chains together multiple attacks, according to Salt Labs researchers. Attackers can exploit the lack of authorization between front-end and back-end services to establish a working user account with basic permission levels, then make educated guesses about the schema of back-end data stores and query for data they are not authorized to access. 

Salt Labs was able to gain access to a large amount of sensitive data, including account numbers and transaction confirmation numbers, as part of its research. Some of the sensitive information was also private and subject to GDPR regulations. Attackers could use this information to access other API-based features, such as the ability to book new services or cancel existing ones.
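
As a hedged illustration of the mitigation these findings point to, the sketch below shows a back-end service scoping every Elasticsearch query to the authenticated user instead of trusting identifiers supplied by the front-end. It assumes the official Elasticsearch Python client (8.x); the cluster address, index name, and field names are hypothetical.

```python
# Illustrative back-end authorization filter for Elasticsearch queries.
# Assumes the elasticsearch 8.x Python client; index and field names are hypothetical.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200")  # placeholder cluster address

def get_user_transactions(authenticated_user_id: str, search_text: str):
    """Return only documents owned by the caller, regardless of what the client asked for."""
    query = {
        "bool": {
            "must": [{"match": {"description": search_text}}],
            # The owner filter comes from the server-side session, never from the request body.
            "filter": [{"term": {"owner_id": authenticated_user_id}}],
        }
    }
    return es.search(index="transactions", query=query)
```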

GDPR privacy law exploited to reveal personal data

About one in four companies revealed personal information to a woman's partner, who had made a bogus demand for the data by citing an EU privacy law.

The security expert contacted dozens of UK and US-based firms to test how they would handle a "right of access" request made in someone else's name.

In each case, he asked for all the data that they held on his fiancee.

In one case, the response included the results of a criminal activity check.

Other replies included credit card information, travel details, account logins and passwords, and the target's full US social security number.

University of Oxford-based researcher James Pavur has presented his findings at the Black Hat conference in Las Vegas.

It is one of the first tests of its kind to exploit the EU's General Data Protection Regulation (GDPR), which came into force in May 2018. The law shortened the time organisations had to respond to data requests, added new types of information they have to provide, and increased the potential penalty for non-compliance.

"Generally if it was an extremely large company - especially tech ones - they tended to do really well," he told the BBC.

"Small companies tended to ignore me.

"But the kind of mid-sized businesses that knew about GDPR, but maybe didn't have much of a specialised process [to handle requests], failed."

He declined to identify the organisations that had mishandled the requests, but said they had included:

- a UK hotel chain that shared a complete record of his partner's overnight stays

- two UK rail companies that provided records of all the journeys she had taken with them over several years

- a US-based educational company that handed over her high school grades, mother's maiden name and the results of a criminal background check survey.

Mr Pavur has, however, named some of the companies that he said had performed well.

Programmer built software to track women in porn videos using face recognition






A Chinese programmer based in Germany created software that uses face recognition technology to identify women who had appeared in porn videos. 

Information about the project was posted on the Chinese social network Weibo. The Twitter handle @yiqinfu then tweeted: “A Germany-based Chinese programmer said he and some friends have identified 100k porn actresses from around the world, cross-referencing faces in porn videos with social media profile pictures. The goal is to help others check whether their girlfriends ever acted in those films.”

The project took nearly half a year to complete. The videos were collected from the websites 1024, 91, sex8, PornHub, and xvideos, and the collection altogether amounts to more than 100 terabytes of data. 

The faces appearing in these videos were compared with profile pictures from popular social media platforms such as Facebook, Instagram, TikTok, Weibo, and others.

The coder deleted the project and all of his data after learning that the project violated European privacy law. 

However, there is no proof that no program exists anywhere that matches women's social media photos with images from porn sites. 

According to the programmer, what he did ‘was legal because 1) he hasn't shared any data, 2) he hasn't opened up the database to outside queries, and 3) sex work is legal in Germany, where he's based.’

But this incident has made it clear that a program like this is possible and would have awful consequences. “It’s going to kill people,” says Carrie A. Goldberg, an attorney who specializes in sexual privacy violations. 

“Some of my most viciously harassed clients have been people who did porn, oftentimes one time in their life and sometimes nonconsensually [because] they were duped into it. Their lives have been ruined because there’s this whole culture of incels that for a hobby expose women who’ve done porn and post about them online and dox them.” 

The European Union’s GDPR offers protection against this kind of situation, but people living elsewhere are not as fortunate.