
Security Flaws Discovered in ChatGPT Plugins

 


Recent research has surfaced serious security vulnerabilities within ChatGPT plugins, raising concerns about potential data breaches and account takeovers. These flaws could allow attackers to gain control of organisational accounts on third-party platforms and access sensitive user data, including Personally Identifiable Information (PII).

According to Darren Guccione, CEO and co-founder of Keeper Security, the vulnerabilities found in ChatGPT plugins pose a significant risk to organisations as employees often input sensitive data, including intellectual property and financial information, into AI tools. Unauthorised access to such data could have severe consequences for businesses.

In November 2023, OpenAI introduced GPTs, custom versions of ChatGPT that function similarly to plugins and present comparable security risks, further complicating the situation.

In a recent advisory, the Salt Security research team identified three main types of vulnerabilities within ChatGPT plugins. Firstly, vulnerabilities were found in the plugin installation process, potentially allowing attackers to install malicious plugins and intercept user messages containing proprietary information.

Secondly, flaws were discovered within PluginLab, a framework for developing ChatGPT plugins, which could lead to account takeovers on third-party platforms like GitHub.

Lastly, OAuth redirection manipulation vulnerabilities were identified in several plugins, enabling attackers to steal user credentials and execute account takeovers.
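OAuth redirection manipulation typically works because the application accepts an attacker-supplied redirect target instead of matching it against the client's registered callback. A minimal sketch of the defensive check, using an invented allow-list and hostnames purely for illustration:

```python
from urllib.parse import urlparse

# Hypothetical allow-list; a real service would load this from the
# OAuth client's registered redirect URIs.
ALLOWED_REDIRECTS = {
    ("plugin.example.com", "/oauth/callback"),
}

def is_safe_redirect(redirect_uri: str) -> bool:
    """Accept only pre-registered HTTPS redirect targets, matched exactly."""
    parsed = urlparse(redirect_uri)
    if parsed.scheme != "https":
        return False
    return (parsed.hostname, parsed.path) in ALLOWED_REDIRECTS

# An attacker-controlled callback is rejected; the registered one passes.
print(is_safe_redirect("https://attacker.example.net/oauth/callback"))  # False
print(is_safe_redirect("https://plugin.example.com/oauth/callback"))    # True
```

Exact matching against a registered allow-list, rather than prefix or substring checks, is what closes off most redirect-manipulation tricks.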

Yaniv Balmas, vice president of research at Salt Security, emphasised the growing popularity of generative AI tools like ChatGPT and the corresponding increase in efforts by attackers to exploit these tools to gain access to sensitive data.

Following coordinated disclosure practices, Salt Labs worked with OpenAI and third-party vendors to promptly address these issues and reduce the risk of exploitation.

Sarah Jones, a cyber threat intelligence research analyst at Critical Start, outlined several measures that organisations can take to strengthen their defences against these vulnerabilities. These include:


1. Implementing permission-based installation: 

This involves ensuring that only authorised users can install plugins, reducing the risk of malicious actors installing harmful plugins.

2. Introducing two-factor authentication: 

By requiring users to provide two forms of identification, such as a password and a unique code sent to their phone, organisations can add an extra layer of security to their accounts.

3. Educating users on exercising caution with code and links: 

It's essential to train employees to be cautious when interacting with code and links, as these can often be used as vectors for cyber attacks.

4. Monitoring plugin activity constantly: 

By regularly monitoring plugin activity, organisations can detect any unusual behaviour or unauthorised access attempts promptly.

5. Subscribing to security advisories for updates:

Staying informed about security advisories and updates from ChatGPT and third-party vendors allows organisations to address vulnerabilities and apply patches promptly.
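The first measure above, permission-based installation, amounts to gating every install behind both a role check and a vetted allow-list. A toy sketch with invented role and plugin names:

```python
# Hypothetical allow-list and role set, for illustration only.
APPROVED_PLUGINS = {"data-viz", "code-interpreter"}   # plugins vetted by security
INSTALL_ROLES = {"admin", "security"}                 # roles permitted to install

def can_install(user_role: str, plugin_name: str) -> bool:
    """Permit installation only of approved plugins by authorised roles."""
    return user_role in INSTALL_ROLES and plugin_name in APPROVED_PLUGINS

print(can_install("admin", "data-viz"))        # True
print(can_install("employee", "data-viz"))     # False: role not authorised
print(can_install("admin", "unvetted-tool"))   # False: plugin not vetted
```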

As organisations increasingly rely on AI technologies, it becomes crucial to address and mitigate the associated security risks effectively.


WordPress: Stripe Payment Plugin Flaw Exposes Customers' Order Details


A critical vulnerability has recently been discovered in the WooCommerce Stripe Gateway plugin for WordPress, exposing sensitive customer order information to unauthorized users. The plugin handles payment processing on WordPress e-commerce sites and has over 900,000 active installations. It was susceptible to CVE-2023-34000, an unauthenticated insecure direct object reference (IDOR) bug.

WooCommerce Stripe Payment

WooCommerce Stripe Payment is a payment gateway for WordPress e-commerce sites, with over 900,000 active installs. Through Stripe's payment processing API, it enables websites to accept payment methods such as Visa, MasterCard, American Express, Apple Pay, and Google Pay.

About the Vulnerability

Origin of the Flaw

The vulnerability originated from unsafe handling of order objects and improper access control in the plugin's 'javascript_params' and 'payment_fields' functions.

Due to these coding errors, it was possible to display order data from any WooCommerce store without first confirming the request's permissions or the order's ownership (user matching).
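The IDOR pattern described above boils down to looking up an object by its ID without verifying that the requester owns it. A minimal sketch of the flawed and the patched lookup, with invented order data (the real plugin is PHP; this is only a conceptual illustration):

```python
# Invented sample data for illustration.
ORDERS = {
    1001: {"owner": "alice", "email": "alice@example.com", "total": 42.00},
    1002: {"owner": "bob",   "email": "bob@example.com",   "total": 13.50},
}

def get_order_vulnerable(order_id: int) -> dict:
    """IDOR: returns any order by ID with no ownership or permission check."""
    return ORDERS[order_id]

def get_order_fixed(order_id: int, current_user: str) -> dict:
    """Patched: verifies the requester actually owns the order."""
    order = ORDERS.get(order_id)
    if order is None or order["owner"] != current_user:
        raise PermissionError("order not found or not owned by requester")
    return order
```

With the vulnerable version, anyone who can guess or enumerate order IDs can read every customer's checkout details; the fix ties each lookup to the authenticated user.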

Consequences of the Flaw

The payment gateway vulnerability could allow unauthorized users to access checkout page data, including personally identifiable information (PII) such as email addresses, shipping addresses, and users' full names.

Because this data is classified as critical, its exposure could lead to follow-on attacks in which a threat actor attempts account hijacking and credential theft through phishing emails that specifically target the victim.

How to Patch the Vulnerability?

Users of the WooCommerce Stripe Gateway plugin should update to version 7.4.1 to mitigate the risks associated with this vulnerability. Researchers notified the plugin vendor of CVE-2023-34000 on April 17, 2023, and a patch addressing the problem was released on May 30, 2023.

Despite the patch's availability, WordPress.org statistics point to continued risk: more than half of the active installations are still running vulnerable plugin versions. This greatly increases the attack surface and attracts cybercriminals looking to exploit the security flaw.

Site administrators should act swiftly: update to version 7.4.1, keep all plugins current, and watch for any indications of malicious activity. By making security measures a first priority, website operators can preserve sensitive user data and defend their online businesses from potential cyber threats.

Safeguarding Your Work: What Not to Share with ChatGPT

 

ChatGPT, a popular AI language model developed by OpenAI, has gained widespread usage in various industries for its conversational capabilities. However, it is essential for users to be cautious about the information they share with AI models like ChatGPT, particularly when using it for work-related purposes. This article explores the potential risks and considerations for users when sharing sensitive or confidential information with ChatGPT in professional settings.
Potential Risks and Concerns:
  1. Data Privacy and Security: When sharing information with ChatGPT, there is a risk that sensitive data could be compromised or accessed by unauthorized individuals. While OpenAI takes measures to secure user data, it is important to be mindful of the potential vulnerabilities that exist.
  2. Confidentiality Breach: ChatGPT is an AI model trained on a vast amount of data, and there is a possibility that it may generate responses that unintentionally disclose sensitive or confidential information. This can pose a significant risk, especially when discussing proprietary information, trade secrets, or confidential client data.
  3. Compliance and Legal Considerations: Different industries and jurisdictions have specific regulations regarding data privacy and protection. Sharing certain types of information with ChatGPT may potentially violate these regulations, leading to legal and compliance issues.

Best Practices for Using ChatGPT in a Work Environment:

  1. Avoid Sharing Proprietary Information: Refrain from discussing or sharing trade secrets, confidential business strategies, or proprietary data with ChatGPT. It is important to maintain a clear boundary between sensitive company information and AI models.
  2. Protect Personally Identifiable Information (PII): Be cautious when sharing personal information, such as social security numbers, addresses, or financial details, as these can be targeted by malicious actors or result in privacy breaches.
  3. Verify the Purpose and Security of Conversations: If using a third-party platform or integration to access ChatGPT, ensure that the platform has adequate security measures in place. Verify that the conversations and data shared are stored securely and are not accessible to unauthorized parties.
  4. Be Mindful of Compliance Requirements: Understand and adhere to industry-specific regulations and compliance standards, such as GDPR or HIPAA, when sharing any data through ChatGPT. Stay informed about any updates or guidelines regarding the use of AI models in your particular industry.
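One practical way to act on point 2 is to scrub obvious PII from text before it ever leaves your systems. The sketch below uses two illustrative regex patterns; real PII detection needs far more than regexes (format-aware validators, named-entity recognition), so treat this as a starting point only:

```python
import re

# Illustrative patterns only; production PII detection requires
# more robust, format-aware tooling.
PII_PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact(text: str) -> str:
    """Replace recognised PII with labelled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(redact("Contact jane@corp.com, SSN 123-45-6789."))
# Contact [REDACTED EMAIL], SSN [REDACTED SSN].
```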
While ChatGPT and similar AI language models offer valuable assistance, it is crucial to exercise caution and prudence when using them in professional settings. Users must prioritize data privacy, security, and compliance by refraining from sharing sensitive or confidential information that could potentially compromise their organizations. By adopting best practices and maintaining awareness of the risks involved, users can harness the benefits of AI models like ChatGPT while safeguarding their valuable information.

Consenting to Cookies is Not Sufficient

 


While most companies are spending a great deal of time implementing cookie consent notices, privacy-related lawsuits and enforcement actions continue to grow in number and size. As a result, these notices rarely protect companies or their customers, which should come as no surprise.

Transparency is undeniably a worthwhile endeavor. But companies remain exposed to several potential threats that are often beyond their direct control.

The recent lawsuits involving the Meta Pixel, which affect many U.S. healthcare companies and providers, are an ideal example of this issue.

Part of the problem lies in the way websites are designed and built. Except for a few of the biggest tech companies, nearly all websites are assembled from third-party cloud services: CRMs, analytics, form builders, and trackers for advertisers. These third parties have a great deal of autonomy over how those services behave, yet they are not properly regulated.

Many kinds of tracking pixels exist on the internet, and most serve some purpose. Marketers typically use the data they collect to target advertisements to potential customers and to measure how effectively those ads reach them. It is important to note, however, that these trackers also collect highly specific and detailed personal data, which is folded into existing data portfolios.

Financial and Healthcare Data are Being Misused 

In most cases, the risks associated with visiting a healthcare website are much higher than those of visiting any other site. The medical conditions you research are not something you want shared with Facebook or added to your social graph. The crux of these lawsuits can therefore be summarized simply: the tracking described above transmits Protected Health Information (PHI), which is protected by HIPAA (the Health Insurance Portability and Accountability Act). Viewing digital advertising through the lens of healthcare also shines a light on just how troubling tracking can be.

The same logic applies to financial services. If an unauthorized party gains access to personally identifiable information (PII) or financial data, such as Social Security numbers or credit card numbers, and it is not handled correctly, the consequences can be dire. Privacy is crucial to safety: details about your private life should be kept private for good reasons, and modern advertising practices do not mesh well with these significant aspects of our lives.

In addition to the Meta Pixel case, two other recent lawsuits provide us with a deeper understanding of how complex and broad the problem is, and how far it extends.  

Analyzing Sensitive Data From a Different Perspective 

In a recent lawsuit, Oracle was accused of using the 4.5 billion records it currently holds as a proxy system for tracking sensitive consumer data that consumers had deliberately chosen not to share with third parties. For comparison, the global population is about 8 billion people. Re-identification of de-identified data is far from a new invention, but it serves as a clear example of why gathering all these pieces of data matters, no matter how random they may seem. With access to enough of it, Oracle, or whoever gets hold of the data, can infer most of the details of a person's life with almost astonishing accuracy, and in the end the data will almost certainly be used that way.

In another recent case, web testing tools were used to record users' sessions on a website so its operators could see how well users navigated the site as they worked through each step. It is extremely common for web developers and marketers to use such session-replay tools to make their user interfaces more usable.

In short, some companies are being accused of wiretapping under the Wiretap laws because these tools can transmit a considerable amount of information without the knowledge of either the user or the website owner. This may seem like a minor issue, but it becomes very clear once you look at it through the lens of sensitive data.

Twitter Data Breach Indicates How APIs Are a Goldmine for PII and Social Engineering


A Twitter API vulnerability that was detected in June 2021, and later patched, has apparently come back to haunt the organization.

In December 2022, a hacker claimed to have access to the personal data of 400 million Twitter users for sale on the dark web markets. And only yesterday, the attacker published the account details and email addresses of 235 million users. 

The breached data revealed by the hacker includes account names, handle creation dates, follower counts, and email addresses of victims. Moreover, threat actors can use this data to design social engineering campaigns that dupe people into handing over even more personal information.

Twitter: A Social Engineering Goldmine 

Social media giants provide threat actors with a gold mine of user data and personal information that they can utilize in order to perform social engineering scams. 

With just a username, an email address, and the contextual information publicly available on a user's profile, a hacker can conduct reconnaissance on a targeted user and craft phishing and scam campaigns specifically designed to dupe them into providing personal information.

In this case, while the exposed information was limited to users’ information available publicly, the immense volume of accounts exposed in a single location (Twitter) has in fact provided a “goldmine of information” to the threat actors. 

The Link Between Social Engineering and API Attacks 

Unsecured APIs give cybercriminals direct access to users' Personally Identifiable Information (PII), such as usernames and passwords, captured when users connect to third-party services through an API. API attacks thus provide threat actors with a window to collect large amounts of personal information for scams.

An instance of this happened just a month earlier, when a threat actor, having successfully applied to the FBI's InfraGard intelligence-sharing service, leveraged an API flaw to gather the data of 80,000 executives across the private sector and sell it on the dark web.

The data collected during the incident included usernames, email addresses, Social Security numbers, and dates of birth of victims. This highly valuable information was then used by the threat actors to develop social engineering lures and spear-phishing attacks.

How to Protect APIs and PII? 

One of the main challenges in combating API breaches is that modern enterprises need to detect and secure a very large number of APIs. A single vulnerability can put user data at risk of exfiltration, so there is little room for error.

“Protecting organizations from API attacks requires consistent, diligent oversight of vendor management, and specifically ensuring that every API is fit for use […] It’s a lot for organizations to manage, but the risk is too great not to,” says Chris Bowen, CISO at ClearDATA.  “In healthcare, for example, where patient data is at stake, every API should address several components like identity management, access management, authentication, authorization, data transport, exchange security, and trusted connectivity.”

Security teams are also advised not to rely solely on simple authentication options, such as a username and password, to secure their APIs.

“In today’s environment, basic usernames and passwords are no longer enough […] It’s now vital to use standards such as two-factor authentication (2FA) and/or secure authentication with OAuth,” says Will Au, senior director for DevOps, operations, and site reliability at Jitterbit. 
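Token-based schemes like OAuth replace per-request passwords with short-lived bearer tokens that the API validates on every call. A minimal sketch of that server-side check, using an invented in-memory token store (a real deployment would validate OAuth access tokens against the authorisation server, not a local dict):

```python
import hmac
import time

# Hypothetical token store mapping opaque tokens to (user, expiry time).
TOKENS = {"s3cr3t-opaque-token": ("alice", time.time() + 3600)}

def authenticate(auth_header: str) -> str:
    """Reject any request lacking a valid, unexpired Bearer token."""
    scheme, _, token = auth_header.partition(" ")
    if scheme != "Bearer" or not token:
        raise PermissionError("expected an Authorization: Bearer header")
    for stored, (user, expiry) in TOKENS.items():
        # Constant-time comparison avoids timing side channels.
        if hmac.compare_digest(stored, token) and time.time() < expiry:
            return user
    raise PermissionError("invalid or expired token")

print(authenticate("Bearer s3cr3t-opaque-token"))  # alice
```

The key properties are that tokens expire, can be revoked independently of the user's password, and are compared in constant time.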

Moreover, measures such as utilizing a Web Application Firewall (WAF), and monitoring API traffic in real time can aid in detecting malicious activities, ultimately minimizing the risk of compromise.