
Websites Engage in Deceptive Practices to Conceal the Scope of Data Collection and Sharing

 

Websites frequently conceal the extent to which they share our personal data, employing tactics to obscure their practices and prevent consumers from making fully informed decisions about their privacy. This lack of transparency has prompted governmental responses, such as the European Union's GDPR and California's CCPA, which require websites to seek permission before tracking user activity.

Despite these regulations, many users remain unaware of how their data is shared and manipulated. A recent study delves into the strategies employed by websites to hide the extent of data sharing and the reasons behind such obfuscation.

The research, focusing on online privacy regulations in Canada, reveals that websites often employ deception to mislead users and increase the difficulty of monitoring their activities. Notably, websites dealing with sensitive information, like medical or banking sites, tend to be more transparent about data sharing due to market constraints and heightened privacy sensitivity.

During the COVID-19 pandemic, as online activity surged, instances of privacy abuses also increased. The study shows that popular websites are more likely to obscure their data-sharing practices, potentially to maximize profits by exploiting uninformed consumers.

Third-party data collection by websites is pervasive, with numerous tracking mechanisms used for advertising and other purposes. This extensive surveillance raises concerns about privacy infringement and the commodification of personal data. Dark patterns and lack of transparency further exacerbate the issue, making it difficult for users to understand and control how their information is shared.

Efforts to protect consumer privacy, such as GDPR and CCPA, have limitations, as websites continue to manipulate and profit from user data despite opt-in and opt-out regulations. Consumer responses, including the use of VPNs and behavioral obfuscation, offer some protection, but the underlying information asymmetry remains a significant challenge.

EU AI Act to Impact US Generative AI Deployments

 



In a move set to reshape the scope of AI deployment, the European Union's AI Act, slated to come into effect in May or June, aims to impose stricter regulations on the development and use of generative AI technology. The Act, which categorises AI use cases based on associated risks, prohibits certain applications like biometric categorization systems and emotion recognition in workplaces due to concerns over manipulation of human behaviour. This legislation will compel companies, regardless of their location, to adopt a more responsible approach to AI development and deployment.

For businesses venturing into generative AI adoption, compliance with the EU AI Act will necessitate a thorough evaluation of use cases through a risk assessment lens. Existing AI deployments will require comprehensive audits to ensure adherence to regulatory standards and mitigate potential penalties. While the Act provides a transition period for compliance, organisations must gear up to meet the stipulated requirements by 2026.

This isn't the first time US companies have faced disruption from overseas tech regulations. Similar to the impact of the GDPR on data privacy practices, the EU AI Act is expected to influence global AI governance standards. By aligning with EU regulations, US tech leaders may find themselves better positioned to comply with emerging regulatory mandates worldwide.

Despite the parallels with GDPR, regulating AI presents unique challenges. The rollout of GDPR witnessed numerous compliance hurdles, indicating the complexity of enforcing such regulations. Additionally, concerns persist regarding the efficacy of fines in deterring non-compliance among large corporations. The EU's proposed fines for AI Act violations range from 7.5 million to 35 million euros, but effective enforcement will require the establishment of robust regulatory mechanisms.

Addressing the AI talent gap is crucial for successful implementation and enforcement of the Act. Both the EU and the US recognize the need for upskilling to address the complexities of AI governance. While US efforts have focused on executive orders and policy initiatives, the EU's proactive approach is poised to drive AI enforcement forward.

For CIOs preparing for the AI Act's enforcement, understanding the tools and use cases within their organisations is imperative. By conducting comprehensive inventories and risk assessments, businesses can identify areas of potential non-compliance and take corrective measures. It's essential to recognize that seemingly low-risk AI applications may still pose significant challenges, particularly regarding data privacy and transparency.

Companies like TransUnion are taking a nuanced approach to AI deployment, tailoring their strategies to specific use cases. While embracing AI's potential benefits, they exercise caution in deploying complex, less explainable technologies, especially in sensitive areas like credit assessment.

As the EU AI Act reshapes the regulatory landscape, CIOs must proactively adapt their AI strategies to ensure compliance and mitigate risks. By prioritising transparency, accountability, and ethical considerations, organisations can navigate the evolving regulatory environment while harnessing the transformative power of AI responsibly.



Hays Research Reveals Increasing AI Adoption in Scottish Workplaces


Artificial intelligence (AI) tool adoption in Scottish companies has significantly increased, according to a new survey by recruitment firm Hays. The study, which is based on a poll with almost 15,000 replies from professionals and employers—including 886 from Scotland—shows a significant rise in the percentage of companies using AI in their operations over the previous six months, from 26% to 32%.

Mixed Attitudes Toward the Impact of AI on Jobs

Despite the upsurge in AI technology, the study reveals that professionals have differing opinions on how AI will affect their jobs. Even though 80% of Scottish professionals do not yet use AI in their work, 21% think that AI tools will improve their ability to do their jobs. Interestingly, during the past six months, the percentage of professionals expecting a negative impact has dropped from 12% to 6%.

However, the study also indicates concern among employees, with 61% believing that their companies are not doing enough to prepare them for the expanding use of AI in the workplace. This trend raises questions about the workforce's readiness to adopt and take full advantage of AI technologies. Hays business director for technology Justin Black stresses the value of giving people enough training opportunities to advance their skills and become proficient with new technologies.

Barriers to AI Adoption 

One noteworthy challenge impeding the mass adoption of AI is the reluctance of enterprises to expose their data and intellectual property to AI systems, citing concerns linked to compliance with the General Data Protection Regulation (GDPR). This reluctance is also influenced by concerns about trust. According to Black, the demand for AI capabilities has outpaced the supply of skilled individuals in the sector, highlighting a skills deficit in the AI space.

The reluctance to expose sensitive data and intellectual property to AI systems stems from concerns about GDPR compliance: businesses are cautious about the possible dangers of disclosing confidential data to AI systems. Professionals' scepticism about the security and dependability of AI systems adds to the trust issues.

The study suggests that, as AI becomes a crucial element of Scottish workplaces, employers should prioritise tackling skills shortages, encouraging employee readiness, and improving communication about AI integration. By doing this, businesses can ease concerns about GDPR and trust while fostering an environment that allows employees to take full advantage of AI technology's benefits.

Unlocking Data Privacy: Mine's No-Code Approach Nets $30 Million in Funding

 


An Israeli data privacy company, Mine Inc., has announced that it has completed a $30 million Series B fundraising round led by Battery Ventures and PayPal Ventures, together with the investment arm of US insurance giant Nationwide. Existing investors Gradient Ventures (Google's AI fund), Saban Ventures, MassMutual Ventures, and Headline Ventures also joined the round.

Using artificial intelligence, and specifically natural language processing, Mine can scan your inbox to identify which companies hold your personal information and let you request the deletion of any data they have no reason to hold.

The product tapped into widespread concerns about GDPR and sparked a lot of interest: initially free, it racked up about 5 million users in just a few weeks. The company then expanded its user base to business users and enterprise applications.

By scanning the user's inbox and log-in activity, Mine can work out all of the places where customer or business data ends up being held and used. This struck a chord with the privacy officers who are responsible for keeping companies in compliance with privacy rules.

Some 150 clients use Mine's data privacy and disclosure solutions to protect their data, including Reddit, HelloFresh SE, Fender, Guesty, Snappy, and Data.ai. The new capital will fund ongoing operations in the coming years and the expansion of the company's global operations, including bringing its MineOS B2B platform to the US and extending its offerings to the enterprise market.

With 35 employees, the company is in the process of hiring dozens of developers, QA professionals, and machine learning professionals to be based in Israel. Founded in 2019 and headquartered in Tel Aviv, Mine was started by CEO Gal Ringel, CTO Gal Golan, and CPO Kobi Nissan.

Since the company started, its vision has been to give companies and individuals easy access to privacy compliance. Over the past two years that vision has sharpened around the MineOS B2B platform, which aims to give companies a Single Source of Truth (SSOT) for data, enabling them to identify which systems, assets, and data they hold within their organization.

This process, known as Data Mapping, is one of the most important building blocks in any organization, serving as a basis for a variety of teams, including legal and privacy, data, engineering, IT, and security teams. As Ringel said, "The funding was completed at the end of the second week of October, just one week after the war had begun."

Because of the difficult market conditions of the past year, Ringel added, the company has been managed carefully and with discipline since March of last year, reducing monthly expenses while significantly boosting revenue to an annualized rate of millions of dollars (4x growth in 2023) – extraordinary metrics that attracted many investors to the company.

MineOS, the company's B2B platform, now serves hundreds of enterprise customers, including Reddit, HelloFresh SE, FIFA, and Data.ai, and the $30 million Series B will fund its continued development. The round was co-led by Battery Ventures and PayPal Ventures (the payments giant's investment arm), with participation from all of the previous backers, including Saban Ventures, Gradient Ventures (Google's AI fund), MassMutual Ventures, and Headline Ventures.

Although Mine has not disclosed its valuation, co-founder and CEO Gal Ringel said in a recent interview that the company's valuation has tripled since its last fundraising in 2020. (The previous round was $9.5 million, raised when the company had only 100,000 users and no revenue.) Mine has now raised over $42.5 million in funding.

Part of the new funding will go toward sales development around Mine's current offerings, as well as further R&D. In line with this, Mine intends to launch two new products in Q1 to cater to the explosion of interest in artificial intelligence, one of them designed for data privacy officers preparing for regulators' plans to adopt AI laws in the near future. Mine is not alone in the data protection tools market, nor should it be.

Because the product sits close to other data protection activities, it is likely to face challenges from other companies in the same arena – for instance, OneTrust, which offers GDPR and consent-gate solutions for websites, and BigID, which provides a comprehensive set of compliance tools for data usage. Ringel said Mine has a strong competitive advantage over these because it is designed with an emphasis on being user-friendly, so it can be adopted and used even by people with no technical background.

TikTok Faces Massive €345 Million Penalty for Mishandling Kids' Data Privacy

 


TikTok has been fined €345 million (£296 million) for mishandling children's accounts, after failing to shield underage users' content from public view and violating EU data laws.

Ireland's data watchdog, which oversees the Chinese-owned video app across the EU, found that TikTok had violated multiple GDPR rules. Its investigation concluded that TikTok breached the GDPR by placing users' accounts on a public setting by default; failing to give transparent information to child users; allowing the "family pairing" option, intended to let a parent link to a child's account, to be used by any adult to enable direct messaging for those over 16; and failing to consider the risks to children placed on the platform in a public setting.

Children's personal information was not sufficiently protected by the popular Chinese-owned app because it made its account public by default and did not adequately address the risks associated with under-13 users being able to access its platform, according to a decision published by the Irish Data Protection Commission (DPC). 

In a statement released on Tuesday, the Irish Data Protection Commission (DPC) said the company violated eight articles of the GDPR, the EU's primary data protection law. These provisions cover several aspects of data processing, from the lawful use of personal data to protecting it from unlawful use.

Most children's accounts had their profile settings set to public by default, so that anyone could see any content they posted. The Family Pairing feature, intended to let a parent link to their older child's account and manage Direct Messages, in practice allowed any adult to pair with a child's account.

There was no warning to the child that they could be at risk from this feature. When users registered and posted videos, TikTok did not provide the information it should have to child users and instead resorted to what are known as "dark patterns" to nudge them toward more privacy-invasive options during the registration process.

Separately, the UK data regulator fined the company £12.7m in April after finding that TikTok had illegally processed the data of 1.4 million children under the age of 13 who were using its platform without their parents' consent.

TikTok has been accused of doing "very little or nothing, if anything" to protect the platform's users from illicit activity. According to TikTok, the investigation examined the privacy set-up the company had in place between 31 July and 31 December 2020, and it says it has since addressed all of the issues raised.

Since 2021, all new and existing TikTok accounts belonging to 13- to 15-year-olds have been set to private by default, meaning that only people the user has authorized can view their content. Additionally, the DPC pointed out that some aspects of its draft decision had been overruled by the European Data Protection Board (EDPB), the body made up of data protection regulators from EU member states.

The German regulator proposed a finding that the use of "dark patterns" – the term for deceptive website and app design that steers users toward certain behaviours or choices – violated the GDPR's provisions on the fair processing of personal data, and the DPC was required to include that finding in its decision.

TikTok has been accused of unlawfully making the accounts of users aged 13 to 17 public by default between July and December 2020, which effectively meant anyone could watch and comment on the videos those individuals posted, according to the Irish privacy regulator.

Moreover, the company failed to adequately assess the risks associated with users under the age of 13 gaining access to its platform. The report also found that TikTok still nudges teenagers who join the platform into sharing their videos and accounts publicly through manipulative pop-up prompts.

The regulator has ordered the company to change these misleading designs, also known as dark patterns, within three months to prevent further harm to users. In the second half of 2020, minors' accounts could also be linked to unverified adult accounts.

The video platform was also found to have failed to explain to teenagers, before their content and accounts were made public, what the consequences of that publicity would be. The board of European regulators also said it had serious doubts about the effectiveness of TikTok's measures to keep under-13 users off its platform in the latter half of 2020.

The EDPB found that TikTok was failing to check the ages of existing users "in a sufficiently systematic manner" and that the mechanisms it did have could be easily circumvented. However, because of a lack of information available during the cooperation process, the board was unable to make a finding of infringement on that point.

TikTok was fined £12.7 million (€14.8 million) by the United Kingdom's data regulator in April for allowing children under 13 to use the platform and for processing their data. The company also received a €750,000 fine from the Dutch privacy authority in 2021 for failing to provide a privacy policy in Dutch, a requirement meant to protect Dutch children.

New Cyber Threat: North Korean Hackers Exploit npm for Malicious Intent

 


GitHub has issued an updated threat warning about a new North Korean attack campaign that uses malicious dependencies in npm packages to compromise victims. A blog post published by the development platform earlier this week said the attacks targeted employees of blockchain, cryptocurrency, online gambling, and cybersecurity companies.

Alexis Wales, VP of GitHub security operations, said that attacks often begin when attackers pretend to be developers or recruiters, impersonating them with fake GitHub, LinkedIn, Slack, or Telegram profiles. There are cases in which legitimate accounts have been hijacked by attackers. 

Another highly targeted campaign against the npm package registry aims to entice developers into installing malicious third-party modules. A significant attack wave uncovered in June has since been linked to North Korean threat actors by the supply chain security firm Phylum, according to Hacker News, and the latest activity appears to exhibit similar behaviours.

Between August 9 and August 12, 2023, nine packages were identified as having been uploaded to npm: ws-paso-jssdk, pingan-vue-floating, srm-front-util, cloud-room-video, progress-player, ynf-core-loader, ynf-core-renderer, ynf-dx-scripts, and ynf-dx-webpack-plugins. The attackers typically initiate a conversation with the target and then attempt to move the conversation to another platform.

The attack chain begins with a post-install hook in package.json that runs index.js as soon as the package is installed. That script launches a daemon process named "Android" by relying on the legitimate pm2 module as a dependency, which in turn executes a JavaScript file named app.js.
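
To make the lifecycle-hook mechanism concrete, here is a minimal defensive sketch (my own illustration, not code from the GitHub or Phylum reports) that lists which packages in a local node_modules directory declare install-time scripts – the same hook this campaign reportedly abuses.

```typescript
// Sketch: flag npm packages in node_modules that declare install-time
// lifecycle scripts (preinstall/install/postinstall). Such scripts run
// arbitrary code the moment a package is installed, which is how the
// malicious index.js in this campaign is reported to get executed.
import { readdirSync, readFileSync, existsSync } from "fs";
import { join } from "path";

const LIFECYCLE = ["preinstall", "install", "postinstall"];

function flagInstallScripts(root = "node_modules"): void {
  if (!existsSync(root)) return;
  for (const entry of readdirSync(root)) {
    // Recurse one level into scoped directories such as "@types".
    if (entry.startsWith("@")) {
      flagInstallScripts(join(root, entry));
      continue;
    }
    const manifestPath = join(root, entry, "package.json");
    if (!existsSync(manifestPath)) continue;
    const pkg = JSON.parse(readFileSync(manifestPath, "utf8"));
    const hooks = LIFECYCLE.filter((h) => pkg.scripts && pkg.scripts[h]);
    if (hooks.length > 0) {
      // A hit is not proof of malice, but it is code that already ran at install time.
      console.log(`${entry}: ${hooks.map((h) => `${h} -> ${pkg.scripts[h]}`).join(", ")}`);
    }
  }
}

flagInstallScripts();
```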

app.js is crafted to initiate encrypted two-way communications with a remote server – "ql.rustdesk[.]net", a spoofed domain posing as the legitimate RustDesk remote desktop software – 45 seconds after the package is installed, sending details about the compromised host.

The malware then pings the server every 45 seconds to check for further instructions, which are decoded and executed in turn. According to the Phylum Research Team, the attackers appear to be monitoring the GUIDs of infected machines and selectively sending additional payloads, in the form of encoded JavaScript, to machines of interest.

In the past few months, several typosquatted versions of popular Ethereum packages have also been discovered in the npm repository that attempt to make HTTP requests carrying wallet encryption keys to a Chinese server at wallet.cba123[.]cn.

Additionally, the highly popular NuGet package Moq has come under fire after new versions released last week included a dependency named SponsorLink, which extracted the SHA-256 hash of developers' email addresses from local Git configurations and sent them to a cloud service without their knowledge.

The controversial change, which raised GDPR compliance concerns, was rolled back in version 4.20.2. Even so, Bleeping Computer reported that Amazon Web Services (AWS) had withdrawn its support for the project, which may have done serious damage to the project's reputation.

There are also reports that organizations are increasingly vulnerable to dependency confusion attacks, which can lead developers to unwittingly introduce malicious or vulnerable code into their projects, resulting in large-scale supply chain attacks.

There are several mitigations that you can use to prevent dependency confusion attacks. For example, we recommend publishing internal packages under scopes assigned to organizations and setting aside internal package names as placeholders in the public registry to prevent misuse of those names.
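
As a rough illustration of that scoping advice, the sketch below (the "@acme/" scope and the internal name prefixes are invented for the example) fails a build when a dependency that follows an internal naming convention is not pulled from the organisation's npm scope – exactly the kind of name a dependency-confusion attacker would register publicly.

```typescript
// Sketch: fail the build if a dependency that follows our internal naming
// convention is not published under the organisation's npm scope. An unscoped
// internal name is what a dependency-confusion attacker would squat on the
// public registry.
import { readFileSync } from "fs";

const ORG_SCOPE = "@acme/";                  // assumed organisation scope
const internalPrefixes = ["acme-", "acme_"]; // assumed internal naming conventions

const pkg = JSON.parse(readFileSync("package.json", "utf8"));
const deps: Record<string, string> = { ...pkg.dependencies, ...pkg.devDependencies };

const suspicious = Object.keys(deps).filter(
  (name) => !name.startsWith(ORG_SCOPE) && internalPrefixes.some((p) => name.startsWith(p))
);

if (suspicious.length > 0) {
  console.error("Unscoped internal-looking dependencies:", suspicious.join(", "));
  process.exit(1);
}
```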

The recent North Korean campaign exploiting npm packages is an unmistakable reminder that the threat landscape keeps evolving and that increasingly sophisticated tactics are being deployed. Safeguarding sensitive data and preventing further breaches requires proactive, vigilant measures. To reduce the risks posed by these intricate cyber tactics, organizations need to prioritize identity verification, package validation, and the management of internal packages.

Safeguarding Your Work: What Not to Share with ChatGPT

 

ChatGPT, a popular AI language model developed by OpenAI, has gained widespread usage in various industries for its conversational capabilities. However, it is essential for users to be cautious about the information they share with AI models like ChatGPT, particularly when using it for work-related purposes. This article explores the potential risks and considerations for users when sharing sensitive or confidential information with ChatGPT in professional settings.
Potential Risks and Concerns:
  1. Data Privacy and Security: When sharing information with ChatGPT, there is a risk that sensitive data could be compromised or accessed by unauthorized individuals. While OpenAI takes measures to secure user data, it is important to be mindful of the potential vulnerabilities that exist.
  2. Confidentiality Breach: ChatGPT is an AI model trained on a vast amount of data, and there is a possibility that it may generate responses that unintentionally disclose sensitive or confidential information. This can pose a significant risk, especially when discussing proprietary information, trade secrets, or confidential client data.
  3. Compliance and Legal Considerations: Different industries and jurisdictions have specific regulations regarding data privacy and protection. Sharing certain types of information with ChatGPT may potentially violate these regulations, leading to legal and compliance issues.

Best Practices for Using ChatGPT in a Work Environment:

  1. Avoid Sharing Proprietary Information: Refrain from discussing or sharing trade secrets, confidential business strategies, or proprietary data with ChatGPT. It is important to maintain a clear boundary between sensitive company information and AI models.
  2. Protect Personally Identifiable Information (PII): Be cautious when sharing personal information, such as social security numbers, addresses, or financial details, as these can be targeted by malicious actors or result in privacy breaches (see the sketch after this list).
  3. Verify the Purpose and Security of Conversations: If using a third-party platform or integration to access ChatGPT, ensure that the platform has adequate security measures in place. Verify that the conversations and data shared are stored securely and are not accessible to unauthorized parties.
  4. Be Mindful of Compliance Requirements: Understand and adhere to industry-specific regulations and compliance standards, such as GDPR or HIPAA, when sharing any data through ChatGPT. Stay informed about any updates or guidelines regarding the use of AI models in your particular industry.
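
As a rough illustration of point 2 above, the sketch below strips a few obvious identifiers from a prompt before it is sent to an external model. The regular expressions are illustrative only, not an exhaustive or officially sanctioned PII filter.

```typescript
// Sketch: redact obvious PII from a prompt before sending it to an external
// AI service. The patterns catch only a few common identifiers and are no
// substitute for an organisational data-handling policy.
function redactPII(text: string): string {
  return text
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]")          // US social security numbers
    .replace(/\b[\w.+-]+@[\w-]+\.[\w.-]+\b/g, "[EMAIL]") // email addresses
    .replace(/\b(?:\d[ -]?){13,16}\b/g, "[CARD]");       // likely payment card numbers
}

const prompt = "Summarise the complaint from jane.doe@example.com about card 4111 1111 1111 1111.";
console.log(redactPII(prompt));
// -> "Summarise the complaint from [EMAIL] about card [CARD]."
```
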
While ChatGPT and similar AI language models offer valuable assistance, it is crucial to exercise caution and prudence when using them in professional settings. Users must prioritize data privacy, security, and compliance by refraining from sharing sensitive or confidential information that could potentially compromise their organizations. By adopting best practices and maintaining awareness of the risks involved, users can harness the benefits of AI models like ChatGPT while safeguarding their valuable information.

Promoting Trust in Facial Recognition: Principles for Biometric Vendors

 

Facial recognition technology has gained significant attention in recent years, with its applications ranging from security systems to unlocking smartphones. However, concerns about privacy, security, and potential misuse have also emerged, leading to a call for stronger regulation and ethical practices in the biometrics industry. To promote trust in facial recognition technology, biometric vendors should embrace three key principles that prioritize privacy, transparency, and accountability.
  1. Privacy Protection: Respecting individuals' privacy is crucial when deploying facial recognition technology. Biometric vendors should adopt privacy-centric practices, such as data minimization, ensuring that only necessary and relevant personal information is collected and stored. Clear consent mechanisms must be in place, enabling individuals to provide informed consent before their facial data is processed. Additionally, biometric vendors should implement strong security measures to safeguard collected data from unauthorized access or breaches.
  2. Transparent Algorithms and Processes: Transparency is essential to foster trust in facial recognition technology. Biometric vendors should disclose information about the algorithms used, ensuring they are fair, unbiased, and capable of accurately identifying individuals across diverse demographic groups. Openness regarding the data sources and training datasets is vital, enabling independent audits and evaluations to assess algorithm accuracy and potential biases. Transparency also extends to the purpose and scope of data collection, giving individuals a clear understanding of how their facial data is used.
  3. Accountability and Ethical Considerations: Biometric vendors must demonstrate accountability for their facial recognition technology. This involves establishing clear policies and guidelines for data handling, including retention periods and the secure deletion of data when no longer necessary. The implementation of appropriate governance frameworks and regular assessments can help ensure compliance with regulatory requirements, such as the General Data Protection Regulation (GDPR) in the European Union. Additionally, vendors should conduct thorough impact assessments to identify and mitigate potential risks associated with facial recognition technology.
Biometric businesses must address concerns and foster trust in their goods and services as facial recognition technology spreads. These vendors can aid in easing concerns around facial recognition technology by adopting values related to privacy protection, openness, and accountability. Adhering to these principles can not only increase public trust but also make it easier to create regulatory frameworks that strike a balance between innovation and the defense of individual rights. The development of facial recognition technology will ultimately be greatly influenced by the moral and ethical standards upheld by the biometrics sector.






NHS Trusts Share Private Information With Facebook

 


A report published by The Observer has revealed that NHS trusts have been sharing private information with Facebook. The newspaper's investigation found that the websites of 20 NHS trusts were using a covert tracking tool to collect browsing data that was shared with the tech giant – a major breach of patient privacy.

The trusts had assured people that they would not collect personal information about them, and no consent was obtained. Yet data was collected showing the pages people visited, the buttons they clicked, and the keywords they searched for.

As part of the system, the user's IP address was matched with the data and often the data was associated with their Facebook account details. 

Once matched with medical information, this data could reveal a person's medical conditions, doctor's appointments, and the treatments they have received.

Facebook might then use it for advertising campaigns as part of its business strategy.

News of the Meta Pixel findings this weekend caused alarm across the NHS trust community, with 17 of the 20 trusts that had used the tracking tool taking swift action, some even apologising for the incident.

How does a Meta Pixel tracker work? What is it all about? 

Meta's advertising tracking tool allows companies to track visitor activity on their web pages and gain a deeper understanding of their actions. 

The Meta Pixel has been identified on 33 hospital websites where, whenever someone clicks a button to make an appointment, Facebook receives "a packet of data" from the pixel. That data may be associated with an IP address, which in turn can often be linked to a specific individual or household.
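
For readers unfamiliar with how such a click reaches Facebook, the sketch below shows roughly what a page with the Meta Pixel installed might do. The pixel id, button selector, and event parameters are hypothetical and are not taken from any NHS website.

```typescript
// Illustrative sketch only: how a page with the Meta Pixel installed could
// report a click on an appointment button. The browser request automatically
// carries the visitor's IP address and any Facebook cookies, which is what
// allows the event to be tied back to an individual or household.
declare function fbq(command: string, eventOrId: string, params?: Record<string, unknown>): void;

fbq("init", "000000000000000");   // hypothetical pixel id embedded in the page
fbq("track", "PageView");         // fired on every page load

document.querySelector("#book-appointment")?.addEventListener("click", () => {
  // The event name, page URL and button label travel to Meta's servers.
  fbq("track", "Schedule", { page: location.pathname, button: "Book appointment" });
});
```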

It has been reported that eight of the trusts have apologised to their patients. Several trusts were unaware they were sending patient data to Facebook at all, having installed the tracking pixels only to monitor recruitment and charity campaigns. The Information Commissioner's Office (ICO) has nevertheless opened an investigation, and privacy experts have collectively voiced their concerns.

As a result of the research findings, the Meta Pixel has been removed from the Friedrich Hospital website. 

Piedmont Healthcare used the Meta Pixel to collect data about patients' upcoming doctor appointments through its patient portal, including patients' names and the dates and times of appointments.

Privacy experts have expressed concern over these findings, which they say indicate widespread potential breaches of patient confidentiality and data protection that are, in their view, "completely unacceptable".

The data may include special category health information, which is legally protected. As defined by the law, this covers information relating to an individual's health status, such as medical conditions, tests, and treatments.

It is impossible to determine exactly how the data is used once it reaches Facebook's servers. The company states that the submission of sensitive medical data is prohibited and that it has filters in place to weed out such information if it is received accidentally.

As several of the trusts involved explained, they originally implemented the tracking pixel to monitor recruitment or charity campaigns and had no idea that patient information was being sent to Facebook as part of that process.

BHNHST, a healthcare trust in Buckinghamshire, has removed the tracking tool from its website, commenting that the presence of the Meta Pixel was an unintentional error on the organisation's part.

When BHNHST users accessed a patient handbook about HIV medication, the trust appears to have shared information with Facebook, including the name of the drug, the trust's name, the user's IP address, and details of their Instagram account.

In its privacy policy, the trust has made it explicitly clear that any consumer health information collected by it will not be used for marketing purposes without the consumer's explicit consent. 

Alder Hey Children's Trust in Liverpool likewise shared information with Facebook each time a user accessed a webpage relating to a sexual development issue, a crisis mental health service, or an eating disorder.

Professor David Leslie, director of ethics at the Alan Turing Institute, warned that the transfer of patient information to third parties by the National Health Service would erode the "delicate relationship of trust" between the NHS and its patients: "When accessing an NHS website, we have a reasonable expectation that our personal information will not be extracted and shared with third-party advertising companies or companies that might use it to target ads or link our personal information to health conditions."

According to Wolfie Christl, a data privacy expert who researches the ad tech industry, "This should have been stopped long ago by regulators, rather than what is happening now. This is unacceptable in any way, and it must stop immediately as it is irresponsible and negligent."

The 20 NHS trusts in England found to be using the tracking tool together cover a population of 22 million, stretching from Devon to the Pennines. Several had used it for many years before it was discontinued.

Moreover, Meta is facing litigation over allegations that it knowingly received sensitive health information – including information taken from health portals – and did not take steps to prevent it. Several plaintiffs have filed lawsuits against Meta, alleging it violated their medical privacy by intercepting and selling their individually identifiable health information obtained from its partner websites.

Meta said the trusts had been contacted to remind them of its policies, which prohibit the sending of health information to Meta.

"Our corporate communication department educates advertisers on the proper use of business tools to avoid this kind of situation," the spokesperson added. The company said it was the website owner's responsibility to ensure the site complied with all applicable data protection laws and that consent was obtained before any personal information was sent.

Questions have been raised about the effectiveness of the filters designed to weed out potentially sensitive data, and about what types of information the company would actually block from hospital websites. Meta also declined to explain why NHS trusts were able to send the data in the first place.

According to the company, its business software tools help advertisers grow their business. Several guides on its website describe how it can display ads that "might be of interest" to users by leveraging data collected by those tools: someone who looks at travel websites, for instance, might later see ads for hotel deals.

According to the DPC, Meta also failed to comply with part of the GDPR (General Data Protection Regulation) by moving Facebook users' data from one country to another without permission.

Meta Ireland was handed a record fine and ordered to suspend any future transfers of personal data to the US within five months. Meta has called the fine unjustified.

Criminal Digitisation: How UK Police Forces Use Technology

 


Researchers and law enforcement communities have yet to fully understand the scope and implications of cybercrime, even though it is a growing issue. According to reports issued by the UK government, victims of cybercrime are unlikely to report crimes immediately because of a perception that the police are ill-equipped to deal with them. The same reports also identify a lack of cybercrime knowledge among police officers.

There have been numerous recent reports of people falling victim to online fraudsters despite being cautious. Marc Deruelle almost became one of them. Eager to visit Liverpool this May for the 2023 Eurovision Song Contest, he didn't immediately suspect anything when, a few days after booking accommodation online, someone claiming to be the receptionist contacted him via WhatsApp.

Fortunately, Deruelle's bank noticed something was wrong and refused, at the last moment, to permit an £800 transfer to Uganda. Other victims have not been so lucky.

In 2022, a woman from North Wales sent almost £2,000 over WhatsApp to a scammer who pretended to be her daughter, supposedly based in Nevada. And Jennifer, a mother of two from North Lanarkshire, Scotland, told STV News how scammers coerced her into taking out loan after loan to invest in a bogus cryptocurrency scheme advertised on Facebook; she ultimately owed £150,000 and had to sell her home to repay the debt.

In its Cyber Crime Assessment 2016, the NCA highlighted the need for stronger partnerships between law enforcement and the private sector to fight cybercrime. Although reported cybercrime is thought to represent only a fraction of the true total, the National Crime Agency found that cybercrime had overtaken all other types of crime, with cyber-enabled fraud accounting for 36 percent of all reported crime and computer misuse a further 17 percent.

There is no denying that cybercrime reports have been growing in the U.K. One explanation may be that the British are becoming better at detecting this kind of crime. The report concludes that there is increasing evidence of cybercrime in the U.K., as briefly covered in the most recent Crime Survey for England and Wales conducted by the U.K. Office for National Statistics last year.

As of 2022, fraud accounts for more than 40% of all crime in England and Wales, making it the most common offence committed in the country.

Moore believes the government had the right intentions when it launched Action Fraud in 2009, but it did not realize how fast fraud would grow. As a result, Moore and Hamilton believe, law enforcement has lacked the funds and expertise to keep up with cybercrime's rapidly evolving pace. Public agencies, particularly rural police departments, have long struggled to recruit and retain cybersecurity professionals: there is not much money in policing or local government, and as an IT professional working in cybersecurity, why would you stay in the police force when you could join the private sector?

Despite the growing scale and complexity of cybercrime as well as the intensifying attacks, the report concludes that "so far, the visible financial losses and damage do not have the potential to significantly impact the value of a company's equity over the long run." Cyber attacks on businesses in the UK have not been as damaging and as publicly visible as the ones that were carried out on the Target retail chain in the United States. 

A large, multinational European company could probably conceal a breach of the same magnitude as the 2013 Target breach. Generally speaking, European nations have not had the kind of data breach disclosure laws that are on the books in nearly every U.S. state and that force American companies to publicly acknowledge data breaches every week.

Under the European Union's new General Data Protection Regulation, companies that do business in Europe or with European customers will be required to provide notification if, as a result of a breach of security, personal data is accidentally or unlawfully destroyed, lost, altered, disclosed without authorisation, or accessed without authorisation.

As it stands, it may still be some time before British businesses start coming forward about data breaches, since the GDPR requirements do not take full effect until May 2018, although many firms are expected to implement them sooner.

A ChatGPT Bug Exposes Sensitive User Data

OpenAI's ChatGPT, an artificial intelligence (AI) language model that can produce text that resembles human speech, has a security flaw. The flaw enabled the model to unintentionally expose private user information, endangering the privacy of several users. This event serves as a reminder of the value of cybersecurity and the necessity for businesses to protect customer data in a proactive manner.

According to a report by Tech Monitor, the ChatGPT bug "allowed researchers to extract personal data from users, including email addresses and phone numbers, as well as reveal the model's training data." This means that not only was users' personal information exposed, but so was the sensitive data used to train the AI model. As a result, the incident raises concerns about the potential misuse of the leaked information.

The ChatGPT bug not only affects individual users but also has wider implications for organizations that rely on AI technology. As noted in a report by India Times, "the breach not only exposes the lack of security protocols at OpenAI, but it also brings forth the question of how safe AI-powered systems are for businesses and consumers."

Furthermore, the incident highlights the importance of adhering to regulations such as the General Data Protection Regulation (GDPR), which aims to protect individuals' personal data in the European Union. The ChatGPT bug violated GDPR regulations by exposing personal data without proper consent.

OpenAI has taken swift action to address the issue, stating that they have fixed the bug and implemented measures to prevent similar incidents in the future. However, the incident serves as a warning to businesses and individuals alike to prioritize cybersecurity measures and to be aware of potential vulnerabilities in AI systems.

As stated by Cyber Security Connect, "ChatGPT may have just blurted out your darkest secrets," emphasizing the need for constant vigilance and proactive measures to safeguard sensitive information. This includes regular updates and patches to address security flaws, as well as utilizing encryption and other security measures to protect data.

The ChatGPT bug highlights the need for ongoing vigilance and preventative measures to protect private data in the era of advanced technology. Prioritizing cybersecurity and staying informed of vulnerabilities is crucial for a safer digital environment as AI systems continue to evolve and play a prominent role in various industries.




ChatGPT: A Potential Risk to Data Privacy


ChatGPT, within two months of its release, seems to have taken the world by storm. The consumer application has reached 100 million active users, making it the fastest-growing product ever. Users are intrigued by the tool's sophisticated capabilities, though apprehensive about its potential to upend numerous industries.

One of the less discussed consequences in regard to ChatGPT is its privacy risk. Google only yesterday launched Bard, its own conversational AI, and others will undoubtedly follow. Technology firms engaged in AI development have certainly entered a race. 

The issue lies in the underlying technology, which is built on users' personal data.

300 Billion Words, How Many Are Yours? 

ChatGPT is based on a massive language model, which needs an enormous amount of data to operate and improve. The more data the model is trained on, the better it becomes at spotting patterns, anticipating what comes next, and producing plausible text.

OpenAI, the developer of ChatGPT, fed the model some 300 billion words systematically scraped from the internet – books, articles, websites, and posts – which inevitably includes online users' personal information, gathered without their consent.

Every blog post, product review, or comment on an article that exists or has ever existed online stands a good chance of having been consumed by ChatGPT.

What is the Issue? 

The data gathered to train ChatGPT is problematic for several reasons.

First, the data was collected without consent: none of us were ever asked whether OpenAI could use our personal information. This is a clear violation of privacy, especially when the data is sensitive and can be used to identify us, our loved ones, or our location.

Even when data is publicly available, its use can breach what we call contextual integrity, a cornerstone idea in discussions of privacy law: information about people should not be revealed outside the context in which it was originally produced.

Moreover, OpenAI offers no procedure for individuals to check whether the company stores their personal information or to request that it be deleted. The European General Data Protection Regulation (GDPR) guarantees this right, and whether ChatGPT complies with its requirements is still being debated.

This "right to be forgotten" is particularly important in situations involving information that is inaccurate or misleading, which seems to be a regular occurrence with ChatGPT.

Furthermore, the scraped data that ChatGPT was trained on may be confidential or protected by copyright. For instance, the tool replicated the opening few chapters of Joseph Heller's copyrighted book Catch-22. 

Finally, OpenAI did not pay for the internet data it downloaded; its creators – individuals, website owners, and businesses – were not compensated. This is especially notable in light of OpenAI's recent US$29 billion valuation, more than double its value in 2021.

OpenAI has also recently announced ChatGPT Plus, a paid subscription plan that gives users ongoing access to the tool, faster response times, and priority access to new features. This approach is anticipated to help generate $1 billion in revenue by 2024.

None of this would have been possible without the usage of ‘our’ data, acquired and utilized without our consent. 

Time to Consider the Issue? 

According to some professionals and experts, ChatGPT is a "tipping point for AI" – the realisation of a technological advance that could revolutionise the way we work, learn, write, and even think.

Despite its potential advantages, we must keep in mind that OpenAI is a private, for-profit company whose objectives and commercial imperatives may not always align with the needs of the wider community.

The privacy hazards associated with ChatGPT should serve as a caution. And as users of an increasing number of AI technologies, we need to exercise extreme caution when deciding what data to provide such tools with.  

Microsoft to Roll Out “Data Boundary” for its EU Customers from Jan 1


According to an announcement made by Microsoft Corp on Thursday, its European Union cloud customers will finally be able to process and store parts of their data in the region, starting from January 1.

This phased rollout of its “EU data boundary” will apparently be applied to all of its core cloud services - Azure, Microsoft 365, Dynamics 365 and Power BI platform. 

Since the EU introduced the General Data Protection Regulation (GDPR) in 2018 to protect user privacy, large businesses have grown increasingly anxious about the international flow of consumer data.

The European Commission, which serves as the executive arm of the EU, is developing proposals to safeguard the privacy of European customers whose data is transferred to the United States.

"As we dived deeper into this project, we learned that we needed to take a more phased approach," says Microsoft's Chief Privacy Officer Julie Brill. "The first phase will be customer data. And then as we move into the next phases, we will be moving logging data, service data and other kinds of data into the boundary."

The second phase will reportedly be completed by the end of 2023, and the third in 2024, she added.

Microsoft runs more than a dozen datacenters across European countries, including France, Germany, Spain and Switzerland.

For large corporations, data storage has become so vast and so widely distributed across countries that it is now a challenge to understand where their data is stored and whether it complies with regulations like GDPR.

"We are creating this solution to make our customers feel more confident and to be able to have clear conversations with their regulators on where their data is being processed as well as stored," says Brill. 

Moreover, Microsoft has previously said that it would challenge government requests for customer data and that it would financially compensate any customer whose data it shared in breach of GDPR.

Twitter's Brussels Staff Sacked by Musk 

After a conflict over how the social network's content should be regulated in the Union, Elon Musk has shut down Twitter's entire Brussels office.

Twitter's connection with the European Union, which has some of the most robust regulations controlling the digital world and is frequently at the forefront of global regulation in the sector, may be strained by the closing of the company's Brussels center. 

Platforms like Twitter are required by one guideline to remove anything that is prohibited in any of the EU bloc's member states. For instance, tweets influencing elections or content advocating hate speech would need to be removed in jurisdictions where such communication is prohibited. 

Another obligation is that social media sites like Twitter must demonstrate to the European Commission, the executive arm of the EU, that they are making a sufficient effort to stop the spread of content that is not illegal but may be damaging. Disinformation falls under this category. This summer, businesses will need to demonstrate how they are handling such positions. 

Musk will need to abide by the GDPR, a set of ground-breaking EU data protection laws that mandate Twitter have a data protection officer in the EU. 

The present proposal forbids the use of algorithms that have been shown to be biased against individuals, which may affect Twitter's face-cropping tools, which have been shown to favour young, slim women.

Twitter might also be obligated to monitor private conversations for grooming or images of child sexual abuse under the EU's Child Sexual Abuse Materials proposal. In the EU, there is still discussion about them.

In order to comply with the DSA, Twitter will need to put in a lot more effort, such as creating a system that allows users to flag illegal content with ease and hiring enough moderators to examine the content in every EU member state.

Twitter won't have to publish a risk analysis until next summer, but it will have to disclose its user count in February, which initiates the commission oversight process.

Two lawsuits that might hold social media corporations accountable for their algorithms that encourage dangerous or unlawful information are scheduled for hearings before the US Supreme Court. This might fundamentally alter how US businesses regulate content. 

CNIL Fines Clearview AI 20 million Euros for Illegal Use of Facial Recognition Technology

 

France’s data protection authority (CNIL) has imposed a €20 million fine on Clearview AI, the controversial facial recognition firm, for illegally gathering and using data belonging to French residents without their knowledge.

CNIL imposed the maximum financial penalty the company could receive as per GDPR Article 83 and also ordered Clearview AI to stop all data collection activities and delete the data gathered on French citizens or face an additional €100,000 fine per day. 

“Clearview AI had two months to comply with the injunctions formulated in the formal notice and to justify them to the CNIL. However, it did not provide any response to this formal notice,” the CNIL stated. 

“The chair of the CNIL, therefore, decided to refer the matter to the restricted committee, which is in charge of issuing sanctions. On the basis of the information brought to its attention, the restricted committee decided to impose a maximum financial penalty of 20 million euros, according to article 83 of the GDPR.” 

Clearview AI scrapes publicly available images and videos of people from websites and social media platforms and associates them with identities. Using this technique, the company has collected over 20 billion images to feed a biometric database of facial scans and identities. 

The US-based firm then sells access to this database to individuals, law enforcement agencies, and organizations around the globe. 

In Europe, the General Data Protection Regulation (GDPR) dictates that any data collection needs to be clearly communicated to the people and requires consent. Even if Clearview AI is not employing leaked data and the company does not spy on people, individuals are unaware that their images are being used for identification by Clearview AI customers. 

CNIL's latest decision comes after a two-year investigation initiated in May 2020, when the French authority received complaints from individuals about Clearview facial recognition software. Another warning about biometric profiling came from the Privacy International organization in May 2021. 

According to the CNIL, it found Clearview AI was guilty of multiple violations of the General Data Protection Regulation (GDPR). The breaches include unlawful processing of private data (GDPR Article 6), individuals' rights not being respected (Articles 12, 15, and 17), and lack of cooperation with the data protection authority (Article 31). 

The CNIL judgment is the third decision against Clearview's activities, after authorities in Italy and Greece fined the firm in March and July, respectively, for unlawfully gathering biometric data.

Here's How BlackMatter Ransomware is Linked With LockBit 3.0

 

Cybersecurity researchers have discovered similarities between LockBit 3.0, the most recent version of the LockBit ransomware, and BlackMatter. 

LockBit 3.0 was released in June 2022, introducing a brand-new leak site and the first ransomware bug bounty program, and adding Zcash as a cryptocurrency payment option.

"The encrypted filenames are appended with the extensions 'HLJkNskOq' or '19MqZqZ0s' by the ransomware, and its icon is replaced with a.ico file icon. The ransom note then appears, referencing 'Ilon Musk'and the General Data Protection Regulation of the European Union (GDPR)," researchers from Trend Micro stated.

The ransomware changes the machine's wallpaper once the infection process is finished to alert the user to the attack. While debugging the LockBit 3.0 sample, Trend Micro researchers found that several of its code snippets were lifted from the BlackMatter ransomware.

Identical ransomware threats

The researchers highlight the similarities in the two families' privilege-escalation and API-harvesting techniques. LockBit 3.0 performs API harvesting by hashing the API names exported by a DLL and comparing the hashes against a list of the APIs the ransomware requires. The procedure is the same as BlackMatter's: the publicly available script for renaming BlackMatter's APIs also works on LockBit 3.0.
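
To illustrate the general technique, here is a minimal Python sketch of hash-based API resolution. The rotate-and-add hash and the API names below are generic stand-ins, not the actual algorithm or hash list used by LockBit 3.0 or BlackMatter.

```python
# Minimal sketch of hash-based API harvesting. The ROR-13-style hash is a
# generic stand-in, NOT the actual algorithm used by LockBit 3.0 or BlackMatter.

def ror32(value: int, count: int) -> int:
    """Rotate a 32-bit value right by `count` bits."""
    count %= 32
    return ((value >> count) | (value << (32 - count))) & 0xFFFFFFFF

def hash_api_name(name: str) -> int:
    """Hash an exported API name one byte at a time."""
    h = 0
    for byte in name.encode("ascii"):
        h = (ror32(h, 13) + byte) & 0xFFFFFFFF
    return h

# The binary would carry only these hashes instead of plain-text API names;
# here they are computed from the names purely for illustration.
WANTED_HASHES = {hash_api_name(n) for n in ("VirtualAlloc", "CreateRemoteThread")}

def resolve_wanted(exported_names):
    """Return the exports whose hash appears in the wanted list."""
    return [name for name in exported_names if hash_api_name(name) in WANTED_HASHES]

if __name__ == "__main__":
    exports = ["CloseHandle", "VirtualAlloc", "WriteFile", "CreateRemoteThread"]
    print(resolve_wanted(exports))  # ['VirtualAlloc', 'CreateRemoteThread']
```

Because both families resolve APIs this way, a rename script written for one keeps working on the other once the hash routine and the list of wanted hashes are known.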

The most recent version of LockBit also checks the UI language of the victim machine and avoids infecting machines whose language is that of a Commonwealth of Independent States (CIS) member state.

Both LockBit 3.0 and BlackMatter use Windows Management Instrumentation (WMI) via COM objects to delete shadow copies, whereas LockBit 2.0 used vssadmin.exe for the same task, the experts note.
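
The WMI-via-COM mechanism can be illustrated with a benign, read-only Python sketch that merely lists shadow copies through the same interface; it requires pywin32 on a Windows host and deletes nothing. Defenders often watch for this kind of WMI activity, since shadow copy tampering commonly precedes ransomware encryption.

```python
# Read-only illustration of WMI access via COM objects: connect to the WMI
# service and enumerate Win32_ShadowCopy instances. Requires pywin32 (win32com)
# on Windows; nothing is modified or deleted.
import win32com.client

locator = win32com.client.Dispatch("WbemScripting.SWbemLocator")
service = locator.ConnectServer(".", "root\\cimv2")

for copy in service.ExecQuery("SELECT * FROM Win32_ShadowCopy"):
    print(copy.ID, copy.InstallDate, copy.VolumeName)
```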

The findings coincide with LockBit becoming one of the most active ransomware-as-a-service (RaaS) operations of 2022, with the Italian Internal Revenue Service (L'Agenzia delle Entrate) being its most recent target.

The ransomware family accounted for 14% of intrusions, second only to Conti at 22%, according to Palo Alto Networks' 2022 Unit 42 Incident Response Report, which is based on 600 cases handled between May 2021 and April 2022.


The CNIL Penalized SLIMPAY €180,000 for a Data Violation

 

SLIMPAY is a licensed payment institution that provides customers with recurring payment options. Based in Paris, the subscription payment services firm was fined €180,000 by the French regulator CNIL after it was discovered that the firm had stored sensitive client data on a publicly accessible server for five years. 

The company bills itself as a leader in subscription recurring payments, and it offers an API and processing service to handle such payments on behalf of clients such as Unicef, BP, and OVO Energy, to mention a few. It appears to have conducted an internal research project on an anti-fraud mechanism in 2015, during which it collected personal data from its client databases for testing purposes. Real data is a useful way to confirm that development code is operating as intended before going live, but when dealing with sensitive data like bank account numbers, extreme caution must be exercised to avoid violating data protection requirements.

In 2020, the CNIL conducted an inquiry into SLIMPAY and discovered a number of security flaws in its handling of customers' personal data. On the basis of these findings, the restricted committee, the CNIL body in charge of imposing fines, concluded that the company had failed to comply with several GDPR requirements. Because the data subjects affected by the incident were spread across several European Union countries, the CNIL cooperated with four other supervisory authorities (Germany, Spain, Italy, and the Netherlands). 

THE BREACHES 

1.  Failure to comply with the requirement to provide a formal legal foundation for a processor's processing operations (Article 28 of the GDPR)

SLIMPAY's agreements with its service providers do not include all of the terms necessary to ensure that these processors agree to process personal data in accordance with the GDPR. 

2. Failure to protect personal data from unauthorized access (Article 32 of the GDPR) 

According to the restricted committee, access to the server was not subject to any security measures, and the server could be reached from the Internet between November 2015 and February 2020. The civil status information, postal and e-mail addresses, phone numbers, and bank account details (BIC/IBAN) of more than 12 million people were exposed. 

3. Failure to notify the data subjects of a personal data breach (Article 34 of the GDPR) 

The CNIL determined that the risk associated with the breach should be considered high due to the nature of the personal data, the number of people affected, the possibility of identifying the people affected by the breach from the accessible data, and the potential consequences for the people concerned.

Elastic Stack API Security Vulnerability Exposes Customer and System Data

 

The mis-implementation of Elastic Stack, a collection of open-source products that employ APIs for crucial data aggregation, search, and analytics capabilities, has resulted in severe vulnerabilities, according to a new analysis. Researchers from Salt Security uncovered flaws that allowed them to not only conduct attacks in which any user could extract critical customer and system data, but also to create a denial of service condition in which the system would become inaccessible. 

“Our latest API security research underscores how prevalent and potentially dangerous API vulnerabilities are. Elastic Stack is widely used and secure, but Salt Labs observed the same architectural design mistakes in almost every environment that uses it,” said Roey Eliyahu, co-founder and CEO, Salt Security. “The Elastic Stack API vulnerability can lead to the exposure of sensitive data that can be used to perpetuate serious fraud and abuse, creating substantial business risk.” 

According to the researchers, the vulnerability was originally detected while they were protecting one of Salt Security's customers, a large online business-to-consumer platform that provides API-based mobile applications and software as a service to millions of consumers around the world. 

 Officials at Salt Security were eager to point out that this isn't a flaw in Elastic Stack itself, but rather a problem with how it's being deployed. According to Salt Security's technical evangelist Michael Isbitski, the vulnerability isn't due to a fault in Elastic's software, but rather to "a common risky implementation set up by users." 

"The lack of awareness around potential misconfigurations, mis-implementations, and cluster exposures is largely a community issue that can be solved only through research and education," Isbitski said. API threats have increased 348% in the last six months, according to the Salt Security State of API Security Report, Q3 2021. The development of business-critical APIs, combined with the advent of exploitable vulnerabilities, reveals the substantial security flaws that occur from the integration of third-party apps and services.

According to Salt Labs researchers, the impact of the Elastic Stack implementation flaws rises considerably when an attacker chains multiple attacks together. Attackers can exploit the lack of authorization between front-end and back-end services to establish a working user account with basic permission levels, then make educated guesses about the schema of back-end data stores and query data they aren't authorized to access. 
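
To make the pattern concrete, here is a hypothetical sketch of the risky design, not code from Salt Labs' report or any real deployment: a front-end API forwards a caller-supplied identifier to a back-end Elasticsearch query without checking that the record belongs to the caller, so any authenticated user who can guess identifiers can read other customers' data. The service name, index, field, and cluster address are all invented for illustration.

```python
# Hypothetical example of a front end that trusts caller-supplied identifiers.
import requests
from flask import Flask, jsonify

app = Flask(__name__)
ELASTIC_URL = "http://elastic.internal:9200"  # assumed internal cluster address

@app.get("/api/orders/<order_id>")
def get_order(order_id):
    # BAD: no check that order_id belongs to the authenticated caller, so any
    # logged-in user can enumerate identifiers and pull other customers'
    # records straight out of the back-end data store.
    resp = requests.get(
        f"{ELASTIC_URL}/orders/_search",
        json={"query": {"term": {"order_id": order_id}}},
        timeout=5,
    )
    return jsonify(resp.json().get("hits", {}).get("hits", []))
```

The corresponding fix is to enforce object-level authorization in the back end, for example by filtering every query on the user ID taken from a verified session or token, and to keep the Elasticsearch cluster unreachable from anything other than the back-end services.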

Salt Labs was able to gain access to a large amount of sensitive data, including account numbers and transaction confirmation numbers, as part of its research. Some of the sensitive information was also private and subject to GDPR regulations. Attackers could use this information to access other API-based features, such as the ability to book new services or cancel existing ones.

GDPR privacy law exploited to reveal personal data

About one in four companies revealed personal information to a woman's partner, who had made a bogus demand for the data by citing an EU privacy law.

The security expert contacted dozens of UK and US-based firms to test how they would handle a "right of access" request made in someone else's name.

In each case, he asked for all the data that they held on his fiancee.

In one case, the response included the results of a criminal activity check.

Other replies included credit card information, travel details, account logins and passwords, and the target's full US social security number.

James Pavur, a researcher based at the University of Oxford, presented his findings at the Black Hat conference in Las Vegas.

It is one of the first tests of its kind to exploit the EU's General Data Protection Regulation (GDPR), which came into force in May 2018. The law shortened the time organisations had to respond to data requests, added new types of information they have to provide, and increased the potential penalty for non-compliance.

"Generally if it was an extremely large company - especially tech ones - they tended to do really well," he told the BBC.

"Small companies tended to ignore me.

"But the kind of mid-sized businesses that knew about GDPR, but maybe didn't have much of a specialised process [to handle requests], failed."

He declined to identify the organisations that had mishandled the requests, but said they had included:

- a UK hotel chain that shared a complete record of his partner's overnight stays

- two UK rail companies that provided records of all the journeys she had taken with them over several years

- a US-based educational company that handed over her high school grades, mother's maiden name and the results of a criminal background check survey.

Mr Pavur has, however, named some of the companies that he said had performed well.

Programmer coded software to track women in porn videos using face recognition

A Chinese programmer based in Germany created software that uses face-recognition technology to identify women who had appeared in porn videos. 

Information about the project was posted on the Chinese social network Weibo. A Twitter user, @yiqinfu, then tweeted: "A Germany-based Chinese programmer said he and some friends have identified 100k porn actresses from around the world, cross-referencing faces in porn videos with social media profile pictures. The goal is to help others check whether their girlfriends ever acted in those films."

The project took nearly half a year to complete. The videos were collected from the websites 1024, 91, sex8, PornHub, and xvideos, and altogether the collection amounts to more than 100 terabytes of data. 

The faces appearing in these videos were compared with profile pictures from popular social media platforms such as Facebook, Instagram, TikTok, and Weibo.

The coder deleted the project and all of his data after learning that it violated European privacy law. 

However, there is no way to verify that no other program exists that matches women’s social media photos with images from porn sites. 

According to the programmer, what he did "was legal because 1) he hasn't shared any data, 2) he hasn't opened up the database to outside queries, and 3) sex work is legal in Germany, where he's based."

But this incident has made clear that such a program is possible and could have awful consequences. “It’s going to kill people,” says Carrie A. Goldberg, an attorney who specializes in sexual privacy violations. 

“Some of my most viciously harassed clients have been people who did porn, oftentimes one time in their life and sometimes nonconsensually [because] they were duped into it. Their lives have been ruined because there’s this whole culture of incels that for a hobby expose women who’ve done porn and post about them online and dox them.” 

The European Union’s GDPR offers protection against this kind of misuse, but people living elsewhere are not as lucky.