
Digital Afterlife: Are We Ready for Virtual Resurrections?


 

Imagine receiving a message that your deceased father's "digital immortal" bot is ready to chat. This scenario, once confined to science fiction, is becoming a reality as the digital afterlife industry evolves. Virtual reconstructions of loved ones, created using their digital footprints, offer a blend of comfort and disruption, blurring the lines between memory and reality.

The Digital Afterlife Industry

The digital afterlife industry leverages VR and AI technologies to create virtual personas of deceased individuals. Companies like HereAfter allow users to record stories and messages during their lifetime, accessible to loved ones posthumously. MyWishes offers pre-scheduled messages from the deceased, maintaining their presence in the lives of the living. Hanson Robotics has developed robotic busts that interact using the memories and personality traits of the deceased, while Project December enables text-based conversations with those who have passed away.

Generative AI plays a crucial role in creating realistic and interactive digital personas. However, the high level of realism can blur the line between reality and simulation, potentially causing emotional and psychological distress.

Ethical and Emotional Challenges

As comforting as these technologies can be, they also present significant ethical and emotional challenges. The creation of digital immortals raises concerns about consent, privacy, and the psychological impact on the living. For some, interacting with a digital version of a loved one can aid the grieving process by providing a sense of continuity and connection. However, for others, it may exacerbate grief and cause psychological harm.

One of the major ethical concerns is consent. The deceased may not have agreed to their data being used for a digital afterlife. There’s also the risk of misuse and data manipulation, with companies potentially exploiting digital immortals for commercial gain or altering their personas to convey messages the deceased would never have endorsed.

Need for Regulation

To address these concerns, there is a pressing need to update legal frameworks. Issues such as digital estate planning, the inheritance of digital personas, and digital memory ownership need to be addressed. The European Union's General Data Protection Regulation (GDPR) largely leaves post-mortem privacy to individual member states, and enforcement is further complicated by social media platforms' control over deceased users' data.

Researchers have recommended several ethical guidelines and regulations, including obtaining informed and documented consent before creating digital personas, implementing age restrictions to protect vulnerable groups, providing clear disclaimers to ensure transparency, and enforcing strong data privacy and security measures. A 2018 study suggested treating digital remains as integral to personhood, proposing regulations to ensure dignity in re-creation services.

The dialogue between policymakers, industry, and academics is crucial for developing ethical and regulatory solutions. Providers should offer ways for users to respectfully terminate their interactions with digital personas. Through careful, responsible development, digital afterlife technologies can meaningfully and respectfully honour our loved ones.

As we navigate this new frontier, it is essential to balance the benefits of staying connected with our loved ones against the potential risks and ethical dilemmas. By doing so, we can ensure that the digital afterlife industry develops in a way that respects the memory of the deceased and supports the emotional well-being of the living.


EU Accuses Microsoft of Secretly Harvesting Children's Data

 

Noyb (None of Your Business), also known as the European Centre for Digital Rights, has filed two complaints against Microsoft under Article 77 of the GDPR, alleging that the tech giant breached schoolchildren's privacy rights through the Microsoft 365 Education service it provides to educational institutions.

Noyb claims that Microsoft used its contracts to shift GDPR responsibilities onto these institutions, even though they had no reasonable means of meeting such obligations because they had no real control over the data being collected.

The non-profit argued that while schools and educational institutions in the European Union came to depend more heavily on digital services during the pandemic, large tech businesses took advantage of this trend to attract a new generation of committed clients. While noyb supports the modernisation of education, it believes Microsoft has breached various data protection rights by offering educational institutions access to Microsoft 365 Education, leaving students, parents, and institutions with few alternatives.

Noyb voiced concern about the market strength of software vendors like Microsoft, which allows them to dictate the terms and conditions of their contracts with schools. The organisation claims that this power has enabled IT companies to transfer most of their legal obligations under the General Data Protection Regulation (GDPR) to educational institutions and municipal governments.

In reality, according to noyb, neither local governments nor educational institutions have any real power to influence how Microsoft handles user data. Rather, they are frequently faced with a "take it or leave it" scenario, in which Microsoft retains all commercial and decision-making power while the schools are required to bear the associated risks.

“This take-it-or-leave-it approach by software vendors such as Microsoft is shifting all GDPR responsibilities to schools,” stated Maartje de Graaf, a data protection lawyer at noyb. “Microsoft holds all the key information about data processing in its software, but is pointing the finger at schools when it comes to exercising rights. Schools have no way of complying with the transparency and information obligations.” 

Two complaints 

Noyb is representing two complainants against Microsoft over suspected infringements of data privacy rules. The first complaint concerns a father who, exercising GDPR rights on behalf of his daughter, requested the personal data that Microsoft's 365 Education service held about her.

Microsoft, however, redirected the parent to the "data controller". After confirming with Microsoft that the school was the data controller, the parent contacted the school, which responded that it only had access to the email address the student used to sign up.

The second complaint states that, although the complainant never consented to cookies or tracking technologies, Microsoft 365 Education installed cookies that, according to Microsoft's own documentation, analyse user behaviour and collect browser data, both of which are used for advertising purposes. The non-profit alleged that this kind of invasive profiling was conducted without the school's knowledge or approval.

noyb has requested that the Austrian data protection authority (DSB) investigate and analyse the data collected and processed by Microsoft 365 Education, as neither Microsoft's own privacy documentation, the complainant's access requests, nor the non-profit's own research could shed light on this process, which it believes violates the GDPR's transparency provisions.

Navigating Meta’s AI Data Training: Opt-Out Challenges and Privacy Considerations


The privacy policy update

Meta will reportedly amend its privacy policy beginning June 26 to allow its AI to be trained on your data.

The story spread on social media after Meta sent emails and notifications to users in the United Kingdom and the European Union informing them of the change and offering them the option to opt out of the data collection.

One UK-based user, Phillip Bloom, publicly shared the notification, alerting everyone to the impending changes, which appear to affect Instagram users as well.

The AI training process

These changes give Meta permission to use your information and personal content from Meta-related services to train its AI. This means the social media giant will be able to use public Facebook posts, Instagram photographs and captions, and messages to Meta's AI chatbots to train its large language model and other AI capabilities.

Meta states that private messages will not be included in the training data, and the business emphasizes in its emails and notifications that each user (in a protected region) has the "right to object" to the data being utilized. 

Once implemented, the new policy will allow Meta to automatically pull information from the affected types of content. To prevent Meta from using your content, you can opt out now via the relevant Facebook help page.

Keep in mind that this page will only load if you are in the European Union, the United Kingdom, or any country where Meta is required by law to provide an opt-out option.

Opting out: EU and UK users

If you live in the European Union, the United Kingdom, or another country with data protection regulations strict enough to oblige Meta to provide an opt-out, go to the support page mentioned above, fill out the form, and submit it.

You'll need to select your country and explain in a text box why you're opting out, with the option to provide more information below that. You should then receive a response indicating whether Meta will honor your request to opt out of having your data used.

Be prepared to push back: some users report that their requests have been denied, even though in countries covered by legislation such as the European Union's GDPR, Meta should be required to honor them.

Challenges for users outside the EU and UK

There are a few caveats to consider. While opting out protects your own posts, it does not protect content of yours that is shared by friends or family members who have not opted out of having their data used for AI training.

Where possible, make sure that any family members who use Facebook or other Meta services opt out as well. The move itself isn't surprising, given that Meta has been steadily expanding its AI offerings across its platforms.

The use of user data across Meta's services was therefore always to be expected; it is simply too valuable a source of training material for the company's numerous AI programs to pass up.

Meta to Train AI with Public Facebook and Instagram Posts

 


 

Meta, the company behind Facebook and Instagram, is set to begin using public posts from European users to train its artificial intelligence (AI) systems starting June 26. This decision has sparked discussions about privacy and GDPR compliance.

Utilising Public Data for AI

European users of Facebook and Instagram have recently been notified that their public posts could be used to help develop Meta's AI technologies. The information that might be utilised includes posts, photos, captions, and messages sent to an AI, but private messages are excluded. Meta has emphasised that only public data from user profiles will be used, and data from users under 18 will not be included.

GDPR Compliance and Legitimate Interest

Under the General Data Protection Regulation (GDPR), companies can process personal data if they demonstrate a legitimate interest. Meta argues that improving AI systems constitutes such an interest. Despite this, users have the right to opt out of having their data used for this purpose by submitting a form through Facebook or Instagram, although these forms are currently unavailable.

Even if users opt out, their data may still be used if they are featured in another user's public posts or images. Meta has provided a four-week notice period before collecting data to comply with privacy regulations.

Regulatory Concerns and Delays

The Irish Data Protection Commission (DPC) intervened following Meta's announcement, resulting in a temporary delay. The DPC requested clarifications from Meta, which the company has addressed. Meta assured that only public data from EU users would be utilized and confirmed that data from minors would not be included.

Meta’s AI Development Efforts

Meta is heavily investing in AI research and development. The company’s latest large language model, Llama 3, released in April, powers its Meta AI assistant, though it is not yet available in Europe. Meta has previously used public posts to train its AI assistant but did not include this data in training the Llama 2 model.

In addition to developing AI software, Meta is also working on the hardware needed for AI operations, introducing custom-made chips last month.

Meta's initiative to use public posts for AI training highlights the ongoing balance between innovation and privacy. While an opt-out option is provided, its current unavailability and the potential use of data from non-consenting users underscore the complexities of data privacy.

European users should remain informed about their rights under GDPR and utilize the opt-out process when available. Despite some limitations, Meta's efforts to notify users and offer an opt-out reflect a step towards balancing technological advancement with privacy concerns.

This development marks a significant step in Meta's AI journey and underscores the critical role of transparency and regulatory oversight in handling personal data responsibly.


Slack Faces Backlash Over AI Data Policy: Users Demand Clearer Privacy Practices

 

In February, Slack introduced its AI capabilities, positioning itself as a leader in the integration of artificial intelligence within workplace communication. However, recent developments have sparked significant controversy. Slack's current policy, which collects customer data by default for training AI models, has drawn widespread criticism and calls for greater transparency and clarity. 

The issue gained attention when Gergely Orosz, an engineer and writer, pointed out that Slack's terms of service allow the use of customer data for training AI models, despite reassurances from Slack engineers that this is not the case. Aaron Maurer, a Slack engineer, acknowledged the need for updated policies that explicitly detail how Slack AI interacts with customer data. This discrepancy between policy language and practical application has left many users uneasy. 

Slack's privacy principles state that customer data, including messages and files, may be used to develop AI and machine learning models. In contrast, the Slack AI page asserts that customer data is not used to train Slack AI models. This inconsistency has led users to demand that Slack update its privacy policies to reflect the actual use of data. The controversy intensified as users on platforms like Hacker News and Threads voiced their concerns. Many felt that Slack had not adequately notified users about the default opt-in for data sharing. 

The backlash prompted some users to opt out of data sharing, a process that requires contacting Slack directly with a specific request. Critics argue that this process is cumbersome and lacks transparency. Salesforce, Slack's parent company, has acknowledged the need for policy updates. A Salesforce spokesperson stated that Slack would clarify its policies to ensure users understand that customer data is not used to train generative AI models and that such data never leaves Slack's trust boundary. 

However, these changes have yet to address the broader issue of explicit user consent. Questions about Slack's compliance with the General Data Protection Regulation (GDPR) have also arisen. GDPR requires explicit, informed consent for data collection, which must be obtained through opt-in mechanisms rather than default opt-ins. Despite Slack's commitment to GDPR compliance, the current controversy suggests that its practices may not align fully with these regulations. 

As more users opt out of data sharing and call for alternative chat services, Slack faces mounting pressure to revise its data policies comprehensively. This situation underscores the importance of transparency and user consent in data practices, particularly as AI continues to evolve and integrate into everyday tools. 

The recent backlash against Slack's AI data policy highlights a crucial issue in the digital age: the need for clear, transparent data practices that respect user consent. As Slack works to update its policies, the company must prioritize user trust and regulatory compliance to maintain its position as a trusted communication platform. This episode serves as a reminder for all companies leveraging AI to ensure their data practices are transparent and user-centric.

Websites Engage in Deceptive Practices to Conceal the Scope of Data Collection and Sharing

 

Websites frequently conceal the extent to which they share our personal data, employing tactics to obscure their practices and prevent consumers from making fully informed decisions about their privacy. This lack of transparency has prompted governmental responses, such as the European Union's GDPR and California's CCPA, which require websites to seek permission before tracking user activity.

Despite these regulations, many users remain unaware of how their data is shared and manipulated. A recent study delves into the strategies employed by websites to hide the extent of data sharing and the reasons behind such obfuscation.

The research, focusing on online privacy regulations in Canada, reveals that websites often employ deception to mislead users and increase the difficulty of monitoring their activities. Notably, websites dealing with sensitive information, like medical or banking sites, tend to be more transparent about data sharing due to market constraints and heightened privacy sensitivity.

During the COVID-19 pandemic, as online activity surged, instances of privacy abuses also increased. The study shows that popular websites are more likely to obscure their data-sharing practices, potentially to maximize profits by exploiting uninformed consumers.

Third-party data collection by websites is pervasive, with numerous tracking mechanisms used for advertising and other purposes. This extensive surveillance raises concerns about privacy infringement and the commodification of personal data. Dark patterns and lack of transparency further exacerbate the issue, making it difficult for users to understand and control how their information is shared.

Efforts to protect consumer privacy, such as GDPR and CCPA, have limitations, as websites continue to manipulate and profit from user data despite opt-in and opt-out regulations. Consumer responses, including the use of VPNs and behavioral obfuscation, offer some protection, but the underlying information asymmetry remains a significant challenge.

EU AI Act to Impact US Generative AI Deployments

 



In a move set to reshape the scope of AI deployment, the European Union's AI Act, slated to come into effect in May or June, aims to impose stricter regulations on the development and use of generative AI technology. The Act, which categorises AI use cases based on associated risks, prohibits certain applications like biometric categorization systems and emotion recognition in workplaces due to concerns over manipulation of human behaviour. This legislation will compel companies, regardless of their location, to adopt a more responsible approach to AI development and deployment.

For businesses venturing into generative AI adoption, compliance with the EU AI Act will necessitate a thorough evaluation of use cases through a risk assessment lens. Existing AI deployments will require comprehensive audits to ensure adherence to regulatory standards and mitigate potential penalties. While the Act provides a transition period for compliance, organisations must gear up to meet the stipulated requirements by 2026.

This isn't the first time US companies have faced disruption from overseas tech regulations. Similar to the impact of the GDPR on data privacy practices, the EU AI Act is expected to influence global AI governance standards. By aligning with EU regulations, US tech leaders may find themselves better positioned to comply with emerging regulatory mandates worldwide.

Despite the parallels with GDPR, regulating AI presents unique challenges. The rollout of GDPR witnessed numerous compliance hurdles, indicating the complexity of enforcing such regulations. Additionally, concerns persist regarding the efficacy of fines in deterring non-compliance among large corporations. The EU's proposed fines for AI Act violations range from 7.5 million to 35 million euros, but effective enforcement will require the establishment of robust regulatory mechanisms.

Addressing the AI talent gap is crucial for successful implementation and enforcement of the Act. Both the EU and the US recognize the need for upskilling to address the complexities of AI governance. While US efforts have focused on executive orders and policy initiatives, the EU's proactive approach is poised to drive AI enforcement forward.

For CIOs preparing for the AI Act's enforcement, understanding the tools and use cases within their organisations is imperative. By conducting comprehensive inventories and risk assessments, businesses can identify areas of potential non-compliance and take corrective measures. It's essential to recognize that seemingly low-risk AI applications may still pose significant challenges, particularly regarding data privacy and transparency.

Companies like TransUnion are taking a nuanced approach to AI deployment, tailoring their strategies to specific use cases. While embracing AI's potential benefits, they exercise caution in deploying complex, less explainable technologies, especially in sensitive areas like credit assessment.

As the EU AI Act reshapes the regulatory landscape, CIOs must proactively adapt their AI strategies to ensure compliance and mitigate risks. By prioritising transparency, accountability, and ethical considerations, organisations can navigate the evolving regulatory environment while harnessing the transformative power of AI responsibly.



Hays Research Reveals the Increasing AI Adoption in Scottish Workplaces


Artificial intelligence (AI) tool adoption in Scottish companies has significantly increased, according to a new survey by recruitment firm Hays. The study, which is based on a poll with almost 15,000 replies from professionals and employers—including 886 from Scotland—shows a significant rise in the percentage of companies using AI in their operations over the previous six months, from 26% to 32%.

Mixed Attitudes Toward the Impact of AI on Jobs

Despite the upsurge in AI technology, the study reveals that professionals hold differing opinions on how AI will affect their jobs. Although 80% of Scottish professionals do not currently use AI in their work, 21% think that AI tools will improve their ability to do their jobs. Interestingly, over the past six months the percentage of professionals expecting a negative impact has dropped from 12% to 6%.

However, the study also indicates concern among employees, with 61% believing that their companies are not doing enough to prepare them for the expanding use of AI in the workplace. This trend raises questions about the workforce's readiness to adopt and take full advantage of AI technologies. Justin Black, a business director at Hays focused on technology, stresses the value of giving people enough training opportunities to advance their skills and become proficient with new technologies.

Barriers to AI Adoption 

One of the noteworthy challenges impeding mass adoption of AI is the reluctance of enterprises to expose their data and intellectual property to AI systems, citing concerns about compliance with the GDPR (General Data Protection Regulation). This reluctance is also influenced by concerns about trust. According to Black, demand for AI capabilities has outpaced the growth in skilled individuals in the sector, highlighting a skills deficit in the AI space.

Businesses are cautious about the possible dangers of exposing confidential data to AI systems, and professionals' scepticism about the security and reliability of those systems adds to the trust issues.

The study suggests that, as AI becomes a crucial element of Scottish workplaces, employers should prioritise tackling skills shortages, encouraging employee readiness, and improving communication about AI integration. By doing so, businesses can ease concerns around GDPR and trust while fostering an environment that allows employees to take full advantage of AI technology's benefits.

Unlocking Data Privacy: Mine's No-Code Approach Nets $30 Million in Funding

 


An Israeli data privacy company, Mine Inc., has announced the completion of a $30 million Series B funding round led by Battery Ventures and PayPal Ventures, together with the investment arm of US insurance giant Nationwide. Existing investors Gradient Ventures (Google's AI fund), Saban Ventures, MassMutual Ventures, and Headline Ventures also joined the round.

Using artificial intelligence, and specifically natural language processing, Mine can scan your inbox to identify which companies hold your personal information and let you ask those that have no good reason to keep it to delete it.

Public concern around GDPR meant the product sparked a great deal of interest: initially free, the startup attracted about 5 million users in just a few weeks. The company then expanded its user base to business users and enterprise applications.

By scanning a user's inbox and sign-up trail, Mine can work out where customer or business data ends up being stored and used. That capability struck a chord with the privacy officers responsible for keeping companies in compliance with privacy rules.

Around 150 clients use Mine's data privacy and disclosure solutions, including Reddit, HelloFresh SE, Fender, Guesty, Snappy, and Data.ai. The new capital will fund ongoing operations in the coming years and an expansion of the company's global footprint, including bringing the MineOS B2B platform to the US and broadening its enterprise offerings.

With 35 employees, the company is in the process of hiring dozens of developers, QA professionals, and machine learning professionals to be based in Israel. Founded in 2019, Mine is a company headquartered in Tel Aviv, with the company's founding members being CEO Gal Ringel, CTO Gal Golan, and CPO Kobi Nissan.

Since the company started, its vision has been to make privacy compliance easily accessible to companies and individuals alike. Over the past two years that vision has sharpened around the MineOS B2B platform, which aims to give each customer a single source of truth for the data inside its organization, enabling it to identify which systems, assets, and data it holds.

This process, known as data mapping, is one of the most important building blocks in any organization, serving as a basis for a variety of teams, including legal and privacy, data, engineering, IT, and security teams. As Ringel said, "The funding was complete at the end of the second week of October, just one week after the war had begun."

Because of the difficult market conditions of the past year, Ringel said, the company has been run carefully and with discipline since March of last year, cutting monthly expenses while growing revenue significantly to an annualized run rate of millions of dollars (4x growth in 2023), extraordinary metrics that attracted many investors to the company.

MineOS, the company's B2B platform, now serves hundreds of enterprise customers, including Reddit, HelloFresh SE, FIFA, and Data.ai, and the $30 million Series B will fund its continued development. The round was co-led by Battery Ventures and PayPal Ventures, with participation from previous backers Saban Ventures, Gradient Ventures (Google's AI fund), MassMutual Ventures, and Headline Ventures.

Although Mine has not disclosed its valuation, co-founder and CEO Gal Ringel said in a recent interview that the valuation has tripled since its last fundraising in 2020. (That round was $9.5 million, raised when the company had only 100,000 users and no revenue.) Mine has now raised over $42.5 million in total funding.

Part of the new funding will go towards sales development around Mine's current offerings, and part towards further R&D. In line with this, Mine intends to launch two new products in Q1 that cater to the explosion of interest in artificial intelligence, one of them designed for data privacy officers preparing to comply with the AI laws regulators plan to adopt in the near future. Mine is, of course, not the only player in the data protection tools market.

Because its features sit close to other data protection activities, Mine is likely to be challenged by other companies in the same arena, for instance OneTrust, which offers GDPR and consent management solutions for websites, and BigID, which provides a comprehensive set of data usage and compliance tools. Ringel said Mine has a strong competitive advantage over these because it is designed with an emphasis on being user-friendly, so it can be adopted and used even by people with no technical background.

TikTok Faces Massive €345 Million Penalty for Mishandling Kids' Data Privacy

 


TikTok has been fined €345 million (£296 million) for mishandling children's accounts, after failing to shield underage users' content from public view and breaking EU data protection law.

Ireland's data watchdog, which oversees the Chinese-owned video app across the EU, found that TikTok had violated multiple GDPR rules. The investigation concluded that TikTok breached the GDPR by setting users' accounts to public by default; failing to give transparent information to child users; allowing an adult, via the "family pairing" option, to enable direct messaging for users over 16; and failing to properly consider the risks to children placed on the platform in a public setting.

According to the decision published by the Irish Data Protection Commission (DPC), the app did not sufficiently protect children's personal information, because it made their accounts public by default and did not adequately address the risks posed by under-13s gaining access to the platform.

In a statement released on Tuesday, the DPC, which acts as TikTok's lead regulator in the EU, said the company had violated eight articles of the GDPR. These provisions cover several aspects of data processing, from the lawful use of personal data to protecting it from misuse.

Most children's accounts had their profile settings set to public by default, so that anyone could see the content they posted. Meanwhile, the Family Pairing feature, intended to let a parent link to an older child's account and manage direct messages, in practice allowed any adult to pair with a child's account.

There was no indication to the child that this feature could put them at risk. When registering users and when they posted videos, TikTok also failed to provide child users with the information it should have, and instead relied on what are known as "dark patterns" to nudge them towards more privacy-invasive options.

Separately, the UK data regulator fined the company £12.7m in April after finding that TikTok had illegally processed the data of 1.4 million children under the age of 13 who were using its platform without parental consent.

TikTok was found to have done "very little or nothing, if anything" to keep underage users safe on the platform. The company says the investigation examined the privacy set-up it had in place between 31 July and 31 December 2020 and that it has since addressed all of the issues raised.

Since 2021, all new and existing TikTok accounts belonging to 13- to 15-year-olds have been set to private, meaning only people the user has approved can view their content. The DPC also noted that the European Data Protection Board (EDPB), a body made up of data protection regulators from the EU member states, had overruled it on certain aspects of its draft decision.

At the prompting of the German regulator, the DPC was required to include a proposed finding that the use of "dark patterns" – deceptive website and app design that steers users towards particular behaviours or choices – violated the GDPR's requirement that personal data be processed fairly.

According to the Irish privacy regulator, TikTok unlawfully made the accounts of users aged 13 to 17 public by default between July and December 2020, which effectively meant anyone could watch and comment on the videos those users posted.

Moreover, the company failed to adequately assess the risks associated with users under the age of 13 gaining access to its platform. The regulator also found that TikTok still steers teenagers who join the platform towards sharing their videos and accounts publicly through manipulative pop-up prompts.

The regulator has ordered the company to change these misleading designs, also known as dark patterns, within three months to prevent further harm. As early as the second half of 2020, minors' accounts could be linked to unverified adult accounts.

The video platform was also found to have failed to explain to teenagers, before their content and accounts were made public, what the consequences of that exposure would be. The board of European regulators likewise voiced serious doubts about the effectiveness of TikTok's measures to keep under-13s off its platform in the latter half of 2020.

The EDPB found that TikTok was failing to check the ages of existing users "in a sufficiently systematic manner" and that its mechanisms could be easily circumvented, but said that, owing to a lack of information available during the cooperation process, it was unable to establish an infringement on this point.

The United Kingdom's data regulator fined TikTok £12.7 million (€14.8 million) in April for allowing children under 13 to use the platform and for using their data. The company was also fined €750,000 by the Dutch privacy authority in 2021 for failing to provide a privacy policy in Dutch, a requirement intended to protect Dutch children.

New Cyber Threat: North Korean Hackers Exploit npm for Malicious Intent

 


GitHub has issued an updated threat warning about a new North Korean attack campaign that uses malicious npm package dependencies to compromise victims. A blog post published by the development platform earlier this week said the attacks target employees of blockchain, cryptocurrency, online gambling, and cybersecurity companies.

Alexis Wales, VP of GitHub security operations, said the attacks often begin with the attackers posing as developers or recruiters using fake GitHub, LinkedIn, Slack, or Telegram profiles; in some cases they hijack legitimate accounts instead.

A separate, highly targeted campaign against the npm package registry aims to entice developers into installing malicious third-party modules. According to The Hacker News, the supply chain security firm Phylum has linked this wave to North Korean threat actors, noting that it exhibits behaviour similar to an attack wave uncovered in June.

Nine packages were identified as having been uploaded to npm between August 9 and August 12, 2023: ws-paso-jssdk, pingan-vue-floating, srm-front-util, cloud-room-video, progress-player, ynf-core-loader, ynf-core-renderer, ynf-dx-scripts, and ynf-dx-webpack-plugins. The attackers typically initiate a conversation with the target and then attempt to move it to another platform.

The attack chain begins with a post-install hook in the package.json file, which runs index.js as soon as the package has been installed. That script launches a daemon process, in this instance named "Android", using the legitimate pm2 module (pulled in as a dependency), which in turn executes a JavaScript file named app.js.

app.js is crafted to initiate encrypted two-way communication with a remote server 45 seconds after the package is installed, contacting "ql.rustdesk[.]net", a spoofed domain posing as the authentic RustDesk remote desktop software, and sending details about the compromised host.

The malware then pings the server every 45 seconds to check for further instructions, which are decoded and executed as they arrive. The Phylum research team explained that the attackers appear to be monitoring the GUIDs of compromised machines and selectively sending additional payloads, in the form of encoded JavaScript, to machines of interest.
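
To illustrate the mechanism being abused here, npm's lifecycle scripts, the sketch below shows a benign, hypothetical postinstall hook; the package layout and the logging are assumptions for illustration only, and none of this is the attackers' actual code.

    // Hypothetical, benign illustration of the npm lifecycle-script mechanism
    // described above. A package declares a hook like this in its package.json:
    //
    //   "scripts": { "postinstall": "node index.js" }
    //
    // so the file below runs automatically the moment `npm install` finishes.
    // The real campaign used the same entry point to spawn a long-lived daemon
    // (via pm2) that beaconed host details to a remote server.

    const os = require("os");

    function report() {
      // The real malware collected details like these and sent them to a C2 server;
      // this sketch only prints them locally to show what a postinstall script can see.
      console.log("postinstall script executed on host:", os.hostname());
      console.log("platform:", os.platform(), "- user:", os.userInfo().username);
    }

    report();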

In the past few months, several typosquatted versions of popular Ethereum packages have also been discovered in the npm registry; they attempt to make HTTP requests to Chinese servers, retrieving the encryption key from the user's wallet and sending it to wallet.cba123[.]cn.

Additionally, the highly popular NuGet package Moq has come under fire after new versions released last week included a dependency named SponsorLink, which extracted SHA-256 hashes of developers' email addresses from their local Git configuration and sent them to a cloud service without their knowledge.
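
As a rough illustration of the kind of lookup described, the hypothetical sketch below reads the developer's email from the local Git configuration and hashes it with SHA-256; it is not the actual SponsorLink code, which is a .NET component.

    // Hypothetical sketch of the mechanism reported above: read the developer's
    // email from the local Git config and hash it with SHA-256. Not SponsorLink's
    // actual code; for illustration only.

    const { execSync } = require("child_process");
    const crypto = require("crypto");

    // Git stores the configured identity locally; this is where the email comes from.
    const email = execSync("git config user.email").toString().trim();

    // A SHA-256 hash of an email is only pseudonymous: the same address always
    // yields the same hash, which is why such hashes are still treated as
    // personal data under the GDPR.
    const hash = crypto.createHash("sha256").update(email.toLowerCase()).digest("hex");

    console.log("hashed email:", hash);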

The controversial changes, which raised GDPR compliance concerns, were rolled back in version 4.20.2. Even so, BleepingComputer reported that Amazon Web Services (AWS) had withdrawn its support for the project, which may have done serious damage to the project's reputation.

There are also reports that organizations are increasingly vulnerable to dependency confusion attacks, which can lead developers to unwittingly introduce malicious or vulnerable code into their projects and so open the door to large-scale supply chain attacks.

There are several mitigations against dependency confusion attacks. For example, publish internal packages under scopes assigned to your organization and register internal package names as placeholders in the public registry to prevent misuse of those names; a simple automated check along these lines is sketched below.
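
As a minimal sketch of that recommendation, the script below flags dependencies that look internal but are not published under the organization's npm scope; the scope name and naming prefix are assumed values to adapt to your own conventions.

    // Minimal sketch of a dependency-confusion check: flag any dependency that
    // follows the internal naming convention but is not under the organization's
    // npm scope. The scope and prefix below are assumed values for illustration.

    const fs = require("fs");

    const ORG_SCOPE = "@acme";        // assumed organization scope
    const INTERNAL_PREFIX = "acme-";  // assumed internal naming convention

    const pkg = JSON.parse(fs.readFileSync("package.json", "utf8"));
    const deps = { ...pkg.dependencies, ...pkg.devDependencies };

    const suspicious = Object.keys(deps).filter(
      (name) => name.startsWith(INTERNAL_PREFIX) && !name.startsWith(`${ORG_SCOPE}/`)
    );

    if (suspicious.length > 0) {
      console.error("Possible dependency confusion risk, unscoped internal names:", suspicious);
      process.exit(1);                // fail the build so the names get reviewed
    } else {
      console.log("All internal-looking dependencies are properly scoped.");
    }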

The recent North Korean campaign exploiting npm packages is a stark reminder that the threat landscape keeps evolving and that attackers are adopting ever more sophisticated tactics. Safeguarding sensitive data and preventing further breaches requires proactive and vigilant defence. To reduce the risks posed by these tactics, organizations should prioritize identity verification, package validation, and careful management of internal packages.

Safeguarding Your Work: What Not to Share with ChatGPT

 

ChatGPT, a popular AI language model developed by OpenAI, has gained widespread usage in various industries for its conversational capabilities. However, it is essential for users to be cautious about the information they share with AI models like ChatGPT, particularly when using it for work-related purposes. This article explores the potential risks and considerations for users when sharing sensitive or confidential information with ChatGPT in professional settings.
Potential Risks and Concerns:
  1. Data Privacy and Security: When sharing information with ChatGPT, there is a risk that sensitive data could be compromised or accessed by unauthorized individuals. While OpenAI takes measures to secure user data, it is important to be mindful of the potential vulnerabilities that exist.
  2. Confidentiality Breach: ChatGPT is an AI model trained on a vast amount of data, and there is a possibility that it may generate responses that unintentionally disclose sensitive or confidential information. This can pose a significant risk, especially when discussing proprietary information, trade secrets, or confidential client data.
  3. Compliance and Legal Considerations: Different industries and jurisdictions have specific regulations regarding data privacy and protection. Sharing certain types of information with ChatGPT may potentially violate these regulations, leading to legal and compliance issues.

Best Practices for Using ChatGPT in a Work Environment:

  1. Avoid Sharing Proprietary Information: Refrain from discussing or sharing trade secrets, confidential business strategies, or proprietary data with ChatGPT. It is important to maintain a clear boundary between sensitive company information and AI models.
  2. Protect Personally Identifiable Information (PII): Be cautious when sharing personal information, such as social security numbers, addresses, or financial details, as these can be targeted by malicious actors or result in privacy breaches; a minimal redaction sketch follows this list.
  3. Verify the Purpose and Security of Conversations: If using a third-party platform or integration to access ChatGPT, ensure that the platform has adequate security measures in place. Verify that the conversations and data shared are stored securely and are not accessible to unauthorized parties.
  4. Be Mindful of Compliance Requirements: Understand and adhere to industry-specific regulations and compliance standards, such as GDPR or HIPAA, when sharing any data through ChatGPT. Stay informed about any updates or guidelines regarding the use of AI models in your particular industry.
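
As a minimal sketch of the redaction idea mentioned in point 2, the snippet below scrubs a few obvious PII patterns from a prompt before it leaves the organization; the patterns are demonstration-only assumptions, and real redaction needs far more thorough rules and review.

    // Minimal, illustrative sketch of scrubbing obvious PII from text before it
    // is sent to any external AI service. The patterns are demonstration-only
    // assumptions; production redaction needs broader rules and human review.

    const REDACTIONS = [
      { label: "[EMAIL]", pattern: /[\w.+-]+@[\w-]+\.[\w.-]+/g },
      { label: "[SSN]", pattern: /\b\d{3}-\d{2}-\d{4}\b/g },   // US-style SSN
      { label: "[CARD]", pattern: /\b(?:\d[ -]?){13,16}\b/g }, // rough card-number match
    ];

    function redact(text) {
      return REDACTIONS.reduce(
        (out, { label, pattern }) => out.replace(pattern, label),
        text
      );
    }

    // Example: the prompt is cleaned locally before being pasted into a chatbot.
    const prompt = "Customer jane.doe@example.com (SSN 123-45-6789) reported an issue.";
    console.log(redact(prompt));
    // -> "Customer [EMAIL] (SSN [SSN]) reported an issue."
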
While ChatGPT and similar AI language models offer valuable assistance, it is crucial to exercise caution and prudence when using them in professional settings. Users must prioritize data privacy, security, and compliance by refraining from sharing sensitive or confidential information that could potentially compromise their organizations. By adopting best practices and maintaining awareness of the risks involved, users can harness the benefits of AI models like ChatGPT while safeguarding their valuable information.

Promoting Trust in Facial Recognition: Principles for Biometric Vendors

 

Facial recognition technology has gained significant attention in recent years, with its applications ranging from security systems to unlocking smartphones. However, concerns about privacy, security, and potential misuse have also emerged, leading to a call for stronger regulation and ethical practices in the biometrics industry. To promote trust in facial recognition technology, biometric vendors should embrace three key principles that prioritize privacy, transparency, and accountability.
  1. Privacy Protection: Respecting individuals' privacy is crucial when deploying facial recognition technology. Biometric vendors should adopt privacy-centric practices, such as data minimization, ensuring that only necessary and relevant personal information is collected and stored. Clear consent mechanisms must be in place, enabling individuals to provide informed consent before their facial data is processed. Additionally, biometric vendors should implement strong security measures to safeguard collected data from unauthorized access or breaches.
  2. Transparent Algorithms and Processes: Transparency is essential to foster trust in facial recognition technology. Biometric vendors should disclose information about the algorithms used, ensuring they are fair, unbiased, and capable of accurately identifying individuals across diverse demographic groups. Openness regarding the data sources and training datasets is vital, enabling independent audits and evaluations to assess algorithm accuracy and potential biases. Transparency also extends to the purpose and scope of data collection, giving individuals a clear understanding of how their facial data is used.
  3. Accountability and Ethical Considerations: Biometric vendors must demonstrate accountability for their facial recognition technology. This involves establishing clear policies and guidelines for data handling, including retention periods and the secure deletion of data when no longer necessary. The implementation of appropriate governance frameworks and regular assessments can help ensure compliance with regulatory requirements, such as the General Data Protection Regulation (GDPR) in the European Union. Additionally, vendors should conduct thorough impact assessments to identify and mitigate potential risks associated with facial recognition technology.
Biometric businesses must address concerns and foster trust in their goods and services as facial recognition technology spreads. These vendors can aid in easing concerns around facial recognition technology by adopting values related to privacy protection, openness, and accountability. Adhering to these principles can not only increase public trust but also make it easier to create regulatory frameworks that strike a balance between innovation and the defense of individual rights. The development of facial recognition technology will ultimately be greatly influenced by the moral and ethical standards upheld by the biometrics sector.






Facebook Shares Private Information With NHS Trusts

 


An investigation published by The Observer has revealed that NHS trusts have been sharing private information with Facebook. The newspaper found that the websites of 20 NHS trusts were using a covert tracking tool to collect browsing data and share it with the tech giant, a major breach of patient privacy.

The trusts had assured visitors that their personal information would not be collected, and no consent was obtained from the people involved. Yet the data gathered showed the pages people visited, the buttons they clicked, and the keywords they searched for.

That data was matched to the user's IP address and, in many cases, linked to their Facebook account details.

Once this information is linked to an individual, it can reveal medical conditions, doctors' appointments, and treatments received.

Facebook could then use it for advertising campaigns aligned with its own business objectives.

News of the data sharing caused alarm across NHS trusts this weekend: 17 of the 20 trusts that used the tracking tool have taken action, with several apologising for the incident.

How does a Meta Pixel tracker work? What is it all about? 

Meta's advertising tracking tool allows companies to track visitor activity on their web pages and gain a deeper understanding of their actions. 

The Meta Pixel has been identified on 33 hospital websites where, whenever someone clicks a button to book an appointment, Facebook receives a "packet of data" from the pixel. That data can be associated with an IP address, which in turn can be linked to a specific household.
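
For context, the Meta Pixel is a small piece of JavaScript that site owners embed in their pages; the simplified, hypothetical sketch below shows the kind of calls it makes (real pages load Meta's fbevents.js script, and the pixel ID, button selector, and custom event name here are placeholders).

    // Simplified sketch of how a Meta Pixel-style tag reports an interaction.
    // Real pages load Meta's fbevents.js, which defines fbq(); the stub below
    // stands in for it so the sketch is self-contained.

    window.fbq = window.fbq || function () {
      console.log("fbq call:", ...arguments);   // stand-in for the real tracking call
    };

    fbq("init", "1234567890");   // hypothetical pixel ID
    fbq("track", "PageView");    // standard event fired on every page load

    // A site can also report specific interactions, such as a booking click:
    const bookButton = document.querySelector("#book-appointment");  // hypothetical selector
    if (bookButton) {
      bookButton.addEventListener("click", () => {
        // The request to Meta carries the event name and page metadata; the browser
        // adds cookies and the IP address, which is how activity can be linked to people.
        fbq("trackCustom", "AppointmentButtonClicked", { page: location.pathname });
      });
    }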

Eight of the trusts have reportedly apologised to their patients. Several trusts said they had been unaware that patient data was being sent to Facebook; they had installed the tracking pixel to monitor recruitment and charity campaigns. The Information Commissioner's Office (ICO) is nonetheless pursuing its investigation, and privacy experts have collectively voiced their concerns.

As a result of the research findings, the Meta Pixel has been removed from the Friedrich Hospital website. 

In the US, Piedmont Healthcare's patient portal used the Meta Pixel to collect data about patients' upcoming doctor appointments, including their names and the dates and times of the appointments.

Privacy experts say the findings point to widespread potential breaches of patient confidentiality and data protection that are, in their view, "completely unacceptable".

There is a possibility that the company is receiving special category health information, which is legally protected. Under the law, health information covers anything relating to an individual's health status, such as medical conditions, tests, and treatments.

It is impossible to determine exactly how the data is used once it reaches Facebook's servers. The company states that submitting sensitive medical data to it is prohibited and that it has filters in place to weed out such information if it is received accidentally.

Several of the trusts involved explained that they originally implemented the tracking pixel to monitor recruitment or charity campaigns and had no idea that patient information was being sent to Facebook as part of that process.

Buckinghamshire Healthcare NHS Trust (BHNHST) has removed the tracking tool from its website, commenting that the presence of the Meta Pixel on the site was an unintentional error on the organization's part.

When users of the BHNHST website accessed a patient handbook about HIV medication, it appears some information was shared with Facebook as a result. According to the report, this data included details such as the name of the drug, the trust's name, the user's IP address, and details of their Instagram account.

In its privacy policy, the trust has made it explicitly clear that any consumer health information collected by it will not be used for marketing purposes without the consumer's explicit consent. 

Alder Hey Children's Trust in Liverpool likewise shared information with Facebook each time a user accessed a webpage relating to a sexual development issue, a crisis mental health service, or an eating disorder.

Professor David Leslie, director of ethics at the Alan Turing Institute, warned that the transfer of patient information to third parties by the National Health Service would erode the "delicate relationship of trust" between the NHS and its patients: "When accessing an NHS website, we have a reasonable expectation that our personal information will not be extracted and shared with third-party advertising companies or companies that might use it to target ads or link our personal information to health conditions."

Wolfie Christl, a data privacy expert who researches the ad tech industry, said regulators should have stopped this practice long ago, calling it irresponsible and negligent and insisting it must stop immediately.

The 20 NHS trusts in England found to be using the tracking tool together cover a population of 22 million, stretching from Devon to the Pennines. Several had used it for many years before it was discontinued.

Moreover, Meta is facing litigation over allegations that it knowingly received sensitive health information, including information taken from health portals, and did not take steps to prevent it. Several plaintiffs have filed lawsuits against Meta, alleging it violated their medical privacy by intercepting and selling individually identifiable health information collected from its partner websites.

Meta said it had contacted the trusts to remind them of its policies, which prohibit the sharing of health information with the company.

"Our corporate communication department educates advertisers on the proper use of business tools to avoid this kind of situation," the spokesperson added. The company added that it was the website owner's responsibility to ensure compliance with all applicable data protection laws and to obtain consent before sending any personal information.

Questions remain about the effectiveness of the company's filters for weeding out potentially sensitive data, and about what types of information would actually be blocked from hospital websites. Meta also declined to explain why NHS trusts were able to send the data in the first place.

According to the company, advertisers can use its business tools to grow their businesses and meet their goals. Its website offers several guides on how it can show users ads that "might be of interest" by leveraging data collected through those tools; browse travel websites, for instance, and you might later see ads for hotel deals.

Separately, the DPC found that Meta had failed to comply with the GDPR (General Data Protection Regulation) by moving Facebook users' data from the EU to the US without adequate protection.

Meta Ireland was handed a record fine and ordered to suspend any future transfers of personal data to the US within five months. Meta has said it considers the fine unjustified.

Criminal Digitisation: How UK Police Forces Use Technology

 


Researchers and law enforcement have yet to fully understand the scope and implications of cybercrime, even though it is a growing problem. According to reports issued by the UK government, victims of cybercrime are unlikely to report the crimes promptly, partly because of a perception that the police are ill-equipped to deal with them; the same reports identify a lack of cybercrime knowledge among police officers.

In recent days there have been numerous reports of people falling victim to online fraudsters despite being cautious. Marc Deruelle almost became one of them. Eager to visit Liverpool this May for the 2023 Eurovision Song Contest, he did not immediately suspect anything when, a few days after booking his accommodation online, someone contacted him via WhatsApp claiming to be the receptionist.

Fortunately, Deruelle's bank noticed something was wrong and, at the last moment, refused to allow £800 to be transferred to Uganda. Other victims have not been so lucky.

As recently as 2022, a woman from North Wales sent almost £2,000 over WhatsApp to a scammer posing as her daughter and claiming to be in Nevada. In another case, Jennifer, a mother of two from North Lanarkshire, Scotland, told STV News how she was drawn into a bogus cryptocurrency investment scheme advertised on Facebook. Scammers coerced her into taking out loan after loan to keep investing; in the end she owed around £150,000 and had to sell her house to repay the debt.

In 2016, the NCA released its Cyber Crime Assessment, which highlighted the need for stronger partnerships between law enforcement and the private sector to fight cybercrime. Although only a small proportion of cybercrime is ever reported, the National Crime Agency found that cybercrime had overtaken all other types of crime, with cyber-enabled fraud making up 36 per cent of all reported crime and computer misuse a further 17 per cent.

There is no denying that cybercrime reports have been growing in the UK. One explanation may be that Britons are becoming better at detecting this kind of crime than they used to be. The report also concludes that there is increasing evidence of cybercrime in the UK, as reflected in the most recent Crime Survey for England and Wales conducted last year by the Office for National Statistics.

As of 2022, fraud accounted for more than 40% of all crime in England and Wales, making it the most commonly committed offence in the country.

Moore believes the government had the right intentions when it launched Action Fraud in 2009, but it did not anticipate how quickly fraud would grow. As a result, Moore and Hamilton argue, law enforcement has lacked both the funds and the expertise to keep pace with cybercrime's rapid evolution. Public agencies, and rural police forces in particular, have long struggled to recruit and retain cybersecurity professionals: neither the police nor local government can offer competitive pay. Why would an IT professional with cybersecurity skills stay in the police force when the private sector beckons?

Despite the growing scale and complexity of cybercrime and the intensifying attacks, the report concludes that "so far, the visible financial losses and damage do not have the potential to significantly impact the value of a company's equity over the long run." Cyber attacks on UK businesses have not been as damaging, or as publicly visible, as the attack on the Target retail chain in the United States.

A large, multinational European company might well be able to conceal a breach of the magnitude of the 2013 Target incident. Generally speaking, European nations have not been subject to the kind of data breach disclosure laws on the books in nearly every US state - laws that force American companies to publicly acknowledge data breaches week after week.

That is changing with the European Union's General Data Protection Regulation: companies doing business in Europe or with European customers must provide notification when a breach of security leads to the accidental or unlawful destruction, loss, or alteration of personal data, or to its unauthorised disclosure or access.

As it stands, it may still be some time before British businesses routinely come forward about data breaches, even though the GDPR's notification requirements took full effect in May 2018.

A ChatGPT Bug Exposes Sensitive User Data

OpenAI's ChatGPT, an artificial intelligence (AI) language model that produces human-like text, was found to have a security flaw. The flaw allowed the model to unintentionally expose private user information, endangering the privacy of many users. The incident is a reminder of the value of cybersecurity and of the need for businesses to protect customer data proactively.

According to a report by Tech Monitor, the ChatGPT bug "allowed researchers to extract personal data from users, including email addresses and phone numbers, as well as reveal the model's training data." This means that not only was users' personal information exposed, but so was sensitive data used to train the AI model. The incident therefore raises concerns about the potential misuse of the leaked information.

The ChatGPT bug not only affects individual users but also has wider implications for organizations that rely on AI technology. As noted in a report by India Times, "the breach not only exposes the lack of security protocols at OpenAI, but it also brings forth the question of how safe AI-powered systems are for businesses and consumers."

Furthermore, the incident highlights the importance of adhering to regulations such as the General Data Protection Regulation (GDPR), which aims to protect individuals' personal data in the European Union. By exposing personal data without proper consent, the bug potentially put OpenAI in breach of the GDPR.

OpenAI has taken swift action to address the issue, stating that it has fixed the bug and implemented measures to prevent similar incidents in the future. Even so, the incident serves as a warning to businesses and individuals alike to prioritize cybersecurity and to be aware of potential vulnerabilities in AI systems.

As stated by Cyber Security Connect, "ChatGPT may have just blurted out your darkest secrets," emphasizing the need for constant vigilance and proactive measures to safeguard sensitive information. This includes regular updates and patches to address security flaws, as well as utilizing encryption and other security measures to protect data.

The ChatGPT bug highlights the need for ongoing vigilance and preventative measures to protect private data in the era of advanced technology. Prioritizing cybersecurity and staying informed of vulnerabilities is crucial for a safer digital environment as AI systems continue to evolve and play a prominent role in various industries.




ChatGPT: A Potential Risk to Data Privacy


Within two months of its release, ChatGPT appears to have taken the world by storm. The application reached an estimated 100 million active users, making it the fastest-growing consumer product ever. Users are intrigued by the tool's sophisticated capabilities, though apprehensive about its potential to upend numerous industries.

One of the less discussed consequences of ChatGPT is the privacy risk it poses. Google has only just launched Bard, its own conversational AI, and others will undoubtedly follow. Technology firms developing AI have certainly entered a race.

The problem is the technology itself, which is built entirely on people's personal data.

300 Billion Words, How Many Are Yours? 

ChatGPT is underpinned by a large language model, which requires an enormous amount of data to operate and improve. The more data the model is trained on, the better it becomes at detecting patterns, anticipating what comes next, and generating plausible text.
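As a rough intuition for what training on more data means - a toy sketch only, nothing like ChatGPT's actual architecture - even a crude model that simply counts which word tends to follow which becomes a better guesser as the amount of text it has seen grows:

```python
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Count, for each word, which words follow it in the corpus."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word][next_word] += 1
    return follows

def predict_next(model: dict, word: str):
    """Return the most frequently seen follower of `word`, if any."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

# Tiny made-up corpora: the larger one exposes patterns the smaller one cannot.
small = "the cat sat on the mat"
large = small + " and the cat sat on the sofa because the cat likes the sofa"

print(predict_next(train(small), "cat"))  # -> 'sat'
print(predict_next(train(large), "the"))  # -> 'cat', a pattern only clear with more text
```

Scale that counting idea up to hundreds of billions of words and far richer statistical machinery and you get the fluency ChatGPT displays; the point is simply that the quality of its output is inseparable from the quantity of text, including ours, that it has ingested.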

OpenAI, the developer of ChatGPT, fed the model some 300 billion words systematically scraped from the internet - books, articles, websites, and posts - which inevitably includes online users' personal information, gathered without their consent.

Every blog post, product review, or comment on an article that exists - or ever existed - online stands a good chance of having been consumed by ChatGPT.

What is the Issue? 

The data gathered to train ChatGPT is problematic for several reasons.

First, the data was collected without consent: none of us were asked whether OpenAI could use our personal information. That is a clear violation of privacy, especially when the data is sensitive and can be used to identify us, our family members, or our location.

Even when data is publicly available, its use can breach what is known as contextual integrity, a cornerstone idea in discussions of privacy law. The principle holds that individuals' information should not be revealed outside the context in which it was originally produced.

Moreover, OpenAI offers no procedure for individuals to check whether the company holds their personal information, or to request that it be deleted. The European General Data Protection Regulation (GDPR) guarantees this right, and whether ChatGPT complies with its requirements is still being debated.

This “right to be forgotten” is particularly important in cases involving information that is inaccurate or misleading, which appears to be a regular occurrence with ChatGPT.

Furthermore, the scraped data ChatGPT was trained on may be confidential or protected by copyright. For instance, the tool has reproduced the opening of Joseph Heller's copyrighted novel Catch-22.

Finally, OpenAI did not pay for the internet data it collected. The individuals, website owners, and businesses that created it were not compensated. This is especially notable given OpenAI's recent valuation of US$29 billion, more than double its 2021 value.

OpenAI has also recently announced ChatGPT Plus, a paid subscription plan that gives users ongoing access to the tool, faster response times, and priority access to new features. The approach is expected to help generate $1 billion in revenue by 2024.

None of this would have been possible without ‘our’ data, acquired and used without our consent.

Time to Consider the Issue? 

Some professionals and experts describe ChatGPT as a “tipping point for AI”: the realisation of a technological advance that could revolutionise the way we work, learn, write, and even think.

Despite its potential advantages, we must keep in mind that OpenAI is a private, for-profit company whose objectives and commercial pressures may not always align with the needs of the wider community.

The privacy hazards associated with ChatGPT should serve as a warning. As users of a growing number of AI technologies, we need to be extremely careful about what data we share with such tools.

Microsoft to Roll Out “Data Boundary” for its EU Customers from Jan 1


Microsoft Corp announced on Thursday that, from January 1, its European Union cloud customers will be able to process and store portions of their data within the region.

The phased rollout of its “EU data boundary” will apply to all of its core cloud services: Azure, Microsoft 365, Dynamics 365 and the Power BI platform.

Since the EU introduced the General Data Protection Regulation (GDPR) in 2018 to protect user privacy, large businesses have grown increasingly anxious about the international flow of consumer data.

The European Commission, the EU's executive arm, is developing proposals to safeguard the privacy of European customers whose data is transferred to the United States.

"As we dived deeper into this project, we learned that we needed to be taken more phased approach," says Microsoft’s Chief Privacy Officer Julie Brill. “The first phase will be customer data. And then as we move into the next phases, we will be moving logging data, service data and other kind of data into the boundary.” 

The second phase will reportedly be completed by the end of 2023, and the third in 2024, she added.

Microsoft runs more than a dozen datacentres across Europe, in countries including France, Germany, Spain and Switzerland.

For large corporations, data storage has become so vast and so widely distributed across countries that it is now a challenge simply to understand where their data is stored and whether that complies with regulations like the GDPR.
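As a simple illustration of what such a check might look like, the hypothetical sketch below compares an inventory of cloud resources against an approved list of EU regions. The resource names are invented, and in practice the inventory would be pulled from the cloud provider's management APIs rather than hard-coded:

```python
# Hypothetical resource inventory; a real audit would pull this from the
# cloud provider's management APIs rather than hard-coding it.
EU_REGIONS = {"westeurope", "northeurope", "francecentral", "germanywestcentral", "swedencentral"}

resources = [
    {"name": "customer-db",     "region": "westeurope"},
    {"name": "analytics-store", "region": "eastus"},  # outside the EU boundary
    {"name": "billing-archive", "region": "francecentral"},
]

def outside_boundary(inventory: list, allowed: set) -> list:
    """Return every resource deployed outside the allowed set of regions."""
    return [r for r in inventory if r["region"] not in allowed]

for r in outside_boundary(resources, EU_REGIONS):
    print(f"Data residency issue: {r['name']} is deployed in {r['region']}")
```

A commitment like Microsoft's data boundary aims to make the answer to that question "inside the EU" by default for the services it covers.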

"We are creating this solution to make our customers feel more confident and to be able to have clear conversations with their regulators on where their data is being processed as well as stored," says Brill. 

Microsoft has also previously said that it would challenge government requests for customer data, and that it would financially compensate any customer whose data it shared in breach of the GDPR.