
Brave Browser’s New ‘Cookiecrumbler’ Tool Aims to Eliminate Annoying Cookie Consent Pop-Ups

 

While the General Data Protection Regulation (GDPR) was introduced with noble intentions—to protect user privacy and give people control over their personal data—its practical side effects have caused widespread frustration. For many internet users, GDPR has become synonymous with endless cookie consent pop-ups and hours of compliance training. Now, Brave Browser is stepping up with a new solution: Cookiecrumbler, a tool designed to eliminate disruptive cookie notices without compromising web functionality.

Cookiecrumbler is not Brave’s first attempt at combating these irritating banners. The browser has long offered pop-up blocking capabilities. However, the challenge hasn’t been the blocking itself—it’s doing so while preserving website functionality. Many websites break or behave unexpectedly when these notices are blocked improperly. Brave’s new approach promises to fix that by taking cookie blocking to a new level of sophistication.  

According to a recent announcement, Cookiecrumbler combines large language models (LLMs) with human oversight to automate and refine the detection of cookie banners across the web. This hybrid model allows the tool to scale effectively while maintaining precision. By running on Brave’s backend servers, Cookiecrumbler crawls websites, identifies cookie notices, and generates custom rules tailored to each site’s layout and language. One standout feature is its multilingual capability. Cookie notices often vary not just in structure but in language and legal formatting based on the user’s location. 

Cookiecrumbler accounts for this by using geo-targeted vantage points, enabling it to view websites as a local user would, making detection far more effective. The developers highlight several reasons for using LLMs in this context: cookie banners typically follow predictable language patterns, the work is repetitive, and it’s relatively low-risk. The cost of each crawl is minimal, allowing the team to test different models before settling on smaller, efficient ones that provide excellent results with fine-tuning. Importantly, human reviewers remain part of the process. While AI handles the bulk detection, humans ensure that the blocking rules don’t accidentally interfere with important site functions. 

These reviewers refine and validate Cookiecrumbler’s suggestions before they’re deployed. Even better, Brave is releasing Cookiecrumbler as an open-source tool, inviting integration by other browsers and developers. This opens the door for tools like Vivaldi or Firefox to adopt similar capabilities. 
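
Brave's announcement describes the pipeline only at a high level, but the general pattern (crawl a page, let a language model flag likely consent banners, emit a site-specific hiding rule, then have a human approve it) can be sketched roughly as below. This is a hypothetical illustration, not Brave's code: the helper names (classify_with_llm, suggest_rules) are invented, and a simple keyword check stands in for the real model.

```python
# Hypothetical sketch, not Brave's code: crawl a page, let an "LLM" classifier flag
# likely cookie-consent banners, and emit a cosmetic hiding rule for each match.
import requests
from bs4 import BeautifulSoup

def classify_with_llm(text: str) -> bool:
    """Placeholder for the model call: does this text read like a consent notice?"""
    keywords = ("cookie", "consent", "accept all", "we value your privacy")
    return any(k in text.lower() for k in keywords)

def suggest_rules(url: str) -> list[str]:
    host = url.split("/")[2]
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    rules = []
    # Candidate containers; a real crawler would render JavaScript and crawl from
    # geo-located vantage points so banners appear as local users see them.
    for element in soup.find_all(["div", "section"], id=True):
        text = element.get_text(" ", strip=True)[:500]
        if text and classify_with_llm(text):
            rules.append(f"{host}###{element['id']}")  # adblock-style cosmetic rule
    return rules

if __name__ == "__main__":
    print(suggest_rules("https://example.com"))
```

In a production system along the lines Brave describes, the candidate elements would be sent to a fine-tuned model and the generated rules would pass human review before deployment.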

Looking ahead, Brave plans to integrate Cookiecrumbler directly into its browser, but only after completing thorough privacy reviews to ensure it aligns with the browser’s core principle of user-centric privacy. Cookiecrumbler marks a significant step forward in balancing user experience and privacy compliance—offering a smarter, less intrusive web.

Yoojo Exposes Millions of Sensitive Files Due to Misconfigured Database

 

Yoojo, a European service marketplace, accidentally left a cloud storage bucket unprotected online, exposing around 14.5 million files, including highly sensitive user data. The data breach was uncovered by Cybernews researchers, who immediately informed the company. Following the alert, Yoojo promptly secured the exposed archive.

The database contained a range of personally identifiable information (PII), including full names, passport details, government-issued IDs, user messages, and phone numbers. This level of detail, according to experts, could be exploited for phishing, identity theft, or even financial fraud.

Yoojo offers an online platform connecting users with service providers for tasks like cleaning, gardening, childcare, IT support, moving, and homecare. With over 500,000 downloads on Google Play, the app has gained significant traction in France, Spain, the Netherlands, and the UK.

Cybernews stated that the exposed database was publicly accessible for at least 10 days, though there's no current evidence of malicious exploitation. Still, researchers cautioned that unauthorized parties might have already accessed the data. Yoojo has yet to issue a formal comment on the incident.

“Leaked personal details enable attackers to create highly targeted phishing, vishing, and smishing campaigns. Fraudulent emails and SMS scams could involve impersonating Yoojo service providers asking for sensitive information like payment details or verification documents,” Cybernews researchers said.

The incident underscores how frequently misconfigured databases lead to data exposures. While many organizations rely on cloud services for storing confidential information, they often overlook the shared responsibility model that cloud infrastructure follows.
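
As a rough illustration of the kind of check that catches this class of misconfiguration, the sketch below uses boto3 to confirm that an Amazon S3 bucket blocks public access. It is not tied to Yoojo's actual infrastructure (the provider and bucket name here are placeholders), but the same idea applies to any cloud storage service.

```python
# Illustrative only: verify that an S3 bucket has all four public-access-block
# settings enabled. The bucket name and credentials are placeholders.
import boto3
from botocore.exceptions import ClientError

def bucket_is_locked_down(bucket: str) -> bool:
    s3 = boto3.client("s3")
    try:
        cfg = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
    except ClientError:
        # No public-access-block configuration at all is itself a warning sign.
        return False
    return all(cfg.get(flag) for flag in (
        "BlockPublicAcls", "IgnorePublicAcls",
        "BlockPublicPolicy", "RestrictPublicBuckets",
    ))

if __name__ == "__main__":
    print(bucket_is_locked_down("example-bucket"))
```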

On a positive note, most companies act swiftly once made aware of such vulnerabilities—just as Yoojo did—by promptly restricting access to the exposed data.

The Need for Unified Data Security, Compliance, and AI Governance

 

Businesses are increasingly dependent on data, yet many continue to rely on outdated security infrastructures and fragmented management approaches. These inefficiencies leave organizations vulnerable to cyber threats, compliance violations, and operational disruptions. Protecting data is no longer just about preventing breaches; it requires a fundamental shift in how security, compliance, and AI governance are integrated into enterprise strategies. A proactive and unified approach is now essential to mitigate evolving risks effectively. 

The rapid advancement of artificial intelligence has introduced new security challenges. AI-powered tools are transforming industries, but they also create vulnerabilities if not properly managed. Many organizations implement AI-driven applications without fully understanding their security implications. AI models require vast amounts of data, including sensitive information, making governance a critical priority. Without robust oversight, these models can inadvertently expose private data, operate without transparency, and pose compliance challenges as new regulations emerge. 

Businesses must ensure that AI security measures evolve in tandem with technological advancements to minimize risks. Regulatory requirements are also becoming increasingly complex. Governments worldwide are enforcing stricter data privacy laws, such as GDPR and CCPA, while also introducing new regulations specific to AI governance. Non-compliance can result in heavy financial penalties, reputational damage, and operational setbacks. Businesses can no longer treat compliance as an afterthought; instead, it must be an integral part of their data security strategy. Organizations must shift from reactive compliance measures to proactive frameworks that align with evolving regulatory expectations. 

Another significant challenge is the growing issue of data sprawl. As businesses store and manage data across multiple cloud environments, SaaS applications, and third-party platforms, maintaining control becomes increasingly difficult. Security teams often lack visibility into where sensitive information resides, making it harder to enforce access controls and protect against cyber threats. Traditional security models that rely on layering additional tools onto existing infrastructures are no longer effective. A centralized, AI-driven approach to security and governance is necessary to address these risks holistically. 

Forward-thinking businesses recognize that managing security, compliance, and AI governance in isolation is inefficient. A unified approach consolidates risk management efforts into a cohesive, scalable framework. By breaking down operational silos, organizations can streamline workflows, improve efficiency through AI-driven automation, and proactively mitigate security threats. Integrating compliance and security within a single system ensures better regulatory adherence while reducing the complexity of data management. 

To stay ahead of emerging threats, organizations must modernize their approach to data security and governance. Investing in AI-driven security solutions enables businesses to automate data classification, detect vulnerabilities, and safeguard sensitive information at scale. Shifting from reactive compliance measures to proactive strategies ensures that regulatory requirements are met without last-minute adjustments. Moving away from fragmented security solutions and adopting a modular, scalable platform allows businesses to reduce risk and maintain resilience in an ever-evolving digital landscape. Those that embrace a forward-thinking, unified strategy will be best positioned for long-term success.

Privacy Concerns Rise Over Antivirus Data Collection

 


To protect their devices from cyberattacks, users rely heavily on their operating systems and trusted antivirus programs, which are among the most widely used internet security solutions. Well-established operating systems and reputable cybersecurity software need to provide users with regular updates.

These updates patch security flaws and upgrade protection software, preventing cybercriminals from exploiting vulnerabilities to install malicious software such as malware or spyware. Third-party applications, by contrast, carry a greater security risk, as they may lack rigorous protection measures. In most cases, modern antivirus programs, firewalls, and other security measures will detect and block potentially harmful programs.

The security system usually generates an alert when an unauthorized or suspicious application tries to install itself on the device, so users can take precautions to keep their devices safe. In the context of privacy, an individual has the right to remain free from unwarranted monitoring, surveillance, or interception. The concept of gathering data is not new; traditionally, it was done on paper.

Technological advances mean data can now be gathered through automated, computer-driven processes, collecting vast amounts of information from millions of individuals around the world every minute, for a wide variety of purposes. Privacy is a fundamental right, recognized as essential to personal autonomy and to individuals' ability to protect their own data.

Safeguarding this right has become increasingly important in the digital age, as the widespread collection and use of personal information raises significant concerns about privacy and individual liberties. The evaluation in question covered all of PCMag's Editors' Choice antivirus products and security suites, with the exception of AVG AntiVirus Free: since Avast acquired AVG in 2016, the two have shared the same antivirus engine, so evaluating them separately was unnecessary.

Each piece of security software was evaluated on five key factors: Data Collection, Data Sharing, Accessibility, Software & Process Control, and Transparency, with the greatest weight placed on Data Collection and Data Sharing. The assessment was performed by installing each antivirus program on a test system equipped with network monitoring tools and examining what data it transmitted back to its parent company. In addition, each product's End User License Agreement (EULA) was carefully reviewed to determine whether it disclosed what data was collected and in what quantity.
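
The exact tooling used in the study is not described here, but the network-monitoring step can be illustrated with a small sketch. The scapy-based approach below is an assumption for illustration only: it logs every DNS lookup the test machine makes, which reveals the domains newly installed software contacts.

```python
# A minimal illustration of the monitoring step (an assumption, not the study's tooling):
# log every DNS query the test machine makes. Requires scapy and root privileges.
from scapy.all import sniff, DNSQR

def log_dns(packet):
    if packet.haslayer(DNSQR):
        print(packet[DNSQR].qname.decode(errors="replace"))

# Capture DNS queries on the default interface until interrupted.
sniff(filter="udp port 53", prn=log_dns, store=False)
```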

A comprehensive questionnaire was also sent to the security companies to gather insights beyond the technical analysis and contractual review. Discrepancies between a company's stated policies and its observed network activity could lower its overall score. Some vendors declined to answer specific questions, citing security concerns.

The study also notes that while some data, such as payment information for licensing purposes, must be collected, collecting less data generally results in a higher Data Collection score. Data collected from individuals can reveal a great deal about their preferences and interests; information from food delivery apps, for example, can show a user's favourite dishes and how often they order.

In the same vein, it is common for targeted advertisements to be delivered using data derived from search queries, shopping histories, location tracking, and other digital interactions. Using data such as this helps businesses boost sales, develop products, conduct market analysis, optimize user experiences, and improve various functions within their organizations. It is data-driven analytics that is responsible for bringing us personalized advertisements, biometric authentication of employees, and content recommendations on streaming platforms such as Netflix and Amazon Prime.

In sports, likewise, athletes' performance metrics are monitored and compared against previous records to gauge progress and identify areas for improvement. Systematic data collection and analysis are key to the development of the digital ecosystem, helping businesses and industries operate more efficiently while providing customers with better experiences.

The evaluation also assessed how well companies manage the data they collect and how accessible they make that information to users, which plays an important role in ensuring consumer safety and freedom of choice. Companies that use clear, concise language in their End User License Agreements (EULAs) and privacy policies received higher Accessibility scores.

Providing a comprehensive FAQ explaining what data is collected and why it is used raised scores further. About three-quarters of the companies surveyed responded to the questionnaire and were credited for the transparency they demonstrated; the more detailed the answers, the higher the score. The availability of third-party audits also significantly influenced the rating.

Even though a company may handle personal data with transparency and diligence, security vulnerabilities introduced by its partners can undermine those efforts. The study therefore also examined the security protocols of the companies' third-party cloud storage services. Companies that run bug bounty programs, which reward users for identifying and reporting security flaws, scored higher in this category than those that do not. There is also the possibility that a government authority could ask a security company to hand over data it has gathered on specific users.

Different jurisdictions have their own legal frameworks governing such requests, so knowing where data is stored is essential. The General Data Protection Regulation (GDPR) in particular enforces strict privacy protections that apply not only to data stored within the European Union (EU) but also to data concerning EU residents, regardless of where it is stored.

Nine of the companies surveyed declined to disclose where their server farms are located. Of those that did answer, three keep their data only within the EU, five store it in both the EU and the US, and two store it in the US and India. Kaspersky, for its part, has stated that it stores data in several parts of the world, including Europe, Canada, the United States, and Russia. In some cases, government agencies may even instruct security companies to issue a "special" update to a specific user ID to monitor the activities of suspected terrorists.

When asked about such practices, the Indian company eScan confirmed its involvement, as did McAfee and Microsoft. Eleven of the companies that responded affirmed that they do not distribute targeted updates of this nature; others chose not to respond, raising concerns about transparency.

GDPR Violation by EU: A Case of Self-Accountability

 


In a groundbreaking decision on Wednesday, the European Union General Court held the EU Commission liable for damages to a German citizen after the Commission failed to adhere to its own data protection legislation.

The court found that the Commission had transferred the citizen's personal data to the United States without adequate safeguards and awarded the citizen 400 euros ($412) in compensation. In doing so, the EU General Court found that the EU had violated its own privacy rules under the General Data Protection Regulation (GDPR).

According to the ruling, this is the first time the EU has been ordered to pay for breaching its own data protection rules. The case arose when a German citizen registering for a conference through a European Commission webpage used the "Sign in with Facebook" option.

Clicking the button transferred information about the user's browser, device, and IP address through Amazon Web Services' content delivery network and ultimately on to servers in the United States run by Facebook's parent company, Meta Platforms. According to the court, this transfer was conducted without proper safeguards, constituting a breach of GDPR rules.

The EU was ordered to pay the €400 (about $412) directly to the plaintiff for the breach. The magnitude and frequency of fines imposed by national data protection authorities (DPAs) have varied greatly since GDPR was introduced, reflecting differences in both the severity of violations and the rigour of enforcement. The International Network of Privacy Law Professionals has catalogued a total of 311 fines, and analysing them reveals several key trends.

The Netherlands, Turkey, and Slovakia have been major focal points for GDPR enforcement, with the Netherlands leading in high-value fines, while Romania and Slovakia frequently appear among the lower fines, indicating that even less severe violations are being pursued. Overall, GDPR implementation has been a mixed bag since its introduction. The EU has certainly captured public attention with the major fines it has imposed on Silicon Valley giants, but enforcement is slow: even the EU's first self-imposed fine for violating one person's privacy took over two years to resolve.

Approximately three out of every four data protection authorities say they lack the budget and personnel needed to investigate violations, and numerous examples show that the byzantine collection of laws has not curbed the invasive practices of surveillance capitalism. Perhaps the EU could begin by following its own rules and see if that helps. The General Data Protection Regulation (GDPR) established a comprehensive framework for data protection.

Created to safeguard individuals' data and ensure their privacy, it enacted rigorous standards for the collection, processing, and storage of data. Nevertheless, in an unexpected development, the European Union itself was found to have violated these very laws, causing an unprecedented uproar.

A recent internal audit revealed a serious weakness in data management practices within European institutions, exposing EU citizens' personal information to the risk of misuse or unauthorized access. Ultimately, the EU General Court handed down a landmark decision finding that the EU had failed to comply with its own data protection laws as a result of this breach.

Under the GDPR, implemented in 2018, organisations must obtain user consent to collect or use personal data, hence the cookie acceptance notifications that are now commonplace. The regulation has become the defining framework for data privacy. By limiting the amount of information companies can collect and making its use more transparent, GDPR aims to empower individuals while posing a significant compliance challenge for technology companies.

Meta has faced substantial penalties for non-compliance and is among the companies most affected. In a notable case last year, Meta was fined $1.3 billion for failing to adequately protect European users' data during its transfer to U.S. servers, leaving that data exposed to American intelligence agencies.

The company has also received a $417 million fine for violations involving Instagram's privacy practices and a $232 million fine for insufficient transparency around WhatsApp's data processing. Meta is not alone in facing GDPR penalties: Amazon was fined $887 million by the European Union in 2021 for similar violations.

A Facebook login integration, part of Meta's ecosystem, was a central factor in the EU's own recent breach of its data privacy rules. The incident illustrates the challenges that even the GDPR's enforcers face in adhering to its strict requirements.

Tech's Move Toward Simplified Data Handling

 


For a long time, the tech industry's ethos has been that more data is always better. Recent patents from IBM and Intel, however, suggest that data minimization is gaining ground, with growing efforts to balance how information from users is collected, stored, and used.

Every online action, whether an individual's social media activity or a global corporation's operations, generates data that can be collected, shared, and analyzed. Big data, and the recognition of data as a valuable resource, has driven an explosion in data storage. This proliferation of data raises serious concerns about privacy, security, and regulatory compliance.

The volume and velocity of data flowing through organizations keeps increasing, and this influx brings both opportunities and risks: an abundance of data can support business growth and decision-making, but it also creates new vulnerabilities.

One practice that reduces the risk of data loss and creates a safer environment is to closely monitor and manage how much digital data a company retains and processes beyond its necessary lifespan. This is commonly referred to as data minimization.

The principle of data minimization means limiting the data collected and retained to what is necessary to accomplish a given task. It is a cornerstone of privacy law and regulation, including the EU General Data Protection Regulation (GDPR). Beyond reducing the impact of data breaches, data minimization promotes good data governance and enhances consumer trust.
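
As a toy illustration of the principle, the self-contained sketch below keeps only the fields a task needs and purges records once a retention window has passed. The schema, field names, and 90-day window are invented for the example.

```python
# Toy sketch of data minimization: collect only required fields, purge expired records.
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE signups (email TEXT, created_at TEXT)")

def collect_minimal(raw_signup: dict) -> dict:
    # Drop everything (name, phone, birthday, ...) that account creation does not need.
    return {"email": raw_signup["email"],
            "created_at": datetime.now(timezone.utc).isoformat()}

def purge_expired() -> int:
    cutoff = (datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)).isoformat()
    cur = conn.execute("DELETE FROM signups WHERE created_at < ?", (cutoff,))
    conn.commit()
    return cur.rowcount

record = collect_minimal({"email": "user@example.com", "phone": "555-0100"})
conn.execute("INSERT INTO signups VALUES (:email, :created_at)", record)
print(purge_expired(), "expired rows removed")
```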

Several months ago, IBM filed a patent application for a system enabling efficient deletion of data from dispersed storage environments, where data is spread across multiple cloud sites and managing outdated or unnecessary records is extremely challenging. IBM introduced the technology to enhance data security, reduce operational costs, and optimize the performance of cloud-based ecosystems.

The proposed system would streamline the removal of redundant data, addressing a critical concern in modern data storage management. Intel, meanwhile, has submitted a patent proposal for a system that verifies data erasure. Using this technology, programmable circuits, custom-built pieces of hardware that perform specific computational tasks, can be securely erased.

To ensure the integrity of the erasure process, the system uses a digital signature and a private key. This is an important innovation for data security in hardware, especially in environments such as artificial intelligence training, where the secure handling of sensitive information is critical. Both patents reflect the technology sector's growing emphasis on robust data management and security.
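
As a rough sketch of the verification idea only (not the patented mechanism), the example below has a device key sign an attestation over an erased region and a verifier check the signature. Key handling and naming are simplified assumptions; in real hardware the private key would never leave the device.

```python
# Sketch: sign an erasure attestation with a device key, verify it on the host side.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

device_key = Ed25519PrivateKey.generate()    # would live inside the device in practice
verifier_key = device_key.public_key()       # known to the host or auditor

def attest_erasure(region_id: str, wiped_bytes: bytes) -> bytes:
    """Device side: sign a digest proving the region now holds the wiped pattern."""
    digest = hashlib.sha256(region_id.encode() + wiped_bytes).digest()
    return device_key.sign(digest)

def verify_erasure(region_id: str, wiped_bytes: bytes, signature: bytes) -> bool:
    """Host side: accept the erasure only if the signature checks out."""
    digest = hashlib.sha256(region_id.encode() + wiped_bytes).digest()
    try:
        verifier_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

sig = attest_erasure("fpga-region-0", b"\x00" * 4096)
print(verify_erasure("fpga-region-0", b"\x00" * 4096, sig))  # True
```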

Data minimization underpins a more secure, ethical, and privacy-conscious digital ecosystem. The practice sits at the core of responsible data management, offering compelling benefits spanning security, ethics, legal compliance, and cost-effectiveness.

One of the major benefits of data minimization is that it reduces privacy risk: collection is limited to what is strictly necessary, and obsolete or redundant information is removed promptly. By minimizing the exposure of sensitive data, organizations reduce the potential impact of breaches, protect customer privacy, and limit reputational damage.

Additionally, data minimization highlights the importance of ethical data usage. A company can build trust and credibility with its stakeholders by ensuring that individual privacy is protected and that transparent data-handling practices are adhered to. It is the commitment to integrity that enhances customers', partners', and regulators' confidence, reinforcing the organization's reputation as a responsible steward of data. 

Data minimization is also a proactive way to reduce liability. By keeping less data, an organization is less exposed to breaches or privacy violations, which in turn lowers the risk of regulatory penalties or legal action. A data retention policy aligned with minimization principles also makes compliance with privacy laws and regulations more likely.

Organizations can also save significant amounts of money, because storing and processing large datasets requires substantial infrastructure, resources, and maintenance. Gathering and retaining only essential data streamlines operations, reduces overhead, and improves the efficiency of data management systems.

Data minimization thus delivers benefits well beyond security, including ethical, legal, and financial ones, and adopting it is critical for organizations looking to navigate the digital age responsibly and sustainably. Businesses across industries stand to gain improved operational efficiency, stronger privacy, and better regulatory compliance.

Data anonymization, for example, lets organizations democratize data by providing safe, secure, collaborative access to information without compromising individual privacy. A retailer might use anonymized customer data to support decision-making across departments, keeping teams agile and responsive to market demands.

Minimization also simplifies business operations by ensuring that only relevant information is gathered and managed. This allows organizations to streamline workflows, optimize resource allocation, and increase the efficiency of functions such as customer service, order fulfillment, and analytics.

Another important benefit is stronger data privacy: by collecting only essential information, organizations reduce the risk of breaches and unauthorized access, safeguard sensitive customer data, and strengthen stakeholders' trust in their commitment to security. And if a breach does occur, its impact is significantly smaller when only critical data is retained.

This protects the organization and its stakeholders from extensive reputational and financial damage. For data management to be effective, ethical, and sustainable, data minimization has to be a cornerstone.

Meta Introduces AI Features For Ray-Ban Glasses in Europe

 

Meta has officially introduced certain AI features for its Ray-Ban Meta augmented reality (AR) glasses in France, Italy, and Spain, marking a significant step in the company's rollout of its wearable technology across Europe.

Starting earlier this week, customers in these countries have been able to interact with Meta's AI assistant using only their voice, asking general questions and receiving responses through the glasses.

As part of Meta's larger initiative to make its AI assistant more widely available, this latest deployment covers French, Italian, and Spanish in addition to English. The announcement was made nearly a year after the Ray-Ban Meta spectacles were first released in September 2023.

In a blog post outlining the update, Meta stated, "We are thrilled to introduce Meta AI and its cutting-edge features to regions of the EU, and we look forward to expanding to more European countries soon.” However, not all of the features accessible in other regions will be included in the European rollout. 

While customers in the United States, Canada, and Australia benefit from multimodal AI capabilities on their Ray-Ban Meta glasses, such as the ability to gain information about objects in view of the glasses' camera, these functions will not be included in the European update at present.

For example, users in the United States can ask their glasses to identify landmarks in their surroundings, such as "Tell me more about this landmark," but these functionalities are not available in Europe due to ongoing regulatory issues. 

Meta has stated its commitment to dealing with Europe's complicated legal environment, specifically the EU's AI Act and the General Data Protection Regulation (GDPR). The company indicated that it is aiming to offer multimodal capabilities to more countries in the future, but there is no set date. 

While the rollout in France, Italy, and Spain marks a significant milestone, Meta's journey in the European market is far from done. As the firm navigates the regulatory landscape and expands its AI solutions, users in Europe can expect more updates and new features for their Ray-Ban Meta glasses in the coming months. 

As Meta continues to grow its devices and expand its AI capabilities, all eyes will be on how the firm adjusts to Europe's legal system and how this will impact the future of AR technology worldwide.

Meta Fined €91 Million by EU Privacy Regulator for Improper Password Storage

 

On Friday, Meta was fined €91 million ($101.5 million) by the European Union's primary privacy regulator for accidentally storing some user passwords without proper encryption or protection.

The investigation began five years ago when Meta informed Ireland's Data Protection Commission (DPC) that it had mistakenly saved certain passwords in plaintext format. At the time, Meta publicly admitted to the issue, and the DPC confirmed that no external parties had access to the passwords.

"It is a widely accepted practice that passwords should not be stored in plaintext due to the potential risk of misuse by unauthorized individuals," stated Graham Doyle, Deputy Commissioner of the Irish DPC.

A Meta spokesperson mentioned that the company took swift action to resolve the error after it was detected during a 2019 security audit. Additionally, there is no evidence suggesting the passwords were misused or accessed inappropriately.

Throughout the investigation, Meta cooperated fully with the DPC, the spokesperson added in a statement on Friday.

Given that many major U.S. tech firms base their European operations in Ireland, the DPC serves as the leading privacy regulator in the EU. To date, Meta has been fined a total of €2.5 billion for violations under the General Data Protection Regulation (GDPR), which was introduced in 2018. This includes a record €1.2 billion penalty issued in 2023, which Meta is currently appealing.