
Gmail and Facebook Users Advised to Secure Their Accounts Immediately

 



In a recent report by Action Fraud, it has been disclosed that millions of Gmail and Facebook users are at risk of cyberattacks, with Brits losing a staggering £1.3 million to hackers. The data reveals that a concerning 22,530 individuals fell victim to account breaches in the past year alone.

According to Pauline Smith, Head of Action Fraud, the ubiquity of social media and email accounts makes everyone susceptible to fraudulent activities and cyberattacks. As technology advances, detecting fraud becomes increasingly challenging, emphasising the critical need for enhanced security measures.

The report highlights three primary methods exploited by hackers to compromise accounts: on-platform chain hacking, leaked passwords, and phishing. On-platform chain hacking involves cybercriminals seizing control of one account to infiltrate others. Additionally, leaked passwords from data breaches pose a significant threat to account security.

To safeguard against such threats, Action Fraud recommends adopting robust security practices. Firstly, users are advised to create strong and unique passwords for each of their email and social media accounts. One effective method suggested is combining three random words that hold personal significance, balancing memorability with security.
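To make the "three random words" approach concrete, here is a minimal Python sketch of a passphrase generator; the word list and separator are placeholder assumptions for illustration rather than part of Action Fraud's guidance.

```python
import secrets

# Placeholder word list for illustration; in practice, choose three words that
# are memorable to you but hard for others to guess.
WORDS = ["harbour", "violin", "pancake", "glacier", "meadow",
         "rocket", "lantern", "cactus", "ember", "willow"]

def three_word_passphrase(separator: str = "-") -> str:
    """Join three distinct, randomly chosen words into one passphrase."""
    rng = secrets.SystemRandom()          # cryptographically strong randomness
    return separator.join(rng.sample(WORDS, 3))

print(three_word_passphrase())            # e.g. "glacier-ember-violin"
```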

Moreover, implementing 2-Step Verification (2SV) adds an extra layer of protection to accounts. With 2SV, users are prompted to provide additional verification, such as a code sent to their phone, when logging in from a new device or making significant changes to account settings. This additional step fortifies account security, mitigating the risk of unauthorised access even if passwords are compromised.
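For readers curious about the mechanics behind one common form of 2SV, the sketch below derives a time-based one-time password (TOTP, as described in RFC 6238) using only Python's standard library. It is a simplified illustration of how an authenticator app and a server can agree on a short-lived code, not any particular platform's implementation.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password from a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval                # 30-second time step
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# The server stores the same secret; a login succeeds only if the code the
# user submits matches the one derived for the current time window.
shared_secret = "JBSWY3DPEHPK3PXP"   # example secret, base32-encoded
print(totp(shared_secret))
```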

Recognizing the signs of phishing scams is also crucial in preventing account breaches. Users should remain vigilant for indicators such as spelling errors, urgent requests for information, and suspicious inquiries. By staying informed and cautious, individuals can reduce their vulnerability to cyber threats.
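Purely as an illustration of how such indicators can be screened for mechanically, the short Python sketch below scans a message for a few tell-tale signs; the keyword lists are invented examples, and no automated check replaces careful human judgement.

```python
# Invented example keywords; real phishing filters rely on far richer signals.
URGENT_PHRASES = ["act now", "verify immediately", "account suspended", "within 24 hours"]
SENSITIVE_REQUESTS = ["password", "one-time code", "card number", "bank details"]

def phishing_red_flags(message: str) -> list[str]:
    """Return a list of simple red flags found in an email or direct message."""
    text = message.lower()
    flags = []
    if any(phrase in text for phrase in URGENT_PHRASES):
        flags.append("urgent or threatening language")
    if any(term in text for term in SENSITIVE_REQUESTS):
        flags.append("asks for sensitive information")
    return flags

print(phishing_red_flags(
    "Your account will be closed. Verify immediately and reply with your password."
))
# ['urgent or threatening language', 'asks for sensitive information']
```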

In response to the escalating concerns, tech giants like Google have implemented measures to enhance password security. Features such as password security alerts notify users of compromised, weak, or reused passwords, empowering them to take proactive steps to safeguard their accounts.

The prevalence of online account breaches demands that users stay vigilant about their online security. By adopting best practices such as creating strong passwords, enabling 2-Step Verification, and recognising phishing attempts, users can safeguard their personal information and financial assets from malicious actors.



Technical Glitch Causes Global Disruption for Meta Users

 


In a recent setback for Meta users, a widespread service outage occurred on March 5th, affecting hundreds of thousands worldwide. Meta's spokesperson, Andy Stone, attributed the disruption to a "technical issue," apologising for any inconvenience caused.

Shortly after the incident, multiple hacktivist groups, including Skynet, Godzilla, and Anonymous Sudan, claimed responsibility. However, cybersecurity firm Cyberint revealed that the disruption might have been a result of a cyberattack, as abnormal traffic patterns indicative of a DDoS attack were detected.

The outage left Facebook and Instagram users unable to access the platforms, with many being inexplicably logged out. Some users, despite entering correct credentials, received "incorrect password" messages, raising concerns about a potential hacking event. Both desktop and mobile users, totaling over 550,000 on Facebook and 90,000 on Instagram globally, were impacted.

This isn't the first time Meta (formerly Facebook) faced such issues. In late 2021, a six-hour outage occurred when the Border Gateway Protocol (BGP) routes were withdrawn, effectively making Facebook servers inaccessible. The BGP functions like a railroad switchman, directing data packets' paths, and the absence of these routes caused a communication breakdown.
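As a loose illustration of why withdrawn routes make servers unreachable, the short Python sketch below models a routing table as a simple prefix-to-next-hop map; the prefixes and router names are invented for the example and are not Facebook's actual network data.

```python
from ipaddress import ip_address, ip_network

# Toy routing table: once a prefix disappears, traffic for addresses
# inside it has nowhere to go.
routing_table = {
    ip_network("203.0.113.0/24"): "isp-edge-router",   # example documentation prefix
    ip_network("198.51.100.0/24"): "peering-exchange", # example documentation prefix
}

def next_hop(destination: str):
    """Return the next hop for a destination IP, or None if no route exists."""
    addr = ip_address(destination)
    for prefix, hop in routing_table.items():
        if addr in prefix:
            return hop
    return None

print(next_hop("203.0.113.10"))   # "isp-edge-router" while the route is announced

# Withdrawing the route, as happened with Facebook's BGP announcements in 2021,
# leaves the destination unreachable for everyone relying on this table.
del routing_table[ip_network("203.0.113.0/24")]
print(next_hop("203.0.113.10"))   # None -> packets are dropped
```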

As the outage unfolded, users found themselves abruptly logged out of the platform, exacerbating the inconvenience. The disruption's ripple effect triggered concerns among users, with fears of a potential cyberattack amplifying the chaos.

It's worth noting that hacktivist groups often claim responsibility for disruptions they may not have caused, aiming to boost their perceived significance and capabilities. In this case, the true source of the disruption remains under investigation, and Meta continues to work on strengthening its systems against potential cyber threats.

In an era when service interruptions have become more common, it is vital for online platforms to invest in robust cybersecurity measures. Users, for their part, are urged to exercise vigilance and follow best practices in online security to mitigate the repercussions of such incidents.

This incident serves as a reminder of the interconnected nature of online platforms and the potential vulnerabilities that arise from technical glitches or malicious activities. Meta assures users that they are addressing the issue promptly and implementing measures to prevent future disruptions.

As the digital world persists in evolution, users and platforms alike must adapt to the dynamic landscape, emphasising the importance of cybersecurity awareness and resilient systems to ensure a secure online experience for all.




Meta's AI Ambitions Raise Privacy and Toxicity Concerns

In a groundbreaking announcement following Meta CEO Mark Zuckerberg's latest earnings report, concerns have been raised over the company's intention to utilize vast troves of user data from Facebook and Instagram to train its own AI systems, potentially creating a competing chatbot. 

Zuckerberg's revelation that Meta possesses more user data than what was employed in training ChatGPT has sparked widespread apprehension regarding privacy and toxicity issues. The decision to harness personal data from Facebook and Instagram posts and comments for the development of a rival chatbot has drawn scrutiny from both privacy advocates and industry observers. 

This move, unveiled by Zuckerberg, has intensified anxieties surrounding the handling of sensitive user information within Meta's ecosystem. As reported by Bloomberg, the disclosure of Meta's strategic shift towards leveraging its extensive user data for AI development has set off a wave of concerns regarding the implications for user privacy and the potential amplification of toxic behaviour within online interactions. 

Additionally, Meta may offer the chatbot free of charge to the public, which has raised further concerns in the tech community. While the prospect of accessible AI technology may seem promising, critics argue that Zuckerberg's ambitious plans lack adequate consideration of the potential consequences and ethical implications.

Following the announcement, Zuckerberg told the public that he sees Facebook's continued user growth as an opportunity to leverage data from Facebook and Instagram to develop powerful, general-purpose artificial intelligence. With hundreds of billions of publicly shared images and tens of billions of public videos on these platforms, along with a significant volume of public text posts, Zuckerberg believes this data can provide unique insights and feedback loops to advance AI technology.

Furthermore, as per Zuckerberg, Meta has access to an even larger dataset than Common Crawl, comprised of user-generated content from Facebook and Instagram, which could potentially enable the development of a more sophisticated chatbot. This advantage extends beyond sheer volume; the interactive nature of the data, particularly from comment threads, is invaluable for training conversational AI agents. This strategy mirrors OpenAI's approach of mining dialogue-rich platforms like Reddit to enhance the capabilities of its chatbot. 

What Is the Threat?

Meta's plan to train its AI on personal posts and conversations from Facebook comments raises significant privacy concerns. Additionally, the internet is rife with toxic content, including personal attacks, insults, racism, and sexism, which poses a challenge for any chatbot training system. Apple, known for its cautious approach, has faced delays in its Siri relaunch due to these issues. However, Meta's situation may be particularly problematic given the nature of its data sources. 

Facebook's Two Decades: Four Transformative Impacts on the World

 

As Facebook celebrates its 20th anniversary, it's a moment to reflect on the profound impact the platform has had on society. From revolutionizing social media to sparking privacy debates and reshaping political landscapes, Facebook, now under the umbrella of Meta, has left an indelible mark on the digital world. Here are four key ways in which Facebook has transformed our lives:

1. Revolutionizing Social Media Landscape:
Before Facebook, platforms like MySpace existed, but Mark Zuckerberg's creation quickly outshone them upon its 2004 launch. Within a year, it amassed a million users, surpassing MySpace within four years, propelled by innovations like photo tagging. Despite fluctuations, Facebook steadily grew, reaching over a billion monthly users by 2012 and 2.11 billion daily users by 2023. Despite waning popularity among youth, Facebook remains the world's foremost social network, reshaping online social interaction.

2. Monetization and Privacy Concerns:
Facebook demonstrated the value of user data, becoming a powerhouse in advertising alongside Google. However, its data handling has been contentious, facing fines for breaches like the Cambridge Analytica scandal. Despite generating over $40 billion in revenue in the last quarter of 2023, Meta, Facebook's parent company, has faced legal scrutiny and fines for mishandling personal data.

3. Politicization of the Internet:
Facebook's targeted advertising made it a pivotal tool in political campaigning worldwide, with significant spending observed, such as in the lead-up to the 2020 US presidential election. It also facilitated grassroots movements like the Arab Spring. However, its role in exacerbating human rights abuses, as seen in Myanmar, has drawn criticism.

4. Meta's Dominance:
Facebook's success enabled Meta, previously Facebook, to acquire and amplify companies like WhatsApp, Instagram, and Oculus. Meta boasts over three billion daily users across its platforms. When unable to acquire rivals, Meta has been accused of replicating their features, facing regulatory challenges and accusations of market dominance. The company is shifting focus to AI and the Metaverse, indicating a departure from its Facebook-centric origins.

Looking ahead, Facebook's enduring popularity poses a challenge amidst rapid industry evolution and Meta's strategic shifts. As Meta ventures into the Metaverse and AI, the future of Facebook's dominance remains uncertain, despite its monumental impact over the past two decades.

Mark Zuckerberg Apologizes to Families in Fiery US Senate Hearing


In a recent US Senate hearing, Mark Zuckerberg, the CEO of Meta (formerly Facebook), faced intense scrutiny over the impact of social media platforms on children. Families who claimed their children had been harmed by online content were present, and emotions ran high throughout the proceedings.

The Apology and Its Context

Zuckerberg's apology came after families shared heartbreaking stories of self-harm and suicide related to social media content. The hearing focused on protecting children online, and it provided a rare opportunity for US senators to question tech executives directly. Other CEOs, including those from TikTok, Snap, X (formerly Twitter), and Discord, were also in the hot seat.

The central theme was clear: How can we ensure the safety and well-being of young users in the digital age? The families' pain and frustration underscored the urgency of this question.

The Instagram Prompt and Child Sexual Abuse Material

One important topic during the hearing was an Instagram prompt related to child sexual abuse material. Zuckerberg acknowledged that the prompt was a mistake and expressed regret. The prompt mistakenly directed users to search for explicit content when they typed certain keywords. This incident raised concerns about the effectiveness of content moderation algorithms and the need for continuous improvement.

Zuckerberg defended the importance of free expression but also recognized the responsibility that comes with it. He emphasized the need to strike a balance between allowing diverse viewpoints and preventing harm. The challenge lies in identifying harmful content without stifling legitimate discourse.

Directing Users Toward Helpful Resources

During his testimony, Zuckerberg highlighted efforts to guide users toward helpful resources. When someone searches for self-harm-related content, Instagram now directs them to resources that promote mental health and well-being. While imperfect, this approach reflects a commitment to mitigating harm.

The Role of Parents and Educators

Zuckerberg encouraged parents to engage with their children about online safety and set boundaries. He acknowledged that technology companies cannot solve these issues alone; collaboration with schools and communities is essential.

Mark Zuckerberg's apology was a significant moment, but it cannot be the end. Protecting children online requires collective action from tech companies, policymakers, parents, and educators. We must continue to address the challenges posed by social media while fostering a healthy digital environment for the next generation.

As the hearing concluded, the families' pain remained palpable. Their stories serve as a stark reminder that behind every statistic and algorithm lies a real person—a child seeking connection, validation, and safety. 

Trading Tomorrow's Technology for Today's Privacy: The AI Conundrum in 2024

 


Artificial Intelligence (AI) is a technology that continually absorbs and redistributes humanity's collective knowledge through machine learning algorithms. It is fast becoming all-pervasive, and it is increasingly clear that as the technology advances, so do concerns about how it manages data, or fails to. As 2024 begins, several developments look set to have long-lasting impacts.

Google's recent integration of Bard, its chat-based AI tool, into a host of other Google apps and services is a good example of how generative AI is being moved more directly into consumer life through text, images, and voice.

A super-charged version of Google Assistant, Bard connects to everything from Gmail, Docs, and Drive to Google Maps, YouTube, Google Flights, and hotel search. Working in a conversational, natural-language mode, it can filter enormous amounts of online data while providing personalised responses to individual users.

It can create shopping lists, summarise emails, and book trips, doing the work of a personal assistant for those without one. At the same time, 2023 offered many reminders that not everything seen or heard on the internet is real, whether in politics, movies, or even wars.

Artificial intelligence continues to advance rapidly, and the advent of deepfakes has raised concern in India about their potential to influence electoral politics, especially during the Lok Sabha elections planned for next year.

Deepfakes have risen sharply, causing widespread concern in the country. In a deepfake, artificial intelligence is used to create video or audio that depicts people doing or saying things they never did or said, spreading misinformation and damaging reputations.

In the wake of the massive leap in public consciousness about the importance of generative AI that occurred in 2023, individuals and businesses will be putting artificial intelligence at the centre of even more decisions in the coming year. 

Artificial intelligence is no longer a new concept. In 2023, ChatGPT, MidJourney, Google Bard, corporate chatbots, and other artificial intelligence tools have taken the internet by storm. Their capabilities have been commended by many, while others have expressed concerns regarding plagiarism and the threat they pose to certain careers, including those related to content creation in the marketing industry. 

There is no denying that artificial intelligence has dramatically changed the privacy landscape. Whatever your feelings about AI, most people will agree that AI tools are trained on data collected from their creators and their users.

It can be difficult to maintain transparency about how this data is handled, and even harder for users to understand it. Users may also forget that their conversations with AI are not as private as text conversations with other humans, and they may inadvertently disclose sensitive data along the way.

The GDPR already protects users from fully automated decisions that determine the course of their lives; for example, an AI cannot deny a bank loan based solely on its analysis of someone's financial situation. Legislation proposed in many parts of the world should lead to more enforcement and regulation of artificial intelligence (AI) in 2024.

AI developers will also likely continue refining their tools into (hopefully) more privacy-conscious products as the laws governing them become more complex. Zamir anticipates that Bard Extensions will become even more personalised and integrated with the online shopping experience, auto-filling checkout forms, tracking shipments, and automatically comparing prices.

All of that entails risk, he says: unauthorised access to personal and financial information during automated form-filling, malicious interception of real-time tracking data, and even manipulated results in price comparisons.

2024 will bring a major transformation in artificial intelligence, one that will stir debate about privacy and security. From Google's Bard to deepfake anxieties, users should ride the wave of AI integration with vigilant minds and stay alert to its implications. The future of AI should be guided by a moral compass, one that steers innovation and ensures the technology responsibly enriches lives.

Meta Extends Ad-Free Facebook and Instagram Premium Access Worldwide



With the introduction of its ad-free subscription service, Meta, the parent company of Facebook and Instagram, is offering European users the chance to enjoy their favourite social platforms without being bombarded with advertisements. The move follows a recent ruling by the EU's Court of Justice ordering Meta to obtain users' consent before personalising ads for them, and it shows the company adapting to the European Union's changing regulatory framework.

According to the announcement, from November users in these regions will be able to choose between continuing to use the platforms for free with ads or signing up for a paid, ad-free subscription. Subscribers' information will not be used to target adverts during the subscription period.

Facebook and Instagram users in the European Union will soon be able to enjoy an ad-free experience, but at a cost. Starting in November of this year, they will be able to opt into the new premium service offered by Meta, the parent company that owns and operates both platforms.

Regarding pricing, users aged 18 and over will pay €9.99 per month (roughly $10.55) to access the platforms without advertisements through a web browser, and €12.99 per month through the iOS and Android apps. After enrolling, subscribers will not be shown ads on Facebook or Instagram, and their data and online activity will not be used to tailor future ads.

Beginning on March 1, 2024, every additional account added to a user's Account Center will incur an extra monthly fee of €6 on the web and €8 on iOS and Android.
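Purely for illustration, the arithmetic of the announced fee schedule can be sketched as follows, using the figures quoted above; the function and its names are hypothetical, not any Meta API.

```python
def monthly_cost(platform: str, linked_accounts: int = 1) -> float:
    """Estimate the monthly cost in euros under the announced EU pricing.

    The base price covers the first account; from 1 March 2024 each additional
    linked account in Account Center costs extra.
    """
    base = {"web": 9.99, "app": 12.99}    # euros per month for the first account
    extra = {"web": 6.00, "app": 8.00}    # euros per month per additional account
    additional = max(linked_accounts - 1, 0)
    return round(base[platform] + additional * extra[platform], 2)

print(monthly_cost("web"))                      # 9.99
print(monthly_cost("app", linked_accounts=3))   # 12.99 + 2 * 8.00 = 28.99
```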

Historically, Meta has operated solely by offering free social networking services to its users and selling advertising to companies that wish to reach them. The shift illustrates how data privacy laws and other government policies affecting technology companies, especially in Europe, are pushing companies to redesign their products to comply.

The European Union's new rules, which cover more than 450 million people across 27 countries, also require Amazon, Apple, Google, TikTok and other companies to comply. According to Meta's estimates, about 258 million people use Facebook and about 257 million use Instagram in the region each month.

The higher iOS and Android prices reflect the fees Apple and Google impose under their respective purchasing policies. Until March 1, 2024, a single subscription covers all linked accounts in a user's Account Center; after that date, each additional account costs €6 per month on the web and €8 per month on iOS and Android.

In July, the European Court of Justice, the highest court in the European Union, effectively barred Meta from combining data collected about users across its various platforms, including Facebook, Instagram and WhatsApp, as well as from outside websites and apps, in order to protect users' privacy. EU regulators had already fined Meta in January for making acceptance of personalised ads a condition of using Facebook, a decision issued in response to violations of privacy regulations. Responding to the July judgment, Meta said that offering Europeans a subscription service without adverts could be a way to comply in full with the ruling.

A subscription lets users access the platforms without being exposed to advertising. There has never been a paid, ad-free tier for Facebook or Instagram; since the company's earliest days, founder Mark Zuckerberg has maintained that its services should be offered for free, supported by advertising.

Now, however, Meta is offering Instagram and Facebook users a way to subscribe to both services through one simple option. Because the move was made under pressure from the European Union, the option is only available to customers there.

This means Instagram users in India will continue to see ads in their feeds whether they want them or not. If the subscription plans prove popular in the European Union and Meta sees value in them, similar plans might be introduced in India in the future.

The subscription price does seem steep, even more so in Indian rupees, roughly Rs 880 to Rs 1,150, but the promise of an ad-free Instagram and Facebook is tempting. Meta also promises that paid subscribers' personal information will not be used for targeted marketing.

Not long ago, Mark Zuckerberg said in an interview that Facebook wants its users to have free access to the service and relies on ads so that both users and the company benefit, a point Facebook and its CEO have made again and again.

There will be no change to the ad-supported experience for users who choose to keep using Facebook and Instagram for free. Meta's tools and settings will continue to let them control their ad preferences, the ads they are shown, and the data used for ad targeting.

Advertisers will still be able to run personalised campaigns targeting users in Europe who opt for the free, ad-supported services. To preserve value for both users and businesses, Meta has committed to investing in new tools that offer enhanced control over the ad experience on its platforms.

Because only users over 18 can subscribe for an ad-free experience, Meta says it is also exploring options to give teenagers a responsible ad experience in line with the evolving regulatory landscape.

Privacy Class Action Targets OpenAI and Microsoft

A new consumer privacy class action lawsuit has been filed against OpenAI and Microsoft. The legal action responds to alleged privacy violations in how the companies handled user data, and it could prove a turning point in the continuing debate over internet companies and consumer privacy rights.

The complaint, which was submitted on September 6, 2023, claims that OpenAI and Microsoft both failed to protect user information effectively, infringing on the rights of consumers to privacy. According to the plaintiffs, the corporations' policies for gathering, storing, and exchanging data did not adhere to current privacy laws.

The plaintiffs accuse OpenAI and Microsoft of amassing vast quantities of personal data without explicit user consent, potentially exposing sensitive information to unauthorized third parties. The complaint also raises concerns about the transparency of the companies' data-handling policies.

This lawsuit follows a string of high-profile privacy-related incidents in the tech industry, emphasizing the growing importance of protecting user data. Critics argue that as technology continues to play an increasingly integral role in daily life, companies must take more proactive measures to ensure the privacy and security of their users.

The case against OpenAI and Microsoft echoes similar legal battles involving other tech giants, including Meta (formerly Facebook), further underscoring the need for comprehensive privacy reform. Sarah Silverman, a prominent figure in the entertainment industry, recently filed a lawsuit against OpenAI, highlighting the potentially far-reaching implications of this case.

The outcome of this lawsuit could potentially set a precedent for future legal action against companies that fall short of safeguarding consumer privacy. It may also prompt a broader conversation about the role of regulatory bodies in enforcing stricter privacy standards within the tech industry.

As the legal proceedings unfold, all eyes will be on the courts to see how this case against OpenAI and Microsoft will shape the future of consumer privacy rights in the United States and potentially serve as a catalyst for more robust data protection measures across the industry.

Vietnamese Cybercriminals Exploit Malvertising to Target Facebook Business Accounts

Cybercriminals associated with the Vietnamese cybercrime ecosystem are exploiting social media platforms, including Meta-owned Facebook, as a means to distribute malware. 

According to Mohammad Kazem Hassan Nejad, a researcher from WithSecure, malicious actors have been utilizing deceptive ads to target victims with various scams and malvertising schemes. This tactic has become even more lucrative with businesses increasingly using social media for advertising, providing attackers with a new type of attack vector – hijacking business accounts.

Over the past year, cyber attacks against Meta Business and Facebook accounts have gained popularity, primarily driven by activity clusters like Ducktail and NodeStealer, known for targeting businesses and individuals operating on Facebook. 

Social engineering plays a crucial role in gaining unauthorized access to user accounts, with victims being approached through platforms such as Facebook, LinkedIn, WhatsApp, and freelance job portals like Upwork. Search engine poisoning is another method employed to promote fake software, including CapCut, Notepad++, OpenAI ChatGPT, Google Bard, and Meta Threads.

Common tactics among these cybercrime groups include the misuse of URL shorteners, the use of Telegram for command-and-control (C2), and legitimate cloud services like Trello, Discord, Dropbox, iCloud, OneDrive, and Mediafire to host malicious payloads.

Ducktail, for instance, employs lures related to branding and marketing projects to infiltrate individuals and businesses on Meta's Business platform. In recent attacks, job and recruitment-related themes have been used to activate infections. 

Potential targets are directed to fraudulent job postings on platforms like Upwork and Freelancer through Facebook ads or LinkedIn InMail. These postings contain links to compromised job description files hosted on cloud storage providers, leading to the deployment of the Ducktail stealer malware.

The Ducktail malware is designed to steal saved session cookies from browsers, with specific code tailored to take over Facebook business accounts. These compromised accounts are sold on underground marketplaces, fetching prices ranging from $15 to $340.

Recent attack sequences observed between February and March 2023 involve the use of shortcut and PowerShell files to download and launch the final malware. The malware has evolved to harvest personal information from various platforms, including X (formerly Twitter), TikTok Business, and Google Ads. It also uses stolen Facebook session cookies to create fraudulent ads and gain elevated privileges.

One of the primary methods used to take over a victim's compromised account involves adding the attacker's email address, changing the password, and locking the victim out of their Facebook account.

The malware has incorporated new features, such as using RestartManager (RM) to kill processes that lock browser databases, a technique commonly found in ransomware. Additionally, the final payload is obfuscated using a loader to dynamically decrypt and execute it, making analysis and detection more challenging.

To hinder analysis efforts, the threat actors use uniquely generated assembly names and rely on SmartAssembly, bloating, and compression to obfuscate the malware.

Researchers from Zscaler also observed instances where the threat actors initiated contact using compromised LinkedIn accounts belonging to users in the digital marketing field, leveraging the authenticity of these accounts to aid in social engineering tactics. This highlights the worm-like propagation of Ducktail, where stolen LinkedIn credentials and cookies are used to log in to victims' accounts and expand their reach.

Ducktail is just one of many Vietnamese threat actors employing shared tools and tactics for fraudulent schemes. A Ducktail copycat known as Duckport, which emerged in late March 2023, engages in information stealing and Meta Business account hijacking. Notably, Duckport differs from Ducktail in terms of Telegram channels used for command and control, source code implementation, and distribution, making them distinct threats.

Duckport employs a unique technique of sending victims links to branded sites related to the impersonated brand or company, redirecting them to download malicious archives from file hosting services. Unlike Ducktail, Duckport replaces Telegram as a channel for passing commands to victims' machines and incorporates additional information stealing and account hijacking capabilities, along with taking screenshots and abusing online note-taking services as part of its command and control chain.

"The Vietnamese-centric element of these threats and high degree of overlaps in terms of capabilities, infrastructure, and victimology suggests active working relationships between various threat actors, shared tooling and TTPs across these threat groups, or a fractured and service-oriented Vietnamese cybercriminal ecosystem (akin to ransomware-as-a-service model) centered around social media platforms such as Facebook," WithSecure said.

Norway Cracks Down on Meta with Fines for Facebook Privacy Breaches

 


The Norwegian Data Protection Authority (Datatilsynet) will impose a fine of 1 million crowns ($98,500) per day on Facebook owner Meta Platforms from August 14 over privacy breaches. A penalty of this magnitude could have major implications for other countries in Europe, since it may set a precedent.

In a court filing, Meta Platforms has asked a Norwegian court to stay the fine imposed by the country's data regulator, which argues that the owner of Facebook and Instagram breached users' privacy on both platforms.

Meta has requested a temporary injunction to prevent the order from being enforced, and its petition will be heard over two days beginning August 22. The company's Norwegian lawyer directed media inquiries to Meta, which did not respond to a request for comment.

Datatilsynet has instructed Meta Platforms not to collect personal data about users in Norway, including their physical locations, for use in behavioural advertising, i.e. advertising targeted at specific user groups.

The practice is widespread among Big Tech companies. Tobias Judin, head of Datatilsynet's international section, said Meta will be fined 1 million crowns per day from next Monday if it does not comply with the order.

Meta has lodged a court protest against the fine, according to Datatilsynet. The order currently runs until November 3, but Datatilsynet can refer the decision to the European Data Protection Board, which has the authority to endorse it and make the fine permanent.

If approved, the decision would have an impact across the entire European region, although Datatilsynet has not yet taken that step. Meta recently announced that it intends to seek consent from users in the European Union before allowing businesses to target them with advertisements based on how they interact with services such as Instagram and Facebook.

Judin said Meta's proposed consent mechanism was insufficient, and that the company should immediately cease data processing and not resume it until a fully functional consent mechanism is in place. From Monday, he said, people's rights are being violated, even if many of them are unaware of it.

A Meta spokesperson explained that the decision to modify their approach was prompted by regulatory obligations in the European region, which came as a result of an order issued in January by the Irish Data Protection Commissioner regarding EU-wide data protection regulations. 

The Irish authority, which acts as Meta's primary regulator within the European Union, requires the company to review the legal basis it relies on to target customers with advertisements. Norway is not a member of the European Union, but it is part of the European single market.

Elon Musk's X Steps Up: Pledges Legal Funds for Workers Dealing with Unfair Bosses

 


In a recent interview, Elon Musk said that his social media platform X, formerly known as Twitter, would cover users' legal bills and sue on their behalf if they were treated unfairly by their employers for posting or liking something on the platform.

Musk shared no further details about what he considers "unfair treatment" by employers or how users seeking legal counsel will be vetted.

In a follow-up, he stated that the company would fund the legal fees regardless of the cost. The company has not said who qualifies for legal support or how users will be screened for eligibility.

Over the years, social media users, including celebrities and many other public figures, have faced controversy with their employers over posts, likes, or reposts they have made.

Musk also announced earlier in the day that a fight between him and Meta CEO Mark Zuckerberg would be streamed live on the platform. The two tech titans agreed to a cage fight last month after each accepted the other's challenge.

Musk said the Zuck v Musk fight would be live-streamed on X, with all proceeds going to a charity for veterans. In late October, the tech billionaire shared a graph showing the latest count and said X had reached a new record for monthly users.

X had reached 540 million users at the end of October, he added. It was reported in January by the Daily Wire that Kara Lynne, a streamer at a gaming company, was fired from her job for following the controversial X account "Libs of TikTok".

The figures come as the company restructures and tries to boost falling advertising revenue. The Twitter bird logo had been familiar for 17 years, but in July Musk replaced it, renaming the social media platform X and committing to building an "all-in-one app".

A few weeks ago, Musk said the platform has negative cash flow because advertising revenue has dropped nearly 50 percent and it carries a large debt load. Although advertising revenue had been expected to pick up in June, the good news did not play out as hoped.

Since taking control of the company, Musk has allowed many previously banned users to return, including former President Donald Trump. He has also loosened content moderation policies and fired most of the team responsible for overseeing hate speech and other potentially harmful content on the site.

Musk's professed commitment to free speech has not always extended to those who exercise it: several journalists who wrote about his company were temporarily suspended, and an account that tracked his private jet's flight path using publicly available data was banned.

Several reports indicate Musk also publicly fired an employee who criticised him on the platform and laid off colleagues who criticised him in private. Since launching his initial bid to acquire Twitter early last year, he has campaigned against what he calls the "woke mind virus", sharing posts that target social causes such as transgender rights.

Musk, who is also Tesla's CEO, announced back in June that "cis" and "cisgender" would be treated as slurs on the app. Meanwhile, employee terminations over posts or public endorsements of offensive content on social media have been rising, and not only over controversial social issues.

Michelle Serna, a Californian tech worker, was fired in May after posting a TikTok video while a company meeting was taking place in the background. The tycoon who bought Twitter for $44 billion last October, meanwhile, has seen its advertising business collapse, in part because of inadequate moderation of hate speech in recent months and the return of previously banned accounts.

According to Musk, his changes are motivated by a desire for free expression, and he has often lashed out at what he sees as the threat posed to it by shifting cultural sensibilities. The CCDH, a non-profit organisation focused on countering the spread of hate speech on the internet, says hate speech has flourished on the platform; X disputes that finding, and Musk is suing the organisation over it.

Musk reinstated Trump's Twitter account in December, but the former US president has yet to resume using it. Trump was banned in early 2021 over his role in the January 6 attack on the Capitol, in which his supporters tried unsuccessfully to overturn the results of the 2020 election. A US media outlet also reports that X recently reinstated Kanye West's account, eight months after he was suspended for posting an antisemitic comment.

The Met Police passed victims' data to Facebook

 


The UK's largest police force gathered sensitive details about people using its website to report crimes such as sexual offences and domestic violence and, the Observer reports, shared users' data with Facebook so that advertising could be targeted at them.

The Observer's analysis found that the Metropolitan Police website included a tracking tool that recorded information about people's browsing activity and their use of the "secure" online form for reporting crimes.

The tool, known as a Meta Pixel, sent the information, including the type of offence being reported and the user's Facebook profile code, to the social media giant.

The Met removed the tracker from its website a week after the Observer published its findings and raised concerns about its use. Critics said the approach demonstrated a lack of respect for people's rights and dignity. The force added that no personal data, such as the messages people sent to police when reporting a crime, had been shared with the company.

The Met suggested the data transmission had been accidental: the tracking tool had been installed to help serve ads to people who indicated an interest in joining the force, and steps were taken to remove any Meta Pixels from pages on its website unrelated to recruitment marketing campaigns, to avoid unnecessary concern.

When the Observer analysed police websites across England, Wales, Scotland and Northern Ireland last week, it found four forces using the pixel to track visitors: the Metropolitan Police, Police Scotland, Norfolk Constabulary, and Suffolk Constabulary.

As with the Met, Norfolk and Suffolk shared data with Meta about how people accessed sensitive web pages, including when visitors clicked links to report antisocial behaviour, domestic abuse, rape, hate crimes, and corruption, or clicked the "Tell us something anonymously" button. Norfolk and Suffolk police said the tracking tools were used "for recruitment purposes".

Victims' charities and privacy experts have called the data sharing a shocking violation of trust, one that could undermine public confidence in the police.

Dame Vera Baird, the former victims’ commissioner, said: "You think you are dealing with a public authority you can trust and you are dealing with Facebook and the wild world of advertising." 

Using advertising pixels in this context, said Mark Richards, a researcher who focuses on online privacy, is like asking a person to report a crime while a stranger is present in the room.

Prof David Leslie, director of ethics at the Alan Turing Institute, said the collection and sharing of the data felt "reckless", and that people appeared to have been given only partial or "misleading" information about how their data would be used.

The UK's privacy watchdog, the Information Commissioner's Office, said in a statement that the findings raised serious privacy concerns. "These sites are for the convenience of crime victims, as well as their family members and witnesses. They would expect their information to be handled thoughtfully," it said. The ICO is already investigating the use of the Meta Pixel by NHS trusts on their websites and said the latest evidence would be taken into account.

Businesses use the Meta Pixel, a free tool offered by Facebook, to track people who visit their websites so they can be reached in future marketing campaigns.

Facebook pitches the tool as a way for organisations of all sizes to gain insight into the performance of their websites and the behaviour of their visitors, including those who do not have Facebook accounts.

The Meta Pixel collects unique identifiers such as IP addresses and Facebook profile IDs, but there is no evidence that the company has tried to identify people as victims of crime or targeted them with advertisements based on their status as victims or witnesses. Nor is there any suggestion that the details of people's interactions with the police, such as the content of what they reported, were shared with the company.
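To make the mechanism concrete, the sketch below approximates the kind of request a tracking pixel fires from a visitor's browser when a page loads; the endpoint and parameter names are simplified assumptions for illustration, not Meta's exact specification.

```python
from urllib.parse import urlencode

def pixel_request(pixel_id: str, page_url: str, event: str, browser_ids: dict) -> str:
    """Build the query string a tracking pixel would typically send to an ad
    platform when a page loads (simplified, illustrative parameter names)."""
    params = {
        "id": pixel_id,    # identifies which organisation's pixel fired
        "ev": event,       # event name, e.g. "PageView"
        "dl": page_url,    # full URL of the page being viewed
        **browser_ids,     # cookies / profile identifiers already in the browser
    }
    return "https://tracking.example.com/tr?" + urlencode(params)

# A page URL alone can reveal sensitive context, such as which reporting form
# a visitor opened, even before any form fields are submitted.
print(pixel_request(
    pixel_id="123456789",
    page_url="https://police-website.example/report/domestic-abuse",
    event="PageView",
    browser_ids={"fbp": "fb.1.1700000000000.1234567890"},
))
```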

The Observer's investigation found that many police websites also share data with Google for advertising. That information showed that a person had visited a police website, but did not appear to reveal which sensitive pages they viewed or whether they used online reporting tools or forms. A police force and a military organisation are also believed to have shared data with Twitter to advertise their services. Stef Elliott, the ICO's chief digital privacy adviser, described the problems with Google advertising as "systemic" after a report she made about the issue earlier this month.

In many cases the data was shared after web users clicked "I agree" on a pop-up consent banner shown on police websites, including the Met's. The banner typically said only "We use cookies on this site to give you a better, more personalized experience", without mentioning advertising or saying that data would be shared with third parties such as Facebook. The Met's privacy statement did mention advertising, but said the information collected would be used only for recruitment campaigns and not by third parties for business purposes.

Will Threads be a 'Threat' to Twitter?


About Threads

Meta, Instagram's parent company, has launched Threads, a text-based conversation app that rivals Twitter.

Threads, released on Wednesday evening, a day ahead of its scheduled launch, lets users sign up directly with their Instagram accounts and publish short posts or updates of up to 500 characters, which can include links, photos, or videos up to five minutes long.

More than 2 billion monthly active users will be able to import their accounts into Threads once it is made available to everyone.

Threads now has 70 million signups, according to a Friday morning post by Meta CEO Mark Zuckerberg, and that number is certain to rise over the next few days. (By comparison, Instagram has 1.3 billion users logging on every day, Twitter had 259 million daily active users at the end of 2022, and Mastodon has about 13 million accounts in total.)

A Threat to Twitter

Adam Mosseri, the CEO of Instagram, claimed that under Musk, Twitter's "volatility" and "unpredictability" gave Instagram the chance to compete. According to Mosseri in an interview, Threads is made for "public conversations," which is an obvious reference to how Twitter executives have described the service's function throughout the years.

On the space Threads is entering, Mosseri says: “Obviously, Twitter pioneered the space[…]And there are a lot of good offerings out there for public conversations. But just given everything that was going on, we thought there was an opportunity to build something that was open and something that was good for the community that was already using Instagram.”

Meta has been preparing to introduce Threads for some time, describing it as a "sanely run" alternative to Twitter. According to reporting that cites internal company documents, the backlash to Musk's recent limits on how many tweets people can view per day served as the impetus for this week's release, and Meta expects "tens of millions" of users to join Threads within its first few months.

Mosseri describes Threads as a "risky endeavor", especially since it is a brand-new app that users must download. Those who received early access were able to fill out their account information and follow lists quickly, because Meta automatically pulled the details from their Instagram accounts.

In many important respects, Threads is strikingly similar to Twitter. Posts (or, as Mosseri calls them, "threads") from accounts you follow appear in the app's main feed alongside accounts recommended by Instagram's algorithm. You can add your own comment when reposting something, and replies are clearly displayed in the main feed. There is no feed containing only the people you follow, though one might be added later.

Twitter's long history and distinctive network present another challenge Threads must contend with. Meta's behaviour makes clear that, despite Musk's theatrics over the past few months, unseating Twitter will not be easy. In Mosseri's view, it would be a mistake to "undervalue Twitter and Elon": the Twitter community is tremendously powerful and vibrant, with a long history, and its network effects are very strong.

AI: the cause of the metaverse's demise?

 


In a dramatic change from its past plans to build a virtual world known as "the metaverse", Facebook has taken a completely different direction. The project consumed billions of dollars and resulted in a cumulative loss of $26 billion. Under investor pressure, Facebook and other companies have let the metaverse wither while they chase the latest trend: artificial intelligence.

Abandoned by the business world, the metaverse, a once-hot technology that promised users a disorienting, video-game-like virtual world in which to interact awkwardly, has effectively died only three years after being touted as a new era in communication technology.

Meta Platforms CEO Mark Zuckerberg had planned to make the metaverse his next big thing, but he has quietly shelved the ambitious project indefinitely to focus on artificial intelligence (AI).

Facebook's CEO Mark Zuckerberg announced in a post on Monday, 27 February, that Meta would establish an artificial intelligence product group dedicated to generative artificial intelligence. 

There was a time when the advent of the metaverse was touted as the dawn of a dynamic, remote interactive environment and regarded as a turning point in technology. Even as it became the talk of the town, it faced severe criticism and backlash, and in recent years public interest has rapidly declined. Although Mark Zuckerberg reintroduced the metaverse concept, he no longer pitches it to advertisers.

Virtual real estate tells a similar story. The price of Ethereum, the cryptocurrency that powers much of this activity, has a direct impact on the value of virtual land in the metaverse, and with Ethereum prices volatile recently, many buyers and sellers have struggled to keep up with the market.

WeMeta reports that the average sale price of a virtual land parcel has plummeted over the past year from more than US$11,000 to under US$2,000.

Virtual land sales also declined by a remarkable 85% in 2022. As a result, Ethereum-based metaverse projects such as Decentraland and Sandbox are seeing sharp reductions in their valuations and other key metrics.

In February 2022, land sold across Decentraland reached some of its highest prices ever, averaging about US$37,200. By August, the average value had fallen to US$5,100, a drop of roughly 86 percent. Sandbox's average sale price likewise fell from around US$35,500 in January to US$2,800 in August.

The volatility of cryptocurrency prices, specifically Ethereum, has introduced substantial uncertainty to the market, leaving investors wary of virtual investments. The current metaverse also lacks proper infrastructure, governance, and collaboration, which is partly why some people dismiss it as little more than a marketing gimmick.

The metaverse has, for now, joined the list of failed tech ideas consigned to the industry's graveyard, and the speed with which it was born, hyped, and abandoned says a lot about how trend-driven the tech world has become.

AI, by contrast, has a real chance of changing how consumers and businesses operate. AI-powered chatbots can automate repetitive tasks efficiently, and tools such as ChatGPT can respond to queries in a human-like fashion. As Reality Labs places more emphasis on AI, it may reduce the company's losses and open up new opportunities for Meta to tap into.

AI is one of the fastest-developing fields and continues to make rapid advances across industries, including marketing, media, and even healthcare. According to research firm Gartner, generative AI in these fields is set to grow dramatically: by 2025, the share of outbound marketing messages from large organizations that are synthetically generated is expected to rise from less than 2 percent in 2022 to 30 percent. And marketing will not be the only area affected.

Gartner also projects that, by 2030, as much as 90% of the content in some text-to-video workflows could be generated by AI, with the remainder coming from human input.

Generative AI has vast possibilities, but access to it is not as wide as it could be. ChatGPT, for example, is not open source, so its underlying model cannot be inspected or replicated by other companies. Facebook, by contrast, intends to make these kinds of AI models smaller, which should make them more accessible and easier for companies to use, helping generative AI become more widespread.

Some reports have declared this the end of the metaverse; others argue that Meta's redirection should not be read as a rejection of the metaverse at large. Amara's Law, coined by researcher and futurist Roy Amara, holds that we tend to overestimate a technology's impact in the short run and underestimate it in the long run. The cycles of hype and skepticism around emerging technologies such as self-driving cars, virtual reality (VR), and augmented reality (AR) illustrate this tendency; the internet itself was once dismissed as a passing fad.

AI, and generative AI in particular, could also produce more convincing environments and characters in the metaverse, which would be a significant advance for the technology as a whole.

Some, indeed, deny that the metaverse is dead, or even that its popularity is waning, and predict that it will succeed as more companies adopt it.

For that to happen, though, some structural changes will be needed: VR headsets, for one, will have to become significantly cheaper and more privacy-friendly before they reach a mass audience.

Every invention begins as an idea, and often an unsettling one; over time, these innovations become so woven into daily life that we can no longer imagine a world without them. The metaverse may yet follow that path, or another immersive technology may take its place, so whether it re-emerges remains an open question.

A metaverse is, at heart, a virtual platform that creates a social network of sorts, and there is potential in that. A fully functional version would need to integrate interactive technologies such as VR, AR, and AI. Generative AI, then, does not necessarily spell the end of the metaverse; the two could well promote each other's development.

NHS Trusts Shared Private Information With Facebook

A report published by The Observer has revealed that NHS trusts have been sharing private information with Facebook. The newspaper's investigation found that the websites of 20 NHS trusts were using a covert tracking tool to collect browsing data that was then shared with the tech giant, a major breach of patient privacy.

The trusts had assured people that they would not collect personal information about them, yet the data was gathered without the consent of those involved. It showed the pages people visited, the buttons they clicked, and the keywords they searched for.

The data was matched with the user's IP address and, in many cases, linked to their Facebook account details.

Combined with their medical information in this way, the data could reveal a person's medical conditions, doctors' appointments, and the treatments they have received.

Facebook could then use that data for advertising campaigns aligned with its business objectives.

News of the Meta Pixel revelations has caused alarm across the NHS trust community: 17 of the 20 trusts using the tracking tool have taken drastic measures in response, with some apologising for the incident.

What is the Meta Pixel, and how does it work?

The Meta Pixel is an advertising tool that lets companies monitor visitor activity on their web pages and gain a deeper understanding of what visitors do there.

The pixel has been identified on 33 hospital websites where, whenever someone clicks a button to book an appointment, Facebook receives "a packet of data" from the Meta Pixel. That data is tied to an IP address, which can in turn be linked to a specific individual or household.
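To make the mechanics more concrete, here is a minimal TypeScript sketch of how a third-party pixel of this kind can report a button click together with the page it happened on. The endpoint, event name, field names, and track helper are assumptions made for illustration only; this is not Meta's actual code.

```typescript
// Conceptual sketch of a tracking pixel: all names below are hypothetical.
interface TrackingEvent {
  event: string;        // e.g. a hypothetical "BookAppointment" event
  pageUrl: string;      // the page the visitor is currently on
  buttonText?: string;  // the label of the element that was clicked
  timestamp: number;
}

function track(event: TrackingEvent): void {
  // Pixels typically report events with a small background request to the
  // vendor's collection endpoint. The visitor's IP address travels with the
  // HTTP request itself and can later be joined to other identifiers
  // (cookies, logged-in accounts) on the vendor's side.
  const endpoint = "https://tracker.example.com/collect"; // hypothetical
  navigator.sendBeacon(endpoint, JSON.stringify(event));
}

// Fire an event whenever a "book appointment" button is clicked.
document.querySelectorAll<HTMLButtonElement>("button.book-appointment").forEach((btn) => {
  btn.addEventListener("click", () => {
    track({
      event: "BookAppointment",
      pageUrl: window.location.href,
      buttonText: btn.textContent ?? undefined,
      timestamp: Date.now(),
    });
  });
});
```

The point is simply that a single script embedded in a page is enough to send click-level detail, along with the visitor's IP address, to a third party.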

Eight trusts are reported to have apologised to their patients, and several were unaware that they were sending patient data to Facebook at all, having installed the tracking pixel simply to monitor recruitment and charity campaigns. The Information Commissioner's Office (ICO) is investigating, and privacy experts have voiced their concerns.

Following these findings, the Meta Pixel has been removed from the Froedtert Hospital website.

Piedmont Healthcare, similarly, used the Meta Pixel to collect data about patients' upcoming doctor's appointments through its patient portal, including patients' names and the dates and times of their appointments.

Privacy experts have expressed concern over these findings, which they say point to widespread potential breaches of patient confidentiality and data protection that are, in their view, "completely unacceptable".

The data involved may amount to special category health information, which is given extra legal protection. Under the law, health information includes anything relating to an individual's health status, such as medical conditions, tests, and treatments.

It is impossible to determine exactly how the data is used once it reaches Facebook's servers. The company says that sending it sensitive medical data is prohibited, and that it has filters in place to weed out such information if it is received accidentally.
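Meta has not said how these filters work. Purely to illustrate the general idea, the sketch below shows a naive keyword-based filter in TypeScript; the term list, field names, and scrubEvent helper are hypothetical, not Meta's actual system.

```typescript
// Naive illustration of filtering incoming events for health-related terms.
// The term list and field names are hypothetical; a real system would be far
// more sophisticated, and the report suggests even those filters are imperfect.
const SENSITIVE_TERMS = ["hiv", "appointment", "diagnosis", "prescription"];

interface IncomingEvent {
  pageUrl: string;
  buttonText?: string;
}

// Return null (i.e. discard the event) if any field looks health-related.
function scrubEvent(event: IncomingEvent): IncomingEvent | null {
  const haystack = `${event.pageUrl} ${event.buttonText ?? ""}`.toLowerCase();
  return SENSITIVE_TERMS.some((term) => haystack.includes(term)) ? null : event;
}

// An event from a page about HIV medication would be dropped:
console.log(scrubEvent({ pageUrl: "https://example-trust.nhs.uk/hiv-medication" })); // null
```

Even in this toy form, the weakness is obvious: simple keyword matching can miss sensitive pages or flag harmless ones, which is why experts question how effective such filtering can be in practice.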

Several of the trusts involved explained that they originally implemented the tracking pixel to monitor recruitment or charity campaigns and had no idea that patient information was being sent to Facebook in the process.

Buckinghamshire Healthcare NHS Trust (BHNHST) has removed the tracking tool from its website, saying that the presence of the Meta Pixel was an unintentional error on its part.

When users accessed a patient handbook about HIV medication on its site, BHNHST appears to have shared information with Facebook, including the name of the drug, the trust's name, the user's IP address, and details of their Instagram account.

That is despite the trust's privacy policy making explicitly clear that health information it collects will not be used for marketing purposes without explicit consent.

Alder Hey Children's Trust in Liverpool likewise shared information with Facebook each time a user accessed a webpage about sexual development issues, crisis mental health services, or eating disorders.

Professor David Leslie, director of ethics at the Alan Turing Institute, warned that the transfer of patient information to third parties would erode the "delicate relationship of trust" between the NHS and its patients: "When accessing an NHS website, we have a reasonable expectation that our personal information will not be extracted and shared with third-party advertising companies or companies that might use it to target ads or link our personal information to health conditions."

Wolfie Christl, a data privacy expert who researches the ad tech industry, said regulators should have stopped this long ago, describing the practice as irresponsible, negligent, and unacceptable, and calling for it to end immediately.

The 20 NHS trusts found to be using the tracking tool together cover a population of around 22 million in England, stretching from Devon to the Pennines, and several had used it for years before it was discontinued.

Moreover, Meta is facing litigation over allegations that it knowingly received sensitive health information, including data taken from health portals, and took no steps to prevent it. Several plaintiffs have sued the company, alleging that it violated their medical privacy by intercepting and selling their individually identifiable health information from its partner websites.

Meta said it had contacted the trusts to remind them of its policies, which prohibit the sharing of health information with the company.

"Our corporate communication department educates advertisers on the proper use of business tools to avoid this kind of situation," the spokesperson added. The group added that it was the owner's responsibility to make sure that the website complied with all applicable data protection laws and that consent was obtained before sending any personal information. 

Questions have been raised about the effectiveness of the filters designed to weed out potentially sensitive information, and about what types of data from hospital websites the company would actually block. Meta also declined to explain why NHS trusts were able to send the data in the first place.

At the same time, the company tells advertisers that its business tools can help them grow by reaching people with health-related advertising, and its website carries guides on how it shows users ads that "might be of interest" based on data collected by those tools. Browse travel websites, for instance, and you might later see ads for hotel deals.

Separately, Ireland's Data Protection Commission (DPC) found that Meta had breached the General Data Protection Regulation (GDPR) by transferring Facebook users' data from the EU to the US without adequate safeguards.

The DPC imposed a record €1.2 billion fine on Meta Ireland and ordered it to suspend any future transfers of personal data to the US within five months. Meta has described the fine as unjustified.

FTC Proposes Ban on Meta Profiting Off Children’s Data

The Federal Trade Commission (FTC) has accused Facebook of violating its 2019 privacy agreement by allowing advertisers to target children with ads based on their activity on other apps and websites. The FTC has proposed banning Meta from profiting off children's data, including a blanket prohibition on the company monetizing the data of children aged under 13.

According to the FTC, Facebook's Messenger Kids app, which is aimed at children under 13, was also used to gather data on children's activity for advertising purposes. The app is designed to let children communicate with friends and family in a safe and controlled environment, but the FTC alleges that Facebook failed to adequately protect children's data and privacy.

The proposed ban would prevent Meta from using children's data to target ads or sharing such data with third-party advertisers. The FTC also suggested that the company should provide parents with greater control over the data that is collected about their children.

Facebook has responded to the FTC's allegations, saying that it has taken significant steps to protect children's privacy, including requiring parental consent before children can use the Messenger Kids app. The company also says it will continue to work with the FTC to resolve any concerns and will take any necessary steps to comply with the law.

The proposed ban on profiting off children's data is part of a wider crackdown by regulators on big tech companies and their data practices. The FTC has also proposed new rules that would require companies to obtain explicit consent from consumers before collecting or sharing their personal information.

In addition to the FTC's proposed ban, lawmakers in the US have also proposed new legislation that would strengthen privacy protections for children online. The bill, known as the Children's Online Privacy Protection Modernization Act, would update the Children's Online Privacy Protection Act (COPPA) to reflect changes in technology and the way children use the internet.

The proposed legislation would require companies to obtain parental consent before collecting any personal information from children under 16, and would also establish a new agency to oversee online privacy protections for children.

The proposed ban on profiting off children's data, along with the proposed legislation, highlights the growing concern among lawmakers and regulators over the use of personal data, particularly when it comes to vulnerable groups such as children. While companies may argue that they are taking steps to protect privacy, regulators are increasingly taking a tougher stance and pushing for more stringent rules to ensure that individuals' data is properly safeguarded.