
Where is AI Leading Content Creation?


Artificial Intelligence (AI) is reshaping the world of social media content creation, offering creators new possibilities and challenges. The fusion of art and technology is empowering creators by automating routine tasks, allowing them to channel their energy into more imaginative pursuits. AI-driven tools like Midjourney, ElevenLabs, Opus Clip, and Papercup are democratising content production, making it accessible and cost-effective for creators from diverse backgrounds.  

Automation is at the forefront of this revolution, freeing up time and resources for creators. These AI-powered tools streamline processes such as research, data analysis, and content production, enabling creators to produce high-quality content more efficiently. This democratisation of content creation fosters diversity and inclusivity, amplifying voices from various communities. 

Yet, as AI takes centre stage, questions arise about authenticity and originality. While AI-generated content can be visually striking, concerns linger about its soul and emotional depth compared to human-created content. Creators find themselves navigating this terrain, striving to maintain authenticity while leveraging AI-driven tools to enhance their craft. 

AI analytics are playing a pivotal role in content optimization. Platforms like YouTube utilise AI algorithms for A/B testing headlines, predicting virality, and real-time audience sentiment analysis. Creators, armed with these insights, refine their content strategies to tailor messages, ultimately maximising audience engagement. However, ethical considerations like algorithmic bias and data privacy need careful attention to ensure the responsible use of AI analytics in content creation. 
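
To make the A/B-testing idea concrete, here is a minimal sketch of how two headline variants might be compared on click-through rate using a two-proportion z-test. This is an illustration only, not any platform's actual algorithm, and the view and click counts are invented for the example.

```python
from math import sqrt
from statistics import NormalDist

def ab_test_headlines(clicks_a, views_a, clicks_b, views_b):
    """Compare click-through rates of two headlines with a two-proportion z-test."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value
    return p_a, p_b, z, p_value

# Invented numbers: variant B draws noticeably more clicks per view.
ctr_a, ctr_b, z, p = ab_test_headlines(clicks_a=120, views_a=4000,
                                       clicks_b=165, views_b=4100)
print(f"CTR A={ctr_a:.3f}, CTR B={ctr_b:.3f}, z={z:.2f}, p={p:.4f}")
```

A platform running such tests continuously can promote the winning variant automatically and feed the results back into its recommendation models.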

The rise of virtual influencers, like Lil Miquela and Shudu Gram, poses a unique challenge to traditional content creators. While these virtual entities amass millions of followers, they also threaten the livelihoods of human creators, particularly in influencer marketing campaigns. Human creators, by establishing genuine connections with their audience and upholding ethical standards, can distinguish themselves from virtual counterparts, maintaining trust and credibility. 

As AI continues its integration into content creation, ethical and societal concerns emerge. Issues such as algorithmic bias, data privacy, and intellectual property rights demand careful consideration for the responsible deployment of AI technologies. Upholding integrity and ethical standards in creative practices, alongside collaboration between creators, technologists, and policymakers, is crucial to navigating these challenges and fostering a sustainable content creation ecosystem. 

In this era of technological evolution, the impact of AI on social media content creation is undeniable. As we embrace the possibilities it offers, addressing ethical concerns and navigating through the intricacies of this digitisation is of utmost importance for creators and audiences alike.

 

Meta’s Facebook, Instagram Back Online After Two-Hour Outage

 

On March 5, a technical failure resulted in widespread login issues across Meta's Facebook, Instagram, Threads, and Messenger platforms.

Meta's head of communications, Andy Stone, confirmed the issues on X, formerly known as Twitter, and stated that the company "resolved the issue as quickly as possible for everyone who was impacted, and we apologise for any inconvenience." 

Users reported getting locked out of their Facebook accounts, while feeds on Facebook, Threads, and Instagram did not refresh. WhatsApp, which is also owned by Meta, seemed unaffected.

A senior official from the United States Cybersecurity and Infrastructure Security Agency told reporters Tuesday that the agency was "not cognizant of any specific election nexus nor any specific malicious cyber activity nexus to the outage.” 

The outage occurred just ahead of the March 7 deadline for Big Tech firms to comply with the European Union's new Digital Markets Act. To comply, Meta is making modifications, including allowing users to separate their Facebook and Instagram accounts and preventing personal information from being pooled to target them with online adverts. It is unclear whether the downtime is related to Meta's preparations for the DMA. 

Facebook, Instagram, and WhatsApp went down for hours in 2021, an outage the firm blamed on faulty configuration changes to the routers that coordinate network traffic between its data centres. The following year, WhatsApp experienced another brief outage. 

Facebook engineers were dispatched to one of its key US data centres in California to restore service, indicating that the fix could not be done remotely. Further complicating matters, the outage briefly prevented some employees from using their badges to access workplaces and conference rooms, according to The New York Times, which initially reported that engineers had been called to the data centre.

The “Mother of All Breaches”: Implications for Businesses


In the vast digital landscape, data breaches have become an unfortunate reality. However, some breaches stand out as monumental, and the recent discovery of the “mother of all breaches” (MOAB) is one such instance. Let’s delve into the details of this massive security incident and explore its implications for businesses.

The MOAB Unveiled

At the beginning of this year, cybersecurity researchers stumbled upon a staggering dataset containing 26 billion leaked entries. This treasure trove of compromised information includes data from prominent platforms like LinkedIn, Twitter.com, Tencent, Dropbox, Adobe, Canva, and Telegram. But the impact didn’t stop there; government agencies in the U.S., Brazil, Germany, the Philippines, and Turkey were also affected.

The MOAB isn’t your typical data breach—it’s a 12-terabyte behemoth that cybercriminals can wield as a powerful weapon. Here’s why it’s a game-changer:

Identity Theft Arsenal: The stolen personal data within this dataset provides threat actors with a comprehensive toolkit. From email addresses and passwords to sensitive financial information, it’s a goldmine for orchestrating identity theft and other malicious activities.

Global Reach: The MOAB’s reach extends across borders. Organizations worldwide are at risk, and the breach’s sheer scale means that no industry or sector is immune.

Implications for Businesses

As business leaders, it’s crucial to understand the implications of the MOAB and take proactive measures to safeguard your organization:

1. Continual Threat Landscape

The MOAB isn’t a one-time event; it’s an ongoing threat. Businesses must adopt a continuous monitoring approach to detect any signs of compromise. Here’s what to watch out for (a short log-review sketch follows the list):

  • Uncommon Access Scenarios: Keep an eye on access logs. Sudden spikes in requests or unfamiliar IP addresses could indicate unauthorized entry. Logins during odd hours may also raise suspicion.
  • Suspicious Account Activity: Scammers might attempt to take over compromised accounts. Look for unexpected changes in user privileges, irregular login times, and frequent location shifts.
  • Phishing Surge: Breaches like the MOAB create fertile ground for phishing attacks. Educate employees and customers about recognizing phishing scams.
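
As a rough illustration of the first two checks in the list above, the sketch below reviews a hypothetical CSV access log and flags logins at odd hours as well as source IPs never seen before for a given user. The log format (timestamp, user, source_ip columns with ISO-8601 timestamps) is an assumption for the example, not any product's real schema.

```python
import csv
from collections import defaultdict
from datetime import datetime

ODD_HOURS = range(1, 5)  # treat 01:00-04:59 logins as suspicious in this example

def review_access_log(path):
    """Flag odd-hour logins and first-time source IPs in a login log."""
    known_ips = defaultdict(set)   # IPs previously observed per user
    alerts = []
    with open(path, newline="") as f:
        reader = csv.DictReader(f, fieldnames=["timestamp", "user", "source_ip"])
        for row in reader:
            ts = datetime.fromisoformat(row["timestamp"])
            if ts.hour in ODD_HOURS:
                alerts.append(f"odd-hour login: {row['user']} at {ts}")
            seen = known_ips[row["user"]]
            if seen and row["source_ip"] not in seen:
                alerts.append(f"new IP for {row['user']}: {row['source_ip']}")
            seen.add(row["source_ip"])
    return alerts

for alert in review_access_log("logins.csv"):
    print(alert)
```

Real deployments would wire such checks into a SIEM, with thresholds tuned to the organization's own baseline.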

2. Infrastructure Vigilance

Patch and Update: Regularly update software and apply security patches. Vulnerabilities in outdated systems can be exploited.

Multi-Factor Authentication (MFA): Implement MFA wherever possible. It adds an extra layer of security by requiring additional verification beyond passwords.
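
One widespread form of MFA is the time-based one-time password (TOTP) defined in RFC 6238. The sketch below shows, using only the Python standard library, the core computation that an authenticator app and a server both perform; production systems should use a vetted library and secure secret storage rather than hand-rolled code.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238) over HMAC-SHA1."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the authenticator app share this secret once, at enrollment.
print(totp("JBSWY3DPEHPK3PXP"))   # example secret; the code changes every 30 seconds
```

Because the code is derived from a shared secret and the current time, a stolen password alone is no longer enough to log in.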

Data Encryption: Encrypt sensitive data both at rest and in transit. Even if exfiltrated, properly encrypted data is useless to attackers without the keys.
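
As a minimal sketch of encryption at rest, the example below uses Fernet, an authenticated symmetric-encryption recipe from the widely used Python cryptography package. The record contents are invented, and in practice the key would come from a KMS or HSM, never from source code.

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in production: fetch from a KMS/HSM
f = Fernet(key)

record = b"customer=jane@example.com; card=4242..."
token = f.encrypt(record)        # authenticated ciphertext, safe to store at rest
print(f.decrypt(token))          # only key holders can recover the plaintext
```

Fernet also authenticates the ciphertext, so any tampering is detected at decryption time.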

Incident Response Plan: Have a robust incident response plan in place. Know how to react swiftly if a breach occurs.

3. Customer Trust and Reputation

Transparency: If your organization is affected, be transparent with customers. Promptly inform them about the breach, steps taken, and precautions they should follow.

Reputation Management: A breach can tarnish your brand’s reputation. Communicate openly, take responsibility, and demonstrate commitment to security.

4. Legal and Regulatory Compliance

Data Protection Laws: Understand the legal obligations related to data breaches in your jurisdiction. Compliance is critical to avoid penalties.

Notification Requirements: Depending on the severity, you may need to notify affected individuals, authorities, or regulatory bodies.

5. Employee Training

Security Awareness: Train employees to recognize phishing attempts, use strong passwords, and follow security protocols.

Incident Reporting: Encourage employees to report any suspicious activity promptly.

What next?

The MOAB serves as a wake-up call for businesses worldwide. Cybersecurity isn’t a one-and-done task—it’s an ongoing commitment. By staying vigilant, implementing best practices, and prioritizing data protection, organizations can mitigate the impact of breaches and safeguard their customers’ trust.



Analysis: AI-Driven Online Financial Scams Surge

 

Cybersecurity experts are sounding the alarm about a surge in online financial scams, driven by artificial intelligence (AI), which they warn is becoming increasingly difficult to control. This warning coincides with an investigation by AAP FactCheck into cryptocurrency scams targeting the Pacific Islands.

AAP FactCheck's analysis of over 100 Facebook accounts purporting to be crypto traders reveals deceptive tactics such as fake profile images, altered bank notifications, and false affiliations with prestigious financial institutions.

The experts point out that Pacific Island nations, with their low levels of financial and media literacy and under-resourced law enforcement, are particularly vulnerable. However, they emphasize that this issue extends globally.

In 2022, Australians lost over $3 billion to scams, with a significant portion involving fraudulent investments. Ken Gamble, co-founder of IFW Global, notes that AI is amplifying the sophistication of scams, enabling faster dissemination across social media platforms and rendering them challenging to combat effectively.

Gamble highlights that scammers are leveraging AI to adapt to local languages, enabling them to target victims worldwide. While the Pacific Islands are a prime target due to their limited law enforcement capabilities, organized criminal groups from various countries, including Israel, China, and Nigeria, are behind many of these schemes.

Victims recount their experiences, such as a woman in Papua New Guinea who fell prey to a scam after her relative's Facebook account was hacked, losing more than 15,000 kina.

Dan Halpin from Cybertrace underscores the necessity of a coordinated global response involving law enforcement, international organizations like Interpol, public awareness campaigns, regulatory enhancements, and cross-border collaboration.

Halpin stresses the importance of improving cyber literacy levels in the region to mitigate these risks, while Gamble warns that unless the issue is prioritized, AI advancements will only make the situation worse.

Facebook's Two Decades: Four Transformative Impacts on the World

 

As Facebook celebrates its 20th anniversary, it's a moment to reflect on the profound impact the platform has had on society. From revolutionizing social media to sparking privacy debates and reshaping political landscapes, Facebook, now under the umbrella of Meta, has left an indelible mark on the digital world. Here are four key ways in which Facebook has transformed our lives:

1. Revolutionizing Social Media Landscape:
Before Facebook, platforms like MySpace existed, but Mark Zuckerberg's creation quickly outshone them after its 2004 launch. Within a year it amassed a million users, and it surpassed MySpace within four years, propelled by innovations like photo tagging. Facebook grew steadily, reaching over a billion monthly users by 2012 and 2.11 billion daily users by 2023. Though its popularity among youth has waned, Facebook remains the world's foremost social network, having reshaped online social interaction.

2. Monetization and Privacy Concerns:
Facebook demonstrated the value of user data, becoming a powerhouse in advertising alongside Google. However, its data handling has been contentious, facing fines for breaches like the Cambridge Analytica scandal. Despite generating over $40 billion in revenue in the last quarter of 2023, Meta, Facebook's parent company, has faced legal scrutiny and fines for mishandling personal data.

3. Politicization of the Internet:
Facebook's targeted advertising made it a pivotal tool in political campaigning worldwide, with significant spending observed, such as in the lead-up to the 2020 US presidential election. It also facilitated grassroots movements like the Arab Spring. However, its role in exacerbating human rights abuses, as seen in Myanmar, has drawn criticism.

4. Meta's Dominance:
Facebook's success enabled the company, now Meta, to acquire and amplify WhatsApp, Instagram, and Oculus. Meta boasts over three billion daily users across its platforms. When unable to acquire rivals, Meta has been accused of replicating their features, drawing regulatory challenges and accusations of market dominance. The company is now shifting focus to AI and the Metaverse, a departure from its Facebook-centric origins.

Looking ahead, Facebook's enduring popularity poses a challenge amidst rapid industry evolution and Meta's strategic shifts. As Meta ventures into the Metaverse and AI, the future of Facebook's dominance remains uncertain, despite its monumental impact over the past two decades.

Telegram Emerges as Hub for Cybercrime, Phishing Attacks as Cheap as $230

Cybersecurity experts are raising alarms as Telegram becomes a hotspot for cybercrime, fueling the rise of phishing attacks. The platform enables mass attacks at a shockingly low cost, a "democratization" of cyber threats that researchers say is reshaping the phishing landscape, courtesy of Telegram's burgeoning role in cybercrime activities. 

This messaging platform has swiftly transformed into a haven for threat actors, offering an efficient and cost-effective infrastructure for orchestrating large-scale phishing campaigns. Gone are the days when sophisticated cyber attacks required substantial resources. Now, malevolent actors can execute mass phishing endeavours for as little as $230, making cybercrime accessible to a wider pool of perpetrators. 

The affordability and accessibility of such tactics underscore the urgent need for heightened vigilance in the digital realm. Recent revelations regarding Telegram's involvement in cybercrime underscore a recurring issue with the platform's lenient content moderation policies. Experts emphasize that Telegram's history of lax moderation has fostered a breeding ground for various illicit activities, including the distribution of illegal content and cyber attacks. 

Criticism has been directed at Telegram in the past for its failure to effectively address issues such as misinformation, hate speech, and extremist content, highlighting concerns about user safety. With cyber threats evolving and the digital landscape growing more complex, the necessity for stringent moderation measures within platforms like Telegram becomes increasingly urgent. 

However, balancing user privacy with security poses a significant challenge, given the platform's encryption and privacy features. As discussions continue, Telegram and similar platforms must prioritize user safety and implement effective moderation strategies to mitigate risks effectively. 

"This messaging app has transformed into a bustling hub where seasoned cybercriminals and newcomers alike exchange illicit tools and insights creating a dark and well-oiled supply chain of tools and victims' data," Guardio Labs threat researchers Oleg Zaytsev and Nati Tal reported. 

They added that the platform offers "free samples, tutorials, kits, even hackers-for-hire – everything needed to construct a complete end-to-end malicious campaign." The company also described Telegram as a "scammers paradise" and a "breeding ground for modern phishing operations." 

In April 2023, Kaspersky revealed that phishers were using Telegram to teach and advertise malicious bots. One such bot, Telekopye (aka Classiscam), helps create fake web pages, emails, and texts for large-scale phishing scams. Guardio warns that Telegram offers easy access to phishing tools, some of them free, facilitating the creation of scam pages. 

These kits, along with compromised WordPress sites and backdoor mailers, enable scammers to send convincing emails from legitimate domains, bypassing spam filters. Researchers stress the dual responsibility of website owners to protect against exploitation for illicit activities. 

Telegram offers professionally crafted email templates ("letters") and bulk datasets ("leads") for targeted phishing campaigns. Leads are highly specific and sourced from cybercrime forums or fake survey sites. Stolen credentials are monetized through the sale of "logs" to other criminal groups, yielding high returns: social media accounts may sell for $1, while banking details can fetch hundreds of dollars. With minimal investment, anyone can launch a significant phishing operation.

Cybercriminals Exploit X Gold Badge, Selling Compromised Accounts on Dark Web

 A recent report highlights the illicit activities of cybercriminals exploiting the "Gold" verification badge on X (formerly Twitter). Following Elon Musk's acquisition of X in 2022, a paid verification system was introduced, allowing regular users to purchase blue ticks. Additionally, organizations could obtain the coveted gold check mark through a monthly subscription. 

Unfortunately, the report reveals that hackers are capitalizing on this feature by selling compromised accounts, complete with the gold verification badge, on dark web marketplaces and forums. CloudSEK, in its findings, notes a consistent pattern of advertisements promoting the sale of accounts with gold verification badges. 

These advertisements were not limited to dark web platforms but were also observed on popular communication channels such as Telegram. The exploitation of the gold verification badge poses a significant risk, as cybercriminals leverage these compromised accounts for phishing and scams, potentially deceiving unsuspecting users. 

This underscores the ongoing challenge of maintaining the security and integrity of online verification systems amid evolving cyber threats. CloudSEK found some of the ads simply by searching Google, Facebook, and Telegram for phrases like "Twitter Gold buy." Listings appeared on dark web marketplaces and even on Facebook, with prices for X Gold accounts depending on how popular the account was. 

CloudSEK's report said some advertisements named the companies whose accounts were for sale, with prices ranging from $1,200 to $2,000 depending on how well-known and well-followed the accounts were. The trade makes plain that hackers can earn real money selling compromised gold-badge accounts on the dark web. 

On the dark web, a CloudSEK source obtained a quote for 15 inactive X accounts, priced at $35 per account. The seller went a step further, offering a recurring deal of 15 accounts every week, a total of 720 accounts annually. 

It's noteworthy that the responsibility of activating these accounts with the coveted "gold" status lies with the purchaser, should they choose to do so. This information underscores the thriving market for inactive accounts and the potential volume of compromised assets available for illicit transactions.

Trading Tomorrow's Technology for Today's Privacy: The AI Conundrum in 2024

 


Artificial Intelligence (AI) continually absorbs and redistributes humanity's collective intelligence through machine learning algorithms. The technology is fast becoming all-pervasive, and it is increasingly clear that as it advances, so do questions about how it manages data, or fails to. As 2024 begins, certain developments will have long-lasting impacts. 

Google's recent integration of Bard, its chat-based AI tool, into a host of other Google apps and services is a good example of how generative AI is moving directly into consumer life through text, images, and voice. 

A super-charged version of Google Assistant, Bard hooks into everything from Gmail, Docs, and Drive to Google Maps, YouTube, Google Flights, and hotel search. Working in a conversational, natural-language mode, it can filter enormous amounts of online data while providing personalized responses to individual users. 

It can create shopping lists, summarize emails, and book trips: all the things a personal assistant would do, for those without one. At the same time, 2023 offered many examples of how not everything one sees or hears on the internet is real, whether in politics, movies, or even wars. 

Artificial intelligence continues to advance rapidly, and the advent of deepfakes has raised concern in India about their potential to influence electoral politics, especially during the Lok Sabha elections planned for next year. 

Deepfakes have risen sharply, causing widespread concern. In a deepfake, artificial intelligence is used to create video or audio depicting people doing or saying things they never did or said, spreading misinformation and damaging reputations. 

In the wake of the massive leap in public consciousness about the importance of generative AI that occurred in 2023, individuals and businesses will be putting artificial intelligence at the centre of even more decisions in the coming year. 

Artificial intelligence is no longer a new concept. In 2023, ChatGPT, MidJourney, Google Bard, corporate chatbots, and other artificial intelligence tools have taken the internet by storm. Their capabilities have been commended by many, while others have expressed concerns regarding plagiarism and the threat they pose to certain careers, including those related to content creation in the marketing industry. 

There is no denying that artificial intelligence has dramatically changed the privacy landscape. Whatever your feelings about AI, most people agree that these tools are trained on data collected from both their creators and their users. 

Transparency about how this data is handled is difficult to achieve, in part because the handling itself is difficult to understand. Users may also forget that conversations with an AI are not as private as text conversations with other humans, and they may inadvertently disclose sensitive data along the way. 

Under the GDPR, users are already protected from fully automated decisions that determine the course of their lives; for example, an AI cannot deny a bank loan based solely on its analysis of someone's financial situation. Legislation proposed in many parts of the world should bring more enforcement of AI regulation in 2024. 

AI developers will also likely continue to refine their tools into (hopefully) more privacy-conscious products as the laws governing them become more complex. Zamir anticipates that Bard Extensions will become even more personalized and integrated with the online shopping experience, from auto-filling checkout forms and tracking shipments to automatically comparing prices. 

All of that entails risk, he notes: unauthorized access to personal and financial information during automated form-filling, malicious interception of real-time tracking data, and even manipulated data in price comparisons. 

In 2024, the tapestry of artificial intelligence will undergo a major transformation, one that will stir debate on privacy and security. From Google's Bard to deepfake anxieties, users riding the wave of AI integration should keep vigilant minds and stay alert to the technology's implications. The future of AI must be woven with a moral compass, one that guides innovation and ensures AI responsibly enriches lives.

China Issues Warning About Theft of Military Geographic Data in Data Breaches

 

China issued a cautionary notice regarding the utilization of foreign geographic software due to the discovery of leaked information concerning its critical infrastructure and military. The Ministry of State Security, while refraining from assigning blame, asserted that the implicated software contained "backdoors" deliberately designed for unauthorized data access.

Prompted by this revelation, the Chinese government has called upon organizations to conduct thorough examinations for potential security vulnerabilities and incidents of data breaches. Through its official WeChat account, the government emphasized that foreign software had collected data encompassing state secrets, posing a substantial threat to China's national security.

The compromised data reportedly involves precise geographic information and three-dimensional geomorphological mapping crucial to key sectors such as transportation, energy, and the military, as reported by Reuters. China has prioritized enhancing the security of vital industries, particularly in response to geopolitical friction with Taiwan and ongoing reassurances from the United States to the island.

Suspicions surround China's involvement in recent cyberattacks targeting U.S. infrastructure, purportedly aimed at formulating a strategic playbook for potential conflicts between the two superpowers. In parallel, the United States has taken proactive measures to bolster its domestic semiconductor production for military applications. 

Through substantial investments, as outlined in the CHIPS Act, the U.S. aims to establish semiconductor factories across the country, deeming this move crucial for national security. The rationale behind this initiative lies in mitigating the risk of Chinese espionage associated with current semiconductor imports from East Asian production hubs.

Laptops with Windows Hello Fingerprint Authentication Vulnerable

 


Microsoft’s Windows Hello, which offers a passwordless method of logging into Windows-powered machines, may not be as secure as users think. A security evaluation of the fingerprint sensors that support Windows Hello fingerprint authentication in laptops uncovered multiple vulnerabilities that would allow a threat actor to bypass the authentication entirely. 

As Blackwing Intelligence reported in a blog post, Microsoft's Offensive Research and Security Engineering (MORSE) team had asked the firm to assess the security of the top three fingerprint sensors embedded in laptops. 

The research covered three devices: the Dell Inspiron 15, the Lenovo ThinkPad T14, and the Microsoft Surface Pro Type Cover with Fingerprint ID. On each, the researchers found exploitable vulnerabilities in the Windows Hello fingerprint authentication system.

The report also reveals that the fingerprint sensors used in the Lenovo ThinkPad T14, the Dell Inspiron 15, and the Surface Pro 8 and Surface Pro X tablets, made by Goodix, Synaptics, and ELAN, were vulnerable to man-in-the-middle attacks due to their underlying technology. 

In short, the premier sensors enabling fingerprint authentication through Windows Hello are not as secure as manufacturers would like, in part because many laptops ship with outdated sensor firmware. The weaknesses were discovered by Blackwing Intelligence, a firm that researches the security, offensive capabilities, and vulnerabilities of hardware and software products, in sensors from Goodix, Synaptics, and ELAN. 

The exploits require fingerprint authentication to already be set up on the targeted laptop. All three sensors are of a type known as "match on chip" (MoC), which builds all biometric management functions into the sensor's own integrated circuit.

The Match-on-Chip Vulnerability

As reported by Cyber Security News, the vulnerability stems from a flaw in the match-on-chip design itself. Microsoft moved away from storing fingerprint templates on the host machine in favour of MoC sensors, meaning the templates are stored on the sensor's own chip, which reduces the risk of fingerprints being exfiltrated from a compromised host. 

However, the approach has a downside: it does not authenticate the communication channel between the sensor and the host, so a malicious device can spoof a legitimate sensor and falsely report that an authorized user has successfully authenticated. 

This isn't the first time Windows Hello's biometric authentication has been defeated. In July 2021, Microsoft patched a medium-severity flaw (CVE-2021-34466, CVSS score: 6.1) that could allow an adversary to hijack the login process by spoofing the target's face. 

Whether Microsoft will be able to fix the newly reported flaws remains unclear. The 2021 flaw was demonstrated by a proof of concept that used an infrared photo of the victim to bypass Windows Hello's facial recognition; Microsoft subsequently fixed the issue.

Guarding the Gate: How to Thwart Initial Access Brokers' Intrusions

 


The term "access-as-a-service" (AaaS) refers to a business model in the cybercrime underground in which threat actors sell one-time methods of gaining access to networks for as little as one dollar. 

One group of criminals, known variously as access brokers, initial access brokers, or initial access traders (IABs), steals the credentials of enterprise users and sells them to other groups of attackers. Buyers can then use malware-as-a-service (MaaS) or ransomware-as-a-service (RaaS) offerings to quietly exfiltrate personal information from the target organization. 

Cybercrime-as-a-service (CaaS) is a growing trend. Over the last decade, ransomware attacks have evolved at both the technological and organizational levels as threat actors have sought to expand the scope and profitability of their operations. 

A pivotal factor behind the increase in the frequency and complexity of ransomware attacks is RaaS itself. Operating much like SaaS, RaaS involves building ransomware capabilities and selling or leasing them to buyers, lowering the barrier to entry for the extortion business. 

A number of operators now work in unison to orchestrate attacks, including users, affiliates, and initial access brokers acting as a cohesive team. According to the recent report "Rise of Initial Access Brokers", these intermediaries, the first to gain access to victims, play a key role at the top of the cyberattack kill-chain funnel. 

An initial access broker is a de facto intermediary whose business model is exactly what the name suggests: breach the networks of as many companies as possible, then sell that access to the highest bidders. Ransomware groups are frequently among the buyers. 

The ranks of IABs have grown recently, largely as a result of the pandemic and the ensuing migration to working from home. With employees logging in remotely, often over untrustworthy Wi-Fi networks, attackers have more opportunities to gain access to systems.

Cybercriminals increasingly scan at scale for vulnerabilities that give them access to remote systems, such as virtual private networks (VPNs), and sell that access on. As soon as the details of a vulnerability are made public, IABs deploy info stealers to gather keystrokes, session cookies, credentials, screenshots and video recordings, local information, browser history, bookmarks, and clipboard material from compromised devices. 

Once an info stealer is installed in an organization, a remote access Trojan (RAT) begins collecting raw log files. These logs are manually reviewed to identify usernames and passwords that can be sold or monetized on the dark web. IABs are after login credentials for VPNs, remote desktop protocol (RDP) servers, web applications, and email servers, which feed spear-phishing operations and potential business email compromise schemes. Occasionally, brokers have direct contact with system administrators or end users willing to sell access to their systems outright. 

In recent months, threat groups have been advertising on the dark web for administrators and end users willing to share their credentials for a few minutes in exchange for large amounts of cryptocurrency. 

Threat groups have also contacted employees at specific organizations, offering larger payments for access to their systems. Initial access brokers have taken the spotlight in the past year by demonstrating a significant ability to facilitate network intrusions by ransomware affiliates and operators. As the cybercrime underground becomes more active, IABs will only continue to gain ground. 

A Guide to Defending Against Access Brokers 


First, identify the attack surface and develop a plan to address it. To close security gaps, security teams must gain an outside-in perspective on the entire enterprise attack surface: empower them to map assets, visualize attack paths, and define plans to close the gaps.  
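
As a minimal illustration of that outside-in perspective (real programs use dedicated scanners and asset-management tooling), the sketch below checks which common TCP ports answer on a host. The address shown is a placeholder from the documentation range; scan only infrastructure you are authorized to test.

```python
import socket

COMMON_PORTS = {22: "ssh", 80: "http", 443: "https", 3389: "rdp"}

def exposed_services(host: str, timeout: float = 0.5):
    """Return the common ports on which `host` accepts TCP connections."""
    open_ports = []
    for port, name in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                open_ports.append((port, name))
    return open_ports

# Placeholder host from the TEST-NET documentation range (RFC 5737).
print(exposed_services("192.0.2.10"))
```

Anything that answers unexpectedly belongs on the remediation plan.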

Second, make identity protection a priority. Malware-free attacks, social engineering, and similar attempts to steal and reuse credentials are now commonplace, so strong identity protection is crucial. Employees also need to be taught about the risks of social media, not just how to use it. 

Avoid announcing department closures or IT service changes on social media, and remind employees not to share private information there. Train staff never to share credentials over support calls, emails, or support tickets. 

Finally, avoid publishing executive or IT contact information on the company website; it can facilitate impersonation attempts. 

The cloud needs a strong protection strategy of its own. Attacks on cloud infrastructure are increasing, with attackers employing a variety of tactics, techniques, and procedures to compromise business-critical cloud data and applications. 

The role of IABs in the realm of RaaS (Ransomware-as-a-Service) is continuously evolving. By understanding and keeping up with their shifting tactics, methods, and trends, organizations can better prepare themselves to effectively mitigate the risk and impact of ransomware attacks. As IABs continually remodel and refine their strategies, it becomes increasingly crucial for organizations to adopt and implement robust security measures. 

Strengthening the security of the supply chain, implementing multi-factor authentication across all systems and platforms, deploying advanced threat-hunting solutions to proactively detect and prevent attacks, and conducting regular and comprehensive training sessions for employees are key steps that organizations should take to effectively mitigate the growing threat posed by IABs.

YouTube Faces Scrutiny from EU Regulators over Its Ad Blocker Crackdown


Alexander Hanff, a privacy activist, has filed a complaint with the European Commission, claiming that YouTube’s new ad blocker detection violates European law. 

In response to Hanff’s claims, a German Pirate Party MEP asked the European Commission for a legal position on two key issues: whether this type of detection is "absolutely necessary to provide a service such as YouTube," and whether the "protection of information stored on the device (Article 5(3) ePR)" also covers information about whether the user's device hides or blocks certain page elements, or whether ad-blocking software is used on the device.

YouTube’s New Policy 

Recently, YouTube made it mandatory for users to stop using ad blockers; those who do not receive notifications that may prevent them from accessing any material on the platform. The rules are being rolled out in most countries, and YouTube claims they are intended to increase revenue for creators.

However, the company's reasoning may not hold up in Europe. Privacy experts have noted that YouTube's demand that free users allow advertisements runs up against EU legislation, and because the platform can now identify users who have installed ad blockers, YouTube has been accused of spying on its customers.

EU regulators have already warned tech giants like Google and Apple, and YouTube could be the next platform to face lengthy legal battles as it defends the methods used to detect ad blockers and compel free viewers to watch advertisements between videos. In the meantime, many users have uninstalled ad blockers from their browsers as a result of these developments.

According to experts, YouTube may be violating not only digital laws but also fundamental consumer rights. If the platform's anti-ad-blocker measures are found to be in breach, the company would likely have to change its position in the region, as Meta was recently forced to do with Instagram and Facebook.

Meta's policy now requires Facebook and Instagram users who do not want to see ads while browsing the platforms to sign up for a monthly subscription that removes advertisements.  

Canada Reports Targeting of Trudeau and Others by Chinese Bots

 

Canada has revealed the detection of a disinformation campaign believed to be linked to China, targeting numerous politicians, including Prime Minister Justin Trudeau. 

This campaign, termed "spamouflage," utilized a barrage of online posts to discredit Canadian Members of Parliament, according to the country's foreign ministry. The objective appeared to be suppressing criticism of Beijing. China has consistently denied involvement in Canadian affairs.

Global Affairs Canada disclosed that its Rapid Response Mechanism, designed to monitor state-sponsored disinformation from foreign sources, identified a "spamouflage" campaign associated with Beijing in August. 

This effort, which intensified in early September, employed a bot network to inundate the social media accounts of various Canadian politicians with comments in both English and French. These comments alleged that a critic of the Chinese Communist Party in Canada had accused the politicians of legal and ethical transgressions.

The campaign also featured the likely use of "deep fake" videos, digitally altered by artificial intelligence, targeting individuals. This is the latest in a series of allegations from Canadian intelligence agencies and officials asserting Beijing's interference in Canada's elections.

A "spamouflage" campaign employs a network of new or commandeered social media accounts to disseminate propaganda messages across platforms like Facebook, Twitter, Instagram, YouTube, Medium, Reddit, TikTok, and LinkedIn. The same accounts were also involved in spreading misinformation about the Hawaii wildfires in August, falsely attributing them to a covert US military "weather weapon."

In addition to the Prime Minister, the campaign targeted Conservative opposition leader Pierre Poilievre and several members of Mr. Trudeau's cabinet. Global Affairs Canada has notified the affected social media platforms, leading to the removal of a significant portion of the activity and network. The department has also informed the affected politicians, providing guidance on safeguarding themselves and reporting any suspected foreign interference.

Officials suggest that the bot network behind this campaign may be linked to a broader, well-known Spamouflage network previously acknowledged by tech giants like Meta and Microsoft. This network has also been examined by the Australian Strategic Policy Institute, a non-partisan think tank based in Canberra, which assisted Canada in its assessments.

Earlier in September, Canada launched an inquiry into foreign interference, tasked with investigating potential meddling in its elections by China, Russia, and other actors. The BBC has sought comment from the Chinese embassy in Canada.

Protecting Goa's Seniors from Increasing Cyber Threats

Cybercrimes have increased alarmingly in recent years in Goa, primarily targeting elderly people who are more vulnerable. The number of cybercrime incidents in the state has been continuously increasing, according to reports from Herald Goa, raising concerns among the public and law enforcement.

Data from the Goa Police Department indicates a concerning rise in cases of cybercrime against senior citizens. Scammers frequently use sophisticated techniques to prey on this group's lack of digital literacy. To acquire unlawful access to private data and financial assets, they employ deceptive schemes, phishing emails, and bogus websites.

In an interview with Herald Goa, Inspector General of Police, Jaspal Singh, emphasized the need for enhanced awareness and education regarding online safety for senior citizens. He stated, "It is crucial for our senior citizens to be aware of the potential threats they face online. Education is our strongest weapon against cybercrime."

To address this issue, the Goa Police Department has compiled a comprehensive set of cybercrime prevention tips, available on their official website. These guidelines provide valuable insights into safeguarding personal information, recognizing phishing attempts, and securing online transactions.

Additionally, experts advise seniors to be cautious when sharing personal information on social media platforms. Cybercriminals often exploit oversharing tendencies to gather sensitive data, which can be used for malicious purposes. Individuals must exercise discretion and limit the information they disclose online.

Furthermore, the importance of strong, unique passwords cannot be overstated. A study conducted by cybersecurity firm Norton revealed that 65% of individuals use the same password for multiple accounts, making them vulnerable to hacking. Senior citizens are encouraged to create complex passwords and consider using password manager tools to enhance security.
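
Generating such passwords is straightforward. The sketch below uses Python's standard secrets module, which is designed for cryptographically strong randomness, to produce a distinct password per account, much as a password manager does; the account names are placeholders.

```python
import secrets
import string

def strong_password(length: int = 16) -> str:
    """Build a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One distinct password per account, so a breach of one cannot unlock the rest.
for account in ("bank", "email", "social"):
    print(account, strong_password())
```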

The increasing number of cybercrimes in Goa that target senior folks highlights how urgent the problem is. It is essential to give priority to education, awareness, and preventative security measures to combat this expanding threat. Seniors can use the internet safely if they follow the advice for prevention and stay educated about potential risks. 

Revolutionizing the Future: How AI is Transforming Healthcare, Cybersecurity, and Communications


Healthcare

Artificial intelligence (AI) is transforming the healthcare industry by evaluating combinations of substances and procedures that can improve human health and help thwart pandemics. AI was crucial in helping medical personnel respond to the COVID-19 outbreak and in the development of COVID-19 vaccines. 

AI is also being used in medication discovery to find new treatments for diseases. For example, AI can analyze large amounts of data to identify patterns and relationships that would be difficult for humans to see. This can lead to the discovery of new drugs or treatments that can improve patient outcomes.

Cybersecurity

AI is also transforming the field of cybersecurity. With the increasing amount of data being generated and stored online, there is a growing need for advanced security measures to protect against cyber threats. 

AI can help by analyzing data to identify patterns and anomalies that may indicate a security breach. This can help organizations detect and respond to threats more quickly, reducing the potential damage caused by a cyber attack. AI can also be used to develop more advanced security measures, such as biometric authentication, that can provide an additional layer of protection against cyber threats.
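
At its simplest, the anomaly detection described above can be a statistical outlier check. The sketch below flags days whose failed-login counts sit several standard deviations from the mean; the counts are invented for the example, and real security products use far richer models.

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=3.0):
    """Return (day, count) pairs more than `threshold` std devs from the mean."""
    mu, sigma = mean(counts), stdev(counts)
    return [(day, c) for day, c in enumerate(counts)
            if sigma and abs(c - mu) / sigma > threshold]

# Invented daily failed-login counts; day 6 spikes suspiciously.
failed_logins = [41, 38, 45, 40, 39, 43, 400, 42, 37, 44, 40, 41]
print(flag_anomalies(failed_logins))   # -> [(6, 400)]
```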

Communications

Finally, AI is transforming the field of communications. With the rise of social media and other digital communication platforms, there is a growing need for advanced tools to help people communicate more effectively.

AI can help by providing language translation services, allowing people to communicate with others who speak different languages. AI can also be used to develop chatbots that can provide customer service or support, reducing the need for human agents. This can improve the efficiency of communication and reduce costs for organizations.

AI is transforming many industries, including healthcare, cybersecurity, and communications. By analyzing large amounts of data and identifying patterns and relationships, AI can help improve outcomes in these fields. As technology continues to advance, we can expect to see even more applications of AI in these and other industries.

Tech Giants Grapple with Russian Propaganda: EU's Call to Action

 


A recent study published by the European Commission found that after Elon Musk changed X's safety policies, Russian propaganda was able to reach a much wider audience. 

Social media platforms including Meta, YouTube, X (formerly Twitter), and TikTok have come under intense scrutiny since the EU report revealed last month that they failed to curb a massive Kremlin disinformation campaign surrounding Russia's invasion of Ukraine. 

The study, conducted by civil society groups and published last week by the European Commission, found that after Twitter's safety standards were dismantled, overtly Kremlin-backed accounts gained further influence in the early part of 2023. 

In the first two months of 2022, pro-Russian accounts garnered over 165 million subscribers across major platforms, and they have generated over 16 billion views since then. Few details have emerged on whether the EU will ban the content of Russian state media. According to the EU study, X's failure to deal with disinformation would have violated the bloc's new rules had they been in effect at the time. 

Musk has proven less cooperative than other social media companies in limiting propaganda on his platform, though they too are finding it hard. According to the study, Telegram and Meta, the company that owns Instagram and Facebook, have also made little headway in limiting Russian disinformation campaigns. 

Europe has taken a much more aggressive approach to fighting disinformation than the US. Under the Digital Services Act, which took effect last month, major tech companies are expected to take proactive measures to reduce risks related to children's safety, harassment, illegal content, and threats to democratic processes, or face significant fines. 

Tougher rules for the world's biggest online platforms were introduced earlier this month under the EU's Digital Services Act (DSA). Platforms identified as having at least 45 million monthly active users must take a more aggressive approach to policing content, including hate speech and disinformation.

Had the DSA been operational earlier, social media companies could have been fined for breaching their legal duties. The most damning aspect of Elon Musk's acquisition of X last October has been the rapid growth of hate and lies on the social network. 

After the new owner lifted mitigation measures on Kremlin-backed accounts and removed labels from related Russian state-affiliated accounts, engagement with those accounts grew by 36 percent between January and May 2023. Musk has argued that "all news" is propaganda to some degree. 

As a consequence, the Kremlin has stepped up its sophisticated information-warfare campaign across Europe, threatening free and fair elections across the continent as well as fundamental human rights. Platforms will need to act fast to comply with the DSA, which took effect on August 25th, before the 2024 European parliamentary elections arrive.

As recently outlined under the DSA, large social media companies and search engines in the EU with at least 45 million monthly active users are now required to adopt stricter content moderation policies, proactively clamping down on hate speech and disinformation, or face heavy fines.   

The Race to Train AI: Who's Leading the Pack in Social Media?

 


The rise of artificial intelligence over the last few years has been enabled by growing computing power and large, complex data sets. AI is practical and profitable across numerous applications thanks to machine learning, which gives a system a way to locate patterns within large sets of data. 

In modern times, AI plays a significant role in a wide range of computer systems, from iPhones that recognize and translate speech, to driverless cars that carry out complicated manoeuvres under their own power, to robots in factories and homes that automate tasks. 

AI has also become increasingly important in research, where it is used to process the vast amounts of data at the heart of fields like astronomy and genomics, to produce weather forecasts and climate models, and to interpret medical images for signs of disease. 

In a recent update to its privacy policy, X, the social media platform formerly known as Twitter, stated that it may train AI models on users' posts. According to Bloomberg earlier this week, the updated policy informs users that the company now collects various kinds of information about them, including biometric data, job history, and educational background. 

X appears to be collecting more data on users than its stated plans require. Another update to the policy specifies that the company will use the data it collects, along with other publicly available information, to train its machine learning and artificial intelligence models.

According to Elon Musk, the company's owner and former CEO, the plans stop short of using private data, such as the contents of direct messages, to train the models. The change should come as no surprise. 

According to Musk, his latest startup, xAI, was founded to help researchers and engineers build new products by utilizing data collected from the microblogging site. X charges companies $42,000 for access to its data via an API. 

It was reported in April that Microsoft had pulled X from its advertising platforms after the fee increases, and that X in response threatened to sue the company for allegedly using Twitter data illegally. Separately, in a tweet published late Thursday, Elon Musk called on AI research labs to halt work on systems that can compete with human intelligence.

In an open letter from the Future of Life Institute signed by Musk, Steve Wozniak, and 2020 presidential candidate Andrew Yang, AI labs were strongly urged to cease training models more powerful than GPT-4, the newest large language model developed by the U.S. startup OpenAI. 

Based in Cambridge, Massachusetts, the Future of Life Institute is a non-profit organization dedicated to the responsible and ethical development of artificial intelligence. Its founders include Max Tegmark, a cosmologist at MIT, and Jaan Tallinn, the co-founder of Skype. 

Musk and Google's AI lab DeepMind have previously pledged, as part of an earlier campaign by the organization, not to develop lethal autonomous weapons systems. In its appeal, the institute called on all AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." 

GPT-4, released earlier this month, is believed to be far more sophisticated than its predecessor, GPT-3. Researchers have been amazed by the ability of ChatGPT, the viral artificial intelligence chatbot, to produce human-like responses to users' questions. Within two months of its November 2022 launch, ChatGPT had accrued 100 million monthly active users, making it the fastest-growing consumer application in history. 

Models like these are trained on vast amounts of data pulled from the internet and can handle tasks ranging from writing poetry in the style of William Shakespeare to drafting legal opinions based on the facts of a case. However, ethicists have raised concerns that such systems could also be abused for crime and misinformation, opening the door to exploitation. 
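
As a concrete illustration of the kind of task described above, here is a minimal sketch that prompts a GPT-style model through OpenAI's Python SDK (the pre-1.0 ChatCompletion interface); the API key is a placeholder, and which models are available depends on the account.

```python
# Minimal sketch: asking a large language model for Shakespeare-style
# verse via OpenAI's pre-1.0 Python SDK. The API key is a placeholder.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "Write a four-line poem about the sea "
                       "in the style of William Shakespeare.",
        }
    ],
)
print(response["choices"][0]["message"]["content"])
```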

OpenAI did not immediately respond to CNBC's request for comment. Microsoft, the Redmond, Washington-based technology giant that backs OpenAI, has reportedly invested $10 billion in the company. 

Microsoft is also integrating OpenAI's GPT technology into its Bing search engine to make search more conversational and useful. Google followed with an announcement of its own line of consumer-facing conversational artificial intelligence (AI) products. 

Musk has said that artificial intelligence may represent one of the biggest threats to civilization. He co-founded OpenAI with Sam Altman in 2015, but left its board in 2018 and holds no stake in the company he helped found. He has repeatedly voiced the view that the organization has diverged from its original purpose.

Regulators, too, are racing to get a grip on AI tools as the technology advances rapidly. On Wednesday, the United Kingdom published a white paper on artificial intelligence, assigning the job of overseeing such tools in different sectors to the existing regulators already applying the law within those jurisdictions.

Why Sharing Boarding Pass Pictures on Social Media Is a Privacy Risk, Warns Expert

Even first-time flyers know that an airline boarding pass includes certain details about a traveler, such as their name, flight number, and seat assignment. What is less widely known is that these tickets, whether paper or electronic, carry more personal information than is readily apparent.

In particular, the barcode on a boarding pass can reveal information like a frequent flier number, contact details, or other identifying particulars. According to privacy researcher Bill Fitzgerald, the specifics contained within the barcode vary from one airline to another. A prudent approach, nevertheless, is to assume that the scannable code always contains personal information about the traveler and their itinerary.
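
To see why that assumption is prudent, it helps to know that most airline barcodes follow IATA's publicly documented Bar Coded Boarding Pass (BCBP) layout, which anyone with a scanner app can decode. The sketch below parses the mandatory fields of a sample BCBP string taken from the public documentation; real passes typically append conditional fields, such as a frequent flier number, after these.

```python
# Minimal sketch: decoding the mandatory header of an IATA BCBP
# (Bar Coded Boarding Pass) string. Field names and widths follow the
# publicly documented BCBP layout; real passes often append conditional
# fields (frequent flyer number, document data, etc.) after these.

def decode_bcbp(data: str) -> dict:
    fields = [
        ("format_code", 1),        # "M" for the standard format
        ("legs", 1),               # number of flight legs encoded
        ("passenger_name", 20),    # SURNAME/FIRSTNAME, space-padded
        ("eticket_indicator", 1),  # "E" for electronic ticket
        ("pnr", 7),                # booking reference / confirmation code
        ("from_airport", 3),
        ("to_airport", 3),
        ("carrier", 3),
        ("flight_number", 5),
        ("date_julian", 3),        # day of year the flight departs
        ("compartment", 1),
        ("seat", 4),
        ("checkin_sequence", 5),
        ("passenger_status", 1),
        ("conditional_size", 2),   # hex length of the optional fields that follow
    ]
    decoded, pos = {}, 0
    for name, width in fields:
        decoded[name] = data[pos:pos + width].strip()
        pos += width
    return decoded

# Example string from the public BCBP documentation:
sample = "M1DESMARAIS/LUC       EABC123 YULFRAAC 0834 326J001A0025 100"
print(decode_bcbp(sample))
```

Running this on the sample prints the passenger's name, booking reference, route, flight, seat, and check-in sequence number, which is precisely the information an attacker could harvest from a posted photo.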

Travelers should also be aware that these barcodes may contain driver's license and passport details, which are typically provided to the airline during check-in or at the airport. It is therefore crucial to handle paper boarding passes with care rather than casually tossing them in the trash. As Fitzgerald emphasizes, posting them on social media is an absolute no-go.

While these precautions may seem like standard data protection advice, even the most experienced travelers have made mistakes when safeguarding their boarding passes. A prime example is former Australian Prime Minister Tony Abbott, who inadvertently exposed his personal information by sharing an Instagram photo of his Qantas flight boarding pass in March 2020. Although the hacker who gained access to Abbott's details did not misuse the information, the potential for malicious intent is a looming concern.

Attackers can use this data, which may seem insignificant on its own, to launch further attacks against a traveler's online accounts and identity. Mark Scrano, an information security manager at cybersecurity firm Cobalt, warns that many airlines rely solely on data printed on the boarding pass, particularly the confirmation code and last name, to grant full access to a traveler's online account, a weakness that could be exploited to reach the personal data the airline stores.

These seemingly inconsequential details, used strategically, could cause serious trouble for travelers, including identity theft. Fitzgerald advises against sharing barcodes in any form to guard against this risk. Although paper boarding passes are becoming less common, they are still required in certain situations beyond the passenger's control, such as last-minute seat changes at the gate.

According to Fitzgerald, shredding a boarding pass is one of the safest methods for disposal.

While mobile boarding passes might appear to be a convenient way to safeguard personal data, Fitzgerald cautions that using electronic tickets within airline or loyalty apps is not as straightforward as it seems. These apps often raise privacy concerns of their own, frequently incorporating both first-party and third-party tracking, and some may disclose the user's location in near-real-time, further complicating the choice between paper and electronic passes.

For travelers who prefer using their smartphones instead of paper tickets, Fitzgerald recommends taking a screenshot of the QR code on the mobile boarding pass and saving it to their photos, eliminating the need for an additional app to access it.

In summary, it is advisable to treat any version of your airline ticket as you would a sensitive personal document, even if it appears that information such as flight numbers or barcodes holds little significance. As Fitzgerald notes, while the consequences of such information falling into the wrong hands may not be catastrophic, travelers should not make it easier for potential threats to exploit their data.

Norway Cracks Down on Meta with Fines for Facebook Privacy Breaches

Meta Platforms, the owner of Facebook, will be fined 1 million crowns ($98,500) a day by the Norwegian Data Protection Authority (Datatilsynet) from August 14 over privacy breaches. A penalty of this magnitude could have major implications across Europe, as it may set a precedent.

In a court filing, Meta Platforms has asked a Norwegian court to stay the fine imposed by the Nordic country's data regulator, which found that the company had breached users' privacy via Facebook and Instagram. 

Meta is seeking a temporary injunction to prevent the order from being enforced, and its petition will be heard over two days beginning August 22. The company's Norwegian lawyer referred media inquiries to Meta, which did not respond to a request for comment. 

Datatilsynet has ordered Meta Platforms to stop collecting personal data on users in Norway, including their physical locations, for behavioral advertising, i.e. advertising targeted at specific user groups. 

The practice is widespread among Big Tech companies. Tobias Judin, head of Datatilsynet's international section, said the company would be fined 1 million crowns per day from next Monday if it failed to comply with the order. 

Norway's data regulator confirmed that Meta Platforms has contested the fine in court. The fine runs until November 3, at which point Datatilsynet could make it permanent by referring its decision to the European Data Protection Board, which holds the authority to endorse the Norwegian regulator's ruling. 

If endorsed, the decision's impact would extend across the entire European region, though Datatilsynet has not yet taken that step. Meta, for its part, has announced that it intends to seek users' consent in the European Union before businesses can target them with advertising based on how they interact with services like Instagram and Facebook. 

Judin countered that Meta's proposed consent mechanism does not go far enough, and that the company should cease all data processing until a fully functional consent mechanism is in place; otherwise, he said, people's rights will be violated from Monday onwards, even if many users are unaware of it. 

A Meta spokesperson said the change of approach was prompted by evolving regulatory obligations in Europe, stemming from an order issued in January by the Irish Data Protection Commission under EU-wide data protection rules. 

The Irish authority, which acts as Meta's lead regulator within the European Union, has required the company to review the legal basis on which it targets users with advertising. Norway is not a member of the EU, but it is part of the European Single Market.

Data Leak from Far-Right Forum Poast Reveals Daycare Owner with Nazi Avatar

In May of this year, Poast, a far-right social media forum, suffered a data breach that leaked thousands of email addresses, usernames, and direct messages. Poast is a federated social network built on the same model as Mastodon, with a culture closer to sites such as 4chan and the notorious Kiwi Farms.

Initial Findings

Initial analysis of the data showed widespread praise of Nazi ideology as well as frequent use of racial and homophobic slurs. The Global Project Against Hate and Extremism reported 28,382 mentions of the N-word in users' direct messages alone.

Further Examination

Further examination of the data revealed employees of leading tech companies, academics, and military personnel among the site's users. One user, whose profile picture showed the Nazi sunwheel symbol, appears to work as a professor at a private Christian liberal arts school in North Carolina; they describe themselves as an “Unapologetic National Socialist” in their Poast bio.

Another user, who uses a Nazi “Totenkopf” skull for her profile picture, appears to run a daycare center in Georgia. The woman frequently reposted another user named “DustyShekel” who promoted Nazi-themed “Swastika Soap” bars on a separate antisemitic website.

What's next?

The data leak from Poast raises serious concerns about privacy and security, as well as the spread of hate speech and extremist ideologies. It serves as a reminder of the importance of protecting personal information and being vigilant about online security.