The European Union’s main police agency, Europol, has raised an alarm about how artificial intelligence (AI) is now being misused by criminal groups. According to their latest report, criminals are using AI to carry out serious crimes like drug dealing, human trafficking, online scams, money laundering, and cyberattacks.
This report is based on information gathered from police forces across all 27 European Union countries. Released every four years, it helps guide how the EU tackles organized crime. Europol’s chief, Catherine De Bolle, said cybercrime is growing more dangerous as criminals use advanced digital tools. She explained that AI is giving criminals more power, allowing them to launch precise and damaging attacks on people, companies, and even governments.
Some crimes, she noted, are not just about making money. In certain cases, these actions are also designed to cause unrest and weaken countries. The report explains that criminal groups are now working closely with some governments to secretly carry out harmful activities.
One growing concern is the rise in harmful online content, especially material involving children. AI is making it harder to track and identify those responsible because fake images and videos look very real. This is making the job of investigators much more challenging.
The report also highlights how criminals are now able to trick people using technology like voice imitation and deepfake videos. These tools allow scammers to pretend to be someone else, steal identities, and threaten people. Such methods make fraud, blackmail, and online theft harder to spot.
Another serious issue is that countries are now using criminal networks to launch cyberattacks against their rivals. Europol noted that many of these attacks are aimed at important services like hospitals or government departments. For example, a hospital in Poland was recently hit by a cyberattack that forced it to shut down for several hours. Officials said the use of AI made this attack more severe.
The report warns that new technology is speeding up illegal activities. Criminals can now carry out their plans faster, reach more people, and operate in more complex ways. Europol urged countries to act quickly to tackle this growing threat.
The European Commission is planning to introduce a new security policy soon. Magnus Brunner, the EU official in charge of internal affairs, said Europe needs to stay alert and improve safety measures. He also promised that Europol will get more staff and better resources in the coming years to fight these threats.
In the end, the report makes it clear that AI is making crime more dangerous and harder to stop. Stronger cooperation between countries and better cyber defenses will be necessary to protect people and maintain safety across Europe.
European authorities are raising concerns about DeepSeek, a thriving Chinese artificial intelligence (AI) company, due to its data practices. Italy, Ireland, Belgium, Netherlands, France regulators are examining the data collection methods of this firm, seeing whether they comply with the European General Data Protection Regulation or, if they also might consider that personal data is anyway transferred unlawfully to China.
Hence, due to these issues, the Italian authority has released a temporary restrainment to access the DeepSeek chatbot R1 for the time-being under which investigation will be conducted on what and how data get used, and how much has affected training in the AI model.
What Type of Data Does DeepSeek Actually Collect?
DeepSeek collects three main forms of information from the user:
1. Personal data such as names and emails.
2. Device-related data, including IP addresses.
3. Data from third parties, such as Apple or Google logins.
Moreover, there is an action that an app would be able to opt to take if at all that user was active elsewhere on those devices for "Community Security." Unlike many companies I have said where there are actual timelines or limits on data retention, it is stated that retention of data can happen indefinitely by DeepSeek. This can also include possible sharing with others-advertisers, analytics firms, governments, and copyright holders.
Noting that most AI companies like the case of OpenAI's ChatGPT and Anthropic's Claude have met such privacy issues, experts would observe that DeepSeek doesn't expressly provide users the rights to deletion or restrictions on its use of their data as mandated requirement in the GDPR.
The Collected Data Where it Goes
One of major problems of DeepSeek is that it saves user data in China. Supposedly, the company has secure security measures in place for the data set and observes local laws for data transfer, but from a legal perspective, there is no valid basis being presented by DeepSeek concerning the storing of data from its European users outside the EU.
According to the EDPB, privacy laws in China lay more importance on "stability of community than that of individual privacy," thus permitting broadly-reaching access to personal data for purposes such as national security or criminal investigations. Yet it is not clear whether that of foreign users will be treated differently than that of Chinese citizens.
Cybersecurity and Privacy Threats
As accentuated by cyber crime indices in 2024, China is one of the countries most vulnerable to cyberattacks. Cisco's latest report shows that DeepSeek's AI model does not have such strong security against hacking attempts. Other AI models can block at least some "jailbreak" cyberattacks, while DeepSeek turned out to be completely vulnerable to such assaults, which have made it softer for manipulation.
Should Users Worry?
According to experts, users ought to exercise caution when using DeepSeek and avoid sharing highly sensitive personal details. The uncertain policies of the company with respect to data protection, storage in China, and relatively weak security defenses could avail pretty heavy risks to users' privacy and as such warrant such caution.
European regulators will then determine whether DeepSeek will be allowed to conduct business in the EU as investigations continue. Until then, users should weigh risks against their possible exposure when interacting with the platform.
For tech enthusiasts and environmentalists in the European Union (EU), December 28, 2024, marked a major turning point as USB-C officially became the required standard for electronic gadgets.
The new policy mandates that phones, tablets, cameras, and other electronic devices marketed in the EU must have USB-C connectors. This move aims to minimise e-waste and make charging more convenient for customers. Even industry giants like Apple are required to adapt, signaling the end of proprietary charging standards in the region.
Apple’s Transition to USB-C
Apple has been slower than most Android manufacturers in adopting USB-C. The company introduced USB-C connectors with the iPhone 15 series in 2023, while older models, such as the iPhone 14 and the iPhone SE (3rd generation), continued to use the now-outdated Lightning connector.
To comply with the new EU regulations, Apple has discontinued the iPhone 14 and iPhone SE in the region, as these models include Lightning ports. While they remain available through third-party retailers until supplies run out, the regulation prohibits brands from directly selling non-USB-C devices in the EU. However, outside the EU, including in major markets like the United States, India, and China, these models are still available for purchase.
Looking Ahead: USB-C as the Future
Apple’s decision aligns with its broader strategy to phase out the Lightning connection entirely. The transition is expected to culminate in early 2025 with the release of a USB-C-equipped iPhone SE. This shift not only ensures compliance with EU regulations but also addresses consumer demands for a more streamlined charging experience.
The European Commission (EC) celebrated the implementation of this law with a playful yet impactful tweet, highlighting the benefits of a universal charging standard. “Today’s the day! USB-C is officially the common standard for electronic devices in the EU! It means: The same charger for all new phones, tablets & cameras; Harmonised fast-charging; Reduced e-waste; No more ‘Sorry, I don’t have that cable,’” the EC shared on X (formerly Twitter).
Environmental and Consumer Benefits
This law aims to alleviate the frustration of managing multiple chargers while addressing the growing environmental issues posed by e-waste. By standardising charging technology, the EU hopes to:
With the EU leading this shift, other regions may follow suit, further promoting sustainability and convenience in the tech industry.
One of the most notable changes in the NIS2 Directive is its expanded scope. While the original NIS Directive primarily targeted operators of essential services and digital service providers, NIS2 extends its reach to include a wider range of sectors. This includes public administration entities, the healthcare sector, and providers of digital infrastructure. By broadening the scope, the EU aims to ensure that more entities are covered under the directive, thereby enhancing the overall cybersecurity posture of the region.
The move brings more stringent security requirements for entities within its scope. Organizations are now required to implement robust cybersecurity measures, including risk management practices, incident response plans, and regular security assessments. These measures are designed to ensure that organizations are better prepared to prevent, detect, and respond to cyber threats.
Additionally, the directive emphasizes the importance of supply chain security. Organizations must now assess and manage the cybersecurity risks associated with their supply chains, ensuring that third-party vendors and partners adhere to the same high standards of security.
Another significant aspect of the NIS2 Directive is the enhanced incident reporting obligations. Under the new directive, organizations are required to report significant cybersecurity incidents to the relevant authorities within 24 hours of detection. This rapid reporting is crucial for enabling a swift response to cyber threats and minimizing the potential impact on critical infrastructure and services.
The directive also mandates that organizations provide detailed information about the incident, including the nature of the threat, the affected systems, and the measures taken to mitigate the impact. This level of transparency is intended to facilitate better coordination and information sharing among EU member states, ultimately strengthening the collective cybersecurity resilience of the region.
Organizations are required to designate a responsible person or team for overseeing cybersecurity measures and ensuring compliance with the directive. This includes conducting regular audits and assessments to verify the effectiveness of the implemented security measures.
Organizations that fail to meet the requirements of the NIS2 Directive may face significant fines and other sanctions. This serves as a strong incentive for organizations to prioritize cybersecurity and ensure that they are fully compliant with the directive.
It also offers numerous opportunities. By implementing the required cybersecurity measures, organizations can significantly enhance their security posture and reduce the risk of cyber incidents. This not only protects their own operations but also contributes to the overall security of the EU.
The directive also encourages greater collaboration and information sharing among EU member states. This collective approach to cybersecurity can lead to more effective threat detection and response, ultimately making the region more resilient to cyber threats.
Meta will reportedly amend its privacy policy beginning June 26 to allow its AI to be educated on your data.
The story spread on social media after Meta sent out emails and notifications to subscribers in the United Kingdom and the European Union informing them of the change and offering them the option to opt out of data collecting.
One UK-based user, Phillip Bloom, publicly published the message, informing everyone about the impending changes, which appear to also affect Instagram users.
These changes provide Meta permission to use your information and personal material from Meta-related services to train its AI. This implies that the social media giant will be able to use public Facebook posts, Instagram photographs and captions, and messages to Meta's AI chatbots to train its huge language model and other AI capabilities.
Meta states that private messages will not be included in the training data, and the business emphasizes in its emails and notifications that each user (in a protected region) has the "right to object" to the data being utilized.
Once implemented, the new policy will begin automatically extracting information from the affected types of material. To avoid Meta removing your content, you can opt out right now by going to this Facebook help website.
Keep in mind that this page will only load if you are in the European Union, the United Kingdom, or any country where Meta is required by law to provide an opt-out option.
If you live in the European Union, the United Kingdom, or another country with severe enough data protection regulations for Meta to provide an opt-out, go to the support page listed above, fill out the form, and submit it.
You'll need to select your nation and explain why you're opting out in a text box, and you'll have the option to offer more information below that. You should receive a response indicating whether Meta will honor your request to opt out of having your data utilized.
Prepare to fight—some users say that their requests are being denied, even though in countries governed by legislation such as the European Union's GDPR, Meta should be required to honor your request.
There are a few caveats to consider. While the opt-out protects you, it does not guarantee that your postings will be protected if they are shared by friends or family members who have not opted out of using data for AI training.
Make sure that any family members who use Facebook or other Meta services opt out, if possible. This move isn't surprising given that Meta has been gradually expanding its AI offerings on its platforms.
As a result, the utilization of user data, particularly among Meta services, was always expected. There is too much data for the corporation to pass up as training material for its numerous AI programs.
In a move set to reshape the scope of AI deployment, the European Union's AI Act, slated to come into effect in May or June, aims to impose stricter regulations on the development and use of generative AI technology. The Act, which categorises AI use cases based on associated risks, prohibits certain applications like biometric categorization systems and emotion recognition in workplaces due to concerns over manipulation of human behaviour. This legislation will compel companies, regardless of their location, to adopt a more responsible approach to AI development and deployment.
For businesses venturing into generative AI adoption, compliance with the EU AI Act will necessitate a thorough evaluation of use cases through a risk assessment lens. Existing AI deployments will require comprehensive audits to ensure adherence to regulatory standards and mitigate potential penalties. While the Act provides a transition period for compliance, organisations must gear up to meet the stipulated requirements by 2026.
This isn't the first time US companies have faced disruption from overseas tech regulations. Similar to the impact of the GDPR on data privacy practices, the EU AI Act is expected to influence global AI governance standards. By aligning with EU regulations, US tech leaders may find themselves better positioned to comply with emerging regulatory mandates worldwide.
Despite the parallels with GDPR, regulating AI presents unique challenges. The rollout of GDPR witnessed numerous compliance hurdles, indicating the complexity of enforcing such regulations. Additionally, concerns persist regarding the efficacy of fines in deterring non-compliance among large corporations. The EU's proposed fines for AI Act violations range from 7.5 million to 35 million euros, but effective enforcement will require the establishment of robust regulatory mechanisms.
Addressing the AI talent gap is crucial for successful implementation and enforcement of the Act. Both the EU and the US recognize the need for upskilling to attend to the complexities of AI governance. While US efforts have focused on executive orders and policy initiatives, the EU's proactive approach is poised to drive AI enforcement forward.
For CIOs preparing for the AI Act's enforcement, understanding the tools and use cases within their organisations is imperceptible. By conducting comprehensive inventories and risk assessments, businesses can identify areas of potential non-compliance and take corrective measures. It's essential to recognize that seemingly low-risk AI applications may still pose significant challenges, particularly regarding data privacy and transparency.
Companies like TransUnion are taking a nuanced approach to AI deployment, tailoring their strategies to specific use cases. While embracing AI's potential benefits, they exercise caution in deploying complex, less explainable technologies, especially in sensitive areas like credit assessment.
As the EU AI Act reshapes the regulatory landscape, CIOs must proactively adapt their AI strategies to ensure compliance and mitigate risks. By prioritising transparency, accountability, and ethical considerations, organisations can navigate the evolving regulatory environment while harnessing the transformative power of AI responsibly.
OpenAI's ChatGPT is facing renewed scrutiny in Italy as the country's data protection authority, Garante, asserts that the AI chatbot may be in violation of data protection rules. This follows a previous ban imposed by Garante due to alleged breaches of European Union (EU) privacy regulations. Although the ban was lifted after OpenAI addressed concerns, Garante has persisted in its investigations and now claims to have identified elements suggesting potential data privacy violations.
Garante, known for its proactive stance on AI platform compliance with EU data privacy regulations, had initially banned ChatGPT over alleged breaches of EU privacy rules. Despite the reinstatement after OpenAI's efforts to address user consent issues, fresh concerns have prompted Garante to escalate its scrutiny. OpenAI, however, maintains that its practices are aligned with EU privacy laws, emphasising its active efforts to minimise the use of personal data in training its systems.
"We assure that our practices align with GDPR and privacy laws, emphasising our commitment to safeguarding people's data and privacy," stated the company. "Our focus is on enabling our AI to understand the world without delving into private individuals' lives. Actively minimising personal data in training systems like ChatGPT, we also decline requests for private or sensitive information about individuals."
In the past, OpenAI confirmed fulfilling numerous conditions demanded by Garante to lift the ChatGPT ban. The watchdog had imposed the ban due to exposed user messages and payment information, along with ChatGPT lacking a system to verify users' ages, potentially leading to inappropriate responses for children. Additionally, questions were raised about the legal basis for OpenAI collecting extensive data to train ChatGPT's algorithms. Concerns were voiced regarding the system potentially generating false information about individuals.
OpenAI's assertion of compliance with GDPR and privacy laws, coupled with its active steps to minimise personal data, appears to be a key element in addressing the issues that led to the initial ban. The company's efforts to meet Garante's conditions signal a commitment to resolving concerns related to user data protection and the responsible use of AI technologies. As the investigation takes its stride, these assurances may play a crucial role in determining how OpenAI navigates the challenges posed by Garante's scrutiny into ChatGPT's data privacy practices.
In response to Garante's claims, OpenAI is gearing up to present its defence within a 30-day window provided by Garante. This period is crucial for OpenAI to clarify its data protection practices and demonstrate compliance with EU regulations. The backdrop to this investigation is the EU's General Data Protection Regulation (GDPR), introduced in 2018. Companies found in violation of data protection rules under the GDPR can face fines of up to 4% of their global turnover.
Garante's actions underscore the seriousness with which EU data protection authorities approach violations and their willingness to enforce penalties. This case involving ChatGPT reflects broader regulatory trends surrounding AI systems in the EU. In December, EU lawmakers and governments reached provisional terms for regulating AI systems like ChatGPT, emphasising comprehensive rules to govern AI technology with a focus on safeguarding data privacy and ensuring ethical practices.
OpenAI's cooperation and its ability to address concerns regarding personal data usage will play a pivotal role. The broader regulatory trends in the EU indicate a growing emphasis on establishing comprehensive guidelines for AI systems, addressing data protection and ethical considerations. For readers, understanding these developments determines the importance of compliance with data protection regulations and the ongoing efforts to establish clear guidelines for AI technologies in the EU.
Trento was the first local administration in Italy to be sanctioned by the GPDP watchdog for using data from AI tools. The city has been fined a sum of 50,000 euros (454,225). Trento has also been urged to take down the data gathered in the two European Union-sponsored projects.
The privacy watchdog, known to be one of the most proactive bodies deployed by the EU, for evaluating AI platform compliance with the bloc's data protection regulations temporarily outlawed ChatGPT, a well-known chatbot, in Italy. In 2021, the authority also reported about a facial recognition system tested under the Italian Interior Ministry, which did not meet the terms of privacy laws.
Concerns around personal data security and privacy rights have been brought up by the rapid advancements in AI across several businesses.
Following a thorough investigation of the Trento projects, the GPDP found “multiple violations of privacy regulations,” they noted in a statement, while also recognizing how the municipality acted in good faith.
Also, it mentioned that the data collected in the project needed to be sufficiently anonymous and that it was illicitly shared with third-party entities.
“The decision by the regulator highlights how the current legislation is totally insufficient to regulate the use of AI to analyse large amounts of data and improve city security,” it said in a statement.
Moreover, in its presidency of the Group of Seven (G7) major democracies, the government of Italy which is led by Prime Minister Giorgia Meloni has promised to highlight the AI revolution.
Legislators and governments in the European Union reached a temporary agreement in December to regulate ChatGPT and other AI systems, bringing the technology one step closer to regulations. One major source of contention concerns the application of AI to biometric surveillance.
At a meeting with European Commission officials on Thursday, the e-commerce behemoth was informed that the transaction would probably be denied, according to sources familiar with the situation. The political leadership of the EU must still formally approve a final decision, which is required by February 14. Meanwhile, Amazon declined to comment on the issue.
On Friday, iRobot’s shares, based in Bedford, Massachusetts, fell as much as 31% to $16.30, expanding the deal spread to over $35, the greatest since the merger was disclosed more than a year ago.
Regulators believe that other vacuum manufacturers may find it more difficult to compete as a result of iRobot's partnership with Amazon, particularly if Amazon decides to give Roomba advantages over competitors on its online store.
There will probably be opposition to the deal in the US as well. People with an insight into the situation claim that the Federal Trade Commission has been preparing a lawsuit to try and stop the transaction. According to persons speaking about an ongoing investigation, the three FTC commissioners have yet to vote on a challenge or hold a final meeting with Amazon to discuss the possible case.
The investigation over Amazon’s acquisition of iRobot was initiated in July 2023 by the European Commission (EC), the EU’s competition watchdog.
The EC has until February 14 to make a decision. The commission's 27 most powerful political members must agree to reject the proposal before the EC can make a final decision.
While iRobot was all set to expand its business in the market of smart home appliances, it witnessed a 40% dip in its shares a few hours after the first reporting of the EU’s intentions in the Wall Street Journal.
Given that the company has been struggling with declining revenues, the acquisition by Amazon was initially viewed as a boon.
In regards to the situation, Matt Schruers, president of tech lobbying group Computer and Communications Industry Association comments that "If the objective is to have more competition in the home robotics sector, this makes no sense[…]Blocking this deal may well leave consumers with fewer options, and regulators cannot sweep that fact under the rug."
A well-known ransomware organization operating in Ukraine has been successfully taken down by an international team under the direction of Europol, marking a major win against cybercrime. In this operation, the criminal group behind several high-profile attacks was the target of multiple raids.
The joint effort, which included law enforcement agencies from various countries, highlights the growing need for global cooperation in combating cyber threats. The dismantled group had been a prominent player in the world of ransomware, utilizing sophisticated techniques to extort individuals and organizations.
The operation comes at a crucial time, with Ukraine already facing challenges due to ongoing geopolitical tensions. Europol's involvement underscores the commitment of the international community to address cyber threats regardless of the geopolitical landscape.
One of the key events leading to the takedown was a series of coordinated raids across Ukraine. These actions, supported by Europol, aimed at disrupting the ransomware gang's infrastructure and apprehending key individuals involved in the criminal activities. The raids not only targeted the group's operational base but also sought to gather crucial evidence for further investigations.
Europol, in a statement, emphasized the significance of international collaboration in combating cybercrime. "This successful operation demonstrates the power of coordinated efforts in tackling transnational threats. Cybercriminals operate globally, and law enforcement must respond with a united front," stated the Europol representative.
The dismantled ransomware gang was reportedly using the Lockergoga ransomware variant, known for its sophisticated encryption methods and targeted attacks on high-profile victims. The group's activities had raised concerns globally, making its takedown a priority for law enforcement agencies.
In the aftermath of the operation, cybersecurity experts are optimistic about the potential impact on reducing ransomware threats. However, they also stress the importance of continued vigilance and collaboration to stay ahead of evolving cyber threats.
As the international community celebrates this successful operation, it serves as a reminder of the ongoing battle against cybercrime. The events leading to the dismantlement of the Ukrainian-based ransomware gang underscore the necessity for countries to pool their resources and expertise to protect individuals, businesses, and critical infrastructure from the ever-evolving landscape of cyber threats.
In response to the Hanff’s claims to the European Commission, German Pirate Party MEP asked for a legal position on two key issues: whether this type of detection is "absolutely necessary to provide a service such as YouTube" and whether "protection of information stored on the device (Article 5(3) ePR) also cover information as to whether the user's device hides or blocks certain page elements, or whether ad-blocking software is used on the device."
Recently, YouTube has made it mandatory for users to cease using ad blockers or else they will receive notifications that may potentially prevent them from accessing any material on the platform. The majority of nations will abide by the new regulations, which YouTube claims are intended to increase revenue for creators.
However, the reasons that the company provides are not likely to hold up in Europe. Experts in privacy have noted that YouTube's demand to allow advertisements to run for free users is against EU legislation. Since it can now identify users who have installed ad blockers to avoid seeing advertisements on the site, YouTube has really been accused of spying on its customers.
EU regulators has already warned tech giants like Google and Apple. Now, YouTube is the next platform that could face lengthy legal battles with the authorities as it attempts to defend the methods used to identify these blocks and compel free YouTube viewers to watch advertisements regularly in between videos. Google and other digital behemoths like Apple have previously faced the wrath of EU regulators. Due to YouTube's decision to show adverts for free users, many have uninstalled ad blockers from their browsers as a result of these developments.
According to experts, YouTube along with violating the digital laws, is also violating certain Fundamental consumer rights. Thus, it is likely that the company would have to change its position in the area if the platform is found to be in violation of the law with its anti-ad blocker regulations. This is something that Meta was recently forced to do with Instagram and Facebook.
The social networking giant has further decided on the policy that if its users (Facebook and Instagram) do not want to see ads while browsing the platforms, they will be required to sign up for its monthly subscriptions, where the platforms are free from advertisements.
According to Ivan Kolpakov, Meduza’s editor-in-chief based in Latvia, it was obvious that Europeans should be very concerned about Pegasus in light of the discoveries regarding the hacking of his colleague Galina Timichenko by an as-yet-unconfirmed EU country.
“If they can use it against an exiled journalist there are no guarantees they cannot use it against local journalists as well[…]Unfortunately, there are a lot of fans in Europe, and we are not only talking about Poland and Hungary, but Western European countries as well,” said Kolpakov.
Since last month, the European Commission has been working on guidelines for how governments could employ surveillance technologies like spyware in compliance with EU data privacy and national security rules since last month. Despite the fact that member states are responsible for their own national security, the Commission is considering adopting a position after learning that 14 EU governments had purchased the Pegasus technology from NSO Group.
Apparently, Timichenko was targeted by Pegasus in February 2023 when she was in Berlin for a private gathering of Russian media workers exile. The meeting's subject was the threats posed by the Russian government's categorization of independent Russian media outlets as foreign agents.
Taking into account the work that Timichenko deals with, Russia was first suspected; but, according to the digital rights organization Access Now, additional information suggests that one of the intelligence services of an EU member state — the exact one is yet unknown — is more likely to be to blame.
Allegedly, the motive behind the hack could be that numerous Baltic nations, to whom Russia has consistently posed a threat, are worried that a few FSB or GRU agents may have infiltrated their borders among expatriate dissidents and journalists.
“It may happen and probably it actually happens, but in my opinion, it does not justify the usage of that kind of brutal tool as Pegasus against a prominent independent journalist,” Kolpakov said.
Kolpakov believes that the revelations have left the exiled community feeling they are not safe in Europe. “This spyware has to be banned here in Europe. It really violates human rights,” he added.
The news was announced on Twitter, by EU’s internal market commissioner Thierry Breton. Breton later took to social media, warning Twitter that it cannot escape from the legal liability consequences that are incoming.
“Twitter leaves EU voluntary Code of Practice against disinformation. But obligations remain. You can run but you can’t hide[…]Beyond voluntary commitments, fighting disinformation will be legal obligation under #DSA as of August 25. Our teams will be ready for enforcement,” Breton wrote.
Herein, he referred to the legal duties that the platform must follow as a "very large online platform" (VLOP) under the EU's Digital Services Act (DSA).
European Union Disinformation Code
A number of tech firms, small and big, are apparently signed up to the EU’s disinformation code, along with Facebook’s parent company Meta, TikTok, Google, Microsoft and Twitch.
The code, which was introduced in June of last year, seeks to decrease profiteering from fake news and disinformation, increase transparency, and stop the spread of bots and fraudulent accounts. Companies who sign the code are free to decide on the what obligations they want to make, such as working with fact-checkers or monitoring political advertising.
Apparently, since Elon Musk took over Twitter, the company’s moderation has largely reduced, which as per the critics has resulted in a increase in spread of disinformation.
However, experts and former Twitter employees claim that the majority of these specialists left their positions or were fired. The social media company once had a dedicated team that tried to combat coordinated disinformation campaigns.
Last month, BBC exposed hundreds of Russian and Chinese state propaganda accounts lurking on Twitter. However, Musk claims that there is now “less misinformation rather than more,” since he took Twitter’s ownership.
Moreover, the EU, along with its voluntary code has brought in a Digital Service Act- a law which will coerce firms to put more efforts in tackling illegal contents online.
From August 25, platforms with more than 45 million active users per month in the EU—including Twitter—must abide by the DSA's legislative requirements.
Twitter will be required by legislation to implement measures to combat the spread of misinformation, provide users with a way to identify illegal content, and respond "expeditiously" to notifications.
In regards to the issue, AFP news agency on Friday quoted a statement of a EU Commission official saying “If (Elon Musk) doesn’t take the code seriously, then it’s better that he quits.”