In a move set to reshape the scope of AI deployment, the European Union's AI Act, slated to come into effect in May or June, aims to impose stricter regulations on the development and use of generative AI technology. The Act, which categorises AI use cases based on associated risks, prohibits certain applications like biometric categorization systems and emotion recognition in workplaces due to concerns over manipulation of human behaviour. This legislation will compel companies, regardless of their location, to adopt a more responsible approach to AI development and deployment.
For businesses venturing into generative AI adoption, compliance with the EU AI Act will necessitate a thorough evaluation of use cases through a risk assessment lens. Existing AI deployments will require comprehensive audits to ensure adherence to regulatory standards and mitigate potential penalties. While the Act provides a transition period for compliance, organisations must gear up to meet the stipulated requirements by 2026.
This isn't the first time US companies have faced disruption from overseas tech regulations. Similar to the impact of the GDPR on data privacy practices, the EU AI Act is expected to influence global AI governance standards. By aligning with EU regulations, US tech leaders may find themselves better positioned to comply with emerging regulatory mandates worldwide.
Despite the parallels with GDPR, regulating AI presents unique challenges. The rollout of GDPR witnessed numerous compliance hurdles, indicating the complexity of enforcing such regulations. Additionally, concerns persist regarding the efficacy of fines in deterring non-compliance among large corporations. The EU's proposed fines for AI Act violations range from 7.5 million to 35 million euros, but effective enforcement will require the establishment of robust regulatory mechanisms.
Addressing the AI talent gap is crucial for successful implementation and enforcement of the Act. Both the EU and the US recognize the need for upskilling to address the complexities of AI governance. While US efforts have focused on executive orders and policy initiatives, the EU's proactive approach is poised to drive AI enforcement forward.
For CIOs preparing for the AI Act's enforcement, understanding the tools and use cases within their organisations is imperative. By conducting comprehensive inventories and risk assessments, businesses can identify areas of potential non-compliance and take corrective measures. It's essential to recognize that seemingly low-risk AI applications may still pose significant challenges, particularly regarding data privacy and transparency.
Companies like TransUnion are taking a nuanced approach to AI deployment, tailoring their strategies to specific use cases. While embracing AI's potential benefits, they exercise caution in deploying complex, less explainable technologies, especially in sensitive areas like credit assessment.
As the EU AI Act reshapes the regulatory landscape, CIOs must proactively adapt their AI strategies to ensure compliance and mitigate risks. By prioritising transparency, accountability, and ethical considerations, organisations can navigate the evolving regulatory environment while harnessing the transformative power of AI responsibly.
OpenAI's ChatGPT is facing renewed scrutiny in Italy as the country's data protection authority, Garante, asserts that the AI chatbot may be in violation of data protection rules. This follows a previous ban imposed by Garante due to alleged breaches of European Union (EU) privacy regulations. Although the ban was lifted after OpenAI addressed concerns, Garante has persisted in its investigations and now claims to have identified elements suggesting potential data privacy violations.
Garante, known for its proactive stance on AI platform compliance with EU data privacy regulations, had initially banned ChatGPT over alleged breaches of EU privacy rules. Despite the reinstatement after OpenAI's efforts to address user consent issues, fresh concerns have prompted Garante to escalate its scrutiny. OpenAI, however, maintains that its practices are aligned with EU privacy laws, emphasising its active efforts to minimise the use of personal data in training its systems.
"We assure that our practices align with GDPR and privacy laws, emphasising our commitment to safeguarding people's data and privacy," stated the company. "Our focus is on enabling our AI to understand the world without delving into private individuals' lives. Actively minimising personal data in training systems like ChatGPT, we also decline requests for private or sensitive information about individuals."
In the past, OpenAI confirmed fulfilling numerous conditions demanded by Garante to lift the ChatGPT ban. The watchdog had imposed the ban due to exposed user messages and payment information, along with ChatGPT lacking a system to verify users' ages, potentially leading to inappropriate responses for children. Additionally, questions were raised about the legal basis for OpenAI collecting extensive data to train ChatGPT's algorithms. Concerns were voiced regarding the system potentially generating false information about individuals.
OpenAI's assertion of compliance with GDPR and privacy laws, coupled with its active steps to minimise personal data, appears to be a key element in addressing the issues that led to the initial ban. The company's efforts to meet Garante's conditions signal a commitment to resolving concerns related to user data protection and the responsible use of AI technologies. As the investigation proceeds, these assurances may play a crucial role in determining how OpenAI navigates the challenges posed by Garante's scrutiny into ChatGPT's data privacy practices.
In response to Garante's claims, OpenAI is gearing up to present its defence within a 30-day window provided by Garante. This period is crucial for OpenAI to clarify its data protection practices and demonstrate compliance with EU regulations. The backdrop to this investigation is the EU's General Data Protection Regulation (GDPR), introduced in 2018. Companies found in violation of data protection rules under the GDPR can face fines of up to 4% of their global turnover.
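To put that turnover-based cap in concrete terms, the percentage arm of the GDPR's fine ceiling is simple arithmetic. The sketch below uses a hypothetical turnover figure, not OpenAI's actual revenue:

```python
def gdpr_fine_ceiling(global_turnover_eur: float, pct_cap: float = 0.04) -> float:
    """Return the turnover-based ceiling on a GDPR fine.

    The GDPR caps fines for the most serious infringements at 4% of a
    company's worldwide annual turnover. This sketch computes only that
    percentage arm, for an illustrative turnover figure.
    """
    return global_turnover_eur * pct_cap


# Hypothetical company with EUR 2 billion in worldwide annual turnover:
ceiling = gdpr_fine_ceiling(2_000_000_000)
print(ceiling)  # 80 million euros
```

For the gravest categories of infringement the regulation actually applies the higher of a fixed amount (20 million euros) or the 4% figure, which is why the percentage arm dominates for large companies.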
Garante's actions underscore the seriousness with which EU data protection authorities approach violations and their willingness to enforce penalties. This case involving ChatGPT reflects broader regulatory trends surrounding AI systems in the EU. In December, EU lawmakers and governments reached provisional terms for regulating AI systems like ChatGPT, emphasising comprehensive rules to govern AI technology with a focus on safeguarding data privacy and ensuring ethical practices.
OpenAI's cooperation and its ability to address concerns regarding personal data usage will play a pivotal role. The broader regulatory trends in the EU indicate a growing emphasis on establishing comprehensive guidelines for AI systems, addressing data protection and ethical considerations. For readers, these developments underscore the importance of compliance with data protection regulations and the ongoing efforts to establish clear guidelines for AI technologies in the EU.
Trento was the first local administration in Italy to be sanctioned by the GPDP, the country's data protection watchdog, for its use of data from AI tools. The city has been fined 50,000 euros ($54,225) and ordered to delete the data gathered in two European Union-sponsored projects.
The privacy watchdog, known as one of the EU's most proactive authorities in evaluating AI platforms' compliance with the bloc's data protection regulations, temporarily banned the well-known chatbot ChatGPT in Italy. In 2021, the authority also found that a facial recognition system trialled by the Italian Interior Ministry failed to comply with privacy laws.
Rapid advances in AI across industries have raised concerns over personal data security and privacy rights.
Following a thorough investigation of the Trento projects, the GPDP said in a statement that it had found “multiple violations of privacy regulations,” while acknowledging that the municipality had acted in good faith.
It also found that the data collected in the projects was not sufficiently anonymised and had been illicitly shared with third-party entities.
“The decision by the regulator highlights how the current legislation is totally insufficient to regulate the use of AI to analyse large amounts of data and improve city security,” the municipality said in a statement.
Moreover, the Italian government, led by Prime Minister Giorgia Meloni, has pledged to put the AI revolution in the spotlight during its presidency of the Group of Seven (G7) major democracies.
Legislators and governments in the European Union reached a temporary agreement in December to regulate ChatGPT and other AI systems, bringing the technology one step closer to regulations. One major source of contention concerns the application of AI to biometric surveillance.
At a meeting with European Commission officials on Thursday, the e-commerce behemoth was informed that the transaction would probably be denied, according to sources familiar with the situation. A final decision, due by February 14, must still be formally approved by the EU's political leadership. Amazon, meanwhile, declined to comment on the issue.
On Friday, shares of Bedford, Massachusetts-based iRobot fell as much as 31% to $16.30, widening the deal spread to more than $35, the widest since the merger was announced more than a year ago.
Regulators believe that other vacuum manufacturers may find it more difficult to compete as a result of iRobot's partnership with Amazon, particularly if Amazon decides to give Roomba advantages over competitors on its online store.
The deal is likely to face opposition in the US as well. People familiar with the situation say the Federal Trade Commission has been preparing a lawsuit to try to block the transaction, although, according to those same people, the three FTC commissioners have yet to vote on a challenge or hold a final meeting with Amazon to discuss the possible case.
The investigation over Amazon’s acquisition of iRobot was initiated in July 2023 by the European Commission (EC), the EU’s competition watchdog.
The EC has until February 14 to make a decision. Before it can issue a formal veto, the rejection must be approved by the commission's college of 27 commissioners.
While iRobot had been preparing to expand in the smart home appliance market, its shares fell as much as 40% within hours of the Wall Street Journal first reporting the EU's intentions.
Given that the company has been struggling with declining revenues, the acquisition by Amazon was initially viewed as a boon.
Commenting on the situation, Matt Schruers, president of the tech lobbying group Computer and Communications Industry Association, said: "If the objective is to have more competition in the home robotics sector, this makes no sense[…]Blocking this deal may well leave consumers with fewer options, and regulators cannot sweep that fact under the rug."
A well-known ransomware organization operating in Ukraine has been successfully taken down by an international team under the direction of Europol, marking a major win against cybercrime. In this operation, the criminal group behind several high-profile attacks was the target of multiple raids.
The joint effort, which included law enforcement agencies from various countries, highlights the growing need for global cooperation in combating cyber threats. The dismantled group had been a prominent player in the world of ransomware, utilizing sophisticated techniques to extort individuals and organizations.
The operation comes at a crucial time, with Ukraine already facing challenges due to ongoing geopolitical tensions. Europol's involvement underscores the commitment of the international community to address cyber threats regardless of the geopolitical landscape.
One of the key events leading to the takedown was a series of coordinated raids across Ukraine. These actions, supported by Europol, aimed at disrupting the ransomware gang's infrastructure and apprehending key individuals involved in the criminal activities. The raids not only targeted the group's operational base but also sought to gather crucial evidence for further investigations.
Europol, in a statement, emphasized the significance of international collaboration in combating cybercrime. "This successful operation demonstrates the power of coordinated efforts in tackling transnational threats. Cybercriminals operate globally, and law enforcement must respond with a united front," stated the Europol representative.
The dismantled ransomware gang was reportedly using the LockerGoga ransomware variant, known for its sophisticated encryption methods and targeted attacks on high-profile victims. The group's activities had raised concerns globally, making its takedown a priority for law enforcement agencies.
In the aftermath of the operation, cybersecurity experts are optimistic about the potential impact on reducing ransomware threats. However, they also stress the importance of continued vigilance and collaboration to stay ahead of evolving cyber threats.
As the international community celebrates this successful operation, it serves as a reminder of the ongoing battle against cybercrime. The events leading to the dismantlement of the Ukrainian-based ransomware gang underscore the necessity for countries to pool their resources and expertise to protect individuals, businesses, and critical infrastructure from the ever-evolving landscape of cyber threats.
In response to Hanff’s complaint to the European Commission, a German Pirate Party MEP asked for a legal position on two key issues: whether this type of detection is "absolutely necessary to provide a service such as YouTube," and whether the "protection of information stored on the device (Article 5(3) ePR)" also covers information as to whether the user's device hides or blocks certain page elements, or whether ad-blocking software is used on the device.
YouTube recently began requiring users to stop using ad blockers; those who refuse receive notifications and may be prevented from accessing material on the platform. The new rules, which YouTube says are intended to increase revenue for creators, apply in the majority of countries.
However, the company's justifications are unlikely to hold up in Europe. Privacy experts have argued that YouTube's demand that free users allow advertisements to run violates EU legislation, and the platform has been accused of spying on its users because it can now identify those who have installed ad blockers to avoid seeing advertisements on the site.
EU regulators have already warned tech giants such as Google and Apple, and YouTube could be the next platform to face lengthy legal battles with the authorities as it defends the methods it uses to detect ad blockers and compel free-tier viewers to watch advertisements regularly between videos. In the wake of these developments, many users have simply uninstalled ad blockers from their browsers.
According to experts, YouTube may be violating not only digital laws but also certain fundamental consumer rights. If the platform's anti-ad-blocker rules are found to be unlawful, the company would likely have to change its approach in the region, something Meta was recently forced to do with Instagram and Facebook.
The social networking giant has since adopted a policy under which Facebook and Instagram users who do not want to see ads while browsing must sign up for a monthly subscription that makes the platforms ad-free.
According to Ivan Kolpakov, Meduza’s Latvia-based editor-in-chief, Europeans should be deeply concerned about Pegasus in light of the discovery that his colleague Galina Timchenko was hacked by an as-yet-unidentified EU country.
“If they can use it against an exiled journalist there are no guarantees they cannot use it against local journalists as well[…]Unfortunately, there are a lot of fans in Europe, and we are not only talking about Poland and Hungary, but Western European countries as well,” said Kolpakov.
Since last month, the European Commission has been working on guidelines for how governments could employ surveillance technologies like spyware in compliance with EU data privacy and national security rules. Although member states are responsible for their own national security, the Commission is considering adopting a position after learning that 14 EU governments had purchased the Pegasus technology from NSO Group.
Timchenko was reportedly targeted by Pegasus in February 2023 while in Berlin for a private gathering of exiled Russian media workers. The meeting's subject was the threats posed by the Russian government's categorisation of independent Russian media outlets as foreign agents.
Given the nature of Timchenko's work, Russia was initially suspected. However, according to the digital rights organization Access Now, additional information suggests that an intelligence service of an EU member state (which one is not yet known) is more likely to blame.
The alleged motive: several Baltic nations, which Russia has consistently threatened, worry that FSB or GRU agents may have infiltrated their borders among expatriate dissidents and journalists.
“It may happen and probably it actually happens, but in my opinion, it does not justify the usage of that kind of brutal tool as Pegasus against a prominent independent journalist,” Kolpakov said.
Kolpakov believes that the revelations have left the exiled community feeling they are not safe in Europe. “This spyware has to be banned here in Europe. It really violates human rights,” he added.
Twitter's withdrawal from the code was announced on Twitter by the EU's internal market commissioner, Thierry Breton, who then took to social media to warn the platform that it cannot escape the legal liability consequences to come.
“Twitter leaves EU voluntary Code of Practice against disinformation. But obligations remain. You can run but you can’t hide[…]Beyond voluntary commitments, fighting disinformation will be legal obligation under #DSA as of August 25. Our teams will be ready for enforcement,” Breton wrote.
Here he was referring to the legal duties the platform must meet as a "very large online platform" (VLOP) under the EU's Digital Services Act (DSA).
European Union Disinformation Code
A number of tech firms, large and small, have signed up to the EU’s disinformation code, including Facebook’s parent company Meta, TikTok, Google, Microsoft and Twitch.
The code, introduced in June of last year, seeks to reduce profiteering from fake news and disinformation, increase transparency, and stop the spread of bots and fraudulent accounts. Companies that sign the code are free to choose which commitments they make, such as working with fact-checkers or monitoring political advertising.
Since Elon Musk took over Twitter, the company’s moderation efforts have been sharply scaled back, which critics say has led to an increase in the spread of disinformation.
The social media company once had a dedicated team that worked to combat coordinated disinformation campaigns, but experts and former Twitter employees say the majority of those specialists have left their positions or been fired.
Last month, the BBC exposed hundreds of Russian and Chinese state propaganda accounts lurking on Twitter. Musk, however, claims there is now “less misinformation rather than more” since he took ownership of the platform.
Beyond the voluntary code, the EU has also brought in the Digital Services Act, a law that will compel firms to do more to tackle illegal content online.
From August 25, platforms with more than 45 million active users per month in the EU—including Twitter—must abide by the DSA's legislative requirements.
Twitter will be required by legislation to implement measures to combat the spread of misinformation, provide users with a way to identify illegal content, and respond "expeditiously" to notifications.
On the issue, the AFP news agency on Friday quoted an EU Commission official as saying, “If (Elon Musk) doesn’t take the code seriously, then it’s better that he quits.”
Criminal and risky online behaviour risks becoming normalised among a generation of young people across Europe. The findings come from European Union (EU)-funded research, which found that one in four 16- to 19-year-olds have trolled someone online and one in three have engaged in digital piracy.
The EU-funded study found evidence of widespread risky, delinquent, and criminal behaviour among the 16-19 age group in nine European countries, including the UK.
A survey of 8,000 young participants suggests that one in four has trolled someone on the web, one in eight has engaged in online harassment, one in ten has been involved in hacking or hate speech, one in five in sexting, and one in three in digital piracy. Four in ten reported watching pornography.
Risky and criminal online behaviour has become almost normalized in young people of Europe, said Julia Davidson, a co-author of the research and professor of criminology at the University of East London (UEL).
The research suggests that a large proportion of young people in the EU are engaging in some form of cybercrime, to the extent that committing low-level crimes online and online risk-taking have become almost normalised.
Davidson said the research findings point to greater male involvement in criminal or risky behaviour: around three-quarters of males admitted to some form of online risk-taking or cybercrime, compared with 65% of females.
The Guardian reports: "The survey asked young people about 20 types of behaviour online, including looking at pornographic material, posting revenge porn, making self-generated sexual images and posting hate speech. According to the survey findings, just under half of participants engaged in behaviour that could be considered criminal in most jurisdictions, such as hacking, non-consensual sharing of intimate images or “money muling” – where someone receives money from a third party and passes it on, in a practice linked to the proceeds of cybercrime."
The survey covered nine countries: the UK, France, Spain, Italy, Germany, the Netherlands, Sweden, Norway and Romania. Spain had the highest proportion of "cyberdeviancy" at 75%, followed by Romania, Germany and the Netherlands, with the UK lowest at 58%. Cyberdeviancy, as the survey defines it, is a mixture of criminal and non-criminal risky behaviours.
"The survey, conducted by a research agency with previously used sample groups, found that half of 16- to 19-year-olds spent four to seven hours a day online, with nearly four out of 10 spending more than eight hours a day online, primarily on phones. It found that the top five platforms among the group were YouTube, Instagram, WhatsApp, TikTok and Snapchat," Guardian said.
Facebook Inc is much better than it was in 2016 at tackling election interference but cannot guarantee the site will not be used to undermine European Parliament elections in May, Chief Executive Officer Mark Zuckerberg said on Tuesday.
Chastened since suspected Russian operatives used Facebook and other social media to influence an election that surprisingly brought Donald Trump to power in the United States, Facebook has said it has ploughed resources and staff into safeguarding the May 26 EU vote.
Zuckerberg said there had been many important elections since 2016 that were relatively clean, demonstrating the defenses Facebook has built up to protect their integrity.
“We’ve certainly made a lot of progress ... But no, I don’t think anyone can guarantee in a world where you have nation states that are trying to interfere in elections, there’s no single thing we can do and say okay we’ve now solved the issue,” Zuckerberg told Irish national broadcaster RTE in an interview.
“This is an ongoing arms race where we’re constantly building up our defenses and these sophisticated governments are also evolving their tactics.”
U.S. intelligence agencies concluded that Russia ran a disinformation and hacking operation to undermine the American democratic process and help Republican Trump’s 2016 campaign. Moscow denies interfering in the election.
Under pressure from EU regulators to do more to guard against foreign meddling in the bloc’s upcoming legislative election, Facebook toughened its rules on political advertising in Europe last week.
It also announced plans to ramp up efforts to fight misinformation ahead of the vote and will partner with German news agency DPA to boost its fact checking.