Privacy and security in financial transactions are becoming increasingly important in our digital age. The Consumer Finance Group's recent call for stricter privacy protections for the digital Euro is a proactive step to ensure that people's financial information is protected.
The Consumer Finance Group, a prominent advocate for consumer rights, has raised concerns about the potential privacy vulnerabilities associated with the digital Euro, which is currently under development by the European Central Bank. As reported by ThePrint and Reuters, the group emphasizes the need for robust privacy protections.
One of the key concerns highlighted by the Consumer Finance Group is the risk of digital Euro transactions being traced and monitored without adequate safeguards. This could lead to an invasion of financial privacy, as every transaction could potentially be linked to an individual, raising concerns about surveillance and misuse of data.
To address these concerns, the group has proposed several protective measures.
While these measures are important for safeguarding privacy, a balance must be struck between privacy and security: stringent privacy protections must still allow authorities to combat financial crimes such as money laundering and terrorism financing.
The European Central Bank and policymakers should carefully consider the recommendations put forth by the Consumer Finance Group. Finding the right balance between privacy and security in the digital Euro's design will be crucial in gaining public trust and ensuring the widespread adoption of this digital currency.
The need for stronger privacy protections in the digital Euro is a reminder of the importance of safeguarding personal financial data in our increasingly digitalized society. Regulators and financial institutions must prioritize addressing these privacy issues as digital currencies become more widely used.
After Italy became the first Western country to block advanced chatbot ChatGPT on Friday due to a lack of transparency in its data use, Europe is wondering who will follow. Several neighboring countries have already expressed interest in the decision.
“In the space of a few days, specialists from all over the world and a country, Italy, are trying to slow down the meteoric progression of this technology, which is as prodigious as it is worrying,” writes the French daily Le Parisien.
Many cities in France have already begun their own research "to assess the changes brought about by ChatGPT and the consequences of its use in the context of local action," reports Ouest-France.
The city of Montpellier wants to ban ChatGPT for municipal staff as a precaution, the paper reports: "The ChatGPT software should be banned within municipal teams considering that its use could be detrimental."
According to the BBC, the Irish data protection commission is following up with the Italian regulator to understand the basis for its action and "will coordinate with all E.U. (European Union) data protection authorities" in relation to the ban.
The Information Commissioner's Office, the United Kingdom's independent data regulator, also told the BBC that it would "support" AI developments while also "challenging non-compliance" with data protection laws.
ChatGPT is already restricted in several countries, including China, Iran, North Korea, and Russia. The E.U. is in the process of preparing the Artificial Intelligence Act, legislation “to define which AIs are likely to have societal consequences,” explains Le Parisien. “This future law should in particular make it possible to fight against the racist or misogynistic biases of generative artificial intelligence algorithms and software (such as ChatGPT).”
The Artificial Intelligence Act also proposes appointing one regulator in charge of artificial intelligence in each country.
The Italian situation
The Italian data protection authority explained that it was banning and investigating ChatGPT due to privacy concerns about the model, which was developed by the U.S. start-up OpenAI, backed by billions of dollars in investment from Microsoft.
The decision "with immediate effect" announced by the Italian National Authority for the Protection of Personal Data was taken because “the ChatGPT robot is not respecting the legislation on personal data and does not have a system to verify the age of minor users,” Le Point reported.
“The move by the agency, which is independent from the government, made Italy the first Western country to take action against a chatbot powered by artificial intelligence,” wrote Reuters.
The Italian data protection authority stated that it would not only block OpenAI's chatbot, but would also investigate whether it complied with the EU's General Data Protection Regulation.
It goes on to say that the new technology "exposes minors to completely inappropriate answers in comparison to their level of development and awareness."
According to the press release from the Italian Authority, on March 20, ChatGPT "suffered a loss of data ('data breach') concerning user conversations and information relating to the payment of subscribers to the paid service."
It also mentions the "lack of a legal basis justifying the mass collection and storage of personal data for the purpose of 'training' the algorithms underlying the platform's operation."
ChatGPT was released to the public in November and was quickly adopted by millions of users who were impressed by its ability to answer difficult questions clearly, mimic writing styles, write sonnets and papers, and even pass exams. ChatGPT can also be used without any technical knowledge to write computer code.
“Since its release last year, ChatGPT has set off a tech craze, prompting rivals to launch similar products and companies to integrate it or similar technologies into their apps and products,” writes Reuters.
"On Friday, OpenAI, which disabled ChatGPT for users in Italy in response to the agency's request, said it is actively working to reduce the use of personal data in training its AI systems like ChatGPT."
According to Euronews, the Italian watchdog has now asked OpenAI to "communicate within 20 days the measures undertaken" to remedy the situation, or face a fine of up to €20 million ($21.7 million) or 4% of annual worldwide turnover.
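The penalty structure quoted above follows the familiar GDPR pattern of taking the greater of a fixed cap or a share of worldwide turnover. As a rough sketch (the function name and the "whichever is higher" reading are assumptions based on the figures quoted, not on the watchdog's press release):

```python
# Hypothetical sketch of a GDPR-style maximum fine: the greater of a
# fixed amount or a percentage of annual worldwide turnover. The
# defaults mirror the figures quoted in the article.
def max_fine(annual_turnover_eur: float,
             fixed_cap_eur: float = 20_000_000,
             turnover_rate: float = 0.04) -> float:
    """Return the maximum applicable fine in euros."""
    return max(fixed_cap_eur, turnover_rate * annual_turnover_eur)

# A company with €1bn turnover: 4% (€40m) exceeds the €20m cap.
print(max_fine(1_000_000_000))  # 40000000.0
# A smaller company: the €20m fixed cap dominates.
print(max_fine(100_000_000))    # 20000000.0
```

For large firms the turnover percentage, not the headline €20 million figure, is the binding number.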
The announcement comes after Europol, the European police agency, warned on Monday that criminals were ready to use AI chatbots like ChatGPT to commit fraud and other cybercrimes. The rapidly evolving capabilities of chatbots, from phishing to misinformation and malware, are likely to be quickly exploited by those with malicious intent, Europol warned in a report.
Authorities in Europe, led by French and Dutch forces, disclosed how the EncroChat network had been compromised several months earlier. More than 100 million messages were siphoned out by malware the police covertly inserted into the encrypted system, exposing the inner workings of the criminal underworld. People openly discussed drug deals, coordinated kidnappings, premeditated killings, and worse.
The hack, considered one of the largest ever conducted by police, was an intelligence gold mine. It led to hundreds of arrests, home raids, and thousands of kilograms of drugs being seized. Two years on, thousands of EncroChat users are imprisoned across Europe, including in the UK, Germany, France, and the Netherlands.
The EncroChat phone network, which was established in 2016, had about 60,000 users when it was uncovered by law enforcement. According to EncroChat's company website, subscribers paid hundreds of dollars to use a customized Android phone that could "guarantee anonymity." The phone's security features included the ability to "panic wipe" everything on the device, live customer assistance, and encrypted conversations, notes, and phone calls using a version of the Signal protocol. Its GPS chip, microphone, and camera could all be removed.
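Protocols like Signal's rest on key agreement schemes in which two devices derive a shared secret without ever transmitting it, which is part of why attacking the devices, rather than the cipher, is the practical route in. A toy sketch of the underlying Diffie-Hellman exchange (deliberately tiny, insecure parameters; real systems use vetted elliptic curves and audited libraries, and Signal's actual protocol is considerably more elaborate):

```python
import secrets

# Publicly agreed parameters. These illustrative values are far too
# small for real use; production systems use ~2048-bit primes or
# elliptic curves.
p = 0xFFFFFFFB  # a small prime modulus (2**32 - 5)
g = 5           # generator

def make_keypair():
    """Generate a secret exponent and the matching public value."""
    private = secrets.randbelow(p - 2) + 1
    public = pow(g, private, p)  # g^private mod p
    return private, public

# Each phone generates its own keypair and shares only the public half.
a_priv, a_pub = make_keypair()
b_priv, b_pub = make_keypair()

# Both sides derive the same shared secret without ever sending it:
# (g^b)^a mod p == (g^a)^b mod p
shared_a = pow(b_pub, a_priv, p)
shared_b = pow(a_pub, b_priv, p)
assert shared_a == shared_b
```

Because the secret exponents never leave the devices, an eavesdropper on the network sees only the public values, which is why, as described below, the police went after the servers and handsets instead.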
Instead of decrypting the phone network, it appears that the police who hacked it compromised the EncroChat servers in Roubaix, France, and then distributed malware to devices.
According to court filings, 32,477 of EncroChat's 66,134 users in 122 countries were affected, though little is known about how the breach occurred or what kind of malware was deployed.
Documents obtained by Motherboard indicated that investigators could potentially collect all of the data on the phones. This information was shared among the law enforcement agencies participating in the inquiry. (EncroChat claimed to be a legitimate business before shutting down as a result of the breach.)
The hack is now facing several legal challenges across Europe.
While courts in many countries have ruled that the hacked EncroChat messages can be used as evidence, those decisions are now being disputed.
According to a report by Computer Weekly, many of the reported cases are complex: every country has a unique legal system with distinct guidelines about the kinds of evidence that may be used and the procedures prosecutors must follow. For instance, Germany places strict restrictions on the installation of malware on mobile devices, while the UK generally forbids the use of "intercepted" evidence in court.
The most well-known objection to date comes from German attorneys. One of the top courts on the continent, the Court of Justice of the European Union (CJEU), received an EncroChat appeal from a regional court in Berlin in October.
The judge asked the court to rule on 14 issues relating to the use of the data in criminal cases and how it was moved across Europe. The Berlin court emphasized how covert the investigation was. A machine translation of the court decision states that "technical specifics on the operation of the trojan software and the storage, assignment, and filtering of the data by the French authorities and Europol are not known," and that "French military secrecy inherently affects how the trojan software functions."
Despite the legal issues, police departments all around Europe have praised the EncroChat breach and how it has assisted in locking up criminals. In massive coordinated policing operations that began as soon as the hack was revealed in June 2020, hundreds of people were imprisoned. In the Netherlands, police found criminals using shipping containers as "torture chambers."
Since then, a steady stream of EncroChat cases has been brought before courts, and individuals have been imprisoned for some of the most severe crimes. The data from EncroChat has been a tremendous help to law enforcement; as a result of the police raids, organized crime arrests in Germany increased by 17%, and at least 2,800 persons have been detained in the UK.
Despite the police being lauded for capturing the criminals, according to the lawyers, this method of investigation is flawed and should not be presented as evidence in court. They emphasized how the secrecy of the hacking indicates that suspects have not received fair trials. A lawsuit from Germany was then sent to Europe's top court toward the end of 2022.
If successful, the appeal could jeopardize criminals' convictions across Europe. Additionally, analysts say the consequences could affect end-to-end encryption globally.
“Even bad people have rights in our jurisdictions because we are so proud of our rule of law […] We’re not defending criminals or defending crimes. We are defending the rights of accused people,” says German defense lawyer Lödden.
Evolv, a US-based company known for selling artificial intelligence (AI) weapons scanners, claims its technology detects all weapons.
However, the research firm IPVM says Evolv might fail to detect various types of knives and some bombs and bomb components.
Evolv says it has told venues about all "capabilities and limitations." Marion Oswald, from the Government Centre for Data Ethics and Innovation, said there should be more public knowledge, as well as independent evaluation, of these systems before they are launched in the UK.
That scrutiny matters because these technologies will replace tried-and-tested methods of metal detection and physical searches.
AI and machine learning allow scanners to build unique "signatures" of weapons that distinguish them from items like computers or keys, reducing the need for manual checks and preventing long queues.
"Metallic composition, shape, fragmentation - we have tens of thousands of these signatures, for all the weapons that are out there. All the guns, all the bombs, and all the large tactical knives," said Peter George, chief executive, in 2021. For years, independent security experts have raised concerns over some of Evolv's claims.
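The "signature" idea described above can be pictured as a library of feature vectors that a scanned item is matched against. The sketch below is purely illustrative: Evolv's actual features, models, and thresholds are proprietary, so every name and number here is a hypothetical stand-in:

```python
import math

# Hypothetical signature library: name -> (metal_content, elongation, density).
# The feature choices and values are invented for illustration only.
SIGNATURES = {
    "handgun":        (0.90, 0.40, 0.85),
    "tactical_knife": (0.80, 0.95, 0.60),
    "laptop":         (0.70, 0.30, 0.50),
    "keys":           (0.85, 0.20, 0.10),
}
THREATS = {"handgun", "tactical_knife"}

def classify(item, threshold=0.25):
    """Return the closest signature, or None if nothing is near enough."""
    best, best_dist = None, float("inf")
    for name, sig in SIGNATURES.items():
        dist = math.dist(item, sig)  # Euclidean distance in feature space
        if dist < best_dist:
            best, best_dist = name, dist
    return best if best_dist <= threshold else None

match = classify((0.88, 0.42, 0.80))  # near the "handgun" signature
is_threat = match in THREATS
```

The `threshold` parameter mirrors the sensitivity-level trade-off discussed below: set it too tight and real weapons slip through as "no match"; set it too loose and keys and laptops trigger alarms.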
In the past, the company did not allow IPVM to test its technology, called Evolv Express. Last year, however, Evolv allowed the National Center for Spectator Sports Safety and Security (NCS4) to evaluate it.
NCS4's public report, released last year, gave Evolv a score of 2.84 out of 3; most guns were detected 100% of the time.
However, NCS4 also produced a separate, private report, obtained by IPVM via a Freedom of Information request. That report found that Evolv's system identified large knives only 42% of the time, and that it failed to detect every knife at the sensitivity level used during the exercise.
Based on the data collected, the report recommended full transparency with potential customers. ASM Global, owner of the Manchester Arena, said its use of Evolv Express is the "first such deployment at the arena in Europe," and that it plans to introduce the technology at other venues.
In 2017, a man detonated a bomb at an Ariana Grande concert at the arena, killing 22 people and injuring hundreds more, many of them children.
Evolv did not dispute the private report's findings. The company says it believes in communicating sensitive security information, including the capabilities and limitations of its systems, so that security experts can make informed decisions for their specific venues.
NCS4's report deserves attention, as there is little public information about how Evolv's technology actually works.