According to Europol, Deepfakes are Used Frequently in Organized Crime

Deepfakes might also have a negative impact on the court system, according to the paper.

 

The Europol Innovation Lab recently released its inaugural report, titled "Facing reality? Law enforcement and the challenge of deepfakes", as part of its Observatory function. Drawing on extensive desk research and in-depth consultation with law enforcement specialists, the paper gives a detailed overview of the criminal use of deepfake technology, as well as the obstacles law enforcement faces in identifying and preventing the malicious use of deepfakes.

Deepfakes are audio and audio-visual content generated with artificial intelligence that "convincingly show individuals expressing or doing activities they never did, or build personalities which never existed in the first place". According to the study, deepfakes are being used for malicious purposes in three key areas: disinformation, non-consensual obscenity, and document fraud. As the technology continues to advance, the report predicts such attacks will become more realistic and more dangerous.

  1. Disinformation: Europol provided several examples of how deepfakes could be used to spread false information, with potentially disastrous results. In the geopolitical domain, for instance, a fabricated emergency alert could warn of an imminent attack. In February, just before the war between Russia and Ukraine erupted, the US accused the Kremlin of planning a disinformation operation to serve as a pretext for an invasion of Ukraine. The technique can also be used to target companies, for example by producing a video or audio deepfake that makes it appear as if a company's executive committed contentious or unlawful conduct. In one such case, criminals imitating the voice of an energy firm's chief executive defrauded the company of $243,000.
  2. Non-consensual obscenity: According to the analysis, Sensity found that non-consensual obscenity accounted for 96 percent of deepfake videos. This usually entails superimposing a victim's face onto another person's body in explicit footage, creating the impression that the victim is performing the act.
  3. Document fraud: While current fraud protection techniques are making passports more difficult to forge, the report stated that "synthetic media and digitally modified facial photos present a new way for document fraud." These technologies can, for example, mix or morph the face of the person who owns a passport with that of the person seeking to obtain one illegally, boosting the likelihood that the photo will pass screening, including automated checks.

Deepfakes might also harm the court system, according to the paper, by artificially manipulating or fabricating media to prove or disprove someone's guilt. In one recent child custody dispute, a mother edited an audio recording of her husband to persuade the court that he had been abusive towards her.

Europol stated that all law enforcement organizations must acquire new skills and tools to deal properly with these kinds of threats. These include manual detection strategies, such as looking for visual discrepancies, and automatic techniques, such as AI-based deepfake detection software being developed by companies like Facebook and McAfee; a rough sketch of how such frame-level screening might work appears below.
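To make the idea of automated detection more concrete, here is a minimal, purely illustrative sketch of frame-level screening in Python. The sampling interval, the 224x224 input size, the 0.5 threshold, and the assumption of a single-logit "real vs. synthetic" classifier are all placeholders for illustration; this is not the approach used by Facebook, McAfee, or any tool named in the Europol report.

```python
# Illustrative sketch only: sample frames from a video and score each one
# with a generic PyTorch binary classifier ("model" is assumed to be any
# network that outputs one logit, where higher means "likely synthetic").
import cv2
import torch


def screen_video(path, model, every_n_frames=30, threshold=0.5):
    """Return (frame_index, score) pairs for frames flagged as likely deepfakes."""
    capture = cv2.VideoCapture(path)
    flagged = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            # Resize and convert BGR -> RGB, then scale pixel values to [0, 1].
            rgb = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
            tensor = torch.from_numpy(rgb).permute(2, 0, 1).float().unsqueeze(0) / 255.0
            with torch.no_grad():
                # Sigmoid turns the classifier's logit into a probability-like score.
                score = torch.sigmoid(model(tensor)).item()
            if score > threshold:
                flagged.append((index, score))
        index += 1
    capture.release()
    return flagged

# Usage (hypothetical): load any face-forgery classifier trained offline, then
#   suspicious_frames = screen_video("clip.mp4", model)
```

Real detection pipelines add face detection and alignment, temporal consistency checks, and ensembles of models, but the core loop of sampling frames and scoring them is the same basic idea.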

In the months and years ahead, it is highly likely that malicious threat actors will employ deepfake technology to facilitate various criminal acts and to run disinformation campaigns aimed at influencing or distorting public opinion. Advances in machine learning and artificial intelligence will continue to improve the software used to create deepfakes.
Labels: Cyber Security, Europol, Fake Documents, Malicious actor, McAfee, Misinformation