Deepfake Phishing: A New Tool of Threat Actors

Attackers are using AI and machine learning to carry out deepfake phishing attacks.

 

Deepfake phishing is an emerging attack vector that should concern security teams, driven by increasingly capable AI audio and video generation and by the wealth of personal data users publish on social media.

How deepfake phishing targets victims

To carry out a deepfake phishing attack, hackers use AI and machine learning to analyze a variety of source material, including photos, videos, and audio clips, and from that data build a synthetic likeness of the target person.
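The data-gathering stage needs surprisingly little specialized tooling, which is part of why publicly posted footage is such useful raw material. As a rough illustration only (not a reconstruction of any real attacker's toolchain), the Python sketch below uses OpenCV to pull face crops out of a single public interview video; the file names and sampling rate are placeholder assumptions.

# Minimal sketch: how ordinary public footage becomes model training data.
# Assumes OpenCV (cv2) is installed; file names are placeholders.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture("public_interview.mp4")  # placeholder input video
frame_index, saved = 0, 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Sample every 10th frame to keep the dataset small but varied.
    if frame_index % 10 == 0:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
            cv2.imwrite(f"face_{saved:04d}.png", frame[y:y + h, x:x + w])
            saved += 1
    frame_index += 1

cap.release()
print(f"Extracted {saved} face crops from one public video")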

Until now, deepfakes have mostly been used for political and entertainment purposes, both benign and malicious. A prominent example of the phishing use case surfaced earlier this year, when hackers built a deepfake hologram of Patrick Hillmann, Binance's chief communications officer, from footage of his prior interviews and media appearances.

With this technique, threat actors can not only imitate a person's appearance and voice to socially engineer human targets, but also attempt to defeat biometric authentication systems.

Because of this, Avivah Litan, a Gartner analyst, advises businesses "not to rely on biometric certification for user authentication apps unless it incorporates effective deepfake detection that verifies user liveness and authenticity." 

Litan also points out that as the AI behind these attacks matures and produces more convincing audio and video, the attacks will likely become harder to detect.
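One widely described liveness cue is blinking. The Python sketch below shows the classic eye-aspect-ratio (EAR) heuristic for blink detection; it assumes six eye landmarks per frame are already available from some face-landmark detector (for example dlib or MediaPipe), which is not shown, and the threshold values are illustrative rather than any product's real settings.

# Minimal sketch of one liveness cue: blink detection via the eye aspect
# ratio (EAR). Assumes six (x, y) eye landmarks per frame from a separate
# face-landmark detector; that detector is not shown here.
from math import dist

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    # p1/p4 are the horizontal eye corners; p2, p3, p5, p6 are upper/lower
    # lid points. EAR drops sharply when the eye closes.
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_per_frame, closed_threshold=0.21, min_closed_frames=2):
    # A blink is a short run of frames where EAR stays below the threshold.
    blinks, closed_run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_threshold:
            closed_run += 1
        else:
            if closed_run >= min_closed_frames:
                blinks += 1
            closed_run = 0
    return blinks

# A live caller on a multi-minute video call should blink repeatedly;
# zero blinks over a long window is one (weak) red flag, not proof of a fake.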

The state of deepfake phishing in 2022 and beyond

Although deepfake technology is still in its infancy, adoption is growing quickly, and cybercriminals are already experimenting with it in attacks against unsuspecting users and organizations.

The World Economic Forum (WEF) estimates that the number of deepfake videos online is growing by 900% per year. In addition, VMware reports a 23% year-over-year rise in the share of defenders who saw malicious deepfakes used in an attack.

These attacks can be devastatingly effective. In 2021, for instance, fraudsters used AI voice cloning to impersonate the CEO of a large company and tricked the firm's bank manager into transferring $35 million to another account to complete a supposed "acquisition."

A similar incident took place in 2019, when a fraudster used AI voice cloning to impersonate the chief executive of a UK energy company's German parent firm, phoned the UK company's CEO, and asked for an urgent transfer of $243,000 to a Hungarian supplier.

Several analysts expect deepfake phishing to keep growing as threat actors produce fake content that is ever more sophisticated and convincing.

“As deepfake technology matures, [attacks using deepfakes] are expected to become more common and expand into newer scams,” stated KPMG analyst Akhilesh Tuteja. “They are increasingly becoming indistinguishable from reality. It was easy to tell deepfake videos two years ago, as they had a clunky [movement] quality and … the faked person never seemed to blink. But it’s becoming harder and harder to distinguish it now.” 

Prevention Tips 

Security professionals must regularly train end users on this and other emerging attack vectors. Surprisingly low-tech controls can stop a deepfake attack before it spreads.

Security awareness training can easily become tedious, but making it engaging, rewarding, and competitive helps the material stick. On the process side, large money transfers can require a pre-shared code from an authorized person, or approval from multiple people, before they go through; a sketch of such a check follows below.
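As a concrete illustration of those process controls, the Python sketch below gates a large transfer on a pre-shared code plus approvals from multiple distinct people. The threshold, code, and role names are made-up assumptions for illustration, not anyone's real policy.

# Minimal sketch of the out-of-band controls described above: a large
# transfer is released only with a pre-shared code and enough distinct
# approvers. All values and names here are illustrative assumptions.
import hmac, hashlib

LARGE_TRANSFER_THRESHOLD = 50_000        # illustrative policy threshold
REQUIRED_APPROVALS = 2                   # distinct humans, not one caller
PRESHARED_CODE_HASH = hashlib.sha256(b"rotate-me-offline").hexdigest()

def may_release_transfer(amount, presented_code, approvers):
    """Return True only if policy checks pass; never on a voice alone."""
    if amount < LARGE_TRANSFER_THRESHOLD:
        return True                      # small transfers follow the normal flow
    code_ok = hmac.compare_digest(
        hashlib.sha256(presented_code.encode()).hexdigest(),
        PRESHARED_CODE_HASH,
    )
    enough_people = len(set(approvers)) >= REQUIRED_APPROVALS
    return code_ok and enough_people

# A convincing "CEO" voice call alone cannot satisfy this check.
print(may_release_transfer(35_000_000, "guess", ["cfo"]))                             # False
print(may_release_transfer(35_000_000, "rotate-me-offline", ["cfo", "controller"]))   # True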

Employees are likely to find deepfake phishing awareness training engaging, entertaining, and educational. Share convincing deepfake videos and teach viewers to watch for telltale signs such as unblinking eyes, unusual lighting, and unnatural facial movements.
Labels: AI, Cyber Attacks, Deepfake Phishing, Phishing Attacks, User Security