Identity Hijack: The Next Generation of Identity Theft

Deepfake technology was employed by scammers last week to steal $25 million from a multinational Hong Kong company.

 

Deepfake technology, the synthetic recreation of a person's likeness, is not new. Picture the 2020 season finale of "The Mandalorian," in which a digitally de-aged Luke Skywalker was recreated from Mark Hamill's likeness. Artificial intelligence is not a novel concept either. 

However, ChatGPT's launch at the end of 2022 made AI technology widely available at low cost, sparking a race among nearly all of the mega-cap tech companies (as well as a number of startups) to develop more powerful models. 

For months, experts have been warning about the risks and active threats posed by the current expansion of AI, including rising socioeconomic inequality, economic upheaval, algorithmic discrimination, misinformation, political instability, and a new era of fraud. 

Over the last year, there have been numerous reports of AI-generated deepfake fraud in a variety of forms, including attempts to extort money from unsuspecting consumers, mock artists, and embarrass celebrities at scale. 

According to Hong Kong police, scammers using AI-generated deepfake technology stole roughly $25 million from a multinational firm's Hong Kong office last week.

A finance employee at the company transferred $25 million to designated bank accounts after joining a video conference call with what appeared to be several senior managers, including the company's chief financial officer. Apart from the worker, no one on the call was genuine. 

Despite his initial suspicions, the people on the call looked and sounded like colleagues he recognised.

"Scammers found publicly available video and audio of the impersonation targets on YouTube, then used deepfake technology to emulate their voices... to lure the victim into following their instructions," acting Senior Superintendent Baron Chan told reporters. 

Lou Steinberg, a deepfake AI expert and the founder of cyber research firm CTM Insights, believes that as AI grows stronger, the situation will worsen. 

"In 2024, AI will run for President, the Senate, the House and the Governor of several states. Not as a named candidate, but by pretending to be a real candidate," Steinberg stated. "We've gone from worrying about politicians lying to us to scammers lying about what politicians said... and backing up their lies with AI-generated fake 'proof.'" 

"It's 'identity hijacking,' the next generation of identity theft, in which your digital likeness is recreated and fraudulently misused," he added. 

The best defence against static deepfake images, he said, is to embed micro-fingerprint technology into camera apps, which would allow social media platforms to recognise when an image is genuine and when it has been tampered with. 
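The general idea behind such fingerprinting can be illustrated with a tamper-evident hash computed at capture time, which a platform can later recheck. This is a minimal sketch using a keyed HMAC as a stand-in; the key handling and function names are assumptions for illustration, not CTM's actual design:

```python
import hashlib
import hmac

# Hypothetical device key; a real scheme would use per-device keys
# provisioned to the camera app and a verification service.
SECRET_KEY = b"camera-app-device-key"

def fingerprint(image_bytes: bytes) -> str:
    """Compute a tamper-evident fingerprint for an image at capture time."""
    return hmac.new(SECRET_KEY, image_bytes, hashlib.sha256).hexdigest()

def is_untampered(image_bytes: bytes, recorded_fp: str) -> bool:
    """A platform recomputes the fingerprint; any edit to the bytes changes it."""
    return hmac.compare_digest(fingerprint(image_bytes), recorded_fp)

# Any modification after capture, however small, breaks the match.
original = b"\x89PNG...raw image bytes..."
fp = fingerprint(original)
```

In practice, the fingerprint would travel with the image as metadata, letting social platforms flag files whose content no longer matches what the camera recorded.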

When it comes to interactive deepfakes (phone calls and videos), Steinberg believes the simple solution is a code word agreed in advance between family members and friends. 

Companies like the Hong Kong firm should adopt rules for handling nonstandard payment requests that require codewords or confirmation via a separate channel, according to Steinberg. A video call cannot be trusted on its own; the executives involved should be called back directly and immediately.
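Steinberg's recommendation amounts to a simple approval policy: a large or nonstandard payment request is rejected unless it carries the agreed codeword and has been confirmed on a separate, trusted channel. A minimal sketch of such a rule (the field names and threshold are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    amount: float
    channel: str                  # channel the request arrived on, e.g. "video_call"
    codeword_ok: bool             # did the requester supply the agreed codeword?
    confirmed_out_of_band: bool   # confirmed via a separate call-back to a known number?

def approve(req: PaymentRequest, threshold: float = 10_000.0) -> bool:
    """Reject large or nonstandard requests unless independently verified."""
    if req.amount < threshold and req.channel not in ("video_call", "email"):
        return True  # routine request arriving via a standard channel
    # A video call alone is never sufficient proof of identity.
    return req.codeword_ok and req.confirmed_out_of_band
```

Under this rule, the Hong Kong transfer would have been blocked: the request arrived by video call, so it would need both the codeword and an independent call-back before any money moved.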

Labels: Cyber Crime, Deep Fakes, Identity Hijack, Identity Theft, Threat Landscape, User Security