
What are Deepfakes and How to Spot Them

 

As modern computers have become better at simulating reality, artificial intelligence (AI)-generated fraudulent videos that can easily deceive the average viewer have become commonplace.

For example, modern cinema relies heavily on computer-generated sets, scenery, people, and even visual effects. These digital locations and props have replaced physical ones, and the scenes are almost indistinguishable from reality. Deepfakes, one of the most recent trends in computer imagery, are created by programming AI to make one person look like another in a recorded video. 

What is a deepfake? 

Deepfakes resemble digital magic tricks. They use computers to create fraudulent videos or audio that appear and sound authentic. It's like filming a movie, but with real people appearing to do things they've never actually done. 

Deepfake technology relies on a complicated interaction of two fundamental algorithms: a generator and a discriminator. These algorithms collaborate within a framework called a generative adversarial network (GAN), which uses deep learning concepts to create and refine fake content. 

Generator algorithm: The generator's principal function is to create initial fake digital content, such as audio, photos, or videos. The generator's goal is to replicate the target person's appearance, voice, or feelings as closely as possible. 

Discriminator algorithm: The discriminator then examines the generator's content to determine if it appears genuine or fake. The feedback loop between the generator and discriminator is repeated several times, resulting in a continual cycle of improvement. 
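To make the generator/discriminator loop described above concrete, here is a minimal sketch of a GAN training step, assuming PyTorch and toy one-dimensional data rather than real video frames. The layer sizes, learning rates, and batch size are illustrative assumptions, not an actual deepfake pipeline.

```python
# Minimal GAN sketch: a generator turns noise into fake samples, and a
# discriminator tries to tell real samples from fakes. Toy data only.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM, BATCH = 16, 64, 32

generator = nn.Sequential(            # noise -> fake sample
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(        # sample -> probability it is real
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(BATCH, DATA_DIM)          # stand-in for real training data
    fake = generator(torch.randn(BATCH, LATENT_DIM))

    # 1) Discriminator learns to label real samples 1 and generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(BATCH, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Generator learns to make the discriminator label its fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Each pass through this loop is one round of the feedback cycle described above: the discriminator's verdicts push the generator toward more convincing fakes.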

Why do deepfakes cause concerns? 

Misinformation and disinformation: Deepfakes can be used to make convincing videos or audio recordings of people saying or doing things they never did. This creates a significant risk of spreading misleading information, damaging reputations and influencing public opinion.

Privacy invasion: Deepfake technology can violate innocent people's privacy by manipulating their images or voices for malicious purposes, resulting in harassment, blackmail, or even exploitation. 

Crime and fraud: Criminals can use deepfake technology to impersonate others in fraudulent schemes, making it challenging for authorities to detect and prosecute those responsible. 

Cybersecurity: As deepfake technology progresses, it may become more difficult to detect and prevent cyberattacks based on modified video or audio recordings. 

How to detect deepfakes 

Though recent advances in generative artificial intelligence (AI) have improved the quality of deepfakes, we can still spot telltale signs that distinguish a fake video from an original.

- Pay close attention to the start of the video. For example, many viewers overlooked the fact that the face at the beginning of the viral Rashmika Mandanna video was still Zara Patel's; the deepfake effect was not applied until the person boarded the lift.

- Pay close attention to the person's facial expressions throughout the video. In a deepfake, expressions tend to vary irregularly during speech or movement. 

- Look for lip-synchronisation issues. Deepfake videos usually show minor audio/visual sync glitches. Watch a viral video several times before deciding whether it is a deepfake (see the automated-check sketch after this list). 
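Checks like these can be partially automated. Below is a minimal sketch, assuming Python with OpenCV, of one crude heuristic related to the facial-consistency point: flag frames where the detected face region jumps abruptly between consecutive frames, a glitch that deepfakes sometimes exhibit. The input filename and thresholds are illustrative assumptions, and real deepfake detectors are far more sophisticated.

```python
# Crude heuristic: report frames where the detected face box moves or resizes
# abruptly between consecutive frames. Illustrative only, not a real detector.
import cv2

cap = cv2.VideoCapture("suspect_video.mp4")   # hypothetical input file
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

prev_box, suspicious_frames, frame_idx = None, [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        if prev_box is not None:
            px, py, pw, ph = prev_box
            # Arbitrary thresholds: large sudden change in position or size
            if abs(x - px) + abs(y - py) > 40 or abs(w - pw) > 30:
                suspicious_frames.append(frame_idx)
        prev_box = (x, y, w, h)
    frame_idx += 1
cap.release()

print("Frames with abrupt face changes:", suspicious_frames)
```

A run that reports many such frames is not proof of a deepfake, but it is a cue to look more closely at those moments in the video.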

In addition to such tools, government agencies and tech companies should collaborate on cross-platform detection tools that can flag deepfake videos and curb their spread.

Identity Hijack: The Next Generation of Identity Theft

 

Synthetic representations of people's likenesses, or "deepfake" technology, are not new. Picture the digitally de-aged Luke Skywalker that Mark Hamill played in the 2020 season finale of "The Mandalorian". Similarly, artificial intelligence is not a novel concept. 

However, ChatGPT's launch at the end of 2022 made AI technology widely available at low cost, which in turn sparked a race among almost all of the mega-cap tech companies (as well as a number of startups) to develop more powerful models. 

For months, several experts have been warning about the risks and active threats posed by the current expansion of AI, including rising socioeconomic inequality, economic upheaval, algorithmic discrimination, misinformation, political instability, and a new era of fraud. 

Over the last year, there have been numerous reports of AI-generated deepfake fraud in a variety of formats, including attempts to extort money from innocent consumers, ridicule artists, and embarrass celebrities on a large scale. 

According to Hong Kong police, as reported by the news agency AFP, scammers using AI-generated deepfake technology stole nearly $25 million from a multinational firm in Hong Kong last week.

A finance employee at the company moved $25 million into specific bank accounts after speaking with several senior managers, including the company's chief financial officer, via video conference call. Apart from the worker, no one on the call was genuine. 

Despite his initial suspicions, the people on the line appeared and sounded like coworkers he recognised.

"Scammers found publicly available video and audio of the impersonation targets on YouTube, then used deepfake technology to emulate their voices... to lure the victim into following their instructions," acting Senior Superintendent Baron Chan told reporters. 

Lou Steinberg, a deepfake AI expert and the founder of cyber research firm CTM Insights, believes that as AI grows stronger, the situation will worsen. 

"In 2024, AI will run for President, the Senate, the House and the Governor of several states. Not as a named candidate, but by pretending to be a real candidate," Steinberg stated. "We've gone from worrying about politicians lying to us to scammers lying about what politicians said .... and backing up their lies with AI-generated fake 'proof.'" 

"It's 'identity hijacking,' the next generation of identity theft, in which your digital likeness is recreated and fraudulently misused," he added. 

The best defence against static deepfake images, he said, is to embed micro-fingerprint technology into camera apps, which would allow social media platforms to recognise when an image is genuine and when it has been tampered with (see the sketch below). 
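As a rough illustration of that idea, the sketch below (Python standard library only) signs an image's bytes at capture time and lets a platform later verify that nothing has changed. This is a simplification of what Steinberg describes: a real micro-fingerprinting scheme would use robust perceptual fingerprints and public-key signatures rather than the shared secret assumed here.

```python
# Toy capture-time fingerprint: the camera app signs the image bytes, and a
# platform later checks the signature. Any change to the bytes breaks verification.
import hashlib
import hmac

CAMERA_KEY = b"device-secret-key"   # assumption: a key provisioned to the camera app

def fingerprint(image_bytes: bytes) -> str:
    """Signed digest attached to the image at capture time."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(CAMERA_KEY, digest, hashlib.sha256).hexdigest()

def is_untampered(image_bytes: bytes, claimed: str) -> bool:
    """Platform-side check: recompute the fingerprint and compare."""
    return hmac.compare_digest(fingerprint(image_bytes), claimed)

original = b"...raw image bytes..."
tag = fingerprint(original)
print(is_untampered(original, tag))             # True
print(is_untampered(original + b"edit", tag))   # False: content was altered
```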

When it comes to interactive deepfakes (phone calls and videos), Steinberg believes the simple solution is to create a code word that can be employed between family members and friends. 

According to Steinberg, companies such as the Hong Kong corporation should adopt rules for handling nonstandard payment requests that require codewords or confirmation via a different channel. A video call cannot be trusted on its own; the executives involved should be called back separately and immediately.
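One way to picture such a rule is the short Python sketch below; the codeword, threshold, and fields are illustrative assumptions rather than a description of any real company's process.

```python
# Sketch of an out-of-band approval rule: large payment requests need both a
# pre-agreed codeword and a confirmation obtained on a separate, trusted channel.
from dataclasses import dataclass

KNOWN_CODEWORD = "blue-heron"       # agreed in advance, never shared over video calls

@dataclass
class PaymentRequest:
    amount: float
    requested_by: str
    codeword_given: str
    confirmed_by_callback: bool     # set only after phoning the requester on a known number

def approve(req: PaymentRequest, threshold: float = 10_000.0) -> bool:
    if req.amount < threshold:
        return True                   # routine payments follow the normal process
    if req.codeword_given != KNOWN_CODEWORD:
        return False                  # wrong or missing codeword
    return req.confirmed_by_callback  # a video call alone is never enough

# The Hong Kong scenario: convincing video call, but no out-of-band confirmation.
print(approve(PaymentRequest(25_000_000, "CFO", "blue-heron", confirmed_by_callback=False)))  # False
```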

Impersonation Attack: Cybercriminals Impersonate AUC Head Using AI


In another shocking case, online fraudsters used AI technology to pose as Moussa Faki Mahamat, the chairman of the African Union Commission. This bold cybercrime exposed gaps in the African Union (AU) leadership's communication channels, as imposters successfully mimicked Faki's voice, held video conferences with European leaders, and even set up meetings under false pretences.

About the African Union Commission and its Leadership

The African Union Commission (AUC) is an executive and administrative body, functioning as the secretariat of the African Union (AU). It plays a crucial role in coordinating AU operations and communicating with foreign partners, much like the European Commission does inside the European Union. 

The chairperson of the AUC, Moussa Faki Mahamat, typically arranges formal meetings with global leaders through a “note verbale.” The AU leadership regularly schedules meetings with representatives of other nations or international organizations using these diplomatic notes.

However, these routine meetings have now been disrupted by AI-driven cybercrime. The cybercriminals apparently impersonated Mahamat successfully, conducting meetings under his guise. The imitation, which went so far as to mimic Faki's voice, alarmed both European leaders and the AUC.

About the Impersonation Attack

The cybercriminals also spoofed email addresses, posing as the AUC’s deputy chief of staff in order to set up phone conversations between Faki and foreign leaders. They even joined meetings with several European leaders, using deepfake video to pass as Faki.

After discovering the scheme, the AUC reported these incidents and confirmed that it communicates with foreign governments only through legitimate diplomatic channels, usually through their embassies in Addis Ababa, home of the AU headquarters.

The AUC has categorized these fraudulent emails as “phishing,” suggesting that the threat actors may have attempted to acquire digital identities for illicit access to critical data. 

Digitalization and Cybersecurity Challenges in Africa

While Africa’s digital economy has had a positive impact on its overall economy, and is projected to reach USD 180 billion by 2025, the rapid pace of digitalization has also contributed to an increase in cyber threats. According to estimates posted on the Investment Monitor website, cybercrime alone might cost the continent up to USD 4 billion annually.

While the AUC has expressed regret over the deepfake impersonation of Moussa Faki Mahamat, the organization did not provide any further details of the investigation or the identity of the criminals, nor did it mention any plans to strengthen its defences against deepfake attacks.

The incident further highlights the importance of more robust cybersecurity measures and careful monitoring of communication channels for governments and international organizations.