
Nude Deepfakes: What Is the EU Doing to Protect Women from Cyber Harassment?


The disturbing rise of sexual deepfakes

Deepfakes are a worry in digital development in this age of rapid technical advancement. This article delves deeply into the workings of deepfake technology, exposing both its potential dangers and its constantly changing capabilities.

The manipulation of images and videos to create sexually explicit content may soon become a criminal offense across all European Union member states.

The first EU directive on violence against women is set to move through its final approval stage by April 2024.

With the help of AI programs, these images are being modified to undress women without their consent. 

What changes will the new directive bring? And what happens if women living in the European Union are the target of manipulation, but the attacks originate in countries outside the European Union?

The victims: Women

If you are wondering how easy it is to create sexual deepfakes, some websites are just a click away and provide free-of-cost services.

According to the 2023 State of Deepfakes research, it takes around 25 minutes to create a sexual deepfake, and it costs nothing. All you need is a photo in which the face is clearly visible.

The research analyzed a sample of 95,000 deepfake videos between 2019 and 2023 and found a disturbing 550% increase over that period.

AI and deepfakes expert Henry Ajder says the people who use these stripping tools want to humiliate, defame, and traumatize their targets, and, in some cases, to derive sexual pleasure.

“And it's important to state that these synthetic stripping tools do not work on men. They are explicitly designed to target women. So it's a good example of a technology that is explicitly malicious. There's nothing neutral about that,” says Henry.

The makers of nude deepfakes search for their target's pictures "anywhere and everywhere" on the web. The pictures can be taken from your Instagram account, Facebook account, or even your WhatsApp display picture. 

Prevention: What to do?

When women discover nude deepfakes of themselves, the instinctive societal response is to urge them to protect their images.

But experts say the solution lies not in prevention, but in taking immediate action to have the content removed.

Amanda Manyame, Digital Law and Rights Advisor at Equality Now, says: “I'm seeing that trend, but it's like a natural trend any time something digital happens, where people say don't put images of you online. But if you push the idea further, it's like saying don't go out on the street because you can have an accident.” She adds: “Unfortunately, cybersecurity can't help you much here, because it's all a question of dismantling the dissemination network and removing that content altogether.”

Today, victims of nude deepfakes turn to laws such as the General Data Protection Regulation (GDPR), the European Union's privacy law, and national defamation laws to seek justice and removal of the content.

Victims of such an offense are advised to take screenshots or screen recordings of the deepfake content and submit them as evidence when reporting it to the police and to the social media platforms where the incident happened.

“There is also a platform called StopNCII, or Stop Non-Consensual Intimate Image Abuse, where you can report an image of yourself, and the website creates what is called a 'hash' of the content. AI is then used to automatically have the content taken down across multiple platforms,” says Manyame.
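The "hash" mechanism described above can be sketched in a few lines of code. The sketch below is purely illustrative, with hypothetical function names: it uses an ordinary cryptographic digest (SHA-256), whereas StopNCII and platform matching systems use perceptual hashes designed to survive resizing and re-encoding.

```python
import hashlib

# Illustrative hash-based image matching: the victim's image never leaves
# their device; only a fingerprint (hash) is shared with platforms.

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest standing in for a perceptual hash."""
    return hashlib.sha256(image_bytes).hexdigest()

blocklist = set()

def report_image(image_bytes: bytes) -> None:
    """Victim-side: register the hash of the abusive content."""
    blocklist.add(fingerprint(image_bytes))

def should_remove(uploaded_bytes: bytes) -> bool:
    """Platform-side: flag uploads whose hash matches a report."""
    return fingerprint(uploaded_bytes) in blocklist

report_image(b"reported-intimate-image-bytes")
print(should_remove(b"reported-intimate-image-bytes"))  # True: matches a report
print(should_remove(b"some-unrelated-photo"))           # False: no match
```

The key privacy property is that only the fingerprint, never the image itself, needs to be shared with platforms.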

Global Impact

The new directive aims to combat sexual violence against women: all 27 member states will follow the same set of laws criminalizing all forms of cyber-violence, including sexually motivated deepfakes.

Amanda Manyame says “The problem is that you might have a victim who is in Brussels. You've got the perpetrator who is in California, in the US, and you've got the server, which is holding the content in maybe, let's say, Ireland. So, it becomes a global problem because you are dealing with different countries.”

Addressing this concern, the MEP and co-author of the latest directive explain that “what needs to be done in parallel with the directive" is to increase cooperation with other countries, "because that's the only way we can also combat crime that does not see any boundaries."

"Unfortunately, AI technology is developing very fast, which means that our legislation must also keep up. So we will need to revise the directive soon. It is an important step for now, but we will have to keep up with the development of AI,” Evin Incir admits.

94% of Deepfake Adult Content Targets Celebrities

 

The rapid progress in computer technology has ushered in remarkable strides in the realm of simulating reality. A noteworthy development has been the emergence of artificial intelligence (AI)-generated media, specifically videos adept at convincingly emulating real individuals. This phenomenon has captured considerable interest, as these videos have the uncanny ability to convey the impression that a person is engaging in actions or uttering words they have never actually performed. 

According to a recent survey focusing on deepfake content, a staggering 98% of all online deepfake videos consist of adult content, and an overwhelming 99% of this convincingly realistic pornography features female subjects. India ranks sixth among the nations most vulnerable to deepfake adult content.

The 2023 State of Deepfakes report, published by Home Security Heroes, a United States-based organization, highlights that individuals in the public eye, especially those within the entertainment sector, are at a heightened risk. This is attributed to their prominence and the potential repercussions on their careers. Utilizing deepfake technology involves the fabrication of videos by either substituting faces or modifying voices. 

As indicated in the report, a staggering 94% of individuals portrayed in deepfake pornography videos have ties to the entertainment industry. This encompasses singers, actresses, social media influencers, models, and athletes. 

Why Is Deepfake Pornography on the Rise?

The survey emphasizes that the evolution of deepfakes has been significantly influenced by two key factors: the proliferation of Generative Adversarial Networks (GANs) and the growing accessibility of user-friendly tools, software, and communities. According to the same survey, a noteworthy statistic reveals that one out of every three deepfake tools grants users the ability to produce adult content through AI-powered manipulation techniques. 
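As background to the GANs mentioned in the survey (and not a reconstruction of any tool it describes), the adversarial training idea can be sketched in miniature: a tiny "generator" learns to mimic one-dimensional data while a logistic "discriminator" learns to tell real samples from generated ones. Real deepfake generators are deep neural networks; this toy keeps only the shape of the training loop.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-x))

# Toy adversarial loop on 1-D data (illustrative only).
# Real samples come from N(4, 1); the generator g(z) = w*z + b must learn
# to produce similar values from noise z ~ N(0, 1), while the
# discriminator d(x) = sigmoid(a*x + c) learns to tell them apart.

w, b = 1.0, 0.0   # generator parameters
a, c = 0.0, 0.0   # discriminator parameters
lr = 0.05

for _ in range(2000):
    z = random.gauss(0, 1)
    x_real = random.gauss(4, 1)
    x_fake = w * z + b

    # Discriminator ascends log d(real) + log(1 - d(fake)).
    d_real = sigmoid(a * x_real + c)
    d_fake = sigmoid(a * x_fake + c)
    a += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator ascends log d(fake) (the non-saturating GAN loss).
    d_fake = sigmoid(a * x_fake + c)
    w += lr * (1 - d_fake) * a * z
    b += lr * (1 - d_fake) * a

print(f"generator offset b = {b:.2f} (real data is centred at 4)")
```

The same push-and-pull, scaled up to image-generating networks, is what makes deepfake output steadily harder to distinguish from real footage.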

Additionally, it notes that 92.3% of these platforms provide free access, albeit with certain restrictions. In separate incidents, a Twitch streamer was discovered featured on a website notorious for producing AI-generated adult content featuring fellow streamers. Additionally, a cohort of students from New York created a video in which their principal was manipulated to utter racist comments and make threats against students. 

Meanwhile, in Venezuela, AI-generated videos are being employed to spread political propaganda. User-friendly deepfake tools have evidently seen widespread adoption, with around 42 different tools collectively amassing 10 million (1 crore) monthly searches. These tools serve a wide-ranging user base: 40 percent are available as downloadable applications and the remaining 60 percent are accessible through web-based platforms.

The survey brings to light that 20 percent of the participants have contemplated acquiring the skills to produce deepfake adult content, indicating a burgeoning interest in this technology. Furthermore, one in ten respondents confessed to having made attempts at creating deepfake adult content featuring public figures.

AI-Based Deepfake Fraud: Police Retrieve ₹40,000 Defrauded From Kozhikode Victim


Kozhikode, India: In a ‘deepfake’ incident, a man from Kozhikode, Kerala, lost ₹40,000 after falling prey to an AI-based scam.

According to police officials, the victim, identified as Radhakrishnan, received a WhatsApp video call from an unknown number. The swindlers had used artificial intelligence tools to generate a deepfake video of a former colleague the victim knew. To build trust, the caller cunningly mentioned the names of the victim’s former acquaintances.

During the conversation, the scammer urgently requested ₹40,000, citing a medical emergency involving a relative in the hospital. Trusting the caller, Radhakrishnan sent the money via Google Pay.

Later, the caller requested another ₹40,000, which raised Radhakrishnan's suspicions. He reached out to his colleague directly and discovered, to his disbelief, that the entire incident was an AI-based deepfake fraud and he had been robbed. Realizing this, he immediately filed a complaint with the Cyber Police.

The cyber cell promptly investigated the case and managed to contact the authorities of the bank where the money had been deposited. The account was traced back to a private bank located in Maharashtra.

According to the Kerala Police Cyber Cell, this was the first incident of AI-based deepfake fraud detected in the state.

Modus Operandi: The scammers collect images from social media profiles and use artificial intelligence to create misleading videos. These con artists combine AI technology with details like mutual friends' names to appear legitimate and con innocent individuals.

How to Protect Oneself From Deepfakes? 

Similar cases of deepfakes and other AI-based frauds have raised concerns among cybersecurity professionals.

Experts have cautioned against such scams and offered some safety advice. Because the majority of deepfakes have subpar resolution, people are urged to examine the video quality closely. On close inspection, deepfake videos often give themselves away by ending abruptly or looping back to the beginning after a set length of time. Before conducting any financial transaction, it is also a good idea to contact the person separately to confirm that they are really the one on the video call.

Generative AI Threatens Digital Identity Verification, Says Former CTO of Aadhaar

 

Srikanth Nadhamuni, who served as chief technology officer (CTO) of Aadhaar between 2009 and 2012, believes that the tremendous progress in artificial intelligence, particularly generative AI, poses a clear and present danger to digital identity verification. He co-founded the Bangalore-based incubator Khosla Labs with Vinod Khosla and serves as its CEO.

Deepfakes, synthetic media that convincingly mimic real human speech, behaviour, and appearance, seriously threaten the trust mechanisms that have been meticulously built into identification systems over time. In an increasingly likely future where AI-generated impersonations cause chaos and erode trust in the system, a "proof-of-personhood" verification capability, probably based on a person's biometrics, becomes paramount, the tech expert wrote in a LinkedIn post titled "The Future of Digital Identity Verification: In the era of AI Deep Fakes."
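The "proof-of-personhood" idea is, at its core, a challenge-response check bound to something only the live, enrolled person can produce. The sketch below is a generic illustration, not Aadhaar's actual protocol: a shared secret stands in for a biometric-bound credential, and all names are hypothetical.

```python
import hmac
import hashlib
import secrets

# Hypothetical challenge-response flow: the verifier issues a fresh nonce,
# and only someone holding the enrolled key (a stand-in here for a
# biometric-bound credential) can compute the correct response.

enrolled_key = secrets.token_bytes(32)   # established at enrolment

def issue_challenge() -> bytes:
    """Verifier-side: generate a fresh, unpredictable nonce."""
    return secrets.token_bytes(16)

def respond(key: bytes, challenge: bytes) -> bytes:
    """Claimant-side: answer the challenge with a keyed MAC."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes) -> bool:
    """Verifier-side: check the response against the enrolled key."""
    expected = hmac.new(enrolled_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
print(verify(challenge, respond(enrolled_key, challenge)))   # genuine claimant
print(verify(challenge, respond(b"attacker-guess!!", challenge)))  # impostor
```

Because the verifier issues a fresh nonce each time, a pre-recorded deepfake video cannot supply a valid response to a challenge it has never seen.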

Disinformation is now taking on a whole new dimension thanks to generative AI. Text-to-image models such as DALL-E 2, Midjourney, and Stable Diffusion can produce incredibly realistic visuals that are easily mistaken for the real thing, further blurring the line between truth and fiction.

Even though the Indian government has stated that it will not regulate artificial intelligence (AI), it has revealed that the impending Digital India Act (DIA) will include provisions to address disinformation produced by AI.

“We are not going to regulate AI but we will create guardrails. There will be no separate legislation but a part of DIA will address threats related to high-risk AI,” Union Minister Rajeev Chandrasekhar said. 

The draft hasn't been released yet, so it's unclear how it will address the challenge that generative AI poses to digital identity verification. 

How to identify deep fake images

According to Sandy Fliderman, president, CTO, and founder of Industry FinTech, it used to be simpler to spot fakes in recordings thanks to changes in skin tone, odd blinking patterns, or jerky motions. But technology has advanced so much that many of the traditional "tells" are no longer valid. Today, red flags are more likely to show up as irregularities in lighting and shading, which deepfake technology is still working to perfect.

Viewers can look for a number of indicators to distinguish authentic images from fraudulent ones, such as the following:

  • Irregularities in body parts and skin.
  • Shadowy areas around the eyes.
  • Unorthodox blinking patterns.
  • Unusual glare on spectacles.
  • Unrealistic mouth gestures.
  • Lip colour unnaturally different from the rest of the face.
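Some of these visual checks can be roughed out programmatically. The sketch below is a generic illustration, not taken from any tool mentioned here: it scores sharpness as the variance of a 4-neighbour Laplacian over a grayscale image stored as a plain list of rows, on the assumption that heavily smoothed or low-resolution regions (a common deepfake artifact) tend to score lower.

```python
def laplacian_variance(img):
    """Sharpness score: variance of the 4-neighbour Laplacian.

    `img` is a grayscale image as a list of rows of pixel values (0-255).
    Blurry images have weak edges, hence low Laplacian variance.
    """
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]
                   - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A crisp checkerboard has strong edges; a flat grey patch has none.
sharp = [[255 if (x + y) % 2 == 0 else 0 for x in range(8)] for y in range(8)]
blurry = [[128 for _ in range(8)] for _ in range(8)]

print(laplacian_variance(sharp) > laplacian_variance(blurry))  # True
```

In practice such a score is only one weak signal among many, which is why the human checks above still matter.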

Deepfake Apps Remain Popular in China Despite Crackdown

The Chinese government has recently launched a crackdown on deepfakes, a type of synthetic media that involves manipulating images, videos, or audio to make them appear to be real. Despite these efforts, however, several Chinese apps that utilize deepfakes are finding a large audience in the country.

Deepfakes have become a significant concern in recent years due to their potential to spread misinformation and manipulate public opinion. Cybersecurity experts warn that deepfakes can be used for nefarious purposes such as identity theft, fraud, and even political propaganda.

China's new laws aim to prevent the spread of false information and improve cybersecurity. However, the government's efforts have not deterred developers from creating deepfake apps that remain popular among Chinese consumers. These apps allow users to create deepfake videos and images with ease, making it possible to manipulate content in ways that were previously impossible.

While these apps are designed to be entertaining and harmless, they can pose significant risks to personal privacy and security. Deepfake technology is becoming increasingly advanced, and it is becoming more difficult to distinguish between real and fake content.

To protect themselves, users should exercise caution when using deepfake apps and be aware of the potential risks. They should also ensure that they are downloading apps from reputable sources and regularly update their devices to the latest software version to mitigate any vulnerabilities.

The proliferation of deepfake apps highlights the importance of continued vigilance in the fight against cyber threats. Governments, organizations, and individuals must work together to stay ahead of evolving threats and take steps to mitigate risks.

China's crackdown on deepfakes has not stopped the popularity of deepfake apps in the country. Cybersecurity experts warn that these apps can pose significant risks to personal privacy and security, and users should exercise caution when using them. The continued proliferation of deepfakes emphasizes the importance of continued vigilance in the fight against cyber threats.

Sophos: Hackers Avoid Deepfakes as Phishing Attacks Are Effective

According to a prominent security adviser at the UK-based infosec business Sophos, the fear of deepfake scams is entirely exaggerated.

According to John Shier, senior security adviser for cybersecurity company Sophos, hackers may never need to utilize deepfakes on a large scale because there are other, more effective ways to deceive individuals into giving up personal information and financial data.

As per Shier, phishing and other types of social engineering are much more effective than deepfakes, which are artificial intelligence-generated videos that imitate human speech.

What are deepfakes?

Scammers frequently use the technology to carry out identity theft. To demonstrate the risks of deepfakes, researchers in 2018 used the technology to impersonate former US President Barack Obama and disseminate a hoax online.

Shier believes that while deepfakes may be overkill for some kinds of fraud, romance scams—in which a scammer develops a close relationship with their victim online in order to persuade them to send them money—could make good use of the technology because videos will give an online identity inherent legitimacy.

Since deepfake technology has gotten simpler to access and apply, Eric Horvitz, chief science officer at Microsoft, outlines his opinion that in the near future, "we won't be able to tell if the person we're chatting to on a video conversation is real or an impostor."

The expert also anticipates that deepfakes will become more common in several sectors, including romance scams. Making convincing false personas requires a significant commitment of time, effort, and devotion, and adding a deepfake does not require much more work. Shier is concerned that deepfaked romance frauds might become an issue if AI makes it possible for the con artist to operate on a large scale.

Shier was hesitant to assign a date for industrialized deepfake bots, but he claimed that the required technology is becoming better and better every year.

The researcher noted that "AI experts make it sound like the huge effect is still a few years away." In the interim, he said, we will see well-funded criminal organizations carrying out the next level of compromise to deceive victims into paying money into their accounts.

Deepfakes have historically been used primarily to produce sexualized images and videos, almost always featuring women.

Nevertheless, a Binance PR executive recently disclosed that fraudsters had developed a deepfaked clone that took part in Zoom calls and attempted to conduct bitcoin scams.

Deepfakes may not necessarily be a scammer's primary tactic, but security researchers at Trend Micro said last month that they are frequently used to augment other techniques. The lifelike computerized images have recently appeared in online advertisements, phony business meetings, and job-seeker frauds. The worry is that anybody could become a victim, because the internet is so pervasive.

Binance Executive: Scammers Created a 'Deep Fake Hologram' of Him to Fool Victims

 

According to a Binance public relations executive, fraudsters created a deep-fake "AI hologram" of him to scam cryptocurrency projects via Zoom video calls.

Patrick Hillmann, chief communications officer at the crypto hypermart, stated he received messages from project teams thanking him for meeting with them virtually to discuss listing their digital assets on Binance over the past month. This raised some suspicions because Hillmann isn't involved in the exchange's listings and doesn't know the people messaging him.

"It turns out that a sophisticated hacking team used previous news interviews and TV appearances over the years to create a 'deep fake' of me," Hillmann said. "Other than the 15 pounds that I gained during COVID being noticeably absent, this deep fake was refined enough to fool several highly intelligent crypto community members."

Hillmann included a screenshot of a project manager asking him to confirm that he was, in fact, on a Zoom call in his write-up this week. The hologram is the latest example of cybercriminals impersonating Binance employees and executives on Twitter, LinkedIn, and other social media platforms.

Scams abound in the cryptocurrency world.
Despite pointing to a wealth of security experts and systems at Binance, Hillmann insisted that users must be the first line of defence against scammers. He wrote that they can do so by staying vigilant, using the Binance Verify tool, and reporting anything suspicious to Binance support.

“I was not prepared for the onslaught of cyberattacks, phishing attacks, and scams that regularly target the crypto community. Now I understand why Binance goes to the lengths it does,” he added.

The only proof Hillmann provided was a screenshot of a chat with someone asking him to confirm a Zoom call they had previously had. Hillmann responds: “That was not me,” before the unidentified person posts a link to somebody’s LinkedIn profile, telling Hillmann: “This person sent me a Zoom link, then your hologram was in the Zoom, please report the scam.”

The fight against deepfakes
Deepfakes are becoming more common in the age of misinformation and artificial intelligence, as technological advancements make convincing digital impersonations of people online more viable.

They are sometimes highly realistic fabrications that have sparked global outrage, particularly when used in a political context. A deepfake video of Ukrainian President Volodymyr Zelenskyy was posted online in March of this year, with the digital impersonation of the leader telling citizens to surrender to Russia.

On Twitter, one version of the deepfake was viewed over 120,000 times. In its fight against disinformation, the European Union has targeted deepfakes, recently requiring tech companies such as Google, Facebook, and Twitter to take countermeasures or face heavy fines.