Deepfakes: A Rising Threat to Cybersecurity and Society

The late NBA star Kobe Bryant appeared in the music video for Kendrick Lamar's song "The Heart Part 5", stunning audiences. Deepfake technology was used in the video to pay tribute to the late legend.

Deepfakes are images and videos that have been altered, or generated outright, with deep learning techniques such as autoencoders and generative adversarial networks (GANs).
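
To make the autoencoder approach concrete, below is a minimal PyTorch sketch of the shared-encoder, dual-decoder architecture behind classic face-swap deepfakes. The layer sizes, training loop, and random stand-in tensors are illustrative assumptions, not a production pipeline.

```python
# A minimal sketch (not a production pipeline) of the shared-encoder,
# dual-decoder autoencoder behind classic face-swap deepfakes.
# Shapes, step counts, and the random stand-in data are assumptions.
import torch
import torch.nn as nn

encoder = nn.Sequential(                        # shared by both identities
    nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
    nn.ReLU(),
    nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
    nn.ReLU(),
)

def make_decoder() -> nn.Module:
    # One decoder per identity; each learns to render its own face.
    return nn.Sequential(
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
        nn.ReLU(),
        nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
        nn.Sigmoid(),
    )

decoder_a, decoder_b = make_decoder(), make_decoder()
params = (list(encoder.parameters())
          + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

faces_a = torch.rand(8, 3, 64, 64)  # stand-ins for aligned face crops
faces_b = torch.rand(8, 3, 64, 64)  # of two different people

for step in range(20):
    # Reconstructing each identity with its own decoder forces the shared
    # latent code to capture pose and expression rather than identity.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The "swap": encode person A's face, render it with person B's decoder.
swapped = decoder_b(encoder(faces_a))
```

GAN-based pipelines build on the same swap trick but add an adversarial loss so the rendered face is harder to distinguish from real footage.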

Deepfake technology makes it easy to generate realistic yet manipulated media assets, and that capability cuts both ways. It is used legitimately in virtual reality, video games, and filmmaking, but it can also be wielded as a weapon in cyberwarfare, the fifth dimension of warfare, and used to spread false information that shapes public opinion in service of political agendas.

Cybercrime is rising as the internet's global penetration grows. According to India's National Crime Records Bureau (NCRB), around 50,000 incidents of cybercrime were recorded in 2020, and the national capital, Delhi, saw a 111% increase in cybercrime in 2021 compared to 2020.

The majority of these incidents involved online fraud, online sexual harassment, and the release of private content, among other offences. Deepfake technology, weaponised for financial gain, may drive such incidents even higher.

Notably, the technology not only threatens the right to privacy protected by Article 21 of the Indian Constitution but also plays a key role in cases of humiliation, misinformation, and defamation. Whaling attacks, deepfake voice phishing, and other frauds targeting individuals and companies are therefore likely to rise.

Mitigation Tips

The difficulties caused by deepfakes could be addressed in part with ChatGPT, the generative AI that has recently gained attention. Integrated into search engines, ChatGPT can surface viable alternatives to manipulated content. Built on natural language processing, it is trained to reject inappropriate requests, which helps combat the dissemination of misinformation, and it can carry out complex reasoning operations.
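
As a concrete illustration of that kind of guardrail, the sketch below screens text with OpenAI's moderation endpoint via the official openai Python SDK before it would be surfaced. The helper name and sample input are assumptions, and a valid OPENAI_API_KEY is required.

```python
# A minimal sketch of automated content screening with OpenAI's
# moderation endpoint (openai Python SDK v1+). Requires OPENAI_API_KEY
# in the environment; the helper and sample input are illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

def is_flagged(text: str) -> bool:
    """Return True if the moderation model flags the text."""
    response = client.moderations.create(input=text)
    return response.results[0].flagged

if __name__ == "__main__":
    sample = "Example caption scraped alongside a suspicious video."
    print("flagged:", is_flagged(sample))
```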

To purge such content from the internet swiftly after deployment, the model needs to be fine-tuned on a curated dataset using supervised learning. Because the system is accessible, it can be tweaked further to offer a quicker, more practical solution that is also affordable. However, the training set must be monitored constantly to stop the AI from scooping up new deepfakes that belong in the test set.
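
As a small illustration of that supervised setup and train/test hygiene, here is a scikit-learn sketch: a toy text classifier for flagging deepfake-related misinformation, evaluated on a held-out split that the training step never sees. The tiny inline dataset and its labels are entirely illustrative.

```python
# A toy sketch of supervised training hygiene: fit on a training split,
# evaluate on a held-out test split the model never sees.
# The tiny inline dataset and its labels are entirely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = [
    "official broadcast footage released by the network",
    "press conference video from the verified channel",
    "original interview published by the broadcaster",
    "authentic match highlights from the league",
    "leaked clip shows the politician confessing, share now",
    "shocking video proves what the celebrity really said",
    "secret footage the media will not show you",
    "this clip exposes what they are hiding from us",
]
labels = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = benign, 1 = suspect content

# Keep the test split untouched so new examples never leak into training.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, stratify=labels, random_state=0
)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```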

Additionally, a greater influx of cybersecurity specialists is required to achieve this. India currently spends only about 0.7% of its GDP on research and development, compared to roughly 3.3% in affluent nations such as the United States of America. The National Cyber Security Policy of 2013 must also be updated to keep pace with new technologies and curb the spread of cybercrime as these manipulations grow more sophisticated over time.

Seeing is No Longer Believing as Deepfakes Become Better and More Dangerous

Numerous industries are being transformed by artificial intelligence (AI), yet every benefit brings a drawback. Deepfake detection is becoming increasingly challenging as AI image generators grow more advanced.

The impact of AI-generated deepfakes on social media and in war zones is alarming world leaders and law enforcement organisations.

"We're getting into an era where we can no longer believe what we see," Marko Jak, co-founder, and CEO of Secta Labs, stated. "Right now, it's easier because the deep fakes are not that good yet, and sometimes you can see it's obvious." 

In Jak's opinion, the time when it won't be feasible to recognise a faked image at first glance is not far away, possibly within a year. As the CEO of a company that generates AI images, he is well placed to know.

Secta Labs, an Austin-based generative AI firm that Jak co-founded in 2022, specialises in producing high-quality AI-generated photographs. Users can upload photos of themselves to create avatars and headshots using artificial intelligence. 

According to Jak, Secta Labs considers customers to be the proprietors of the AI models produced from their data, whilst the company is only a custodian assisting in creating images from these models. 

The potential for abuse of more sophisticated AI models has prompted leaders around the world to demand swift action on AI legislation and driven businesses to withhold their cutting-edge technologies from the general public.

After unveiling its new Voicebox AI voice-generation platform last week, Meta said it would not make the model available to the general public.

"While we believe it is important to be open with the AI community and to share our research to advance the state of the art in AI,” the Meta spokesperson explained. “It’s also necessary to strike the right balance between openness with responsibility." 

Earlier this month, the U.S. Federal Bureau of Investigation warned about AI deepfake extortion scams and about criminals who fabricate content using images and videos taken from social media. Jak suggested that the key to defeating deepfakes may lie in exposing them rather than merely being able to identify them.

"AI is the first way you could spot [a deepfake]," Jak added. "There are people developing artificial intelligence that can tell you if an image in a video was produced by AI or not."

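A sketch of the detector idea Jak describes follows: a binary convolutional classifier trained to label images as real or AI-generated. The architecture, training loop, and random stand-in tensors are illustrative assumptions, not a working detection system.

```python
# A minimal sketch of a real-vs-generated image detector: a binary CNN
# classifier. Architecture, training loop, and random stand-in data are
# illustrative assumptions, not a working detection system.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1),   # 64x64 -> 32x32
    nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 32x32 -> 16x16
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),  # single logit: evidence the image is fake
)

images = torch.rand(16, 3, 64, 64)             # stand-in labelled frames
labels = torch.randint(0, 2, (16, 1)).float()  # 0 = real, 1 = generated

opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for step in range(20):
    loss = loss_fn(detector(images), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At inference time, a sigmoid over the logit gives the model's estimate
# of the probability that a frame was produced by AI.
prob_generated = torch.sigmoid(detector(images[:1]))
```
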
The entertainment industry is also embroiled in a debate over generative AI and the potential use of AI-generated imagery in film and television. Ahead of contract negotiations, SAG-AFTRA members voted to authorise a strike, citing serious concerns about artificial intelligence.

Jak noted that the real difficulty is the AI arms race now under way: as the technology advances, malicious actors develop ever more sophisticated deepfakes to thwart the tools intended to identify them.

Although blockchain has been overused, some might even say overhyped, as a fix for real problems, Jak claimed the technology, combined with encryption, might be able to address the deepfake issue. And while technology can solve many of the problems deepfakes pose, the wisdom of the public may hold the ultimate answer.
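
To ground the cryptography angle, here is a minimal sketch of signed media provenance, assuming the Python cryptography package: a publisher signs the SHA-256 hash of an image at capture, and anyone holding the public key can later check that the file was not altered. The inline key generation and stand-in image bytes are simplifications for illustration.

```python
# A minimal sketch of signed media provenance, assuming the Python
# 'cryptography' package: sign an image's SHA-256 hash at capture so
# that any later tampering breaks verification.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the private key would live in a camera or publisher's
# secure hardware; generating it here is purely illustrative.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

image_bytes = b"\x89PNG stand-in for raw image bytes"
signature = private_key.sign(hashlib.sha256(image_bytes).digest())

def is_authentic(candidate: bytes, sig: bytes) -> bool:
    """Check candidate bytes against the signature published at capture."""
    try:
        public_key.verify(sig, hashlib.sha256(candidate).digest())
        return True
    except InvalidSignature:
        return False

print(is_authentic(image_bytes, signature))                 # True
print(is_authentic(image_bytes + b" tampered", signature))  # False
```

A scheme along these lines verifies that a file is unchanged since signing; it cannot prove the original capture was genuine, which is why provenance standards pair signatures with trusted capture hardware.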