Seeing is No Longer Believing as Deepfakes Become Better and More Dangerous

Artificial intelligence (AI) is transforming numerous industries, but every benefit comes with a drawback: as AI image generators become more advanced, detecting deepfakes is becoming ever more challenging.

The impact of AI-generated deepfakes on social media and in war zones is alarming world leaders and law enforcement organisations alike.

"We're getting into an era where we can no longer believe what we see," stated Marko Jak, co-founder and CEO of Secta Labs. "Right now, it's easier because the deepfakes are not that good yet, and sometimes you can see it's obvious."

In Jak's opinion, the time when a faked image can no longer be recognised at first glance is not far away, possibly within a year. As the CEO of a company that generates AI images, Jak is well placed to know.

Secta Labs, an Austin-based generative AI firm that Jak co-founded in 2022, specialises in producing high-quality AI-generated photographs. Users can upload photos of themselves to create avatars and headshots using artificial intelligence. 

According to Jak, Secta Labs treats customers as the owners of the AI models produced from their data, while the company acts only as a custodian that helps create images from those models.

The potential for abuse of more sophisticated AI models has prompted leaders around the world to call for swift action on AI legislation and driven businesses to withhold their cutting-edge technologies from the general public.

After launching its new Voicebox AI-generated voice platform last week, Meta stated that it would not make the AI available to the general public.

"While we believe it is important to be open with the AI community and to share our research to advance the state of the art in AI," a Meta spokesperson explained, "it's also necessary to strike the right balance between openness and responsibility."

Earlier this month, the U.S. Federal Bureau of Investigation issued a warning about AI deepfake extortion scams and about criminals fabricating content from images and videos taken from social media. Jak suggested that the key to defeating deepfakes may lie in exposing them rather than merely being able to identify them.

"AI is the first way you could spot [a deepfake]," Jak added. "There are people developing artificial intelligence that can tell you if an image in a video was produced by AI or not."

A debate is raging in the entertainment industry over the use of generative AI and the potential application of AI-generated imagery in movies and television. Before beginning contract negotiations, SAG-AFTRA members voted to authorise a strike, citing serious concerns about artificial intelligence.

Jak noted that the real difficulty is the AI arms race now under way: as the technology advances, malicious actors develop ever more sophisticated deepfakes to evade the tools designed to detect them.

Although blockchain has been overused, some might even say overhyped, as a fix for real-world problems, he claimed the technology, together with encryption, might be able to address the deepfake issue. And while technology can solve many of the problems deepfakes raise, the wisdom of the public may ultimately hold the answer.
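
To make the encryption half of that idea concrete, here is a minimal, hypothetical sketch (not a description of any product or standard mentioned above) of signed image provenance, with a blockchain serving only as one possible public place to record the signatures; the key handling and image bytes are placeholders.

```python
# Hypothetical sketch of image provenance via signatures: the publisher signs
# a hash of the file at creation time, and anyone with the public key can
# later verify the bytes are unchanged. A blockchain would simply be one
# public place to record such signatures. Uses Python's "cryptography" library.

import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # held by the camera or publisher
public_key = private_key.public_key()        # distributed to verifiers

image_bytes = b"...raw image file bytes..."       # placeholder content
digest = hashlib.sha256(image_bytes).digest()     # fingerprint of the image
signature = private_key.sign(digest)              # provenance record to publish

# Later, a third party checks a copy of the image against the published record:
received_digest = hashlib.sha256(image_bytes).digest()
try:
    public_key.verify(signature, received_digest)
    print("Image matches the signed original.")
except InvalidSignature:
    print("Image has been altered since it was signed.")
```

A failed check only proves the file is no longer the signed original, not what was changed or why, which is where the public scrutiny Jak points to would still be needed.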