Grok AI Faces Global Backlash Over Nonconsensual Image Manipulation on X

 

A dispute over X's built-in AI assistant, Grok, is drawing attention to questions of consent, online safety, and how easily synthetic media tools can be misused. The tension surfaced when Julie Yukari, a 31-year-old musician living in Rio de Janeiro, posted a picture of herself relaxing with her cat during New Year's Eve celebrations. Shortly afterward, users on the network began instructing Grok to modify the photograph, digitally swapping her outfit for skimpy beach attire.

Yukari's initial skepticism soon gave way to shock. She had assumed the system would not act on those prompts, yet it did: altered images showing her in minimal clothing surfaced and spread quickly across the app. She called the episode painful, a moment that exposed how quietly consent can disappear when these tools operate inside familiar online spaces.

A Reuters investigation found that Yukari's case is not an isolated one. The news agency uncovered multiple examples in which Grok produced suggestive pictures of real people, some of whom appeared to be underage. X did not respond to inquiries about the report's findings. Earlier, xAI - the team developing Grok - had quickly dismissed similar claims, calling traditional outlets sources of false information.

Across the globe, unease is growing over sexually explicit images created by artificial intelligence. Officials in France have referred complaints about X to legal authorities, calling such content unlawful and deeply offensive to women. India's technology ministry issued a similar warning, telling X it had failed to stop indecent material from being made or shared online. U.S. agencies such as the FCC and FTC, meanwhile, have made no public statements.

Reuters' review found a sudden surge in demands for Grok to modify pictures of real people into suggestive clothing. Within just ten minutes, more than 100 such instances appeared, mostly focused on young women. The system often produced explicit visual content without hesitation; at other times it carried out only part of a request. A large share of the images quickly vanished from public view, limiting how much could be measured afterward.

AI-driven image-editing tools capable of stripping clothes off photos have existed for some time, but they mostly stayed on obscure websites or required payment. Because Grok is built directly into a well-known social network, creating such fake visuals now takes almost no effort at all. X had been warned earlier about launching these kinds of features without tight controls.

Researchers studying technology's impacts and advocacy groups argue this situation followed directly from those ignored warnings. Legal specialists say the episode highlights deep flaws in how platforms handle harmful content and govern artificial intelligence: rather than addressing risks early, X neither blocked offensive prompts during model development nor built strong safeguards against unauthorized image creation.

In cases such as Yukari's, the consequences run far beyond digital space: embarrassment lingers long after deletion. Although she knew the depictions were fake, she still pulled away socially, weighed down by stigma. X has not outlined specific fixes, but pressure is rising for tighter rules on generative AI, especially around who bears responsibility when companies release these tools widely. For now, there is little clarity on who answers for the outcomes.

AI Can Create Deepfake Videos of Children Using Just 20 Images, Expert Warns

 

Parents are being urged to rethink how much they share about their children online, as experts warn that criminals can now generate realistic deepfake videos using as few as 20 images. This alarming development highlights the growing risks of digital identity theft and fraud facing children due to oversharing on social media platforms.  

According to Professor Carsten Maple of the University of Warwick and the Alan Turing Institute, modern AI tools can construct highly realistic digital profiles, including 30-second deepfake videos, from a small number of publicly available photos. These images can be used not only by criminal networks to commit identity theft, open fraudulent accounts, or claim government benefits in a child’s name but also by large tech companies to train their algorithms, often without the user’s full awareness or consent. 

New research conducted by Perspectus Global and commissioned by Proton surveyed 2,000 UK parents of children under 16. The findings show that on average, parents upload 63 images to social media every month, with 59% of those being family-related. A significant proportion of parents—21%—share these photos multiple times a week, while 38% post several times a month. These frequent posts not only showcase images but also often contain sensitive data like location tags and key life events, making it easier for bad actors to build a detailed online profile of the child. Professor Maple warned that such oversharing can lead to long-term consequences. 

Aside from potential identity theft, children could face mental distress or reputational harm later in life from having a permanent digital footprint that they never consented to create. The problem is exacerbated by the fact that many parents are unaware of how their data is being used. For instance, 48% of survey respondents did not realize that cloud storage providers can access the data stored on their platforms. In fact, more than half of the surveyed parents (56%) store family images on cloud services such as Google Drive or Apple iCloud. On average, each parent had 185 photos of their children stored digitally—images that may be accessed or analyzed under vaguely worded terms and conditions.  

Recent changes to Instagram’s user agreement, which now allows the platform to use uploaded images to train its AI systems, have further heightened privacy concerns. Additionally, experts have warned about the use of personal images by other Big Tech firms to enhance facial recognition algorithms and advertising models. To protect their children, parents are advised to implement a range of safety measures. These include using secure and private cloud storage, adjusting privacy settings on social platforms, avoiding public Wi-Fi when sharing or uploading data, and staying vigilant against phishing scams. 

Furthermore, experts recommend setting boundaries with children regarding online activity, using parental controls, antivirus tools, and search filters, and modeling responsible digital behavior. The growing accessibility of AI-based image manipulation tools underscores the urgent need for greater awareness and proactive digital hygiene. What may seem like harmless sharing today could expose children to significant risks in the future.

Deepfakes: A Rising Threat to Cybersecurity and Society

 

Audiences were stunned when the late NBA player Kobe Bryant appeared in the music video for Kendrick Lamar's song "The Heart Part 5"; deepfake technology was used in the video to pay tribute to the late legend.

Deepfakes are images and videos that have been altered with advanced deep learning technologies such as autoencoders or generative adversarial networks.
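
To make that concrete, the classic face-swap pipeline pairs one shared encoder with a separate decoder per identity: each decoder learns to reconstruct its own person's face, and the swap happens when one person's encoded face is run through the other person's decoder. The PyTorch sketch below is only an illustrative outline of that idea; the layer sizes, the 64x64 input resolution, and the random stand-in tensors are assumptions, not any particular tool's implementation.

```python
# Minimal sketch of the shared-encoder / dual-decoder autoencoder behind
# classic face-swap deepfakes (illustrative assumptions throughout).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),   # assumes 64x64 face crops
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity

# Training step (sketch): each decoder reconstructs its own person's faces.
faces_a = torch.rand(8, 3, 64, 64)            # stand-in for person A's face crops
loss = nn.MSELoss()(decoder_a(encoder(faces_a)), faces_a)

# The "swap": person A's pose and expression, rendered with person B's decoder.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

Generative adversarial networks take a different route, pitting a generator against a discriminator, but the end result is the same: realistic media depicting things that never happened.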

Deepfake technology makes it easy to generate realistic yet manipulated media assets, and therein lies its deceptive power. The technology is utilised in virtual reality, video games, and filmmaking, but it might also be used as a weapon in cyberwarfare, the fifth dimension of warfare, or to spread false information that shapes public opinion and serves political agendas.

Cybercrime is on the rise as the internet's global penetration grows. According to India's National Crime Records Bureau (NCRB), there were around 50,000 incidents of cybercrime in 2020, and the national capital witnessed a 111% increase in cybercrime in 2021 compared to 2020.

The majority of these incidents involved online fraud, online sexual harassment, and the release of private content, among other offences. Deepfake technology may drive a further increase in such incidents as it is weaponized for financial gain.

Notably, the technology is not only a threat to the right to privacy protected by Article 21 of the Indian Constitution, but it also plays a key role in cases of humiliation, misinformation, and defamation. Whaling attacks, deepfake voice phishing, and other frauds targeting individuals and companies are therefore likely to rise.

Mitigation Tips

Some of the difficulties caused by deepfakes could be addressed using ChatGPT, the generative AI system that has recently gained attention. ChatGPT can be integrated into search engines to offer more reliable results, and because it is built on natural language processing and trained to reject inappropriate requests, it can help combat the dissemination of misinformation. It can also carry out complex reasoning operations.

To purge such content from the internet swiftly after deployment, the model needs to be fine-tuned on a labelled dataset using supervised learning. Because the system is widely accessible, it can be tweaked further to offer a quicker, more practical solution that is also affordable. However, the training data must be monitored constantly so the model does not simply absorb the new deepfakes it is meant to detect.
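
The fine-tuning step is left abstract above; as one hedged illustration, the sketch below fine-tunes a small pretrained image classifier (a torchvision ResNet-18) on a labelled set of real versus AI-generated images so that flagged content can be reviewed or taken down. The deepfake_dataset folder layout, batch size, and learning rate are assumptions made for the example, not a prescribed pipeline.

```python
# Illustrative supervised fine-tuning of a binary real-vs-synthetic classifier.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumed layout: deepfake_dataset/train/{real,fake}/*.jpg
train_set = datasets.ImageFolder("deepfake_dataset/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a pretrained backbone; replace the head with a two-class output.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # 0 = real, 1 = AI-generated

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

The point about monitoring still applies: whatever labelled data such a detector is trained on has to be refreshed as new generation techniques appear, or its accuracy decays.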

Additionally, a greater influx of cybersecurity specialists is required to achieve this. India currently spends only 0.7% of its GDP on research and development, compared with about 3.3% in affluent nations such as the United States. The National Cyber Security Policy of 2013 must also be improved so it can adapt to new technologies and curb the spread of cybercrime as these manipulations grow more sophisticated over time.

Seeing is No Longer Believing as Deepfakes Become Better and More Dangerous

 

Numerous industries are being transformed by artificial intelligence (AI), yet with every benefit comes a drawback. Deepfake detection is becoming more and more challenging as AI image generators become more advanced. 

The impact of AI-generated deepfakes on social media and in war zones is alarming world leaders and law enforcement organisations.

"We're getting into an era where we can no longer believe what we see," said Marko Jak, co-founder and CEO of Secta Labs. "Right now, it's easier because the deep fakes are not that good yet, and sometimes you can see it's obvious."

In Jak's opinion, the point at which a faked image can no longer be recognised at first glance is not far off, possibly within a year. As the CEO of a company that generates AI images, he is well placed to know.

Secta Labs, an Austin-based generative AI firm that Jak co-founded in 2022, specialises in producing high-quality AI-generated photographs. Users can upload photos of themselves to create avatars and headshots using artificial intelligence. 

According to Jak, Secta Labs considers customers to be the proprietors of the AI models produced from their data, whilst the company is only a custodian assisting in creating images from these models. 

The potential for abuse of more sophisticated AI models has prompted leaders around the world to demand swift action on AI legislation and driven businesses to decide against making their cutting-edge technologies available to the general public.

After launching its new Voicebox AI-generated voice platform last week, Meta said it would not make the AI available to the general public.

"While we believe it is important to be open with the AI community and to share our research to advance the state of the art in AI," a Meta spokesperson explained, "it's also necessary to strike the right balance between openness and responsibility."

The U.S. Federal Bureau of Investigation issued a warning about AI deepfake extortion scams earlier this month, as well as about criminals who fabricate content using images and videos from social media. Jak suggested that exposing deepfakes rather than being able to identify them may be the key to defeating them. 

"AI is the first way you could spot [a deepfake]," Jak added. "There are people developing artificial intelligence that can tell you if an image in a video was produced by AI or not."

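As a rough illustration of the kind of detector described in that quote, the snippet below runs a suspect frame through an image-classification model from the Hugging Face transformers library. The model name example-org/ai-image-detector is a placeholder assumption, not a reference to any specific published detector; a real deployment would pick a model trained for this task and calibrate its decision threshold.

```python
# Hypothetical deepfake-detection inference; the model name is a placeholder.
from transformers import pipeline

detector = pipeline("image-classification", model="example-org/ai-image-detector")

scores = detector("suspect_frame.jpg")
print(scores)  # e.g. [{'label': 'ai_generated', 'score': 0.97}, ...]
```
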
A debate is raging in the entertainment industry over the use of generative AI and the possible application of AI-generated imagery in film and television. Before contract negotiations began, SAG-AFTRA members voted to authorise a strike, driven in part by serious concerns about artificial intelligence.

Jak noted that the real difficulty is the AI arms race now underway: as the technology advances, malicious actors develop ever more sophisticated deepfakes to thwart the tools intended to identify them.

Although blockchain has been overused, some might even say overhyped, as a fix for real problems, he claimed that the technology, combined with encryption, might be able to address the deepfake issue. Even so, technology can only solve part of the problem; the wisdom of the public may hold the rest of the answer.
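
One concrete form that idea can take is cryptographic provenance: hash and sign a piece of media at capture or publication time so that anyone can later verify the file has not been altered, with the signature optionally anchored on a public ledger. The sketch below uses Python's hashlib and the cryptography package; the file names and the simplified key handling (real systems would rely on certificates or a ledger for key distribution) are assumptions for illustration.

```python
# Sketch of signed provenance for an image; file names are illustrative.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # held by the camera or publisher
public_key = private_key.public_key()        # shared with anyone who verifies

# Hash and sign the original file at publication time.
with open("original_photo.jpg", "rb") as f:
    original_digest = hashlib.sha256(f.read()).digest()
signature = private_key.sign(original_digest)

# Later, a viewer re-hashes the file they received and checks the signature.
with open("received_photo.jpg", "rb") as f:
    received_digest = hashlib.sha256(f.read()).digest()

try:
    public_key.verify(signature, received_digest)
    print("File matches the signed original.")
except InvalidSignature:
    print("File has been altered or is not the signed original.")
```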