Deepfakes Are More Polluting Than People Think

 


Artificial intelligence is blurring the line between imagination and reality, and a new digital controversy is unfolding at a time when ethics and creativity seem to matter less and the digital realm has become far more fluid.

With the advent of advanced artificial intelligence platforms such as OpenAI's Sora, deepfake videos have flooded social media feeds with astoundingly lifelike depictions of celebrities and historical figures, resurrected in scenes that are at times sensational and at others deeply offensive.

The phenomenon has caused widespread concern among the families of revered figures such as Dr Martin Luther King Jr, several of whom are publicly urging technology companies to put stronger safeguards in place to prevent unauthorised use of their loved ones' likenesses.

However, as the debate over the ethical boundaries of synthetic media intensifies, another aspect of the issue is quietly surfacing: its hidden environmental cost.

The creation of these hyperrealistic videos requires a great deal of computational power, as Dr Kevin Grecksch of the University of Oxford explains, along with substantial amounts of energy and water to run the cooling systems inside data centres. What appears to be a fleeting piece of digital art therefore carries a significant environmental cost, adding an unexpected layer to the concerns surrounding the digital revolution.

The uncanny realism of these videos has captivated audiences while steadily blurring the line between truth and fabrication. Families portrayed in this trend, including that of Dr Martin Luther King Jr, have raised the alarm and called on technology companies to prevent unauthorised digital resurrections of their loved ones. Yet there is more to the controversy than dignity and consent alone.

However convincingly rendered these videos may be, Dr Grecksch stresses that they carry a significant and often overlooked ecological impact: generating such content depends on powerful data centres that consume vast amounts of electricity and cooling water, resources that add substantially to the technology's growing environmental footprint.

Deepfakes, a rapidly advancing form of synthetic media, have emerged as one of the most striking examples of how artificial intelligence is reshaping digital communication. By combining complex deep learning algorithms with massive datasets, these technologies can convincingly replace or manipulate faces, voices, and even gestures.

The likeness of one person is seamlessly merged with that of another. Closely related are shallow fakes, which are less technologically sophisticated but no less consequential: they rely on simple editing techniques to distort reality, further blurring the line between authenticity and fabrication. The proliferation of deepfakes has accelerated at an unprecedented pace over the past few years.

A recent report suggests that the number of such videos circulating online doubles roughly every six months. An estimated 500,000 deepfake videos and audio clips were shared globally in 2023, and if current trends hold, that figure is expected to approach 8 million by 2025. Experts attribute this explosive growth to the wide availability of advanced AI tools and the sheer volume of publicly accessible data, which together create an ideal environment for manipulated media to flourish.
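
To make the arithmetic behind those figures explicit, a quick back-of-the-envelope check in Python shows that starting from roughly 500,000 clips and doubling every six months for two years lands at about 8 million, consistent with the projection quoted above.

    # Doubling every six months, starting from the 2023 estimate of ~500,000 clips.
    count = 500_000
    for _ in range(4):      # four six-month periods = two years (2023 -> 2025)
        count *= 2
    print(f"{count:,}")     # 8,000,000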

The rise of deepfake technology has set off intense debate in legal and policy circles, underscoring the urgency of redefining the boundaries of accountability in an era when synthetic media is so prevalent. Hyper-realistic digital forgeries created by advanced deep learning algorithms pose a complex challenge that extends well beyond the technology itself.

Legal scholars warn that deepfakes threaten privacy, intellectual property, and personal dignity while undermining public trust in information. A growing body of evidence suggests that these fabrications can mislead audiences, influence electoral outcomes, and facilitate non-consensual pornography, all of which carry severe social, ethical, and legal consequences.

In an effort to contain the threat, the European Union is enforcing legislation such as its Artificial Intelligence Act and Digital Services Act, which assign responsibility to large online platforms and establish standards for the governance of artificial intelligence. Even so, experts contend that such initiatives remain insufficient in the absence of comprehensive definitions, enforcement mechanisms, and protocols for assisting victims.

Moreover, the situation is compounded by a fragmented international approach: many states in the United States have enacted laws addressing fake media, but rules remain inconsistent across jurisdictions, and countries such as Canada continue to struggle to regulate deepfake pornography and other forms of non-consensual synthetic media.

Social media amplifies these risks by providing an efficient channel for spreading manipulated media. To mitigate future damage, scholars have advocated sweeping reforms, ranging from stricter privacy laws to a recalibration of free-speech protections and preemptive restrictions on deepfake generation, covering harms such as identity theft and fraud that existing legal systems are ill-equipped to address.

Ethical concerns are also emerging outside the policy arena, in unexpected contexts such as the use of deepfakes in grief therapy and entertainment, where the line between emotional comfort and manipulation becomes dangerously blurred at moments of distress.

Researchers calling for better detection and prevention frameworks have reached a common conclusion: deepfakes must be regulated in a way that strikes a delicate balance between innovation and protection, ensuring that technological advances do not come at the expense of truth, justice, or human dignity.


AI-powered video generation tools have become so popular that they have transformed online content creation, but they have also raised serious concerns about their environmental consequences. The data centres that form the digital backbone of these technologies consume large quantities of electricity and fresh water to keep their servers cool.

Applications like OpenAI's Sora make it easy for users to create and share hyperrealistic videos, and the resulting wave of deepfake content on social media has helped such apps climb to the top of the global download charts. Within just five days of launch, Sora passed one million downloads, cementing its position as the leading app on the US Apple App Store.

Amid this surge of creative enthusiasm, however, a growing environmental dilemma has been identified by Dr Kevin Grecksch of the University of Oxford, who recently warned against ignoring the water and energy demands of AI infrastructure. He urged users and policymakers alike to recognise that digital innovation carries a significant ecological footprint, and that the water it consumes needs to be considered far more carefully.

He argued that the "cat is out of the sack" with the adoption of artificial intelligence, but that more integrated planning is imperative in deciding where data centres are built and how they are cooled.

He also warned that although the government envisions South Oxfordshire as a potential hub for artificial intelligence development, insufficient attention has been paid to the environmental logistics, particularly where the necessary water supply will come from. As enthusiasm for generative technologies continues to surge, experts insist that the conversation about the future of AI must go beyond innovation and efficiency to encompass sustainability, resource management, and long-term environmental responsibility.

Standing at a crossroads between innovation and accountability, the future of artificial intelligence demands more than admiration for its brilliance; it demands responsibility for how it is applied. Deepfake technology may be a testament to human ingenuity, but it must be governed by ethics, regulation, and sustainability.

Policymakers, technology firms, and environmental authorities need to collaborate on frameworks that protect both digital integrity and natural resources. For a safer and more transparent digital era, that means encouraging the use of renewable energy in data centres, enforcing stricter consent-based media laws, and investing in deepfake detection systems.

AI promises a world of creation without human intervention, yet its real promise lies in our capacity to control its outcomes, ensuring that in a world increasingly shaped by artificial intelligence, progress remains a force for truth, equity, and ecological balance.

India Plans Techno-Legal Framework to Combat Deepfake Threats

 

India will introduce comprehensive regulations to combat deepfakes in the near future, Union IT Minister Ashwini Vaishnaw announced at the NDTV World Summit 2025 in New Delhi. The minister emphasized that the upcoming framework will adopt a dual-component approach combining technical solutions with legal measures, rather than relying solely on traditional legislation.

Vaishnaw explained that artificial intelligence cannot be effectively regulated through conventional lawmaking alone, as the technology requires innovative technical interventions. He acknowledged that while AI enables entertaining applications like age transformation filters, deepfakes pose unprecedented threats to society by potentially misusing individuals' faces and voices to disseminate false messages completely disconnected from the actual person.

The minister highlighted the fundamental right of individuals to protect their identity from harmful misuse, stating that this principle forms the foundation of the government's approach to deepfake regulation. The techno-legal strategy distinguishes India's methodology from the European Union's primarily regulatory framework, with India prioritizing innovation alongside societal protection.

As part of the technical solution, Vaishnaw referenced ongoing work at the AI Safety Institute, specifically mentioning that the Indian Institute of Technology Jodhpur has developed a detection system capable of identifying deepfakes with over 90 percent accuracy. This technological advancement will complement the legal framework to create a more robust defense mechanism.

The minister also discussed India's broader AI infrastructure development, noting that two semiconductor manufacturing units, CG Semi and Kaynes, have commenced production operations in the country. Additionally, six indigenous AI models are currently under development, two of which use approximately 120 billion parameters and are designed to be free from the biases present in Western models.

The government has deployed 38,000 graphics processing units (GPUs) for AI development and secured a $15 billion investment commitment from Google to establish a major AI hub in India. This infrastructure expansion aims to enhance the nation's research capabilities and application development in artificial intelligence.

OpenAI's Sora App Raises Facial Data Privacy Concerns

 

OpenAI's video-generating app, Sora, has raised significant questions regarding the safety and privacy of users' biometric data, particularly with its "Cameo" feature that creates realistic AI videos, or "deepfakes," using a person's face and voice.

To power this functionality, OpenAI confirms it must store users' facial and audio data. The company states this sensitive data is encrypted during both storage and transmission, and uploaded cameo data is automatically deleted after 30 days. Despite these assurances, privacy concerns remain. The app's ability to generate hyper-realistic videos has sparked fears about the potential for misuse, such as the creation of unauthorized deepfakes or the spread of misinformation. 

OpenAI acknowledges a slight risk that the app could produce inappropriate content, including sexual deepfakes, despite the safeguards in place. In response to these risks, the company has implemented measures to distinguish AI-generated content, including visible watermarks and invisible C2PA metadata in every video created with Sora.
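
As a loose illustration of the provenance idea behind C2PA-style metadata (this is not OpenAI's implementation and not the actual C2PA specification), the Python sketch below binds a claim about how a file was made to the file's hash under a signature, so that any later edit to the content becomes detectable. The manifest fields, the shared key, and the HMAC "signature" are simplifying assumptions; real C2PA manifests use certificate-based signatures and a far richer structure.

    import hashlib
    import hmac
    import json

    SIGNING_KEY = b"demo-key"  # placeholder; C2PA uses certificate-based signatures, not a shared key

    def make_manifest(video_bytes: bytes, generator: str) -> dict:
        """Attach a provenance claim (who made it, hash of the content) plus a signature."""
        claim = {"generator": generator, "sha256": hashlib.sha256(video_bytes).hexdigest()}
        sig = hmac.new(SIGNING_KEY, json.dumps(claim, sort_keys=True).encode(), hashlib.sha256).hexdigest()
        return {"claim": claim, "signature": sig}

    def verify_manifest(video_bytes: bytes, manifest: dict) -> bool:
        """Check that the claim is untampered and still matches the current content."""
        claim = manifest["claim"]
        expected = hmac.new(SIGNING_KEY, json.dumps(claim, sort_keys=True).encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, manifest["signature"]) and \
            claim["sha256"] == hashlib.sha256(video_bytes).hexdigest()

    video = b"...video bytes..."
    manifest = make_manifest(video, generator="ai-video-tool")
    print(verify_manifest(video, manifest))            # True
    print(verify_manifest(video + b"edit", manifest))  # False: the content no longer matches the claim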

The company emphasizes that users have control over their likeness. Individuals can decide who is permitted to use their cameo and can revoke access or delete any video featuring them at any time. However, a major point of contention is the app's account deletion policy. Deleting a Sora account also results in the termination of the user's entire OpenAI account, including ChatGPT access, and the user cannot register again with the same email or phone number. 

While OpenAI has stated it is developing a way for users to delete their Sora account independently, this integrated deletion policy has surprised and concerned many users who wish to remove their biometric data from Sora without losing access to other OpenAI services.

The app has also drawn attention for potential copyright violations, with users creating videos featuring well-known characters from popular media. While OpenAI provides a mechanism for rights holders to request the removal of their content, the platform's design has positioned it as a new frontier for intellectual property disputes.

AI Turns Personal: Criminals Now Cloning Loved Ones to Steal Money, Warns Police

 



Police forces in the United Kingdom are alerting the public to a surge in online fraud cases, warning that criminals are now exploiting artificial intelligence and deepfake technology to impersonate relatives, friends, and even public figures. The warning, issued by West Mercia Police, stresses how the technology is being used to deceive people into sharing sensitive information or transferring money.

According to the force’s Economic Crime Unit, criminals are constantly developing new strategies to exploit internet users. With the rapid evolution of AI, scams are becoming more convincing and harder to detect. To help people stay informed, officers have shared a list of common fraud-related terms and explained how each method works.

One of the most alarming developments is the use of AI-generated deepfakes, realistic videos or voice clips that make it appear as if a known person is speaking. These are often used in romance scams, investment frauds, or emotional blackmail schemes to gain a victim’s trust before asking for money.

Another growing threat is keylogging, where fraudsters trick victims into downloading malicious software that secretly records every keystroke. This allows criminals to steal passwords, banking details, and other private information. The software is often installed through fake links or phishing emails that look legitimate.

Account takeover, or ATO, remains one of the most common types of identity theft. Once scammers access an individual’s online account, they can change login credentials, reset security settings, and impersonate the victim to access bank or credit card information.

Police also warned about SIM swapping, a method in which criminals gather personal details from social media or scam calls and use them to convince mobile providers to transfer a victim’s number to a new SIM card. This gives the fraudster control over the victim’s messages and verification codes, making it easier to access online accounts.

Other scams include courier fraud, where offenders pose as police officers or bank representatives and instruct victims to withdraw money or purchase expensive goods. A “courier” then collects the items directly from the victim’s home. In many cases, scammers even ask for bank cards and PIN numbers.

The force’s notice also included reminders about malware and ransomware, malicious programs that can steal or lock files. Criminals may also encourage victims to install legitimate-looking remote access tools such as AnyDesk, allowing them full control of a victim’s device.

Additionally, spoofing, the act of disguising phone numbers, email addresses, or website links to appear genuine, continues to deceive users. Fraudsters often combine spoofing with AI to make fake communication appear even more authentic.

Police advise the public to remain vigilant, verify any unusual requests, and avoid clicking on suspicious links. Anyone seeking more information or help can visit trusted resources such as Action Fraud or Get Safe Online, which provide updates on current scams and guidance on reporting cybercrime.



How Scammers Use Deepfakes in Financial Fraud and Ways to Stay Protected

 

Deepfake technology, developed through artificial intelligence, has advanced to the point where it can convincingly replicate human voices, facial expressions, and subtle movements. While once regarded as a novelty for entertainment or social media, it has now become a dangerous tool for cybercriminals. In the financial world, deepfakes are being used in increasingly sophisticated ways to deceive institutions and individuals, creating scenarios where it becomes nearly impossible to distinguish between genuine interactions and fraudulent attempts. This makes financial fraud more convincing and therefore more difficult to prevent. 

One of the most troubling ways scammers exploit this technology is through face-swapping. With many banks now relying on video calls for identity verification, criminals can deploy deepfake videos to impersonate real customers. By doing so, they can bypass security checks and gain unauthorized access to accounts or approve financial decisions on behalf of unsuspecting individuals. The realism of these synthetic videos makes them difficult to detect in real time, giving fraudsters a significant advantage. 

Another major risk involves voice cloning. As voice-activated banking systems and phone-based transaction verifications grow more common, fraudsters use audio deepfakes to mimic a customer’s voice. If a bank calls to confirm a transaction, criminals can respond with cloned audio that perfectly imitates the customer, bypassing voice authentication and seizing control of accounts. Scammers also use voice and video deepfakes to impersonate financial advisors or bank representatives, making victims believe they are speaking to trusted officials. These fraudulent interactions may involve fake offers, urgent warnings, or requests for sensitive data, all designed to extract confidential information. 

The growing realism of deepfakes means consumers must adopt new habits to protect themselves. Double-checking unusual requests is a critical step, as fraudsters often rely on urgency or trust to manipulate their targets. Verifying any unexpected communication by calling a bank’s official number or visiting in person remains the safest option. Monitoring accounts regularly is another defense, as early detection of unauthorized or suspicious activity can prevent larger financial losses. Setting alerts for every transaction, even small ones, can make fraudulent activity easier to spot. 

Using multi-factor authentication adds an essential layer of protection against these scams. By requiring more than just a password to access accounts, such as one-time codes, biometrics, or additional security questions, banks make it much harder for criminals to succeed, even if deepfakes are involved. Customers should also remain cautious of video and audio communications requesting sensitive details. Even if the interaction appears authentic, confirming through secure channels is far more reliable than trusting what seems real on screen or over the phone.  
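
As a rough illustration of the one-time-code style of multi-factor authentication described above, the Python sketch below implements the standard time-based one-time password (TOTP) calculation from a shared secret. The secret shown is a placeholder, and a real deployment relies on the bank's or a vetted provider's implementation rather than hand-rolled code.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        """Derive the current one-time code from a base32 shared secret (RFC 6238)."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval                 # 30-second time step
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                             # dynamic truncation (RFC 4226)
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    # Hypothetical shared secret; in practice it is provisioned to an authenticator app.
    print(totp("JBSWY3DPEHPK3PXP"))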

Deepfake-enabled fraud is dangerous precisely because of how authentic it looks and sounds. Yet, by staying vigilant, educating yourself about emerging scams, and using available security tools, it is possible to reduce risks. Awareness and skepticism remain the strongest defenses, ensuring that financial safety is not compromised by increasingly deceptive digital threats.

How AI Impacts KYC and Financial Security


Finance has become a top target for deepfake-enabled fraud in the KYC process, undermining the integrity of the identity-verification frameworks that underpin anti-money laundering (AML) and counter-terrorism financing (CTF) systems.

Experts have found a rise in suspicious activity using AI-generated media, highlighting that threat actors exploit GenAI to “defraud… financial institutions and their customers.”

Wall Street's regulator FINRA has warned that deepfake audio and video scams could cause losses of $40 billion in the finance sector by 2027.

Biometric safeguards alone are no longer enough. A 2024 Regula study revealed that 49% of businesses across industries such as fintech and banking have faced deepfake fraud attacks, with average losses of $450,000 per incident.

As these numbers rise, it becomes important to understand how deepfake attacks can be prevented in order to protect customers and the financial industry globally.

More than 1,100 deepfake attacks in Indonesia

Last year, an Indonesian bank recorded more than 1,100 attempts to bypass its digital KYC loan-application process within three months, according to cybersecurity firm Group-IB.

Threat actors paired AI-powered face-swapping with virtual-camera tools to defeat the bank's liveness-detection controls, despite its "robust, multi-layered security measures." According to Forbes, losses "from these intrusions have been estimated at $138.5 million in Indonesia alone."

These face-swapping tools let attackers replace a target's facial features with those of another person and exploit "virtual camera software to manipulate biometric data, deceiving institutions into approving fraudulent transactions," Group-IB reports.

How deepfake KYC fraud works

Scammers first gather personal data via malware, the dark web, social networking sites, or phishing scams. This data is then used to mimic identities.

Next, scammers use deepfake technology to alter identity documents, swapping photos, modifying details, or re-creating entire IDs to slip past KYC checks.

Threat actors then feed prerecorded deepfake videos through virtual cameras, simulating real-time interactions to evade security checks.

All of this highlights that traditional mechanisms are proving inadequate against advanced AI scams. One study found that a deepfake attempt was made every five minutes, and that only 0.1% of people could reliably spot deepfakes.
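
For scale, that cadence works out as follows (a rough calculation, assuming the rate quoted in the study holds around the clock):

    # One deepfake attempt every five minutes:
    attempts_per_day = 24 * 60 // 5
    print(attempts_per_day, attempts_per_day * 365)   # 288 per day, roughly 105,000 per year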

U.S. Senators Propose New Task Force to Tackle AI-Based Financial Scams

 


In response to the rising threat of artificial intelligence being used for financial fraud, U.S. lawmakers have introduced a new bipartisan Senate bill aimed at curbing deepfake-related scams.

The bill, called the Preventing Deep Fake Scams Act, has been brought forward by Senators from both political parties. If passed, it would lead to the formation of a new task force headed by the U.S. Department of the Treasury. This group would bring together leaders from major financial oversight bodies to study how AI is being misused in scams, identity theft, and data-related crimes and what can be done about it.

The proposed task force would include representatives from agencies such as the Federal Reserve, the Consumer Financial Protection Bureau, and the Federal Deposit Insurance Corporation, among others. Their goal will be to closely examine the growing use of AI in fraudulent activities and provide the U.S. Congress with a detailed report within a year.


This report is expected to outline:

• How financial institutions can better use AI to stop fraud before it happens,

• Ways to protect consumers from being misled by deepfake content, and

• Policy and regulatory recommendations for addressing this evolving threat.


One of the key concerns the bill addresses is the use of AI to create fake voices and videos that mimic real people. These deepfakes are often used to deceive victims—such as by pretending to be a friend or family member in distress—into sending money or sharing sensitive information.

According to official data from the Federal Trade Commission, over $12.5 billion was stolen through fraud in the past year—a 25% increase from the previous year. Many of these scams now involve AI-generated messages and voices designed to appear highly convincing.

While this particular legislation focuses on financial scams, it adds to a broader legislative effort to regulate the misuse of deepfake technology. Earlier this year, the U.S. House passed a bill targeting nonconsensual deepfake pornography. Meanwhile, law enforcement agencies have warned that fake messages impersonating high-ranking officials are being used in various schemes targeting both current and former government personnel.

Another Senate bill, introduced recently, seeks to launch a national awareness program led by the Commerce Department. This initiative aims to educate the public on how to recognize AI-generated deception and avoid becoming victims of such scams.

As digital fraud evolves, lawmakers are urging financial institutions, regulators, and the public to work together in identifying threats and developing solutions that can keep pace with rapidly advancing technologies.

AI Can Create Deepfake Videos of Children Using Just 20 Images, Expert Warns

 

Parents are being urged to rethink how much they share about their children online, as experts warn that criminals can now generate realistic deepfake videos using as few as 20 images. This alarming development highlights the growing risks of digital identity theft and fraud facing children due to oversharing on social media platforms.  

According to Professor Carsten Maple of the University of Warwick and the Alan Turing Institute, modern AI tools can construct highly realistic digital profiles, including 30-second deepfake videos, from a small number of publicly available photos. These images can be used not only by criminal networks to commit identity theft, open fraudulent accounts, or claim government benefits in a child’s name but also by large tech companies to train their algorithms, often without the user’s full awareness or consent. 

New research conducted by Perspectus Global and commissioned by Proton surveyed 2,000 UK parents of children under 16. The findings show that on average, parents upload 63 images to social media every month, with 59% of those being family-related. A significant proportion of parents—21%—share these photos multiple times a week, while 38% post several times a month. These frequent posts not only showcase images but also often contain sensitive data like location tags and key life events, making it easier for bad actors to build a detailed online profile of the child. Professor Maple warned that such oversharing can lead to long-term consequences. 
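
Read against the roughly 20 images cited earlier as enough for a convincing deepfake, the survey figures are striking. The short calculation below is a rough back-of-the-envelope reading that assumes the 59% family-related share applies to the monthly average of 63 uploads.

    uploads_per_month = 63
    family_share = 0.59
    family_uploads = uploads_per_month * family_share   # ~37 family photos a month
    print(round(family_uploads))                         # 37
    print(round(20 / family_uploads * 30))               # the 20-image threshold is reached in ~16 days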

Aside from potential identity theft, children could face mental distress or reputational harm later in life from having a permanent digital footprint that they never consented to create. The problem is exacerbated by the fact that many parents are unaware of how their data is being used. For instance, 48% of survey respondents did not realize that cloud storage providers can access the data stored on their platforms. In fact, more than half of the surveyed parents (56%) store family images on cloud services such as Google Drive or Apple iCloud. On average, each parent had 185 photos of their children stored digitally—images that may be accessed or analyzed under vaguely worded terms and conditions.  

Recent changes to Instagram’s user agreement, which now allows the platform to use uploaded images to train its AI systems, have further heightened privacy concerns. Additionally, experts have warned about the use of personal images by other Big Tech firms to enhance facial recognition algorithms and advertising models. To protect their children, parents are advised to implement a range of safety measures. These include using secure and private cloud storage, adjusting privacy settings on social platforms, avoiding public Wi-Fi when sharing or uploading data, and staying vigilant against phishing scams. 

Furthermore, experts recommend setting boundaries with children regarding online activity, using parental controls, antivirus tools, and search filters, and modeling responsible digital behavior. The growing accessibility of AI-based image manipulation tools underscores the urgent need for greater awareness and proactive digital hygiene. What may seem like harmless sharing today could expose children to significant risks in the future.