
Amid Federal Crackdown, Microsoft Warns Against Rising North Korean Jobs Scams

North Korean hackers are infiltrating high-profile US-based tech firms through employment scams, and according to experts, their tactics are becoming more advanced. Following a recent investigation, Microsoft has urged other companies to enforce stronger pre-employment verification measures and to adopt policies that block unauthorized IT management tools.

Further investigation by the US government revealed that these actors were working to steal money for the North Korean government, which uses the funds to run state operations and its weapons program.

US imposes sanctions against North Korea

The US has imposed strict sanctions on North Korea that bar US companies from hiring North Korean nationals. In response, threat actors create fake identities and use tricks such as VPNs to obscure their real identities and locations, helping them avoid detection and get hired.

Recently, the threat actors have added spoofing tactics such as voice-changing tools and AI-generated documents to appear credible. In one incident, the scammers worked with an individual residing in New Jersey who set up shell companies to fool victims into believing they were paying a legitimate local business. The same individual also helped overseas accomplices get recruited.

DoJ arrests accused

The scheme has now come to an end: the US Department of Justice (DoJ) arrested and charged a US national, Zhenxing “Danny” Wang, with operating a years-long scam that earned over $5 million. The agency also charged eight more people - six Chinese and two Taiwanese nationals. The accused face charges of money laundering, identity theft, hacking, sanctions violations, and conspiring to commit wire fraud.

In addition to the salaries from these jobs, which Microsoft describes as hefty, the workers also gain access to private organizational data. They exploit this access by stealing sensitive information and blackmailing the companies.

Lazarus group behind such scams

The North Korean state-sponsored group Lazarus is one of the largest and most infamous hacking gangs in the world. According to experts, the gang has stolen billions of dollars for the North Korean government through similar scams and related operations. The broader campaign is widely known as “Operation DreamJob”.

"To disrupt this activity and protect our customers, we’ve suspended 3,000 known Microsoft consumer accounts (Outlook/Hotmail) created by North Korean IT workers," said Microsoft.

Deepfakes Explained: How They Operate and How to Safeguard Yourself

 

In May of this year, an anonymous person called and texted elected lawmakers and business executives pretending to be a senior White House official. U.S. senators were among the recipients who believed they were speaking with White House chief of staff Susie Wiles. In reality, though, it was a phoney. 

The scammer used AI-powered deepfake software to replicate Wiles' voice. Such software is easily accessible and low-cost, and it needs only a public speech clip to produce a convincing clone that can deceive a target.

Why are deepfakes so convincing? 

Deepfakes are alarming because of how authentic they appear. AI models can analyse public photographs or recordings of a person (for example, from social media or YouTube) and then create a fake that mimics their face or tone very accurately. As a result, many people overestimate their ability to detect fakes. In an iProov poll, 43% of respondents stated they couldn't tell the difference between a real video and a deepfake, and nearly one-third had no idea what a deepfake was, highlighting a vast pool of potential victims.

Deepfakes rely on trust: the victim recognises a familiar face or voice, so no alarms go off. These scams also rely on haste and secrecy (for example, 'I need this wire transfer now, and do not tell anyone'). When emotional manipulation is combined with visual and auditory realism, it is no surprise that even professionals have been duped. The employee in the widely reported $25 million Hong Kong case noticed something odd, as the call ended abruptly and he never communicated directly with colleagues, but he only realised it was a scam after the money was gone.

Stay vigilant 

Given the difficulty in visually recognising a sophisticated deepfake, the focus switches to verification. If you receive an unexpected request by video call, phone, or voicemail, especially if it involves money, personal data, or anything high-stakes, take a step back. Verify the individual's identity using a separate channel.

For example, if you receive a call that appears to be from a family member in distress, hang up and call them back at their known number. If your supervisor asks you to buy gift cards or transfer payments, confirm the request in person or through an official company channel. This is neither impolite nor paranoid; it is an essential precaution today.

Create secret safewords or verification questions with loved ones for emergencies (something a deepfake impostor would not know). Be wary of what you post publicly, and if possible, limit the amount of high-quality video and voice recordings of yourself you share, as these are the raw material for deepfakes.

Denmark Empowers Public Against Deepfake Threats


 

A groundbreaking bill has been proposed by the Danish government to curb the growing threat of artificial intelligence-generated deepfakes, a threat expected to keep rising. Under the proposed framework, individuals would hold legal ownership rights over their own likeness and voice, allowing them to demand the removal of manipulated digital content that misappropriates their identity.

According to Danish Culture Minister Jakob Engel-Schmidt, the initiative is a direct response to the rapid advancement of generative artificial intelligence, which has made it alarmingly easy to produce convincing audio and video for malicious or deceptive purposes. The minister added that current laws have failed to keep pace with the technology, leaving artists, public figures, and ordinary citizens increasingly vulnerable to digital impersonation and exploitation.

By establishing a clear property right over personal attributes, Denmark seeks to safeguard its population from identity misuse, a growing phenomenon in the digital age, and to set a precedent for responsible artificial intelligence governance. As reported by Azernews, the Ministry of Culture has formally presented a draft law that would incorporate citizens' images and voices into national copyright legislation to protect these personal attributes.

The proposal is an important step towards curbing the spread and misuse of deepfake technologies, which are increasingly being used to deceive audiences and damage reputations. The act clearly prohibits reproducing or distributing an individual's likeness or voice without their explicit consent, and it gives affected parties the legal right to seek financial compensation if their likeness or voice is abused.

Exceptions will be made for satire and parody, but the law firmly prohibits the unauthorized use of deepfakes of artistic performances. To comply with the proposed measures, online platforms hosting such material would be legally obligated to remove it upon request or face substantial fines.

While the law would apply only within Denmark's jurisdiction, it is expected to pass Parliament by an overwhelming margin, with estimates suggesting that up to 90% of lawmakers support it. Several high-profile controversies have emerged in recent weeks, including doctored videos targeting the Danish Prime Minister and escalating legal battles against creators of explicit deepfake content, underscoring the need for comprehensive safeguards in the digital age.

The European Union, in its recently passed AI Act, has established a comprehensive regulatory framework for artificial intelligence across the continent, categorizing systems according to four risk levels: minimal, limited, high, and unacceptable.

Deepfakes fall under the "limited risk" category: they are not outright prohibited, but they must meet specific transparency obligations. Under these provisions, companies that create or distribute generative AI tools must ensure that AI-generated content, such as manipulated videos, carries a clear disclosure of its synthetic nature.

Watermarks or similar labels are typically applied to indicate that the material is synthetic. Developers are also required to publicly disclose the datasets used to train their AI models, allowing greater accountability and scrutiny. Non-compliance carries significant financial consequences: organisations that fail to meet the transparency requirements face penalties of up to 15 million euros or 3 per cent of worldwide revenue, whichever is greater.
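The Act's prescribed labelling mechanism is not detailed in this article, so purely as a loose sketch of the disclosure idea, the following Python snippet embeds a synthetic-media label in a PNG's metadata using Pillow; the field names ("ai_generated", "generator") are illustrative placeholders, not an official scheme.

# Minimal sketch: attach and read back a synthetic-media disclosure label.
# The metadata keys are hypothetical, not part of any standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_synthetic(src_path, dst_path, generator_name):
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")        # disclosure flag
    metadata.add_text("generator", generator_name)   # tool that produced the media
    image.save(dst_path, pnginfo=metadata)           # dst_path should be a .png file

def read_disclosure(path):
    # Returns any disclosure fields found in the PNG's text chunks.
    image = Image.open(path)
    text_chunks = getattr(image, "text", {})
    return {k: v for k, v in text_chunks.items() if k in ("ai_generated", "generator")}

Plain metadata like this is trivially stripped, which is why more robust approaches, such as cryptographically signed provenance data, are usually discussed alongside simple labels.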

Practices explicitly prohibited by the Act, such as certain deceptive or harmful uses of artificial intelligence, carry a maximum fine of €35 million or 7 per cent of global turnover. The EU has thus sought to balance innovation with safeguards that protect its citizens from the rising threat posed by advanced generative technologies.
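A minimal sketch of the fine arithmetic cited above, assuming, as the transparency-tier wording suggests, that the applicable ceiling is whichever of the fixed amount or the percentage of worldwide turnover is greater:

# Illustrative only: maximum fines for the two tiers mentioned in this article.
def max_fine(turnover_eur, fixed_cap_eur, pct_of_turnover):
    return max(fixed_cap_eur, pct_of_turnover * turnover_eur)

annual_turnover = 2_000_000_000  # hypothetical company with EUR 2 billion turnover

transparency_cap = max_fine(annual_turnover, 15_000_000, 0.03)  # EUR 60 million
prohibited_cap = max_fine(annual_turnover, 35_000_000, 0.07)    # EUR 140 million

print(f"Transparency violations: up to EUR {transparency_cap:,.0f}")
print(f"Prohibited practices:    up to EUR {prohibited_cap:,.0f}")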

Athena Karatzogianni, an expert on technology and society at the University of Leicester in England, said that Denmark's proposed legislation reflects a broader effort by governments and institutions worldwide to combat the dangers posed by generative artificial intelligence. She pointed out that it is just one of hundreds of policies emerging around the world to deal with the ramifications of advanced synthetic media.

According to Karatzogianni, deepfakes pose a distinctive problem because they cause both personal and societal harm. At an individual level, they can violate privacy, damage reputations, and infringe on fundamental rights. More broadly, she warned, the widespread use of such manipulated content threatens public trust and undermines democratic principles such as fairness, transparency, and informed debate.

As deepfake technology becomes more accessible and sophisticated, robust legal frameworks are needed to prevent misuse while maintaining the integrity of democratic institutions. Denmark's draft law can therefore serve as a useful model for balancing technological innovation with safeguards that protect both citizens and the fabric of society.

Looking ahead, Denmark's legislative initiative signals a broader recognition that regulatory frameworks must evolve alongside technological developments to prevent abuse before it becomes ingrained in digital culture. Ambitious as the proposed measures are, they also demonstrate the delicate balance policymakers need to strike between protecting individual rights and preserving legitimate expression and creativity.

As generative artificial intelligence tools continue to develop, governments, technology companies, and civil society will need to work together closely on compliance mechanisms, public education campaigns, and cross-border agreements to prevent their misuse.

As other nations and regulatory bodies observe the Danish approach, they have an opportunity to evaluate both its successes and its challenges. For emerging technologies to contribute to the public good rather than undermine trust in institutions and information, proactive governance, transparent standards, and sustained public involvement will be essential.

Ultimately, Denmark's efforts could serve as a catalyst for more resilient and accountable digital landscapes across Europe and beyond, but only if stakeholders act decisively to uphold ethical standards while embracing innovation responsibly.

Eight Arrested Over Financial Scam Using Deepfakes

 

Hong Kong police have detained eight people accused of running a scam ring that overcame bank verification checks to open accounts by replacing images on lost identification cards with deepfakes that included scammers' facial features. 

Senior Superintendent Philip Lui Che-ho of the force's financial intelligence and investigation division stated on Saturday that the arrests were part of a citywide operation against scams, cybercrime, and money laundering that took place between April 7 and 17. Officers arrested 503 people aged 18 to 80, and losses in the cases surpassed HK$1.5 billion (US$193.2 million).

Officers arrested the eight suspects on Thursday for allegedly using at least 21 Hong Kong identification cards that were reported lost to make 44 applications to create local bank accounts, according to Chief Inspector Sun Yi-ki of the force's cybersecurity and technology crime branch. 

“The syndicate first tried to use deepfake technology to merge the scammer’s facial features with the cardholder’s appearance, followed by uploading the scammer’s selfie to impersonate the cardholder and bypass the online verification process,” Sun said. 

Thirty of the 44 applications passed the banks' online identification checks. In half of the successful attempts, artificial intelligence was used to construct images that combined the identity card's face with the scammer's; the others simply substituted the scammer's photo for the one on the ID.

Police said the bank accounts were used to apply for loans and make credit card transactions worth HK$860,000, as well as to launder more than HK$1.2 million in suspected illegal proceeds. Sun said the force was still investigating how the syndicate obtained the ID cards, which were reported lost between 2023 and 2024. The six men and two women were detained on suspicion of conspiracy to defraud and money laundering, and police seized numerous laptops, phones, and external storage devices.

The accused range in age from 24 to 41, and the mastermind and main members of the ring allegedly belong to local triad gangs. Lui urged the public not to rent, lend, or sell access to their bank accounts to anyone.

The 333 men and 170 women arrested during the citywide operation were found to be involved in 404 crimes, most of which were employment frauds, financial swindles, and online shopping scams. They were held on suspicion of conspiracy to defraud, obtaining property by deception, and money laundering. Two cross-border money-laundering operations were also busted in coordination with mainland Chinese authorities over the past two weeks.

Lui said one of the syndicates laundered suspected illicit earnings from fraud operations by hiring visitors from the mainland to purchase gold jewellery in Hong Kong. Between last December and March of this year, the syndicate was found to have been involved in 240 mainland scam cases, resulting in losses of 18.5 million yuan (US$2.5 million).

“Syndicate masterminds would recruit stooges from various provinces on the mainland, bringing them to Hong Kong via land borders and provide hostel accommodation,” the senior superintendent stated.

Syndicate members would then arrange for the recruits to purchase gold jewellery in the city using digital payment methods, with each transaction costing tens to hundreds of thousands of Hong Kong dollars. On Tuesday last week, Hong Kong police apprehended three individuals who had just purchased 34 pieces of gold jewellery for HK$836,000 per the syndicate's orders. Two of them had two-way passes, which are travel documents that allow mainlanders to access the city. The third suspect was a Hong Konger.

On the same day, mainland police arrested 17 people. The second cross-border syndicate arranged for mainlanders to open accounts in Hong Kong using fraudulent bank, employment, and utility bill documents. Police in Hong Kong and on the mainland arrested a total of 16 people in connection with the investigation. From December 2023 to April, the syndicate was involved in 61 scam cases in the city, resulting in losses of HK$26.7 million; the accounts were opened to receive the scam proceeds.

AI Deepfakes Pose New Threats to Cryptocurrency KYC Compliance

 


ProKYC is a recently revealed artificial intelligence (AI)-powered deepfake tool that nefarious actors can use to circumvent high-level Know Your Customer (KYC) protocols on cryptocurrency exchanges. A recent report from cybersecurity firm Cato Networks describes the development as a sign that cybercriminals are stepping up their tactics to stay ahead of law enforcement.

Identity fraud has traditionally involved criminals buying forged documents on the dark web. ProKYC takes a different approach: fraudsters can use the tool to create entirely new identities for fraudulent purposes. According to Cato Networks, the tool is aimed specifically at crypto exchanges and financial institutions.

When a new user registers with one of these organizations, the organization uses identity-verification technology to confirm the person is who they claim to be. During this process, a government-issued identification document, such as a passport or driver's license, must be uploaded and matched against a live webcam image. ProKYC is designed to bypass these checks by generating a fake identity along with a deepfake video, allowing criminals to defeat the facial recognition step and commit fraud.
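As a rough illustration of the face-match step being attacked here (not the internals of any particular exchange's system), the following sketch compares an embedding of the ID document's photo with an embedding of the webcam selfie; the extract_face_embedding helper is hypothetical and would be backed by a real face-recognition model in practice.

import numpy as np

def extract_face_embedding(image_path):
    # Hypothetical helper: a real system would run face detection plus a
    # face-embedding model over the image and return a feature vector.
    raise NotImplementedError

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def kyc_face_match(id_document_photo, webcam_selfie, threshold=0.8):
    # Core of the check ProKYC tries to defeat: does the selfie match the ID photo?
    doc_vec = extract_face_embedding(id_document_photo)
    selfie_vec = extract_face_embedding(webcam_selfie)
    return cosine_similarity(doc_vec, selfie_vec) >= threshold

Because both inputs can be synthetic, a match like this proves very little on its own, which is why the detection and liveness measures discussed later matter.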

As Cato Networks notes, this method introduces a new level of sophistication to crypto fraud. In the report, published on Oct. 9, the company's chief security strategist Etay Maor describes the tool as a significant step forward in cybercriminals' efforts to get around two-factor authentication and KYC mechanisms.

In the past, fraudsters had to buy counterfeit identification documents on the dark web; with AI-based tools, they can create brand-new ID documents from scratch. ProKYC is tailored specifically to crypto exchanges and financial firms whose KYC protocols involve matching a webcam-captured photo of a new user's face to their government-issued identification document, such as a passport or driver's license.

In Cato's demonstration, ProKYC was used to generate fake ID documents and accompanying deepfake videos capable of passing the facial recognition challenges used by some of the largest crypto exchanges in the world. The user first creates an AI-generated face, then places that profile picture onto a passport template modelled on an Australian passport.

The ProKYC tool then uses AI to create a fake video and matching images of the generated persona, which in the demonstration were used to bypass the KYC protocols of the Dubai-based crypto exchange Bybit. According to Cato Networks, the deepfake tool is being used to create fraudulent accounts by evading the KYC checks that exchanges conduct.

ProKYC can be bought for $629 a year and used by fraudsters to create fake identification documents and generate videos that look almost real. The package includes a camera, a virtual emulator, facial animations, fingerprints, and an image generator that produces the documents to be verified. The report highlights the emergence of an advanced AI deepfake tool, custom-built to exploit financial companies' KYC protocols.

This tool, designed to circumvent biometric face checks and document cross-verification, has raised concerns by defeating security measures that were previously considered extremely difficult to bypass. The deepfake, created with ProKYC, was showcased in a blog post by Cato Networks that demonstrates how AI can generate counterfeit ID documents capable of bypassing KYC verification at exchanges like Bybit.

In one instance, the system accepted a fictitious name, a fraudulent document, and an artificially generated video, allowing the user to complete the platform’s verification process seamlessly. Despite the severity of this challenge, Cato Networks notes that certain methods can still detect these AI-generated identities. 
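Cato's suggested safeguards (described next) are largely manual. Purely as an illustration of how such review queues might be narrowed, and not as a method from the report, the sketch below uses Laplacian variance, a common sharpness measure, as a crude stand-in for the "unusually high quality" signal analysts look for:

# Crude pre-screen: flag implausibly crisp webcam selfies for human review.
# The threshold is illustrative and would need tuning on real submissions.
import cv2

def sharpness_score(image_path):
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def flag_for_review(selfie_path, upper_threshold=1500.0):
    # Genuine webcam captures tend to be soft; unusually sharp, noise-free
    # frames are one quality oddity worth a closer human look.
    return sharpness_score(selfie_path) > upper_threshold

A heuristic like this can only prioritise cases for analysts; on its own it proves nothing about whether an image is synthetic.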

Techniques such as having human analysts scrutinize unusually high-quality images and videos, or identifying inconsistencies in facial movements and image quality, are potential safeguards.

Legal Ramifications of Identity Fraud

The legal consequences of identity fraud, particularly in the United States, are stringent. Penalties can reach up to 15 years in prison, along with substantial fines, depending on the crime's scope and gravity. With the rise of AI tools like ProKYC, combating identity fraud is becoming more difficult for law enforcement, raising the stakes for financial institutions.

Rising Activity Among Scammers

In addition to these developments, September saw a marked increase in deepfake AI activity among crypto scammers. Gen Digital, the parent company of Norton, Avast, and Avira, reported a spike in the use of deepfake videos to deceive investors into fraudulent cryptocurrency schemes. This uptick underscores the need for stronger security measures and regulatory oversight to protect the growing number of investors in the crypto sector. 

The advent of AI-powered tools such as ProKYC marks a new era in cyber fraud, particularly within the cryptocurrency industry. As cybercriminals increasingly leverage advanced technology to evade KYC protocols, financial institutions and exchanges must remain vigilant and proactive. Collaboration among cybersecurity firms, regulatory agencies, and technology developers will be critical to staying ahead of this evolving threat and ensuring robust defenses against identity fraud.

Voice Cloning and Deepfake Threats Escalate AI Scams Across India

 


The rapid advancement of AI technology in recent years has brought many benefits to society, but it has also enabled sophisticated cyber threats. India's explosive growth in digital adoption has made it one of the most attractive targets for a surge in artificial intelligence-based scams. Today's cybercriminals are exploiting these emerging technologies to abuse the trust of unsuspecting individuals through voice cloning schemes, deepfakes, and the manipulation of public figures' identities.

As AI capabilities become more refined, scammers keep finding new ways to deceive the public, and it is increasingly difficult to tell genuine content from manipulated content. For cybersecurity professionals and everyday users alike, the line between reality and digital fabrication is blurring, presenting a serious challenge.

A string of high-profile cases involving voice cloning and deepfake technology in the country illustrates both the severity of these threats and how many people have fallen victim to sophisticated deception. The recent rise in AI-driven fraud shows that stronger security measures and greater public awareness are urgently needed to stop it from spreading.

In one case last year, a scammer swindled a 73-year-old retired government employee in Kozhikode, Kerala, out of 40,000 rupees using an AI-generated deepfake video. By blending voice and video manipulation, the scammer created the illusion of an emergency that led to the victim's loss. The problem, however, runs much deeper.

In Delhi, a cybercrime group used voice cloning to swindle 50,000 rupees from Lakshmi Chand Chawla, an elderly resident of Yamuna Vihar. On October 24, Chawla received a WhatsApp message claiming that his cousin's son had been kidnapped. The claim was made believable by an AI-cloned voice recording of the child crying for help.

Panicked, Chawla transferred 20,000 rupees through Paytm. Only when he contacted his cousin did he realise that the child had never been in danger. These cases make clear that scammers are exploiting AI to trade on people's trust: they are no longer anonymous voices, they sound like friends or family members in crisis.

McAfee has released its 'Celebrity Hacker Hot List 2024', which ranks the Indian celebrities whose names generate the most "risky" search results online. This year's results make clear that the more viral an individual is, the more appealing their name becomes to cybercriminals, who exploit that fame by building malicious sites and scams around it. These scams have affected many people, leading to significant data breaches, financial losses, and the theft of sensitive personal information.

Orhan Awatramani, also known as Orry, tops the list for India. He has gained popularity quickly, and his association with other high-profile celebrities and heavy media attention make him an attractive target for cybercriminals. His case illustrates how criminals exploit the flood of unverified information about new and upcoming public figures to lure consumers searching for the latest news.

Actor and singer Diljit Dosanjh is reportedly being targeted by fraudsters in connection with his upcoming 'Dil-Luminati' concert tour, which begins next month. This is an all-too-common pattern: overwhelming fan interest and a surge in search volume around large-scale events often spawn fraudulent ticketing websites, discount and resale schemes, and phishing scams.

The rise of generative AI and deepfakes has made the cybersecurity landscape even more complex, and several celebrities have been misrepresented in ways that affect their careers. Throughout the year, Alia Bhatt has been the subject of multiple deepfake incidents, while actors Ranveer Singh and Aamir Khan were falsely shown endorsing political parties in election-related deepfakes. Prominent figures such as Virat Kohli and Shahrukh Khan have appeared in deepfake content designed to promote betting apps.

Scammers are using tactics such as malicious URLs, deceptive messages, and AI-generated image, audio, and video scams to take advantage of fans' curiosity. This leads to financial losses, damages the reputations of the affected celebrities, and erodes consumer confidence. And as alarming as voice cloning scams may seem, the danger does not end there.

Deepfake technology keeps pushing the boundaries, blending reality with digital manipulation at an ever-increasing pace and making detection harder. What began with voice cloning has advanced into real-time video deception. Facecam.ai was one of the most striking examples: it let users create live-streamed deepfake videos from just a single image. The tool caused a lot of buzz by showing how easily and convincingly a person's face can be mimicked in real time.

Uploading a photo allowed users to seamlessly swap faces in a live video stream without downloading anything. Despite its popularity, the tool was shut down after a backlash over its potential for misuse. That does not mean the problem is solved: the rise of artificial intelligence has produced numerous platforms offering sophisticated capabilities for creating deepfake videos and manipulating identities, posing serious risks to digital security.

Although some platforms, like Facecam.ai (which gained popularity by letting users create live-streamed deepfake videos from a single image), have been taken down over misuse concerns, other tools continue to operate with dangerous potential. Notably, platforms such as Deep-Live-Cam are still thriving, enabling individuals to swap faces during live video calls. This technology allows users to impersonate anyone, whether a celebrity, a politician, or a friend or family member. What is particularly alarming is the growing accessibility of these tools: as deepfake technology becomes more user-friendly, even people with minimal technical skill can produce convincing digital forgeries.

The ease with which such content can be created heightens the potential for abuse, turning what might seem like harmless fun into tools for fraud, manipulation, and reputational harm. The dangers posed by these tools extend far beyond simple pranks. As the availability of deepfake technology spreads, the opportunities for its misuse expand exponentially. Fraudulent activities, including impersonation in financial transactions and identity theft, are just a few examples of the potential harm. Public opinion, personal relationships, and professional reputations are also at risk of manipulation, especially as these tools become more widespread and increasingly difficult to regulate.

The global implications of these scams are already being felt. In one high-profile case, scammers in Hong Kong used a deepfake video to impersonate the Chief Financial Officer of a company, leading to a financial loss of more than $25 million. This case underscores the magnitude of the problem: with the rise of such advanced technology, virtually anyone—not just high-profile individuals—can become a victim of deepfake-related fraud. As artificial intelligence continues to blur the lines between real and fake, society is entering a new era where deception is not only easier to execute but also harder to detect. 

The consequences of this shift are profound, as it fundamentally challenges trust in digital interactions and the authenticity of online communications. To address this growing threat, experts are discussing potential solutions such as Personhood Credentials—a system designed to verify and authenticate that the individual behind a digital interaction is, indeed, a real person. One of the most vocal proponents of this idea is Srikanth Nadhamuni, the Chief Technology Officer of Aadhaar, India's biometric-based identity system.

Nadhamuni co-authored a paper in August 2024 titled "Personhood Credentials: Artificial Intelligence and the Value of Privacy-Preserving Tools to Distinguish Who is Real Online." In it, he argues that as deepfakes and voice cloning become increasingly prevalent, tools like Aadhaar, which relies on biometric verification, could play a critical role in ensuring the authenticity of digital interactions. Nadhamuni believes that implementing personhood credentials can help safeguard online privacy and prevent AI-generated scams from deceiving people. In a world where artificial intelligence is being weaponized for fraud, systems rooted in biometric verification offer a promising approach to distinguishing real individuals from digital impersonators.
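The paper's scheme is not detailed here, so the following is only a rough sketch of the general idea of a personhood credential: an issuer that has verified a real human attests to that fact, and other services check the attestation instead of re-verifying identity themselves. An HMAC stands in for what a real deployment would do with proper digital signatures and privacy-preserving protocols; every name below is hypothetical.

import hmac, hashlib, json, time

ISSUER_KEY = b"issuer-secret-key"  # placeholder; a real issuer would use an asymmetric signing key

def issue_personhood_credential(account_id):
    # Issued only after the issuer has verified a real human (e.g., via biometric checks).
    payload = {"account_id": account_id, "verified_human": True, "issued_at": int(time.time())}
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_personhood_credential(credential):
    # A relying service checks the attestation without redoing identity verification itself.
    body = json.dumps(credential["payload"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["tag"]) and credential["payload"]["verified_human"]

A real scheme would also need expiry, revocation, and unlinkability so the credential cannot be used to track people across services, which is what the privacy-preserving emphasis in the paper's title points to.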

Engineering Giant Arup Falls Victim to £20m Deepfake Video Scam

 

The 78-year-old London-based architecture and design company Arup has no shortage of accolades. With more than 18,000 employees across 34 offices worldwide, its accomplishments include designing the renowned Sydney Opera House and Manchester's Etihad Stadium, and it is currently working on the construction of La Sagrada Familia in Spain. It is now the latest victim of a deepfake scam that has cost it millions of dollars.

Earlier this year, CNN Business reported that an employee at Arup's Hong Kong office was duped into a video chat with deepfakes of the company's CFO and other employees. After dismissing his initial reservations, the employee eventually sent $25.6 million (200 million Hong Kong dollars) to the scammers over 15 transactions.

He realised he had been duped only after checking with the design company's U.K. headquarters. The ordeal lasted about a week, from when the employee was first contacted to when the company began investigating.

“We can confirm that fake voices and images were used,” a spokesperson at Arup told a local media outlet. “Our financial stability and business operations were not affected and none of our internal systems were compromised.” 

Seeing is no longer the same as believing 

Arup's deepfake encounter adds to a growing list of recent high-profile incidents involving fake images, videos, or audio recordings intended to deceive or defame. Fraudsters are targeting everyone in their path, whether it's well-known people like Drake and Taylor Swift, companies like the advertising agency WPP, or a regular school principal. An official at the cryptocurrency exchange Binance disclosed two years ago that fraudsters had created a "hologram" of him in order to get access to project teams.

Because of how realistic deepfakes appear, they have been successful in defrauding innocent victims. Deepfakes, such as the well-known one mimicking Pope Francis, can also go viral online and become disinformation that is difficult to contain. The latter is particularly troubling because it has the potential to sway voters during a period when several countries are holding elections.

Attempts to defraud businesses have increased dramatically, with everything from phishing schemes to WhatsApp voice cloning, Arup's chief information officer Rob Greig told Fortune. “This is an industry, business and social issue, and I hope our experience can help raise awareness of the increasing sophistication and evolving techniques of bad actors,” he stated. 

Deepfakes are getting more sophisticated, just like other tech tools. That means firms must stay up to date on the latest threats and novel ways to deal with them. Although deepfakes might appear incredibly realistic, there are ways to detect them.

The most effective approach is simply to ask a person on a video conference to turn their head: if the camera struggles to capture their full profile or the face becomes deformed, it is probably worth investigating. Asking someone to use a different light source or pick up a pencil can also help expose deepfakes.

Can Face Biometrics Prevent AI-Generated Deepfakes?


AI-generated deepfakes on the rise

The emergence of AI-generated deepfakes that attack face biometric systems is a serious threat to the reliability of identity verification and authentication. Gartner, Inc. predicts that by 2026, 30% of businesses will doubt the dependability of these technologies, underscoring how urgently this new threat needs to be addressed.

Deepfakes, or synthetic images that accurately imitate genuine human faces, are becoming increasingly powerful tools in the cybercriminal's toolbox as artificial intelligence develops. These fakes circumvent security mechanisms by exploiting the static nature of the physical attributes used for authentication, such as fingerprints, facial shape, and eye size.

Moreover, the capacity of deepfakes to accurately mimic human speech adds another layer of complexity to the security problem, potentially evading voice recognition software. This changing environment exposes a serious flaw in biometric security technology and emphasizes the need for enterprises to reassess the effectiveness of their current security measures.

According to Gartner researcher Akif Khan, significant progress in AI technology over the past ten years has made it possible to create artificial faces that closely mimic genuine ones. Because these deepfakes reproduce the facial features of real individuals, they open up new possibilities for cyberattacks and can get past biometric verification systems.

As Khan explains, these developments have significant ramifications. When organizations cannot determine whether the person attempting access is authentic or a highly convincing deepfake, they may quickly begin to doubt the integrity of their identity verification procedures. This ambiguity puts the security protocols that many rely on at serious risk.

Deepfakes introduce complex challenges to biometric security measures by exploiting static data—unchanging physical characteristics such as eye size, face shape, or fingerprints—that authentication devices use to recognize individuals. The static nature of these attributes makes them vulnerable to replication by deepfakes, allowing unauthorized access to sensitive systems and data.

Deepfakes and challenges

Additionally, the technology underpinning deepfakes has evolved to replicate human voices with remarkable accuracy. By dissecting audio recordings of speech into smaller fragments, AI systems can recreate a person’s vocal characteristics, enabling deepfakes to convincingly mimic someone’s voice for use in scripted or impromptu dialogue.


MFA and PAD

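The article names multi-factor authentication (MFA) and presentation attack detection (PAD) in the heading above without elaborating on them. As a loose illustration of the general idea, layering a randomized liveness challenge and a non-biometric second factor on top of the static face match, here is a minimal sketch; every helper function is hypothetical and would be backed by real detection models and an OTP service in practice:

import secrets

LIVENESS_CHALLENGES = ["turn_head_left", "turn_head_right", "blink_twice", "read_digits_aloud"]

def run_face_match(id_photo, selfie_frame):
    # Hypothetical: static biometric comparison (the step deepfakes target).
    raise NotImplementedError

def run_liveness_challenge(video_stream, challenge):
    # Hypothetical PAD step: verify the user performed a randomly chosen action live.
    raise NotImplementedError

def verify_one_time_code(user_id, submitted_code):
    # Hypothetical second factor delivered out of band (e.g., authenticator app).
    raise NotImplementedError

def authenticate(user_id, id_photo, selfie_frame, video_stream, submitted_code):
    challenge = secrets.choice(LIVENESS_CHALLENGES)  # chosen at verification time
    return (
        run_face_match(id_photo, selfie_frame)
        and run_liveness_challenge(video_stream, challenge)
        and verify_one_time_code(user_id, submitted_code)
    )

Because the challenge is picked only at verification time, a pre-rendered deepfake cannot anticipate it, and the one-time code does not depend on biometric data at all.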