
Deepfakes: A Rising Threat to Cybersecurity and Society

 

A likeness of the late NBA player Kobe Bryant appeared in the music video for Kendrick Lamar's song "The Heart Part 5", stunning audiences. Deepfake technology was employed in the video to pay tribute to the late legend.

Deepfakes are images and videos that have been altered with advanced deep learning technologies such as autoencoders or generative adversarial networks.
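The core idea behind these models can be illustrated with a toy autoencoder. The sketch below is purely illustrative and uses random stand-in data; real deepfake pipelines train deep convolutional encoders and decoders on large sets of face crops, but the principle is the same: compress an input to a small latent code, then reconstruct it.

```python
import numpy as np

# Toy autoencoder: compress 16-dim "images" to a 4-dim latent code and
# reconstruct them. Deepfake pipelines use this principle at scale: a
# shared encoder learns facial structure, and per-identity decoders
# reconstruct faces, so one person's face can be "decoded" as another's.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                # stand-in for flattened face crops

W_enc = rng.normal(scale=0.1, size=(16, 4))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(4, 16))   # decoder weights
lr = 0.01

def forward(X):
    Z = np.tanh(X @ W_enc)                    # latent code (compressed face)
    X_hat = Z @ W_dec                         # reconstruction
    return Z, X_hat

_, X_hat = forward(X)
loss_before = np.mean((X - X_hat) ** 2)

for _ in range(500):
    Z, X_hat = forward(X)
    err = X_hat - X                           # reconstruction error
    grad_dec = Z.T @ err / len(X)
    grad_z = err @ W_dec.T * (1 - Z ** 2)     # backprop through tanh
    grad_enc = X.T @ grad_z / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

_, X_hat = forward(X)
loss_after = np.mean((X - X_hat) ** 2)
print(f"reconstruction MSE: {loss_before:.3f} -> {loss_after:.3f}")
```

Gradient descent steadily lowers the reconstruction error, which is all an autoencoder optimises; the manipulation arises from which decoder is used to reconstruct the latent code.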

Deepfake technology makes it easy to generate realistic yet manipulated media assets. That same capability, however, is deeply deceptive. The technology is put to legitimate use in virtual reality, video games, and filmmaking, but it can also be weaponised in cyberwarfare, the fifth dimension of warfare, and used to spread false information that sways public opinion and serves political agendas.

Cybercrime is rising as internet penetration grows worldwide. According to the National Crime Records Bureau (NCRB), India recorded around 50,000 incidents of cybercrime in 2020, and the national capital saw a 111% increase in cybercrime in 2021 compared with 2020.

The majority of these incidents involved online fraud, online sexual harassment, and the release of private content, among other things. Deepfake technology may drive such incidents higher as it is weaponised for financial gain.

Notably, the technology not only threatens the right to privacy protected by Article 21 of the Constitution; it also plays a key role in cases of humiliation, misinformation, and defamation. Whaling attacks, deepfake voice phishing, and other frauds targeting individuals and companies are therefore likely to rise.

Mitigation Tips

The difficulties caused by deepfakes could be partly addressed with ChatGPT, the generative AI that has recently gained attention. Integrated into search engines, it could offer viable countermeasures: built on Natural Language Processing, ChatGPT is trained to reject inappropriate requests, which helps curb the dissemination of misinformation, and it can carry out complex reasoning over the content it is asked to evaluate.

To purge such content from the internet swiftly after deployment, the model would need to be fine-tuned on a labelled dataset using supervised learning. Because the approach is accessible and affordable, it can be further tweaked to deliver a faster, more practical solution. However, the training data must be monitored continuously so that newly emerging deepfakes do not slip past a model tuned only on older examples.
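The supervised-learning idea above can be sketched with a far simpler model. The example below trains a tiny Naive Bayes classifier on a handful of made-up labelled messages (the labels, wording, and "scam"/"ham" categories are illustrative assumptions, not any real moderation dataset); production systems fine-tune large language models the same way, just at vastly greater scale.

```python
import math
from collections import Counter

# Toy supervised classifier for scam-like text. The labelled examples play
# the role of the fine-tuning dataset described above.
train = [
    ("urgent transfer money hospital emergency", "scam"),
    ("verify account click link prize winner", "scam"),
    ("send payment now relative in trouble", "scam"),
    ("meeting rescheduled to friday afternoon", "ham"),
    ("lunch tomorrow at the usual place", "ham"),
    ("project report attached for review", "ham"),
]

word_counts = {"scam": Counter(), "ham": Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    """Naive Bayes with add-one (Laplace) smoothing over word counts."""
    scores = {}
    for label, counts in word_counts.items():
        # log prior + sum of per-word log likelihoods
        score = math.log(class_counts[label] / len(train))
        total = sum(counts.values())
        for w in text.split():
            score += math.log((counts[w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("urgent money transfer needed"))   # scam-like wording
print(classify("see you at the meeting friday"))  # ordinary message
```

Even this bag-of-words sketch shows why continuous monitoring matters: the classifier can only flag patterns present in its training data, so new scam phrasings must keep flowing into the labelled set.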

Achieving this also requires a greater influx of cybersecurity specialists. India currently spends only 0.7% of its GDP on research and development, compared with about 3.3% in affluent nations like the United States of America. The National Cyber Security Policy of 2013 must be updated to keep pace with new technologies and stop the spread of cybercrime as these manipulations grow more sophisticated over time.

AI-Based Deepfake Fraud: Police Recover ₹40,000 Defrauded From Kozhikode Victim


Kozhikode, India: In a ‘deepfake’ incident, a man from Kozhikode, Kerala lost ₹40,000 after he fell prey to an AI-based scam.

According to police officials, the victim, identified as Radhakrishnan, received a WhatsApp video call from an unknown number. The swindlers had apparently used artificial intelligence tools to generate a deepfake video of an old colleague the victim knew. To build trust further, the caller cunningly mentioned names of the victim's former acquaintances.

During their conversation, the scammer made a desperate request for ₹40,000, claiming a relative was in hospital with a medical emergency. Trusting the caller, Radhakrishnan sent the money via Google Pay.

Later, the caller requested another ₹40,000 from Radhakrishnan, which raised his suspicions. He reached out to his colleague directly and, to his disbelief, discovered the entire incident was in fact an AI-based deepfake fraud and that he had been robbed. Realising this, he immediately filed a complaint with the Cyber Police.

The cyber cell promptly investigated the case and managed to reach the authorities of the bank where the money had been deposited. The account was traced back to a private bank located in Maharashtra.

According to the Kerala Police Cyber Cell, this was the first incident of AI-based deepfake fraud detected in the state.

Modus Operandi: The scammers collect images from social media profiles and use artificial intelligence to create misleading videos. These con artists combine AI technology with details such as mutual friends' names to appear legitimate and con innocent individuals.

How to Protect Oneself From Deepfakes? 

Similar cases of deepfakes and other AI-based frauds have raised concerns for cyber security professionals.

Experts have cautioned against such scams and offered some safety advice. Because most deepfakes have subpar resolution, people are urged to examine the video quality closely: deepfake videos often give themselves away by ending abruptly or looping back to the beginning after a set length of time. Before conducting any financial transaction, it is also wise to contact the person separately to confirm they are genuinely the one on the video call.

Deepfake Deception: Man Duped of Rs 5 Crore as Chinese Scammer Exploits AI Technology

 

A recent incident has shed light on the alarming misuse of artificial intelligence (AI) through the deployment of advanced 'deepfake' technology, in which a man was deceived into losing a substantial amount of money exceeding Rs 5 crore. Deepfakes, which leverage AI capabilities to generate counterfeit images and videos, have raised concerns due to their potential to spread misinformation.

According to a recent report by Reuters, the perpetrator employed AI-powered face-swapping technology to impersonate the victim's close acquaintance. Posing as the friend, the scammer engaged in a video call with the victim and urgently requested a transfer of 4.3 million yuan, falsely claiming the funds were urgently needed for a bidding process. Unaware of the deception, the victim complied and transferred the requested amount.

The elaborate scheme began to unravel when the real friend expressed no knowledge of the situation, leaving the victim perplexed. It was at this point that he realized he had fallen victim to a deepfake scam. Fortunately, the local authorities in Baotou City successfully recovered most of the stolen funds and are actively pursuing the remaining amount.

This incident has raised concerns in China regarding the potential misuse of AI in financial crimes. While AI has brought significant advancements across various domains, its misapplication has become an increasingly worrisome issue. In a similar occurrence last month, criminals exploited AI to replicate a teenager's voice and extort ransom from her mother, generating shockwaves worldwide.

Jennifer DeStefano, a resident of Arizona, received a distressing call from an unknown number, drastically impacting her life. At the time, her 15-year-old daughter was on a skiing trip. When DeStefano answered the call, she recognized her daughter's voice, accompanied by sobbing. The situation escalated when a male voice threatened her and cautioned against involving the authorities.

In the background, DeStefano could hear her daughter's voice pleading for help. The scammer demanded a ransom of USD 1 million in exchange for the teenager's release. Convinced by the authenticity of her daughter's voice, DeStefano was deeply disturbed by the incident.

Fortunately, DeStefano's daughter was unharmed and had not been kidnapped. This incident underscored the disconcerting capabilities of AI, as fraudsters can exploit the technology to emotionally manipulate and deceive individuals for financial gain.

As AI continues to advance rapidly, it is imperative for individuals to maintain vigilance and exercise caution. These incidents emphasize the significance of robust cybersecurity measures and the need to raise public awareness regarding the risks associated with deepfake technology. Authorities worldwide are working tirelessly to combat these emerging threats and protect innocent individuals from falling victim to such sophisticated scams.

The incident in China serves as a stark reminder that as technological progress unfolds, increased vigilance and understanding are essential. Shielding ourselves and society from the misuse of AI is a collective responsibility that necessitates a multifaceted approach, encompassing technological advancements and the cultivation of critical thinking skills.

These cases illustrate the potential exploitation of AI for financial crimes. It is crucial to remain cognizant of the potential risks as AI technology continues to evolve.