
Authorities Warn of AI Being Employed by Scammers to Target Canadians

 

As the use of artificial intelligence (AI) grows, fraudsters are employing it more frequently in their schemes, and Canadians are taking note. According to the Royal Bank of Canada's (RBC's) annual Fraud Prevention Month Poll, 75% of respondents are more concerned about fraud than ever before. Nearly nine in 10 Canadians believe the use of AI will boost scam attempts over the next year (88%) and make everyone more exposed to fraud (89%).

As per the survey, 81 percent of Canadians think that AI will make phone scams more difficult to identify, and the same proportion are worried about scams that use voice cloning and impersonation techniques. 

"With the recent rise in voice cloning and deepfakes, fraudsters are able to employ a new level of sophistication to phone and online scams," stated Kevin Purkiss, vice president, Fraud Management, RBC. "The good news is that awareness of these types of scams is high, but we also need to take action to safeguard ourselves from fraudsters.”

The study also discovered that phishing (generic scams via email or text), spear phishing (emails or texts tailored to appear authentic), and vishing (targeted phone or voicemail scams) were among the top three types of fraud. More than half (56%) also report an increase in deepfake frauds, while nearly half (47%) say voice cloning scams are on the rise. 

Prevention tips

Set up notifications for your accounts, utilise multi-factor authentication whenever possible, and make the RBC Mobile App your primary banking tool. Keep an eye out for impersonation scams, in which fraudsters pose as credible sources such as the government, bank employees, law enforcement, or even a family member. 

Some experts also recommend sharing a personal password with loved ones to ensure that you're conversing with the right individual. 

To prevent robocallers from capturing your identity or voice, limit what you disclose on social media and keep your voicemail greeting short and generic. Ignore or delete unsolicited emails and texts that request personal information or contain dubious links or money schemes.
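To make the "dubious links or money schemes" warning concrete, a toy heuristic might require several red flags, such as urgency, a money request, and a link, before flagging a message. This is a hypothetical sketch for illustration only; every name below is invented, and real spam filters are far more sophisticated:

```python
import re

# Illustrative keyword lists -- common traits of the scam texts described above.
URGENCY = {"urgent", "immediately", "right away", "act now"}
MONEY = {"wire", "gift card", "bitcoin", "transfer", "payment"}

def looks_suspicious(message: str) -> bool:
    """Flag a message only when at least two independent red flags co-occur."""
    text = message.lower()
    has_urgency = any(phrase in text for phrase in URGENCY)
    has_money = any(phrase in text for phrase in MONEY)
    has_link = bool(re.search(r"https?://", text))
    return sum([has_urgency, has_money, has_link]) >= 2

print(looks_suspicious("Act now! Wire $500 to release your package."))  # True
print(looks_suspicious("Lunch tomorrow at noon?"))                      # False
```

Requiring two or more flags keeps a single innocent word like "payment" from triggering a false alarm.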

How can You Protect Yourself From the Increasing AI Scams?


Recent years have witnessed a revolution in innovative technology, especially in the field of artificial intelligence. However, these technological advancements have also opened new portals for cybercrime. 

The latest tactic used by threat actors is deepfakes, in which a cybercriminal manipulates audio and visual media to conduct extortion and other frauds. In some cases, fraudsters have used AI-generated voices to impersonate someone close to the targeted victim, making it nearly impossible for victims to realize they are being defrauded.  

According to ABC13, the most recent instance of this involved an 82-year-old Texan named Jerry, who fell victim to a criminal posing as a sergeant with the San Antonio Police Department. The con artist informed the victim that his son-in-law had been arrested and that Jerry would need to provide $9,500 in bond to secure his release. Jerry was then duped into paying an additional $7,500 to "finish the process." The victim, who lives in a senior living facility, is considering getting a job to make up for the money he lost, while the criminals remain at large.  

The aforementioned case is, however, not the first time AI has been used for fraud. According to Reuters, a Chinese man was defrauded of more than half a million dollars earlier this year after a cybercriminal fooled him into transferring the money by posing as his friend using an AI face-swapping tool.   

Cybercriminals often use similar tactics, such as sending morphed media of someone close to the victim in an attempt to coerce money under the guise of an emergency. Although impostor fraud is not new, generative AI gives it a contemporary twist. The FTC reported in February 2023 that American residents lost around $2.6 billion to this type of scam in 2022, and the introduction of generative AI has significantly raised the stakes.  

How can You Protect Yourself from AI Scammers? 

Besides ignoring calls or texts from suspicious numbers, one solution is establishing a unique codeword with loved ones, so you can verify whether the person on the other end is really who they claim to be. You can also try to reach the person directly to confirm whether they are actually in a difficult circumstance. Experts likewise advise hanging up and calling the individual back, or at least double-checking the information before acting.  
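The codeword advice is, at heart, a shared-secret check: only the real person can answer correctly. As a loose illustration of the same idea in code (a hypothetical challenge-response sketch, not a suggestion to automate family phone calls), Python's standard hmac module shows how knowledge of a shared secret verifies identity:

```python
import hashlib
import hmac
import secrets

# Hypothetical shared secret -- it plays the same role as a family codeword.
SHARED_SECRET = b"our-family-codeword"

def make_challenge() -> bytes:
    """The verifier issues a fresh random challenge for each call."""
    return secrets.token_bytes(16)

def respond(challenge: bytes, secret: bytes) -> str:
    """Only someone who knows the secret can compute this response."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(challenge: bytes, response: str, secret: bytes) -> bool:
    """Compare the response to the expected value in constant time."""
    return hmac.compare_digest(respond(challenge, secret), response)

challenge = make_challenge()
answer = respond(challenge, SHARED_SECRET)
print(verify(challenge, answer, SHARED_SECRET))      # True
print(verify(challenge, answer, b"attacker-guess"))  # False
```

A fresh challenge each time matters: a cloned voice can replay old audio, but it cannot answer a question it has never heard, which is exactly why a private codeword defeats voice cloning.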

Unfortunately, voice cloning is only one of a variety of AI-based attacks scammers employ. Extortion using deepfaked content is a related domain: there have recently been multiple attempts by nefarious actors to blackmail people with graphic pictures generated by artificial intelligence. A report by The Washington Post describes several cases where deepfakes destroyed the lives of young people. In such a situation, it is advisable to get in touch with law enforcement right away rather than handling things on one's own.     

AI Scams: When Your Child's Voice Isn't Their Own

 

A new breed of fraud has recently surfaced, preying on unwary victims with cutting-edge artificial intelligence. A particularly alarming development is the use of AI-generated voice calls, in which con artists imitate children's voices to trick parents into believing they are speaking with their own children.

These AI fraud calls are a growing issue for law enforcement organizations and families around the world. Con artists imitate a child's voice using cutting-edge AI speech technology to convince parents that their child is in distress and needs money immediately.

Numerous high-profile incidents have been reported, garnering attention and leaving parents feeling exposed and uneasy. One mother reported getting a frightening call from someone who sounded like her daughter and claimed to have been kidnapped. In a panic, and in a desperate attempt to protect her child, she paid the con artists a sizeable sum of money, only to learn later that the voice was AI-generated and that her daughter had been safe the entire time.

The widespread reporting of these incidents makes awareness-raising efforts and preventative action urgently necessary. To understand how these frauds work, it's crucial to realize that AI-generated voices have reached a remarkable level of sophistication and are now almost indistinguishable from real human voices. Fraudsters use this technology to manipulate emotions, relying on parents' natural desire to protect their children at all costs.

Technology companies and law enforcement organizations are collaborating to fight these AI scams in response to the growing concern. One approach involves improving voice recognition software to more accurately identify AI-generated audio. Staying one step ahead of the schemes is difficult, however, because con artists are constantly changing their strategies.

Experts stress the importance of staying vigilant and taking proactive steps to guard oneself and loved ones against such fraud. If parents receive unexpected calls asking for money, especially under upsetting circumstances, it is essential to establish the caller's identity through other means. The scenario can be verified by speaking with the child directly or by asking a trusted relative or friend to do so.

Children must also be taught about AI scams so they do not accidentally disclose personal information that scammers could use against them. Parents should talk to their children about the dangers of giving out personal information over the phone or online and highlight the need to always confirm a caller's identity, even if the caller seems familiar.

Technology is always developing, creating both opportunities and difficulties, and the rise of AI fraud shows how scammers can exploit modern techniques to target people's vulnerabilities. Technology companies, law enforcement, and individuals all need to work together to combat these scams, and everyone must stay informed, careful, and proactive to keep themselves and their loved ones from falling for AI fraud.

AI 'Kidnapping' Scams: A Growing Threat

In a worrying trend, cybercriminals have started using artificial intelligence (AI) technology to carry out virtual kidnapping schemes. These scams, which use chatbots and AI voice cloning techniques, have become much more prevalent recently and pose a serious threat. 

The emergence of AI-powered voice cloning tools has provided cybercriminals with a powerful tool to execute virtual kidnapping scams. By using these tools, perpetrators can mimic the voice of a target's family member or close acquaintance, creating a sense of urgency and fear. This psychological manipulation is designed to coerce the victim into complying with the scammer's demands, typically involving a ransom payment.

Moreover, advancements in natural language processing and AI chatbots have made it easier for cybercriminals to engage in conversation with victims, making the scams more convincing and sophisticated. These AI-driven chatbots can simulate human-like responses and engage in prolonged interactions, making victims believe they are indeed communicating with their loved ones in distress.

The impact of these AI 'kidnapping' scams can be devastating, causing immense emotional distress and financial losses. Victims who fall prey to these scams often endure intense fear and anxiety, genuinely believing that their loved ones are in danger. The scammers take advantage of this vulnerability to extort money or personal information from the victims.

To combat this growing threat, law enforcement agencies and cybersecurity experts are actively working to raise awareness and develop countermeasures. It is crucial for individuals to be vigilant and educate themselves about the tactics employed by these scammers. Recognizing the signs of a virtual kidnapping scam, such as sudden demands for money, unusual behavior from the caller, or inconsistencies in the story, can help potential victims avoid falling into the trap.

A proactive approach is also required from technology companies and AI developers. Strict security measures must be put in place to stop the abuse of AI voice cloning technology, and sophisticated algorithms that identify and block malicious chatbots can deter attackers.

Watch Out For These ChatGPT and AI Scams

 

Since ChatGPT's launch in November of last year, it has consistently proven helpful, with people all around the world coming up with new ways to use the technology every day. The power of AI tools, however, means they can also be employed for sinister purposes, such as creating malware programmes and phishing emails. 

Over the past six to eight months, hackers have been observed exploiting the trend to defraud individuals of their money and information, creating false investment opportunities and scam applications, and using artificial intelligence itself to plan scams. 

AI scams are some of the hardest to spot, and many people don't use tools like Surfshark antivirus, which alerts users before they visit dubious websites or download dubious apps. As a result, we have compiled a list of the prevalent tactics recently seen in the wild. 

Phishing scams with AI assistance 

Phishing scams have been around for a long time. Scammers can send you emails or texts pretending to be from a trustworthy organisation, like Microsoft, in an effort to trick you into clicking a link that will take you to a dangerous website.

A threat actor can then use that site to spread malware or steal sensitive data, such as passwords, from your device. Historically, one of the simplest ways to identify these messages has been spelling and grammar mistakes, which a prominent corporation like Microsoft would never make in a business email to its clients. 

In 2023, however, ChatGPT can produce clear, fluent copy that is free of typos from just a brief prompt. This makes it far more difficult to differentiate between authentic messages and phishing attacks. 
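With typos no longer a reliable tell, detection has to lean on signals that AI-written copy cannot hide, such as a link whose visible text names one domain while the underlying URL points somewhere else. A minimal illustrative check (the function name and heuristic are hypothetical; real mail filters do far more) might look like this:

```python
import re
from urllib.parse import urlparse

def link_mismatch(display_text: str, href: str) -> bool:
    """Flag links whose visible text shows one domain but whose actual
    target points to a different one -- a classic phishing tell that
    survives even perfectly written, typo-free copy."""
    target = urlparse(href).hostname or ""
    # Look for something domain-like in the text the user actually sees
    shown = re.search(r"([a-z0-9-]+\.)+[a-z]{2,}", display_text.lower())
    if not shown:
        return False  # no domain displayed, so nothing to contradict
    return not target.endswith(shown.group(0))

print(link_mismatch("microsoft.com", "https://microsoft.com/login"))  # False
print(link_mismatch("microsoft.com", "https://m1crosoft-login.xyz"))  # True
```

The same principle applies manually: hovering over a link to compare the real destination with the displayed text catches most of these attacks without any software at all.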

Voice clone AI scams

Frauds utilising artificial intelligence (AI) have gained attention in recent months. In a recent global McAfee study, 10% of respondents said they had already been personally targeted by an AI voice scam, and a further 15% said they knew someone who had been. 

AI voice scams harvest audio files from a target's social media accounts and feed them to text-to-speech software, which generates new speech that mimics the original voice. Such programmes have legitimate, non-nefarious uses and are freely available online. 

The con artist records a voicemail or voice message portraying the target as distressed and desperately in need of money, then sends it to the target's family members in the hope that they won't be able to tell the difference between their loved one's voice and an AI-generated one. 

Scams with AI investments

 
Scammers are using the hype surrounding AI, as well as the technology itself, to create phoney investment opportunities that look real, much as they did with cryptocurrencies.

Both "TeslaCoin" and "TruthGPT Coin" have been used in fraud schemes, capitalising on the media attention around Elon Musk and ChatGPT and positioning themselves as trendy investment prospects. 

According to California's Department of Financial Protection & Innovation, Maxpread Technologies fabricated an AI-generated CEO and programmed it with a script enticing potential investors. The company has been issued a cease-and-desist order. 

The DFPI says another investment firm, Harvest Keeper, collapsed back in March. According to Forbes, Harvest Keeper employed an actor to pose as its CEO in an effort to calm irate clients, demonstrating the lengths some con artists will go to in order to make their sales pitch seem plausible.

Way forward

Consumers in the US lost a staggering $8.8 billion to scammers in 2022, and 2023 is not expected to be any different. Periods of financial instability frequently coincide with rises in fraud, and many nations worldwide are experiencing difficulties. 

Artificial intelligence is currently a goldmine for con artists. Although everyone is talking about it, relatively few people are actually knowledgeable about it, and businesses of all sizes are rushing AI products to market. 

Keeping up with the latest scams has always been crucial, and it is even more so now that AI has made them much harder to detect. The FTC, FBI, and other federal agencies frequently issue warnings, so following them on social media for the latest information is strongly encouraged. 

Security professionals advise using a VPN that detects spyware, such as NordVPN or Surfshark. Both will alert you to dubious websites hidden in Google Search results pages and disguise your IP address like a conventional VPN. Arming yourself with tools like these is crucial to staying safe online.