Are Chatbots Making It Difficult to Trace Phishing Emails?

Chatbots can now iron out the spelling and grammar mistakes that trip spam filters or alert human readers, removing a basic weakness in many phishing attempts.


Chatbots are eroding a crucial line of defense against phishing emails by correcting the grammatical and spelling errors that have long helped recipients identify fraudulent messages, according to experts.

The warning comes as the law enforcement agency Europol published an international advisory on the potential criminal use of ChatGPT and other "large language models."

How Do Chatbots Aid Phishing Campaigns?

Phishing emails are frequently used by cybercriminals to lure victims into clicking links that download malicious software, or to trick them into handing over sensitive information such as passwords or PINs.

According to the Office for National Statistics, half of all adults in England and Wales reported receiving a phishing email last year, making phishing one of the most common forms of cyber threat.

However, artificial intelligence (AI) chatbots can now correct the errors that trip spam filters or alert human readers, removing a basic weakness of many phishing attempts: poor spelling and grammar.

According to Corey Thomas, chief executive of the US cybersecurity firm Rapid7: “Every hacker can now use AI that deals with all misspellings and poor grammar […] The idea that you can rely on looking for bad grammar or spelling in order to spot a phishing attack is no longer the case. We used to say that you could identify phishing attacks because the emails look a certain way. That no longer works.”
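The signal Thomas describes is easy to picture in code. Below is a minimal, hypothetical Python sketch of the kind of spelling-based heuristic a legacy spam filter might use as one signal among many; the tiny vocabulary and the 15% threshold are invented for illustration and are not taken from any real filter. The point is that an LLM-polished rewrite of the same lure sails straight past a check like this.

```python
# Minimal, hypothetical sketch of a spelling-based spam heuristic.
# The vocabulary and threshold below are invented for illustration;
# real filters combine many signals, not just misspellings.

KNOWN_WORDS = {
    "dear", "customer", "your", "account", "has", "been", "suspended",
    "please", "click", "the", "link", "below", "to", "verify", "details", "you",
}

def misspelling_ratio(text: str) -> float:
    """Fraction of alphabetic tokens not found in the vocabulary."""
    tokens = [w.strip(".,!?").lower() for w in text.split()]
    words = [w for w in tokens if w.isalpha()]
    if not words:
        return 0.0
    unknown = sum(1 for w in words if w not in KNOWN_WORDS)
    return unknown / len(words)

def looks_suspicious(text: str, threshold: float = 0.15) -> bool:
    """Flag the message if too many words look misspelled."""
    return misspelling_ratio(text) > threshold

# A crudely written lure trips the check...
print(looks_suspicious("Dear custommer, you acount has been suspnded, please clik link"))  # True
# ...but a grammatically clean version of the same lure does not.
print(looks_suspicious("Dear customer, your account has been suspended. Please click the link below to verify your details."))  # False
```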

According to the data, ChatGPT, the market leader that rose to fame after its launch last year, is being used for cybercrime, with the rise of large language models (LLMs) finding one of their first significant commercial applications in crafting malicious communications.

Phishing emails are increasingly being written by bots, according to data from cybersecurity specialists at the UK firm Darktrace, allowing criminals to send longer messages that are less likely to be caught by spam filters and to avoid the poor English that often betrays human-written scams.

Since ChatGPT surged in popularity last year, the overall volume of malicious email scams that try to trick users into clicking a link has fallen, while the messages that remain have become more linguistically complex. According to Max Heinemeyer, Darktrace's chief product officer, this indicates that a sizable proportion of the threat actors crafting phishing and other harmful emails have gained the ability to write longer, more sophisticated prose, most likely using an LLM such as ChatGPT.

Europol's advisory report raised similar concerns about the use of AI chatbots, citing fraud and social engineering, disinformation, and cybercrime. According to the report, such systems are useful for guiding potential offenders through the steps needed to harm others: because the model can deliver detailed instructions in response to pertinent questions, it becomes much simpler for criminals to understand and ultimately commit various forms of crime.

In a report published this month, the US-Israeli cybersecurity company Check Point said it had produced a convincing-looking phishing email using the most recent version of ChatGPT. It bypassed the chatbot's safety procedures by telling it that it wanted a sample phishing email for a staff-awareness program.

Google has also entered the chatbot race with last week's launch of its Bard product in the US and the UK. When the Guardian asked Bard to write an email that would persuade someone to click a suspicious-looking link, it cooperated willingly, if without much finesse: "I am writing to you today to give a link to an article that I think you will find interesting."

Additionally, Google highlighted its “prohibited use” policy for AI, under which users may not employ its AI models to create content for “deceptive or fraudulent activities, scams, phishing, or malware”.

Addressing the issue, OpenAI, the company behind ChatGPT, pointed to its terms of use, which state that users “may not use the services in a way that infringes, misappropriates or violates any person’s rights”.
