
Modern Phishing Attacks: Insights from the Egress Phishing Threat Trends Report

 

Phishing attacks have long been a significant threat in the cybersecurity landscape, but as technology evolves, so do the tactics employed by cybercriminals. The latest Egress Phishing Threat Trends Report sheds light on the growing sophistication of these attacks and offers a valuable picture of the current threat landscape. 

One notable trend highlighted in the report is the proliferation of QR code payloads in phishing emails. While QR code payloads were relatively rare in previous years, they have seen a significant increase, accounting for 12.4% of attacks in 2023 and remaining at 10.8% in 2024. This shift underscores the adaptability of cybercriminals and their ability to leverage emerging technologies to perpetrate attacks. 

In addition to QR code payloads, social engineering tactics have also become increasingly prevalent in phishing attacks. These tactics, which involve manipulating individuals into divulging sensitive information, now represent 19% of phishing attacks. 

Moreover, phishing emails have become over three times longer since 2021, likely due to the use of generative AI to craft more convincing messages. Multi-channel attacks have also emerged as a prominent threat, with platforms like Microsoft Teams and Slack being utilized as the second step in these attacks. Microsoft Teams, in particular, has experienced a significant increase in usage, with a 104.4% rise in 2024 compared to the previous year. This trend highlights the importance of securing not just email communications but also other communication channels within organizations. 

Another concerning development is the use of deepfakes in phishing attacks. These AI-generated audio and video manipulations have become increasingly sophisticated and are being used to deceive victims into disclosing sensitive information. The report predicts that the use of deepfakes in cyberattacks will continue to rise in the coming years, posing a significant challenge for defenders.

Despite advancements in email security, many phishing attacks still successfully bypass Secure Email Gateways (SEGs). Obfuscation techniques, such as hijacking legitimate hyperlinks and masking phishing URLs within image attachments, are commonly used to evade detection. This highlights the need for organizations to implement robust security measures beyond traditional email filtering solutions. 
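One of the hijacked-hyperlink tricks mentioned above can be checked for with a simple heuristic: flag anchors whose visible text shows one domain while the underlying href points somewhere else. The sketch below is a minimal illustration of that idea, not how any particular SEG works; the `LinkAuditor` class and `mismatched_links` helper are illustrative names, not a real product's API.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkAuditor(HTMLParser):
    """Collect (href, visible text) pairs from anchor tags."""

    def __init__(self):
        super().__init__()
        self._href = None
        self.links = []  # list of (href, visible text)

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None


def mismatched_links(html: str):
    """Flag anchors whose visible text looks like a URL on a different
    host than the real destination -- a common hijacked-hyperlink
    obfuscation in phishing emails."""
    parser = LinkAuditor()
    parser.feed(html)
    flagged = []
    for href, text in parser.links:
        shown = urlparse(text if "://" in text else "http://" + text)
        real = urlparse(href)
        # Only flag if the visible text actually resembles a hostname.
        if "." in shown.netloc and shown.netloc.lower() != real.netloc.lower():
            flagged.append((href, text))
    return flagged
```

A link such as `<a href="http://evil.example/login">https://bank.com/secure</a>` would be flagged, while a link whose displayed text and destination agree would pass. Real gateways combine many such signals; this single check is easily evaded on its own.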

Furthermore, the report identifies millennials as the top targets for phishing attacks, receiving 37.5% of phishing emails. Industries such as finance, legal, and healthcare are among the most targeted, with individuals in accounting and finance roles receiving the highest volume of phishing emails. As cybercriminals continue to innovate and adapt their tactics, organizations must remain vigilant and proactive in their approach to cybersecurity. 

This includes implementing comprehensive security awareness training programs, leveraging advanced threat detection technologies, and regularly updating security policies and procedures. 

The Egress Phishing Threat Trends Report provides valuable insights into the evolving nature of phishing attacks and underscores the importance of a multi-layered approach to cybersecurity in today's threat landscape. By staying informed and proactive, organizations can better protect themselves against the growing threat of phishing attacks.

Seeing is No Longer Believing as Deepfakes Become Better and More Dangerous

 

Numerous industries are being transformed by artificial intelligence (AI), yet with every benefit comes a drawback. Deepfake detection is becoming increasingly difficult as AI image generators grow more advanced. 

The impact of AI-generated deepfakes on social media and in war zones is alarming world leaders and law enforcement agencies. 

"We're getting into an era where we can no longer believe what we see," Marko Jak, co-founder, and CEO of Secta Labs, stated. "Right now, it's easier because the deep fakes are not that good yet, and sometimes you can see it's obvious." 

In Jak's opinion, the time when it will no longer be feasible to recognise a faked image at first glance is not far away, possibly within a year. As the CEO of a company that generates AI images, Jak is well placed to know. 

Secta Labs, an Austin-based generative AI firm that Jak co-founded in 2022, specialises in producing high-quality AI-generated photographs. Users can upload photos of themselves to create avatars and headshots using artificial intelligence. 

According to Jak, Secta Labs considers customers to be the proprietors of the AI models produced from their data, whilst the company is only a custodian assisting in creating images from these models. 

The potential for abuse of more sophisticated AI models has prompted leaders around the world to call for swift action on AI legislation, and has driven businesses to decide against making their cutting-edge technologies available to the general public. 

After launching its new Voicebox AI-generated voice platform last week, Meta said it would not make the AI available to the general public. 

"While we believe it is important to be open with the AI community and to share our research to advance the state of the art in AI,” the Meta spokesperson explained. “It’s also necessary to strike the right balance between openness with responsibility." 

The U.S. Federal Bureau of Investigation issued a warning about AI deepfake extortion scams earlier this month, as well as about criminals who fabricate content using images and videos from social media. Jak suggested that exposing deepfakes rather than being able to identify them may be the key to defeating them. 

"AI is the first way you could spot [a deepfake]," Jak added. "There are people developing artificial intelligence that can tell you if an image in a video was produced by AI or not."

The entertainment industry is in an uproar over generative AI and the potential use of AI-generated imagery in film and television. SAG-AFTRA members voted to authorise a strike ahead of contract negotiations, citing serious concerns about artificial intelligence.

Jak noted that the real difficulty is the AI arms race under way: as detection tools improve, malicious actors develop more sophisticated deepfakes to evade them.

Although blockchain has been overused, some might even say overhyped, as a fix for real problems, he claimed the technology and encryption might be able to address the deepfake issue. And while technology can address many of the problems deepfakes raise, the collective wisdom of the public may hold the real answer.

Growing Threat From Deep Fakes and Misinformation

 


The prevalence of synthetic media is rising as tools that make it simple to produce and distribute convincing artificial images, videos, and audio become widely available. According to Sentinel, the propagation of deepfakes increased by 900% in 2020 over the previous year.

With the rapid advancement of technology, cyber-influence operations are becoming more complex. Methods employed in conventional cyberattacks are increasingly being applied to cyber-influence operations, both overlapping with and extending them. In addition, we have seen growing nation-state coordination and amplification.

Tech firms in the private sector could unintentionally support these initiatives. Companies that register domain names, host websites, advertise content on social media and search engines, direct traffic, and support the cost of these activities through digital advertising are examples of enablers.

Deepfakes are created with deep learning, a particular type of artificial intelligence. Deep learning algorithms can replace one person's likeness in a picture or video with another's. Deepfake videos of Tom Cruise on TikTok captured the public's attention in 2021. The earliest celebrity deepfake videos were created by face-swapping photographs of celebrities found online.

Cyber-influence operations proceed in three stages. They begin with prepositioning, in which false narratives are introduced to the public. The launch phase follows, with a coordinated campaign spreading the narrative through media and social channels. Finally, in the amplification phase, media outlets and proxies push the false narrative to targeted audiences. The consequences of cyber-influence operations include market manipulation, payment fraud, and impersonation. The most significant threat, however, is to trust and authenticity, given the increasing use of synthetic media to dismiss legitimate information as fake.

How Businesses Can Defend Against Synthetic Media

Deepfakes and synthetic media have become an increasing concern for organizations, as they can be used to manipulate information and damage reputations. To protect themselves, organizations should take a multi-layered approach.
  • Firstly, they should establish clear policies and guidelines for employees on how to handle sensitive information and how to verify the authenticity of media. This includes implementing strict password policies and data access controls to prevent unauthorized access.
  • Secondly, organizations should invest in advanced technology solutions such as deepfake detection software and artificial intelligence tools to detect and mitigate any threats. They should also ensure that all systems are up-to-date with the latest security patches and software updates.
  • Thirdly, organizations should provide regular training and awareness programs for employees to help them identify and respond to deepfake threats. This includes educating them on the latest deepfake trends and techniques, as well as providing guidelines on how to report suspicious activity.
Furthermore, organizations should have a crisis management plan in place in case of a deepfake attack. This should include clear communication channels and protocols for responding to media inquiries, as well as an incident response team with the necessary expertise to handle the situation. By adopting a multi-layered approach to deepfake protection, organizations can reduce the risks of synthetic media attacks and protect their reputation and sensitive information.


Deepfakes: The Emerging Phishing Technology


Phishing has been a known concept for decades. Attackers manipulate victims into performing actions such as clicking a malicious URL, downloading a malicious attachment, transferring funds, or sharing sensitive data. They do so by exploiting human psychology and human nature (impulsivity, grievance, curiosity) while posing as legitimate companies. 

While phishing is most commonly executed via email, it has evolved to use voice calls (vishing), social media, and SMS (smishing) in order to seem more legitimate to victims. With deepfakes, phishing is re-emerging as one of the most severe forms of cybercrime. 

What are Deepfakes? 

According to Steve Durbin of the Information Security Forum, deepfake technology (or deepfakes) is "a kind of artificial intelligence (AI) capable of generating synthetic voice, video, pictures, and virtual personalities." Users may already be familiar with this via their smartphones, consisting of apps that tend to revive the dead, exchange faces with famous persons, and produce effects that are quite lifelike like de-aging Hollywood celebrities. 

Although deepfakes were apparently introduced for entertainment purposes, threat actors later utilized this technology to execute phishing attacks, identity theft, financial fraud, information manipulation, and political unrest. 

Deepfakes are now created by numerous methods: face swapping (in which one individual's face is superimposed on another's), attribute editing, face re-enactment, and entirely synthetic content in which a person's image is wholly fabricated. 

Deepfakes may sound like a futuristic concept, but widespread and malicious use of the technology is in fact readily available and already happening. 

A number of instances of deepfake-enabled phishing have already been reported, such as: 

  • AI voice cloning technology conned a bank manager into initiating wire transfers worth $35 million. 
  • A deepfake video of Elon Musk promoting a crypto scam went viral on social media. 
  • An AI hologram impersonating the chief communications officer at one of the world's biggest crypto exchanges joined a Zoom call and scammed another exchange out of its liquid funds. 
  • A deepfake made headlines showing former US president Barack Obama speaking about the dangers of false information and fake news. 

How Can Organizations Protect Themselves from Deepfake Phishing? 

Deepfake phishing can cause massive damage to businesses and their employees, exposing them to harsh penalties and a heightened risk of financial fraud. Because deepfake technology is now widely available, anyone with even modest malicious intent can synthesize audio and video and carry out a sophisticated phishing attack. 

The following steps help reduce the risk: 

  • Conduct security awareness sessions so that employees understand their responsibility and accountability for cybersecurity. 
  • Run phishing simulations that expose employees to deepfake phishing so they learn how these frauds operate. 
  • Implement technologies such as phishing-resistant multi-factor authentication (MFA) and zero trust to mitigate the risk of identity fraud. 
  • Encourage people to report suspicious activity and to verify the credibility of requests, especially those involving large money transfers. 
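For the MFA bullet above, one familiar building block is the time-based one-time password (TOTP) defined in RFC 6238. Note that truly phishing-resistant MFA generally means origin-bound credentials such as FIDO2 security keys; the minimal sketch below only illustrates how a TOTP code is derived from a shared secret and the clock.

```python
import hmac
import struct
import time


def totp(secret: bytes, for_time=None, step=30, digits=6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the number of time steps
    elapsed since the Unix epoch, dynamically truncated to N digits."""
    now = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(now // step))
    digest = hmac.new(secret, counter, "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

A server and an authenticator app sharing `secret` will produce the same code within each 30-second window; real verifiers also accept adjacent windows to tolerate clock drift. Because the user can be tricked into typing the code into a fake site, TOTP alone does not stop deepfake-assisted phishing.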

Deepfakes themselves cannot be prevented outright, but the risks can be mitigated through measures such as nurturing cybersecurity instincts among employees. This ultimately reinforces the organization's overall cybersecurity culture.  

Deepfake Phishing: A New Tool of Threat Actors

 

Deepfake phishing is an emerging attack vector that security experts should be concerned about, given the development of increasingly advanced AI audio and video technology and the abundance of personal data available on social media. 

How deepfake phishing targets victims 

Hackers utilize AI and machine learning to analyze a variety of information, including photos, videos, and audio snippets, to carry out a deepfake phishing attack. From this data, they build a computerized representation of the target. 

Until now, deepfakes have primarily been used for political and entertainment purposes, both good and bad. The best-known instance of this strategy occurred earlier this year, when Patrick Hillmann, the chief communications officer at Binance, was the subject of a deepfake hologram that hackers built from his prior interviews and media appearances. 

With this strategy, threat actors can defeat biometric authentication systems in addition to imitating a person's physical characteristics to deceive human users via social engineering. 

Because of this, Avivah Litan, a Gartner analyst, advises businesses "not to rely on biometric certification for user authentication apps unless it incorporates effective deepfake detection that verifies user liveness and authenticity." 

Litan also points out that as AI used in these assaults develops, it will likely become harder to identify these kinds of attacks as it becomes able to produce more convincing auditory and visual representations. 

Deepfake phishing's state in 2022 and beyond 

Although deepfake technology is still in its infancy, it is becoming more and more popular. It is already being used experimentally by cybercriminals to execute attacks against unwary consumers and organizations. 

The World Economic Forum (WEF) estimates that the number of deepfake videos online is growing by 900% each year. VMware, meanwhile, found a 23% rise over the previous year in the proportion of defenders who report detecting malicious deepfakes used in an attack. 

These attacks are devastatingly effective. For instance, in 2021, fraudsters used AI voice cloning to impersonate the CEO of a large firm and deceived the company's bank manager into transferring $35 million to another account to complete an "acquisition." 

A similar incident took place in 2019, when a fraudster used AI to impersonate the chief executive of a UK energy company's German parent firm and asked the UK company's CEO for an urgent transfer of $243,000 to a Hungarian supplier. 

According to several analysts, deepfake phishing will only increase, and threat actors will continue to develop phony content that is both more complex and convincing. 

“As deepfake technology matures, [attacks using deepfakes] are expected to become more common and expand into newer scams,” stated KPMG analyst Akhilesh Tuteja. “They are increasingly becoming indistinguishable from reality. It was easy to tell deepfake videos two years ago, as they had a clunky [movement] quality and … the faked person never seemed to blink. But it’s becoming harder and harder to distinguish it now.” 

Prevention Tips 

Security professionals must regularly train end users about this and other emerging attack routes. It may be possible to halt a deepfake attack before it spreads using some surprisingly low-tech techniques. 

Security awareness training can easily become tedious, but making it satisfying, rewarding, and competitive helps the material stick. On the process side, pre-shared codes can be required before an authorized person transfers substantial sums of money, or multiple people can be required to approve the transaction. 
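The pre-shared code and multi-approver controls just described can be sketched in a few lines. This is a hypothetical policy, not a prescribed implementation: the function names, the two-approver rule, and the $10,000 threshold are all illustrative assumptions.

```python
import hmac


def verify_transfer_code(presented: str, expected: str) -> bool:
    """Compare a pre-shared transfer code in constant time, so a
    timing side channel cannot reveal how many characters matched."""
    return hmac.compare_digest(presented.encode(), expected.encode())


def approve_transfer(amount: float, code: str, approvals: set,
                     shared_code: str, required: int = 2,
                     threshold: float = 10_000.0) -> bool:
    """Hypothetical policy: transfers at or above `threshold` need the
    pre-shared code AND sign-off from `required` distinct approvers."""
    if amount < threshold:
        return True
    return verify_transfer_code(code, shared_code) and len(approvals) >= required
```

The point of such a control is that a cloned voice on a phone call is not enough: the attacker would also need the out-of-band code and a second approver, each of which gives a human a chance to notice the fraud.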

Employees will likely find deepfake phishing awareness training interesting, even entertaining, as well as educational. Share convincing deepfake videos and instruct viewers to watch for telltale signs such as unblinking eyes, unusual lighting, and peculiar facial movements.