
Sharp Increase in Malware Attacks via USB Flash Drives

 

Instances of cybercriminals employing USB drives for malware attacks have seen a significant rise. According to security researchers from Mandiant, there has been a three-fold increase in malware attacks via USB drives aimed at stealing sensitive information during the first half of 2023. These researchers have disclosed details regarding two specific attack campaigns.

One of the attack campaigns, attributed to the China-linked cyberespionage group TEMP.Hex, targeted both public and private organizations in Europe, Asia, and the U.S. The attackers utilized USB flash drives to introduce the SOGU malware into compromised systems and extract valuable data. 

The flash drives contained several malicious files and employed a DLL hijacking technique to load the final payload into the memory of the compromised systems. Once executed, the SOGU malware carried out various actions such as capturing screenshots, recording keystrokes, establishing reverse shell connections, and enabling remote desktop connections for executing additional files. 
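DLL hijacking works by abusing the order in which Windows searches directories for a library: if the application's own directory is probed before the system directory, a malicious same-named DLL planted next to a legitimate executable gets loaded first. The following Python sketch is purely illustrative of that search-order logic; the directory names and the DLL name are hypothetical examples, not details from Mandiant's report.

```python
# Illustrative model of Windows DLL search-order resolution, to show
# why a DLL dropped beside an EXE on a USB stick can "hijack" a load.

# Simplified search order: the executable's directory is probed before
# the system directory, which is what the hijack exploits.
SEARCH_ORDER = [
    "app_dir",    # directory of the executable (attacker-writable on a USB drive)
    "system32",   # where the legitimate DLL actually lives
    "windows",
    "cwd",
]

def resolve_dll(name, dirs):
    """Return the first directory in the search order containing `name`."""
    for d in SEARCH_ORDER:
        if name in dirs.get(d, set()):
            return d
    return None

# Clean layout: the DLL exists only in the system directory.
clean = {"system32": {"version.dll"}}
# Hijacked layout: a malicious copy sits next to the executable.
hijacked = {"app_dir": {"version.dll"}, "system32": {"version.dll"}}

print(resolve_dll("version.dll", clean))     # system32
print(resolve_dll("version.dll", hijacked))  # app_dir -- the planted copy wins
```

The defensive takeaway is that a signed, legitimate-looking executable on a USB drive is not safe on its own: what sits beside it in the same folder determines which code actually runs.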

The stolen data was sent to the attackers' command and control (C2) server using a custom binary protocol over TCP, UDP, or ICMP. Industries targeted by this attack campaign included construction, engineering, government, manufacturing, retail, media, and pharmaceutical sectors.

In the second campaign, victims were enticed to click on a file disguised as a legitimate executable in the root folder of a USB drive. Executing this file triggered an infection chain that led to the download of a shellcode-based backdoor named SNOWYDRIVE.

The malware not only copied itself to removable drives connected to infected systems but also performed various other operations, such as writing or deleting files, initiating file uploads, and executing reverse shell commands.

Recently, the Check Point Research Team uncovered a new USB-based attack campaign attributed to a China-based group called Camaro Dragon. 

The campaign specifically targeted a healthcare institution in Europe and involved the deployment of several updated versions of malware toolsets, including WispRider and HopperTick. It was reported that Camaro Dragon effectively utilized USB drives to launch attacks in Myanmar, South Korea, Great Britain, India, and Russia.

Organizations are strongly advised to restrict access to USB devices and to scan them comprehensively for malicious files before they are connected to corporate networks. 
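Part of that scanning can be automated with a simple triage pass over a mounted USB volume. The Python sketch below flags two lure patterns seen in these campaigns: double extensions (a document extension hiding a real executable one) and bare executables sitting in the drive root, as in the SNOWYDRIVE lure. The extension lists and heuristics are illustrative assumptions, not a vetted detection policy.

```python
# Hedged sketch: triage scan of a mounted USB volume for files that
# look like disguised executables. Extension lists are examples only.
import os

EXECUTABLE_EXTS = {".exe", ".scr", ".com", ".pif", ".bat", ".lnk", ".dll"}
DOCUMENT_EXTS = {".pdf", ".doc", ".docx", ".xls", ".xlsx", ".txt"}

def suspicious_files(mount_point):
    """Return (path, reason) pairs for files matching known lure patterns."""
    findings = []
    for root, _dirs, files in os.walk(mount_point):
        for name in files:
            base, ext = os.path.splitext(name.lower())
            if ext not in EXECUTABLE_EXTS:
                continue
            # "invoice.pdf.exe": document extension hiding an executable one.
            if os.path.splitext(base)[1] in DOCUMENT_EXTS:
                findings.append((os.path.join(root, name), "double extension"))
            # Bare executable in the drive root, as in the SNOWYDRIVE lure.
            elif root == mount_point:
                findings.append((os.path.join(root, name), "executable in root"))
    return findings
```

A check like this is only a first filter; it should complement, not replace, endpoint protection and device-control policies.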

Additionally, it is crucial for organizations to enhance their awareness and understanding of such attack campaigns in order to proactively defend against threats from the outset. This can be achieved by implementing a robust and automated Threat Intelligence Platform (TIP) that provides real-time tactical and technical insights into attacks.

ChatGPT: When Cybercrime Meets the Emerging Technologies


The immense capability of ChatGPT has left the entire globe abuzz. Indeed, it solves both practical and abstract problems, writes and debugs code, and even has the potential to aid with Alzheimer's disease screening. The OpenAI AI-powered chatbot, however, is at high risk of abuse, as is the case with many new technologies. 

How Can ChatGPT be Used Maliciously? 

Recently, researchers from Check Point Software discovered that ChatGPT could be utilized to create phishing emails. When combined with Codex, a natural language-to-code system by OpenAI, ChatGPT can develop and disseminate malicious code. 

According to Sergey Shykevich, threat intelligence group manager at Check Point Software, “Our researchers built a full malware infection chain starting from a phishing email to an Excel document that has malicious VBA [Visual Basic for Application] code. We can compile the whole malware to an executable file and run it in a machine.” 

He adds that ChatGPT primarily produces “much better and more convincing phishing and impersonation emails than real phishing emails we see in the wild now.” 

In regards to the same, Lorrie Faith Cranor, director and Bosch Distinguished Professor of the CyLab Security and Privacy Institute and FORE Systems Professor of computer science and of engineering and public policy at Carnegie Mellon University says, “I haven’t tried using ChatGPT to generate code, but I’ve seen some examples from others who have. It generates code that is not all that sophisticated, but some of it is actually runnable code[…]There are other AI tools out there for generating code, and they are all getting better every day. ChatGPT is probably better right now at generating text for humans, and may be particularly well suited for generating things like realistic spoofed emails.” 

Moreover, the researchers have also discovered hackers using ChatGPT to create malicious tools such as info-stealers and code for dark web marketplaces. 

What AI Tools are More Worrisome? 

Cranor says “I think to use these [AI] tools successfully today requires some technical knowledge, but I expect over time it will become easier to take the output from these tools and launch an attack[…]So while it is not clear that what the tools can do today is much more worrisome than human-developed tools that are widely distributed online, it won’t be long before these tools are developing more sophisticated attacks, with the ability to quickly generate large numbers of variants.” 

Furthermore, complications could as well arise from the inability to detect whether the code was created by utilizing ChatGPT. “There is no good way to pinpoint that a specific software, malware, or even phishing email was written by ChatGPT because there is no signature,” says Shykevich. 

What Could be the Solution? 

One approach OpenAI is exploring is to “watermark” the output of GPT models, which could later be used to determine whether a given piece of text was created by an AI or by a human. 
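OpenAI has not published the details of its watermarking scheme, so the Python sketch below only illustrates the general "green-list" idea from the research literature: the generator statistically biases its token choices toward a keyed subset of the vocabulary, and a detector checks whether an unusually high fraction of a text's tokens falls in that subset. All names, keys, and thresholds here are assumptions for illustration.

```python
# Conceptual "green-list" watermark detector (illustrative only -- not
# OpenAI's actual scheme). A secret key deterministically assigns each
# token to a "green" subset; watermarked generators favor green tokens,
# so watermarked text shows a green fraction well above chance (~0.5).
import hashlib

def is_green(token, key="demo-key", fraction=0.5):
    """Deterministically place `token` in the keyed green list."""
    h = hashlib.sha256((key + token).encode()).digest()
    return (h[0] / 255.0) < fraction

def green_fraction(tokens, key="demo-key"):
    """Fraction of tokens landing in the green list."""
    return sum(is_green(t, key) for t in tokens) / len(tokens)

def looks_watermarked(tokens, key="demo-key", threshold=0.7):
    """Human text sits near 0.5; a biased generator pushes well above it."""
    return green_fraction(tokens, key) >= threshold
```

The key property, and the limitation, is that detection requires knowing the key and having enough tokens for the statistical bias to stand out; short snippets or paraphrased text weaken the signal, which is consistent with Shykevich's point that there is currently no reliable signature.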

In order to safeguard companies and individuals from these AI-generated threats, Shykevich advises applying appropriate cybersecurity measures. Even where current safeguards remain effective, it is critical to keep upgrading and strengthening them. 

“Researchers are also working on ways to use AI to discover code vulnerabilities and detect attacks[…]Hopefully, advances on the defensive side will be able to keep up with advances on the attacker side, but that remains to be seen,” says Cranor. 

While ChatGPT and other AI-backed systems have the potential to fundamentally alter how individuals interact with technology, they also carry some risk, particularly when used in dangerous ways. 

“ChatGPT is a great technology and has the potential to democratize AI,” adds Shykevich. “AI was kind of a buzzy feature that only computer science or algorithmic specialists understood. Now, people who aren’t tech-savvy are starting to understand what AI is and trying to adopt it in their day-to-day. But the biggest question is how would you use it—and for what purposes?”