A new information-stealing malware, written in the Rust programming language, has surfaced as a major threat to users of Chromium-based browsers such as Google Chrome, Microsoft Edge, and others.
Dubbed “RustStealer” by cybersecurity experts, this advanced malware is designed to extract sensitive data, including login credentials, cookies, and browsing history, from infected systems.
The growing use of Rust, a language known for memory safety and performance, signals a shift toward more resilient and harder-to-detect threats, as Rust binaries often evade traditional antivirus solutions due to their compiled nature and relative rarity in malware ecosystems.
RustStealer operates with a high degree of stealth, using sophisticated obfuscation techniques to evade endpoint security tools. Initial infection vectors point to phishing campaigns, in which malicious attachments or links in seemingly legitimate emails trick users into downloading the payload.
Once executed, the malware establishes persistence via registry modifications or scheduled tasks, ensuring it remains active even after the system reboots.
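To make that persistence step concrete, the defensive sketch below (an illustration under assumptions, not RustStealer's actual code) uses the third-party winreg crate to list the autorun entries under the HKCU Run key on Windows, one of the registry locations such malware typically abuses; unfamiliar entries found there are worth investigating.

```rust
// Illustrative, defensive sketch: enumerate autorun entries under the
// HKCU "Run" key, a registry location commonly abused for persistence.
// Assumes the third-party `winreg` crate and a Windows host; this is not
// RustStealer's code, only a way to inspect what such malware would add.
use winreg::enums::HKEY_CURRENT_USER;
use winreg::RegKey;

fn main() -> std::io::Result<()> {
    let hkcu = RegKey::predef(HKEY_CURRENT_USER);
    let run = hkcu.open_subkey(r"Software\Microsoft\Windows\CurrentVersion\Run")?;

    // Each value name maps to a command launched at every logon.
    for entry in run.enum_values() {
        let (name, _value) = entry?;
        println!("autorun entry: {name}");
    }
    Ok(())
}
```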
Its primary focus is Chromium-based browsers, abusing the accessibility of unencrypted information stored in browser profiles to harvest session tokens, usernames, and passwords.
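To illustrate why that profile data is so attractive to infostealers, here is a minimal sketch (an assumed example, not code from RustStealer) that uses the rusqlite crate to read saved-login metadata from a copy of a Chromium profile's "Login Data" SQLite database; the stored passwords themselves are encrypted with OS-level key material and are not touched here.

```rust
// Minimal sketch: read saved-login metadata from a *copy* of a Chromium
// profile's "Login Data" SQLite database. Assumes the `rusqlite` crate and
// a hypothetical file path; passwords stay encrypted and are not read.
use rusqlite::Connection;

fn main() -> rusqlite::Result<()> {
    // The live database is locked by the browser, so a copy is assumed.
    let conn = Connection::open("Login Data.copy")?;

    let mut stmt = conn.prepare("SELECT origin_url, username_value FROM logins")?;
    let rows = stmt.query_map([], |row| {
        Ok((row.get::<_, String>(0)?, row.get::<_, String>(1)?))
    })?;

    for row in rows {
        let (url, user) = row?;
        println!("{url} -> {user}");
    }
    Ok(())
}
```

Because this data sits in a well-known, predictable location inside the profile directory, an infostealer only needs file access to collect it.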
In addition, RustStealer has been found to exfiltrate data to remote command-and-control (C2) servers over encrypted communication channels, making detection by network monitoring tools such as Wireshark more challenging.
Experts have also observed its ability to target cryptocurrency wallet extensions, exposing users who manage digital assets through browser plugins to additional risk. This multi-faceted approach highlights the malware’s aim of maximizing data theft while minimizing the chances of early detection, a pattern similar to advanced persistent threats (APTs).
What sets RustStealer apart is its modular design, which lets attackers update its capabilities remotely. This adaptability suggests that future variants could integrate functionality such as keylogging or ransomware components, intensifying the threat over time.
The use of Rust also hampers reverse-engineering efforts, as the language’s compiled output is harder to decompile than interpreted languages such as Python that are common in older malware strains.
Organizations are advised to remain vigilant by deploying strong anti-phishing protections, keeping browser software up to date, and using endpoint detection and response (EDR) solutions to flag suspicious behavior.
Cyberattacks aren’t what they used to be. Instead of one group planning and carrying out an entire attack, today’s hackers are breaking the process into parts and handing each step to different teams. This division of labor, now common in cybercrime, is making it more difficult for security experts to understand and stop attacks.
In the past, cybersecurity analysts looked at threats by studying them as single operations done by one group with one goal. But that method is no longer enough. These days, many attackers specialize in just one part of an attack—like finding a way into a system, creating malware, or demanding money—and then pass on the next stage to someone else.
To better handle this shift, researchers from Cisco Talos, a cybersecurity team, have proposed updating an older method called the Diamond Model. This model originally focused on four parts of a cyberattack: the attacker, the target, the tools used, and the systems involved. The new idea is to add a fifth layer that shows how different hacker groups are connected and work together, even if they don’t share the same goals.
By tracking relationships between groups, security teams can better understand who is doing what, avoid mistakes when identifying attackers, and spot patterns across different incidents. This helps them respond more accurately and efficiently.
The idea of cybercriminals selling services isn’t new. For years, online forums have allowed criminals to buy and sell services—like renting out access to hacked systems or offering ransomware as a package. Some of these relationships are short-term, while others involve long-term partnerships where attackers work closely over time.
In one recent case, a group called ToyMaker focused only on breaking into systems. They then passed that access to another group known as Cactus, which launched a ransomware attack. This type of teamwork shows how attackers are now outsourcing parts of their operations, which makes it harder for investigators to pin down who’s responsible.
Other companies, like Elastic and Google’s cyber threat teams, have also started adapting their systems to deal with this trend. Google, for example, now uses separate labels to track what each group does and what motivates them—whether it's financial gain, political beliefs, or personal reasons. This helps avoid confusion when different groups work together for different reasons.
As cybercriminals continue to specialize, defenders will need smarter tools and better models to keep up. Understanding how hackers divide tasks and form networks may be the key to staying one step ahead in this ever-changing digital battlefield.
Many people don't realize how much of their personal data is floating around the internet. Even if you're careful and don’t use the internet much, your information, such as your name, address, phone number, or email, could still be listed on various websites. This can lead to annoying spam or, in serious cases, scams and fraud.
To help people become aware of this, ExpressVPN has created a free tool that lets you check where your personal information might be available online.
How the Tool Works
Using the tool is easy. You just enter your first and last name, age, city, and state. Once done, the tool scans 68 websites that collect and sell user data. These are called data broker sites.
It then shows whether your details, such as phone number, email address, location, or names of your relatives, appear on those sites. For example, one person searched their legal name and only one result came up. But when they searched the name they usually use online, many results appeared. This shows that the more you interact online, the more your data might be exposed.
Ways to Remove Your Data
The scan is free, but if you want the tool to remove your data, it offers a paid option. However, there are free ways to remove your information by yourself.
Most data broker sites have a page where you can ask them to delete your data. These pages are not always easy to find and often have names like “Opt-Out” or “Do Not Sell My Info.” But they are available and do work if you take the time to fill them out.
You can also use a feature from Google that allows you to request the removal of your personal data from its search results. This won’t delete the information from the original site, but it will make it harder for others to find it through a search engine. You can search for your name along with the site’s name and then ask Google to remove the result.
Other Tools That Can Help
If you don’t want to do this manually, there are paid services that handle the removal for you. These tools usually cost around $8 per month and can send deletion requests to hundreds of data broker sites.
It’s important to know what personal information of yours is available online. With this free tool from ExpressVPN, you can quickly check and take steps to protect your privacy. Whether you choose to handle removals yourself or use a service, taking action is a smart step toward keeping your data safe.
Google will soon launch its Gemini AI chatbot for children under the age of 13 who have parent-managed Google accounts. The move comes as tech companies compete to attract young users with AI tools. According to an email sent to the parent of an 8-year-old, Gemini will soon be available to the child, meaning the child can use it to ask questions, get homework help, and create stories.
The chatbot will be available to children whose guardians use Family Link, a Google feature that lets families set up Gmail and opt in to services like YouTube for their children. To register a child’s account, the parent provides the company with the child’s personal information, such as name and date of birth.
According to Google spokesperson Karl Ryan, Gemini has specific guardrails for younger users to prevent the chatbot from creating unsafe or harmful content. If a child with a Family Link account uses Gemini, the company will not use that data to train its AI models.
Gemini for children could drive chatbot use among vulnerable populations as companies, colleges, schools, and others grapple with the effects of popular generative AI technology. These systems are trained on massive data sets to produce human-like text and realistic images and videos. Google and other AI chatbot developers are locked in fierce competition for young users’ attention.
Recently, President Donald Trump urged schools to embrace such tools for teaching and learning. Millions of teens are already using chatbots for study help, virtual companions, and writing coaches. Experts have warned that chatbots could pose serious threats to child safety.
The bots are known to sometimes make things up. UNICEF and other children's advocacy groups have found that AI systems can misinform, manipulate, and confuse young children who may face difficulties understanding that the chatbots are not humans.
According to UNICEF’s global research office, “Generative AI has produced dangerous content,” posing risks for children. Google has acknowledged some risks, cautioning parents that “Gemini can make mistakes” and suggesting they “help your child think critically” about the chatbot.
Smartphones ask for user confirmation before allowing data transfer over USB, but this defense is not enough to protect against “juice-jacking,” a hacking technique that manipulates charging stations to install malicious code, steal data, or gain access to a device while it is plugged in. Cybersecurity researchers have discovered a serious flaw in this protection that hackers can exploit with little effort.
According to experts, hackers can now use a new method called “choice jacking” to have data access to a smartphone confirmed without the user realizing it.
First, the malicious charging station presents itself as a USB keyboard when connected. Then, using USB Power Delivery, it performs a “USB PD Data Role Swap,” establishes a Bluetooth connection, triggers the file-transfer consent pop-up, and approves the permission itself while acting as a Bluetooth keyboard.
In this way, the charging station bypasses the device’s protection mechanism, which is meant to guard users against attacks via USB peripherals. The consequences can be serious if the attacker gains access to all files and personal data stored on the smartphone, including data that can be used to hijack accounts.
Researchers at Graz University of Technology tested this technique on devices from numerous manufacturers, including Samsung, the second-largest smartphone vendor after Apple. All of the tested smartphones allowed data transfer while the screen was unlocked.
Although smartphone manufacturers are aware of the problem, safeguards against juice-jacking remain insufficient. Only Google and Apple have implemented a fix, which requires users to enter their PIN or password before a connected device is authorized and data transfer can begin. Other manufacturers have yet to deliver effective protections against this issue.
Leaving USB debugging enabled on a smartphone adds further risk: it lets attackers reach the device through the Android Debug Bridge (ADB), install their own apps, execute files, and generally operate with elevated access.
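As a rough illustration of the access ADB grants once debugging is enabled and a host has been authorized, the sketch below shells out to the standard adb command-line tool (assumed to be installed) to read the debugging setting and count installed packages; output details vary by Android version.

```rust
// Sketch of the visibility ADB gives an authorized host. Assumes the `adb`
// CLI is installed and the connected device has already trusted this machine;
// this is a diagnostic illustration, not an attack tool.
use std::process::Command;

fn adb(args: &[&str]) -> std::io::Result<String> {
    let out = Command::new("adb").args(args).output()?;
    Ok(String::from_utf8_lossy(&out.stdout).into_owned())
}

fn main() -> std::io::Result<()> {
    // "1" means USB debugging is switched on.
    let enabled = adb(&["shell", "settings", "get", "global", "adb_enabled"])?;
    println!("adb_enabled = {}", enabled.trim());

    // An authorized host can enumerate (and install or launch) apps at will.
    let packages = adb(&["shell", "pm", "list", "packages"])?;
    println!("{} packages visible", packages.lines().count());
    Ok(())
}
```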
The simplest way to protect against juice-jacking through USB charging stations is to avoid public charging stations altogether, especially those in busy areas such as airports and shopping malls, which are the most exposed.
Users are also advised to carry their own power banks when traveling and to keep their smartphones updated.
The main highlight of Mandiant’s M-Trends report is that hackers are seizing every opportunity to advance their goals, such as using infostealer malware to steal credentials. Another trend is the targeting of unsecured data repositories left exposed by poor security hygiene.
Hackers are also exploiting fractures and risks that surface when an organization takes its data to the cloud. “In 2024, Mandiant initiated 83 campaigns and five global events and continued to track activity identified in previous years. These campaigns affected every industry vertical and 73 countries across six continents,” the report said.
Ransomware-related attacks accounted for 21% of all intrusions in 2024 and made up almost two-thirds of cases tied to monetization tactics. This is in addition to data theft, email hacks, cryptocurrency scams, and North Korean fake job campaigns, all aimed at extracting money from targets.
Exploits were the most common initial infection vector at 33%, followed by stolen credentials at 16%, phishing at 14%, web compromises at 9%, and prior compromises at 8%.
Finance was the most targeted industry, accounting for more than 17% of attacks, followed closely by business and professional services (11%), critical industries such as high tech (10%), government (10%), and healthcare (9%).
Experts highlighted the broad spread of targeted industries, suggesting that any organization can be hit by state-sponsored attacks, whether politically or financially motivated.
Stuart McKenzie, Managing Director of Mandiant Consulting EMEA, said: “Financially motivated attacks are still the leading category. While ransomware, data theft, and multifaceted extortion are and will continue to be significant global cybercrime concerns, we are also tracking the rise in the adoption of infostealer malware and the developing exploitation of Web3 technologies, including cryptocurrencies.”
He also stressed that the “increasing sophistication and automation offered by artificial intelligence are further exacerbating these threats by enabling more targeted, evasive, and widespread attacks. Organizations need to proactively gather insights to stay ahead of these trends and implement processes and tools to continuously collect and analyze threat intelligence from diverse sources.”
Google’s Gmail is now offering two new upgrades, but here’s the catch: they don’t work well together. This means Gmail’s billions of users are being asked to pick a side, better privacy or smarter features, and that decision could affect how their emails are handled in the future.
Let’s break it down. One upgrade focuses on stronger protection for your emails, a form of advanced encryption. This keeps your emails private; even Google won’t be able to read them. The second upgrade brings in artificial intelligence tools to improve how you search and use Gmail, promising quicker, more helpful results.
But there’s a problem. If your emails are fully protected, Gmail’s AI tools can’t read them to include in its search results. So, if you choose privacy, you might lose out on the benefits of smarter searches. On the other hand, if you want AI help, you’ll need to let Google access more of your email content.
This challenge isn’t unique to Gmail. Many tech companies are trying to combine stronger security with AI-powered features, but the two don’t always work together. Apple tried solving this with a system that processes data securely on your device. However, delays in rolling out their new AI tools have made their solution uncertain for now.
Some reports explain the choice like this: if you turn on AI features, Google will use your data to power smart tools. If you turn it off, you’ll have better privacy, but lose some useful options. The real issue is that opting out isn’t always easy. Some settings may remain active unless you manually turn them off, and fully securing your emails still isn’t simple.
Even when extra security is enabled, email systems have limitations. For example, Apple’s iCloud Mail doesn’t use full end-to-end encryption because it must work with global email networks. So even private emails may not be completely safe.
This issue goes beyond Gmail. Other platforms are facing similar challenges. WhatsApp, for example, added a privacy mode that blocks saving chats and media, but also limits AI-related features. OpenAI’s ChatGPT can now remember what you told it in past conversations, which may feel helpful but also raises questions about how your personal data is being stored.
In the end, users need to think carefully. AI tools can make email more useful, but they come with trade-offs. Email has never been a perfectly secure space, and with smarter AI, new threats like scams and data misuse may grow. That’s why it’s important to weigh both sides before making a choice.