The network is built by the National Data Association and managed by the Ministry of Public Security's Data Innovation and Exploitation Center. It will serve as the primary verification layer for records such as supply-chain logs, school transcripts, and hospital records.
According to experts, NDAChain is based on a hybrid model, relying on a Proof-of-Authority consensus mechanism to ensure that only authorized nodes can validate transactions. It also adds zero-knowledge proofs to protect sensitive data while still verifying its authenticity. Officials say NDAChain can process between 1,200 and 3,600 transactions per second, throughput intended to support faster verification in logistics, e-government, and other areas.
The network has two main features. NDA DID offers digital IDs that integrate with Vietnam's existing VNeID framework, allowing users to verify their identities online when signing documents or using services. NDATrace, meanwhile, provides end-to-end product tracking based on GS1 and EBSI Trace standards: items are tagged with unique identifiers that can be read from RFID chips or QR codes, helping businesses prove product authenticity to overseas buyers and simplify recalls when problems arise.
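To make the traceability idea concrete, here is a minimal sketch of what a chain-anchored trace record could look like. It is an illustration only: the field names, the GS1-style identifier, and the hashing scheme are assumptions made for the example, not the actual NDATrace schema.

```python
# Hypothetical trace record for an NDATrace-style system. All field names and the
# hashing scheme are illustrative assumptions, not the real NDATrace data model.
import hashlib
import json
from dataclasses import dataclass, field, asdict

@dataclass
class TraceEvent:
    step: str        # e.g. "manufactured", "shipped", "received"
    actor: str       # organization handling the product at this step
    timestamp: str   # ISO 8601 time of the event

@dataclass
class TraceRecord:
    gtin: str        # GS1-style product identifier, the value encoded in the QR code or RFID tag
    batch: str
    events: list = field(default_factory=list)

    def digest(self) -> str:
        """Deterministic hash of the record; only this digest would need to be anchored on-chain."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

record = TraceRecord(gtin="08935001234567", batch="LOT-2025-07")
record.events.append(TraceEvent("manufactured", "Factory A", "2025-07-01T08:00:00Z"))
record.events.append(TraceEvent("shipped", "Carrier B", "2025-07-03T14:30:00Z"))
print(record.digest())  # a buyer scanning the tag can recompute this and compare it to the anchored value
```

In a setup like this, the full record stays off-chain with the supply-chain parties and only the digest is written to the ledger, which is broadly how permissioned traceability networks keep sensitive detail private while still making tampering detectable.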
NDAChain works as a “protective layer” for Vietnam’s digital infrastructure, built to scale as data volume expands. Digital records can be verified without needing personal details due to the added privacy tools. The permissioned setup also offers authorities more control over people joining the network. According to reports, total integration with the National Data Center will be completed by this year. The focus will then move towards local agencies and universities, where industry-specific Layer 3 apps are planned for 2026.
According to Vietnam Briefing, "in sectors such as food, pharmaceuticals, and health supplements, where counterfeit goods remain a persistent threat, NDAChain enables end-to-end product origin authentication. By tracing a product’s whole journey from manufacturer to end-consumer, businesses can enhance brand trust, reduce legal risk, and meet rising regulatory demands for transparency."
Flashpoint's Global Threat Intelligence Index report is based on more than 3.6 petabytes of data analyzed by the firm's researchers. According to the report, hackers stole credentials from 5.8 million compromised devices. The sharp rise is problematic because stolen credentials can give attackers access to organizational data even when accounts are protected by multi-factor authentication (MFA).
The report also includes findings that should concern security teams. Through June 2025, the firm identified more than 20,000 disclosed vulnerabilities, 12,200 of which do not appear in the National Vulnerability Database (NVD), meaning teams that rely on the NVD are left uninformed. Roughly 7,000 of these flaws have public exploits available, exposing organizations to severe risk.
The report notes: "The digital attack surface continues to expand, and the volume of disclosed vulnerabilities is growing at a record pace – up by a staggering 246% since February 2025. This explosion, coupled with a 179% increase in publicly available exploit code, intensifies the pressure on security teams. It's no longer feasible to triage and remediate every vulnerability."
Both trends feed ransomware attacks, since initial access most often comes through vulnerability exploitation or stolen credentials. Reported breaches have risen 179% since 2024, with manufacturing (22%), technology (18%), and retail (13%) hit hardest. The report also counts 3,104 data breaches in the first half of the year, linked to 9.5 billion exposed records.
Flashpoint reports that “Over the past four months, data breaches surged by 235%, with unauthorized access accounting for nearly 78% of all reported incidents. Data breaches are both the genesis and culmination of threat actor campaigns, serving as a source of continuous fuel for cybercrime activity.”
In June, the Identity Theft Resource Center (ITRC) warned that 2025 could become a record year for data breaches in the US.
The US Cybersecurity and Infrastructure Security Agency (CISA) has confirmed active exploitation of the CitrixBleed 2 vulnerability (CVE-2025-5777) in Citrix NetScaler ADC and Gateway, and has given federal agencies one day to patch the flaw. This unusually tight deadline is the first of its kind since CISA launched the Known Exploited Vulnerabilities (KEV) catalog, underscoring the severity of the attacks abusing the security gap.
CVE-2025-5777 is a critical memory-safety bug (an out-of-bounds memory read) that gives attackers unauthorized access to regions of memory that should be off limits. The flaw affects NetScaler devices configured as an AAA virtual server or a Gateway. Citrix patched the vulnerability in updates released on June 17.
Shortly afterward, security researcher Kevin Beaumont warned that the flaw was likely to be exploited if left unpatched, dubbing it "CitrixBleed 2" because of its similarities to the infamous CitrixBleed bug (CVE-2023-4966), which was widely abused in the wild by threat actors.
According to BleepingComputer, "The first warning of CitrixBleed 2 being exploited came from ReliaQuest on June 27. On July 7, security researchers at watchTowr and Horizon3 published proof-of-concept exploits (PoCs) for CVE-2025-5777, demonstrating how the flaw can be leveraged in attacks that steal user session tokens."
At the time, researchers had not observed signs of active exploitation. Soon after, threat actors began exploiting the bug at scale and became active on hacking forums, "discussing, working, testing, and publicly sharing feedback on PoCs for the Citrix Bleed 2 vulnerability," according to BleepingComputer. Attackers showed particular interest in how to use the available exploits effectively, and multiple exploits for the bug have since been published.
Now that CISA has confirmed widespread exploitation of CitrixBleed 2, threat actors have likely developed their own exploits from the recently released technical details. CISA advises organizations to "apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable."
Most of the victims were based in India, Argentina, Peru, Mexico, Colombia, Bolivia, and Ecuador, and some records date back to 2018. The leaked database also revealed the identity of the spyware operation's administrator, Omar Soca Charcov, a developer based in Uruguay.
Catwatchful is spyware that poses as a child-monitoring app, claiming to be "invisible and can not be detected," while uploading the victim's data to a dashboard accessible to the person who planted the app. The stolen data includes real-time location, the victim's photos, and messages. The app can also capture live ambient audio from the device's microphone and access both the front and rear cameras.
Catwatchful and similar apps are banned from app stores and depend on being downloaded and installed by someone with physical access to the victim's phone. Such apps are commonly known as "stalkerware" or "spouseware" because they are often used for illegal, non-consensual surveillance of romantic partners and spouses. The Catwatchful incident is the fifth and latest entry in this year's growing list of stalkerware operations that have been breached, hacked, or had their data exposed.
Daigle, who has previously uncovered stalkerware flaws, found that Catwatchful relies on a custom API, which the planted app uses to send data back to Catwatchful's servers, and on Google Firebase to host and store the stolen data.
According to TechRadar, the "data was stored on Google Firebase, sent via a custom API that was unauthenticated, resulting in open access to user and victim data." The report also notes that, although the operation's hosting had initially been suspended by HostGator, it was later restored via a temporary domain.
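To illustrate why an unauthenticated API results in open access, here is a small, deliberately hypothetical sketch. The endpoint below is a placeholder (it is not the real Catwatchful host or API), but it shows the core problem: the server hands back records to anyone who asks, because nothing in the request proves who is asking.

```python
# Illustrative sketch only. The host and path are hypothetical placeholders standing in
# for any unauthenticated API; they are not the real Catwatchful endpoints.
import requests

BASE = "https://api.example-stalkerware.invalid"  # hypothetical host

def fetch_victim_records(device_id: str) -> dict:
    # Note what is missing here: no API key, no OAuth token, no session cookie.
    # The server would return data purely because the request names a device ID.
    resp = requests.get(f"{BASE}/devices/{device_id}/records", timeout=10)
    resp.raise_for_status()
    return resp.json()

# A properly protected API would reject such a request with 401/403 unless the caller
# also presented a credential tied to an authorized account.
```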
Further investigation by the US government revealed that these actors were working to funnel money to the North Korean government, which uses the funds to run state operations and its weapons program. The US has imposed strict sanctions on North Korea that bar US companies from hiring North Korean nationals. In response, the threat actors create fake identities and use a range of tricks, such as VPNs, to obscure their real identities and locations so they can get hired without being caught.
More recently, the threat actors have adopted spoofing tactics such as voice-changing tools and AI-generated documents to appear credible. In one case, the scammers worked with an individual residing in New Jersey who set up shell companies to convince victims they were paying a legitimate local business; the same individual also helped overseas accomplices get recruited.
The campaign has now come to an end: the US Department of Justice (DoJ) arrested and charged a US national, Zhenxing "Danny" Wang, with operating the years-long scam, which earned more than $5 million. The agency also charged eight more people: six Chinese and two Taiwanese nationals. The accused face charges of money laundering, identity theft, hacking, sanctions violations, and conspiracy to commit wire fraud.
In addition to drawing what Microsoft describes as hefty salaries from these jobs, the individuals gain access to private organizational data, which they can exploit by stealing sensitive information and extorting the company. One of the largest and most infamous hacking gangs involved is the North Korean state-sponsored group Lazarus. According to experts, the gang has stolen billions of dollars for the North Korean government through similar schemes; one long-running campaign built around fake job offers is known as "Operation DreamJob."
"To disrupt this activity and protect our customers, we’ve suspended 3,000 known Microsoft consumer accounts (Outlook/Hotmail) created by North Korean IT workers," said Microsoft.
The Hunters International Ransomware-as-a-Service (RaaS) operation has recently announced that it is shutting down its operation and will provide free decryptors to help targets recover their data without paying a ransom.
"After careful consideration and in light of recent developments, we have decided to close the Hunters International project. This decision was not made lightly, and we recognize the impact it has on the organizations we have interacted with," the cybercrime gang said.
As a goodwill gesture to victims of its previous operations, the gang is offering to help them recover their data without requiring ransom payments. It has also removed all entries from its extortion portal and says organizations whose systems were encrypted in Hunters International ransomware attacks can request decryption assistance and recovery guidance on the group's official website.
The gang has not explained the "recent developments" it referred to, but the announcement follows a November 17 statement in which Hunters International said it would soon shut down due to increased law enforcement scrutiny and financial losses.
In April, Group-IB researchers said the group was rebranding to focus on extortion-only, data-theft attacks and had launched "World Leaks," a new extortion-only operation. According to Group-IB, "unlike Hunters International, which combined encryption with extortion, World Leaks operates as an extortion-only group using a custom-built exfiltration tool." The new tool appears to be an advanced version of the Storage Software exfiltration tool used by Hunters International's ransomware affiliates.
Hunters International surfaced in 2023, and cybersecurity experts flagged it as a likely rebrand of the Hive ransomware operation because of code similarities. The gang's ransomware targeted Windows, Linux, FreeBSD, SunOS, and VMware ESXi servers, and over the past two years it attacked businesses of all sizes, demanding ransoms reaching into the millions of dollars. The gang was responsible for roughly 300 attacks worldwide. Notable victims include the U.S. Marshals Service, Tata Technologies, Japanese optics giant Hoya, U.S. Navy contractor Austal USA, Oklahoma's largest not-for-profit health care network Integris Health, and North American car dealership group AutoCanada. Last year, Hunters International attacked the Fred Hutchinson Cancer Center and threatened to leak the stolen data of more than 800,000 cancer patients if a ransom was not paid.
Generative AI (GenAI) is one of today's most exciting technologies, offering the potential to improve productivity, creativity, and customer service. But for many companies it becomes like a forgotten gym membership: enthusiastically started, then quickly abandoned.
So how can businesses make sure they get real value from GenAI instead of falling into the trap of wasted effort? Success lies in four key steps: establishing strong governance, choosing the right partners, launching responsibly, and tracking the impact.
1. Set Up a Strong Governance Framework
Before using GenAI, businesses must create clear rules and processes to use it safely and responsibly. This is called a governance framework. It helps prevent problems like privacy violations, data leaks, or misuse of AI tools.
This framework should be created by a group of leaders from different departments—legal, compliance, cybersecurity, data, and business operations. Since AI can affect many parts of a company, it’s important that leadership supports and oversees these efforts.
It’s also crucial to manage data properly. Many companies forget to prepare their data for AI tools. Data should be accurate, anonymous where needed, and well-organized to avoid security risks and legal trouble.
Risk management must be proactive. This includes reducing bias in AI systems, ensuring data security, staying within legal boundaries, and preventing misuse of intellectual property.
2. Choose Technology Partners Carefully
GenAI tools are not like regular software. When selecting a provider, businesses should look beyond basic features and check how the provider handles data, ownership rights, and ethical practices. A lack of transparency is a warning sign.
Companies should know where their data is stored, who can access it, and who owns the results produced by the AI tool. It’s also important to avoid being trapped in systems that make it difficult to switch vendors later. Always review agreements carefully, especially around copyright and data rights.
3. Launch With Care and Strategy
Once planning is complete, the focus should shift to thoughtful execution. Start with small projects that can demonstrate value quickly. Choose use cases where GenAI can clearly improve efficiency or outcomes.
Data used in GenAI must be organized and secured so that no sensitive information is exposed. Also, employees must be trained to work with these tools effectively. Skills like writing proper prompts and verifying AI-generated content are essential.
To build trust and encourage adoption, leaders should clearly explain why GenAI is being introduced and how it will help employees rather than replace them. GenAI should support teams and improve their performance, not reduce jobs.
4. Track Success and Business Value
Finally, companies need to measure the results. Success isn't just about using the technology; it's about making a real impact.
Set clear goals and use simple metrics, like productivity improvements, customer feedback, or employee satisfaction. GenAI should lead to better outcomes for both teams and clients, not just technical performance.
To move beyond the GenAI buzz and unlock real value, companies must approach it with clear goals, responsible use, and long-term thinking. With the right foundation, GenAI can be more than just hype; it can be a lasting asset for innovation and growth.
A group of hackers has been carrying out attacks against businesses by misusing a tool that looks like it belongs to Salesforce, according to information shared by Google’s threat researchers. These attacks have been going on for several months and have mainly focused on stealing private company information and later pressuring the victims for money.
How the Attack Happens
The hackers have been contacting employees by phone while pretending to work for their company’s technical support team. Through these phone calls, the attackers convince employees to share important login details.
After collecting this information, the hackers guide the employees to the page used to set up apps connected to Salesforce. Once there, the attackers use a maliciously modified version of a Salesforce data tool to quietly gain access to the company's system and pull out sensitive data.
In many situations, the hackers don’t just stop at Salesforce. They continue to explore other parts of the company’s cloud accounts and sometimes reach deeper into the company’s private networks.
Salesforce’s Advice to Users
Earlier this year, Salesforce warned people about these kinds of scams. The company has made it clear that there is no known fault or security hole in the Salesforce platform itself. The problem is that the attackers are successfully tricking people by pretending to be trusted contacts.
Salesforce has recommended that users improve their account protection by turning on extra security steps like multi-factor authentication, carefully controlling who has permission to access sensitive areas, and limiting which locations can log into the system.
Unclear Why Salesforce is the Target
It is still unknown why the attackers are focusing on Salesforce tools or how they became skilled in using them. Google’s research team has not seen other hacker groups using this specific method so far.
Interestingly, the attackers do not all seem to have the same level of experience. Some are very skilled at using the fake Salesforce tool, while others seem less prepared. Experts believe that these skills likely come from past activities or learning from earlier attacks.
Hackers Delay Their Demands
In many cases, the hackers wait for several months after breaking into a company before asking for money. Some attackers claim they are working with outside groups, but researchers are still studying these possible connections.
A Rising Social Engineering Threat
This type of phone-based trick is becoming more common as hackers rely on social engineering — which means they focus on manipulating people rather than directly breaking into systems. Google’s researchers noted that while there are some similarities between these hackers and known criminal groups, this particular group appears to be separate.
Modern workplaces are beginning to track more than just employee hours or tasks. Today, many employers are collecting very personal information about workers' bodies and behaviors. This includes data like fingerprints, eye scans, heart rates, sleeping patterns, and even the way someone walks or types. All of this is made possible by tools like wearable devices, security cameras, and AI-powered monitoring systems.
The reason companies use these methods varies. Some want to increase workplace safety. Others hope to improve employee health or get discounts from insurance providers. Many believe that collecting this kind of data helps boost productivity and efficiency. At first glance, these goals might sound useful. But there are real risks to both workers and companies that many overlook.
New research shows that being watched in such personal ways can lead to fear and discomfort. Employees may feel anxious or unsure about their future at the company. They worry their job might be at risk if the data is misunderstood or misused. This sense of insecurity can impact mental health, lower job satisfaction, and make people less motivated to perform well.
There have already been legal consequences. In one major case, a railway company had to pay millions to settle a lawsuit after workers claimed their fingerprints were collected without consent. Other large companies have also faced similar claims. The common issue in these cases is the lack of clear communication and proper approval from employees.
Even when health programs are framed as helpful, they can backfire. For example, some workers are offered lower health insurance costs if they participate in screenings or share fitness data. But not everyone feels comfortable handing over private health details. Some feel pressured to agree just to avoid being judged or left out. In certain cases, those who chose not to participate were penalized. One university faced a lawsuit for this and later agreed to stop the program after public backlash.
Monitoring employees’ behavior can also affect how they work. For instance, in one warehouse, cameras were installed to track walking routes and improve safety. However, workers felt watched and lost the freedom to help each other or move around in ways that felt natural. Instead of making the workplace better, the system made workers feel less trusted.
Laws are slowly catching up, but in many places, current rules don’t fully protect workers from this level of personal tracking. Just because something is technically legal does not mean it is ethical or wise.
Before collecting sensitive data, companies must ask a simple but powerful question: is this really necessary? If the benefits only go to the employer, while workers feel stressed or powerless, the program might do more harm than good. In many cases, choosing not to collect such data is the better and more respectful option.
Many people are changing their social media habits and opting out of services. Facebook has seen a wave of users leaving the platform after the announcement in March that Meta was ending independent fact-checking. Fact-checking has been replaced with community notes, which let users add context to potentially false or misleading posts.
Users with years of photos and posts on Facebook are often unsure how to collect their data before removing their accounts. If that sounds like you, this post will help you delete Facebook permanently while taking all your information with you on the way out.
If you no longer want to be on Facebook, deleting your account is the only way to completely remove yourself from the platform. If you are not sure, deactivating your account lets you take a break from Facebook without deleting it.
Make sure to remove third-party Facebook logins before deleting your account.
Third-party apps like DoorDash and Spotify allow you to log in using your Facebook account. This lets you sign in without remembering another password, but if you plan to delete Facebook, you will need to update your login settings first, because once your account is gone there will be no Facebook account to log in through.
Fortunately, there is a simple way to find which sites and applications are connected to your Facebook account and disconnect them before removing it. Once you disconnect those websites and applications from Facebook, you will need to adjust how you log in to them.
For each application or website, set a new password or passkey, or switch to a single sign-on option such as Google.
If you want to step away from Facebook, you have two choices: delete your account permanently, or deactivate it temporarily.
A recently discovered malware strain called PumaBot is putting many internet-connected devices at risk. The malicious software targets smart systems such as surveillance cameras, especially those running the Linux operating system. It sneaks in by guessing weak passwords and then quietly takes over the system.
How PumaBot Finds Its Victims
Unlike many other threats that randomly scan the internet looking for weak points, PumaBot follows specific instructions from a remote command center. It receives a list of selected device addresses (known as IPs) from its control server and begins attempting to log in using common usernames and passwords through SSH — a tool that lets people access devices remotely.
Experts believe it may be going after security and traffic camera systems that belong to a company called Pumatronix, based on clues found in the malware’s code.
What Happens After It Breaks In
Once PumaBot gets into a device, it runs a quick check to make sure it's not inside a fake system set up by researchers (known as a honeypot). If it passes that test, the malware places a file on the device and creates a special service to make sure it stays active, even after the device is restarted.
To keep the door open for future access, PumaBot adds its own secret login credentials. This way, the hackers can return to the device later, even if some files are removed.
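For defenders, the persistence tricks described above (a planted systemd service and extra SSH credentials) are also the easiest things to look for. The sketch below is a minimal example under stated assumptions: it assumes a Linux device using systemd and OpenSSH, and the "known good" baselines are placeholders you would replace with your own.

```python
# Minimal defensive sketch for spotting PumaBot-style persistence artifacts.
# Assumptions: a Linux host with systemd and OpenSSH; the baselines below are placeholders.
import glob
import os

KNOWN_UNITS = {"ssh.service", "cron.service"}   # placeholder: your expected service units
KNOWN_KEY_COMMENTS = {"admin@example.com"}      # placeholder: comments of keys you provisioned

def unexpected_systemd_units():
    """List service unit files that are not in the expected baseline."""
    units = glob.glob("/etc/systemd/system/*.service")
    return [u for u in units if os.path.basename(u) not in KNOWN_UNITS]

def unexpected_ssh_keys(path="/root/.ssh/authorized_keys"):
    """List authorized_keys entries whose trailing comment is not in the baseline."""
    findings = []
    if os.path.exists(path):
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                parts = line.split()
                comment = parts[-1] if len(parts) >= 3 else ""
                if comment not in KNOWN_KEY_COMMENTS:
                    findings.append(line[:60] + "...")
    return findings

if __name__ == "__main__":
    print("Service units not in baseline:", unexpected_systemd_units())
    print("SSH keys not in baseline:", unexpected_ssh_keys())
```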
What the Malware Can Do
After it takes control, PumaBot can be told to:
• Steal data from the device
• Install other harmful software
• Collect login details from users
• Send stolen information back to the attackers
One tool it uses captures usernames and passwords typed into the device, saves them in a hidden file, and sends them to the hackers. Once the data is taken, the malware deletes the file to cover its tracks.
Why PumaBot Is Concerning
PumaBot is different from other malware. Many botnets simply use infected devices to send spam or run large-scale attacks. But PumaBot seems more focused and selective. Instead of causing quick damage, it slowly builds access to sensitive networks — which could lead to bigger security breaches later.
How to Protect Your Devices
If you use internet-connected gadgets like cameras or smart appliances, follow these safety steps:
1. Change factory-set passwords immediately
2. Keep device software updated
3. Use firewalls to block strange access
4. Put smart devices on a different Wi-Fi network than your main systems
By following these tips, you can lower your chances of being affected by malware like PumaBot.
Commvault, a well-known company that helps other businesses protect and manage their digital data, recently shared that it had experienced a cyberattack. However, the company clarified that none of the backup data it stores for customers was accessed or harmed during the incident.
The breach was discovered in February 2025 after Microsoft alerted Commvault about suspicious activity taking place in its Azure cloud services. After being notified, the company began investigating the issue and found that a very small group of customers had been affected. Importantly, Commvault stated that its systems remained up and running, and there was no major impact on its day-to-day operations.
Danielle Sheer, Commvault’s Chief Trust Officer, said the company is confident that hackers were not able to view or steal customer backup data. She also confirmed that Commvault is cooperating with government cybersecurity teams, including the FBI and CISA, and is receiving support from two independent cybersecurity firms.
Details About the Vulnerability
It was discovered that the attackers gained access by exploiting a weakness in Commvault's web server software. This flaw, now fixed, allowed hackers with limited permissions to install harmful software on affected systems. The vulnerability, tracked as CVE-2025-3928, was unknown and unpatched at the time of the breach, making it what experts call a "zero-day."
Because of the seriousness of this bug, CISA (Cybersecurity and Infrastructure Security Agency) added it to a list of known risks that hackers are actively exploiting. U.S. federal agencies have been instructed to update their Commvault software and fix the issue by May 19, 2025.
Steps Recommended to Stay Safe
To help customers stay protected, Commvault suggested the following steps:
• Use conditional access controls for all cloud-based apps linked to Microsoft services.
• Check sign-in logs often to see if anyone is trying to log in from suspicious locations (a small example of automating this check appears after this list).
• Update secret access credentials between Commvault and Azure every three months.
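As a rough illustration of the sign-in log recommendation, the sketch below pulls recent sign-in events from Microsoft Graph and flags logins from unexpected countries. It is a sketch under stated assumptions: you already hold an access token with the AuditLog.Read.All permission (token acquisition is omitted), and the "expected countries" baseline is a placeholder to adapt to your environment.

```python
# Hedged example: reviewing recent Entra ID (Azure AD) sign-in logs via the Microsoft
# Graph /auditLogs/signIns endpoint. The token and the EXPECTED_COUNTRIES baseline are
# placeholders; tune the heuristic to your own tenant.
import requests

GRAPH_SIGNINS = "https://graph.microsoft.com/v1.0/auditLogs/signIns"
TOKEN = "<access-token-with-AuditLog.Read.All>"   # placeholder, obtained via your usual OAuth flow
EXPECTED_COUNTRIES = {"US"}                        # placeholder baseline

def recent_signins(top=50):
    resp = requests.get(
        GRAPH_SIGNINS,
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"$top": top, "$orderby": "createdDateTime desc"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("value", [])

for entry in recent_signins():
    country = (entry.get("location") or {}).get("countryOrRegion")
    if country and country not in EXPECTED_COUNTRIES:
        print(entry["createdDateTime"], entry.get("userPrincipalName"),
              entry.get("ipAddress"), country)
```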
The company urged users to report any strange behavior right away so its support team can act quickly to reduce any damage.
Although this was a serious incident, Commvault’s response was quick and effective. No backup data was stolen, and the affected software has been patched. This event is a reminder to all businesses to regularly check for vulnerabilities and keep their systems up to date to prevent future attacks.