
Scammers Can Pinpoint Your Exact Location With a Single Click, Warns Hacker


 

With the advent of the digital age, crime has steadily migrated from dark alleys to cyberspace, creating an entirely new type of criminal enterprise that thrives on technology. The adage that "crime doesn't pay" once seemed self-evident; it now stands in stark contrast with the reality of cybercrime, which has evolved into a lucrative and relatively low-risk form of illegal activity. 

While traditional crime attracts a greater degree of exposure and punishment, cybercriminals enjoy relative impunity, exploiting gaps in digital security to make huge profits while suffering only minimal repercussions. A study by the security firm Bromium points to a significant underground cyber economy, with elite hackers earning up to $2 million per year, mid-level cybercriminals around $900,000, and even entry-level hackers about $42,000. 

As cybercrime has grown, it has developed into a booming global industry that attracts opportunists looking to exploit hyperconnectedness. Several deceptive tactics are currently proliferating online, and one of the most alarming is the false message "Hacker is tracking you". 

Delivered through rogue websites, this fabricated message attempts to create panic by claiming that a hacker has compromised the victim's device and is continuously monitoring their computer activity. An urgent warning tells the victim not to close the page, while a countdown timer threatens to expose their identity, browsing history, and even photos allegedly taken with the front camera to their entire contact list. 

In reality, the website behind the warning has no capability to detect threats on a user's device; the alert is entirely fabricated. Its goal is to trick users into downloading or installing software marketed as protective, often disguised as antivirus tools or performance enhancers, which turns out to be malicious. 

Such downloads frequently turn out to be Potentially Unwanted Applications (PUAs) such as adware, browser hijackers, and other malicious software. These fraudulent websites are typically reached through mistyped web addresses, redirects from unreliable sites, or intrusive advertisements. 

Users who fall victim to these schemes risk not only infections but also privacy invasions, financial losses, and identity theft. Personal data, meanwhile, has become increasingly valuable to cybercriminals, in many cases more lucrative than direct financial theft. 

Personal details, browsing patterns, and other identifiers are coveted commodities on the underground market, fueling a variety of criminal activities that extend far beyond monetary scams. In a recent article, an ethical hacker claimed that such information can often be extracted in only a few clicks, illustrating how easily an unsuspecting user can be compromised. 

Despite significant advances in device security, cybercriminals continue to devise inventive ways of evading safeguards and tricking individuals into revealing sensitive information. One technique gaining momentum is the phishing tactic known as "quishing", in which QR codes are used to lure victims into malicious traps. 

Fraudsters have even taken to attaching QR codes to unsolicited packages, preying on curiosity or confusion to obtain a scan. Experts believe that even simpler techniques are becoming more common, ensnaring a growing number of users who underestimate how sophisticated and persistent these scams can be. 

Beyond scams and phishing attempts, hackers and organisations alike have access to a wide range of tools that can track a person's movements with alarming precision. Malicious software such as spyware or stalkerware can penetrate mobile devices, transmit location data, and enable unauthorised access to microphones and cameras, all while operating undetected. 

These infections often hide deep within compromised apps, so robust antivirus solutions are usually needed to remove them. Not all tracking is done by malicious actors, however: legitimate applications such as Find My Device and Google Maps rely on location services for navigation and weather updates. 

While most companies claim not to monetise user data, several have been sued for selling personal information to third parties. The threat is compounded by location-sharing features: in Google Maps, for example, anyone with access to a device can enable sharing that reportedly allows continuous tracking even when the phone is in aeroplane mode. 

Mobile carriers, for their part, routinely track location via cellular signals, a practice officially justified as necessary for improving services and responding to emergencies. While carriers claim they do not sell this data to the public, they acknowledge sharing it with the authorities. Wi-Fi networks are yet another avenue for tracking: businesses such as shopping malls use connected devices to monitor consumer behaviour, resulting in targeted and intrusive advertising. 

Cybersecurity experts continue to warn that hackers exploit both sophisticated malware and social engineering tactics to swindle unsuspecting consumers. Ethical hacker Ryan Montgomery recently demonstrated how scammers use text messages to trick victims into clicking malicious links that lead to fake websites designed to harvest personal information. 

To make such messages seem more credible, scammers sometimes mine social media profiles to tailor them to the recipient. The threats do not end with phishing attempts, however. Another overlooked vulnerability is poorly designed error messages in apps and websites. Error messages are crucial for debugging and user guidance, but if crafted carelessly they can become a security risk, allowing hackers to gather sensitive information about users. 

A database connection string, a username, an email address, or even a simple confirmation that an account exists can give attackers critical information with which to weaponise automated attacks. For instance, displaying the error message "Password is incorrect" confirms that the username is valid, allowing hackers to build lists of real accounts to brute-force. 

To reduce exposure, security professionals recommend using generic phrases such as "Username or password is incorrect." Developers should also avoid disclosing backend technology or software versions through error outputs, as these can reveal exploitable vulnerabilities. 
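The contrast between leaky and generic error messages can be sketched in a few lines; the user store and hashing scheme here are hypothetical, purely for illustration:

```python
# Minimal sketch of login error handling that avoids leaking which part
# of the credentials was wrong. The user store is hypothetical.
import hashlib

USERS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}

def login_leaky(username, password):
    # BAD: distinct messages confirm whether a username exists.
    if username not in USERS:
        return "This username does not exist"
    if USERS[username] != hashlib.sha256(password.encode()).hexdigest():
        return "Password is incorrect"
    return "OK"

def login_safe(username, password):
    # GOOD: one generic message for every failure mode.
    stored = USERS.get(username)
    supplied = hashlib.sha256(password.encode()).hexdigest()
    if stored is None or stored != supplied:
        return "Username or password is incorrect"
    return "OK"

print(login_leaky("bob", "x"))  # reveals that "bob" is not an account
print(login_safe("bob", "x"))   # the attacker learns nothing extra
```

A production system would additionally use salted password hashing (e.g. bcrypt) and constant-time comparison; plain SHA-256 is used here only to keep the sketch self-contained.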

Even seemingly harmless notifications such as "This username does not exist" can help attackers narrow down their targets, demonstrating the importance of secure design. As cybercrime continues to grow, a troubling imbalance persists between technological convenience and security in the digital world. 

The ingenuity of cybercriminals is constantly evolving, ensuring that even as stronger defences are erected, some risk remains in any system or device. It is the invisibility of the threat that makes it so insidious: users may not realise they have been compromised until the damage is done, whether through drained bank accounts, stolen identities, or quiet monitoring of their personal lives. 

Cybersecurity experts emphasise that users must not only stay vigilant against obvious scams and suspicious links but also maintain digital caution in everyday interactions. Updating devices, scrutinising app permissions, practising safer browsing habits, and using trusted antivirus tools can dramatically reduce exposure to cybercrime. 

Beyond personal responsibility, technology providers, app developers, and mobile carriers must adopt stronger privacy protections and transparent practices to safeguard user data. In the end, it is complacency that allows cybercrime to flourish. By combining informed users with secure design and responsible corporate behaviour, society can begin to tilt the balance away from those who exploit the shadows of the online world.

Vietnam Launches NDAChain for National Data Security and Digital Identity


Vietnam has launched NDAChain, a new permissioned blockchain network that allows only approved participants to join. The move is aimed at locking down the country's government data. 

About NDAChain

The network is built by the National Data Association and managed by the Ministry of Public Security’s Data Innovation and Exploitation Center. It will serve as the primary verification layer for tasks such as supply-chain logs, school transcripts, and hospital records.

According to experts, NDAChain uses a hybrid model, relying on a Proof-of-Authority consensus mechanism to ensure that only authorized nodes can validate transactions, and adding Zero-Knowledge Proofs to protect sensitive data while verifying its authenticity. Officials say NDAChain can process between 1,200 and 3,600 transactions per second, throughput intended to support faster verification in logistics, e-government, and other areas. 
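The permissioned idea behind Proof-of-Authority can be illustrated with a toy sketch. This is not NDAChain's actual code: the node names and keys are invented, and HMAC stands in for the real digital signatures a production network would use.

```python
# Toy Proof-of-Authority check: only blocks signed by a pre-approved
# validator are accepted. Node names and keys are hypothetical.
import hashlib
import hmac

AUTHORIZED_VALIDATORS = {
    "validator-node-1": b"secret-key-1",
    "validator-node-2": b"secret-key-2",
}

def sign_block(node_id: str, block_data: bytes) -> str:
    """An authorized node signs a block with its key."""
    key = AUTHORIZED_VALIDATORS[node_id]
    return hmac.new(key, block_data, hashlib.sha256).hexdigest()

def accept_block(node_id: str, block_data: bytes, signature: str) -> bool:
    """Reject blocks from unknown nodes or with invalid signatures."""
    key = AUTHORIZED_VALIDATORS.get(node_id)
    if key is None:
        return False  # unknown node: rejected outright
    expected = hmac.new(key, block_data, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

sig = sign_block("validator-node-1", b"record-batch-42")
print(accept_block("validator-node-1", b"record-batch-42", sig))  # accepted
print(accept_block("rogue-node", b"record-batch-42", sig))        # rejected
```

The key point is that, unlike public Proof-of-Work chains, write access is gated on identity: a node not in the approved set cannot contribute blocks no matter how much computing power it has.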

Two new features

The network has two main features. NDA DID offers digital IDs that integrate with Vietnam’s current VNeID framework, allowing users to verify their identity online when signing documents or using services. NDATrace, meanwhile, provides end-to-end product tracking via GS1 and EBSI Trace standards: items are tagged with unique identifiers that can be scanned from RFID chips or QR codes, helping businesses prove provenance to overseas buyers and ease recalls in case of problems.
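The kind of tamper-evident product journey NDATrace aims for can be sketched with a simple hash-linked log. The identifiers and parties below are invented, and real deployments follow GS1/EBSI standards rather than this toy format:

```python
# Hedged sketch of end-to-end tracing: each handoff appends a record
# whose hash chains to the previous one, so altering any step later
# is detectable. Product IDs and holders are hypothetical.
import hashlib
import json

def add_event(chain, product_id, holder):
    """Append a custody event linked to the previous event's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    event = {"product_id": product_id, "holder": holder, "prev": prev_hash}
    body = json.dumps(event, sort_keys=True).encode()
    event["hash"] = hashlib.sha256(body).hexdigest()
    chain.append(event)
    return chain

def verify_chain(chain):
    """Recompute every hash and link; any tampering breaks the chain."""
    prev = "0" * 64
    for event in chain:
        body = {k: v for k, v in event.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if event["prev"] != prev or recomputed != event["hash"]:
            return False
        prev = event["hash"]
    return True

chain = []
for holder in ["manufacturer", "distributor", "pharmacy"]:
    add_event(chain, "VN-000123", holder)
print(verify_chain(chain))   # True: intact journey
chain[1]["holder"] = "counterfeiter"
print(verify_chain(chain))   # False: tampering detected
```

This is the property that matters for anti-counterfeiting: a buyer scanning the final tag can check the whole recorded journey, not just the last handoff.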

Privacy layer and network protection

NDAChain works as a “protective layer” for Vietnam’s digital infrastructure, built to scale as data volumes expand. Thanks to the added privacy tools, digital records can be verified without exposing personal details. The permissioned setup also gives authorities more control over who joins the network. According to reports, full integration with the National Data Center will be completed this year, after which the focus will shift to local agencies and universities, with industry-specific Layer 3 apps planned for 2026.

According to Vietnam Briefing, "in sectors such as food, pharmaceuticals, and health supplements, where counterfeit goods remain a persistent threat, NDAChain enables end-to-end product origin authentication. By tracing a product’s whole journey from manufacturer to end-consumer, businesses can enhance brand trust, reduce legal risk, and meet rising regulatory demands for transparency."

A Massive 800% Rise in Data Breach Incidents in First Half of 2025


Cybersecurity experts have warned of a significant increase in identity-based attacks, following the revelation that 1.8 billion credentials were stolen in the first half of 2025, representing an 800% increase compared to the previous six months.

Data breach attacks are rising rapidly

Flashpoint’s Global Threat Intelligence Index report is based on more than 3.6 petabytes of data studied by the firm’s experts. According to the report, hackers stole the credentials from 5.8 million compromised devices. The rise is problematic because stolen credentials can give hackers access to organizational data, even when accounts are protected by multi-factor authentication (MFA).

The report also includes other findings that should concern security teams.

About the bugs

As of June 2025, the firm had found over 20,000 exposed vulnerabilities, 12,200 of which had not been reported in the National Vulnerability Database (NVD), meaning security teams may be unaware of them. Around 7,000 of these have public exploits available, exposing organizations to severe threats.

According to experts, “The digital attack surface continues to expand, and the volume of disclosed vulnerabilities is growing at a record pace – up by a staggering 246% since February 2025.” “This explosion, coupled with a 179% increase in publicly available exploit code, intensifies the pressure on security teams. It’s no longer feasible to triage and remediate every vulnerability.”
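One common response to the triage pressure described above is to rank flaws by whether a public exploit exists and only then by severity. The records below are invented for illustration; real programs would pull such fields from a vulnerability intelligence feed:

```python
# Illustrative triage: with tens of thousands of new flaws, teams often
# prioritize vulnerabilities that have public exploit code, then sort
# by CVSS score. All records here are made up for the sketch.
vulns = [
    {"cve": "CVE-2025-0001", "cvss": 9.8, "public_exploit": True},
    {"cve": "CVE-2025-0002", "cvss": 9.9, "public_exploit": False},
    {"cve": "CVE-2025-0003", "cvss": 7.5, "public_exploit": True},
    {"cve": "CVE-2025-0004", "cvss": 5.3, "public_exploit": False},
]

def triage(vulns):
    # Public-exploit flaws first; within each group, highest CVSS first.
    return sorted(vulns,
                  key=lambda v: (v["public_exploit"], v["cvss"]),
                  reverse=True)

for v in triage(vulns):
    flag = "PUBLIC EXPLOIT" if v["public_exploit"] else ""
    print(v["cve"], v["cvss"], flag)
```

Note how a 7.5-severity flaw with a working public exploit outranks a 9.9-severity flaw without one, which reflects the report's point that exploit availability, not raw severity, drives real-world risk.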

Surge in ransomware attacks

Both trends feed ransomware attacks, as initial access mostly comes through vulnerability exploitation or credential theft. Total reported breaches have increased by 179% since 2024, with manufacturing (22%), technology (18%), and retail (13%) hit hardest. The report also disclosed 3,104 data breaches in the first half of this year, linked to 9.5 billion compromised records.

2025 to be record year for data breaches

Flashpoint reports that “Over the past four months, data breaches surged by 235%, with unauthorized access accounting for nearly 78% of all reported incidents. Data breaches are both the genesis and culmination of threat actor campaigns, serving as a source of continuous fuel for cybercrime activity.” 

In June, the Identity Theft Resource Center (ITRC) warned that 2025 could become a record year for data cyberattacks in the US.

CISA Lists Citrix Bleed 2 as Exploit, Gives One Day Deadline to Patch


CISA confirms bug exploit

The US Cybersecurity & Infrastructure Security Agency (CISA) has confirmed active exploitation of the CitrixBleed 2 vulnerability (CVE-2025-5777) in Citrix NetScaler ADC and Gateway, giving federal agencies one day to patch the bug. This is the shortest deadline CISA has imposed since launching the Known Exploited Vulnerabilities (KEV) catalog, highlighting the severity of the attacks abusing the flaw. 

About the critical vulnerability

CVE-2025-5777 is a critical memory safety bug (an out-of-bounds memory read) that gives hackers unauthorized access to restricted memory regions. The flaw affects NetScaler devices configured as an AAA virtual server or a Gateway. Citrix patched the vulnerability in its June 17 updates. 
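The general class of bug can be simulated safely in a few lines. This is not Citrix's actual code, just an illustration of why a server that trusts an attacker-supplied length leaks whatever sits next to the requested data in memory:

```python
# Simulation of an out-of-bounds-read style leak (Heartbleed/CitrixBleed
# class of bug). The "memory" layout and token are invented.
memory = bytearray(b"echo-payload" + b"SESSION_TOKEN=abc123")

def vulnerable_echo(claimed_len: int) -> bytes:
    # BUG: trusts the attacker-supplied length, so a request larger
    # than the real payload spills adjacent memory contents.
    return bytes(memory[:claimed_len])

def patched_echo(claimed_len: int) -> bytes:
    # FIX: clamp the read to the actual field length.
    actual_len = len(b"echo-payload")
    return bytes(memory[:min(claimed_len, actual_len)])

print(vulnerable_echo(64))  # leaks the adjacent session token
print(patched_echo(64))     # returns only the real payload
```

This is also why the real-world exploits mattered so much: the leaked bytes included session tokens, letting attackers hijack authenticated sessions without ever knowing a password.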

Soon after, security researcher Kevin Beaumont warned of the flaw’s potential for exploitation if left unaddressed, dubbing it ‘CitrixBleed 2’ because of its similarities to the infamous CitrixBleed bug (CVE-2023-4966), which was widely abused in the wild by threat actors.

What is the CitrixBleed 2 exploit?

According to Bleeping Computer, “The first warning of CitrixBleed 2 being exploited came from ReliaQuest on June 27. On July 7, security researchers at watchTowr and Horizon3 published proof-of-concept exploits (PoCs) for CVE-2025-5777, demonstrating how the flaw can be leveraged in attacks that steal user session tokens.”

The rise of exploits

At first, experts could not spot signs of active exploitation. Soon, however, threat actors began exploiting the bug on a larger scale and became active on hacker forums, “discussing, working, testing, and publicly sharing feedback on PoCs for the Citrix Bleed 2 vulnerability,” according to Bleeping Computer. 

Hackers showed particular interest in how to use the available exploits effectively, and various exploits for the bug have since been published.

Now that CISA has confirmed widespread exploitation of CitrixBleed 2, threat actors may have developed their own exploits based on the recently released technical information. CISA advises organizations to “apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.”

Security Breach Reveals "Catwatchful" Spyware is Snooping on Users


A security bug in a stealthy Android spyware operation, “Catwatchful,” has exposed its full user database, affecting its 62,000 customers as well as its administrator. Cybersecurity researcher Eric Daigle discovered the vulnerability and reported that the spyware’s database contained the email addresses and plaintext passwords Catwatchful customers use to access data stolen from their victims’ devices. 

Most of the victims were based in India, Argentina, Peru, Mexico, Colombia, Bolivia, and Ecuador, with a few records dating back to 2018.

The database also revealed the identity of the spyware operation’s administrator: Omar Soca Charcov, a developer based in Uruguay.

About Catwatchful

Catwatchful is spyware that poses as a child-monitoring app, claiming to be “invisible and can not be detected,” while uploading the victim’s data to a dashboard accessible to the person who planted it. The stolen data includes real-time location, photos, and messages, and the app can also stream live ambient audio from the device’s microphone and access both the front and rear cameras.

Catwatchful and similar apps are banned from app stores and depend on being downloaded and installed by someone with physical access to the victim’s phone. Such apps are commonly known as “stalkerware” or “spouseware” because they enable illegal, non-consensual surveillance of romantic partners and spouses. 

Rise of spyware apps

The Catwatchful incident is the fifth this year in a growing list of stalkerware operations that have been breached, hacked, or had their data exposed. 

How was the spyware found?

Daigle has previously discovered stalkerware flaws. Catwatchful uses a custom-made API, through which the planted app sends stolen data back to Catwatchful’s servers, and relies on Google Firebase to host and store that data. 

According to TechRadar, the “data was stored on Google Firebase, sent via a custom API that was unauthenticated, resulting in open access to user and victim data. The report also confirms that, although hosting had initially been suspended by HostGator, it had been restored via another temporary domain."
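The flaw class described here, an API that never checks credentials, can be sketched as follows. The endpoint path, token, and data are invented for illustration, not Catwatchful's real API:

```python
# Sketch of why an unauthenticated API is catastrophic: anyone who
# discovers the URL gets full access. All names and data are invented.
VALID_TOKENS = {"token-for-authorised-operator"}
DATABASE = {"victim-42": {"location": "10.76,106.66"}}

def handle_request(path: str, token: str = None):
    """Return (status_code, body) for a toy data-retrieval endpoint."""
    if path.startswith("/api/data/"):
        victim_id = path.rsplit("/", 1)[-1]
        # This guard is exactly what an unauthenticated API omits:
        if token not in VALID_TOKENS:
            return 401, None
        return 200, DATABASE.get(victim_id)
    return 404, None

print(handle_request("/api/data/victim-42"))  # (401, None): no token
print(handle_request("/api/data/victim-42",
                     "token-for-authorised-operator"))  # data returned
```

Remove the token check and every record in the database becomes readable to anyone who stumbles on the endpoint, which is essentially what the researcher found.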

Amid Federal Crackdown, Microsoft Warns Against Rising North Korean Jobs Scams


North Korean hackers are infiltrating high-profile US-based tech firms through employment scams, and according to experts they have recently advanced their tactics. Following a recent investigation, Microsoft has urged its peers to enforce stronger pre-employment verification measures and adopt policies that block unauthorized IT management tools. 

Further investigation by the US government revealed that these actors were stealing money for the North Korean state, which uses the funds to run government operations and its weapons program.  

US imposes sanctions against North Korea

The US has imposed strict sanctions on North Korea that bar US companies from hiring North Korean nationals. In response, threat actors fabricate identities and use all kinds of tricks (such as VPNs) to obscure their real identities and locations, avoiding detection and easing their way into jobs. 

Recently, the threat actors have adopted spoofing tactics such as voice-changing tools and AI-generated documents to appear credible. In one incident, the scammers worked with an individual residing in New Jersey who set up shell companies to fool victims into believing they were paying a legitimate local business; the same individual also helped overseas partners get recruited. 

DoJ arrests accused

The clever campaign has now been disrupted: the US Department of Justice (DoJ) arrested and charged a US national, Zhenxing “Danny” Wang, with operating a years-long scam that earned over $5 million. The agency also arrested eight more people, six Chinese and two Taiwanese nationals. The arrested individuals are charged with money laundering, identity theft, hacking, sanctions violations, and conspiring to commit wire fraud.

Beyond the salaries from these jobs, which Microsoft says can be hefty, these individuals also gain access to private organizational data, which they exploit by stealing sensitive information and blackmailing the company.

Lazarus group behind such scams

The North Korean state-sponsored Lazarus group is one of the largest and most infamous hacking gangs in the world. According to experts, the gang has stolen billions of dollars for the North Korean government through similar scams, in a campaign widely known as “Operation DreamJob”. 

"To disrupt this activity and protect our customers, we’ve suspended 3,000 known Microsoft consumer accounts (Outlook/Hotmail) created by North Korean IT workers," said Microsoft.

Cybercrime Gang Hunters International Shuts Down, Returns Stolen Data as Goodwill


Cybercrime gang to return stolen data

The Hunters International Ransomware-as-a-Service (RaaS) operation recently announced that it is shutting down and will provide free decryptors to help victims recover their data without paying a ransom. 

"After careful consideration and in light of recent developments, we have decided to close the Hunters International project. This decision was not made lightly, and we recognize the impact it has on the organizations we have interacted with," the cybercrime gang said. 

Hunters International claims goodwill

As a goodwill gesture to victims affected by the gang’s previous operations, it is helping them recover data without requiring them to pay ransoms. The gang has also removed all entries from the extortion portal and stated that organizations whose systems were encrypted in the Hunters International ransomware attacks can request assistance and recovery guidance on the group’s official website.

Gang rebranding?

The gang has not explained the “recent developments” it referred to, but the announcement follows a November 17 statement in which Hunters International said it would soon close down due to stricter law enforcement actions and financial losses. 

In April, Group-IB researchers said the group was rebranding to focus on extortion-only and data-theft attacks and had launched “World Leaks,” a new extortion-only operation. According to Group-IB, “unlike Hunters International, which combined encryption with extortion, World Leaks operates as an extortion-only group using a custom-built exfiltration tool. The new tool looks like an advanced version of the Storage Software exfiltration tool used by Hunters International’s ransomware associates.”

The emergence of Hunters International

Hunters International surfaced in 2023, and cybersecurity experts flagged it as a rebrand of the Hive ransomware operation due to code similarities. The gang’s ransomware targeted Windows, Linux, ESXi (VMware servers), FreeBSD, and SunOS. Over the past two years, Hunters International has attacked businesses of all sizes, demanding ransoms of up to millions of dollars. 

The gang was responsible for around 300 operations globally. Notable victims include the U.S. Marshals Service, Tata Technologies, Japanese optics giant Hoya, U.S. Navy contractor Austal USA, Oklahoma’s largest not-for-profit healthcare network Integris Health, and North American automobile dealership group AutoCanada. Last year, Hunters International attacked the Fred Hutch Cancer Center and threatened to leak the stolen data of more than 800,000 cancer patients if the ransom was not paid.

A Simple Guide to Launching GenAI Successfully

 


Generative AI (GenAI) is one of today’s most exciting technologies, offering potential to improve productivity, creativity, and customer service. But for many companies, it becomes like a forgotten gym membership: enthusiastically started but quickly abandoned.

So how can businesses make sure they get real value from GenAI instead of falling into the trap of wasted effort? Success lies in four key steps: building a strong plan, choosing the right partners, launching responsibly, and tracking the impact.


1. Set Up a Strong Governance Framework

Before using GenAI, businesses must create clear rules and processes to use it safely and responsibly. This is called a governance framework. It helps prevent problems like privacy violations, data leaks, or misuse of AI tools.

This framework should be created by a group of leaders from different departments—legal, compliance, cybersecurity, data, and business operations. Since AI can affect many parts of a company, it’s important that leadership supports and oversees these efforts.

It’s also crucial to manage data properly. Many companies forget to prepare their data for AI tools. Data should be accurate, anonymous where needed, and well-organized to avoid security risks and legal trouble.

Risk management must be proactive. This includes reducing bias in AI systems, ensuring data security, staying within legal boundaries, and preventing misuse of intellectual property.


2. Choose Technology Partners Carefully

GenAI tools are not like regular software. When selecting a provider, businesses should look beyond basic features and check how the provider handles data, ownership rights, and ethical practices. A lack of transparency is a warning sign.

Companies should know where their data is stored, who can access it, and who owns the results produced by the AI tool. It’s also important to avoid being trapped in systems that make it difficult to switch vendors later. Always review agreements carefully, especially around copyright and data rights.


3. Launch With Care and Strategy

Once planning is complete, the focus should shift to thoughtful execution. Start with small projects that can demonstrate value quickly. Choose use cases where GenAI can clearly improve efficiency or outcomes.

Data used in GenAI must be organized and secured so that no sensitive information is exposed. Also, employees must be trained to work with these tools effectively. Skills like writing proper prompts and verifying AI-generated content are essential.

To build trust and encourage adoption, leaders should clearly explain why GenAI is being introduced and how it will help, not replace employees. GenAI should support teams and improve their performance, not reduce jobs.


4. Track Success and Business Value

Finally, companies need to measure the results. Success isn’t just about using the technology; it’s about making a real impact.

Set clear goals and use simple metrics, like productivity improvements, customer feedback, or employee satisfaction. GenAI should lead to better outcomes for both teams and clients, not just technical performance.

To move beyond the GenAI buzz and unlock real value, companies must approach it with clear goals, responsible use, and long-term thinking. With the right foundation, GenAI can be more than just hype: it can be a lasting asset for innovation and growth.



Cybercriminals Exploit Fake Salesforce Tool to Steal Company Data and Demand Payments

 



A group of hackers has been carrying out attacks against businesses by misusing a tool that looks like it belongs to Salesforce, according to information shared by Google’s threat researchers. These attacks have been going on for several months and have mainly focused on stealing private company information and later pressuring the victims for money.


How the Attack Happens

The hackers have been contacting employees by phone while pretending to work for their company’s technical support team. Through these phone calls, the attackers convince employees to share important login details.

After collecting this information, the hackers guide the employees to a specific page used to set up apps connected to Salesforce. Once there, the attackers use an illegal, altered version of a Salesforce data tool to quietly break into the company’s system and take sensitive data.

In many situations, the hackers don’t just stop at Salesforce. They continue to explore other parts of the company’s cloud accounts and sometimes reach deeper into the company’s private networks.


Salesforce’s Advice to Users

Earlier this year, Salesforce warned people about these kinds of scams. The company has made it clear that there is no known fault or security hole in the Salesforce platform itself. The problem is that the attackers are successfully tricking people by pretending to be trusted contacts.

Salesforce has recommended that users improve their account protection by turning on extra security steps like multi-factor authentication, carefully controlling who has permission to access sensitive areas, and limiting which locations can log into the system.


Unclear Why Salesforce is the Target

It is still unknown why the attackers are focusing on Salesforce tools or how they became skilled in using them. Google’s research team has not seen other hacker groups using this specific method so far.

Interestingly, the attackers do not all seem to have the same level of experience. Some are very skilled at using the fake Salesforce tool, while others seem less prepared. Experts believe that these skills likely come from past activities or learning from earlier attacks.


Hackers Delay Their Demands

In many cases, the hackers wait for several months after breaking into a company before asking for money. Some attackers claim they are working with outside groups, but researchers are still studying these possible connections.


A Rising Social Engineering Threat

This type of phone-based trick is becoming more common as hackers rely on social engineering — which means they focus on manipulating people rather than directly breaking into systems. Google’s researchers noted that while there are some similarities between these hackers and known criminal groups, this particular group appears to be separate.

How Biometric Data Collection Affects Workers

 


Modern workplaces are beginning to track more than just employee hours or tasks. Today, many employers are collecting very personal information about workers' bodies and behaviors. This includes data like fingerprints, eye scans, heart rates, sleeping patterns, and even the way someone walks or types. All of this is made possible by tools like wearable devices, security cameras, and AI-powered monitoring systems.

The reason companies use these methods varies. Some want to increase workplace safety. Others hope to improve employee health or get discounts from insurance providers. Many believe that collecting this kind of data helps boost productivity and efficiency. At first glance, these goals might sound useful. But there are real risks to both workers and companies that many overlook.

New research shows that being watched in such personal ways can lead to fear and discomfort. Employees may feel anxious or unsure about their future at the company. They worry their job might be at risk if the data is misunderstood or misused. This sense of insecurity can impact mental health, lower job satisfaction, and make people less motivated to perform well.

There have already been legal consequences. In one major case, a railway company had to pay millions to settle a lawsuit after workers claimed their fingerprints were collected without consent. Other large companies have also faced similar claims. The common issue in these cases is the lack of clear communication and proper approval from employees.

Even when health programs are framed as helpful, they can backfire. For example, some workers are offered lower health insurance costs if they participate in screenings or share fitness data. But not everyone feels comfortable handing over private health details. Some feel pressured to agree just to avoid being judged or left out. In certain cases, those who chose not to participate were penalized. One university faced a lawsuit for this and later agreed to stop the program after public backlash.

Monitoring employees’ behavior can also affect how they work. For instance, in one warehouse, cameras were installed to track walking routes and improve safety. However, workers felt watched and lost the freedom to help each other or move around in ways that felt natural. Instead of making the workplace better, the system made workers feel less trusted.

Laws are slowly catching up, but in many places, current rules don’t fully protect workers from this level of personal tracking. Just because something is technically legal does not mean it is ethical or wise.

Before collecting sensitive data, companies must ask a simple but powerful question: is this really necessary? If the benefits only go to the employer, while workers feel stressed or powerless, the program might do more harm than good. In many cases, choosing not to collect such data is the better and more respectful option.


Want to Leave Facebook? Do this.


Confused about leaving Facebook?

Many people are rethinking their social media habits and opting out of services altogether. Facebook has seen a large exodus of users since Meta announced in March that it was ending independent fact-checking on the platform. Fact-checking has been replaced with community notes, which let users add context to potentially false or misleading information. 

Users with years of photos and posts on Facebook are often unsure how to collect their data before removing their accounts. If that sounds like you, this post will help you delete Facebook permanently while taking all your information with you on the way out. 

How to remove Facebook?

For users who no longer want to be on Facebook, deleting the account is the only way to remove yourself from the platform completely. If you are not sure, deactivating your account lets you take a break from Facebook without permanent deletion. 

Make sure to remove third-party Facebook logins before deleting your account. 

How to leave third-party apps?

Third-party apps like DoorDash and Spotify allow you to log in using your Facebook account. This lets you sign in without remembering another password, but if you’re planning on deleting Facebook, you have to update your login settings first: once your account is deleted, there will be no Facebook account left to log in through. 

Fortunately, there is a simple way to find out which sites and applications are connected to Facebook and disconnect them before removing your account. Once you disconnect other websites and applications from Facebook, you will need to adjust how you log in to them. 

Visit each application and website to set a new password or passkey, or log in via a single sign-on option, such as Google. 

How is deleting different from deactivating a Facebook account?

If you want to step away from Facebook, you have two choices: delete your account permanently, or deactivate it temporarily. 

PumaBot: A New Malware That Sneaks into Smart Devices Using Weak Passwords



A recently found malware called PumaBot is putting many internet-connected devices at risk. This malicious software is designed to attack smart systems like surveillance cameras, especially those that use the Linux operating system. It sneaks in by guessing weak passwords and then quietly takes over the system.


How PumaBot Finds Its Victims

Unlike many other threats that randomly scan the internet looking for weak points, PumaBot follows specific instructions from a remote command center. It receives a list of selected device addresses (known as IPs) from its control server and begins attempting to log in using common usernames and passwords through SSH — a tool that lets people access devices remotely.
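Brute-force attempts like PumaBot's leave a recognizable trail in a device's authentication logs. As a rough illustration (the log excerpt, regex, and threshold below are assumptions for the sketch, not details taken from the malware analysis), a short script can count failed SSH password attempts per source IP and flag likely brute-forcing:

```python
import re
from collections import Counter

# Hypothetical auth.log excerpt; real log formats vary by distribution.
log = """\
May 28 10:01:02 cam sshd[311]: Failed password for root from 203.0.113.7 port 4022 ssh2
May 28 10:01:05 cam sshd[312]: Failed password for admin from 203.0.113.7 port 4031 ssh2
May 28 10:01:09 cam sshd[313]: Failed password for root from 203.0.113.7 port 4040 ssh2
May 28 10:02:44 cam sshd[320]: Accepted publickey for ops from 198.51.100.9 port 5100 ssh2
"""

FAILED = re.compile(r"Failed password for \S+ from (\S+)")

def flag_bruteforce(log_text: str, threshold: int = 3) -> list[str]:
    """Return source IPs with `threshold` or more failed SSH password logins."""
    counts = Counter(m.group(1) for m in FAILED.finditer(log_text))
    return [ip for ip, n in counts.items() if n >= threshold]

print(flag_bruteforce(log))  # prints ['203.0.113.7']
```

A real deployment would read the live log and alert or block at the firewall; the point here is simply that repeated failed password logins from a single address are easy to spot.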

Experts believe it may be going after security and traffic camera systems that belong to a company called Pumatronix, based on clues found in the malware’s code.


What Happens After It Breaks In

Once PumaBot gets into a device, it runs a quick check to make sure it's not inside a fake system set up by researchers (known as a honeypot). If it passes that test, the malware places a file on the device and creates a special service to make sure it stays active, even after the device is restarted.

To keep the door open for future access, PumaBot adds its own secret login credentials. This way, the hackers can return to the device later, even if some files are removed.


What the Malware Can Do

After it takes control, PumaBot can be told to:

• Steal data from the device

• Install other harmful software

• Collect login details from users

• Send stolen information back to the attackers

One tool it uses captures usernames and passwords typed into the device, saves them in a hidden file, and sends them to the hackers. Once the data is taken, the malware deletes the file to cover its tracks.


Why PumaBot Is Concerning

PumaBot is different from other malware. Many botnets simply use infected devices to send spam or run large-scale attacks. But PumaBot seems more focused and selective. Instead of causing quick damage, it slowly builds access to sensitive networks — which could lead to bigger security breaches later.


How to Protect Your Devices

If you use internet-connected gadgets like cameras or smart appliances, follow these safety steps:

1. Change factory-set passwords immediately

2. Keep device software updated

3. Use firewalls to block strange access

4. Put smart devices on a different Wi-Fi network than your main systems

By following these tips, you can lower your chances of being affected by malware like PumaBot.
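Because PumaBot depends on password logins succeeding, disabling SSH password authentication removes its entry point entirely. A minimal sketch of auditing an OpenSSH server config for the risky settings (the option names are standard OpenSSH directives; the audit logic and sample excerpt are illustrative assumptions):

```python
# Flag sshd_config settings that credential-guessing bots rely on.
def audit_sshd_config(text: str) -> list[str]:
    findings = []
    settings = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        parts = line.split(None, 1)
        if len(parts) == 2:
            settings[parts[0].lower()] = parts[1].strip().lower()
    if settings.get("passwordauthentication", "yes") == "yes":
        findings.append("PasswordAuthentication enabled; prefer key-based login")
    if settings.get("permitrootlogin", "prohibit-password") == "yes":
        findings.append("PermitRootLogin yes allows direct root brute-forcing")
    return findings

sample = """
# /etc/ssh/sshd_config (excerpt)
PasswordAuthentication yes
PermitRootLogin yes
"""
for finding in audit_sshd_config(sample):
    print(finding)
```

On a hardened device both directives would be set to `no` (with key-based authentication configured instead), and the audit would report nothing.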

DragonForce Targets MSPs Using SimpleHelp Exploit, Expands Ransomware Reach



The DragonForce ransomware group has breached a managed service provider (MSP) and leveraged its SimpleHelp remote monitoring and management (RMM) tool to exfiltrate data and launch ransomware attacks on downstream clients.

Cybersecurity firm Sophos, which was brought in to assess the situation, believes that attackers exploited a set of older vulnerabilities in SimpleHelp—specifically CVE-2024-57727, CVE-2024-57728, and CVE-2024-57726—to gain unauthorized access.

SimpleHelp is widely adopted by MSPs to deliver remote support and manage software deployment across client networks. According to Sophos, DragonForce initially used the compromised tool to perform system reconnaissance—gathering details such as device configurations, user accounts, and network connections from the MSP's customers.

The attackers then moved to extract sensitive data and execute encryption routines. While Sophos’ endpoint protection successfully blocked the deployment on one customer's network, others were not as fortunate. Multiple systems were encrypted, and data was stolen to support double-extortion tactics.

In response, Sophos has released indicators of compromise (IOCs) to help other organizations defend against similar intrusions.

MSPs have consistently been attractive targets for ransomware groups due to the potential for broad, multi-company impact from a single entry point. Some threat actors have even tailored their tools and exploits around platforms commonly used by MSPs, including SimpleHelp, ConnectWise ScreenConnect, and Kaseya. This trend has previously led to large-scale incidents, such as the REvil ransomware attack on Kaseya that affected over 1,000 businesses.

DragonForce's Expanding Threat Profile

The DragonForce group is gaining prominence following a string of attacks on major UK retailers. Their tactics reportedly resemble those of Scattered Spider, a well-known cybercrime group.

As first reported by BleepingComputer, DragonForce ransomware was used in an attack on Marks & Spencer. Shortly after, the same group targeted another UK retailer, Co-op, where a substantial volume of customer data was compromised.

BleepingComputer had earlier noted that DragonForce is positioning itself as a leader in the ransomware-as-a-service (RaaS) space, offering a white-label version of its encryptor for affiliates.

With a rapidly expanding victim list and a business model that appeals to affiliates, DragonForce is cementing its status as a rising and formidable presence in the global ransomware ecosystem.

Evaly Website Allegedly Hacked Amid Legal Turmoil, Hacker Threatens to Leak Customer Data


Evaly, the controversial e-commerce platform based in Bangladesh, appeared to fall victim to a cyberattack on 24 May 2025. Visitors to the site were met with a stark warning reportedly left by a hacker, claiming to have obtained the platform’s customer data and urging Evaly staff to make contact.

Displayed in bold capital letters, the message read: “HACKED, I HAVE ALL CUSTOMER DATA. EVALY STAFF PLEASE CONTACT 00watch@proton.me.” The post included a threat, stating, “OR ELSE I WILL RELEASE THIS DATA TO THE PUBLIC,” signaling the potential exposure of private user information if the hacker’s demand is ignored.

It remains unclear what specific data was accessed or whether sensitive financial or personal details were involved. So far, Evaly has not released any official statement addressing the breach or the nature of the compromised information.

This development comes on the heels of a fresh wave of legal action against Evaly and its leadership. On 13 April 2025, state-owned Bangladesh Sangbad Sangstha (BSS) reported that a Dhaka court handed down three-year prison sentences to Evaly’s managing director, Mohammad Rassel, and chairperson, Shamima Nasrin, in a fraud case.

Dhaka Metropolitan Magistrate M Misbah Ur Rahman delivered the judgment, which also included fines of BDT 5,000 each. The court issued arrest warrants for both executives following the ruling.

The case was filed by a customer, Md Rajib, who alleged that he paid BDT 12.37 lakh for five motorcycles that were never delivered. The transaction took place through Evaly’s website, which had gained attention for its deep discount offers and aggressive promotional tactics.

AI is Accelerating India's Healthtech Revolution, but Data Privacy Concerns Loom Large


India’s healthcare infrastructure is undergoing a remarkable digital transformation, driven by emerging technologies like artificial intelligence (AI), machine learning, and big data. These advancements are not only enhancing accessibility and efficiency but also setting the foundation for a more equitable health system. According to the World Economic Forum (WEF), AI is poised to account for 30% of new drug discoveries by 2025 — a major leap for the pharmaceutical industry.

As outlined in the Global Outlook and Forecast 2025–2030, the market for AI in drug discovery is projected to grow from $1.72 billion in 2024 to $8.53 billion by 2030, clocking a CAGR of 30.59%. Major tech players like IBM Watson, NVIDIA, and Google DeepMind are partnering with pharmaceutical firms to fast-track AI-led breakthroughs.

Beyond R&D, AI is transforming clinical workflows by digitising patient records and decentralising models to improve diagnostic precision while protecting privacy.

During an interview with Analytics India Magazine (AIM), Rajan Kashyap, Assistant Professor at the National Institute of Mental Health and Neuro Sciences (NIMHANS), shared insights into the government’s push toward innovation: “Increasing the number of seats in medical and paramedical courses, implementing mandatory rural health services, and developing Indigenous low-cost MRI machines are contributing significantly to hardware development in the AI innovation cycle.”

Tech-Driven Healthcare Innovation

Kashyap pointed to major initiatives like the Genome India project, cVEDA, and the Ayushman Bharat Digital Mission as critical steps toward advancing India’s clinical research capabilities. He added that initiatives in genomics, AI, and ML are already improving clinical outcomes and streamlining operations.

He also spotlighted BrainSightAI, a Bengaluru-based startup that raised $5 million in a Pre-Series A round to scale its diagnostic tools for neurological conditions. The company aims to expand across Tier 1 and Tier 2 cities and pursue FDA certification to access global healthcare markets.

Another innovator, Niramai Health Analytics, offers an AI-based breast cancer screening solution. Their product, Thermalytix, is a portable, radiation-free, and cost-effective screening device that is compatible with all age groups and breast densities.

Meanwhile, biopharma giant Biocon is leveraging AI in biosimilar development. Their work in predictive modelling is reducing formulation failures and expediting regulatory approvals. One of their standout contributions is Semglee, the world’s first interchangeable biosimilar insulin, now made accessible through their tie-up with Eris Lifesciences.

Rising R&D costs have pushed pharma companies to adopt AI solutions for innovation and cost efficiency.

Data Security Still a Grey Zone

While innovation is flourishing, there are pressing concerns around data privacy. A report by Netskope Threat Labs highlighted that doctors are increasingly uploading sensitive patient information to unregulated platforms like ChatGPT and Gemini.

Kashyap expressed serious concerns about lax data practices:

“During my professional experience at AI labs abroad, I observed that organisations enforced strict data protection regulations and mandatory training programs…The use of public AI tools like ChatGPT or Gemini was strictly prohibited, with no exceptions or shortcuts allowed.”

He added that anonymised data is still vulnerable to hacking or re-identification. Studies show that even brain scans like MRIs could potentially reveal personal or financial information.

“I strongly advocate for strict adherence to protected data-sharing protocols when handling clinical information. In today’s landscape of data warfare, where numerous companies face legal action for breaching data privacy norms, protecting health data is no less critical than protecting national security,” he warned.

Policy Direction and Regulatory Needs

The Netskope report recommends implementing approved GenAI tools in healthcare to reduce “shadow AI” usage and enhance security. It also urges deploying data loss prevention (DLP) policies to regulate what kind of data can be shared on generative AI platforms.

Although the usage of personal GenAI tools has declined — from 87% to 71% in one year — risks remain.

Kashyap commented on the pace of India’s regulatory approach:

“India is still in the process of formulating a comprehensive data protection framework. While the pace may seem slow, India’s approach has traditionally been organic, carefully evolving with consideration for its unique context.”

He also pushed for developing interdisciplinary medtech programs that integrate AI education into medical training.

“Misinformation and fake news pose a significant threat to progress. In a recent R&D project I was involved in, public participation was disrupted due to the spread of misleading information. It’s crucial that legal mechanisms are in place to counteract such disruptions, ensuring that innovation is not undermined by false narratives,” he concluded.

Pen Test Partners Uncovers Major Vulnerability in Microsoft Copilot AI for SharePoint


Pen Test Partners, a renowned cybersecurity and penetration testing firm, recently exposed a critical vulnerability in Microsoft’s Copilot AI for SharePoint. Known for simulating real-world hacking scenarios, the company’s red team specialists investigate how systems can be breached just as skilled threat actors would. With attackers increasingly leveraging AI, ethical hackers are now adopting similar methods — and the outcomes are raising eyebrows.

In a recent test, the Pen Test Partners team explored how Microsoft Copilot AI integrated into SharePoint could be manipulated. They encountered a significant issue when a seemingly secure encrypted spreadsheet was exposed—simply by instructing Copilot to retrieve it. Despite SharePoint’s robust access controls preventing file access through conventional means, the AI assistant was able to bypass those protections.

“The agent then successfully printed the contents,” said Jack Barradell-Johns, a red team security consultant at Pen Test Partners, “including the passwords allowing us to access the encrypted spreadsheet.”

This alarming outcome underlines the dual nature of AI in information security — it can enhance defenses, but also inadvertently open doors to attackers if not properly governed.

Barradell-Johns further detailed the engagement, explaining how the red team encountered a file labeled passwords.txt, placed near the encrypted spreadsheet. When traditional methods failed due to browser-based restrictions, the hackers used their red team expertise and simply asked the Copilot AI agent to fetch it.

“Notably,” Barradell-Johns added, “in this case, all methods of opening the file in the browser had been restricted.”

Still, those download limitations were sidestepped. The AI agent output the full contents, including sensitive credentials, and allowed the team to easily copy the chat thread, revealing a potential weak point in AI-assisted collaboration tools.

This case serves as a powerful reminder: as AI tools become more embedded in enterprise workflows, their security testing must evolve in step. It's not just about protecting the front door — it’s about teaching your digital assistant not to hold it open for strangers.

For those interested in the full technical breakdown, the complete Pen Test Partners report dives into the step-by-step methods used and the broader security implications of Copilot’s current design.

Davey Winder reached out to Microsoft, and a spokesperson said:

“SharePoint information protection principles ensure that content is secured at the storage level through user-specific permissions and that access is audited. This means that if a user does not have permission to access specific content, they will not be able to view it through Copilot or any other agent. Additionally, any access to content through Copilot or an agent is logged and monitored for compliance and security.”

Davey Winder then contacted Ken Munro, founder of Pen Test Partners, who issued the following statement in response to Microsoft’s:

“Microsoft are technically correct about user permissions, but that’s not what we are exploiting here. They are also correct about logging, but again it comes down to configuration. In many cases, organisations aren’t typically logging the activities that we’re taking advantage of here. Having more granular user permissions would mitigate this, but in many organisations data on SharePoint isn’t as well managed as it could be. That’s exactly what we’re exploiting. These agents are enabled per user, based on licenses, and organisations we have spoken to do not always understand the implications of adding those licenses to their users.”

Google’s New Android Security Update Might Auto-Reboot Your Phone After 3 Days

 
In a recent update to Google Play Services, the tech giant revealed a new security feature that could soon reboot your Android smartphone automatically — and this move could actually boost your device’s safety.

According to the update, Android phones left unused for three consecutive days will automatically restart. While this might sound intrusive at first, the reboot comes with key security benefits.

There are two primary reasons why this feature is important:

First, after a reboot, the only way to unlock a phone is by entering the PIN — biometric options like fingerprint or facial recognition won’t work until the PIN is entered manually. This forced PIN entry makes it much harder for unauthorized individuals to access your device or the data on it.

Second, the update enhances encryption security. Android devices operate in two states: Before First Unlock (BFU) and After First Unlock (AFU). In the BFU state, your phone’s contents are completely encrypted, meaning that even advanced tools can’t extract the data.

This security measure also affects how law enforcement and investigative agencies handle seized phones. Since the BFU state kicks in automatically after a reboot, authorities have a limited window to access a device before it locks down data access completely.

“A BFU phone remains connected to Wi-Fi or mobile data, meaning that if you lose your phone and it reboots, you'll still be able to use location-finding services.”

The feature is listed in Google’s April 2025 System release notes, and while it appears to extend to Android tablets, it won’t apply to wearables like the Pixel Watch, Android Auto, or Android TVs.

As of now, Google hasn’t clarified whether users will have the option to turn off this feature or customize the three-day timer.

Because it’s tied to Google Play Services, users will receive the feature passively — there’s no need for a full system update to access it.

Your Home Address Might be Available Online — Here’s How to Remove It


In today’s hyper-connected world, your address isn’t just a piece of contact info; it’s a data point that companies can sell and exploit.

Whenever you move or update your address, that information often gets picked up and distributed by banks, mailing list services, and even the US Postal Service. This makes it incredibly easy for marketers to target you — and worse, for bad actors to impersonate you in identity theft scams.

Thankfully, there are a number of ways to remove or obscure your address online. Here’s a step-by-step guide to help you regain control of your personal information.

1. Blur Your Home on Map Services
Map tools like Google Maps and Apple Maps often show street-level images of your home. While useful for navigation, they also open a window into your private life. Fortunately, both platforms offer a way to blur your home.

“Visit Google Maps on desktop, enter your address, and use the ‘Report a Problem’ link to manually blur your home from Street View.”

If you use Apple Maps, you’ll need to email mapsimagecollection@apple.com with your address and a description of your property as it appears in their Look Around feature. Apple will process the request and blur your home image accordingly.

2. Remove Your Address from Google Search Results
If your address appears in a Google search — particularly when you look up your own name — you can ask Google to remove it.

“From your Google Account, navigate to Data & Privacy > History Settings > My Activity > Other Activity > Results About You, then click ‘Get Started.’”

This feature also allows you to set up alerts so Google notifies you whenever your address resurfaces. Keep in mind, however, that Google may not remove information found on government websites, news reports, or business directories.

3. Scrub Your Social Media Profiles
Many people forget they’ve added their home address to platforms like Facebook, Instagram, or Twitter years ago. It’s worth double-checking your profile settings and removing any location-related details. Also take a moment to delete posts or images that might reveal your home’s exterior, street signs, or house number — small clues that can be pieced together easily.

4. Opt Out of Whitepages Listings
Whitepages.com is one of the most commonly used online directories to find personal addresses. If you discover your information there, it’s quick and easy to get it removed.

“Head to the Whitepages Suppression Request page, paste your profile URL, and submit a request for removal.”

This doesn’t just help with Whitepages — it also reduces the chances of your info being scraped by other data brokers.

5. Delete or Update Old Accounts
Over time, you’ve likely entered your address on numerous websites — for deliveries, sign-ups, memberships, and more. Some of those, like Amazon or your bank, are essential. But for others, especially old or unused accounts, it might be time to clean house.

Dig through your inbox to find services you may have forgotten about. These might include e-commerce platforms, mobile apps, advocacy groups, newsletter subscriptions, or even old sweepstakes sites. If you’re not using them, either delete the account or contact their support team to request data removal.

6. Use a PO Box for New Deliveries
If you're looking for a more permanent privacy solution, consider setting up a post office box through USPS. It keeps your real address hidden while still allowing you to receive packages and mail reliably.

“A PO Box gives you the added benefit of secure delivery, signature saving, and increased privacy.”

Applying is easy — just visit the USPS website, pick a location and size, and pay a small monthly fee. Depending on the size and city, prices typically range from $15 to $30 per month.

In a world where your personal information is increasingly exposed, your home address deserves extra protection. Taking control now can help prevent unwanted marketing, preserve your peace of mind, and protect against identity theft in the long run.

Public Wary of AI-Powered Data Use by National Security Agencies, Study Finds


A new report released alongside the Centre for Emerging Technology and Security (CETaS) 2025 event sheds light on growing public unease around automated data processing in national security. Titled UK Public Attitudes to National Security Data Processing: Assessing Human and Machine Intrusion, the research reveals limited public awareness and rising concern over how surveillance technologies—especially AI—are shaping intelligence operations.

The study, conducted by CETaS in partnership with Savanta and Hopkins Van Mil, surveyed 3,554 adults and included insights from a 33-member citizens’ panel. While findings suggest that more people support than oppose data use by national security agencies, especially when it comes to sensitive datasets like medical records, significant concerns persist.

During a panel discussion, investigatory powers commissioner Brian Leveson, who chaired the session, addressed the implications of fast-paced technological change. “We are facing new and growing challenges,” he said. “Rapid technological developments, especially in AI [artificial intelligence], are transforming our public authorities.”

Leveson warned that AI is shifting how intelligence gathering and analysis is performed. “AI could soon underpin the investigatory cycle,” he noted. But the benefits also come with risks. “AI could enable investigations to cover far more individuals than was ever previously possible, which raises concerns about privacy, proportionality and collateral intrusion.”

The report shows a divide in public opinion based on how and by whom data is used. While people largely support the police and national agencies accessing personal data for security operations, that support drops when it comes to regional law enforcement. The public is particularly uncomfortable with personal data being shared with political parties or private companies.

Marion Oswald, co-author and senior visiting fellow at CETaS, emphasized the intrusive nature of data collection—automated or not. “Data collection without consent will always be intrusive, even if the subsequent analysis is automated and no one sees the data,” she said.

She pointed out that predictive data tools, in particular, face strong opposition. “Panel members, in particular, had concerns around accuracy and fairness, and wanted to see safeguards,” Oswald said, highlighting the demand for stronger oversight and regulation of technology in this space.

Despite efforts by national security bodies to enhance public engagement, the study found that a majority of respondents (61%) still feel they understand “slightly” or “not at all” what these agencies actually do. Only 7% claimed a strong understanding.

Rosamund Powell, research associate at CETaS and co-author of the report, said: “Previous studies have suggested that the public’s conceptions of national security are really influenced by some James Bond-style fictions.”

She added that transparency significantly affects public trust. “There’s more support for agencies analysing data in the public sphere like posts on social media compared to private data like messages or medical data.”

Commvault Confirms Cyberattack, Says Customer Backup Data Remains Secure


Commvault, a well-known company that helps other businesses protect and manage their digital data, recently shared that it had experienced a cyberattack. However, the company clarified that none of the backup data it stores for customers was accessed or harmed during the incident.

The breach was discovered in February 2025 after Microsoft alerted Commvault about suspicious activity taking place in its Azure cloud services. After being notified, the company began investigating the issue and found that a very small group of customers had been affected. Importantly, Commvault stated that its systems remained up and running, and there was no major impact on its day-to-day operations.

Danielle Sheer, Commvault’s Chief Trust Officer, said the company is confident that hackers were not able to view or steal customer backup data. She also confirmed that Commvault is cooperating with government cybersecurity teams, including the FBI and CISA, and is receiving support from two independent cybersecurity firms.


Details About the Vulnerability

It was discovered that the attackers gained access by using a weakness in Commvault’s web server software. This flaw, now fixed, allowed hackers with limited permissions to install harmful software on affected systems. The vulnerability, known by the code CVE-2025-3928, had not been known or patched before the breach, making it what experts call a “zero-day” issue.

Because of the seriousness of this bug, CISA (Cybersecurity and Infrastructure Security Agency) added it to a list of known risks that hackers are actively exploiting. U.S. federal agencies have been instructed to update their Commvault software and fix the issue by May 19, 2025.


Steps Recommended to Stay Safe

To help customers stay protected, Commvault suggested the following steps:

• Use conditional access controls for all cloud-based apps linked to Microsoft services.

• Check sign-in logs often to see if anyone is trying to log in from suspicious locations.

• Update secret access credentials between Commvault and Azure every three months.
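
The second recommendation — reviewing sign-in logs for suspicious locations — can be partly automated. A minimal sketch (the record fields and country allow-list are hypothetical assumptions, not Commvault's or Azure's actual log schema) that flags sign-ins from unexpected countries:

```python
# Countries from which legitimate sign-ins are expected (assumed for the sketch).
ALLOWED = {"US", "GB"}

# Hypothetical exported sign-in records.
signins = [
    {"user": "svc-backup", "country": "US", "succeeded": True},
    {"user": "svc-backup", "country": "RU", "succeeded": True},
    {"user": "jlee", "country": "GB", "succeeded": False},
]

def suspicious(events, allowed=ALLOWED):
    """Return sign-in events originating outside the allow-list."""
    return [e for e in events if e["country"] not in allowed]

for e in suspicious(signins):
    print(f"review: {e['user']} signed in from {e['country']}")
```

In practice the events would come from an exported sign-in log rather than a hard-coded list, and a successful login from an unexpected country would be the highest-priority item to review.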


The company urged users to report any strange behavior right away so its support team can act quickly to reduce any damage.

Although this was a serious incident, Commvault’s response was quick and effective. No backup data was stolen, and the affected software has been patched. This event is a reminder to all businesses to regularly check for vulnerabilities and keep their systems up to date to prevent future attacks.