
How Biometric Data Collection Affects Workers

 


Modern workplaces are beginning to track more than just employee hours or tasks. Today, many employers are collecting very personal information about workers' bodies and behaviors. This includes data like fingerprints, eye scans, heart rates, sleeping patterns, and even the way someone walks or types. All of this is made possible by tools like wearable devices, security cameras, and AI-powered monitoring systems.

The reason companies use these methods varies. Some want to increase workplace safety. Others hope to improve employee health or get discounts from insurance providers. Many believe that collecting this kind of data helps boost productivity and efficiency. At first glance, these goals might sound useful. But there are real risks to both workers and companies that many overlook.

New research shows that being watched in such personal ways can lead to fear and discomfort. Employees may feel anxious or unsure about their future at the company. They worry their job might be at risk if the data is misunderstood or misused. This sense of insecurity can impact mental health, lower job satisfaction, and make people less motivated to perform well.

There have already been legal consequences. In one major case, a railway company had to pay millions to settle a lawsuit after workers claimed their fingerprints were collected without consent. Other large companies have also faced similar claims. The common issue in these cases is the lack of clear communication and proper approval from employees.

Even when health programs are framed as helpful, they can backfire. For example, some workers are offered lower health insurance costs if they participate in screenings or share fitness data. But not everyone feels comfortable handing over private health details. Some feel pressured to agree just to avoid being judged or left out. In certain cases, those who chose not to participate were penalized. One university faced a lawsuit for this and later agreed to stop the program after public backlash.

Monitoring employees’ behavior can also affect how they work. For instance, in one warehouse, cameras were installed to track walking routes and improve safety. However, workers felt watched and lost the freedom to help each other or move around in ways that felt natural. Instead of making the workplace better, the system made workers feel less trusted.

Laws are slowly catching up, but in many places, current rules don’t fully protect workers from this level of personal tracking. Just because something is technically legal does not mean it is ethical or wise.

Before collecting sensitive data, companies must ask a simple but powerful question: is this really necessary? If the benefits only go to the employer, while workers feel stressed or powerless, the program might do more harm than good. In many cases, choosing not to collect such data is the better and more respectful option.


Want to Leave Facebook? Do this.

Confused about leaving Facebook?

Many people are changing their social media habits and opting out of services. Facebook has seen a large exodus of users deserting the platform after the announcement in March that Meta was ending independent fact-checking. Fact-checking has been replaced with community notes, which let users add context to potentially false or misleading posts.

Users with years of photos and posts on Facebook are often unsure how to collect their data before removing their accounts. If that sounds like you, this post will help you delete Facebook permanently while taking all your information with you on the way out.

How to delete your Facebook account?

If you no longer want to be on Facebook, deleting your account is the only way to remove yourself from the platform completely. If you are not sure, deactivating your account lets you take a break from Facebook without deleting it.

Make sure to remove third-party Facebook logins before deleting your account. 

How to disconnect third-party apps?

Third-party apps like DoorDash and Spotify allow you to log in using your Facebook account. This lets you sign in without remembering another password, but if you’re planning on deleting Facebook, you need to update your login settings first. Once your account is gone, there will be no Facebook account left to log in through.

Fortunately, Facebook's settings make it simple to see which of your sites and applications are connected to your account and to disconnect them before you delete it. Once you disconnect those websites and applications from Facebook, you will need to adjust how you log in to them.

Visit each application or website to set a new password or passkey, or switch to another single sign-on option, such as Google.

How is deleting different from deactivating a Facebook account?

If you want to step away from Facebook, you have two choices: delete your account permanently, or deactivate it temporarily.

PumaBot: A New Malware That Sneaks into Smart Devices Using Weak Passwords

 


A recently found malware called PumaBot is putting many internet-connected devices at risk. This malicious software is designed to attack smart systems like surveillance cameras, especially those that use the Linux operating system. It sneaks in by guessing weak passwords and then quietly takes over the system.


How PumaBot Finds Its Victims

Unlike many other threats that randomly scan the internet looking for weak points, PumaBot follows specific instructions from a remote command center. It receives a list of target IP addresses from its control server and begins attempting to log in over SSH — a protocol that lets people access devices remotely — using common usernames and passwords.
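One practical way to spot this kind of activity on a Linux device is to count failed SSH logins per source address. The short Python sketch below assumes a Debian-style /var/log/auth.log (the path and log format vary by distribution) and is meant as an illustration, not a full detection tool.

```python
import re
from collections import Counter

# Path assumed for Debian/Ubuntu-style systems; adjust for your distro
# (for example, RHEL-based systems log to /var/log/secure instead).
AUTH_LOG = "/var/log/auth.log"

# sshd records failed password attempts in lines like:
#   "Failed password for root from 203.0.113.7 port 52344 ssh2"
FAILED_LOGIN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def count_failed_logins(path: str) -> Counter:
    """Count failed SSH login attempts per source IP address."""
    counts = Counter()
    with open(path, errors="ignore") as log:
        for line in log:
            match = FAILED_LOGIN.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    # A single address with a long streak of failures is a classic
    # sign of the brute-force behaviour described above.
    for ip, attempts in count_failed_logins(AUTH_LOG).most_common(10):
        print(f"{ip}: {attempts} failed attempts")
```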

Experts believe it may be going after security and traffic camera systems that belong to a company called Pumatronix, based on clues found in the malware’s code.


What Happens After It Breaks In

Once PumaBot gets into a device, it runs a quick check to make sure it's not inside a fake system set up by researchers (known as a honeypot). If it passes that test, the malware places a file on the device and creates a special service to make sure it stays active, even after the device is restarted.

To keep the door open for future access, PumaBot adds its own secret login credentials. This way, the hackers can return to the device later, even if some files are removed.
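Because the malware persists through an extra service and planted login credentials, comparing those two places against a known-good baseline can help reveal a compromise. The following sketch simply lists installed systemd service units and SSH authorized_keys entries for manual review; the paths are common defaults and may differ on embedded devices.

```python
import glob
import os

# Common locations for systemd unit files and SSH authorized keys;
# embedded devices may keep these elsewhere.
UNIT_DIRS = ["/etc/systemd/system", "/usr/lib/systemd/system"]
KEY_FILES = (glob.glob("/root/.ssh/authorized_keys")
             + glob.glob("/home/*/.ssh/authorized_keys"))

def list_service_units() -> list:
    """Return all .service unit files found in the usual systemd directories."""
    units = []
    for directory in UNIT_DIRS:
        if os.path.isdir(directory):
            units.extend(sorted(glob.glob(os.path.join(directory, "*.service"))))
    return units

def list_authorized_keys() -> dict:
    """Return authorized_keys entries per file for manual review."""
    entries = {}
    for path in KEY_FILES:
        with open(path, errors="ignore") as f:
            entries[path] = [line.strip() for line in f if line.strip()]
    return entries

if __name__ == "__main__":
    print("Installed service units (compare against a known-good baseline):")
    for unit in list_service_units():
        print(" ", unit)
    print("\nauthorized_keys entries:")
    for path, keys in list_authorized_keys().items():
        print(f"  {path}: {len(keys)} key(s)")
```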


What the Malware Can Do

After it takes control, PumaBot can be told to:

• Steal data from the device

• Install other harmful software

• Collect login details from users

• Send stolen information back to the attackers

One tool it uses captures usernames and passwords typed into the device, saves them in a hidden file, and sends them to the hackers. Once the data is taken, the malware deletes the file to cover its tracks.


Why PumaBot Is Concerning

PumaBot is different from other malware. Many botnets simply use infected devices to send spam or run large-scale attacks. But PumaBot seems more focused and selective. Instead of causing quick damage, it slowly builds access to sensitive networks — which could lead to bigger security breaches later.


How to Protect Your Devices

If you use internet-connected gadgets like cameras or smart appliances, follow these safety steps:

1. Change factory-set passwords immediately

2. Keep device software updated

3. Use firewalls to block strange access

4. Put smart devices on a different Wi-Fi network than your main systems

By following these tips, you can lower your chances of being affected by malware like PumaBot.
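If you want to act on step 1 above, you can test from your own network whether a device you own still accepts a factory-default login. The sketch below uses the Python paramiko library; the device address and credential pairs are placeholders you would replace with your own, and it should only ever be pointed at devices you are authorized to test.

```python
import paramiko

# Hypothetical device address and factory-default credential pairs --
# replace these with the devices and defaults relevant to your own network.
DEVICE = "192.168.1.50"
DEFAULT_CREDENTIALS = [("admin", "admin"), ("root", "12345")]

def accepts_default_login(host: str, username: str, password: str) -> bool:
    """Return True if the device accepts the given username/password over SSH."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect(host, port=22, username=username,
                       password=password, timeout=5)
        return True
    except Exception:
        return False
    finally:
        client.close()

if __name__ == "__main__":
    for user, pwd in DEFAULT_CREDENTIALS:
        if accepts_default_login(DEVICE, user, pwd):
            print(f"WARNING: {DEVICE} still accepts default login {user}/{pwd}")
```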

DragonForce Targets MSPs Using SimpleHelp Exploit, Expands Ransomware Reach

 


The DragonForce ransomware group has breached a managed service provider (MSP) and leveraged its SimpleHelp remote monitoring and management (RMM) tool to exfiltrate data and launch ransomware attacks on downstream clients.

Cybersecurity firm Sophos, which was brought in to assess the situation, believes that attackers exploited a set of older vulnerabilities in SimpleHelp—specifically CVE-2024-57727, CVE-2024-57728, and CVE-2024-57726—to gain unauthorized access.

SimpleHelp is widely adopted by MSPs to deliver remote support and manage software deployment across client networks. According to Sophos, DragonForce initially used the compromised tool to perform system reconnaissance—gathering details such as device configurations, user accounts, and network connections from the MSP's customers.

The attackers then moved to extract sensitive data and execute encryption routines. While Sophos’ endpoint protection successfully blocked the deployment on one customer's network, others were not as fortunate. Multiple systems were encrypted, and data was stolen to support double-extortion tactics.

In response, Sophos has released indicators of compromise (IOCs) to help other organizations defend against similar intrusions.
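Published IOCs typically include file hashes. As a rough sketch of how a defender might use them, the Python snippet below sweeps a directory for files whose SHA-256 matches a known-bad list; the hash value and scan path shown are placeholders, not actual DragonForce indicators.

```python
import hashlib
from pathlib import Path

# Placeholder hash -- substitute the SHA-256 values from the published IOC list.
KNOWN_BAD_SHA256 = {
    "0" * 64,  # example placeholder, not a real indicator
}

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(directory: str) -> list:
    """Return files under `directory` whose SHA-256 matches a known-bad hash."""
    hits = []
    for path in Path(directory).rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_BAD_SHA256:
            hits.append(path)
    return hits

if __name__ == "__main__":
    # Placeholder path; point this at the directories you want to sweep.
    for match in scan("C:/Temp"):
        print("Possible IOC match:", match)
```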

MSPs have consistently been attractive targets for ransomware groups due to the potential for broad, multi-company impact from a single entry point. Some threat actors have even tailored their tools and exploits around platforms commonly used by MSPs, including SimpleHelp, ConnectWise ScreenConnect, and Kaseya. This trend has previously led to large-scale incidents, such as the REvil ransomware attack on Kaseya that affected over 1,000 businesses.

DragonForce's Expanding Threat Profile

The DragonForce group is gaining prominence following a string of attacks on major UK retailers. Their tactics reportedly resemble those of Scattered Spider, a well-known cybercrime group.

As first reported by BleepingComputer, DragonForce ransomware was used in an attack on Marks & Spencer. Shortly after, the same group targeted another UK retailer, Co-op, where a substantial volume of customer data was compromised.

BleepingComputer had earlier noted that DragonForce is positioning itself as a leader in the ransomware-as-a-service (RaaS) space, offering a white-label version of its encryptor for affiliates.

With a rapidly expanding victim list and a business model that appeals to affiliates, DragonForce is cementing its status as a rising and formidable presence in the global ransomware ecosystem.

Evaly Website Allegedly Hacked Amid Legal Turmoil, Hacker Threatens to Leak Customer Data

 

Evaly, the controversial e-commerce platform based in Bangladesh, appeared to fall victim to a cyberattack on 24 May 2025. Visitors to the site were met with a stark warning reportedly left by a hacker, claiming to have obtained the platform’s customer data and urging Evaly staff to make contact.

Displayed in bold capital letters, the message read: “HACKED, I HAVE ALL CUSTOMER DATA. EVALY STAFF PLEASE CONTACT 00watch@proton.me.” The post included a threat, stating, “OR ELSE I WILL RELEASE THIS DATA TO THE PUBLIC,” signaling the potential exposure of private user information if the hacker’s demand is ignored.

It remains unclear what specific data was accessed or whether sensitive financial or personal details were involved. So far, Evaly has not released any official statement addressing the breach or the nature of the compromised information.

This development comes on the heels of a fresh wave of legal action against Evaly and its leadership. On 13 April 2025, state-owned Bangladesh Sangbad Sangstha (BSS) reported that a Dhaka court handed down three-year prison sentences to Evaly’s managing director, Mohammad Rassel, and chairperson, Shamima Nasrin, in a fraud case.

Dhaka Metropolitan Magistrate M Misbah Ur Rahman delivered the judgment, which also included fines of BDT 5,000 each. The court issued arrest warrants for both executives following the ruling.

The case was filed by a customer, Md Rajib, who alleged that he paid BDT 12.37 lakh for five motorcycles that were never delivered. The transaction took place through Evaly’s website, which had gained attention for its deep discount offers and aggressive promotional tactics.

AI is Accelerating India's Healthtech Revolution, but Data Privacy Concerns Loom Large

 

India’s healthcare infrastructure is undergoing a remarkable digital transformation, driven by emerging technologies like artificial intelligence (AI), machine learning, and big data. These advancements are not only enhancing accessibility and efficiency but also setting the foundation for a more equitable health system. According to the World Economic Forum (WEF), AI is poised to account for 30% of new drug discoveries by 2025 — a major leap for the pharmaceutical industry.

As outlined in the Global Outlook and Forecast 2025–2030, the market for AI in drug discovery is projected to grow from $1.72 billion in 2024 to $8.53 billion by 2030, clocking a CAGR of 30.59%. Major tech players like IBM Watson, NVIDIA, and Google DeepMind are partnering with pharmaceutical firms to fast-track AI-led breakthroughs.
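For readers who want to check that figure, the compound annual growth rate follows directly from the start and end values over the six-year span:

```python
# CAGR = (end_value / start_value) ** (1 / years) - 1
start, end, years = 1.72, 8.53, 6    # USD billions, 2024 -> 2030
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR = {cagr:.2%}")          # roughly 30.6%, matching the forecast
```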

Beyond R&D, AI is transforming clinical workflows by digitising patient records and decentralising models to improve diagnostic precision while protecting privacy.

During an interview with Analytics India Magazine (AIM), Rajan Kashyap, Assistant Professor at the National Institute of Mental Health and Neuro Sciences (NIMHANS), shared insights into the government’s push toward innovation: “Increasing the number of seats in medical and paramedical courses, implementing mandatory rural health services, and developing Indigenous low-cost MRI machines are contributing significantly to hardware development in the AI innovation cycle.”

Tech-Driven Healthcare Innovation

Kashyap pointed to major initiatives like the GenomeIndia project, cVEDA, and the Ayushman Bharat Digital Mission as critical steps toward advancing India’s clinical research capabilities. He added that initiatives in genomics, AI, and ML are already improving clinical outcomes and streamlining operations.

He also spotlighted BrainSightAI, a Bengaluru-based startup that raised $5 million in a Pre-Series A round to scale its diagnostic tools for neurological conditions. The company aims to expand across Tier 1 and 2 cities and pursue FDA certification to access global healthcare markets.

Another innovator, Niramai Health Analytics, offers an AI-based breast cancer screening solution. Their product, Thermalytix, is a portable, radiation-free, and cost-effective screening device that is compatible with all age groups and breast densities.

Meanwhile, biopharma giant Biocon is leveraging AI in biosimilar development. Their work in predictive modelling is reducing formulation failures and expediting regulatory approvals. One of their standout contributions is Semglee, the world’s first interchangeable biosimilar insulin, now made accessible through their tie-up with Eris Lifesciences.

Rising R&D costs have pushed pharma companies to adopt AI solutions for innovation and cost efficiency.

Data Security Still a Grey Zone

While innovation is flourishing, there are pressing concerns around data privacy. A report by Netskope Threat Labs highlighted that doctors are increasingly uploading sensitive patient information to unregulated platforms like ChatGPT and Gemini.

Kashyap expressed serious concerns about lax data practices:

“During my professional experience at AI labs abroad, I observed that organisations enforced strict data protection regulations and mandatory training programs…The use of public AI tools like ChatGPT or Gemini was strictly prohibited, with no exceptions or shortcuts allowed.”

He added that anonymised data is still vulnerable to hacking or re-identification. Studies show that even brain scans like MRIs could potentially reveal personal or financial information.

“I strongly advocate for strict adherence to protected data-sharing protocols when handling clinical information. In today’s landscape of data warfare, where numerous companies face legal action for breaching data privacy norms, protecting health data is no less critical than protecting national security,” he warned.

Policy Direction and Regulatory Needs

The Netskope report recommends implementing approved GenAI tools in healthcare to reduce “shadow AI” usage and enhance security. It also urges deploying data loss prevention (DLP) policies to regulate what kind of data can be shared on generative AI platforms.
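As a rough illustration of what such a DLP policy can do, the sketch below scans an outbound prompt for patterns that resemble sensitive identifiers before it reaches a generative AI endpoint. The patterns are simplified placeholders; a production DLP system would use far more robust detectors.

```python
import re

# Simplified placeholder patterns; real DLP policies use vetted detectors.
PATTERNS = {
    "aadhaar-like number": re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b"),
    "phone number": re.compile(r"(?:\+91[\s-]?)?[6-9]\d{9}"),
    "medical record tag": re.compile(r"\bMRN[:#]?\s?\d{6,}\b", re.IGNORECASE),
}

def redact(text: str):
    """Return the text with matches masked, plus the rule names that fired."""
    fired = []
    for name, pattern in PATTERNS.items():
        if pattern.search(text):
            fired.append(name)
            text = pattern.sub("[REDACTED]", text)
    return text, fired

if __name__ == "__main__":
    prompt = "Summarise patient MRN 4820917, phone +91 9876543210."
    safe_prompt, rules = redact(prompt)
    print(safe_prompt)            # prompt with sensitive spans masked
    print("Rules fired:", rules)  # which detectors triggered
```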

Although the usage of personal GenAI tools has declined — from 87% to 71% in one year — risks remain.

Kashyap commented on the pace of India’s regulatory approach:

“India is still in the process of formulating a comprehensive data protection framework. While the pace may seem slow, India’s approach has traditionally been organic, carefully evolving with consideration for its unique context.”

He also pushed for developing interdisciplinary medtech programs that integrate AI education into medical training.

“Misinformation and fake news pose a significant threat to progress. In a recent R&D project I was involved in, public participation was disrupted due to the spread of misleading information. It’s crucial that legal mechanisms are in place to counteract such disruptions, ensuring that innovation is not undermined by false narratives,” he concluded.

Pen Test Partners Uncovers Major Vulnerability in Microsoft Copilot AI for SharePoint

 

Pen Test Partners, a renowned cybersecurity and penetration testing firm, recently exposed a critical vulnerability in Microsoft’s Copilot AI for SharePoint. Known for simulating real-world hacking scenarios, the company’s red team specialists investigate how systems can be breached, just as skilled threat actors would in real time. With attackers increasingly leveraging AI, ethical hackers are now adopting similar methods—and the outcomes are raising eyebrows.

In a recent test, the Pen Test Partners team explored how Microsoft Copilot AI integrated into SharePoint could be manipulated. They encountered a significant issue when a seemingly secure encrypted spreadsheet was exposed—simply by instructing Copilot to retrieve it. Despite SharePoint’s robust access controls preventing file access through conventional means, the AI assistant was able to bypass those protections.

“The agent then successfully printed the contents,” said Jack Barradell-Johns, a red team security consultant at Pen Test Partners, “including the passwords allowing us to access the encrypted spreadsheet.”

This alarming outcome underlines the dual nature of AI in information security—it can enhance defenses, but also inadvertently open doors to attackers if not properly governed.

Barradell-Johns further detailed the engagement, explaining how the red team encountered a file labeled passwords.txt, placed near the encrypted spreadsheet. When traditional methods failed due to browser-based restrictions, the hackers used their red team expertise and simply asked the Copilot AI agent to fetch it.

“Notably,” Barradell-Johns added, “in this case, all methods of opening the file in the browser had been restricted.”

Still, those download limitations were sidestepped. The AI agent output the full contents, including sensitive credentials, and allowed the team to easily copy the chat thread, revealing a potential weak point in AI-assisted collaboration tools.

This case serves as a powerful reminder: as AI tools become more embedded in enterprise workflows, their security testing must evolve in step. It's not just about protecting the front door—it’s about teaching your digital assistant not to hold it open for strangers.

For those interested in the full technical breakdown, the complete Pen Test Partners report dives into the step-by-step methods used and broader security implications of Copilot’s current design.

Davey Winder reached out to Microsoft, and a spokesperson said:

“SharePoint information protection principles ensure that content is secured at the storage level through user-specific permissions and that access is audited. This means that if a user does not have permission to access specific content, they will not be able to view it through Copilot or any other agent. Additionally, any access to content through Copilot or an agent is logged and monitored for compliance and security.”

Davey Winder then contacted Ken Munro, founder of Pen Test Partners, who issued the following statement addressing the points Microsoft made.

“Microsoft are technically correct about user permissions, but that’s not what we are exploiting here. They are also correct about logging, but again it comes down to configuration. In many cases, organisations aren’t typically logging the activities that we’re taking advantage of here. Having more granular user permissions would mitigate this, but in many organisations data on SharePoint isn’t as well managed as it could be. That’s exactly what we’re exploiting. These agents are enabled per user, based on licenses, and organisations we have spoken to do not always understand the implications of adding those licenses to their users.”

Google’s New Android Security Update Might Auto-Reboot Your Phone After 3 Days

 

In a recent update to Google Play Services, the tech giant revealed a new security feature that could soon reboot your Android smartphone automatically — and this move could actually boost your device’s safety.

According to the update, Android phones left unused for three consecutive days will automatically restart. While this might sound intrusive at first, the reboot comes with key security benefits.

There are two primary reasons why this feature is important:

First, after a reboot, the only way to unlock a phone is by entering the PIN — biometric options like fingerprint or facial recognition won’t work until the PIN has been entered manually. A forced PIN entry makes it much harder for unauthorized individuals to access your device or the data on it.

Second, the update enhances encryption security. Android devices operate in two states: Before First Unlock (BFU) and After First Unlock (AFU). In the BFU state, your phone’s contents are completely encrypted, meaning that even advanced tools can’t extract the data.

This security measure also affects how law enforcement and investigative agencies handle seized phones. Since the BFU state kicks in automatically after a reboot, authorities have a limited window to access a device before it locks down data access completely.

“A BFU phone remains connected to Wi-Fi or mobile data, meaning that if you lose your phone and it reboots, you'll still be able to use location-finding services.”

The feature is listed in Google’s April 2025 System release notes, and while it appears to extend to Android tablets, it won’t apply to wearables like the Pixel Watch, or to Android Auto or Android TV devices.

As of now, Google hasn’t clarified whether users will have the option to turn off this feature or customize the three-day timer.

Because it’s tied to Google Play Services, users will receive the feature passively — there’s no need for a full system update to access it.