
AI Emotional Monitoring in the Workplace Raises New Privacy and Ethical Concerns

 

As artificial intelligence becomes more deeply woven into daily life, tools like ChatGPT have already demonstrated how appealing digital emotional support can be. While public discussions have largely focused on the risks of using AI for therapy—particularly for younger or vulnerable users—a quieter trend is unfolding inside workplaces. Increasingly, companies are deploying generative AI systems not just for productivity but to monitor emotional well-being and provide psychological support to employees. 

This shift accelerated after the pandemic reshaped workplaces and normalized remote communication. Now, industries including healthcare, corporate services and HR are turning to software that can identify stress, assess psychological health and respond to emotional distress. Unlike consumer-facing mental wellness apps, these systems sit inside corporate environments, raising questions about power dynamics, privacy boundaries and accountability. 

Some companies initially introduced AI-based counseling tools that mimic therapeutic conversation. Early research suggests people sometimes feel more validated by AI responses than by human interaction. One study found chatbot replies were perceived as equally or more empathetic than responses from licensed therapists. This is largely attributed to predictably supportive responses, lack of judgment and uninterrupted listening—qualities users say make it easier to discuss sensitive topics. 

Yet the workplace context changes everything. Studies show many employees hesitate to use employer-provided mental health tools due to fear that personal disclosures could resurface in performance reviews or influence job security. The concern is not irrational: some AI-powered platforms now go beyond conversation, analyzing emails, Slack messages and virtual meeting behavior to generate emotional profiles. These systems can detect tone shifts, estimate personal stress levels and map emotional trends across departments. 
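
To make that mechanism concrete, here is a minimal sketch of the kind of tone-shift scoring such a platform might run over message history. The word lists, sample messages, and scoring approach are illustrative assumptions, not any vendor's actual model:

    # Illustrative sketch only: a toy tone-shift detector over workplace messages.
    # The lexicon, message data, and scoring are hypothetical, not a real product's pipeline.
    from dataclasses import dataclass
    from statistics import mean

    NEGATIVE = {"overwhelmed", "exhausted", "frustrated", "stressed", "behind"}
    POSITIVE = {"great", "excited", "thanks", "happy", "progress"}

    @dataclass
    class Message:
        author: str
        text: str

    def sentiment(text: str) -> int:
        """Crude keyword score: +1 per positive word, -1 per negative word."""
        words = text.lower().split()
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    def tone_shift(messages: list[Message], window: int = 5) -> float:
        """Difference between the average tone of recent messages and earlier ones."""
        scores = [sentiment(m.text) for m in messages]
        if len(scores) < 2 * window:
            return 0.0
        return mean(scores[-window:]) - mean(scores[:window])

    if __name__ == "__main__":
        history = [Message("alice", "Great progress on the release, thanks everyone")] * 5 + \
                  [Message("alice", "Feeling overwhelmed and behind on everything")] * 5
        print(f"Tone shift: {tone_shift(history):+.2f}")  # negative value = deteriorating tone

Real products use machine-learning classifiers rather than word lists, which is exactly where the bias and misinterpretation risks discussed below enter the picture.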

One example involves workplace platforms using facial analytics to categorize emotional expression and assign wellness scores. While advocates claim this data can help organizations spot burnout and intervene early, critics warn it blurs the line between support and surveillance. The same system designed to offer empathy can simultaneously collect insights that may be used to evaluate morale, predict resignations or inform management decisions. 

Research indicates that constant monitoring can heighten stress rather than reduce it. Workers who know they are being analyzed tend to modulate behavior, speak differently or avoid emotional honesty altogether. The risk of misinterpretation is another concern: existing emotion-tracking models have demonstrated bias against marginalized groups, potentially leading to misread emotional cues and unfair conclusions. 

The growing use of AI-mediated emotional support raises broader organizational questions. If employees trust AI more than managers, what does that imply about leadership? And if AI becomes the primary emotional outlet, what happens to the human relationships workplaces rely on? 

Experts argue that AI can play a positive role, but only when paired with transparent data use policies, strict privacy protections and ethical limits. Ultimately, technology may help supplement emotional care—but it cannot replace the trust, nuance and accountability required to sustain healthy workplace relationships.

Apple’s Digital ID Tool Sparks Privacy Debate Despite Promised Security

 

Apple’s newly introduced Digital ID feature has quickly ignited a divide among users and cybersecurity professionals, with reactions ranging from excitement to deep skepticism. Announced earlier this week, the feature gives U.S. iPhone owners a way to present their passport directly from Apple Wallet at Transportation Security Administration checkpoints across more than 250 airports nationwide. Designed to replace the need for physical identity documents at select travel touchpoints, the rollout marks a major step in Apple’s broader effort to make digital credentials mainstream. But the move has sparked conversations about how willing society should be to entrust critical identity information to smartphones. 

On one side are supporters who welcome the convenience of leaving physical IDs at home, believing Apple’s security infrastructure offers a safer and more streamlined travel experience. On the other side are privacy advocates who fear that such technology could pave the way for increased surveillance and data misuse, especially if government agencies gain new avenues to track citizens. These concerns mirror wider debates already unfolding in regions like the United Kingdom and the European Union, where national and bloc-wide digital identity programs have faced opposition from civil liberties organizations. 

Apple states that its Digital ID system relies on advanced encryption and on-device storage to protect sensitive information from unauthorized access. Unlike cloud-based sharing models, Apple notes that passport data will remain confined to the user’s iPhone, and only the minimal information necessary for verification will be transmitted during identification checks. Authentication through Face ID or Touch ID is required to access the ID, aiming to ensure that no one else can view or alter the data. Apple has emphasized that it does not gain access to passport details and claims its design prioritizes privacy at every stage. 
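
A rough sketch of the data-minimization principle Apple describes appears below, assuming hypothetical field names and a stand-in signing key; it is illustrative only and not Apple's actual protocol:

    # Conceptual sketch of on-device data minimization, not Apple's real implementation.
    # Field names, the request format, and the signing key are illustrative assumptions.
    import hashlib
    import hmac
    import json

    PASSPORT = {  # stays on the device; never transmitted in full
        "full_name": "Jane Traveler",
        "passport_number": "X1234567",
        "nationality": "USA",
        "date_of_birth": "1990-01-01",
        "photo_reference": "photoref01",
    }

    DEVICE_KEY = b"device-held-secret"  # stand-in for hardware-backed key material

    def respond_to_check(requested_fields: list[str]) -> dict:
        """Release only the fields a checkpoint asks for, with an integrity tag."""
        disclosed = {k: PASSPORT[k] for k in requested_fields if k in PASSPORT}
        payload = json.dumps(disclosed, sort_keys=True).encode()
        tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
        return {"disclosed": disclosed, "integrity_tag": tag}

    if __name__ == "__main__":
        # A checkpoint might need only a name and photo reference, not the full document.
        print(respond_to_check(["full_name", "photo_reference"]))

The point of the sketch is the design principle Apple emphasizes: the full document never leaves the device, and each check receives only the fields it asked for.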

Despite these assurances, cybersecurity experts and digital rights advocates are unconvinced. Jason Bassler, co-founder of The Free Thought Project, argued publicly that increasing reliance on smartphone-based identity tools could normalize a culture of compromised privacy dressed up as convenience. He warned that once the public becomes comfortable with digital credentials, resistance to broader forms of monitoring may fade. Other specialists, such as Swiss security researcher Jean-Paul Donner, note that iPhone security is not impenetrable, and both hackers and law enforcement have previously circumvented device protections. 

Major organizations including the American Civil Liberties Union (ACLU), the Electronic Frontier Foundation (EFF), and the Center for Democracy & Technology (CDT) have also called for strict safeguards, insisting that identity systems must be designed to prevent authorities from tracking when or where identification is used. They argue that without explicit structural barriers to surveillance, the technology could be exploited in ways that undermine civil liberties. 

Whether Apple can fully guarantee the safety and independence of digital identity data remains an open question. As adoption expands and security is tested in practice, the debate over convenience versus privacy is unlikely to go away anytime soon. TechRadar is continuing to consult industry experts and will provide updates as more insights emerge.

Is Your Android Device Tracking You? Understanding Its Monitoring Methods

 

In discussions about how Android phones collect location and personal data, the focus usually falls on third-party apps rather than on Google's own built-in software. That awareness has grown as numerous apps have been found gathering significant information about users, a concern that intensifies when eerily targeted ads start appearing. Many people still worry that apps eavesdrop on private in-person conversations regardless of operating-system permission controls, a suspicion prominent enough that Instagram's head addressed it in a 2019 CBS News interview.

However, attention to third-party apps tends to overshadow the fact that Android and its integrated apps track users extensively. While much of this tracking aligns with user preferences, it results in a substantial accumulation of sensitive personal data on phones. Even for those trusting Google with their information, understanding the collected data and its usage remains crucial, especially considering the limited options available to opt out of this data collection.

For instance, a lesser-known feature lets Google Assistant detect where a car has been parked and send a notification with its location. The functionality relies largely on inference, so its accuracy varies, and Google does not publicize it widely; it illustrates how tech companies leverage personal data to produce results that can feel uncomfortably close to eavesdropping.

The ways Android phones track users were highlighted in an October 2021 Kaspersky blog post referencing a study by researchers from the University of Edinburgh and Trinity College Dublin. A list of installed apps may seem innocuous, but when coupled with other personal data it can reveal intimate details about users, such as their religion or mental health status. Fusing app presence with location data allows AI-driven inference to expose highly personal information.
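
As a toy illustration of that inference risk (the app names and app-to-trait mapping below are invented for the example, not drawn from the study):

    # Toy illustration of the inference risk described above; the app names and
    # category mapping are fabricated and do not come from the cited research.
    SENSITIVE_CATEGORIES = {
        "prayer_times_app": "religious affiliation",
        "mood_journal_app": "mental health status",
        "glucose_tracker_app": "medical condition",
    }

    def infer_attributes(installed_apps: list[str]) -> set[str]:
        """Map an installed-app list to the sensitive traits it may suggest."""
        return {SENSITIVE_CATEGORIES[a] for a in installed_apps if a in SENSITIVE_CATEGORIES}

    print(infer_attributes(["camera", "mood_journal_app", "prayer_times_app"]))
    # e.g. {'mental health status', 'religious affiliation'}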

Another focal point was the extensive collection of unique identifiers by Google and handset makers, tying users to specific devices. While some data collection genuinely helps with troubleshooting, identifiers such as the Google Advertising ID, device serial numbers, and SIM card details can be used to re-identify a person even after a phone number change, a factory reset, or the installation of a new ROM.
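
A toy example of why those identifiers matter is sketched below; the record layout and identifier values are invented for illustration and do not reflect any vendor's real telemetry:

    # Illustrative sketch of identifier-based re-linking; all identifiers are fabricated.
    records_before_reset = [
        {"ad_id": "aaaa-1111", "serial": "SN-42", "sim_iccid": "8901-555", "profile": "user_a"},
    ]
    records_after_reset = [
        # The advertising ID changed after the reset, but hardware identifiers did not.
        {"ad_id": "bbbb-2222", "serial": "SN-42", "sim_iccid": "8901-555", "profile": None},
    ]

    def relink(old: list[dict], new: list[dict]) -> list[tuple[dict, dict]]:
        """Match new records to old ones on identifiers that survive a reset."""
        matches = []
        for n in new:
            for o in old:
                if n["serial"] == o["serial"] or n["sim_iccid"] == o["sim_iccid"]:
                    matches.append((o, n))
        return matches

    for old, new in relink(records_before_reset, records_after_reset):
        print(f"Re-linked {new['ad_id']} to prior profile {old['profile']} via serial {old['serial']}")

Even a trivial join like this shows why resetting the advertising ID alone offers limited protection when harder identifiers are collected alongside it.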

The study also emphasized how invasive some of these collection methods can be, such as Xiaomi handsets uploading histories of the app screens users view and the keyboard bundled on Huawei devices logging which apps are used and when. Details like call durations and keyboard activity can support inferences about a person's activities and health, underscoring how extensive and largely unnoticed smartphone data collection has become, as Trinity College Dublin's Prof. Doug Leith has pointed out.