
Unauthorized Use of AI Tools by Employees Exposes Sensitive Corporate Data


 

Artificial intelligence has rapidly reshaped the modern workplace, creating unprecedented opportunities while presenting complex challenges. Although AI was initially conceived as a productivity aid, it has quickly evolved into a transformational force that is changing the way employees think, work, and communicate. 

Despite this rapid rise, many organisations remain ill-prepared to deal with the unchecked use of artificial intelligence. With the advent of generative AI, which can produce text, images, video, and audio, employees have increasingly adopted it for drafting emails, preparing reports, analysing data, and producing creative content. 

Advanced language models, trained on vast datasets, can mimic human language with remarkable fluency, enabling workers to complete in minutes tasks that once took hours. According to some surveys, a majority of American employees rely on AI tools that are freely accessible with little more than an email address, often without formal approval or oversight. 

Platforms such as ChatGPT, which require nothing more than an email address to use, exemplify this fast-growing trend. Nonetheless, the widespread use of unregulated artificial intelligence tools raises serious concerns about privacy, data protection, and corporate governance, concerns employers must address with clear policies, robust safeguards, and a better understanding of the evolving digital landscape before they become real liabilities. 

Cybernews recently documented the surge in unapproved AI use in the workplace. A staggering 75 per cent of employees who use so-called “shadow AI” tools admit to having shared sensitive or confidential information through them, information that could easily compromise their organisations.

More troubling still, the trend is not restricted to junior staff; it is led from the top. Approximately 93 per cent of executives and senior managers admit to using unauthorised AI tools, making them the most frequent users, followed by management at 73 per cent and professionals at 62 per cent. 

In other words, unauthorised AI use is not an isolated lapse but a systemic problem. Employee records, customer information, internal documents, financial and legal records, and proprietary code are among the categories of sensitive information most commonly exposed, and each has the potential to become a major vulnerability. 

The behaviour persists even though nearly nine out of ten workers admit that using AI entails significant risks. Some 64 per cent of respondents recognise that unapproved AI tools could cause data leaks, and more than half say they would stop using those tools if a leak occurred, yet proactive measures remain rare. The result is a growing disconnect between awareness and action in corporate data governance, one that could have profound consequences if not addressed. 

The survey also reveals a striking paradox within corporate hierarchies: although senior management is typically responsible for setting data governance standards, it is also the most frequent infringer. Some 93 per cent of executives and senior managers use unapproved AI tools, outpacing all other job levels by a wide margin.

Managers and team leaders, who are responsible for ensuring compliance and modelling best practices within the organisation, also show significantly increased engagement with unauthorised platforms. This pattern, researchers suggest, reflects a worrying disconnect between policy enforcement and actual behaviour, one that erodes accountability from the top down. Žilvinas Girėnas, head of product at Nexos.ai, warns that the implications of such unchecked behaviour extend far beyond simple misuse. 

Once sensitive data is pasted into an unapproved AI tool, it is impossible to determine where it will end up. "It might be stored, used to train another model, exposed in logs, or even sold to third parties," he explained, adding that confidential contracts, customer details, or internal records can slip quietly into external systems without detection.

A study by IBM underscores the seriousness of the issue, estimating that shadow artificial intelligence can result in average data breach costs of up to $670,000, an expense few companies can afford. Even so, the Cybernews study found that almost one in four employers has no formal policy governing AI use in the workplace. 

Experts believe awareness alone will not be enough to prevent these risks. As Sabeckis noted, “It would be a shame if the only way to stop employees from using unapproved AI tools was through the hard lesson of a data breach.” For many companies, even a single breach can be catastrophic. Girėnas echoed this sentiment, emphasising that shadow AI “thrives in silence” when leadership fails to act decisively. 

He warned that without clear guidelines and sanctioned alternatives, employees will continue to rely on whatever tools seem convenient, and efficiency shortcuts will keep turning into potential security breaches. Experts emphasise that, beyond technical safeguards, organisations must adopt comprehensive internal governance strategies to mitigate the growing risks of unregulated artificial intelligence. 

A well-structured artificial intelligence framework begins with a formal AI policy. That policy should clearly state acceptable uses for AI, prohibit the unauthorised download of free AI tools, and limit the sharing of personal, proprietary, and confidential information through these platforms. 

Businesses are also advised to revise and update existing IT, network security, and procurement policies in order to keep up with the rapidly changing AI environment. Additionally, proactive employee engagement continues to be a crucial part of addressing AI-related risks. Training programs can provide workers with the information and skills needed to understand potential risks, identify sensitive information, and follow best practices for safe, responsible use of AI. 

Also essential is a robust data classification strategy that enables employees to recognise and properly handle confidential or sensitive information before interacting with AI systems. 

Organisations may also benefit from formal authorisation processes that restrict AI tools to qualified personnel, along with logging protocols that record inputs and outputs so that compliance and intellectual property issues can be tracked. Periodic reviews of AI-generated content for bias, accuracy, and appropriateness further safeguard brand reputation. 
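
To make the idea of logging protocols more concrete, the sketch below shows one way an organisation might record prompts and responses from a sanctioned AI tool in an append-only audit file. It is a minimal illustration only; the tool name, log location, and record fields are hypothetical rather than a prescribed implementation.

```python
import json
import time
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")  # hypothetical append-only audit trail

def log_ai_interaction(user: str, tool: str, prompt: str, response: str) -> None:
    """Append one prompt/response pair so compliance and IP reviews can trace it later."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user": user,
        "tool": tool,
        "prompt": prompt,
        "response": response,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(record) + "\n")

# Example: record an exchange with a sanctioned tool (names are illustrative).
log_ai_interaction(
    user="j.doe",
    tool="approved-llm",
    prompt="Summarise the Q3 status report.",
    response="(model output would be stored here)",
)
```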

Continuously monitoring AI tools, including reviewing their evolving terms of service, also helps ensure ongoing compliance with company standards. Finally, a clearly defined incident response plan, with designated points of contact for potential data exposure or misuse, will help organisations respond quickly to any AI-related incident. 

Combined, these measures represent a significant step forward in the adoption of structured, responsible artificial intelligence that balances innovation and accountability. Although internal governance is the cornerstone of responsible AI usage, external partnerships and vendor relationships are equally important when it comes to protecting organisational data. 

According to experts, organisation leaders need to be vigilant not just about internal compliance, but also about third-party contracts and data processing agreements. Data privacy, retention, and usage provisions should be explicitly included in any agreement with an external AI provider. These provisions are meant to protect confidential information from being exploited or stored in ways that are outside of the intended use of the information.

Business leaders, particularly CEOs and senior executives, must examine vendor agreements carefully to ensure they align with international data protection frameworks such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). By incorporating these safeguards into contract terms, organisations can ensure that sensitive data is handled with the same rigour as their internal privacy standards, improving their overall security posture. 

As artificial intelligence continues to redefine the boundaries of workplace efficiency, its responsible integration has become an important factor in organisational trust and resilience. Getting AI to work effectively in business requires not only innovation but also mature governance frameworks to accompany its use. 

Companies that take a proactive approach, enforcing clear internal policies, establishing transparency with vendors, and cultivating a culture of accountability, stand to gain more than security. They also gain credibility with clients, employees, and regulators. 

In addition to ensuring compliance, responsible AI adoption can improve operational efficiency, increase employee confidence, and strengthen brand loyalty in an increasingly data-conscious market. According to experts, artificial intelligence should not be viewed merely as a risk to be controlled, but as a powerful tool to be harnessed under strong ethical and strategic guidelines. 

In today's business climate, every prompt and every dataset can create a vulnerability. The organisations that thrive will be those that pair technological ambition with disciplined governance, transforming AI from a source of uncertainty into a sustainable and secure tool for innovation.

Sam Altman Pushes for Legal Privacy Protections for ChatGPT Conversations

 

Sam Altman, CEO of OpenAI, has reiterated his call for legal privacy protections for ChatGPT conversations, arguing they should be treated with the same confidentiality as discussions with doctors or lawyers. “If you talk to a doctor about your medical history or a lawyer about a legal situation, that information is privileged,” Altman said. “We believe that the same level of protection needs to apply to conversations with AI.”  

Currently, no such legal safeguards exist for chatbot users. In a July interview, Altman warned that courts could compel OpenAI to hand over private chat data, noting that a federal court has already ordered the company to preserve all ChatGPT logs, including deleted ones. This ruling has raised concerns about user trust and OpenAI’s exposure to legal risks. 

Experts are divided on whether Altman’s vision could become reality. Peter Swire, a privacy and cybersecurity law professor at Georgia Tech, explained that while companies seek liability protection, advocates want access to data for accountability. He noted that full privacy privileges for AI may only apply in “limited circumstances,” such as when chatbots explicitly act as doctors or lawyers. 

Mayu Tobin-Miyaji, a law fellow at the Electronic Privacy Information Center, echoed that view, suggesting that protections might be extended to vetted AI systems operating under licensed professionals. However, she warned that today’s general-purpose chatbots are unlikely to receive such privileges soon. Mental health experts, meanwhile, are urging lawmakers to ban AI systems from misrepresenting themselves as therapists and to require clear disclosure when users are interacting with bots.  

Privacy advocates argue that transparency, not secrecy, should guide AI policy. Tobin-Miyaji emphasized the need for public awareness of how user data is collected, stored, and shared. She cautioned that confidentiality alone will not address the broader safety and accountability issues tied to generative AI. 

Concerns about data misuse are already affecting user behavior. After a May court order requiring OpenAI to retain ChatGPT logs indefinitely, many users voiced privacy fears online. Reddit discussions reflected growing unease, with some advising others to “assume everything you post online is public.” While most ChatGPT conversations currently center on writing or practical queries, OpenAI’s research shows an increase in emotionally sensitive exchanges. 

Without formal legal protections, users may hesitate to share private details, undermining the trust Altman views as essential to AI’s future. As the debate over AI confidentiality continues, OpenAI’s push for privacy may determine how freely people engage with chatbots in the years to come.

The Spectrum of Google Product Alternatives


 

As digital technologies are woven deeper into everyday life, questions about how personal data is collected, used, and protected have moved to the forefront of public discussion. 

There is no greater symbol of this tension than Google's vast product ecosystem, which has become nearly inseparable from the online world. Despite its convenience, the business model behind it is fundamentally based on collecting user data and monetising attention through targeted advertising. 

In the past year alone, this model has generated over $230 billion in advertising revenue. It has driven extraordinary profits, but it has also heightened the debate over the right balance between privacy and utility.

In recent years, many users have begun to reconsider their dependence on Google and turn instead to platforms that pledge to prioritise privacy and minimise data exploitation. Over the years, Google has built a business empire on data collection, using its search engine, Android operating system, Play Store, Chrome browser, Gmail, Google Maps, and YouTube, among other products, to gather vast amounts of personal information. 

Even though tools such as virtual private networks (VPNs) can offer some protection by encrypting online activity, they do not address the root of the problem: Google's platforms require accounts to be used, so activity ultimately continues to feed information into its ecosystem. 

For users concerned about their privacy, choosing alternatives built by companies committed to minimising surveillance and respecting personal information is a more sustainable approach. In recent years, an ever-growing market of privacy-focused competitors has emerged, offering comparable functionality without compromising user trust. 

Take Google Chrome, a browser that is extremely popular worldwide but often criticised for its aggressive data collection practices. A 2019 investigation published by The Washington Post characterised Chrome as "spy software," finding that it allowed thousands of tracking cookies to be installed on a device each week. This has only fueled demand for alternatives, and privacy-centric browsers now position themselves as viable options that combine performance with stronger privacy protection.

Over the past decade, Google has become an integral part of the digital world, providing search, email, video streaming, cloud storage, mobile operating systems, and web browsing tools that serve as default gateways to the internet for many users. 

This strategy has seen the company dominate multiple sectors at once and has been described as building a protective moat of services around its core business of search, data, and advertising. That dominance, however, has come at a cost. 

The company has built a system that monetises virtually every aspect of online behaviour, collecting and cross-referencing massive amounts of personal usage data across its platforms, generating billions of dollars in advertising revenue, and fuelling growing concern about the erosion of user privacy in the process. 

Growing awareness of the risks behind Google's convenient ecosystem is encouraging individuals and organisations to seek alternatives that better respect digital rights. Purism, for instance, is a privacy-focused company whose products and services are designed to help users take back control of their own information. Experts warn, however, that protecting data requires a more proactive approach overall. 

Maintaining secure offline backups is a crucial step, especially in the event of a cyberattack. Unlike online backups, which can be compromised by ransomware, offline backups allow organisations to restore systems from clean data with minimal disruption, providing a reliable safeguard against malicious software. 

Based on these strategies, users are increasingly shifting away from default reliance on Google and other Big Tech companies in favour of more secure, transparent, and user-centric solutions. Privacy-conscious users now prefer platforms that prioritise security and transparency over Google's core services. 

As an alternative to Google Search, DuckDuckGo provides privacy-focused results without tracking or profiling, while ProtonMail offers a secure alternative to Gmail with end-to-end encrypted email. For encrypted event management, Proton Calendar replaces Google Calendar, and browsers such as Brave and LibreWolf minimise tracking and telemetry compared with Chrome. 

For app distribution, F-Droid offers free and open-source apps that do not rely on tracking, while note-taking and file storage can be handled by Simple Notes and Proton Drive, which protect user data. Functional alternatives such as Todoist and HERE WeGo deliver comparable features without sacrificing privacy. 

Even video consumption is shifting, with users watching YouTube anonymously or subscribing to streaming platforms such as Netflix and Prime Video. Overall, these changes highlight a trend toward digital tools that emphasise user control, data protection, and trust over convenience. As digital privacy and data security attract more attention, people and organisations are re-evaluating their reliance on Google's extensive productivity and collaboration tools. 

Despite the immense convenience these platforms offer, their pervasive data collection practices have raised serious questions about privacy and user autonomy. Consequently, alternatives have been developed that maintain comparable functionality, including messaging, file sharing, project management, and task management, while emphasising enhanced privacy, security, and operational control. 

Continuing with this theme, it is worth briefly examining some of the leading platforms that offer robust, privacy-conscious alternatives to Google's dominant ecosystem.

Microsoft Teams

Microsoft Teams is a well-established alternative to Google's collaboration suite. 

It is a cloud-based platform that integrates seamlessly with Microsoft 365 applications such as Microsoft Word, Excel, PowerPoint, and SharePoint, among others. As a central hub for enterprise collaboration, it offers instant messaging, video conferencing, file sharing, and workflow management, which makes it an ideal alternative to Google's suite of tools. 

Advanced features such as open APIs, assistant bots, conversation search, and multi-factor authentication further enhance its utility. Teams does have downsides, however, including a steep learning curve and, unlike some competitors, the absence of a pre-call audio test, which can cause interruptions during meetings. 

Zoho Workplace

Zoho Workplace is positioned as a cost-effective and comprehensive digital workspace, bundling tools such as Zoho Mail, Cliq, WorkDrive, Writer, Sheet, and Meeting into a single dashboard. 

Its AI assistant, Zia, helps users quickly find files and information, while the mobile app keeps teams connected on the go. The relatively low price point makes it attractive for smaller businesses, although customer support can be slow and Zoho Meeting offers limited customisation that may not satisfy users who need more advanced features. 

Bitrix24 

Bitrix24 combines project management, CRM, telephony, analytics, and video calls in a unified online workspace that simplifies collaboration. Designed to integrate multiple workflows seamlessly, the platform is accessible from desktop, laptop, or mobile devices. 

While businesses use it to simplify accountability and task assignment, users have reported occasional glitches and slow customer support, which can hinder operations and push some organisations to look elsewhere. 

 Slack 

With flexible communication tools such as public channels, private groups, and direct messaging, along with easy integrations and efficient file sharing, Slack has become one of the most popular collaboration tools across industries. 

Slack offers the benefits of real-time communication, with instant notifications and thematic channels that keep discussions focused. However, its limited storage capacity and complex interface can be challenging for new users, especially those managing large amounts of data. 

ClickUp 

ClickUp simplifies project and task management with drag-and-drop capabilities, collaborative document creation, and visual workflows that can be customised to suit each team.

Integrations with tools like Zapier or Make enhance automation, and the platform's flexibility lets businesses tailor processes precisely to their requirements. Even so, ClickUp's extensive feature set involves a steep learning curve, and occasional performance lags can slow productivity, though this has done little to dent its appeal. 

Zoom 

With Zoom, a global leader in video conferencing, remote communication becomes easier than ever before. It enables large-scale meetings, webinars, and breakout sessions, while providing features such as call recording, screen sharing, and attendance tracking, making it ideal for remote work. 

Its reliability and ease of use make it a popular choice for both businesses and educational institutions, although the free version limits meetings to around 40 minutes and its extensive capabilities can be confusing for first-time users. The growing popularity of privacy-focused digital tools is part of a wider re-evaluation of how data is managed in the modern digital ecosystem, both personally and professionally. 

By moving away from default reliance on Google's services, people not only reduce their exposure to extensive data collection but also encourage the adoption of platforms that emphasise security, transparency, and user autonomy. Alternatives such as encrypted email, secure calendars, and privacy-oriented browsers can greatly reduce the risks associated with online tracking, targeted advertising, and potential data breaches. 

Among the collaboration and productivity solutions that organisations can incorporate are Microsoft Teams, Zoho Workplace, ClickUp, and Slack. These products can enhance workflow efficiency and allow them to maintain a greater level of control over sensitive information while reducing the risk of security breaches.

Complementary measures, such as offline backups, encrypted cloud storage, and careful audits of app permissions, strengthen data resilience and continuity in the face of cyber threats. Beyond greater security, these alternative solutions are typically more flexible, interoperable, and user-centred, helping teams streamline communication and project management. 

As digital dependence continues to grow, choosing privacy-first solutions is more than a precaution; it is a strategic choice that safeguards both individual and organisational digital assets and cultivates a more secure, responsible, and informed online presence.

Hackers Claim Data on 150,000 AIL Users Stolen


American Income Life (AIL), one of the world's largest supplemental insurance providers, is under close scrutiny following reports of a cyberattack that may have compromised the personal and insurance records of hundreds of thousands of its customers. A post on a well-known underground data leak forum claims to contain sensitive data stolen directly from the company's website. 

The forum is a platform frequently used by cybercriminals to trade and sell stolen information. According to the person behind the post, extensive customer information is involved in the breach, adding to concerns over the increasing frequency of large-scale attacks on the financial and insurance industries. 

AIL, a Texas-headquartered company generating over $5.7 billion in annual revenue, is a subsidiary of Globe Life Inc., a Fortune 1000 financial services holding company. The incident has the potential to inflict significant losses on one of the country's most prominent supplemental insurance companies. 

The breach, which first came to light through a post on a well-trafficked hacking forum, allegedly compromised approximately 150,000 personal records. The threat actor claimed the exposed dataset included unique record identifiers; personal information such as names, phone numbers, addresses, email addresses, dates of birth, and genders; and confidential insurance policy details, including policy type and status. 

According to Cybernews security researchers who examined some of the leaked data, the data seemed largely authentic, but they noted it was unclear whether the records were current or whether they represented old, outdated information. 

Cybernews researchers also concluded that delays in breach notification can substantially damage a company's financial and reputational position. Alexa Vold, a regulatory lawyer and partner at BakerHostetler, has noted that organisations often spend months or even years manually reviewing enormous volumes of compromised documents, even though more efficient analysis methods can identify affected individuals far more quickly. 

Aside from driving up costs, she cautioned, slow disclosures increase the likelihood of regulatory scrutiny and consumer backlash. Alera Group, for example, detected suspicious activity in its systems in August 2024 and immediately started an internal investigation. 

The company confirmed on April 28, 2025, that unauthorised access to its network between July 19 and August 4, 2024, may have resulted in the removal of sensitive personal data. The information compromised differs from person to person. 

It could include highly confidential details such as names, addresses, dates of birth, Social Security numbers, driver's licenses, marriage and birth certificates, passport information, financial details, credit card information, and other government-issued identification. 

Surprisingly, the individual behind the breach appears willing to offer the records for free, a move that greatly increases the risk to victims. Such information is usually sold on underground markets to a small number of cybercriminals; making it freely available opens the door to widespread abuse and secondary attacks. 

According to experts, personal identifiers such as names, dates of birth, addresses, and phone numbers are highly valuable for identity theft, enabling criminals to open fraudulent accounts or secure loans in victims' names. The exposure of policy-related details, including policy status and plan types, raises further concern, since such information could be used in convincing phishing campaigns designed to trick policyholders into handing over additional credentials or authorising fraudulent payments.

In more severe scenarios, the leaked records could be used for medical or insurance fraud, such as submitting false claims or applying for healthcare benefits under stolen identities. According to regulatory and healthcare experts, HIPAA's breach notification requirements leave little room for delay. 

The rule permits reporting beyond the 60-day deadline only in rare cases, such as when a law enforcement or government agency requests a delay to avoid interfering with an ongoing investigation or jeopardising national security. Regulators do not consider difficulty in determining the full scope of compromised electronic health information a valid reason for delay; they expect entities to disclose breaches based on initial findings and provide updates as inquiries progress. 

Extreme circumstances, such as ongoing containment efforts or multijurisdictional coordination, may be operationally understandable, but they are not legally recognised grounds for postponing notification. The Office for Civil Rights (OCR) at the U.S. Department of Health and Human Services applies a “without unreasonable delay” standard and may impose penalties where it perceives excessive procrastination. 

Experts advise that if a breach is expected to affect 500 or more individuals, a preliminary notice should be submitted and supplemental updates provided as details emerge, a practice observed in major incidents such as the Change Healthcare breach. Delayed disclosures carry not only regulatory consequences but also litigation risk, as in Alera Group's case, where several proposed class actions accuse the company of failing to notify affected individuals promptly. 

Attorneys advise that firms must strike a balance between timeliness and accuracy: prolonged document-by-document reviews waste resources and exacerbate regulatory and consumer backlash, whereas efficient analysis methods can accomplish the same task more quickly and with less risk. American Income Life's ongoing situation shows how quickly an underground forum post can escalate into a problem involving companies, regulators, and consumers if an incident is not handled promptly. 

For the insurance and financial sectors, the episode is a reminder that customer trust depends not only on how well systems are protected, but also on how transparently and promptly an organisation addresses breaches when they occur. 

According to industry observers, proactive monitoring, clear incident response protocols, and regular third-party security audits are no longer optional; they are essential to mitigating both the short-term and long-term damage of a data breach. Likewise, breach notification must strike the right balance between speed and accuracy so that individuals can safeguard their financial accounts, monitor their credit activity, and watch for fraudulent claims as early as possible.

Cyberattacks are unlikely to slow in frequency or sophistication in the foreseeable future, but companies that are well prepared and accountable can significantly limit the fallout when incidents occur. The AIL case makes clear that the true test of an institution is not whether it can prevent every breach, but how it responds when prevention fails. 

Why CEOs Must Go Beyond Backups and Build Strong Data Recovery Plans

 

We are living in an era where fast and effective solutions for data challenges are crucial. Relying solely on backups is no longer enough to guarantee business continuity in the face of cyberattacks, hardware failures, human error, or natural disasters. Every CEO must take responsibility for ensuring that their organization has a comprehensive data recovery plan that extends far beyond simple backups. 

Backups are not foolproof. They can fail, be misconfigured, or become corrupted, leaving organizations exposed at critical moments. Modern attackers are also increasingly targeting backup systems directly, which can make it impossible to restore data when needed. Even when functioning correctly, traditional backups are usually scheduled once a day and do not run in real time, putting businesses at risk of losing hours of valuable work. Recovery time is equally critical, as lengthy downtime caused by delays in data restoration can severely damage both reputation and revenue.  

Businesses often overestimate the security that traditional backups provide, only to discover their shortcomings when disaster strikes. A strong recovery plan should include proactive measures such as regular testing, simulated ransomware scenarios, and disaster recovery drills to ensure preparedness. Without this, the organization risks significant disruption and financial losses. 

The consequences of poor planning extend beyond operational setbacks. For companies handling sensitive personal or financial data, legal and compliance requirements demand advanced protection and recovery systems. Failure to comply can lead to legal penalties and fines in addition to reputational harm. To counter modern threats, organizations should adopt solutions like immutable backups, air-gapped storage, and secure cloud-based systems. While migrating to cloud storage may seem time-consuming, it offers resilience against physical damage and ensures that data cannot be lost through hardware failures alone. 

An effective recovery plan must be multi-layered. Following the 3-2-1 backup rule—keeping three copies of data, on two different media, with one offline—is widely recognized as best practice. Cloud-based disaster recovery platforms such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) should also be considered to provide automated failover and minimize downtime. Beyond technology, employee awareness is essential. IT and support staff should be well-trained and recovery protocols tested quarterly to confirm readiness. 
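
As a rough illustration of the 3-2-1 rule in practice, the sketch below copies a primary file to two additional targets, one standing in for the offline or off-site copy, and verifies each copy against a checksum. The paths and file names are placeholders, and a real deployment would typically rely on dedicated backup tooling rather than a hand-rolled script.

```python
import hashlib
import shutil
from pathlib import Path

# Hypothetical locations illustrating the 3-2-1 rule: a primary copy plus two
# backups on different media, one of which is kept offline or off-site.
PRIMARY = Path("/data/critical.db")
LOCAL_BACKUP = Path("/mnt/nas/backups/critical.db")      # second medium
OFFLINE_BACKUP = Path("/mnt/usb/backups/critical.db")    # offline / off-site copy

def sha256(path: Path) -> str:
    """Checksum used to confirm each backup matches the primary copy."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def replicate() -> None:
    source_hash = sha256(PRIMARY)
    for target in (LOCAL_BACKUP, OFFLINE_BACKUP):
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(PRIMARY, target)
        # An unverified backup is a hope, not a plan: check it immediately.
        if sha256(target) != source_hash:
            raise RuntimeError(f"Backup verification failed for {target}")

if __name__ == "__main__":
    replicate()
```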

Communication plays a vital role in data recovery planning. How an organization communicates a disruption to clients can directly influence how much trust is retained. While some customers may inevitably be lost, a clear and transparent communication strategy can help preserve the majority. CEOs should also evaluate cyber insurance options to mitigate financial risks tied to recovery costs. 

Ultimately, backups are just snapshots of data, while a recovery plan acts as a comprehensive playbook for survival when disaster strikes. CEOs who neglect this responsibility risk severe financial losses, regulatory penalties, and even business closure. A well-designed, thoroughly tested recovery plan not only minimizes downtime but also protects revenue, client trust, and the long-term future of the organization.

CLOUD Act Extends US Jurisdiction Over Global Cloud Data Across Microsoft, Google, and Amazon

 

That Frankfurt data center storing your business files or the Singapore server holding your personal photos may not be as secure from U.S. oversight as you think. If the provider is Microsoft, Amazon, Google, or another U.S.-based tech giant, physical geography does little to shield information once American authorities seek access. The Clarifying Lawful Overseas Use of Data (CLOUD) Act, enacted in March 2018, gives U.S. law enforcement broad authority to demand data from American companies no matter where that information is located. Many organizations and individuals who once assumed that hosting data in Europe or Asia provided protection from U.S. jurisdiction now face an overlooked vulnerability.  

The law applies to every major cloud provider headquartered in the United States, including Microsoft, Amazon, Google, Apple, Meta, and Salesforce. This means data hosted in Microsoft’s European facilities, Google’s Asian networks, or Amazon’s servers in regions worldwide can be accessed through proper legal orders. An organization running Office 365 in London or an individual storing iCloud photos in Berlin could have their data obtained by U.S. investigators with little visibility into the process. Even companies promoting themselves as “foreign hosted” may not be immune if they have American subsidiaries or offices. Jurisdiction extends to entities connected to the United States, meaning that promises of sovereignty can be undercut by corporate structure. 

The framework obligates companies to comply quickly with data requests, leaving limited room for delay. Providers may challenge orders if they conflict with local privacy protections, but the proceedings typically occur without the knowledge of the customer whose data is involved. As a result, users may never know their information has been disclosed, since notification is not required. This dynamic has raised significant concerns about transparency, privacy, and the balance of international legal obligations. 

There are alternatives for those seeking stronger guarantees of independence. Providers such as Hetzner in Germany, OVHcloud in France, and Proton in Switzerland operate strictly under European laws and maintain distance from U.S. corporate ties. These companies cannot be compelled to share data with American authorities unless they enter into agreements that extend jurisdiction. However, relying on such providers can involve trade-offs, such as limited integration with mainstream platforms or reduced global reach. Some U.S. firms have responded by offering “sovereign cloud regions” managed locally, but questions remain about whether ultimate control still rests with the parent corporation and therefore remains vulnerable to U.S. legal demands. 

The implications are clear: the choice of cloud provider is not only a technical or financial decision but a geopolitical one. In a world where information represents both power and liability, each upload is effectively a decision about which country’s laws govern your digital life. For businesses and individuals alike, data location may matter less than corporate origin, and the CLOUD Act ensures that U.S. jurisdiction extends far beyond its borders.

Federal Judge Allows Amazon Alexa Users’ Privacy Lawsuit to Proceed Nationwide

 

A federal judge in Seattle has ruled that Amazon must face a nationwide lawsuit involving tens of millions of Alexa users. The case alleges that the company improperly recorded and stored private conversations without user consent. U.S. District Judge Robert Lasnik determined that Alexa owners met the legal requirements to pursue collective legal action for damages and an injunction to halt the alleged practices. 

The lawsuit claims Amazon violated Washington state law by failing to disclose that it retained and potentially used voice recordings for commercial purposes. Plaintiffs argue that Alexa was intentionally designed to secretly capture billions of private conversations, not just the voice commands directed at the device. According to their claim, these recordings may have been stored and repurposed without permission, raising serious privacy concerns. Amazon strongly disputes the allegations. 

The company insists that Alexa includes multiple safeguards to prevent accidental activation and denies evidence exists showing it recorded conversations belonging to any of the plaintiffs. Despite Amazon’s defense, Judge Lasnik stated that millions of users may have been impacted in a similar manner, allowing the case to move forward. Plaintiffs are also seeking an order requiring Amazon to delete any recordings and related data it may still hold. The broader issue at stake in this case centers on privacy rights within the home.

If proven, the claims suggest that sensitive conversations could have been intercepted and stored without explicit approval from users. Privacy experts caution that voice data, if mishandled or exposed, can lead to identity risks, unauthorized information sharing, and long-term security threats. Critics further argue that the lawsuit highlights the growing power imbalance between consumers and large technology companies. Amazon has previously faced scrutiny over its corporate practices, including its environmental footprint. 

A 2023 report revealed that the company’s expanding data centers in Virginia would consume more energy than the entire city of Seattle, fueling additional criticism about the company’s long-term sustainability and accountability. The case against Amazon underscores the increasing tension between technological convenience and personal privacy. 

As voice-activated assistants become commonplace in homes, courts will likely play a decisive role in determining the boundaries of data collection and consumer protection. The outcome of this lawsuit could set a precedent for how tech companies handle user data and whether customers can trust that private conversations remain private.

Think Twice Before Uploading Personal Photos to AI Chatbots

 

Artificial intelligence chatbots are increasingly being used for fun, from generating quirky captions to transforming personal photos into cartoon characters. While the appeal of uploading images to see creative outputs is undeniable, the risks tied to sharing private photos with AI platforms are often overlooked. A recent incident at a family gathering highlighted just how easy it is for these photos to be exposed without much thought. What might seem like harmless fun could actually open the door to serious privacy concerns. 

The central issue is unawareness. Most users do not stop to consider where their photos are going once uploaded to a chatbot, whether those images could be stored for AI training, or if they contain personal details such as house numbers, street signs, or other identifying information. Even more concerning is the lack of consent—especially when it comes to children. Uploading photos of kids to chatbots, without their ability to approve or refuse, creates ethical and security challenges that should not be ignored.  

Photos contain far more than just the visible image. Hidden metadata, including timestamps, location details, and device information, can be embedded within every upload. This information, if mishandled, could become a goldmine for malicious actors. Worse still, once a photo is uploaded, users lose control over its journey. It may be stored on servers, used for moderation, or even retained for training AI models without the user’s explicit knowledge. Just because an image disappears from the chat interface does not mean it is gone from the system.  

One of the most troubling risks is the possibility of misuse, including deepfakes. A simple selfie, once in the wrong hands, can be manipulated to create highly convincing fake content, which could lead to reputational damage or exploitation. 

There are steps individuals can take to minimize exposure. Reviewing a platform’s privacy policy is a strong starting point, as it provides clarity on how data is collected, stored, and used. Some platforms, including OpenAI, allow users to disable chat history to limit training data collection. Additionally, photos can be stripped of metadata using tools like ExifTool or by taking a screenshot before uploading. 
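
For readers comfortable with a little scripting, the sketch below shows one way to strip metadata before sharing a photo, using Python's Pillow library to re-save only the pixel data; the file names are placeholders. The ExifTool command line (for example, exiftool -all= photo.jpg) achieves a similar result.

```python
from PIL import Image  # Pillow

def strip_metadata(src: str, dst: str) -> None:
    """Re-save an image with pixel data only, leaving EXIF and other metadata behind."""
    with Image.open(src) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))  # copy pixels, not the metadata block
        clean.save(dst)

# Placeholder file names for illustration.
strip_metadata("holiday_photo.jpg", "holiday_photo_clean.jpg")
```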

Consent should also remain central to responsible AI use. Children cannot give informed permission, making it inappropriate to share their images. Beyond privacy, AI-altered photos can distort self-image, particularly among younger users, leading to long-term effects on confidence and mental health. 

Safer alternatives include experimenting with stock images or synthetic faces generated by tools like This Person Does Not Exist. These provide the creative fun of AI tools without compromising personal data. 

Ultimately, while AI chatbots can be entertaining and useful, users must remain cautious. They are not friends, and their cheerful tone should not distract from the risks. Practicing restraint, verifying privacy settings, and thinking critically before uploading personal photos is essential for protecting both privacy and security in the digital age.

Facial Recognition's False Promise: More Sham Than Security

 

Despite the rapid integration of facial recognition technology (FRT) into daily life, its effectiveness is often overstated, creating a misleading picture of its true capabilities. While developers frequently tout accuracy rates as high as 99.95%, these figures are typically achieved in controlled laboratory settings and fail to reflect the system's performance in the real world.

The discrepancy between lab testing and practical application has led to significant failures with severe consequences. A prominent example is the wrongful arrest of Robert Williams, a Black man from Detroit who was misidentified by police facial recognition software based on a low-quality image.

This is not an isolated incident; there have been at least seven confirmed cases of misidentification from FRT, six of which involved Black individuals. Similarly, an independent review of the London Metropolitan Police's use of live facial recognition found that out of 42 matches, only eight were definitively accurate.
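
To put that figure in perspective, the short calculation below works out what an 8-in-42 hit rate implies for the precision of those alerts, using only the numbers reported in the review.

```python
# Figures reported in the independent review of the Met Police's live facial recognition use.
matches_flagged = 42     # matches generated by the system
confirmed_correct = 8    # matches judged definitively accurate

precision = confirmed_correct / matches_flagged
print(f"Precision of flagged matches: {precision:.1%}")                 # about 19.0%
print(f"Flagged matches not confirmed accurate: {1 - precision:.1%}")   # about 81.0%
```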

These real-world failures stem from flawed evaluation methods. The benchmarks used to legitimize the technology, such as the US National Institute of Standards and Technology's (NIST) Face Recognition Technology Evaluation (FRTE), do not adequately account for real-world conditions like blurred images, poor lighting, or varied camera angles. Furthermore, the datasets used for training these systems are often not representative of diverse demographics, which leads to significant biases.

The inaccuracies of FRT are not evenly distributed across the population. Research consistently shows that the technology has higher error rates for people of color, women, and individuals with disabilities. For example, one of Microsoft’s early models had a 20.8% error rate for dark-skinned women but a 0% error rate for light-skinned men. This systemic bias means the technology is most likely to fail the very communities that are already vulnerable to over-policing and surveillance.

Despite these well-documented issues, FRT is being widely deployed in sensitive areas such as law enforcement, airports, and retail stores. This raises profound ethical concerns about privacy, civil rights, and due process, prompting companies like IBM, Amazon, and Microsoft to restrict or halt the sale of their facial recognition systems to police departments. The continued rollout of this flawed technology suggests that its use is more of a "sham" than a reliable security solution, creating a false sense of safety while perpetuating harmful biases.

Over a Million Healthcare Devices Hit by Cyberattack

 


As the swell of cyberattacks reshapes the global threat landscape, Indian healthcare has become one of the most vulnerable targets. Healthcare institutions in the country currently face 8,614 cyberattacks per week, more than four times the global average and nearly twice the rate of any other Indian industry. 

The relentless targeting reflects both the immense value of patient data and the difficulty of safeguarding sprawling healthcare networks. With the emergence of sophisticated hacktivist operations, ransomware, and large-scale data theft, these breaches are no longer simple disruptions. 

The cybercriminal business is rapidly moving from traditional encryption-based extortion to aggressive "double extortion," stealing data before encrypting it, or in some cases abandoning encryption altogether to concentrate exclusively on exfiltration. This evolution can be seen in groups like Hunters International, recently rebranded as World Leaks, which are responding to declining ransom payments by exploiting the thriving underground market for stolen data. 

A breach of a healthcare delivery organisation's systems risks exposing vast amounts of personal and medical information, which underscores why the sector remains one of the most attractive targets for attackers. In a separate revelation that emphasises the sector's vulnerabilities, cybersecurity firm Modat reported in August 2025 that it had uncovered 1.2 million misconfigured, internet-connected medical systems exposed online. 

The exposed systems included imaging scanners, X-ray machines, DICOM viewers, laboratory testing platforms, and hospital management systems, all of which could be reached by an attacker. Experts warned that the exposure posed a direct threat to patient safety as well as to privacy. 

Modat's investigation uncovered sensitive data including highly detailed medical imaging, such as brain scans, lung MRIs, and dental X-rays, along with clinical documentation and complete medical histories. It also found personal information, including names, addresses, and contact details, as well as blood test results, biometrics, and treatment records, all of which can be used to identify individuals.

The sheer volume of exposed information, in an era of intensifying cyber threats, highlights the profound consequences of poorly configured healthcare infrastructure, and a growing number of breaches illustrates the magnitude of the problem. The BlackCat/ALPHV ransomware group claimed responsibility for a devastating attack on Change Healthcare, after which Optum, Change Healthcare's parent company and part of UnitedHealth Group, reportedly paid a $22 million ransom in exchange for a promise that the stolen data would be deleted.

In a further twist, BlackCat abruptly shut down without passing the payment to its affiliate; the affiliate, still holding the stolen data, turned to the RansomHub ransomware group, which demanded a second ransom in an attempt to secure payment. No second payment was made, and the breach grew in magnitude with each disclosure: the incident was initially logged with the U.S. Department of Health and Human Services (HHS) as affecting 500 individuals, but by July 2025 the estimate had climbed to 100 million, then 190 million, and finally 192.7 million individuals.

These staggering figures highlight why healthcare remains a prime target for ransomware operators: when critical hospital systems go down, the outage threatens not only revenue and reputations but patients' lives. Other weaknesses compound the risk. Medical IoT devices are already vulnerable to compromise, putting life-sustaining systems such as heart monitors and infusion pumps in jeopardy. 

Telehealth platforms further extend the attack surface by routing sensitive consultations over the internet. In India, these global pressures are compounded by local challenges, including outdated legacy systems, a shortage of cybersecurity expertise, and a still-developing regulatory framework. 

With no unified national healthcare cybersecurity law, providers rely on a patchwork of frameworks to protect themselves, including the Information Technology Act, the SPDI Rules, and the Digital Personal Data Protection Act, which has yet to be enforced.

Experts argue that this lack of cohesion leaves organisations ill-equipped for future threats, particularly smaller providers with limited budgets and under-resourced security teams. To address these gaps, the Data Security Council of India has partnered with the Healthcare Information and Management Systems Society (HIMSS) to conduct a national cybersecurity assessment. The breadth of information potentially exposed in the Serviceaide breach makes that incident particularly troubling. 

Depending on the individual, the exposed data could include names, Social Security numbers, birth dates, medical records, insurance details, prescription and treatment information, clinical notes, provider identifiers, email usernames, and passwords. In response, Serviceaide said it had strengthened its security controls and was offering affected individuals 12 months of complimentary credit and identity monitoring. 

Catholic Health, for its part, disclosed that limited patient data was exposed by one of its vendors. According to the organisation's website, formal notification letters are being sent to potentially affected patients, and a link to the Serviceaide notice has been posted there. Neither organisation has responded to requests for further information. 

Regulatory authorities and courts have shown little leniency in similar cases. In 2019, Puerto Rico-based Inmediata Health Group was fined $250,000 by HHS' Office for Civil Rights (OCR) and later paid more than $2.5 million to settle with state attorneys general and class-action plaintiffs after a misconfiguration exposed 1.6 million patient records. As recently as last week, OCR penalised Vision Upright MRI, a small California imaging provider, for leaving medical images, including X-rays, CT scans, and MRIs, accessible online through an unsecured PACS server. 

That case ended with a $5,000 fine and a corrective action plan, the agency's 14th HIPAA enforcement action of 2025. Taken together, these precedents show that failing to secure patient information carries significant financial, regulatory, and reputational consequences for healthcare organisations, and those consequences continue to grow. 

Under the Health Insurance Portability and Accountability Act (HIPAA), fines for prolonged violations or systemic non-compliance can rise into the millions of dollars. For healthcare organisations, adhering to the regulations is both a financial and an ethical imperative. 

Data from the U.S. Department of Health and Human Services' Office for Civil Rights (OCR) shows that enforcement activity has been steadily increasing over the past decade, with 2022 marking a record number of penalties imposed. OCR's Right of Access Initiative, launched in 2019, targets providers who fail to give patients timely access to their medical records. 

The initiative has contributed substantially to the rise in penalties: 46 were issued for such violations between September 2019 and December 2023. Enforcement remained high in 2024, when OCR closed 22 investigations with fines, although only 16 of those were formally announced during the year. The momentum has continued into 2025, bolstered by an increased enforcement focus on the HIPAA Security Rule's risk analysis provision, traditionally the most common cause of noncompliance. 

As of May 31, 2025, OCR had already closed almost ten investigations with financial penalties for risk analysis failures, reflecting the agency's sharpened effort to reduce its backlog of data breach cases while holding covered entities accountable. The rising wave of cyberattacks against healthcare organisations is a stark reminder that the sector now stands at the crossroads of technology, patient care, and national security. 

Hospitals and medical networks are increasingly dependent on digital technologies, which means every exposed database, misconfigured system, or compromised vendor creates an opening for adversaries with ever greater resources, organisation, and determination. Without decisive investment in cybersecurity infrastructure, workforce training, and stronger regulatory frameworks, experts warn, breaches will not only persist but intensify. 

The growing digitisation of healthcare in India raises the stakes even higher: whether digital innovation becomes an asset or a liability will depend on the sector's ability to preserve patient trust, ensure continuity of care, and safeguard sensitive health data. In the bigger picture, cybersecurity is no longer a technical afterthought but a pillar of healthcare resilience, where the cost of failure extends far beyond fines and penalties to patient safety and, ultimately, patients' lives.

Indian Government Flags Security Concerns with WhatsApp Web on Work PCs

 

The Indian government has issued a significant cybersecurity advisory urging citizens to avoid using WhatsApp Web on office computers and laptops, highlighting serious privacy and security risks that could expose personal information to employers and cybercriminals. 

The Ministry of Electronics and Information Technology (MeitY) released this public advisory through its Information Security Education and Awareness (ISEA) team, warning that while accessing WhatsApp Web on office devices may seem convenient, it creates substantial cybersecurity vulnerabilities. The government describes the practice as a "major cybersecurity mistake" that could lead to unauthorized access to personal conversations, files, and login credentials. 

According to the advisory, IT administrators and company systems can gain access to private WhatsApp conversations through multiple pathways, including screen-monitoring software, malware infections, and browser hijacking tools. The government warns that many organizations now view WhatsApp Web as a potential security risk that could serve as a gateway for malware and phishing attacks, potentially compromising entire corporate networks. 

Specific privacy risks identified 

The advisory outlines several "horrors" of using WhatsApp on work-issued devices. Data breaches represent a primary concern, as compromised office laptops could expose confidential WhatsApp conversations containing sensitive personal information. Additionally, using WhatsApp Web on unsecured office Wi-Fi networks creates opportunities for malicious actors to intercept private data.

Perhaps most concerning, the government notes that even using office Wi-Fi to access WhatsApp on personal phones could grant companies some level of access to employees' private devices, further expanding the potential privacy violations. The advisory emphasizes that workplace surveillance capabilities mean employers may monitor browser activity, creating situations where sensitive personal information could be accessed, intercepted, or stored without employees' knowledge. 

Network security implications

Organizations increasingly implement comprehensive monitoring systems on corporate devices, making WhatsApp Web usage particularly risky. The government highlights that corporate networks face elevated vulnerability to phishing attacks and malware distribution through messaging applications like WhatsApp Web. When employees click malicious links or download suspicious attachments through WhatsApp Web on office systems, they could inadvertently provide hackers with backdoor access to organizational IT infrastructure. 

Recommended safety measures

For employees who must use WhatsApp Web on office devices, the government provides specific precautionary guidelines. Users should immediately log out of WhatsApp Web when stepping away from their desks or finishing work sessions. The advisory strongly recommends exercising caution when clicking links or opening attachments from unknown contacts, as these could contain malware designed to exploit corporate networks. 
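
As a practical illustration (not something the advisory itself prescribes), some security teams hand employees a lightweight triage helper for links received over chat. The Python sketch below uses hypothetical example URLs and flags a few common warning signs, such as IP-literal hosts and punycode domains; it is a heuristic aid, not a substitute for corporate URL filtering.

from urllib.parse import urlparse

def link_red_flags(url: str) -> list[str]:
    """Return simple heuristic warnings for a URL received over chat."""
    flags = []
    parsed = urlparse(url if "://" in url else "http://" + url)
    host = (parsed.hostname or "").lower()

    if host.replace(".", "").isdigit():
        flags.append("host is a raw IP address")
    if "xn--" in host:
        flags.append("punycode (possible lookalike) domain")
    if host.count(".") >= 4:
        flags.append("unusually deep subdomain chain")
    if parsed.scheme != "https":
        flags.append("not served over HTTPS")
    return flags

# Hypothetical examples for illustration only.
for link in ("http://192.168.4.7/reset", "https://xn--pple-43d.com/login"):
    print(link, "->", link_red_flags(link) or "no obvious red flags")

Heuristics like these will miss well-crafted lookalike domains, so the advisory's underlying guidance still applies: when in doubt, do not click.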

Additionally, employees should familiarize themselves with their company's IT policies regarding personal application usage and data privacy on work devices. The government emphasizes that understanding organizational policies helps employees make informed decisions about personal technology use in professional environments. 

This advisory represents part of broader cybersecurity awareness efforts as workplace digital threats continue evolving, with the government positioning employee education as crucial for maintaining both personal privacy and corporate network security.

New York Lawmaker Proposes Bill to Regulate Gait Recognition Surveillance

 

New York City’s streets are often packed with people rushing to work, running errands, or simply enjoying the day. For many residents, walking is faster than taking the subway or catching a taxi. However, a growing concern is emerging — the way someone walks could now be tracked, analyzed, and used to identify them. 

City Councilmember Jennifer Gutierrez is seeking to address this through new legislation aimed at regulating gait recognition technology. This surveillance method can identify people based on the way they move, including their walking style, stride length, and posture. In some cases, it even factors in other unique patterns, such as vocal cadence. 

Gutierrez’s proposal would classify a person’s gait as “personal identifying information,” giving it the same protection as highly sensitive data, including tax or medical records. Her bill also requires that individuals be notified if city agencies are collecting this type of information. She emphasized that most residents are unaware their movements could be monitored, let alone stored for future analysis. 

According to experts, gait recognition technology can identify a person from as far as 165 feet away, even if they are walking away from the camera. This capability makes it an appealing tool for law enforcement but raises significant privacy questions. While Gutierrez acknowledges its potential in solving crimes, she stresses that everyday New Yorkers should not have their personal characteristics tracked without consent. 

Public opinion is divided. Privacy advocates argue the technology poses a serious risk of misuse, such as mass tracking without warrants or transparency. Supporters of its use believe it can be vital for security and public safety when handled with proper oversight. 

Globally, some governments have already taken steps to regulate similar surveillance tools. The European Union enforces strict rules on biometric data collection, and certain U.S. states have introduced laws to address privacy risks. However, experts warn that advancements in technology often move faster than legislation, making it difficult to implement timely safeguards. 

The New York City administration is reviewing Gutierrez’s bill, while the NYPD’s use of gait recognition for criminal investigations would remain exempt under the proposed law. The debate continues over whether this technology’s benefits outweigh the potential erosion of personal privacy in one of the world’s busiest cities.

Allianz Life Data Breach Exposes Personal Information of 1.4 Million Customers

 

Allianz Life Insurance has disclosed a major cybersecurity breach that exposed the personal details of approximately 1.4 million individuals. The breach was detected on July 16, 2025, and the company reported the incident to the Maine Attorney General’s office the following day. Initial findings suggest that the majority of Allianz Life’s customer base may have been impacted by the incident. 

According to Allianz Life, the attackers did not rely on exploiting technical weaknesses but instead used advanced social engineering strategies to deceive company employees. This approach bypasses system-level defenses by manipulating human behavior and trust. The cybercriminal group believed to be responsible is Scattered Spider, a collective that recently orchestrated a damaging attack on UK retailer Marks & Spencer, leading to substantial financial disruption. 

In this case, the attackers allegedly gained access to a third-party customer relationship management (CRM) platform used by Allianz Life. The company noted that there is no indication that its core systems were affected. However, the stolen data reportedly includes personally identifiable information (PII) of customers, financial advisors, and certain employees. Allianz SE, the parent company, confirmed that the information was exfiltrated using social engineering techniques that exploited human error rather than digital vulnerabilities. 

Social engineering attacks often involve tactics such as impersonating internal staff or calling IT help desks to request password resets. Scattered Spider has been known to use these methods in past campaigns, including those that targeted MGM Resorts and Caesars Palace. Their operations typically focus on high-profile organizations and are designed to extract valuable data with minimal use of traditional hacking methods. 

The breach at Allianz is part of a larger trend of rising cyberattacks on the insurance industry. Other firms like Aflac, Erie Insurance, and Philadelphia Insurance have also suffered similar incidents in recent months, raising alarms about the sector’s cybersecurity readiness.  

Industry experts emphasize the growing need for businesses to bolster their cybersecurity defenses—not just by investing in better tools but also by educating their workforce. A recent Experis report identified cybersecurity as the top concern for technology firms in 2025. Alarmingly, Tech.co research shows that nearly 98% of senior leaders still struggle to recognize phishing attempts, which are a common entry point for such breaches. 

The Allianz Life breach highlights the urgent need for organizations to treat cybersecurity as a shared responsibility, ensuring that every employee is trained to identify and respond to suspicious activities. Without such collective vigilance, the threat landscape will continue to grow more dangerous.