
Apple Rolls Out Global Age-Verification System to Protect Kids Online

 

Apple has rolled out a new global age-verification system across its platforms, aimed at keeping kids safer online while helping developers comply with tightening child safety laws worldwide. The move targets both app downloads and in‑app experiences, with a particular focus on blocking underage access to adult‑rated content without sacrificing user privacy.

Under the new rules, users in countries such as Brazil, Australia and Singapore will be blocked from downloading apps rated 18+ unless Apple can confirm they are adults. Similar protections are being extended to parts of the United States, where states like Utah and Louisiana are introducing strict online age‑assurance laws, pushing platforms to verify whether users are children, teens or adults before allowing access to certain apps or features. This marks one of Apple’s strongest steps yet to align its App Store with regional regulations on children’s digital safety.

At the heart of the initiative is Apple’s privacy‑focused Declared Age Range API, which lets apps learn a user’s age category instead of their exact birthdate. Developers can use this signal to tailor content, enable or disable features, or trigger parental consent flows for younger users, while never seeing sensitive identity details. Apple says this design is meant to minimize data collection and reduce the risk of intrusive ID checks or third‑party age‑verification databases.
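To sketch how an app might consume such a coarse age signal, here is a minimal Python model. The real Declared Age Range API is a native Apple framework, so the category names and the policy mapping below are purely illustrative assumptions; the point is that the app only ever sees a range, never a birthdate.

```python
from enum import Enum

# Hypothetical coarse age categories, modeled loosely on the idea of a
# declared age range: the app receives a bucket, not a birthdate.
class AgeRange(Enum):
    CHILD = "under_13"
    TEEN = "13_to_17"
    ADULT = "18_plus"

def content_policy(declared_range: AgeRange) -> dict:
    """Map a declared age range to app behavior without ever handling
    a birthdate or an identity document (illustrative mapping)."""
    if declared_range is AgeRange.ADULT:
        return {"mature_content": True, "needs_parental_consent": False}
    # Children and teens both get mature content blocked; consent flows
    # are triggered for anyone under 18 in this sketch.
    return {"mature_content": False, "needs_parental_consent": True}
```

The design choice worth noting is that the whole decision runs on a three-value enum, so a breach of the app's own database would leak nothing more precise than "adult" or "minor."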

For parents, the age‑verification push builds on Apple’s existing child account system and content restrictions. Parents can already set up child profiles, choose age ranges and apply web content filters, and now those settings can flow through to third‑party apps via the new tools. This means a game, social app or streaming service can automatically recognize that a user is a child or teen and adjust what they can see or do without asking for new personal information.

For developers, Apple is introducing an expanded toolkit that includes the updated Declared Age Range API, new age‑rating properties in StoreKit, and improved server notifications to track compliance. These tools will be essential in regions where apps must prove they are screening out underage users from adult content or obtaining parental consent for significant changes. As more governments pass online safety laws, Apple’s global age‑verification framework is likely to become a key part of how the App Store balances regulatory demands with user privacy.

Conduent Leak: One of the Largest Breaches in the U.S.


Conduent, a business that provides printing, payment, and document processing services to some of the biggest health insurance companies in the nation, has had the personal information of at least 25 million people stolen. Addresses, Social Security numbers, and health information were exposed to ransomware hackers in what some have already dubbed one of the biggest data breaches in American history. 

According to a letter the business posted online, Conduent first learned it was the victim of a "cyber incident" on January 13, 2025, more than a year ago. The breach itself occurred between October 21, 2024, and January 13, 2025, and involved health-plan data that Conduent processes on behalf of its clients.

Names, Social Security numbers, health insurance details, and unspecified medical information were among the data exposed. In its notice, the business stressed that "not every data element was present for every individual," meaning some people may have had their health insurance information taken but not their Social Security number, or vice versa. 

According to BleepingComputer, the Safepay ransomware group claimed responsibility for the attack and says it captured more than 8 gigabytes of data. Conduent stated online, "Presently, we are unaware of any attempted or actual misuse of any information involved in this incident," though it is unclear whether Safepay has demanded a ransom for the data.

Oregon's consumer protection website lists 10.5 million people affected by the incident, though it is unclear how many of them are Oregon residents. Wisconsin's filing puts the national total at more than 25 million. 

Notifications have also gone out to residents of other states, including California, Delaware, Massachusetts, New Hampshire, and New Mexico. Maine is among the states with very small numbers: according to its attorney general, only 374 people's data was compromised there. Conduent, a New Jersey-based company, did not reply to emails on Tuesday asking about the full extent of the incident and what victims can do about it.

Conduent is providing free credit monitoring and identity restoration services through Epiq to certain individuals, but according to a letter sent to victims in California, those affected must enroll before April 30, 2026.

Age Verification Laws for Social Media Raise Privacy Concerns and Enforcement Challenges

 

Governments around the world are pushing tighter rules limiting young users’ access to social media. Citing worries over endless scrolling, disturbing online material, and growing emotional struggles among teens, officials are demanding change. Minimum entry ages, often 13 or 16, are now common in draft laws shaping platform duties. While debates continue, one thing holds: unrestricted teenage access faces mounting resistance. 

Still, putting such policies into practice raises both technological hurdles and privacy concerns. To confirm that people are old enough, services need proof, yet proving age typically means gathering private details, while current regulations push firms to keep data collection minimal. That tension forms what specialists call an “age-verification trap,” in which tighter control over access can weaken the safeguards meant to protect personal information. 

Many age-limit rules demand that services make "reasonable efforts" to block young users, but clear guidance on how to check someone's actual age is almost never included. In practice, firms fill this gap with two main methods. The first is identity verification: people prove their age with official ID or online identity tools. 

Although more reliable, storing such data creates worries about privacy breaches: handling vast collections of private details increases exposure to cyber threats, and security weakens when too much sensitive material accumulates in one place. The second method is age estimation. By observing how someone uses a device, or by analyzing video selfies with face-scanning technology, systems try to judge a user's age without asking for ID. 
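The two methods trade certainty for privacy in opposite directions, so real systems often combine them. The following Python sketch shows one hypothetical decision flow (the function name, confidence threshold, and fallback behavior are all illustrative assumptions, not any platform's documented logic): trust a hard ID check when one exists, fall back to a probabilistic estimate, and ask for stronger evidence when the estimate is too uncertain.

```python
from typing import Optional

def decide_access(id_verified_adult: Optional[bool],
                  estimated_age: Optional[float],
                  confidence: float,
                  threshold_age: int = 18) -> str:
    """Illustrative age-gating logic combining both verification methods.

    id_verified_adult: result of an ID check, or None if no ID was provided.
    estimated_age / confidence: output of an age-estimation model, if any.
    """
    # A completed identity check overrides any estimate.
    if id_verified_adult is not None:
        return "allow" if id_verified_adult else "deny"
    # No ID and no usable estimate: escalate to explicit verification.
    if estimated_age is None or confidence < 0.8:
        return "request_verification"
    return "allow" if estimated_age >= threshold_age else "deny"
```

The "request_verification" branch is where the age-verification trap described above bites: escalating means collecting exactly the sensitive documents that minimization rules discourage holding.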

Because these outcomes rest on likelihoods rather than confirmed proof, doubt remains part of the process. Several big tech firms already run such tools. Meta applies face-based age checks on Instagram in select regions, asking certain users to submit brief video clips if they appear underage, while TikTok examines openly shared videos to estimate how old someone might be. 

Elsewhere, Google and its platform YouTube lean on activity patterns, and when doubt remains they can ask for official identification or payment details. These steps aim to confirm ages without relying solely on self-reported information. But mistakes happen: though meant to protect, these systems occasionally misidentify adults as children, leading to sudden loss of account access. 

At other times, underage users slip through the gaps by borrowing IDs or setting up multiple profiles, and restrictions fail when shared credentials enter the picture. Verification itself creates risk when systems retain proof materials past their immediate need: stored face scans, ID photos, and validation logs may linger just to satisfy legal checks, and these files attract digital intrusions simply by existing. Every extra day they remain increases the chance of a breach. 

Where identity infrastructure is weak, the difficulty grows. Biometrics may step in when official systems fall short, and oversight tends to be sparse even as outside verifiers take on bigger roles. Shielding kids on the web without losing control of private information is far from simple. As authorities roll out tighter age-verification rules, the tools built to comply with them could change how identities and personal details move through digital spaces.

Rising Cyber Threats Linked to Ongoing Middle East Conflict


Geopolitical crises have historically been fought on physical battlefields, but in the modern threat landscape their effects are seldom confined to borders. As tensions swirl across the Middle East in the wake of the United States' military operations in Iran and Tehran's retaliatory actions, a parallel surge of activity is being witnessed in the digital world. 


Security analysts and government cyber agencies are increasingly concerned that geopolitical instability provides fertile ground for cybercriminals and state-aligned actors. Attackers have used the rapidly evolving situation as a convenient narrative to manipulate public curiosity, exploit fear, and conceal malicious campaigns.

As soon as the escalation began, researchers began tracking a growing ecosystem of cyber infrastructure based on conflict that lures unsuspecting users into fraudulent websites, phishing scams, and malware downloads. 

In many cases, what appears to be breaking news or urgent updates about a crisis hides carefully designed traps meant to infiltrate corporations, collect credentials, or spread malicious software designed to steal data. 

Due to this, the conflict's digital shadow has expanded beyond the immediate region, raising concerns among cybersecurity professionals that opportunistic attacks may become increasingly targeted against individuals and organizations worldwide. 

The intensification of hostilities in late February 2026, when the United States and Israel are said to have conducted coordinated airstrikes against multiple Iranian facilities, has further compounded the cyber threat. 

Security analysts have identified a pattern in which cyber activity closely tracks developments on the ground, as the strikes and retaliatory actions reverberate across several Middle Eastern nations. 

According to researchers, digital operations played a supporting role long before the first missiles were launched. Coordinated electronic warfare tactics and large-scale distributed denial-of-service campaigns disrupted Iran's command-and-control infrastructure, temporarily reducing national internet connectivity to a fraction of its usual capacity and potentially complicating real-time military coordination. 

It is clear from such incidents that cyber capabilities are becoming increasingly integrated into broader strategic operations, influencing the circumstances under which conventional military engagements occur. However, analysts note that the cyber dimension of the conflict cannot be limited to state-directed operations alone. 

Iran's digital response is widely expected to follow an asymmetric model, executed largely by loosely aligned or ideologically sympathetic groups operating outside its borders. These groups vary considerably in capability, but their activities often involve defacing websites, leaking data, and launching disruptive attacks intended to generate publicity as much as operational damage. 

Teams tracking online channels associated with hacktivist communities observed hundreds of cyberattack claims within days of the escalation, many shared via propaganda outlets and messaging platforms aligned with geopolitical agendas. 

Although not every claim reflects a verified breach, the rapid dissemination of such announcements can create confusion, inflate perceived impact, and pressure targeted organizations into responding before technical verification is possible. It is also becoming clear that the target list is expanding beyond political disruption. 

Cybersecurity monitoring indicates that conflict-related activity extends beyond Israel to the Gulf states, Jordan, Cyprus, and American organizations based abroad. Financially motivated ransomware operators and threat groups have also attempted to frame attacks on Israeli and Western-linked entities as political alignment rather than ordinary crime.

This blending of ideological messaging with traditional cybercrime tactics is gradually blurring the line between state-aligned disruption and financially motivated extortion. Security teams have also warned that opportunistic actors are leveraging geopolitical tensions as a narrative hook for phishing and fraud operations. 

Travel-related scams are increasingly targeting individuals stranded or traveling within the region, and credential-harvesting campaigns are aimed at diplomats, journalists, humanitarian organizations, and defense contractors. Growing attacker interest in industrial and operational technology environments has raised particular alarm. 

Early cyber activity linked to the conflict consisted primarily of defacements and distributed denial-of-service attacks against public websites. More recent threat intelligence reports, however, describe attempts to probe systems linked to industrial control components such as programmable logic controllers. 

If substantiated, this shift would represent a substantial escalation in both technical ambition and potential impact. Energy facilities, utilities, and other critical infrastructure operators throughout the Middle East and Gulf region should reevaluate their operational network resilience, particularly where information technology connects to industrial control systems. 

Together, these developments suggest a broad range of potential cyber activity, including high-volume DDoS campaigns that target government portals as well as targeted spear-phishing activities that seek credentials from diplomats, media organizations, and defense contractors. 

A number of analysts have warned that ransomware incidents may be politicized, that hack-and-leak operations will target military-linked entities, and that destructive malware may be used to disable government systems. 

Influence campaigns and fabricated breach claims circulating on social media are expected to play a parallel role in shaping public perception alongside these technical threats. Because both verified attacks and exaggerated narratives can produce real-world consequences, enhanced situational awareness and defensive monitoring are becoming integral to organizational risk management. 

The broader regional context also helps explain why geopolitical escalation so often heightens cybersecurity risk in the Middle East. Over the past decade, countries across the region have pursued large-scale digital transformation initiatives, modernizing public services, financial systems, telecommunications infrastructure, and energy operations. 

Gulf Cooperation Council members in particular have led these efforts. While strengthening economic diversification and technological capacity, they have simultaneously expanded the digital attack surface available to threat actors.

Monitoring in the Gulf indicates a rise in traditional cybercrime targeting both private and state institutions. In recent years, financial fraud campaigns, ransomware attacks, and politically motivated web defacements have disrupted industries ranging from banking to telecommunications. 

Several high-profile incidents have involved breaches of financial institutions and mobile banking platforms, while ransomware groups have increasingly targeted large regional service providers. These campaigns have grown in both frequency and sophistication, reflecting the rising strategic value of the region's interconnected digital infrastructure. 

The threat environment is not limited to conventional cybercrime, however. Researchers continue to report advanced persistent threat groups conducting cyberespionage operations against government agencies, defense organizations, and energy infrastructure throughout the region. 

Many of these campaigns are widely believed to be state-linked and driven by geopolitical rivalries, with particular attention on actors associated with Iran following earlier cyber incidents against its nuclear facilities. 

Activities attributed to these groups have included the deployment of destructive malware, covert surveillance campaigns, and data-destruction attacks aimed at disrupting critical infrastructure, often without clear indication of whether the underlying motive is political disruption or financial gain. 

This convergence of motives has complicated attribution efforts, as cyberespionage, sabotage, and criminal activity increasingly overlap. Cybersecurity dynamics are also shaped by the political and social significance of the digital space within the region.

Middle Eastern governments frequently regulate digital platforms, data flows, and communication infrastructure as matters of national stability and regime security. Social media and messaging platforms have consequently evolved into contested environments where state institutions, activists, extremist organizations, and influence networks compete to shape narratives. 

In times of conflict or political instability, this competition can take the form of distributed denial-of-service attacks, coordinated disinformation campaigns, doxxing operations, and claims of data breaches aimed at putting pressure on political opponents or influencing public opinion. 

With the increasing use of artificial intelligence tools to create synthetic media, automate propaganda, and manipulate information flows, maintaining reliable situational awareness during emergencies has become increasingly difficult for organizations. The integration of artificial intelligence and autonomous technologies into military and security operations across the region adds a further emerging dimension. 

New cybersecurity vulnerabilities are inevitable as governments and non-state actors experiment with AI-enabled surveillance, targeting, and operational coordination systems. Where such systems depend on complex software supply chains or foreign technological expertise, they offer potential entry points for cyber intrusion, manipulation, and espionage. 

According to security specialists, interference with these technologies could have consequences beyond data theft, affecting battlefield decision-making, operational reliability, and strategic control over sensitive defense capabilities. 

Institutions are not the only ones to face such risks. Technology-facilitated abuse has become increasingly problematic for vulnerable communities as it intersects with personal safety concerns and digital rights. 

Several parts of the region have seen a rise in manipulated images and deepfake content, including impersonation schemes and sextortion. Many victims face significant social stigma or legal barriers when seeking assistance, which discourages reporting and allows perpetrators to operate with relative impunity. 

In combination, these trends illustrate that cybersecurity in the Middle East is not limited to protecting networks or infrastructure. A complex intersection of national security, information control, technological competition, and social vulnerability leaves the region particularly exposed to cyber activity arising from geopolitical tensions.

Cyberattacks Shift Tactics as Hackers Exploit User Behavior and AI, Experts Warn

 

Cybersecurity threats are evolving rapidly, forcing businesses to rethink how they approach digital security. Experts say modern cyberattacks are no longer focused solely on breaking technical defenses but are increasingly designed to exploit everyday user behavior. 
 
According to industry observers, files downloaded by employees have become a common entry point for cybercriminals. Items such as invoices, installers, documents, and productivity tools are often downloaded without careful verification, creating opportunities for attackers. 

“The Downloads folder has quietly become one of the hottest pieces of real estate for cybercriminals,” said Sanket Atal, senior vice president of engineering and country head at OpenText India. 

“Attackers are not trying to break cryptography anymore. They’re hijacking habits.” Research cited by the company indicates that more than one third of consumer malware infections are first detected in the Downloads directory. 

Security specialists say this reflects a broader shift in how cyberattacks are designed, with attackers relying more on social engineering and multi-stage malware. Atal said malicious files frequently appear harmless when first opened. “These files often look completely harmless at first,” he said. 

“They only later pull in ransomware components or credential-stealing payloads. It is a multi-stage approach that is very difficult to catch with signature-based tools.” Experts say the rise in such attacks is also linked to the growing industrialization of cybercrime. 

Modern ransomware groups and information-stealing operations increasingly operate like structured businesses that continuously test and refine their methods. “Ransomware-as-a-service groups and info-stealer operators are constantly refining their lures,” Atal said. 

“They are comfortable using SEO-poisoned websites, fake update prompts, and even ‘productivity tools’ to get users to download something that looks normal.” India’s rapidly expanding digital ecosystem has made it an attractive target for attackers. 

The combination of millions of new internet users, the widespread use of personal devices for work, and the overlap between personal and professional computing environments increases exposure to risk. 

“When a poisoned file lands in a Downloads folder on a personal device, it can easily become an entry point into enterprise systems,” Atal said. “Especially when that same device is used for banking, office work, and email.” Artificial intelligence is further changing the threat landscape. 

Generative AI tools can now produce convincing phishing messages that mimic corporate communication styles and reference real projects. “AI has removed the traditional visual cues people relied on to spot scams,” Atal said. 

“Generative models now write in perfect business language, reuse an organisation’s tone, and reference real projects scraped from public sources.” Security analysts say deepfake technology is also being used to manipulate business processes. 

Synthetic video calls and cloned voices have been used to approve financial transactions in some cases. Another emerging pattern is the rise of malware-free intrusions, where attackers rely on stolen credentials or legitimate remote access tools instead of traditional malicious software. 

“We’re also seeing a rise in malware-free intrusions,” Atal said. “Attackers use stolen credentials and legitimate remote access tools. Nothing matches a known signature, yet the breach is very real.” Experts say these developments are forcing organizations to shift their security strategies. 

Instead of focusing solely on scanning files and attachments, security teams are increasingly monitoring behavior patterns across users, devices, and systems. “The first shift is moving from content to behaviour,” Atal said. 

“Instead of just scanning attachments, organisations need to focus on whether a user or service account is behaving consistently with historical and peer norms.” Security specialists also emphasize the importance of integrating identity verification with threat detection systems. 
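As a toy illustration of this shift from content scanning to behavioral baselines, the sketch below flags activity that deviates sharply from a user's own history. The z-score threshold and the choice of a single daily-count feature are illustrative assumptions, not OpenText's actual method.

```python
from statistics import mean, stdev

def is_anomalous(history: list, today: float, z_cut: float = 3.0) -> bool:
    """Flag today's activity count if it deviates sharply from the
    user's own historical baseline (a minimal stand-in for the
    behavioral analytics described above)."""
    if len(history) < 2:
        return False  # not enough baseline data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        # Perfectly constant history: any change at all is unusual.
        return today != mu
    return abs(today - mu) / sigma > z_cut
```

For example, a service account that normally downloads five to seven files a day suddenly pulling forty would trip this check, even though no individual file matched a malware signature. Real products would combine many such features with peer-group comparisons rather than a single count.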

When phishing messages become difficult to distinguish from legitimate communication, identity context becomes a key factor in identifying suspicious activity. In addition, companies are beginning to rely on artificial intelligence for defensive purposes. 

Automated systems can help security teams manage the growing volume of alerts by identifying patterns and highlighting potential threats more quickly. “Security teams are overwhelmed by alerts,” Atal said. 

“AI-based triage is essential to reduce noise, correlate weak signals, and generate plain-language narratives so analysts can act faster.” Despite increased awareness of cybersecurity threats, several misconceptions persist. 

Many organizations assume that the most serious cyberattacks originate from sophisticated state-backed actors. “One big myth is that serious attacks only come from exotic nation-state actors,” Atal said. “The truth is, most breaches begin with everyday issues such as phishing, malicious downloads, weak passwords, or cloud misconfigurations.” 

Another misconception is that smaller organizations are less likely to be targeted. However, experts say attackers often focus on industries with weaker security controls, including healthcare providers, hospitality companies, and smaller financial institutions. 

Cybersecurity specialists also warn that many attacks no longer rely on traditional malware. Techniques such as identity-based attacks, business email compromise, and misuse of legitimate administrative tools often bypass standard antivirus defenses. “Identity-based attacks, business email compromise, and abuse of legitimate tools often never trigger traditional antivirus,” Atal said. 

“The starting point can be any user, device, or partner that has access to data.” Industry leaders say the challenge is compounded by the fact that many cybersecurity systems were designed for a different technological environment. 

Vinayak Godse, chief executive of the Data Security Council of India, said existing security frameworks were built before the widespread adoption of digital services and artificial intelligence. 

“In the digitalisation space, we are creating tremendous experiences, productivity gains, and new possibilities,” Godse said. “But the security frameworks we have in place were designed for an older paradigm.” He added that attackers today are capable of identifying and exploiting even a single vulnerability in complex digital systems. 

“The current attack ecosystem can identify and exploit even one vulnerability out of millions, or even billions,” Godse said. Experts say the erosion of traditional network boundaries has further complicated security efforts. Remote work, cloud computing, software-as-a-service platforms, and third-party integrations mean that sensitive systems can now be accessed from a wide range of devices and locations. 

“A user on a personal phone, accessing a SaaS application from home Wi-Fi, is still inside your risk perimeter,” Atal said. As a result, organizations are increasingly focusing on continuous verification and context-aware monitoring rather than relying solely on perimeter defenses. 

According to Atal, the effectiveness of AI-driven security tools ultimately depends on the quality of underlying data. If data sources are fragmented or poorly labeled, even advanced analytics systems may struggle to detect threats. 
 
“Every advanced AI-driven security use case boils down to whether you can see your data and whether you can trust it,” he said. Security experts say that integrating identity signals, access patterns, and data sensitivity into unified monitoring systems can help organizations identify suspicious activity more effectively. 

“When data, identity, and threat signals are unified, security teams can see a connected narrative,” Atal said. “A login, a download, and a data access event stop being isolated alerts and start telling a story.” 

 
Despite advances in technology, experts say human behavior remains a critical factor in cybersecurity. 

“In today’s cyber landscape, the front line is no longer the firewall,” Atal said. “It is the file you choose to open and the behaviour that follows.”

New Copilot Setting May Access Activity From Other Microsoft Services. Here’s How Users Can Disable It

 



A recently noticed configuration inside Microsoft Copilot may allow the AI tool to reference activity from several other Microsoft platforms, prompting renewed discussion around data privacy and AI personalization. The option, which appears within Copilot’s settings, enables the assistant to use information connected to services such as Bing, MSN, and the Microsoft Edge browser. Users who are uncomfortable with this level of integration can switch the feature off.

Like many modern artificial intelligence systems, Copilot attempts to improve the usefulness of its responses by understanding more about the person interacting with it. The assistant normally does this by remembering past conversations and storing certain details that users intentionally share during chats. These stored elements help the AI maintain context across multiple interactions and generate responses that feel more tailored.

However, a specific configuration called “Microsoft usage data” expands that capability. According to reporting first highlighted by the technology outlet Windows Latest, this setting allows Copilot to reference information associated with other Microsoft services a user has interacted with. The option appears within the assistant’s Memory controls and is available through both the Copilot website and its mobile applications. Observers believe the setting was introduced recently as part of Microsoft’s effort to strengthen personalization features in its AI tools.

The Memory feature in Copilot is designed to help the assistant retain useful context. Through this system, the AI can recall earlier conversations, remember instructions or factual information shared by users, and potentially reference certain account-linked activity from other Microsoft products. The idea is that by understanding more about a user’s interests or previous discussions, the assistant can provide more relevant answers.

In practice, such capabilities can be helpful. For instance, a user who discussed a topic with Copilot previously may want to continue that conversation later without repeating the entire background. Similarly, individuals seeking guidance about personal or professional matters may receive more relevant suggestions if the assistant has some awareness of their preferences or circumstances.

Despite the convenience, the feature also raises questions about privacy. Some users may be concerned that allowing an AI assistant to accumulate information from multiple services could expose more personal data than expected. Others may want to know how that information is used beyond personalizing conversations.

Microsoft addresses these concerns in its official Copilot documentation. In its frequently asked questions section, the company states that user conversations are processed only for limited purposes described in its privacy policies. According to Microsoft, this information may be used to evaluate Copilot’s performance, troubleshoot operational issues, identify software bugs, prevent misuse of the service, and improve the overall quality of the product.

The company also says that conversations are not used to train AI models by default. Model training is controlled through a separate configuration, which users can choose to disable if they do not want their interactions contributing to AI development.

Microsoft further clarifies that Copilot’s personalization settings do not determine whether a user receives targeted advertisements. Advertising preferences are managed through a different option available in the Microsoft account privacy dashboard. Users who want to stop personalized advertising must adjust the Personalized ads and offers setting separately.

Even with these explanations, privacy concerns remain understandable, particularly because Microsoft documentation indicates that Copilot’s personalization features may already be activated automatically in some cases. On one personal device reviewed, these options were found to be switched on. Users who prefer not to allow Copilot to access broader usage data may therefore wish to disable them.

Checking these settings is straightforward. Users can open Copilot through its website or mobile application and ensure they are signed in with their Microsoft account. On the web interface, selecting the account name at the bottom of the left-hand panel opens the Settings menu, where the Memory section can be accessed. In the mobile application, the same controls are available through the side navigation menu by tapping the account name and choosing Memory.

Inside the Memory settings, users will see a general control labeled “Personalization and memory.” Two additional options appear beneath it: “Facts you’ve shared,” which stores information provided directly during conversations, and “Microsoft usage data,” which allows Copilot to reference activity from other Microsoft services.

To limit this behavior, users can switch off the Microsoft usage data toggle. They may also disable the broader Personalization and memory option if they prefer that the AI assistant does not retain contextual information about their interactions. Copilot also provides a “Delete all memory” function that removes all stored data from the system. If individual personal details have been recorded, they can be reviewed and deleted through the editing option next to “Facts you’ve shared.”

Security and privacy experts generally advise caution when sharing information with AI assistants, even when personalization features remain enabled. Sensitive or confidential details should not be entered into conversations. Microsoft itself recommends avoiding the disclosure of certain types of highly personal data, including information related to health conditions or sexual orientation.

The broader development reflects a growing trend in the technology industry. As AI assistants become integrated across multiple platforms and services, companies are increasingly using cross-service data to make these tools more helpful and personalized. While this approach can improve convenience and usability, it also underscores the need for transparent privacy controls, so users remain aware of how their information is being used and can adjust those settings when necessary.
