A recently noticed configuration inside Microsoft Copilot may allow the AI tool to reference activity from several other Microsoft platforms, prompting renewed discussion around data privacy and AI personalization. The option, which appears within Copilot’s settings, enables the assistant to use information connected to services such as Bing, MSN, and the Microsoft Edge browser. Users who are uncomfortable with this level of integration can switch the feature off.
Like many modern artificial intelligence systems, Copilot attempts to improve the usefulness of its responses by understanding more about the person interacting with it. The assistant normally does this by remembering past conversations and storing certain details that users intentionally share during chats. These stored elements help the AI maintain context across multiple interactions and generate responses that feel more tailored.
However, a specific configuration called “Microsoft usage data” expands that capability. According to reporting first highlighted by the technology outlet Windows Latest, this setting allows Copilot to reference information associated with other Microsoft services a user has interacted with. The option appears within the assistant’s Memory controls and is available through both the Copilot website and its mobile applications. Observers believe the setting was introduced recently as part of Microsoft’s effort to strengthen personalization features in its AI tools.
The Memory feature in Copilot is designed to help the assistant retain useful context. Through this system, the AI can recall earlier conversations, remember instructions or factual information shared by users, and potentially reference certain account-linked activity from other Microsoft products. The idea is that by understanding more about a user’s interests or previous discussions, the assistant can provide more relevant answers.
In practice, such capabilities can be helpful. For instance, a user who discussed a topic with Copilot previously may want to continue that conversation later without repeating the entire background. Similarly, individuals seeking guidance about personal or professional matters may receive more relevant suggestions if the assistant has some awareness of their preferences or circumstances.
Despite the convenience, the feature also raises questions about privacy. Some users may be concerned that allowing an AI assistant to accumulate information from multiple services could expose more personal data than expected. Others may want to know how that information is used beyond personalizing conversations.
Microsoft addresses these concerns in its official Copilot documentation. In its frequently asked questions section, the company states that user conversations are processed only for limited purposes described in its privacy policies. According to Microsoft, this information may be used to evaluate Copilot’s performance, troubleshoot operational issues, identify software bugs, prevent misuse of the service, and improve the overall quality of the product.
The company also says that conversations are not used to train AI models by default. Model training is controlled through a separate configuration, which users can choose to disable if they do not want their interactions contributing to AI development.
Microsoft further clarifies that Copilot’s personalization settings do not determine whether a user receives targeted advertisements. Advertising preferences are managed through a different option available in the Microsoft account privacy dashboard. Users who want to stop personalized advertising must adjust the “Personalized ads and offers” setting separately.
Even with these explanations, privacy concerns remain understandable, particularly because Microsoft documentation indicates that Copilot’s personalization features may already be activated automatically in some cases. When the settings were reviewed on a personal device, these options were indeed switched on. Users who prefer not to allow Copilot to access broader usage data may therefore wish to disable them.
Checking these settings is straightforward. Users can open Copilot through its website or mobile application and ensure they are signed in with their Microsoft account. On the web interface, selecting the account name at the bottom of the left-hand panel opens the Settings menu, where the Memory section can be accessed. In the mobile application, the same controls are available through the side navigation menu by tapping the account name and choosing Memory.
Inside the Memory settings, users will see a general control labeled “Personalization and memory.” Two additional options appear beneath it: “Facts you’ve shared,” which stores information provided directly during conversations, and “Microsoft usage data,” which allows Copilot to reference activity from other Microsoft services.
To limit this behavior, users can switch off the Microsoft usage data toggle. They may also disable the broader Personalization and memory option if they prefer that the AI assistant does not retain contextual information about their interactions. Copilot also provides a “Delete all memory” function that removes all stored data from the system. If individual personal details have been recorded, they can be reviewed and deleted through the editing option next to “Facts you’ve shared.”
Security and privacy experts generally advise caution when sharing information with AI assistants, regardless of whether personalization features are enabled. Sensitive or confidential details should not be entered into conversations. Microsoft itself recommends avoiding the disclosure of certain types of highly personal data, including information related to health conditions or sexual orientation.
The broader development reflects a growing trend in the technology industry. As AI assistants become integrated across multiple platforms and services, companies are increasingly using cross-service data to make these tools more helpful and personalized. While this approach can improve convenience and usability, it also underscores the need for transparent privacy controls, so that users remain aware of how their information is being used and can adjust those settings when necessary.
According to a statement issued by the University of Hawaiʻi last week, hackers gained access to documents that included 1998 voter registration records from the City and County of Honolulu, as well as Social Security numbers (SSNs) and driver's license numbers gathered from the Hawaiʻi State Department of Transportation.
Part of the exposed data traces back to the Multiethnic Cohort (MEC) Study, launched in 1993. The institution recruited study participants using voter registration information and driver's license numbers, and some of the files that were made public included health information.
Files related to three other epidemiological studies of diet and cancer were also taken, along with data on MEC Study participants. The hack is still being investigated to determine whether further sensitive data was obtained. According to the university, "additional individuals whose personal information may have been included in the historical driver's license and voter registration records with SSN identifiers number approximately 1.15 million."
In total, 87,493 study participants had their information taken. The intrusion was first detected on August 31, 2025, according to a report the university submitted to the state legislature in January.
The stolen data was found in a subset of research files on servers supporting the epidemiological research activities of the University of Hawaii Cancer Center. The center's clinical trials, patient care, and other divisions were unaffected by the ransomware attack. Its director, Naoto Ueno, expressed regret for the incident last week and stated that the organization was "committed to transparency."
The university said that, to address the issue, it hired cybersecurity specialists and notified law enforcement after the attackers encrypted, and likely stole, data. The cybersecurity firm obtained a decryption tool along with "an affirmation that any information obtained was destroyed."
The University of Hawaii system comprises three universities, seven community colleges, one employment training center, and numerous research institutions spread across six islands, serving about 50,000 students.
The Madison Square Garden Family of Companies has disclosed that it recently alerted an undisclosed number of individuals about a cybersecurity incident that occurred in August 2025. The company confirmed that the exposed information includes names and Social Security numbers.
According to MSG’s notification letter, attackers exploited a previously unknown vulnerability in Oracle’s E-Business Suite, an enterprise software platform widely used for finance, human resources, and back-office operations. The affected system was hosted and managed by an unnamed third-party vendor, indicating the intrusion occurred through an externally maintained environment rather than MSG’s core internal network.
Oracle informed customers that a previously undisclosed flaw in the application had been abused by an unauthorized party to obtain access to stored data. MSG stated that its investigation, completed in late November 2025, determined that unauthorized access had taken place in August 2025. The gap between compromise and confirmation reflects a common pattern in zero-day attacks, where flaws are exploited before vendors are aware of their existence or able to issue patches.
In November 2025, the ransomware group known as Clop, also stylized as Cl0p, publicly claimed responsibility for the breach. During the same period, the group carried out a broader campaign targeting hundreds of organizations by leveraging the same Oracle vulnerability. MSG has not acknowledged Clop’s claim, and independent verification of the group’s involvement has not been established. The company has not disclosed how many people were notified, whether a ransom demand was made, or whether any payment occurred. A request for further comment remains pending.
MSG is offering eligible individuals one year of complimentary credit monitoring through TransUnion. Affected recipients have 90 days from receiving the notice letter to enroll.
Clop first appeared in 2019 and has become known for exploiting zero-day flaws in enterprise software. Beyond Oracle’s E-Business Suite, the group has targeted Cleo file transfer software and, more recently, vulnerabilities in Gladinet CentreStack file servers. Unlike traditional ransomware operators that focus primarily on encrypting systems, Clop frequently prioritizes data theft. The group exfiltrates information and then threatens to publish or sell it if payment is not made.
In 2025, Clop claimed responsibility for 456 ransomware incidents. Of those, 31 of the targeted organizations publicly confirmed data breaches, collectively exposing approximately 3.75 million personal records. Institutions reportedly affected by the Oracle zero-day campaign include Harvard University, GlobalLogic, SATO Corporation, and Dartmouth College.
So far in 2026, Clop has claimed another 123 victims, including the French labor union CFDT. Its most recent operations reportedly leverage a newer vulnerability in Gladinet CentreStack servers.
Ransomware activity across the United States remains extensive. In 2025, researchers recorded 646 confirmed ransomware attacks against U.S. organizations, along with 3,193 additional unverified claims made by ransomware groups. Confirmed incidents resulted in nearly 42 million exposed records. One of the largest cases linked to Clop involved exploitation of the Oracle vulnerability at the University of Phoenix, which later notified 3.5 million individuals. In 2026 to date, 17 confirmed attacks and 624 unconfirmed claims are under review.
Other incidents disclosed this week include a December 2024 breach affecting the City of Carthage, Texas, reportedly claimed by Rhysida; a March 2025 breach at Hennessy Advisors impacting 12,643 individuals and attributed to LockBit; an August 2025 breach at KCI Telecommunications linked to Akira; and a December 2025 incident at The Lewis Bear Company affecting 555 individuals and also claimed by Akira.
Ransomware attacks can both disable systems through encryption and involve large-scale data theft. In Clop’s case, data exfiltration appears to be the primary tactic. Organizations that refuse to meet ransom demands may face public disclosure of stolen data, extended operational disruption, and increased fraud risks for affected individuals.
The Madison Square Garden Family of Companies includes Madison Square Garden Sports Corp., Madison Square Garden Entertainment Corp., and Sphere Entertainment Co. The group owns and operates major venues such as Madison Square Garden, Radio City Music Hall, and the Las Vegas Sphere.
As organizations build and host their own large language models (LLMs), they also create a network of supporting services and APIs to keep those systems running. The growing danger does not usually originate from the model's intelligence itself, but from the technical framework that delivers, connects, and automates it. Every new interface added to support an LLM expands the number of possible entry points into the system. During rapid rollouts, these interfaces are often trusted automatically and reviewed later, if at all.
When these access points are given excessive permissions or rely on long-lasting credentials, they can open doors far wider than intended. A single poorly secured endpoint can provide access to internal systems, service identities, and sensitive data tied to LLM operations. For that reason, managing privileges at the endpoint level is becoming a central security requirement.
In practical terms, an endpoint is any digital doorway that allows a user, application, or service to communicate with a model. This includes APIs that receive prompts and return generated responses, administrative panels used to update or configure models, monitoring dashboards, and integration points that allow the model to interact with databases or external tools. Together, these interfaces determine how deeply the LLM is embedded within the broader technology ecosystem.
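To make the idea concrete, here is a minimal sketch of one such doorway: a prompt-serving API with an authentication check, written in Python with FastAPI. The route path, header name, and generate helper are illustrative assumptions, not any particular vendor's interface.

```python
# Minimal sketch of an LLM-serving endpoint (all names are hypothetical).
# It illustrates the "digital doorway" pattern: a route that authenticates
# the caller, accepts a prompt, and returns generated text.
import hmac
import os

from fastapi import FastAPI, Header, HTTPException
from pydantic import BaseModel

app = FastAPI()

# The expected key is loaded from the environment, never hardcoded in source.
EXPECTED_KEY = os.environ.get("LLM_API_KEY", "")


class PromptRequest(BaseModel):
    prompt: str


def generate(prompt: str) -> str:
    """Stand-in for the actual model call (an assumption for this sketch)."""
    return f"echo: {prompt}"


@app.post("/v1/generate")
def serve_prompt(body: PromptRequest, x_api_key: str = Header(default="")):
    # Constant-time comparison avoids leaking key material through timing.
    if not EXPECTED_KEY or not hmac.compare_digest(x_api_key, EXPECTED_KEY):
        raise HTTPException(status_code=401, detail="unauthorized")
    return {"completion": generate(body.prompt)}
```

Every additional route of this kind (admin panels, dashboards, tool hooks) is another such doorway, each with its own authentication and privilege decisions.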
A major issue is that many of these interfaces are designed for experimentation or early deployment phases. They prioritize speed and functionality over hardened security controls. Over time, temporary testing configurations remain active, monitoring weakens, and permissions accumulate. In many deployments, the endpoint effectively becomes the security perimeter. Its authentication methods, secret management practices, and assigned privileges ultimately decide how far an intruder could move.
Exposure rarely stems from a single catastrophic mistake. Instead, it develops gradually. Internal APIs may be made publicly reachable to simplify integration and left unprotected. Access tokens or API keys may be embedded in code and never rotated. Teams may assume that internal networks are inherently secure, overlooking the fact that VPN access, misconfigurations, or compromised accounts can bridge that boundary. Cloud settings, including improperly configured gateways or firewall rules, can also unintentionally expose services to the internet.
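The key-handling failure mode is easy to illustrate. The sketch below, with assumed variable names and an assumed 30-day rotation policy, loads a credential from the environment instead of from source code and refuses to start if the key is stale:

```python
# Sketch: keep credentials out of source code and enforce rotation at startup.
# Environment variable names and the rotation window are assumptions.
import os
import sys
from datetime import datetime, timezone

# Anti-pattern, shown only for contrast; a key like this never gets rotated:
# API_KEY = "sk-live-abc123"

MAX_KEY_AGE_DAYS = 30  # assumed rotation policy


def load_api_key() -> str:
    key = os.environ.get("SERVICE_API_KEY")
    # Issue time must be an ISO 8601 timestamp with a UTC offset,
    # e.g. "2026-01-15T00:00:00+00:00".
    issued_at = os.environ.get("SERVICE_API_KEY_ISSUED_AT")
    if not key or not issued_at:
        sys.exit("SERVICE_API_KEY and SERVICE_API_KEY_ISSUED_AT must be set")
    age = datetime.now(timezone.utc) - datetime.fromisoformat(issued_at)
    if age.days > MAX_KEY_AGE_DAYS:
        sys.exit(f"API key is {age.days} days old; rotate it before deploying")
    return key
```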
These risks are amplified in LLM ecosystems because models are typically connected to multiple internal systems. If an attacker compromises one endpoint, they may gain indirect access to databases, automation tools, and cloud resources that already trust the model’s credentials. Unlike traditional APIs with narrow functions, LLM interfaces often support broad, automated workflows. This enables lateral movement at scale.
Threat actors can exploit prompts to extract confidential information the model can access. They may also misuse tool integrations to modify internal resources or trigger privileged operations. Even limited access can be dangerous if attackers manipulate input data in ways that influence the model to perform harmful actions indirectly.
Non-human identities intensify this exposure. Service accounts, machine credentials, and API keys allow models to function continuously without human intervention. For convenience, these identities are often granted broad permissions and rarely audited. If an endpoint tied to such credentials is breached, the attacker inherits trusted system-level access. Problems such as scattered secrets across configuration files, long-lived static credentials, excessive permissions, and a growing number of unmanaged service accounts increase both complexity and risk.
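One way to contain this class of risk is to replace long-lived static credentials with short-lived, narrowly scoped tokens minted on demand. The following sketch shows the idea with in-process helpers; the token format, scope names, and 15-minute default lifetime are assumptions, not a specific product's API.

```python
# Sketch: short-lived, scoped tokens for non-human identities.
# All identifiers and scope names here are hypothetical.
import secrets
import time
from dataclasses import dataclass, field


@dataclass
class ServiceToken:
    subject: str                  # which non-human identity holds the token
    scopes: frozenset             # what the token may do, and nothing more
    expires_at: float             # unix timestamp; the token dies on its own
    value: str = field(default_factory=lambda: secrets.token_urlsafe(32))


def mint_token(subject: str, scopes: set, ttl_seconds: int = 900) -> ServiceToken:
    """Issue a token that expires after 15 minutes by default."""
    return ServiceToken(subject=subject,
                        scopes=frozenset(scopes),
                        expires_at=time.time() + ttl_seconds)


def authorize(token: ServiceToken, required_scope: str) -> bool:
    """Reject expired tokens and any action outside the granted scopes."""
    return time.time() < token.expires_at and required_scope in token.scopes


# Example: the model's retrieval worker may read one index, nothing else.
token = mint_token("llm-retrieval-worker", {"vector-index:read"})
assert authorize(token, "vector-index:read")
assert not authorize(token, "database:write")
```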
Mitigating these threats requires assuming that some endpoints will eventually be compromised. Security strategies should therefore focus on limiting impact. Access should follow strict least-privilege principles for both people and systems. Elevated rights should be granted only temporarily and revoked automatically. Sensitive sessions should be logged and reviewed. Credentials must be rotated regularly, and long-standing static secrets should be eliminated wherever possible.
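For the temporary-elevation point in particular, a just-in-time grant that revokes itself and writes an audit trail might look like the following sketch; the identity, role names, and time-to-live are illustrative assumptions.

```python
# Sketch: just-in-time privilege elevation with automatic revocation and
# audit logging. Identity and role names are hypothetical.
import logging
import threading

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit = logging.getLogger("privilege-audit")

# Current grants: identity -> set of roles. Standing access stays minimal.
GRANTS = {"llm-admin-api": {"read-only"}}
_lock = threading.Lock()


def elevate(identity: str, role: str, ttl_seconds: float) -> None:
    """Grant an extra role temporarily; revoke it automatically at TTL."""
    with _lock:
        GRANTS.setdefault(identity, set()).add(role)
    audit.info("ELEVATE %s -> %s for %ss", identity, role, ttl_seconds)

    def revoke() -> None:
        with _lock:
            GRANTS.get(identity, set()).discard(role)
        audit.info("REVOKE %s -> %s (TTL expired)", identity, role)

    threading.Timer(ttl_seconds, revoke).start()


# Example: a five-second deploy window; the role disappears on its own.
elevate("llm-admin-api", "model-deploy", ttl_seconds=5)
```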
Because LLM systems operate autonomously and at scale, traditional access models are no longer sufficient. Strong endpoint privilege governance, continuous verification, and reduced standing access are essential to protecting AI-driven infrastructure from escalating compromise.