A recently noticed configuration inside Microsoft Copilot may allow the AI tool to reference activity from several other Microsoft platforms, prompting renewed discussion around data privacy and AI personalization. The option, which appears within Copilot’s settings, enables the assistant to use information connected to services such as Bing, MSN, and the Microsoft Edge browser. Users who are uncomfortable with this level of integration can switch the feature off.
Like many modern artificial intelligence systems, Copilot attempts to improve the usefulness of its responses by understanding more about the person interacting with it. The assistant normally does this by remembering past conversations and storing certain details that users intentionally share during chats. These stored elements help the AI maintain context across multiple interactions and generate responses that feel more tailored.
However, a specific configuration called “Microsoft usage data” expands that capability. According to reporting first highlighted by the technology outlet Windows Latest, this setting allows Copilot to reference information associated with other Microsoft services a user has interacted with. The option appears within the assistant’s Memory controls and is available through both the Copilot website and its mobile applications. Observers believe the setting was introduced recently as part of Microsoft’s effort to strengthen personalization features in its AI tools.
The Memory feature in Copilot is designed to help the assistant retain useful context. Through this system, the AI can recall earlier conversations, remember instructions or factual information shared by users, and potentially reference certain account-linked activity from other Microsoft products. The idea is that by understanding more about a user’s interests or previous discussions, the assistant can provide more relevant answers.
In practice, such capabilities can be helpful. For instance, a user who discussed a topic with Copilot previously may want to continue that conversation later without repeating the entire background. Similarly, individuals seeking guidance about personal or professional matters may receive more relevant suggestions if the assistant has some awareness of their preferences or circumstances.
Despite the convenience, the feature also raises questions about privacy. Some users may be concerned that allowing an AI assistant to accumulate information from multiple services could expose more personal data than expected. Others may want to know how that information is used beyond personalizing conversations.
Microsoft addresses these concerns in its official Copilot documentation. In its frequently asked questions section, the company states that user conversations are processed only for limited purposes described in its privacy policies. According to Microsoft, this information may be used to evaluate Copilot’s performance, troubleshoot operational issues, identify software bugs, prevent misuse of the service, and improve the overall quality of the product.
The company also says that conversations are not used to train AI models by default. Model training is controlled through a separate configuration, which users can choose to disable if they do not want their interactions contributing to AI development.
Microsoft further clarifies that Copilot’s personalization settings do not determine whether a user receives targeted advertisements. Advertising preferences are managed through a different option available in the Microsoft account privacy dashboard. Users who want to stop personalized advertising must adjust the Personalized ads and offers setting separately.
Even with these explanations, privacy concerns remain understandable, particularly because Microsoft documentation indicates that Copilot’s personalization features may already be activated automatically in some cases. When the settings were reviewed on a personal device, these options were found to be switched on. Users who prefer not to allow Copilot to access broader usage data may therefore wish to disable them.
Checking these settings is straightforward. Users can open Copilot through its website or mobile application and ensure they are signed in with their Microsoft account. On the web interface, selecting the account name at the bottom of the left-hand panel opens the Settings menu, where the Memory section can be accessed. In the mobile application, the same controls are available through the side navigation menu by tapping the account name and choosing Memory.
Inside the Memory settings, users will see a general control labeled “Personalization and memory.” Two additional options appear beneath it: “Facts you’ve shared,” which stores information provided directly during conversations, and “Microsoft usage data,” which allows Copilot to reference activity from other Microsoft services.
To limit this behavior, users can switch off the Microsoft usage data toggle. They may also disable the broader Personalization and memory option if they prefer that the AI assistant does not retain contextual information about their interactions. Copilot also provides a “Delete all memory” function that removes all stored data from the system. If individual personal details have been recorded, they can be reviewed and deleted through the editing option next to “Facts you’ve shared.”
Security and privacy experts generally advise caution when sharing information with AI assistants, even when personalization features remain enabled. Sensitive or confidential details should not be entered into conversations. Microsoft itself recommends avoiding the disclosure of certain types of highly personal data, including information related to health conditions or sexual orientation.
The broader development reflects a growing trend in the technology industry. As AI assistants become integrated across multiple platforms and services, companies are increasingly using cross-service data to make these tools more helpful and personalized. While this approach can improve convenience and usability, it also underscores the need for transparent privacy controls so users remain aware of how their information is being used and can adjust those settings when necessary.
Several mental health mobile applications with millions of downloads on Google Play contain security flaws that could leak users’ personal medical data.
Researchers found over 85 medium- and high-severity vulnerabilities in one of the apps, flaws that could be abused to compromise users' therapy data and privacy.
Some of the products are AI companions built to help people coping with anxiety, clinical depression, bipolar disorder, and stress.
Six of the ten studied applications claim that user chats are private and securely encrypted on the vendor's servers.
Oversecured CEO Sergey Toshin said that “Mental health data carries unique risks. On the dark web, therapy records sell for $1,000 or more per record, far more than credit card numbers.”
Experts scanned ten mobile applications promoted as tools that help with mental health issues, and found 1,575 security flaws: 938 low-severity, 538 medium-severity, and 54 rated high-severity.
No critical issues were found, but some of the flaws could be leveraged to steal login credentials, inject HTML, locate the user, or spoof notifications.
Experts used the Oversecured scanner to analyse the APK files of the mental health apps for known flaw patterns in different categories.
One therapy app with over a million downloads calls Intent.parseUri() on an externally controlled string and then launches the resulting messaging object (intent) without verifying its target component.
This makes it possible for an attacker to compel the application to launch any internal activity, even if it isn't meant for external access.
Oversecured said, “Since these internal activities often handle authentication tokens and session data, exploitation could give an attacker access to a user’s therapy records.”
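The defensive pattern implied here is to validate an externally supplied link against an allowlist of publicly reachable targets before dispatching it, instead of launching whatever the string resolves to. Below is a minimal, language-agnostic sketch of that idea in Python; the scheme `therapyapp` and the target names are invented for illustration and do not come from the report.

```python
from typing import Optional
from urllib.parse import urlparse

# Hypothetical allowlist of screens that may be opened from an external link.
# A real Android app would compare against explicit, exported component names.
ALLOWED_TARGETS = {"chat", "journal", "settings"}

def resolve_external_link(uri: str) -> Optional[str]:
    """Return the target screen for an externally supplied link,
    or None if the link points at something not meant to be public."""
    parsed = urlparse(uri)
    # Only accept the app's own scheme, never arbitrary schemes.
    if parsed.scheme != "therapyapp":
        return None
    target = parsed.netloc
    # Reject anything outside the allowlist, e.g. internal auth screens.
    return target if target in ALLOWED_TARGETS else None

print(resolve_external_link("therapyapp://chat"))        # accepted
print(resolve_external_link("therapyapp://auth_token"))  # rejected -> None
```

The vulnerable pattern Oversecured describes skips the allowlist step entirely, which is what lets an attacker reach internal activities that handle tokens and session data.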
Another problem is local storage that is readable by every app on the device, which, depending on what is saved, can expose therapy details such as Cognitive Behavioural Therapy (CBT) exercises, session notes, and journal entries. Experts also found plaintext configuration data and backend API endpoints inside the APK resources.
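World-readable storage of this kind comes down to classic Unix permission bits, which Android inherits. A minimal sketch of how a scanner might flag files readable by any other user on the device follows; the filenames are invented for the demo, and this is a generic permission check, not Oversecured's tooling.

```python
import os
import stat
import tempfile

def world_readable_files(directory: str):
    """List files under `directory` whose permission bits grant
    read access to 'other' (i.e. any other user/app on the device)."""
    exposed = []
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            mode = os.stat(path).st_mode
            if mode & stat.S_IROTH:  # "other" read bit set
                exposed.append(path)
    return exposed

# Demo: create one private file and one world-readable file.
d = tempfile.mkdtemp()
private = os.path.join(d, "session_notes.db")
public = os.path.join(d, "config.json")
open(private, "w").close(); os.chmod(private, 0o600)
open(public, "w").close();  os.chmod(public, 0o644)
print(world_readable_files(d))  # only the 0o644 file is flagged
```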
“These apps collect and store some of the most sensitive personal data in mobile: therapy session transcripts, mood logs, medication schedules, self-harm indicators, and in some cases, information protected under HIPAA,” Oversecured said.
Artificial intelligence is increasingly being used to help developers identify security weaknesses in software, and a new tool from OpenAI reflects that shift.
The company has introduced Codex Security, an automated security assistant designed to examine software projects, detect vulnerabilities, confirm whether they can actually be exploited, and recommend ways to fix them.
The feature is currently being released as a research preview and can be accessed through the Codex interface by users subscribed to ChatGPT Pro, Enterprise, Business, and Edu plans. OpenAI said customers will be able to use the capability without cost during its first month of availability.
According to the company, the system studies how a codebase functions as a whole before attempting to locate security flaws. By building a detailed understanding of how the software operates, the tool aims to detect complicated vulnerabilities that may escape conventional automated scanners while filtering out minor or irrelevant issues that can overwhelm security teams.
The technology is an advancement of Aardvark, an internal project that entered private testing in October 2025 to help development and security teams locate and resolve weaknesses across large collections of source code.
During the last month of beta testing, Codex Security examined more than 1.2 million individual code commits across publicly accessible repositories. The analysis identified 792 critical vulnerabilities and 10,561 issues classified as high severity.
Several well-known open-source projects were affected, including OpenSSH, GnuTLS, GOGS, Thorium, libssh, PHP, and Chromium.
Some of the identified weaknesses were assigned official vulnerability identifiers. These included CVE-2026-24881 and CVE-2026-24882 linked to GnuPG, CVE-2025-32988 and CVE-2025-32989 affecting GnuTLS, and CVE-2025-64175 along with CVE-2026-25242 associated with GOGS. In the Thorium browser project, researchers also reported seven separate issues ranging from CVE-2025-35430 through CVE-2025-35436.
OpenAI explained that the system relies on advanced reasoning capabilities from its latest AI models together with automated verification techniques. This combination is intended to reduce the number of incorrect alerts while producing remediation guidance that developers can apply directly.
Repeated scans of the same repositories during testing also showed measurable improvements in accuracy. The company reported that the number of false alarms declined by more than 50 percent while the precision of vulnerability detection increased.
The platform operates through a multi-step process. It begins by examining a repository in order to understand the structure of the application and map areas where security risks are most likely to appear. From this analysis, the system produces an editable threat model describing the software’s behavior and potential attack surfaces.
Using that model as a reference point, the tool searches for weaknesses and evaluates how serious they could be in real-world scenarios. Suspected vulnerabilities are then executed in a sandbox environment to determine whether they can actually be exploited.
When configured with a project-specific runtime environment, the system can test potential vulnerabilities directly against a functioning version of the software. In some cases it can also generate proof-of-concept exploits, allowing security teams to confirm the problem before deploying a fix.
Once validation is complete, the tool suggests code changes designed to address the weakness while preserving the original behavior of the application. This approach is intended to reduce the risk that security patches introduce new software defects.
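The multi-step process described above — scan for candidates, reproduce each one in a sandbox, and only then propose a fix — can be sketched as a simple pipeline. Everything below is a conceptual illustration with invented names, not OpenAI's actual implementation; the stand-in `reproduce` callback is where a real system would attempt exploitation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Finding:
    description: str
    severity: str
    confirmed: bool = False
    suggested_fix: str = ""

@dataclass
class SecurityPipeline:
    """Hypothetical validate-before-report loop."""
    scan: Callable[[], List[Finding]]     # search for weaknesses
    reproduce: Callable[[Finding], bool]  # sandbox validation
    propose_fix: Callable[[Finding], str] # remediation suggestion

    def run(self) -> List[Finding]:
        confirmed = []
        for finding in self.scan():
            # Only findings that actually reproduce are reported,
            # which is how false alarms get filtered out.
            if self.reproduce(finding):
                finding.confirmed = True
                finding.suggested_fix = self.propose_fix(finding)
                confirmed.append(finding)
        return confirmed

# Toy run: two candidate findings, only one "reproduces" in the sandbox.
candidates = [Finding("SQL injection in /search", "high"),
              Finding("possible XSS in footer", "low")]
pipeline = SecurityPipeline(
    scan=lambda: candidates,
    reproduce=lambda f: f.severity == "high",  # stand-in for a real exploit test
    propose_fix=lambda f: f"patch for: {f.description}",
)
reported = pipeline.run()
print([f.description for f in reported])
```

The key design choice the article attributes to Codex Security is that reporting happens only after the reproduction step succeeds, trading some scan time for far fewer false alarms.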
The launch of Codex Security follows the introduction of Claude Code Security by Anthropic, another system that analyzes software repositories to uncover vulnerabilities and propose remediation steps.
The emergence of these tools reflects a broader trend within cybersecurity: using artificial intelligence to review vast amounts of software code, detect vulnerabilities earlier in the development cycle, and assist developers in securing critical digital infrastructure.
Microsoft’s new Threat Intelligence report reveals that threat actors are using generative AI tools for various tasks, such as phishing, surveillance, malware development, infrastructure development, and post-compromise activity.
In various incidents, AI has helped attackers craft phishing emails, summarize stolen information, debug malware, translate content, and configure infrastructure. “Microsoft Threat Intelligence has observed that most malicious use of AI today centers on using language models for producing text, code, or media. Threat actors use generative AI to draft phishing lures, translate content, summarize stolen data, generate or debug malware, and scaffold scripts or infrastructure,” the report said.
“For these uses, AI functions as a force multiplier that reduces technical friction and accelerates execution, while human operators retain control over objectives, targeting, and deployment decisions,” warns Microsoft.
Microsoft identified several hacking groups using AI in their cyberattacks, including the North Korean actors known as Coral Sleet (Storm-1877) and Jasper Sleet (Storm-0287), who use AI in their remote IT worker scams.
AI helps these operatives create realistic identities, communications, and resumes to land jobs at Western companies and gain access once hired. Microsoft also explained how AI is being exploited in malware development and infrastructure creation. Threat actors are using AI coding tools to create and refine malicious code, fix errors, and port malware components to different programming languages.
A few malware samples showed traces of AI-enabled functionality that generates scripts or configures behaviour at runtime. Microsoft also found Coral Sleet using AI to create fake company websites, manage infrastructure, and troubleshoot their installations.
Where AI providers’ safeguards block these uses, Microsoft says, hackers turn to jailbreaking techniques to trick the models into generating malicious code or content.
Besides generative AI use, the report revealed that hackers are experimenting with agentic AI that can perform tasks autonomously, although for now decision-making remains largely in human hands. Because IT worker campaigns depend on the exploitation of legitimate access, experts have advised organizations to treat these attacks as insider risks.
Expanding cybersecurity services as a Managed Service Provider (MSP) or Managed Security Service Provider (MSSP) requires more than strong technical capabilities. Providers also need a sustainable business approach that can deliver clear and measurable value to clients while supporting growth at scale.
One approach gaining attention across the cybersecurity industry is risk-based security management. When implemented effectively, this model can strengthen trust with customers, create opportunities to offer additional services, and establish stable recurring revenue streams. However, maintaining such a strategy consistently requires structured workflows and the right supporting technologies.
To help providers adopt this approach, a new resource titled “The MSP Growth Guide: How MSPs Use AI-Powered Risk Management to Scale Their Cybersecurity Business” outlines how organizations can transition toward scalable cybersecurity services centered on risk management. The guide provides insights into the operational difficulties many MSPs encounter, offers recommendations from industry experts, and explains how AI-driven risk management platforms can help build a more scalable and profitable service model.
Why Risk-Focused Security Enables Service Expansion
Many MSPs already deliver essential cybersecurity capabilities such as endpoint protection, regulatory compliance assistance, and other defensive tools. While these services remain critical, they are often delivered as separate engagements rather than as part of a unified strategy. As a result, the long-term strategic value of these services may remain limited, and opportunities to generate consistent recurring revenue may be reduced.
Adopting a risk-centered cybersecurity framework can shift this dynamic. Instead of addressing isolated technical issues, providers evaluate the complete threat environment facing a client organization. Security risks are then prioritized according to their potential impact on business operations.
This broader perspective allows MSPs to move away from reactive fixes and instead deliver continuous, proactive security management.
Organizations that implement this risk-first model can gain several advantages:
• Security teams can detect and address threats before they escalate into damaging incidents.
• Defensive measures can be continuously updated as the cyber threat landscape evolves.
• Critical assets, daily operations, and organizational reputation can be protected even when compliance regulations do not explicitly require certain safeguards.
Another major benefit is alignment with modern cybersecurity frameworks. Many current standards require companies to conduct formal and ongoing risk evaluations. By integrating risk management into their core service offerings, MSPs can position themselves to pursue higher-value contracts and offer additional services driven by regulatory compliance requirements.
Common Obstacles That Limit Risk Management Services
Although risk-focused security delivers substantial value, MSPs often encounter operational barriers that make these services difficult to scale or demonstrate clearly to clients.
Several recurring challenges affect service delivery and growth:
Manual assessment processes
Traditional risk evaluations often rely heavily on manual work. This approach can consume significant time, introduce inconsistencies, and make it difficult to expand services efficiently.
Lack of actionable remediation plans
Risk reports sometimes underline security weaknesses but fail to outline clear steps for resolving them. Without defined guidance, clients may struggle to understand how to address the issues that have been identified.
Complex regulatory alignment
Organizations frequently need to comply with multiple cybersecurity standards and regulatory frameworks. Managing these requirements manually can create inefficiencies and inconsistencies.
Limited business context in security reports
Many security assessments are written in highly technical language. As a result, business leaders and non-technical stakeholders may find it difficult to interpret the results or understand the real impact on their organization.
Shortage of specialized cybersecurity professionals
Skilled risk management experts remain in high demand across the industry, making it difficult for service providers to recruit and retain qualified personnel.
Third-party risk visibility gaps
Many cybersecurity platforms focus only on internal infrastructure and overlook risks introduced by external vendors and service providers.
These challenges can make it difficult for MSPs to transform risk management into a scalable and profitable cybersecurity offering.
How AI-Powered Platforms Help Address These Barriers
To overcome these operational difficulties, many providers are turning to artificial intelligence-driven risk management tools.
AI-based platforms can automate large portions of the risk management process. Tasks that previously required extensive manual effort, such as risk assessment, prioritization, and reporting, can be completed more quickly and consistently.
These systems are designed to streamline the entire risk management lifecycle while incorporating advanced security expertise into service delivery.
What Modern Risk Management Platforms Should Deliver
A well-designed AI-enabled risk management solution should do more than simply detect potential threats. It should also accelerate service delivery and support business growth for service providers.
Organizations adopting these platforms can expect several operational benefits:
• Faster onboarding and service deployment through automated and easy-to-use risk assessment tools
• More efficient compliance management supported by built-in mappings to cybersecurity frameworks and continuous monitoring capabilities
• Clearer reporting that presents cybersecurity risks in language business leaders can understand
• Demonstrable return on investment by reducing manual workloads and enabling more efficient service delivery
• Additional revenue opportunities by identifying new cybersecurity services clients may require based on their risk profile
Key Capabilities to Evaluate When Selecting a Platform
Selecting the right technology platform is critical for service providers that want to scale cybersecurity operations effectively.
Several capabilities are considered essential in modern risk management tools:
Automated risk assessment systems
Automation allows providers to generate assessment results within days rather than months, while minimizing human error and ensuring consistent outcomes.
Dynamic risk registers and visual risk mapping
Visualization tools such as heatmaps help security teams quickly identify which risks pose the greatest threat and should be addressed first.
Action-oriented remediation planning
Effective platforms convert risk findings into structured and prioritized tasks aligned with both compliance obligations and business objectives.
Customizable risk tolerance frameworks
Organizations can adapt risk scoring models to match each client’s specific operational priorities and appetite for risk.
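The capabilities above — scored risk registers, heatmap-style prioritization, and client-specific risk tolerance — rest on a simple underlying model: score each risk by likelihood times impact, then map scores onto bands using thresholds the client controls. A minimal sketch follows; the 1–5 scales, threshold values, and risk names are illustrative assumptions, not taken from any particular platform.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # hypothetical 1 (rare) .. 5 (almost certain) scale
    impact: int      # hypothetical 1 (negligible) .. 5 (severe) scale

    @property
    def score(self) -> int:
        # Classic risk-register scoring: likelihood x impact.
        return self.likelihood * self.impact

def classify(risk: Risk, tolerance: dict) -> str:
    """Map a raw score onto a heatmap band using client-specific thresholds."""
    if risk.score >= tolerance["high"]:
        return "high"
    if risk.score >= tolerance["medium"]:
        return "medium"
    return "low"

# A risk-averse client sets lower thresholds; a risk-accepting one raises them.
cautious = {"high": 12, "medium": 6}
risks = [Risk("unpatched VPN appliance", 4, 5),
         Risk("stale test account", 2, 2)]
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(r.name, r.score, classify(r, cautious))
```

Customizing the `tolerance` thresholds per client is what lets the same register produce different heatmaps for organizations with different appetites for risk.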
The MSP Growth Guide provides additional details on the features providers should consider when evaluating potential solutions.
Building Long-Term Strategic Value with AI-Driven Risk Management
For MSPs and MSSPs seeking to expand their cybersecurity practices, AI-powered risk management offers a way to deliver consistent value while improving operational efficiency.
By automating risk assessments, prioritizing security issues based on business impact, and standardizing reporting processes, these platforms enable providers to deliver reliable cybersecurity services to a growing client base.
The guide “The MSP Growth Guide: How MSPs Use AI-Powered Risk Management to Scale Their Cybersecurity Business” explains how service providers can integrate AI-driven risk management into their offerings to support long-term growth.
Organizations interested in strengthening customer relationships, expanding cybersecurity services, and building a competitive advantage may benefit from adopting risk-focused security strategies supported by AI-enabled platforms.