Cybersecurity analysts have identified a phishing campaign that can quietly hand control of a Windows computer to attackers after a single click. The scam appears as a routine update notice for Google Meet, but the prompt is fraudulent and redirects victims into a device management system controlled by threat actors.
Unlike many phishing schemes, the technique does not steal passwords, download obvious malware, or display clear warning signs. Instead, the attack relies on convincing users to interact with a page that imitates a standard software update message.
A convincing but fake update message
The deceptive webpage tells visitors they must install the latest version of Meet in order to continue using the service. The design closely resembles a legitimate update notification and uses familiar colors and branding that many users associate with Google products.
However, neither the “Update now” button nor the “Learn more” link connects to any official Google resource. Instead, both activate a special Windows deep link known as ms-device-enrollment:.
This feature is a built-in Windows mechanism designed for corporate environments. IT administrators commonly use it to send employees a link that allows a computer to be enrolled in a company’s device management system with minimal effort. In the attack campaign, the same capability is redirected to infrastructure operated by the attacker.
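As a rough illustration, a defender triaging a suspicious “update” page could check its links for this scheme. The snippet below is a minimal sketch; the sample HTML is hypothetical and modeled on the page described above, not the actual campaign code:

```python
# Heuristic check: flag any hyperlink on a page that uses the
# ms-device-enrollment: deep-link scheme instead of a normal URL.
import re

SUSPICIOUS_SCHEME = "ms-device-enrollment:"

def find_enrollment_links(html: str) -> list[str]:
    """Return every href value that uses the enrollment deep-link scheme."""
    hrefs = re.findall(r"""href\s*=\s*["']([^"']+)["']""", html, re.IGNORECASE)
    return [h for h in hrefs if h.lower().startswith(SUSPICIOUS_SCHEME)]

# Hypothetical markup mimicking the fake "update" page described above.
page = """
<a href="ms-device-enrollment:?mode=mdm&username=user%40example.com">Update now</a>
<a href="https://support.google.com/meet">Learn more</a>
"""

print(find_enrollment_links(page))
```

A check like this catches the tell-tale mismatch at the heart of the lure: a webpage button that invokes an operating-system enrollment handler rather than opening a web address.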
How the enrollment process begins
Enrollment links such as ms-device-enrollment: are most often seen in corporate environments where organizations need to configure large numbers of laptops quickly. The link automatically opens Windows Settings and connects the device to an enterprise management server.
Once enrolled, the device becomes part of a management framework that allows administrators to deploy software updates, enforce security policies, and manage system configurations remotely.
Attackers exploit this workflow because users are accustomed to seeing this setup process when joining corporate networks, making it appear legitimate.
When a victim clicks the link, Windows immediately bypasses the browser and opens the operating system’s “Set up a work or school account” dialog. This is the same interface that appears when an organization configures a new employee laptop.
The enrollment request arrives with several fields already filled in. The username displayed is collinsmckleen@sunlife-finance.com, which uses a domain designed to resemble the financial services firm Sun Life Financial. Meanwhile, the server connection is preconfigured to an endpoint hosted at tnrmuv-api.esper[.]cloud, part of the cloud infrastructure of the device management vendor Esper.
The attacker’s objective is not to impersonate the victim’s account perfectly. Instead, the goal is to persuade the user to continue through the legitimate Windows enrollment process. Even if only a small portion of targeted users proceed, that is enough for attackers to gain access to some systems.
What attackers gain after enrollment
If the victim clicks Next and completes the setup wizard, the computer becomes registered with a remote Mobile Device Management (MDM) server.
MDM platforms are commonly used by organizations to manage employee devices. Once a device joins such a system, administrators can remotely install or remove applications, modify operating system settings, access stored files, lock the device, or completely erase its contents.
Because the commands come from a legitimate management platform rather than a malicious program, the operating system performs the actions itself. As a result, there may be no suspicious malware process running on the machine.
The infrastructure used in this campaign relies on Esper, a legitimate enterprise management service that many companies use to control corporate hardware.
Further analysis of the malicious link shows encoded configuration data embedded in the server address. When decoded, the data reveals two identifiers associated with the Esper platform: a blueprint ID that determines which management configuration will be applied and a group ID that specifies the device group the computer will join once enrolled.
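The decoding step can be sketched as follows. The blob format, field names, and ID values here are hypothetical stand-ins, since the actual identifiers are not published in full:

```python
# Illustrative sketch: unpack a base64-encoded configuration blob of the
# kind embedded in an enrollment server address. Field names and values
# are hypothetical -- the real decoded payload is not reproduced here.
import base64
import json

def decode_enrollment_blob(blob: str) -> dict:
    """Base64-decode a URL-safe blob (padding often stripped) and parse JSON."""
    padded = blob + "=" * (-len(blob) % 4)  # restore stripped '=' padding
    return json.loads(base64.urlsafe_b64decode(padded))

# Build a hypothetical blob resembling what might ride along in the URL.
sample = base64.urlsafe_b64encode(
    json.dumps({"blueprint_id": "bp-0000", "group_id": "grp-0000"}).encode()
).decode().rstrip("=")

config = decode_enrollment_blob(sample)
print(config["blueprint_id"], config["group_id"])
```

In other words, the link itself carries enough configuration to drop an enrolled machine into a specific management blueprint and device group the moment setup completes.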
Abuse of legitimate features
Both the Windows enrollment handler and the Esper management service are functioning exactly as designed. The attacker’s tactic simply redirects these legitimate tools toward unsuspecting users.
Because no malicious software is delivered and no login credentials are requested, the attack can be difficult for security tools to detect. The enrollment prompt displayed to the user is an authentic Windows system dialog rather than a fake webpage. This means typical browser warnings or email filters that look for credential-stealing forms may not flag the activity.
Additionally, the command infrastructure operates on a trusted cloud-based platform, making domain reputation filtering less effective. Security specialists warn that many traditional detection tools are not designed to recognize situations where legitimate operating system features are misused to gain control of a system.
This technique reflects a broader trend in cybercrime. Increasingly, attackers are abandoning conventional malware and instead exploiting built-in operating system capabilities or legitimate cloud services to carry out their operations.
Steps to take if you interacted with the page
Users who believe they may have clicked the fake update prompt should first check whether their device has been enrolled in an unfamiliar management system.
On Windows computers, this can be done by navigating to Settings → Accounts → Access work or school. If an unfamiliar entry appears, particularly one associated with domains such as sunlife-finance or esper, it should be selected and disconnected immediately.
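For responders triaging many machines, the same manual check can be expressed as a simple filter over collected enrollment entries. The entry strings and indicator list below are illustrative, drawn from the indicators named in this campaign:

```python
# Minimal sketch: flag "Access work or school" account entries whose
# domain matches indicators from this campaign. The entries would be
# gathered by hand or via management tooling; values are illustrative.

SUSPICIOUS_DOMAINS = ("sunlife-finance.com", "esper.cloud")

def flag_enrollments(entries: list[str]) -> list[str]:
    """Return entries whose account domain matches a known indicator."""
    flagged = []
    for entry in entries:
        domain = entry.split("@", 1)[-1].lower()
        if any(domain == d or domain.endswith("." + d) for d in SUSPICIOUS_DOMAINS):
            flagged.append(entry)
    return flagged

entries = ["alice@contoso.com", "collinsmckleen@sunlife-finance.com"]
print(flag_enrollments(entries))
```

Any flagged entry should be disconnected immediately through the same Settings page.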
Anyone who clicked the “Update now” link on the malicious site and proceeded through the enrollment wizard should treat the computer as potentially compromised. Running a current anti-malware scan is recommended to determine whether the management server deployed additional software after enrollment.
For organizations, administrators may also want to review device management policies. Endpoint management platforms such as Microsoft Intune allow companies to restrict which MDM servers corporate devices are permitted to join. Implementing such restrictions can reduce the risk of unauthorized device enrollment in similar attacks.
Security researchers have warned that misuse of device management systems can be particularly dangerous because they grant deep administrative control over enrolled devices.
According to analysts from Gartner, enterprise device management platforms often have privileged system access comparable to local administrators, allowing them to modify system policies, install applications, and control security settings remotely.
When such privileges fall into the wrong hands, attackers can effectively operate the device as if they were legitimate administrators.
According to a letter the company issued online, Conduent first learned it was the victim of a "cyber incident" more than a year ago, on January 13, 2025. The breach itself occurred between October 21, 2024, and January 13, 2025, and the compromised data included health-related information because the company provides services to health plans.
Names, Social Security numbers, health insurance details, and unspecified medical information were among the data taken. In its notice, the company stressed that "not every data element was present for every individual," meaning some people may have had their health insurance information stolen but not their Social Security number, or vice versa.
According to Bleeping Computer, the Safepay ransomware group claimed responsibility for the attack, which allegedly captured more than 8 gigabytes of data. Conduent stated online, "Presently, we are unaware of any attempted or actual misuse of any information involved in this incident," though it is unclear whether Safepay has demanded payment for the data.
According to Oregon's consumer protection website, 10.5 million people were affected by the incident, though it is unclear how many of them are Oregon residents. Wisconsin's filing puts the national total at more than 25 million.
Notifications have also gone out to residents of other states, including California, Delaware, Massachusetts, New Hampshire, and New Mexico. In Maine, one of the states with far smaller totals, just 374 people's data was compromised, according to the state's attorney general. Conduent, a New Jersey-based company, did not reply to emails on Tuesday asking about the full extent of the incident and what victims can do about it.
Conduent is providing free credit monitoring and identity restoration services through Epiq to certain individuals, but those affected must enroll before April 30, 2026, according to a letter sent to victims in California.
A recently noticed configuration inside Microsoft Copilot may allow the AI tool to reference activity from several other Microsoft platforms, prompting renewed discussion around data privacy and AI personalization. The option, which appears within Copilot’s settings, enables the assistant to use information connected to services such as Bing, MSN, and the Microsoft Edge browser. Users who are uncomfortable with this level of integration can switch the feature off.
Like many modern artificial intelligence systems, Copilot attempts to improve the usefulness of its responses by understanding more about the person interacting with it. The assistant normally does this by remembering past conversations and storing certain details that users intentionally share during chats. These stored elements help the AI maintain context across multiple interactions and generate responses that feel more tailored.
However, a specific configuration called “Microsoft usage data” expands that capability. According to reporting first highlighted by the technology outlet Windows Latest, this setting allows Copilot to reference information associated with other Microsoft services a user has interacted with. The option appears within the assistant’s Memory controls and is available through both the Copilot website and its mobile applications. Observers believe the setting was introduced recently as part of Microsoft’s effort to strengthen personalization features in its AI tools.
The Memory feature in Copilot is designed to help the assistant retain useful context. Through this system, the AI can recall earlier conversations, remember instructions or factual information shared by users, and potentially reference certain account-linked activity from other Microsoft products. The idea is that by understanding more about a user’s interests or previous discussions, the assistant can provide more relevant answers.
In practice, such capabilities can be helpful. For instance, a user who discussed a topic with Copilot previously may want to continue that conversation later without repeating the entire background. Similarly, individuals seeking guidance about personal or professional matters may receive more relevant suggestions if the assistant has some awareness of their preferences or circumstances.
Despite the convenience, the feature also raises questions about privacy. Some users may be concerned that allowing an AI assistant to accumulate information from multiple services could expose more personal data than expected. Others may want to know how that information is used beyond personalizing conversations.
Microsoft addresses these concerns in its official Copilot documentation. In its frequently asked questions section, the company states that user conversations are processed only for limited purposes described in its privacy policies. According to Microsoft, this information may be used to evaluate Copilot’s performance, troubleshoot operational issues, identify software bugs, prevent misuse of the service, and improve the overall quality of the product.
The company also says that conversations are not used to train AI models by default. Model training is controlled through a separate configuration, which users can choose to disable if they do not want their interactions contributing to AI development.
Microsoft further clarifies that Copilot’s personalization settings do not determine whether a user receives targeted advertisements. Advertising preferences are managed through a different option available in the Microsoft account privacy dashboard. Users who want to stop personalized advertising must adjust the Personalized ads and offers setting separately.
Even with these explanations, privacy concerns remain understandable, particularly because Microsoft's documentation indicates that Copilot's personalization features may already be activated automatically in some cases. In at least one check of the settings on a personal device, these options were found to be switched on. Users who prefer not to let Copilot access broader usage data may therefore wish to disable them.
Checking these settings is straightforward. Users can open Copilot through its website or mobile application and ensure they are signed in with their Microsoft account. On the web interface, selecting the account name at the bottom of the left-hand panel opens the Settings menu, where the Memory section can be accessed. In the mobile application, the same controls are available through the side navigation menu by tapping the account name and choosing Memory.
Inside the Memory settings, users will see a general control labeled “Personalization and memory.” Two additional options appear beneath it: “Facts you’ve shared,” which stores information provided directly during conversations, and “Microsoft usage data,” which allows Copilot to reference activity from other Microsoft services.
To limit this behavior, users can switch off the Microsoft usage data toggle. They may also disable the broader Personalization and memory option if they prefer that the AI assistant does not retain contextual information about their interactions. Copilot also provides a “Delete all memory” function that removes all stored data from the system. If individual personal details have been recorded, they can be reviewed and deleted through the editing option next to “Facts you’ve shared.”
Security and privacy experts generally advise caution when sharing information with AI assistants, even when personalization features remain enabled. Sensitive or confidential details should not be entered into conversations. Microsoft itself recommends avoiding the disclosure of certain types of highly personal data, including information related to health conditions or sexual orientation.
The broader development reflects a growing trend in the technology industry. As AI assistants become integrated across multiple platforms and services, companies are increasingly using cross-service data to make these tools more helpful and personalized. While this approach can improve convenience and usability, it also underscores the need for transparent privacy controls, so users remain aware of how their information is being used and can adjust those settings when necessary.
Several mental health mobile applications with millions of downloads on Google Play have security flaws that could leak users' personal medical data.
Researchers found more than 85 medium- and high-severity vulnerabilities in just one of the apps, flaws that could be abused to expose users' therapy data and compromise their privacy.
A few of the products are AI companions built to help people coping with anxiety, clinical depression, bipolar disorder, and stress.
Six of the ten studied applications claimed that user chats are private and securely encrypted on the vendor's servers.
Oversecured CEO Sergey Toshin said that “Mental health data carries unique risks. On the dark web, therapy records sell for $1,000 or more per record, far more than credit card numbers.”
Experts scanned ten mobile applications promoted as tools that help with mental health issues, and found 1,575 security flaws: 938 low-severity, 538 medium-severity, and 54 rated high-severity.
No critical issues were found, but some of the flaws can be leveraged to steal login credentials, perform HTML injection, locate the user, or spoof notifications.
Experts used the Oversecured scanner to analyse the APK files of the mental health apps for known flaw patterns in different categories.
One therapy app with over a million downloads calls Intent.parseUri() on an externally controlled string and launches the resulting messaging object (intent) without verifying the target component.
This makes it possible for an attacker to compel the application to launch any internal activity, even if it isn't meant for external access.
Oversecured said, “Since these internal activities often handle authentication tokens and session data, exploitation could give an attacker access to a user’s therapy records.”
Another problem is data stored locally in a way that grants read access to every app on the device. Depending on what is saved, this can expose therapy details such as Cognitive Behavioural Therapy (CBT) exercises, session notes, and therapy entries. The experts also found plaintext configuration data and backend API endpoints inside the APK resources.
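A check of the last kind can be sketched as a simple static scan: treat the APK as the ZIP archive it is and search its resources for plaintext URLs. The toy archive below stands in for a real APK; this is a rough sketch, not the Oversecured scanner itself:

```python
# Illustrative static check: open an APK (a ZIP archive) and grep its
# res/ and assets/ entries for plaintext backend endpoints.
import io
import re
import zipfile

URL_PATTERN = re.compile(rb"https?://[\w.\-/]+")

def find_plaintext_endpoints(apk_bytes: bytes) -> list[str]:
    """Return URL-like strings found in an APK's res/ and assets/ entries."""
    hits = []
    with zipfile.ZipFile(io.BytesIO(apk_bytes)) as apk:
        for name in apk.namelist():
            if name.startswith(("res/", "assets/")):
                for match in URL_PATTERN.findall(apk.read(name)):
                    hits.append(match.decode())
    return hits

# Build a toy "APK" containing a plaintext config file.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("assets/config.json", '{"api": "https://api.example.com/v1"}')

print(find_plaintext_endpoints(buf.getvalue()))
```

Anything surfaced this way is visible to anyone who downloads the app, which is why shipping endpoints and configuration in plaintext resources widens the attack surface.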
“These apps collect and store some of the most sensitive personal data in mobile: therapy session transcripts, mood logs, medication schedules, self-harm indicators, and in some cases, information protected under HIPAA,” Oversecured said.
Artificial intelligence is increasingly being used to help developers identify security weaknesses in software, and a new tool from OpenAI reflects that shift.
The company has introduced Codex Security, an automated security assistant designed to examine software projects, detect vulnerabilities, confirm whether they can actually be exploited, and recommend ways to fix them.
The feature is currently being released as a research preview and can be accessed through the Codex interface by users subscribed to ChatGPT Pro, Enterprise, Business, and Edu plans. OpenAI said customers will be able to use the capability without cost during its first month of availability.
According to the company, the system studies how a codebase functions as a whole before attempting to locate security flaws. By building a detailed understanding of how the software operates, the tool aims to detect complicated vulnerabilities that may escape conventional automated scanners while filtering out minor or irrelevant issues that can overwhelm security teams.
The technology is an advancement of Aardvark, an internal project that entered private testing in October 2025 to help development and security teams locate and resolve weaknesses across large collections of source code.
During the last month of beta testing, Codex Security examined more than 1.2 million individual code commits across publicly accessible repositories. The analysis produced 792 critical vulnerabilities and 10,561 issues classified as high severity.
Several well-known open-source projects were affected, including OpenSSH, GnuTLS, GOGS, Thorium, libssh, PHP, and Chromium.
Some of the identified weaknesses were assigned official vulnerability identifiers. These included CVE-2026-24881 and CVE-2026-24882 linked to GnuPG, CVE-2025-32988 and CVE-2025-32989 affecting GnuTLS, and CVE-2025-64175 along with CVE-2026-25242 associated with GOGS. In the Thorium browser project, researchers also reported seven separate issues ranging from CVE-2025-35430 through CVE-2025-35436.
OpenAI explained that the system relies on advanced reasoning capabilities from its latest AI models together with automated verification techniques. This combination is intended to reduce the number of incorrect alerts while producing remediation guidance that developers can apply directly.
Repeated scans of the same repositories during testing also showed measurable improvements in accuracy. The company reported that the number of false alarms declined by more than 50 percent while the precision of vulnerability detection increased.
The platform operates through a multi-step process. It begins by examining a repository in order to understand the structure of the application and map areas where security risks are most likely to appear. From this analysis, the system produces an editable threat model describing the software’s behavior and potential attack surfaces.
Using that model as a reference point, the tool searches for weaknesses and evaluates how serious they could be in real-world scenarios. Suspected vulnerabilities are then executed in a sandbox environment to determine whether they can actually be exploited.
When configured with a project-specific runtime environment, the system can test potential vulnerabilities directly against a functioning version of the software. In some cases it can also generate proof-of-concept exploits, allowing security teams to confirm the problem before deploying a fix.
Once validation is complete, the tool suggests code changes designed to address the weakness while preserving the original behavior of the application. This approach is intended to reduce the risk that security patches introduce new software defects.
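The workflow described above can be summarized, very loosely, as a four-stage pipeline: model the threats, scan against that model, validate candidates in a sandbox, then propose fixes. Everything below is a stubbed-out sketch of that shape, with hypothetical names throughout; it is not OpenAI's implementation:

```python
# Hypothetical skeleton of the multi-step workflow described above.
# Every function body is a stub that only illustrates the pipeline shape.
from dataclasses import dataclass

@dataclass
class Finding:
    description: str
    severity: str
    validated: bool = False

def build_threat_model(repo: dict) -> list[str]:
    # Stub: real tooling would map the application's attack surface.
    return [f"untrusted input reaches {entry}" for entry in repo["entry_points"]]

def scan(repo: dict, threat_model: list[str]) -> list[Finding]:
    # Stub: flag one candidate finding per modeled attack surface.
    return [Finding(t, severity="high") for t in threat_model]

def validate_in_sandbox(finding: Finding) -> Finding:
    # Stub: a real system would execute a proof-of-concept exploit here.
    finding.validated = True
    return finding

def suggest_fix(finding: Finding) -> str:
    return f"Sanitize input before: {finding.description}"

repo = {"entry_points": ["parse_upload()"]}
findings = [validate_in_sandbox(f) for f in scan(repo, build_threat_model(repo))]
fixes = [suggest_fix(f) for f in findings if f.validated]
print(fixes)
```

The key design point is the validation stage: by requiring each suspected flaw to demonstrate exploitability before it is reported, the pipeline trades raw volume for confirmed, actionable findings.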
The launch of Codex Security follows the introduction of Claude Code Security by Anthropic, another system that analyzes software repositories to uncover vulnerabilities and propose remediation steps.
The emergence of these tools reflects a broader trend within cybersecurity: using artificial intelligence to review vast amounts of software code, detect vulnerabilities earlier in the development cycle, and assist developers in securing critical digital infrastructure.