New Copilot Setting May Access Activity From Other Microsoft Services. Here’s How Users Can Disable It

A recently noticed configuration inside Microsoft Copilot may allow the AI tool to reference activity from several other Microsoft platforms, prompting renewed discussion around data privacy and AI personalization. The option, which appears within Copilot’s settings, enables the assistant to use information connected to services such as Bing, MSN, and the Microsoft Edge browser. Users who are uncomfortable with this level of integration can switch the feature off.

Like many modern artificial intelligence systems, Copilot attempts to improve the usefulness of its responses by understanding more about the person interacting with it. The assistant normally does this by remembering past conversations and storing certain details that users intentionally share during chats. These stored elements help the AI maintain context across multiple interactions and generate responses that feel more tailored.

However, a specific configuration called “Microsoft usage data” expands that capability. According to reporting first highlighted by the technology outlet Windows Latest, this setting allows Copilot to reference information associated with other Microsoft services a user has interacted with. The option appears within the assistant’s Memory controls and is available through both the Copilot website and its mobile applications. Observers believe the setting was introduced recently as part of Microsoft’s effort to strengthen personalization features in its AI tools.

The Memory feature in Copilot is designed to help the assistant retain useful context. Through this system, the AI can recall earlier conversations, remember instructions or factual information shared by users, and potentially reference certain account-linked activity from other Microsoft products. The idea is that by understanding more about a user’s interests or previous discussions, the assistant can provide more relevant answers.

In practice, such capabilities can be helpful. For instance, a user who discussed a topic with Copilot previously may want to continue that conversation later without repeating the entire background. Similarly, individuals seeking guidance about personal or professional matters may receive more relevant suggestions if the assistant has some awareness of their preferences or circumstances.

Despite the convenience, the feature also raises questions about privacy. Some users may be concerned that allowing an AI assistant to accumulate information from multiple services could expose more personal data than expected. Others may want to know how that information is used beyond personalizing conversations.

Microsoft addresses these concerns in its official Copilot documentation. In its frequently asked questions section, the company states that user conversations are processed only for limited purposes described in its privacy policies. According to Microsoft, this information may be used to evaluate Copilot’s performance, troubleshoot operational issues, identify software bugs, prevent misuse of the service, and improve the overall quality of the product.

The company also says that conversations are not used to train AI models by default. Model training is controlled through a separate configuration, which users can choose to disable if they do not want their interactions contributing to AI development.

Microsoft further clarifies that Copilot’s personalization settings do not determine whether a user receives targeted advertisements. Advertising preferences are managed through a different option available in the Microsoft account privacy dashboard. Users who want to stop personalized advertising must adjust the Personalized ads and offers setting separately.

Even with these explanations, privacy concerns remain understandable, particularly because Microsoft documentation indicates that Copilot’s personalization features may already be activated automatically in some cases. In one hands-on check of the settings on a personal device, both options were already switched on. Users who prefer not to allow Copilot to access broader usage data may therefore wish to disable them.

Checking these settings is straightforward. Users can open Copilot through its website or mobile application and ensure they are signed in with their Microsoft account. On the web interface, selecting the account name at the bottom of the left-hand panel opens the Settings menu, where the Memory section can be accessed. In the mobile application, the same controls are available through the side navigation menu by tapping the account name and choosing Memory.

Inside the Memory settings, users will see a general control labeled “Personalization and memory.” Two additional options appear beneath it: “Facts you’ve shared,” which stores information provided directly during conversations, and “Microsoft usage data,” which allows Copilot to reference activity from other Microsoft services.

To limit this behavior, users can switch off the Microsoft usage data toggle. They may also disable the broader Personalization and memory option if they prefer that the AI assistant does not retain contextual information about their interactions. Copilot also provides a “Delete all memory” function that removes all stored data from the system. If individual personal details have been recorded, they can be reviewed and deleted through the editing option next to “Facts you’ve shared.”

Security and privacy experts generally advise caution when sharing information with AI assistants, even when personalization features remain enabled. Sensitive or confidential details should not be entered into conversations. Microsoft itself recommends avoiding the disclosure of certain types of highly personal data, including information related to health conditions or sexual orientation.

The broader development reflects a growing trend in the technology industry. As AI assistants become integrated across multiple platforms and services, companies are increasingly using cross-service data to make these tools more helpful and personalized. While this approach can improve convenience and usability, it also underscores the need for transparent privacy controls, so users remain aware of how their information is being used and can adjust those settings when necessary.

Mental Health Apps With Millions of Downloads Filled With Security Vulnerabilities



Several mental health mobile applications with millions of downloads on Google Play have security flaws that could leak users’ personal medical data.

Researchers found more than 85 medium- and high-severity vulnerabilities in one of the apps alone, flaws that could be abused to expose users’ therapy data and compromise their privacy.

A few of the products are AI companions built to help people dealing with anxiety, clinical depression, bipolar disorder, and stress.

Six of the ten studied applications state that user chats are private and securely encrypted on the vendor’s servers.

Oversecured CEO Sergey Toshin said that “Mental health data carries unique risks. On the dark web, therapy records sell for $1,000 or more per record, far more than credit card numbers.”

More than 1,500 security vulnerabilities reported

Experts scanned ten mobile applications promoted as tools that help with mental health issues, and found 1,575 security flaws: 938 low-severity, 538 medium-severity, and 54 rated high-severity. 

No critical issues were found, but a few of the flaws can be leveraged to steal login credentials, perform HTML injection, locate the user, or spoof notifications.

Experts used the Oversecured scanner to analyse the APK files of the mental health apps for known flaw patterns in different categories. 

One treatment app with over a million downloads calls Intent.parseUri() on an externally controlled string and launches the resulting intent without verifying the target component.

This makes it possible for an attacker to compel the application to launch any internal activity, even if it isn't meant for external access.
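
In code, the pattern looks roughly like the sketch below. This is an illustrative reconstruction, not code from the audited app: the activity name and the "redirect" extra are hypothetical, and the hardening shown follows Google's general guidance for intent redirection.

```kotlin
import android.app.Activity
import android.content.Intent
import android.os.Bundle
import java.net.URISyntaxException

class DeepLinkActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        // The "redirect" string arrives from outside the app (e.g. a crafted
        // link or another app), so it is attacker-controlled input.
        val raw = intent.getStringExtra("redirect") ?: return
        val forwarded = try {
            Intent.parseUri(raw, Intent.URI_INTENT_SCHEME)
        } catch (e: URISyntaxException) {
            return
        }

        // VULNERABLE: launching the parsed intent as-is lets the caller pick
        // any target component, including non-exported internal activities
        // that may handle session tokens.
        // startActivity(forwarded)

        // SAFER: strip any explicit component/selector so attacker input
        // cannot choose the destination, then let normal resolution apply.
        forwarded.addCategory(Intent.CATEGORY_BROWSABLE)
        forwarded.component = null
        forwarded.selector = null
        startActivity(forwarded)
    }
}
```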

Oversecured said, “Since these internal activities often handle authentication tokens and session data, exploitation could give an attacker access to a user’s therapy records.”

Another problem is locally stored data that is readable by every app on the device. Depending on what is saved, this can expose therapy details such as Cognitive Behavioural Therapy (CBT) exercises, session notes, and therapy entries. Experts also found plaintext configuration data and backend API endpoints inside the APK resources.
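
The storage half of the problem can be illustrated generically; the function and file names below are hypothetical, not taken from any audited app:

```kotlin
import android.content.Context
import java.io.File

fun saveTherapyNotes(context: Context, notes: String) {
    // EXPOSED: files written to shared external storage are readable by
    // other apps (on older Android versions, by anything holding
    // READ_EXTERNAL_STORAGE):
    // File(Environment.getExternalStorageDirectory(), "cbt_notes.txt")
    //     .writeText(notes)

    // SAFER: app-private internal storage, readable only by this app.
    File(context.filesDir, "cbt_notes.txt").writeText(notes)

    // Stronger still would be encrypting at rest (e.g. with Jetpack
    // Security's EncryptedFile) so backups or a rooted device do not
    // leak plaintext.
}
```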

 “These apps collect and store some of the most sensitive personal data in mobile: therapy session transcripts, mood logs, medication schedules, self-harm indicators, and in some cases, information protected under HIPAA,” Oversecured said.

AI-Powered Cybercrime Hits 600+ FortiGate Firewalls Across 55 Countries, AWS Warns


Cybercriminals using readily available generative AI tools managed to breach more than 600 internet-facing FortiGate firewalls across 55 countries within a little over a month, according to a recent incident analysis released by Amazon Web Services (AWS).

The operation, active between mid-January and mid-February, did not rely on sophisticated zero-day vulnerabilities. Instead, attackers automated large-scale attempts to access exposed systems by rapidly testing weak or reused credentials—essentially the digital equivalent of trying every unlocked door, but at high speed with the assistance of AI.

AWS investigators believe the operation was carried out by a financially motivated Russian-speaking group. The attackers scanned for publicly accessible FortiGate management interfaces, attempted to log in using commonly reused passwords, and once successful, extracted configuration files that provided detailed insight into the victims’ network environments.

According to AWS’s security team, the threat actors leveraged multiple commercially available AI tools to produce attack playbooks, scripts, and operational documentation. This allowed a relatively small or less technically advanced group to conduct a campaign that would typically require greater manpower and development effort. Analysts also discovered traces of AI-generated code and planning materials on compromised systems, indicating that AI tools were used extensively throughout the operation rather than just for occasional scripting tasks.

"The volume and variety of custom tooling would typically indicate a well-resourced development team," said CJ Moses, CISO at Amazon. "Instead, a single actor or very small group generated this entire toolkit through AI-assisted development."

After gaining access to the firewalls, the attackers retrieved configuration data containing administrator and VPN credentials, network architecture information, and firewall policies. Armed with these details, they attempted deeper intrusions by targeting directory services such as Active Directory, harvesting credentials, and exploring options for lateral movement across compromised networks. Backup infrastructure, including servers running Veeam, was also targeted during the intrusions.

AWS researchers noted that although the tools used in the campaign were functional, they appeared somewhat crude. The scripts showed basic parsing methods and repetitive comments often associated with machine-generated drafts. Despite their imperfections, the tools proved effective enough for large-scale automated attacks. When systems proved difficult to compromise, the attackers often abandoned them and shifted focus to easier targets, suggesting that their strategy prioritized volume over precision.

The affected organizations were spread across several regions, including Europe, Asia, Africa, and Latin America. The activity did not appear to focus on a single sector or country, indicating opportunistic targeting. However, investigators observed clusters of incidents suggesting that some breaches may have provided access to managed service providers or shared infrastructure, potentially increasing the scale of downstream exposure.

AWS emphasized that many of the compromises could have been avoided with standard cybersecurity practices. Preventing management interfaces from being publicly accessible, implementing multi-factor authentication, and avoiding password reuse would have significantly reduced the attackers’ chances of success.
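
On the detection side, the high-volume, low-precision pattern AWS describes is exactly what a simple failed-login counter can surface. The sketch below is a generic illustration, not tooling from the AWS report; the log format and file name are hypothetical.

```kotlin
import java.io.File

// Hypothetical log format: "<timestamp> <source-ip> <LOGIN_OK|LOGIN_FAIL>"
data class LoginEvent(val sourceIp: String, val success: Boolean)

fun parseLine(line: String): LoginEvent? {
    val parts = line.split(" ")
    if (parts.size < 3) return null
    return LoginEvent(sourceIp = parts[1], success = parts[2] == "LOGIN_OK")
}

fun main() {
    val threshold = 20  // alert once an IP passes this many failures
    val failuresByIp = mutableMapOf<String, Int>()

    File("auth.log").useLines { lines ->
        lines.mapNotNull(::parseLine)
            .filter { !it.success }
            .forEach { failuresByIp.merge(it.sourceIp, 1, Int::plus) }
    }

    failuresByIp.filterValues { it >= threshold }
        .forEach { (ip, count) ->
            println("ALERT: $ip produced $count failed logins (possible credential testing)")
        }
}
```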

The report comes shortly after Google cautioned that cybercriminal groups are increasingly integrating generative AI technologies—including tools such as Gemini AI—into their operations. These technologies are being used for tasks such as reconnaissance, target profiling, phishing campaign creation, and malware development.


Researchers Find Critical Zero-Day Vulnerabilities in Foxit and Apryse PDF Platforms


PDF files are often seen as simple digital documents, but recent research shows they have evolved into complex software environments that can expose corporate systems to cyber risks. Modern PDF tools now function more like application platforms than basic viewers, potentially giving attackers pathways into private networks. 

A study by Novee Security examined two widely used platforms, Foxit and Apryse. Released on February 18, 2026, the report identified 13 categories of vulnerabilities and 16 potential attack paths that could allow systems to be compromised. 

Researchers say these issues are more than minor bugs. Some zero-day flaws could allow attackers to run commands on backend servers or take over user accounts without needing to compromise a browser or operating system. To find the vulnerabilities, analysts first identified common patterns that signal security weaknesses. These patterns were then used to train an AI system that scanned large volumes of code much faster than manual review alone. 

By combining human insight with automated analysis, the system detected several high-impact issues that conventional scanning tools might miss. One major flaw appeared in Foxit’s digital signature server, which verifies electronically signed documents. Some of the most serious findings involve one-click exploits where simply opening a document or loading a link can trigger malicious activity. Vulnerabilities CVE-2025-70402 and CVE-2025-70400 affect Apryse WebViewer by allowing the software to trust remote configuration files without proper validation, enabling attackers to run malicious scripts. 

Another flaw, CVE-2025-70401, showed that malicious code could be hidden in the “Author” field of a PDF comment and executed when a user interacts with it. Researchers also identified CVE-2025-66500, which affects Foxit browser plugins. In this case, manipulated messages could trick the plugin into running harmful scripts within the application. Testing further showed that certain weaknesses could allow attackers to send a simple request that triggers command execution on a server, granting unauthorized access to parts of the system. 

These vulnerabilities highlight how small interactions or overlooked behaviors can lead to significant security risks. Experts say the core problem lies in how modern PDF platforms are built. Many now rely on web technologies such as iframes and server-side processing, yet organizations still treat PDF files as harmless static documents. This mismatch can create “trust boundary” failures where software accepts external data without sufficient validation. 
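
As a generic illustration of that trust-boundary principle (this is not vendor code; the host name is hypothetical), the fix pattern is to validate any document-supplied configuration source against an explicit allowlist before loading it:

```kotlin
import java.net.URI

// Hosts this deployment explicitly trusts for remote configuration.
val allowedConfigHosts = setOf("config.example-corp.com")

// Returns a vetted URL, or null if the document-supplied value fails the
// trust-boundary check, so attacker-controlled input never picks the host.
fun vetConfigUrl(rawUrl: String): String? {
    val uri = try { URI(rawUrl) } catch (e: Exception) { return null }
    if (uri.scheme != "https") return null            // no http/file/javascript
    if (uri.host !in allowedConfigHosts) return null  // allowlist, not denylist
    return uri.toString()
}
```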

Both vendors were notified before the research was published, and the vulnerabilities were assigned official CVE identifiers to support patching efforts. The findings highlight how document-processing systems—often overlooked in security planning—can become complex attack surfaces if not properly secured.

ECB Tightens Oversight of Banks’ Growing AI Sector Risks


The European Central Bank is intensifying its oversight of how eurozone lenders finance the fast‑growing artificial intelligence ecosystem, reflecting concern that the boom in data‑centre and AI‑related infrastructure could hide pockets of credit and concentration risk.

In recent weeks, the ECB has sent targeted requests to a select group of major European banks, asking for granular data on their loans and other exposures to AI‑linked activities such as data‑centre construction, vendor financing and large project‑finance structures. Supervisors want to map where credit is clustering around a small set of hyperscalers, cloud providers and specialized hardware suppliers, amid global estimates of trillions of dollars in planned AI‑related capital spending. Officials stress this is a diagnostic exercise rather than an immediate step toward higher capital charges, but it marks a shift from general discussion to hands‑on information gathering.

The push comes as European banks race to harness AI inside their own operations, from credit scoring and fraud detection to automating back‑office tasks and enhancing customer service. Supervisors acknowledge that these technologies promise sizeable efficiency gains and new revenue opportunities, yet warn that many institutions still lack mature governance for AI models, including robust data‑quality controls, explainability, and clear accountability for automated decisions. The ECB has repeatedly argued that AI adoption must be matched by stronger risk‑management frameworks and continuous human oversight over model life cycles.

Regulators are also increasingly uneasy about systemic dependencies created by the dominance of a handful of mostly non‑EU AI and cloud providers. Heavy reliance on these external platforms raises concerns about operational resilience, data protection, and geopolitical risk that could spill over into financial stability if disruptions occur. At the same time, the ECB’s broader financial‑stability assessments have highlighted stretched valuations in some AI‑linked equities, warning that a sharp correction could transmit stress into bank balance sheets through both direct exposures and wider market channels. 

For now, supervisors frame their AI‑sector review as part of a wider effort to “encourage innovation while managing risks,” aligning prudential expectations with Europe’s new AI Act and digital‑operational‑resilience rules. Banks are being nudged to tighten contract terms, strengthen model‑validation teams and improve documentation before scaling AI‑driven business lines. The message from Frankfurt is that AI remains welcome as a driver of competitiveness in European finance—but only if lenders can demonstrate they understand, measure and contain the new concentrations of credit, market and operational risk that accompany the technology’s rapid rise.

Optimizely Reports Data Breach Linked to Sophisticated Vishing Incident



In addition to serving as a crossroads of technology, marketing intelligence, and vast networks of corporate data, digital advertising platforms are becoming increasingly attractive targets for cybercriminals seeking an entry point into enterprise infrastructure.

Optimizely recently revealed a security incident that began not with sophisticated malware but with a carefully orchestrated social engineering scheme. In early February 2026, attackers linked to the threat group ShinyHunters used a voice-phishing tactic to deceive a company employee and gain unauthorized access to parts of the company’s internal environment.

Investigators determined that the attackers were able to extract limited business contact information from internal resources even though the intrusion was contained before it could reach sensitive customer databases or critical operational systems. 

The episode shows that even mature technology companies remain vulnerable to manipulation-based attacks that bypass technical defenses and target the human layer of security.

Optimizely, a leading provider of digital experience infrastructure, develops tools that assist organizations in managing web properties, conducting marketing experiments, and refining online customer journeys based on data. 

Among its many capabilities are A/B experimentation frameworks, enterprise-grade content management systems, and integrated ecommerce tools that are designed to assist businesses in improving conversion performance and audience engagement across a variety of digital channels. 

Over 10,000 organizations worldwide use the company’s technology stack, including H&M, PayPal, Toyota, Nike, and Salesforce. A number of customers have recently received notifications detailing this incident. According to the company, the attackers gained access on February 11 through what it described as a “sophisticated voice-phishing attack.”

The internal investigation indicates that although the threat actors were able to penetrate a limited segment of the corporate environment, the intrusion did not result in privilege escalation and no malicious payloads or malware were deployed within the network during the intrusion. 

The breach therefore remained narrow in scope, supporting the company’s assessment that the attackers’ access was limited and never reached sensitive customer or operational data. Researchers have identified the intrusion as the work of ShinyHunters, a financially motivated threat actor collective involved in cybercrime since at least 2020.

It is well known for orchestrating high-visibility data theft operations and subsequently distributing or monetizing compromised databases through dark web forums and underground marketplaces. A great deal of its campaign effort has been directed toward technology and telecommunications organizations, areas where internal access to corporate databases and partner information can prove to be very useful. 

According to analysts, the group has demonstrated a high degree of flexibility in its intrusion techniques, combining credential-based attacks, such as credential stuffing, with increasingly persuasive social engineering, such as voice-based deception schemes.

Although the actors’ precise geographical origins remain unknown, their operational footprint spans multiple regions, reflecting a focus on monetizing corporate information or using stolen data to exert reputational and financial pressure on targeted organizations. In the present case, the exposure appears limited to basic business contact information rather than sensitive customer data.

Cybersecurity specialists caution, however, that even seemingly routine information can provide a foothold for follow-on attacks. By using contact directories, email addresses, and professional identifiers, attackers may be able to craft convincing phishing emails or conduct additional social engineering attempts in order to gather credentials or financial information. 

In addition to facilitating spam operations, this type of data can enable fraudulent outreach that impersonates trusted partners or internal employees. As a precaution, security experts recommend that employees and partners stay alert to unexpected communications, independently verify the legitimacy of telephone calls or email requests, and maintain multi-factor authentication on all corporate accounts.

Proactive security hygiene and open communication with affected stakeholders are widely regarded as essential for minimizing the operational and reputational impact of incidents of this nature.

Optimizely did not disclose the exact number of customers whose information may have been exposed; however, it indicated in its breach notification that the activity closely resembles that of a loosely connected network of attackers known for persistent social engineering campaigns.

According to the firm, communications received during the incident reflected patterns commonly associated with groups that use voice phishing to manipulate employees into providing access to corporate systems.

That operational style is commonly attributed to ShinyHunters, which has been linked to a series of recent breaches affecting major online platforms and consumer brands, including Canada Goose, Panera Bread, Betterment, SoundCloud, Pornhub, Figure, and Match Group, which operates Tinder, Hinge, Meetic, Match.com, and OkCupid.

Not every incident has been tied to a single coordinated campaign, but numerous victims have reported successful intrusions stemming from voice-phishing operations designed to compromise enterprise single sign-on environments.

Attackers have reportedly impersonated internal IT support staff, contacted employees directly, and directed them to counterfeit authentication portals that mimic legitimate corporate logins. Through these interactions, the attackers obtained account credentials and one-time multi-factor authentication codes, bypassing standard access controls. The techniques continue to evolve: threat actors have also used device-code phishing, exploiting the legitimate OAuth device authorization flow to obtain authentication tokens tied to enterprise identity services.
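
To make the device-code technique concrete, the sketch below walks the first step of the OAuth 2.0 device authorization grant (RFC 8628) that it abuses. The endpoint is Microsoft’s documented device-code endpoint; the client_id is a placeholder, and the token-polling step is omitted.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    val client = HttpClient.newHttpClient()

    // Step 1: anyone, including an attacker, can start the flow and obtain
    // a user_code plus the genuine Microsoft verification URL.
    val form = "client_id=YOUR_CLIENT_ID&scope=openid%20profile"
    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://login.microsoftonline.com/common/oauth2/v2.0/devicecode"))
        .header("Content-Type", "application/x-www-form-urlencoded")
        .POST(HttpRequest.BodyPublishers.ofString(form))
        .build()
    val response = client.send(request, HttpResponse.BodyHandlers.ofString())

    // The JSON body contains device_code, user_code, and verification_uri.
    // The phish: the victim is talked into visiting the real verification
    // page and typing the attacker's user_code; once they authenticate, the
    // party that started the flow can poll the token endpoint and receive
    // tokens for the victim's account. That is why defenders disable the
    // device-code flow where it is not needed and gate it with conditional
    // access policies.
    println(response.body())
}
```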

Once a single sign-on account has been compromised, attackers can pivot among integrated corporate applications and cloud-based platforms. The same access may extend to enterprise tools such as Microsoft Entra ID, Microsoft 365, Google Workspace, Salesforce, Zendesk, Dropbox, SAP, Slack, Adobe, and Atlassian, enabling an intruder to move laterally across connected services and collect additional corporate information from an initial foothold.

Ultimately, this incident serves as a reminder that technical safeguards alone are rarely sufficient against determined social engineering campaigns. Attackers routinely exploit human trust and everyday operational processes to breach even organizations with mature security architectures.

Security professionals advise that identity-verification procedures be strengthened for internal support interactions, that voice-based fraud be discussed regularly with employees, and that strong monitoring be implemented around single sign-on activity and unusual authentication requests.

Measures such as conditional access policies, strictly enforced multi-factor authentication, and rapid incident-response protocols can greatly reduce an attacker’s room to operate after an initial attempt succeeds.

The development of voice-driven deception tactics is continuing to prompt companies across the technology sector to prioritize social engineering resilience as a core component of enterprise cybersecurity strategy, rather than as a peripheral issue.
