
Cyberattacks Shift Tactics as Hackers Exploit User Behavior and AI, Experts Warn

 

Cybersecurity threats are evolving rapidly, forcing businesses to rethink how they approach digital security. Experts say modern cyberattacks are no longer focused solely on breaking technical defenses but are increasingly designed to exploit everyday user behavior. 
 
According to industry observers, files downloaded by employees have become a common entry point for cybercriminals. Items such as invoices, installers, documents, and productivity tools are often downloaded without careful verification, creating opportunities for attackers. 

“The Downloads folder has quietly become one of the hottest pieces of real estate for cybercriminals,” said Sanket Atal, senior vice president of engineering and country head at OpenText India. 

“Attackers are not trying to break cryptography anymore. They’re hijacking habits.” Research cited by the company indicates that more than one third of consumer malware infections are first detected in the Downloads directory. 

Security specialists say this reflects a broader shift in how cyberattacks are designed, with attackers relying more on social engineering and multi-stage malware. Atal said malicious files frequently appear harmless when first opened. “These files often look completely harmless at first,” he said. 

“They only later pull in ransomware components or credential-stealing payloads. It is a multi-stage approach that is very difficult to catch with signature-based tools.” Experts say the rise in such attacks is also linked to the growing industrialization of cybercrime. 

Modern ransomware groups and information-stealing operations increasingly operate like structured businesses that continuously test and refine their methods. “Ransomware-as-a-service groups and info-stealer operators are constantly refining their lures,” Atal said. 

“They are comfortable using SEO-poisoned websites, fake update prompts, and even ‘productivity tools’ to get users to download something that looks normal.” India’s rapidly expanding digital ecosystem has made it an attractive target for attackers. 

The combination of millions of new internet users, the widespread use of personal devices for work, and the overlap between personal and professional computing environments increases exposure to risk. 

“When a poisoned file lands in a Downloads folder on a personal device, it can easily become an entry point into enterprise systems,” Atal said. “Especially when that same device is used for banking, office work, and email.” Artificial intelligence is further changing the threat landscape. 

Generative AI tools can now produce convincing phishing messages that mimic corporate communication styles and reference real projects. “AI has removed the traditional visual cues people relied on to spot scams,” Atal said. 

“Generative models now write in perfect business language, reuse an organisation’s tone, and reference real projects scraped from public sources.” Security analysts say deepfake technology is also being used to manipulate business processes. 

Synthetic video calls and cloned voices have been used to approve financial transactions in some cases. Another emerging pattern is the rise of malware-free intrusions, where attackers rely on stolen credentials or legitimate remote access tools instead of traditional malicious software. 

“We’re also seeing a rise in malware-free intrusions,” Atal said. “Attackers use stolen credentials and legitimate remote access tools. Nothing matches a known signature, yet the breach is very real.” Experts say these developments are forcing organizations to shift their security strategies. 

Instead of focusing solely on scanning files and attachments, security teams are increasingly monitoring behavior patterns across users, devices, and systems. “The first shift is moving from content to behaviour,” Atal said. 

“Instead of just scanning attachments, organisations need to focus on whether a user or service account is behaving consistently with historical and peer norms.” Security specialists also emphasize the importance of integrating identity verification with threat detection systems. 
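One concrete way to operationalize "behaving consistently with historical and peer norms" is to score current activity against a per-user baseline. The following Python sketch is purely illustrative; the event counts, thresholds, and function name are hypothetical and not drawn from any vendor's product:

```python
from statistics import mean, stdev

def anomaly_score(history, observed):
    """Z-score of today's event count against a user's historical baseline.

    `history` is a list of past daily event counts (e.g. logins or file
    downloads) for one user; `observed` is today's count.
    """
    if len(history) < 2:
        return 0.0  # not enough data to judge
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return float("inf") if observed != mu else 0.0
    return (observed - mu) / sigma

# A user who normally logs in 4-6 times a day suddenly logs in 40 times.
baseline = [5, 4, 6, 5, 5, 4, 6]
print(anomaly_score(baseline, 40))  # large positive score -> flag for review
```

In practice such scores would feed a risk engine alongside identity and data-sensitivity signals rather than trigger blocks on their own.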

When phishing messages become difficult to distinguish from legitimate communication, identity context becomes a key factor in identifying suspicious activity. In addition, companies are beginning to rely on artificial intelligence for defensive purposes. 

Automated systems can help security teams manage the growing volume of alerts by identifying patterns and highlighting potential threats more quickly. “Security teams are overwhelmed by alerts,” Atal said. 

“AI-based triage is essential to reduce noise, correlate weak signals, and generate plain-language narratives so analysts can act faster.” Despite increased awareness of cybersecurity threats, several misconceptions persist. 

Many organizations assume that the most serious cyberattacks originate from sophisticated state-backed actors. “One big myth is that serious attacks only come from exotic nation-state actors,” Atal said. “The truth is, most breaches begin with everyday issues such as phishing, malicious downloads, weak passwords, or cloud misconfigurations.” 

Another misconception is that smaller organizations are less likely to be targeted. However, experts say attackers often focus on industries with weaker security controls, including healthcare providers, hospitality companies, and smaller financial institutions. 

Cybersecurity specialists also warn that many attacks no longer rely on traditional malware. Techniques such as identity-based attacks, business email compromise, and misuse of legitimate administrative tools often bypass standard antivirus defenses. “Identity-based attacks, business email compromise, and abuse of legitimate tools often never trigger traditional antivirus,” Atal said. 

“The starting point can be any user, device, or partner that has access to data.” Industry leaders say the challenge is compounded by the fact that many cybersecurity systems were designed for a different technological environment. 

Vinayak Godse, chief executive of the Data Security Council of India, said existing security frameworks were built before the widespread adoption of digital services and artificial intelligence. 

“In the digitalisation space, we are creating tremendous experiences, productivity gains, and new possibilities,” Godse said. “But the security frameworks we have in place were designed for an older paradigm.” He added that attackers today are capable of identifying and exploiting even a single vulnerability in complex digital systems. 

“The current attack ecosystem can identify and exploit even one vulnerability out of millions, or even billions,” Godse said. Experts say the erosion of traditional network boundaries has further complicated security efforts. Remote work, cloud computing, software-as-a-service platforms, and third-party integrations mean that sensitive systems can now be accessed from a wide range of devices and locations. 

“A user on a personal phone, accessing a SaaS application from home Wi-Fi, is still inside your risk perimeter,” Atal said. As a result, organizations are increasingly focusing on continuous verification and context-aware monitoring rather than relying solely on perimeter defenses. 

According to Atal, the effectiveness of AI-driven security tools ultimately depends on the quality of underlying data. If data sources are fragmented or poorly labeled, even advanced analytics systems may struggle to detect threats. 
 
“Every advanced AI-driven security use case boils down to whether you can see your data and whether you can trust it,” he said. Security experts say that integrating identity signals, access patterns, and data sensitivity into unified monitoring systems can help organizations identify suspicious activity more effectively. 

“When data, identity, and threat signals are unified, security teams can see a connected narrative,” Atal said. “A login, a download, and a data access event stop being isolated alerts and start telling a story.” 

 
Despite advances in technology, experts say human behavior remains a critical factor in cybersecurity. 

“In today’s cyber landscape, the front line is no longer the firewall,” Atal said. “It is the file you choose to open and the behaviour that follows.”

New Copilot Setting May Access Activity From Other Microsoft Services. Here’s How Users Can Disable It

 



A recently noticed configuration inside Microsoft Copilot may allow the AI tool to reference activity from several other Microsoft platforms, prompting renewed discussion around data privacy and AI personalization. The option, which appears within Copilot’s settings, enables the assistant to use information connected to services such as Bing, MSN, and the Microsoft Edge browser. Users who are uncomfortable with this level of integration can switch the feature off.

Like many modern artificial intelligence systems, Copilot attempts to improve the usefulness of its responses by understanding more about the person interacting with it. The assistant normally does this by remembering past conversations and storing certain details that users intentionally share during chats. These stored elements help the AI maintain context across multiple interactions and generate responses that feel more tailored.

However, a specific configuration called “Microsoft usage data” expands that capability. According to reporting first highlighted by the technology outlet Windows Latest, this setting allows Copilot to reference information associated with other Microsoft services a user has interacted with. The option appears within the assistant’s Memory controls and is available through both the Copilot website and its mobile applications. Observers believe the setting was introduced recently as part of Microsoft’s effort to strengthen personalization features in its AI tools.

The Memory feature in Copilot is designed to help the assistant retain useful context. Through this system, the AI can recall earlier conversations, remember instructions or factual information shared by users, and potentially reference certain account-linked activity from other Microsoft products. The idea is that by understanding more about a user’s interests or previous discussions, the assistant can provide more relevant answers.

In practice, such capabilities can be helpful. For instance, a user who discussed a topic with Copilot previously may want to continue that conversation later without repeating the entire background. Similarly, individuals seeking guidance about personal or professional matters may receive more relevant suggestions if the assistant has some awareness of their preferences or circumstances.

Despite the convenience, the feature also raises questions about privacy. Some users may be concerned that allowing an AI assistant to accumulate information from multiple services could expose more personal data than expected. Others may want to know how that information is used beyond personalizing conversations.

Microsoft addresses these concerns in its official Copilot documentation. In its frequently asked questions section, the company states that user conversations are processed only for limited purposes described in its privacy policies. According to Microsoft, this information may be used to evaluate Copilot’s performance, troubleshoot operational issues, identify software bugs, prevent misuse of the service, and improve the overall quality of the product.

The company also says that conversations are not used to train AI models by default. Model training is controlled through a separate configuration, which users can choose to disable if they do not want their interactions contributing to AI development.

Microsoft further clarifies that Copilot’s personalization settings do not determine whether a user receives targeted advertisements. Advertising preferences are managed through a different option available in the Microsoft account privacy dashboard. Users who want to stop personalized advertising must adjust the Personalized ads and offers setting separately.

Even with these explanations, privacy concerns remain understandable, particularly because Microsoft documentation indicates that Copilot’s personalization features may already be activated automatically in some cases. In a check of the settings on a personal device, these options were found to be switched on. Users who prefer not to allow Copilot to access broader usage data may therefore wish to disable them.

Checking these settings is straightforward. Users can open Copilot through its website or mobile application and ensure they are signed in with their Microsoft account. On the web interface, selecting the account name at the bottom of the left-hand panel opens the Settings menu, where the Memory section can be accessed. In the mobile application, the same controls are available through the side navigation menu by tapping the account name and choosing Memory.

Inside the Memory settings, users will see a general control labeled “Personalization and memory.” Two additional options appear beneath it: “Facts you’ve shared,” which stores information provided directly during conversations, and “Microsoft usage data,” which allows Copilot to reference activity from other Microsoft services.

To limit this behavior, users can switch off the Microsoft usage data toggle. They may also disable the broader Personalization and memory option if they prefer that the AI assistant does not retain contextual information about their interactions. Copilot also provides a “Delete all memory” function that removes all stored data from the system. If individual personal details have been recorded, they can be reviewed and deleted through the editing option next to “Facts you’ve shared.”

Security and privacy experts generally advise caution when sharing information with AI assistants, even when personalization features remain enabled. Sensitive or confidential details should not be entered into conversations. Microsoft itself recommends avoiding the disclosure of certain types of highly personal data, including information related to health conditions or sexual orientation.

The broader development reflects a growing trend in the technology industry. As AI assistants become integrated across multiple platforms and services, companies are increasingly using cross-service data to make these tools more helpful and personalized. While this approach can improve convenience and usability, it also underscores the need for transparent privacy controls, so that users remain aware of how their information is being used and can adjust those settings when necessary.

Mental Health Apps With Millions of Downloads Filled With Security Vulnerabilities


Mental health apps may have flaws

Several mental health mobile applications with millions of downloads on Google Play have security flaws that could leak users’ personal medical data.

Researchers found over 85 medium- and high-severity vulnerabilities in one of the apps alone, flaws that could be abused to access users' therapy data and compromise their privacy. 

Some of the products are AI companions built to help people dealing with anxiety, clinical depression, bipolar disorder, and stress. 

Six of the ten studied applications claimed that user chats are private and securely encrypted on the vendor's servers. 

Oversecured CEO Sergey Toshin said, “Mental health data carries unique risks. On the dark web, therapy records sell for $1,000 or more per record, far more than credit card numbers.”

More than 1,500 security vulnerabilities reported 

Experts scanned ten mobile applications promoted as tools that help with mental health issues, and found 1,575 security flaws: 938 low-severity, 538 medium-severity, and 54 rated high-severity. 

No critical issues were found, but some of the flaws can be leveraged to steal login credentials, perform HTML injection, locate the user, or spoof notifications. 

Experts used the Oversecured scanner to analyse the APK files of the mental health apps for known flaw patterns in different categories. 

One treatment app with over a million downloads calls Intent.parseUri() on an externally controlled string and then launches the resulting messaging object (intent) without verifying the target component. 

This makes it possible for an attacker to compel the application to launch any internal activity, even if it isn't meant for external access.

Oversecured said, “Since these internal activities often handle authentication tokens and session data, exploitation could give an attacker access to a user’s therapy records.”
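The risky pattern amounts to dispatching on an attacker-controlled URI without checking where it points. As a language-neutral stand-in for the Android-side fix, here is a minimal allow-list check in Python; the scheme names and the `component=` marker mirror the Android `intent:` URI format, and the function name is hypothetical:

```python
from urllib.parse import urlparse

# Conservative allow-list: only plain web URLs may be dispatched onward.
SAFE_SCHEMES = {"http", "https"}

def is_safe_external_uri(uri: str) -> bool:
    """Return True only for URIs that are safe to open.

    Rejects intent-style URIs that pin an explicit target component,
    which is how an attacker steers the app into launching internal,
    non-exported activities.
    """
    parsed = urlparse(uri)
    if parsed.scheme not in SAFE_SCHEMES:
        return False  # drops intent:, file:, javascript:, etc.
    if "component=" in uri:
        return False  # explicit internal target -> reject
    return True

print(is_safe_external_uri("https://example.com/doc"))          # True
print(is_safe_external_uri(
    "intent://x#Intent;component=com.app/.TokenActivity;end"))  # False
```

On Android itself, the hardening would also involve validating the parsed intent (for example calling `setPackage()` or rejecting any intent carrying an explicit component) before passing it to `startActivity()`.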

Another problem is storing data locally in a way that gives read access to all apps on the device. Depending on what is saved, this can expose therapy details such as Cognitive Behavioural Therapy (CBT) records, session notes, and therapy entries. Experts also found plaintext configuration data and backend API endpoints inside the APK resources. 
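The POSIX analogue of the storage fix is to create files with owner-only permissions instead of world-readable ones; an Android app would reach for `Context.MODE_PRIVATE` internal storage or the Jetpack `EncryptedFile` API instead. A hedged Python sketch (the file name and contents are made up):

```python
import os
import stat
import tempfile

def write_private(path: str, data: bytes) -> None:
    """Write data so that only the owning user (on Android, the owning
    app's UID) can read it, instead of leaving it world-readable."""
    # Create/truncate the file with a restrictive 0o600 mode up front,
    # rather than chmod-ing after the fact.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "wb") as f:
        f.write(data)

path = os.path.join(tempfile.mkdtemp(), "session_notes.db")
write_private(path, b"therapy session notes")
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # no group/other read bits set
```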

 “These apps collect and store some of the most sensitive personal data in mobile: therapy session transcripts, mood logs, medication schedules, self-harm indicators, and in some cases, information protected under HIPAA,” Oversecured said.

AI-Powered Cybercrime Hits 600+ FortiGate Firewalls Across 55 Countries, AWS Warns

 

Cybercriminals using readily available generative AI tools managed to breach more than 600 internet-facing FortiGate firewalls across 55 countries within a little over a month, according to a recent incident analysis released by Amazon Web Services (AWS).

The operation, active between mid-January and mid-February, did not rely on sophisticated zero-day vulnerabilities. Instead, attackers automated large-scale attempts to access exposed systems by rapidly testing weak or reused credentials—essentially the digital equivalent of trying every unlocked door, but at high speed with the assistance of AI.
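Defensively, this "every unlocked door, at high speed" pattern is what sliding-window failed-login counters are designed to surface. A toy Python sketch with illustrative thresholds (10 failures in 60 seconds is an assumption, not a Fortinet or AWS recommendation):

```python
from collections import defaultdict, deque

class BruteForceDetector:
    """Flag source IPs with too many failed logins in a sliding window."""

    def __init__(self, max_failures=10, window_seconds=60):
        self.max_failures = max_failures
        self.window = window_seconds
        self.failures = defaultdict(deque)  # ip -> timestamps of failures

    def record_failure(self, ip, timestamp):
        q = self.failures[ip]
        q.append(timestamp)
        # Drop failures that have fallen out of the window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) >= self.max_failures  # True -> block/alert on this IP

det = BruteForceDetector()
for t in range(12):
    flagged = det.record_failure("203.0.113.7", t)
print(flagged)  # True: 12 failures within 60 seconds
```

The same counters would not fire for a patient attacker pacing attempts minutes apart, which is one reason the report pairs detection with prevention: MFA and non-public management interfaces stop the attempts from mattering at all.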

AWS investigators believe the operation was carried out by a financially motivated Russian-speaking group. The attackers scanned for publicly accessible FortiGate management interfaces, attempted to log in using commonly reused passwords, and once successful, extracted configuration files that provided detailed insight into the victims’ network environments.

According to AWS’s security team, the threat actors leveraged multiple commercially available AI tools to produce attack playbooks, scripts, and operational documentation. This allowed a relatively small or less technically advanced group to conduct a campaign that would typically require greater manpower and development effort. Analysts also discovered traces of AI-generated code and planning materials on compromised systems, indicating that AI tools were used extensively throughout the operation rather than just for occasional scripting tasks.

"The volume and variety of custom tooling would typically indicate a well-resourced development team," said CJ Moses, CISO at Amazon. "Instead, a single actor or very small group generated this entire toolkit through AI-assisted development."

After gaining access to the firewalls, the attackers retrieved configuration data containing administrator and VPN credentials, network architecture information, and firewall policies. Armed with these details, they attempted deeper intrusions by targeting directory services such as Active Directory, harvesting credentials, and exploring options for lateral movement across compromised networks. Backup infrastructure, including servers running Veeam, was also targeted during the intrusions.

AWS researchers noted that although the tools used in the campaign were functional, they appeared somewhat crude. The scripts showed basic parsing methods and repetitive comments often associated with machine-generated drafts. Despite their imperfections, the tools proved effective enough for large-scale automated attacks. When systems proved difficult to compromise, the attackers often abandoned them and shifted focus to easier targets, suggesting that their strategy prioritized volume over precision.

The affected organizations were spread across several regions, including Europe, Asia, Africa, and Latin America. The activity did not appear to focus on a single sector or country, indicating opportunistic targeting. However, investigators observed clusters of incidents suggesting that some breaches may have provided access to managed service providers or shared infrastructure, potentially increasing the scale of downstream exposure.

AWS emphasized that many of the compromises could have been avoided with standard cybersecurity practices. Preventing management interfaces from being publicly accessible, implementing multi-factor authentication, and avoiding password reuse would have significantly reduced the attackers’ chances of success.

The report comes shortly after Google cautioned that cybercriminal groups are increasingly integrating generative AI technologies—including tools such as Gemini AI—into their operations. These technologies are being used for tasks such as reconnaissance, target profiling, phishing campaign creation, and malware development.


Researchers Find Critical Zero-Day Vulnerabilities in Foxit and Apryse PDF Platforms

 

PDF files are often seen as simple digital documents, but recent research shows they have evolved into complex software environments that can expose corporate systems to cyber risks. Modern PDF tools now function more like application platforms than basic viewers, potentially giving attackers pathways into private networks. 

A study by Novee Security examined two widely used platforms, Foxit and Apryse. Released on February 18, 2026, the report identified 13 categories of vulnerabilities and 16 potential attack paths that could allow systems to be compromised. 

Researchers say these issues are more than minor bugs. Some zero-day flaws could allow attackers to run commands on backend servers or take over user accounts without needing to compromise a browser or operating system. To find the vulnerabilities, analysts first identified common patterns that signal security weaknesses. These patterns were then used to train an AI system that scanned large volumes of code much faster than manual review alone. 

By combining human insight with automated analysis, the system detected several high-impact issues that conventional scanning tools might miss. One major flaw appeared in Foxit’s digital signature server, which verifies electronically signed documents. Some of the most serious findings involve one-click exploits where simply opening a document or loading a link can trigger malicious activity. Vulnerabilities CVE-2025-70402 and CVE-2025-70400 affect Apryse WebViewer by allowing the software to trust remote configuration files without proper validation, enabling attackers to run malicious scripts. 

Another flaw, CVE-2025-70401, showed that malicious code could be hidden in the “Author” field of a PDF comment and executed when a user interacts with it. Researchers also identified CVE-2025-66500, which affects Foxit browser plugins. In this case, manipulated messages could trick the plugin into running harmful scripts within the application. Testing further showed that certain weaknesses could allow attackers to send a simple request that triggers command execution on a server, granting unauthorized access to parts of the system. 

These vulnerabilities highlight how small interactions or overlooked behaviors can lead to significant security risks. Experts say the core problem lies in how modern PDF platforms are built. Many now rely on web technologies such as iframes and server-side processing, yet organizations still treat PDF files as harmless static documents. This mismatch can create “trust boundary” failures where software accepts external data without sufficient validation. 
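The Author-field flaw is a classic injection across that trust boundary: untrusted document metadata flows into the viewer's HTML without escaping. A minimal sketch of the defensive side in Python, with a hypothetical rendering helper standing in for whatever the real viewer uses:

```python
import html

def render_comment_author(author: str) -> str:
    """Build the HTML snippet a (hypothetical) web-based viewer would
    display for a comment, treating the PDF 'Author' field as untrusted."""
    return f'<span class="author">{html.escape(author)}</span>'

# A booby-trapped Author field in the spirit of CVE-2025-70401:
malicious = '<img src=x onerror="alert(document.cookie)">'
print(render_comment_author(malicious))
# The payload comes out as inert, escaped text rather than executable markup.
```

Escaping at the point of output is only one layer; the report's broader point is that any field crossing from document to platform needs the same validation a web application would apply to user input.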

Both vendors were notified before the research was published, and the vulnerabilities were assigned official CVE identifiers to support patching efforts. The findings highlight how document-processing systems—often overlooked in security planning—can become complex attack surfaces if not properly secured.

ECB Tightens Oversight of Banks’ Growing AI Sector Risks

 

The European Central Bank is intensifying its oversight of how eurozone lenders finance the fast‑growing artificial intelligence ecosystem, reflecting concern that the boom in data‑centre and AI‑related infrastructure could hide pockets of credit and concentration risk.

In recent weeks, the ECB has sent targeted requests to a select group of major European banks, asking for granular data on their loans and other exposures to AI‑linked activities such as data‑centre construction, vendor financing and large project‑finance structures. Supervisors want to map where credit is clustering around a small set of hyperscalers, cloud providers and specialized hardware suppliers, amid global estimates of trillions of dollars in planned AI‑related capital spending. Officials stress this is a diagnostic exercise rather than an immediate step toward higher capital charges, but it marks a shift from general discussion to hands‑on information gathering.

The push comes as European banks race to harness AI inside their own operations, from credit scoring and fraud detection to automating back‑office tasks and enhancing customer service. Supervisors acknowledge that these technologies promise sizeable efficiency gains and new revenue opportunities, yet warn that many institutions still lack mature governance for AI models, including robust data‑quality controls, explainability, and clear accountability for automated decisions. The ECB has repeatedly argued that AI adoption must be matched by stronger risk‑management frameworks and continuous human oversight over model life cycles.

Regulators are also increasingly uneasy about systemic dependencies created by the dominance of a handful of mostly non‑EU AI and cloud providers. Heavy reliance on these external platforms raises concerns about operational resilience, data protection, and geopolitical risk that could spill over into financial stability if disruptions occur. At the same time, the ECB’s broader financial‑stability assessments have highlighted stretched valuations in some AI‑linked equities, warning that a sharp correction could transmit stress into bank balance sheets through both direct exposures and wider market channels. 

For now, supervisors frame their AI‑sector review as part of a wider effort to “encourage innovation while managing risks,” aligning prudential expectations with Europe’s new AI Act and digital‑operational‑resilience rules. Banks are being nudged to tighten contract terms, strengthen model‑validation teams and improve documentation before scaling AI‑driven business lines. The message from Frankfurt is that AI remains welcome as a driver of competitiveness in European finance—but only if lenders can demonstrate they understand, measure and contain the new concentrations of credit, market and operational risk that accompany the technology’s rapid rise.

Optimizely Reports Data Breach Linked to Sophisticated Vishing Incident


 

In addition to serving as a crossroads of technology, marketing intelligence, and vast networks of corporate data, digital advertising platforms are becoming increasingly attractive targets for cybercriminals seeking an entry point into enterprise infrastructure.

Optimizely recently revealed that a security incident was initiated not by sophisticated malware, but by a carefully orchestrated social engineering scheme. Attackers linked to the threat group ShinyHunters used a voice-phishing tactic in early February 2026 to deceive a company employee and gain unauthorized access to parts of the company's internal environment. 

Investigators determined that the attackers were able to extract limited business contact information from internal resources even though the intrusion was contained before it could reach sensitive customer databases or critical operational systems. 

The episode shows that even mature technology companies remain vulnerable to manipulation-based attacks that bypass technical defenses and target the human layer of security. 

Optimizely, a leading provider of digital experience infrastructure, develops tools that assist organizations in managing web properties, conducting marketing experiments, and refining online customer journeys based on data. 

Among its many capabilities are A/B experimentation frameworks, enterprise-grade content management systems, and integrated ecommerce tools that are designed to assist businesses in improving conversion performance and audience engagement across a variety of digital channels. 

Over 10,000 organizations worldwide use the company's technology stack, including H&M, PayPal, Toyota, Nike, and Salesforce, among others. A number of customers have recently received notifications detailing this incident. According to the company, the attackers gained access through what it described as a "sophisticated voice-phishing attack" on February 11. 

The internal investigation indicates that although the threat actors were able to penetrate a limited segment of the corporate environment, the intrusion did not result in privilege escalation and no malicious payloads or malware were deployed within the network during the intrusion. 

As a result, the breach remained constrained to a narrow scope; the company assessed that the attackers' access was limited and did not reach sensitive customer or operational data. Researchers have identified the intrusion as the work of the threat actor collective ShinyHunters, a financially motivated group involved in cybercrime since at least 2020. 

It is well known for orchestrating high-visibility data theft operations and subsequently distributing or monetizing compromised databases through dark web forums and underground marketplaces. A great deal of its campaign effort has been directed toward technology and telecommunications organizations, areas where internal access to corporate databases and partner information can prove to be very useful. 

According to analysts, this group has demonstrated a high degree of flexibility in its intrusion techniques, combining credential-based attacks, such as credential stuffing, with increasingly persuasive social engineering techniques, such as voice-based deception schemes, to achieve their objectives. 

Although the precise geographical origins of the actors remain unknown, their operational footprint spans multiple regions, reflecting a focus on monetizing stolen corporate information or using it to exert reputational and financial pressure on targeted organizations. In the immediate case, the exposure for organizations connected to the affected environment appears limited to basic business contact information, not sensitive customer data. 

Cybersecurity specialists caution, however, that even seemingly routine information can provide a foothold for follow-on attacks. By using contact directories, email addresses, and professional identifiers, attackers may be able to craft convincing phishing emails or conduct additional social engineering attempts in order to gather credentials or financial information. 

In addition to facilitating spam operations, this type of data can enable fraudulent outreach that impersonates trusted partners or internal employees. As a precaution, security experts recommend that employees and partners treat unexpected communications with caution, independently verify the legitimacy of telephone calls or email requests, and maintain multi-factor authentication on all corporate accounts. 

A proactive approach to security hygiene and open communication with affected stakeholders are widely regarded as essential to minimizing the impact of incidents of this nature on an organization’s operations and reputation. 

Optimizely did not disclose the exact number of customers whose information may have been exposed; however, it indicated in its breach notification that the activity closely resembles that of a loosely connected network of attackers known for persistent social engineering campaigns. 

According to the firm, communications received during the incident reflected patterns commonly associated with groups that use voice phishing to manipulate employees into granting access to corporate systems. 

That operational style matches ShinyHunters, the group held responsible for a string of recent breaches affecting major online platforms and consumer brands, including Canada Goose, Panera Bread, Betterment, SoundCloud, Pornhub, Figure, and Match Group, which operates Tinder, Hinge, Meetic, Match.com, and OkCupid, among others. 

Not every incident has been tied to a single coordinated campaign, but numerous victims have reported successful intrusions stemming from voice phishing operations designed to compromise enterprise single sign-on environments. 

Attackers reportedly impersonated internal IT support staff, contacting employees directly and directing them to counterfeit authentication portals that mimic legitimate corporate logins. By obtaining account credentials and one-time multi-factor authentication codes from victims, the attackers bypassed standard access controls. The techniques have also evolved: threat actors have used device-code phishing to obtain authentication tokens tied to enterprise identity services by exploiting the legitimate OAuth device authorization flow. 
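Device-code phishing abuses the standard OAuth 2.0 device authorization grant (RFC 8628). A minimal Python sketch of the two requests involved follows; the endpoint URLs and client ID are placeholders, not any specific identity provider’s real values:

```python
# Sketch of the OAuth 2.0 device authorization grant (RFC 8628) that
# device-code phishing abuses. URLs and client_id below are placeholders.

def build_device_auth_request(client_id, scope):
    # Step 1: the attacker asks the identity provider for a
    # device_code / user_code pair.
    return {
        "url": "https://login.example.com/oauth2/devicecode",
        "data": {"client_id": client_id, "scope": scope},
    }

def build_token_poll_request(client_id, device_code):
    # Step 2: the victim is socially engineered into entering the
    # user_code on the provider's *legitimate* verification page.
    # Step 3: the attacker polls the token endpoint; once the victim
    # approves, the provider issues tokens to the attacker's session.
    return {
        "url": "https://login.example.com/oauth2/token",
        "data": {
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
            "client_id": client_id,
            "device_code": device_code,
        },
    }

start = build_device_auth_request("registered-public-client", "openid offline_access")
poll = build_token_poll_request("registered-public-client", "placeholder-device-code")
```

Because the victim authenticates on the provider’s genuine page, no counterfeit login portal is needed at all, which is part of what makes the technique hard for users to spot.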

Once a single sign-on account has been compromised, attackers can pivot among integrated corporate applications and cloud platforms. The same access may extend to enterprise tools such as Microsoft Entra ID, Microsoft 365, Google Workspace, Salesforce, Zendesk, Dropbox, SAP, Slack, Adobe, and Atlassian, letting an intruder move laterally across connected services and collect additional corporate information from a single initial foothold. 

Ultimately, this incident is a reminder that technical safeguards alone are rarely sufficient to stop determined social engineering campaigns. Attackers routinely exploit human trust and routine operational processes to breach even organizations with mature security architectures. 

Security professionals advise strengthening identity-verification procedures for internal support interactions, regularly briefing employees on voice-based fraud, and implementing strong monitoring around single sign-on activity and unusual authentication requests. 
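As one small illustration of such monitoring, the sketch below flags sign-ins from a country not previously seen for a given user. The event fields ("user", "country", "time") are assumptions about a generic SSO log format, not any real product’s schema:

```python
# Illustrative anomaly check over generic SSO sign-in events; the field
# names are assumptions, not a real identity provider's log schema.
def flag_unusual_signins(events):
    countries_by_user = {}
    flagged = []
    for e in sorted(events, key=lambda ev: ev["time"]):
        known = countries_by_user.setdefault(e["user"], set())
        if known and e["country"] not in known:
            flagged.append(e)  # known user, never-before-seen country
        known.add(e["country"])
    return flagged

events = [
    {"user": "alice", "country": "US", "time": 1},
    {"user": "alice", "country": "RO", "time": 2},  # should be flagged
    {"user": "bob", "country": "US", "time": 3},    # first sign-in, not flagged
]
alerts = flag_unusual_signins(events)
```

Real SSO monitoring would combine many such signals (device, time of day, impossible travel), but even a single new-location rule catches the crude credential-replay cases.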

Measures such as conditional access policies, strict multi-factor authentication enforcement, and rapid incident response protocols can greatly limit what an attacker can accomplish after an initial attempt. 

The continued evolution of voice-driven deception tactics is prompting companies across the technology sector to treat social engineering resilience as a core component of enterprise cybersecurity strategy rather than a peripheral issue.

OpenAI’s Codex Security Flags Over 10,000 High-Risk Vulnerabilities in Code Scan

 



Artificial intelligence is increasingly being used to help developers identify security weaknesses in software, and a new tool from OpenAI reflects that shift.

The company has introduced Codex Security, an automated security assistant designed to examine software projects, detect vulnerabilities, confirm whether they can actually be exploited, and recommend ways to fix them.

The feature is currently being released as a research preview and can be accessed through the Codex interface by users subscribed to ChatGPT Pro, Enterprise, Business, and Edu plans. OpenAI said customers will be able to use the capability without cost during its first month of availability.

According to the company, the system studies how a codebase functions as a whole before attempting to locate security flaws. By building a detailed understanding of how the software operates, the tool aims to detect complicated vulnerabilities that may escape conventional automated scanners while filtering out minor or irrelevant issues that can overwhelm security teams.

The technology is an evolution of Aardvark, an internal project that entered private testing in October 2025 to help development and security teams find and fix weaknesses across large bodies of source code.

During the last month of beta testing, Codex Security examined more than 1.2 million individual code commits across publicly accessible repositories. The analysis produced 792 critical vulnerabilities and 10,561 issues classified as high severity.

Several well-known open-source projects were affected, including OpenSSH, GnuTLS, GOGS, Thorium, libssh, PHP, and Chromium.

Some of the identified weaknesses were assigned official vulnerability identifiers. These included CVE-2026-24881 and CVE-2026-24882 linked to GnuPG, CVE-2025-32988 and CVE-2025-32989 affecting GnuTLS, and CVE-2025-64175 along with CVE-2026-25242 associated with GOGS. In the Thorium browser project, researchers also reported seven separate issues ranging from CVE-2025-35430 through CVE-2025-35436.

OpenAI explained that the system relies on advanced reasoning capabilities from its latest AI models together with automated verification techniques. This combination is intended to reduce the number of incorrect alerts while producing remediation guidance that developers can apply directly.

Repeated scans of the same repositories during testing also showed measurable improvements in accuracy. The company reported that the number of false alarms declined by more than 50 percent while the precision of vulnerability detection increased.

The platform operates through a multi-step process. It begins by examining a repository in order to understand the structure of the application and map areas where security risks are most likely to appear. From this analysis, the system produces an editable threat model describing the software’s behavior and potential attack surfaces.

Using that model as a reference point, the tool searches for weaknesses and evaluates how serious they could be in real-world scenarios. Suspected vulnerabilities are then executed in a sandbox environment to determine whether they can actually be exploited.

When configured with a project-specific runtime environment, the system can test potential vulnerabilities directly against a functioning version of the software. In some cases it can also generate proof-of-concept exploits, allowing security teams to confirm the problem before deploying a fix.

Once validation is complete, the tool suggests code changes designed to address the weakness while preserving the original behavior of the application. This approach is intended to reduce the risk that security patches introduce new software defects.
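The workflow described above (map the codebase, build a threat model, scan the risky areas, validate findings in a sandbox, and only then surface them) can be sketched as a pipeline. Every name below is a hypothetical stand-in for illustration; none of it corresponds to OpenAI’s actual interfaces:

```python
# Hypothetical pipeline mirroring the described workflow; the function and
# class names are invented stand-ins, not OpenAI's real API.
from dataclasses import dataclass

@dataclass
class Finding:
    location: str
    description: str
    validated: bool = False

def build_threat_model(repo_files):
    # Map where risk is likeliest (auth, parsing, input handling, etc.).
    return [f for f in repo_files if "auth" in f or "parse" in f]

def scan(threat_model):
    # Stand-in for model-driven analysis of the risky areas.
    return [Finding(f, "possible unchecked input") for f in threat_model]

def validate_in_sandbox(finding):
    # Stand-in for attempting exploitation in isolation; only findings
    # that actually reproduce are surfaced to developers.
    finding.validated = True
    return finding

repo = ["src/auth.py", "src/parse_config.py", "docs/guide.md"]
confirmed = [validate_in_sandbox(f) for f in scan(build_threat_model(repo))]
```

The design point the article describes is the validation stage: by discarding suspected flaws that do not reproduce, the tool trades raw finding volume for a lower false-positive rate.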

The launch of Codex Security follows the introduction of Claude Code Security by Anthropic, another system that analyzes software repositories to uncover vulnerabilities and propose remediation steps.

The emergence of these tools reflects a broader trend within cybersecurity: using artificial intelligence to review vast amounts of software code, detect vulnerabilities earlier in the development cycle, and assist developers in securing critical digital infrastructure.

Microsoft Report Reveals Hackers Exploit AI In Cyberattacks


According to Microsoft, hackers are increasingly using AI to sharpen their attacks, scale their activity, and lower technical barriers at every stage of a cyberattack. 

Microsoft’s new Threat Intelligence report reveals that threat actors are using generative AI tools for tasks such as phishing, reconnaissance, malware development, infrastructure building, and post-compromise activity. 

About the report

In various incidents, AI helps to create phishing emails, summarize stolen information, debug malware, translate content, and configure infrastructure. “Microsoft Threat Intelligence has observed that most malicious use of AI today centers on using language models for producing text, code, or media. Threat actors use generative AI to draft phishing lures, translate content, summarize stolen data, generate or debug malware, and scaffold scripts or infrastructure,” the report said. 

“For these uses, AI functions as a force multiplier that reduces technical friction and accelerates execution, while human operators retain control over objectives, targeting, and deployment decisions,” the report warns.

AI in cyberattacks 

Microsoft identified several hacking groups using AI in their operations, including the North Korean actors known as Coral Sleet (Storm-1877) and Jasper Sleet (Storm-0287), who use AI in their remote IT worker scams. 

AI helps them craft realistic identities, communications, and resumes to land jobs at Western companies and gain insider access once hired. Microsoft also explained how AI is being exploited in malware development and infrastructure creation: threat actors use AI coding tools to create and refine malicious code, fix errors, and port malware components to different programming languages. 

The impact

Some malware samples showed traces of AI-enabled behaviour, generating scripts or configuring themselves at runtime. Microsoft found Coral Sleet using AI to build fake company sites, manage infrastructure, and troubleshoot its installations. 

When safeguards block such misuse, Microsoft says, hackers turn to jailbreaking techniques to trick AI models into producing malicious code or content. 

Beyond generative AI, the report revealed that hackers are experimenting with agentic AI that performs tasks autonomously, though such systems are currently used mainly to support decision-making. Because IT worker campaigns depend on exploiting legitimate access, experts advise organizations to treat these attacks as insider risks. 

Anthropic AI Model Finds 22 Security Flaws in Firefox

 

Anthropic said its artificial intelligence model Claude Opus 4.6 helped uncover 22 previously unknown security vulnerabilities in the Firefox web browser as part of a collaboration with Mozilla. 

The company said the issues were discovered during a two-week analysis conducted in January 2026. 

The findings include 14 vulnerabilities rated as high severity, seven categorized as moderate and one considered low severity. 

Most of the flaws were addressed in Firefox version 148, which was released late last month, while the remaining fixes are expected in upcoming updates. 

Anthropic said the number of high severity bugs discovered by its AI model represents a notable share of the browser’s serious vulnerabilities reported over the past year. 

During the research, Claude Opus 4.6 scanned roughly 6,000 C++ files in the Firefox codebase and generated 112 unique vulnerability reports. 

Human researchers reviewed the results to confirm the findings and rule out false positives before reporting them. One issue identified by the model involved a use-after-free vulnerability in Firefox’s JavaScript engine. 

According to Anthropic, the AI located the flaw within about 20 minutes of examining the code, after which a security researcher validated the finding in a controlled testing environment. 

Researchers also tested whether the AI model could go beyond identifying flaws and attempt to build exploits from them. Anthropic said it provided Claude access to the list of vulnerabilities reported to Mozilla and asked it to develop working exploits. 

After hundreds of test runs and about $4,000 worth of API usage, the model succeeded in producing a working exploit in only two cases. 

Anthropic said the results suggest that finding vulnerabilities may be easier for AI systems than turning those flaws into functioning exploits. 

“However, the fact that Claude could succeed at automatically developing a crude browser exploit, even if only in a few cases, is concerning,” the company said. 

It added that the exploit tests were performed in a restricted research environment where some protections, such as sandboxing, were deliberately removed. 

One exploit generated by the model targeted a vulnerability tracked as CVE-2026-2796, which involves a miscompilation issue in the JavaScript WebAssembly component of Firefox’s just-in-time compilation system. 

Anthropic said the testing process included a verification system designed to check whether the AI-generated exploit actually worked. 

The system provided real-time feedback, allowing the model to refine its attempts until it produced a functioning proof of concept. The research comes shortly after Anthropic introduced Claude Code Security in a limited preview. 

The tool is designed to help developers identify and fix software vulnerabilities with the assistance of AI agents. Mozilla said in a separate statement that the collaboration produced additional findings beyond the 22 vulnerabilities. 

According to the company, the AI-assisted analysis uncovered about 90 other bugs, including assertion failures typically identified through fuzzing as well as logic errors that traditional testing tools had missed. 

“The scale of findings reflects the power of combining rigorous engineering with new analysis tools for continuous improvement,” Mozilla said. 

“We view this as clear evidence that large-scale, AI-assisted analysis is a powerful new addition to security engineers’ toolbox.”

DeepMind Chief Sounds Alarm on AI's Dual Threats

 

Google DeepMind CEO Sir Demis Hassabis has issued a stark warning on the escalating threats posed by artificial intelligence, urging immediate action from governments and tech firms. In an exclusive BBC interview at the AI Impact Summit in Delhi, he emphasized that more research into AI risks "needs to be done urgently," rather than waiting years. Hassabis highlighted the industry's push for "smart regulation" targeting genuine dangers from increasingly autonomous systems.

The AI pioneer identified two primary threats: malicious exploitation by bad actors and the potential loss of human control over super-capable AI systems. He stressed that current fragmented efforts in safety research are insufficient, with massive investments in AI development far outpacing those in oversight and evaluation. As AI models grow more powerful, Hassabis warned of a "narrow window" to implement robust safeguards before existing institutions are overwhelmed.

Speaking at the summit, which concluded recently in India's capital, Hassabis called for scaled-up funding and talent in AI safety science. He compared the challenge to nuclear safety protocols, arguing that advanced AI now demands societal-level treatment with rigorous testing before widespread deployment. The event brought together global leaders to discuss AI's societal impacts amid rapid advancements.

Hassabis advocated for international cooperation, noting AI's borderless nature means it affects everyone worldwide. He praised forums like those in the UK, Paris, and Seoul for uniting technologists and policymakers, while pushing for minimum global standards on AI deployment. However, tensions exist, as the US delegation at the Delhi summit rejected global AI governance outright.

This comes as AI capabilities surge, with systems learning physical realities and approaching artificial general intelligence (AGI) in 5-10 years. Hassabis acknowledged natural constraints like hardware shortages may slow progress, providing time for safeguards, but stressed proactive measures are essential. Industry leaders must balance innovation with risk mitigation to harness AI's potential safely.

Safety recommendations 

To counter AI threats, organizations should prioritize independent safety evaluations and red-teaming exercises before deploying models. Governments must fund public AI safety research grants and enforce "smart regulations" focused on real risks like misuse and loss of control. Individuals can stay vigilant by verifying AI-generated content, using tools like watermark detectors, limiting data shared with AI systems, and supporting ethical AI policies through advocacy.

FBI Warns Outdated Wi-Fi Routers Are Being Targeted in Malware and Botnet Attacks

 

Cybersecurity risks rise when outdated home routers stop receiving manufacturer support, federal agents say. Devices from the late 2000s and early 2010s often fall out of update cycles, leaving networks exposed. Without patches, vulnerabilities go unaddressed, making intrusion more likely over time. Older models reaching end-of-life no longer receive the security upgrades once available, a gap that has drawn attention from officials tracking digital threats to household systems. 

Once patching ends, previously discovered weaknesses stay open indefinitely, making break-ins easier. Obsolete routers now attract criminals who deploy malicious code, seizing administrator-level access without owners noticing; infected machines may then join hidden networks controlled remotely. Law enforcement has warned about these risks repeatedly. 

Built from hijacked devices, botnets answer to remote operators. These collections of infected machines frequently enable massive digital assaults. Instead of serving legitimate users, they route harmful data across the web. Criminals rely on them to mask where attacks originate. Through hidden channels, wrongdoers stay anonymous during operations. 

Back in 2011, Linksys made several routers later flagged as weak by the FBI. Devices like the E1200, E2500, and E4200 came under scrutiny due to security flaws. Earlier models also appear on the list - take the WRT320N, launched in 2009. Then there is the M10, hitting shelves a year after that one. Some routers come equipped with remote setup options, letting people adjust settings using web-connected interfaces. 

Though useful, such access becomes a liability when flaws go unfixed. Hackers routinely scan the internet for devices with open management ports, particularly ones stuck on old software versions. After spotting a weak router, they slip through software gaps to plant harmful programs directly onto the machine. Once inside, the hidden code opens the door wide, giving intruders complete control and establishing covert communications with remote command hubs. 

Sometimes these hijacked devices check in with those command centers every minute, just to signal they are still online and waiting. Open network ports can also let malware turn routers into proxies: attackers route harmful traffic through the infected networks instead of launching attacks directly, and some even sell that access to third parties looking to mask where they operate from. What makes router-based infections especially tricky is how hard they are for most people to spot. 
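A simple defensive check for the exposure described above is to test whether common web-management ports on your own router answer at all. A minimal sketch (the port list covers common defaults only, and scans should only ever target equipment you own):

```python
# Check whether a router's web-management ports are reachable. The port
# list is illustrative; run this only against your own equipment.
import socket

COMMON_ADMIN_PORTS = [80, 443, 8080, 8443]

def port_open(host, port, timeout=1.5):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_router(host):
    # Returns {port: reachable} for each common management port.
    return {port: port_open(host, port) for port in COMMON_ADMIN_PORTS}
```

A management port that answers on the WAN side of an end-of-life router is exactly the opening these attacks rely on; disabling remote administration, or replacing the device, closes it.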

Since standard antivirus tools target laptops and phones, routers often fall outside their scope. Running within the router's own software, the malware stays hidden even when everything seems to work fine. The network keeps running smoothly, masking the presence of harmful code tucked deep inside. Older routers without regular updates become weak spots over time. 

Because of this, specialists suggest swapping them out. A modern replacement brings continued protection through active maintenance. This shift lowers chances of intrusions via obsolete equipment found in personal setups.

ExpressVPN Expands Privacy Tools with Launch of Hybrid Browser Extension


 

Immersive technologies are increasingly moving from novelty to everyday digital infrastructure, raising questions about privacy within virtual environments. Activities previously conducted on conventional screens now occur within headsets that process vast streams of personal data, including browsing behavior, location signals, and device interactions.

In recognition of this emerging privacy frontier, ExpressVPN has announced a partnership with Meta that will bring its security tools directly to Meta Quest. A dedicated application, distributed through the Meta App Store, will let headset users activate full-device VPN protection within the virtual reality environment. 

ExpressVPN has also released a hybrid browser extension that combines VPN and proxy functionality in a single privacy tool, part of an ongoing effort to adapt traditional internet security models to the increasingly complex environment of immersive computing. A centerpiece of the new extension is Smart Routing, which gives users granular control over how browser traffic interacts with the VPN network. 

With the system, specific websites can be mapped automatically to predefined VPN endpoints or routing preferences, sparing users from repeatedly switching server locations as they navigate between services hosted in different regions. The approach streamlines the management of geographically sensitive connections while maintaining a consistent level of privacy protection.
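Conceptually, per-site routing rules of this kind behave like a hostname-to-endpoint lookup. A simplified sketch follows; the rule format, domains, and endpoint names are invented for illustration and are not ExpressVPN’s actual configuration:

```python
# Toy model of per-site VPN routing rules; the domains and endpoint names
# are invented examples, not ExpressVPN's real configuration format.
RULES = {
    "bbc.co.uk": "uk-london",
    "cbc.ca": "ca-toronto",
}
DEFAULT_ENDPOINT = "nearest"

def endpoint_for(hostname):
    # Match the hostname itself or any subdomain against the rule table.
    for domain, endpoint in RULES.items():
        if hostname == domain or hostname.endswith("." + domain):
            return endpoint
    return DEFAULT_ENDPOINT
```

The benefit is that region-sensitive sites always exit through the same endpoint without the user manually switching servers, while everything else falls back to a default.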

Further safeguards raise protection at the browser level. The extension blocks WebRTC leaks, a well-known route by which IP addresses can be exposed despite VPN use, and restricts the transmission of HTML5 geolocation data. Together these controls limit websites’ ability to infer a user’s physical location from browser-based signals. 

Because most digital activity now takes place within web environments, the company has concentrated on browser-centric protection. Browser interfaces are increasingly replacing standalone software applications for streaming media, electronic commerce, and collaborative work platforms. 

By concentrating security controls at this layer while still offering a primary VPN application for full device-level encryption, the company is positioning the hybrid extension as a flexible bridge between lightweight web privacy and comprehensive network protection. At the same time, it is expanding its privacy infrastructure beyond traditional computing devices into immersive technology, which is rapidly gaining popularity.

Alongside the Meta Quest platform support, a dedicated VPN application will be available directly from the Meta App Store, enabling encrypted connectivity across the headset’s system environment. A browser-specific version of the hybrid extension is also expected on the platform, providing an additional layer of security for virtual reality activities. 

Conventional VPNs have historically been difficult to deploy in VR ecosystems, often requiring complex network workarounds or external device configuration, so native integration marks a significant shift in how privacy tools adapt to these environments. The development is part of a broader change within the VPN industry as internet usage expands into a growing variety of connected hardware categories. 

Browsing increasingly happens within headsets and other immersive devices, not just laptops and smartphones. As this technology spreads, flexible routing and layered protection for safeguarding user data across emerging digital interfaces are likely to become more prominent. 

The move reflects how virtual reality headsets are increasingly regarded as more than entertainment devices; they are becoming full-featured computing platforms that support communication, content consumption, and collaboration. 

ExpressVPN’s native application, deployed within the device environment, routes network traffic from the entire headset through encrypted channels rather than limiting protection to individual applications or browsing sessions. This system-wide coverage is especially useful for bandwidth-heavy activities such as VR streaming and multiplayer gaming, where unprotected traffic can be subject to network throttling. 

The company also stated that its newly introduced hybrid extension will soon be extended to the headset’s native browsing environment. Once implemented, VR browser users will be able to secure web traffic through a streamlined protection mode that does not require a full VPN running in the background. 

Besides adding privacy for browser-based activity, this lighter configuration preserves system resources during performance-sensitive applications, where computational overhead and frame stability directly affect the immersive experience. 

The extension is built on the provider’s proprietary Lightway protocol, which has been updated to incorporate post-quantum cryptographic protections. The strengthened protocol is positioned as a forward-looking safeguard against concerns that future advances in quantum computing could undermine conventional encryption algorithms.

The extension is currently available for popular browsers including Google Chrome and Mozilla Firefox, with Meta Quest integration expected in the near future. Together, the developments demonstrate how privacy infrastructure is gradually evolving to accommodate new digital interfaces, extending encrypted connectivity beyond traditional desktop and mobile ecosystems into immersive computing environments. 

Security strategies that once focused on a single device or network layer are becoming more adaptable as digital interaction increasingly centers on browsers, applications, and immersive devices. 

Organizations and individual users should examine how data flows through emerging platforms and ensure that encryption and routing controls evolve in step. As the internet extends beyond conventional computing interfaces, approaches that integrate flexible browser-level safeguards with device-wide encryption may offer a practical way to maintain consistent privacy standards.

San Francisco Children’s Council Breach Exposes SSNs of 12,000+ People

 

The Children’s Council of San Francisco has notified more than 12,000 individuals that their personal information was compromised in a cyberattack discovered last year. 

According to breach notification letters, the incident occurred on August 3, 2025, when the organization experienced what it described as a network disruption. An investigation later found that an unauthorized actor had accessed and obtained certain data. 

“On August 3, 2025, ChCo experienced a network disruption,” the Council said in its notice to affected individuals. 

“The investigation determined that an unknown actor accessed and acquired certain data without authorization.” 

The compromised information includes names and Social Security numbers belonging to 12,655 people. 

The notice did not specify whether the affected data included information related to children served by the organization. About two weeks after the breach, a ransomware group known as SafePay claimed responsibility for the attack, listing the organization on its data leak website. 

The group reportedly demanded payment within 24 hours in exchange for deleting the stolen data. The Children’s Council has not confirmed the claim made by SafePay, and it remains unclear how attackers gained access to the organization’s systems. 

The nonprofit has not disclosed whether a ransom demand was paid. The organization said it is offering individuals affected by the breach free identity protection services. 

Victims can enroll in 12 months of credit monitoring and receive identity theft insurance coverage of up to one million dollars through TransUnion. The offer is available for 90 days from the date of the notification letter. 

SafePay is a ransomware operation that began publicly listing its victims on a leak site in November 2024. The group uses ransomware based on the LockBit strain and typically employs a double extortion strategy, demanding payment both to restore encrypted systems and to prevent the release of stolen data. 

In 2025, SafePay claimed responsibility for 374 ransomware attacks. Of those, 46 organizations confirmed the incidents and reported data breaches affecting about 17 million people. One of the largest involved Conduent Business Services, which notified approximately 16.7 million individuals that their data had been exposed. 

 
The group continues to be active in 2026 and has already taken credit for more than a dozen additional attacks, although only one of those has been confirmed so far. Ransomware incidents targeting organizations in the United States remain widespread. 

Researchers tracked 653 confirmed ransomware attacks against U.S. organizations in 2025, exposing roughly 43.3 million personal records. 

Several nonprofit and social service organizations have been among the victims. Recent incidents have affected groups such as Bucks County Opportunity Council in Pennsylvania, Catholic Charities of the Diocese of Albany in New York, North American Family Institute in Massachusetts, Elmcrest Children’s Center in New York and Family and Community Services in Ohio.

The Children’s Council of San Francisco is a nonprofit that works with government agencies to support childcare and early education services. The organization helps families locate and pay for childcare while distributing public funding to childcare providers that serve infants and children up to age 13. 

According to its website, the nonprofit administers an annual budget of nearly 250 million dollars and partners with the California Department of Social Services as well as local government agencies in San Francisco.

Pakistan-Linked Hackers Use AI to Flood Targets With Malware in India Campaign

 

A Pakistan-aligned hacking group known as Transparent Tribe is using artificial intelligence coding tools to produce large numbers of malware implants in a campaign primarily targeting India, according to new research from cybersecurity firm Bitdefender. 

Security researchers say the activity reflects a shift in how some threat actors are developing malicious software. Instead of focusing on highly advanced malware, the group appears to be generating a large volume of implants written in multiple programming languages and distributed across different infrastructure. 

Researchers said the operation is designed to create a “high-volume, mediocre mass of implants” using less common languages such as Nim, Zig and Crystal while relying on legitimate platforms including Slack, Discord, Supabase and Google Sheets to help evade detection. 

“Rather than a breakthrough in technical sophistication, we are seeing a transition toward AI-assisted malware industrialization that allows the actor to flood target environments with disposable, polyglot binaries,” Bitdefender researchers said in a technical analysis of the campaign. 

The strategy involves creating numerous variations of malware rather than relying on a single sophisticated tool. Bitdefender described the approach as a form of “Distributed Denial of Detection,” where attackers overwhelm security systems with large volumes of different binaries that use various communication protocols and programming languages. 

Researchers say large language models have lowered the barrier for threat actors by allowing them to generate working code in unfamiliar languages or convert existing code into different formats. 

That capability makes it easier to produce large numbers of malware samples with minimal expertise. 

The campaign has primarily targeted Indian government organizations and diplomatic missions abroad. 

Investigators said the attackers also showed interest in Afghan government entities and some private businesses. According to the analysis, the attackers use LinkedIn to identify potential targets before launching phishing campaigns. 

Victims may receive emails containing ZIP archives or ISO images that include malicious Windows shortcut files. In other cases, victims are sent PDF documents that include a “Download Document” button directing them to attacker-controlled websites. 

These websites trigger the download of malicious archives. Once opened, the shortcut file launches PowerShell scripts that run in memory. 

The scripts download a backdoor and enable additional actions inside the compromised system. Researchers said attackers sometimes deploy well-known adversary simulation tools such as Cobalt Strike and Havoc to maintain access. 

Bitdefender identified a wide range of custom tools used in the campaign. These include Warcode, a shellcode loader written in Crystal that loads a Havoc agent into memory, and NimShellcodeLoader, which deploys a Cobalt Strike beacon.

Another tool called CreepDropper installs additional malware, including SHEETCREEP, a Go-based information stealer that communicates with command servers through Microsoft Graph API, and MAILCREEP, a backdoor written in C# that uses Google Sheets for command and control. 

Researchers also identified SupaServ, a Rust-based backdoor that communicates through the Supabase platform with Firebase acting as a fallback channel. The code includes Unicode emojis, which researchers said suggests it may have been generated with the help of AI. 

Additional malware used in the campaign includes CrystalShell and ZigShell, backdoors written in Crystal and Zig that can run commands, collect host information and communicate with command servers through platforms such as Slack or Discord. 

Other tools observed in the operation include LuminousStealer, a Rust-based information stealer that exfiltrates files to Firebase and Google Drive, and LuminousCookies, which extracts cookies, passwords and payment information from Chromium-based browsers. 

Bitdefender said the attackers are also using utilities such as BackupSpy to monitor file systems for sensitive data and ZigLoader to decrypt and execute shellcode directly in memory.

Despite the large number of tools involved, researchers say the overall quality of the malware is often inconsistent.

“The transition of APT36 toward vibeware represents a technical regression,” Bitdefender said, referring to the Transparent Tribe group. “While AI-assisted development increases sample volume, the resulting tools are often unstable and riddled with logical errors.” 

Still, the researchers warned that the broader trend could make cyberattacks easier to scale. By combining AI-generated code with trusted cloud services, attackers can hide malicious activity within normal network traffic. 

“We are seeing a convergence of two trends that have been developing for some time: the adoption of exotic programming languages and the abuse of trusted services to hide in legitimate traffic,” the researchers said.

They added that this combination allows even relatively simple malware to succeed by overwhelming traditional detection systems with sheer volume.

AI-Driven Risk Management Is Becoming a Key Growth Strategy for MSPs

Expanding cybersecurity services as a Managed Service Provider (MSP) or Managed Security Service Provider (MSSP) requires more than strong technical capabilities. Providers also need a sustainable business approach that can deliver clear and measurable value to clients while supporting growth at scale.

One approach gaining attention across the cybersecurity industry is risk-based security management. When implemented effectively, this model can strengthen trust with customers, create opportunities to offer additional services, and establish stable recurring revenue streams. However, maintaining such a strategy consistently requires structured workflows and the right supporting technologies.

To help providers adopt this approach, a new resource titled “The MSP Growth Guide: How MSPs Use AI-Powered Risk Management to Scale Their Cybersecurity Business” outlines how organizations can transition toward scalable cybersecurity services centered on risk management. The guide provides insights into the operational difficulties many MSPs encounter, offers recommendations from industry experts, and explains how AI-driven risk management platforms can help build a more scalable and profitable service model.


Why Risk-Focused Security Enables Service Expansion

Many MSPs already deliver essential cybersecurity capabilities such as endpoint protection, regulatory compliance assistance, and other defensive tools. While these services remain critical, they are often delivered as separate engagements rather than as part of a unified strategy. As a result, the long-term strategic value of these services may remain limited, and opportunities to generate consistent recurring revenue may be reduced.

Adopting a risk-centered cybersecurity framework can shift this dynamic. Instead of addressing isolated technical issues, providers evaluate the complete threat environment facing a client organization. Security risks are then prioritized according to their potential impact on business operations.

This broader perspective allows MSPs to move away from reactive fixes and instead deliver continuous, proactive security management.
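As a concrete illustration, the prioritization step described above is often reduced to a classic likelihood-times-impact score. The Python sketch below is a minimal, hypothetical example; the `Risk` class and the sample entries are invented for illustration and are not taken from any particular platform or framework.

```python
from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe business impact)

    @property
    def score(self) -> int:
        # Classic risk-matrix scoring: likelihood x impact
        return self.likelihood * self.impact


def prioritize(risks: list[Risk]) -> list[Risk]:
    # Highest-scoring risks are addressed first
    return sorted(risks, key=lambda r: r.score, reverse=True)


risks = [
    Risk("Unpatched VPN appliance", likelihood=4, impact=5),
    Risk("Weak password policy", likelihood=5, impact=3),
    Risk("Legacy printer firmware", likelihood=2, impact=1),
]

for r in prioritize(risks):
    print(f"{r.score:2d}  {r.name}")
```

Even this simple ordering captures the core shift the model describes: effort is allocated by potential business impact rather than by whichever ticket arrived first.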

Organizations that implement this risk-first model can gain several advantages:

• Security teams can detect and address threats before they escalate into damaging incidents.

• Defensive measures can be continuously updated as the cyber threat landscape evolves.

• Critical assets, daily operations, and organizational reputation can be protected even when compliance regulations do not explicitly require certain safeguards.

Another major benefit is alignment with modern cybersecurity frameworks. Many current standards require companies to conduct formal and ongoing risk evaluations. By integrating risk management into their core service offerings, MSPs can position themselves to pursue higher-value contracts and offer additional services driven by regulatory compliance requirements.


Common Obstacles That Limit Risk Management Services

Although risk-focused security delivers substantial value, MSPs often encounter operational barriers that make these services difficult to scale or demonstrate clearly to clients.

Several recurring challenges affect service delivery and growth:

Manual assessment processes

Traditional risk evaluations often rely heavily on manual work. This approach can consume a great deal of analysts' time, introduce inconsistencies, and make it difficult to expand services efficiently.

Lack of actionable remediation plans

Risk reports sometimes highlight security weaknesses but fail to outline clear steps for resolving them. Without defined guidance, clients may struggle to understand how to address the issues that have been identified.

Complex regulatory alignment

Organizations frequently need to comply with multiple cybersecurity standards and regulatory frameworks. Managing these requirements manually can create inefficiencies and inconsistencies.

Limited business context in security reports

Many security assessments are written in highly technical language. As a result, business leaders and non-technical stakeholders may find it difficult to interpret the results or understand the real impact on their organization.

Shortage of specialized cybersecurity professionals

Skilled risk management experts remain in high demand across the industry, making it difficult for service providers to recruit and retain qualified personnel.

Third-party risk visibility gaps

Many cybersecurity platforms focus only on internal infrastructure and overlook risks introduced by external vendors and service providers.

These challenges can make it difficult for MSPs to transform risk management into a scalable and profitable cybersecurity offering.


How AI-Powered Platforms Help Address These Barriers

To overcome these operational difficulties, many providers are turning to artificial intelligence-driven risk management tools.

AI-based platforms can automate large portions of the risk management process. Tasks that previously required extensive manual effort, such as risk assessment, prioritization, and reporting, can be completed more quickly and consistently.

These systems are designed to streamline the entire risk management lifecycle while incorporating advanced security expertise into service delivery.


What Modern Risk Management Platforms Should Deliver

A well-designed AI-enabled risk management solution should do more than simply detect potential threats. It should also accelerate service delivery and support business growth for service providers.

Organizations adopting these platforms can expect several operational benefits:

• Faster onboarding and service deployment through automated and easy-to-use risk assessment tools

• More efficient compliance management supported by built-in mappings to cybersecurity frameworks and continuous monitoring capabilities

• Clearer reporting that presents cybersecurity risks in language business leaders can understand

• Demonstrable return on investment by reducing manual workloads and enabling more efficient service delivery

• Additional revenue opportunities by identifying new cybersecurity services clients may require based on their risk profile


Key Capabilities to Evaluate When Selecting a Platform

Selecting the right technology platform is critical for service providers that want to scale cybersecurity operations effectively.

Several capabilities are considered essential in modern risk management tools:

Automated risk assessment systems

Automation allows providers to generate assessment results within days rather than months, while minimizing human error and ensuring consistent outcomes.


Dynamic risk registers and visual risk mapping

Visualization tools such as heatmaps help security teams quickly identify which risks pose the greatest threat and should be addressed first.
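A risk heatmap of this kind is, at its core, a likelihood/impact grid. The minimal Python sketch below (the `heatmap` and `render` helpers and the sample findings are hypothetical, assumed only for illustration) shows how findings might be bucketed into a 5x5 grid and rendered as text.

```python
from collections import defaultdict


def heatmap(risks):
    """Bucket (name, likelihood, impact) findings into a 5x5 grid."""
    grid = defaultdict(list)
    for name, likelihood, impact in risks:
        grid[(likelihood, impact)].append(name)
    return grid


def render(grid):
    # Text rendering: rows = likelihood (high to low), columns = impact (low to high).
    # The top-right corner holds the risks to address first.
    for likelihood in range(5, 0, -1):
        row = []
        for impact in range(1, 6):
            count = len(grid[(likelihood, impact)])
            row.append(str(count) if count else ".")
        print(" ".join(row))


grid = heatmap([
    ("Phishing exposure", 5, 4),
    ("Ransomware via RDP", 4, 5),
    ("Stale admin accounts", 3, 4),
])
render(grid)
```

A real platform would render this as a colored matrix, but the underlying data structure is the same: the cells nearest the high-likelihood, high-impact corner identify what to remediate first.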


Action-oriented remediation planning

Effective platforms convert risk findings into structured and prioritized tasks aligned with both compliance obligations and business objectives.


Customizable risk tolerance frameworks

Organizations can adapt risk scoring models to match each client’s specific operational priorities and appetite for risk.
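One way such customization might work is a per-client threshold profile that maps the same risk score to different action bands. The sketch below is purely illustrative: the `classify` function and the sample profiles are assumptions, not any vendor's API.

```python
def classify(score: int, tolerance: dict[str, int]) -> str:
    """Map a risk score to an action band using a client-specific tolerance profile.

    `tolerance` gives the minimum score for each band, so a risk-averse
    client escalates at lower scores than a risk-tolerant one.
    """
    if score >= tolerance["critical"]:
        return "remediate immediately"
    if score >= tolerance["elevated"]:
        return "mitigate this quarter"
    return "accept and monitor"


# Hypothetical profiles: a hospital with low risk appetite vs. a marketing agency
hospital = {"critical": 12, "elevated": 6}
agency = {"critical": 20, "elevated": 10}

print(classify(15, hospital))  # remediate immediately
print(classify(15, agency))    # mitigate this quarter
```

The same finding thus produces different recommendations per client, which is what lets a provider tailor one assessment workflow to many risk appetites.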

The MSP Growth Guide provides additional details on the features providers should consider when evaluating potential solutions.


Building Long-Term Strategic Value with AI-Driven Risk Management

For MSPs and MSSPs seeking to expand their cybersecurity practices, AI-powered risk management offers a way to deliver consistent value while improving operational efficiency.

By automating risk assessments, prioritizing security issues based on business impact, and standardizing reporting processes, these platforms enable providers to deliver reliable cybersecurity services to a growing client base.

The guide “The MSP Growth Guide: How MSPs Use AI-Powered Risk Management to Scale Their Cybersecurity Business” explains how service providers can integrate AI-driven risk management into their offerings to support long-term growth.

Organizations interested in strengthening customer relationships, expanding cybersecurity services, and building a competitive advantage may benefit from adopting risk-focused security strategies supported by AI-enabled platforms.