Cybersecurity investigators at Google have confirmed that state-sponsored hacking groups are actively relying on generative artificial intelligence to improve how they research targets, prepare cyber campaigns, and develop malicious tools. According to the company’s threat intelligence teams, North Korea–linked attackers were observed using the firm’s AI platform, Gemini, to collect and summarize publicly available information about organizations and employees they intended to target. This type of intelligence gathering allows attackers to better understand who works at sensitive companies, what technical roles exist, and how to approach victims in a convincing way.
Investigators explained that the attackers searched for details about leading cybersecurity and defense companies, along with information about specific job positions and salary ranges. These insights help threat actors craft more realistic fake identities and messages, often impersonating recruiters or professionals to gain the trust of their targets. Security experts warned that this activity closely resembles legitimate professional research, which makes it harder for defenders to distinguish normal online behavior from hostile preparation.
The hacking group involved, tracked as UNC2970, is linked to North Korea and overlaps with a network widely known as Lazarus Group. This group has previously run a long-term operation in which attackers pretended to offer job opportunities to professionals in aerospace, defense, and energy companies, only to deliver malware instead. Researchers say this group continues to focus heavily on defense-related targets and regularly impersonates corporate recruiters to begin contact with victims.
The misuse of AI is not limited to one actor. Multiple hacking groups connected to China and Iran were also found using AI tools to support different phases of their operations. Some groups used AI to gather targeted intelligence, including collecting email addresses and account details. Others relied on AI to analyze software weaknesses, prepare technical testing plans, interpret documentation from open-source tools, and debug exploit code. Certain actors used AI to build scanning tools and malicious web shells, while others created fake online identities to manipulate individuals into interacting with them. In several cases, attackers claimed to be security researchers or competition participants in order to bypass safety restrictions built into AI systems.
Researchers also identified malware that directly communicates with AI services to generate harmful code during an attack. One such tool, HONESTCUE, requests programming instructions from AI platforms and receives source code that is used to build additional malicious components on the victim’s system. Instead of storing files on disk, this malware compiles and runs code directly in memory using legitimate system tools, making detection and forensic analysis more difficult. Separately, investigators uncovered phishing kits designed to look like cryptocurrency exchanges. These fake platforms were built using automated website creation tools from Lovable AI and were used to trick victims into handing over login credentials. Parts of this activity were linked to a financially motivated group known as UNC5356.
Security teams also reported an increase in so-called ClickFix campaigns. In these schemes, attackers use public sharing features on AI platforms to publish convincing step-by-step guides that appear to fix common computer problems. In reality, these instructions lead users to install malware that steals personal and financial data. This trend was first flagged in late 2025 by Huntress.
Another growing threat involves model extraction attacks. In these cases, adversaries repeatedly query proprietary AI systems in order to observe how they respond and then train their own models to imitate the same behavior. In one large campaign, attackers sent more than 100,000 prompts to replicate how an AI model reasons across many tasks in different languages. Researchers at Praetorian demonstrated that a functional replica could be built using a relatively small number of queries and limited training time. Experts warned that keeping AI model parameters secret is not enough, because every response an AI system provides can be used as training data for attackers.
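The extraction idea can be sketched with a deliberately toy setup: in place of a real AI system, the "victim" below is a hidden linear scoring function, and the attacker recovers an equivalent model purely from query/response pairs. This is an illustration of the principle only, not of any real campaign's tooling; all names and numbers are invented.

```python
import random

# Hypothetical "proprietary" model: a linear scoring function whose
# weights the attacker never sees. Only its outputs are observable.
SECRET_W = [0.7, -1.3, 2.1]

def victim_api(x):
    """Black-box query: returns only the model's output for input x."""
    return sum(w * xi for w, xi in zip(SECRET_W, x))

def extract_model(n_queries=500, lr=0.01, epochs=200):
    """Fit a 'student' model purely from harvested query responses."""
    random.seed(0)
    queries = [[random.uniform(-1, 1) for _ in range(3)]
               for _ in range(n_queries)]
    responses = [victim_api(x) for x in queries]   # harvested outputs
    w = [0.0, 0.0, 0.0]                            # student weights
    for _ in range(epochs):                        # plain SGD on squared error
        for x, y in zip(queries, responses):
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

student = extract_model()
print(student)  # converges toward SECRET_W without ever reading it
```

The point the researchers make holds even in this toy: every response leaks training signal, so keeping the parameters secret does not prevent replication.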
Google, which launched its AI Cyber Defense Initiative in 2024, stated that artificial intelligence is increasingly amplifying the capabilities of cybercriminals by improving their efficiency and speed. Company representatives cautioned that as attackers integrate AI into routine operations, the volume and sophistication of attacks will continue to rise. Security specialists argue that defenders must adopt similar AI-powered tools to automate threat detection, accelerate response times, and operate at the same machine-level speed as modern attacks.
The nonprofit cybersecurity group Shadowserver has found more than six thousand SmarterMail systems reachable online and potentially exposed to a serious login vulnerability. The finding comes as attackers increasingly target outdated corporate mail servers left unprotected.
The threat actors used internet-exposed SolarWinds Web Help Desk (WHD) instances to gain initial access and then moved laterally across the organization's network to other high-value assets, according to Microsoft's disclosure of a multi-stage attack.
However, it is unclear if the activity used a previously patched vulnerability (CVE-2025-26399, CVSS score: 9.8) or recently revealed vulnerabilities (CVE-2025-40551, CVSS score: 9.8, and CVE-2025-40536, CVSS score: 8.1), according to the Microsoft Defender Security Research Team.
"Since the attacks occurred in December 2025 and on machines vulnerable to both the old and new set of CVEs at the same time, we cannot reliably confirm the exact CVE used to gain an initial foothold," the company said in the report.
CVE-2025-40551 and CVE-2025-26399 are both untrusted-data deserialization vulnerabilities that could result in remote code execution, while CVE-2025-40536 is a security control bypass that might enable an unauthenticated attacker to access some restricted functionality.
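The flaws above are in SolarWinds' product, but the bug class is easy to illustrate in Python, where `pickle` has the same property: deserializing untrusted bytes can execute attacker-chosen code. This benign sketch (all names are illustrative) records a side effect instead of running a real command.

```python
import pickle

# Benign stand-in for attacker-controlled code. A real payload would
# invoke something like os.system; here it only records that it ran.
executed = []

def attacker_code(msg):
    executed.append(msg)

class Payload:
    def __reduce__(self):
        # pickle will call attacker_code(...) on the RECEIVING side
        return (attacker_code, ("ran during deserialization",))

blob = pickle.dumps(Payload())   # bytes an attacker could send
pickle.loads(blob)               # what a vulnerable service would do
print(executed)                  # ['ran during deserialization']
```

This is why deserializing untrusted input earns critical CVSS scores: the serialized data itself decides what code runs on the server.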
Citing evidence of active exploitation in the wild, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) added CVE-2025-40551 to its Known Exploited Vulnerabilities (KEV) catalog last week, requiring Federal Civilian Executive Branch (FCEB) agencies to remediate the flaw by February 6, 2026.
In the attacks Microsoft discovered, successful exploitation of the exposed SolarWinds WHD instance gave the attackers unauthenticated remote code execution, allowing them to run arbitrary commands within the WHD application environment.
Microsoft said that in at least one instance, the threat actors used a DCSync attack, in which they impersonated a Domain Controller (DC) and requested password hashes and other sensitive data from an Active Directory (AD) database.
To counter the attack, organizations are advised to update WHD instances, identify and remove any unauthorized RMM tools, rotate admin and service accounts, and isolate vulnerable workstations to contain the breach.
"This activity reflects a common but high-impact pattern: a single exposed application can provide a path to full domain compromise when vulnerabilities are unpatched or insufficiently monitored," the creator of Windows stated.
Two students affiliated with Stanford University have raised $2 million to expand an accelerator program designed for entrepreneurs who are still in college or who have recently graduated. The initiative, called Breakthrough Ventures, focuses on helping early-stage founders move from rough ideas to viable businesses by providing capital, guidance, and access to professional networks.
The program was created by Roman Scott, a recent graduate, and Itbaan Nafi, a current master’s student. Their work began with small-scale demo days held at Stanford in 2024, where student teams presented early concepts and received feedback. Interest from participants and observers revealed a clear gap. Many students had promising ideas but lacked practical support, legal guidance, and introductions to investors. The founders then formalized the effort into a structured accelerator and raised funding to scale it.
Breakthrough Ventures aims to address two common obstacles faced by student founders. First, early funding is difficult to access before a product or revenue exists. Second, students often do not have reliable access to mentors and industry networks. The program responds to both challenges through a combination of financial support and hands-on assistance.
Selected teams receive grant funding of up to $10,000 without giving up ownership in their companies. Participants also gain access to legal support and structured mentorship from experienced professionals. The program includes technical resources such as compute credits from technology partners, which can lower early development costs for startups building software or data-driven products. At the end of the program, founders who demonstrate progress may be considered for additional investment of up to $50,000.
The accelerator operates through a hybrid format. Founders participate in a mix of online sessions and in-person meetups, and the program concludes with a demo day at Stanford, where teams present their progress to potential investors and collaborators. This structure is intended to keep participation accessible while still offering in-person exposure to the startup ecosystem.
Over the next three years, the organizers plan to deploy the $2 million fund to support at least 100 student-led companies across areas such as artificial intelligence, healthcare, consumer products, sustainability, and deep technology. By targeting founders at an early stage, the program aims to reduce the friction between having an idea and building a credible company, while promoting responsible, well-supported innovation within the student community.
Security researchers have identified a previously undocumented cyber espionage group that infiltrated at least 70 government and critical infrastructure organizations across 37 countries within the past year. The same activity cluster also conducted wide-scale scanning and probing of government-related systems connected to 155 countries between November and December 2025, indicating a broad intelligence collection effort rather than isolated attacks.
The group is tracked as TGR-STA-1030, a temporary designation used for actors assessed to operate with state-backed intent. Investigators report evidence of activity dating back to January 2024. While no specific country has been publicly confirmed as the sponsor, technical indicators suggest an Asian operational footprint. These indicators include the services and tools used, language and configuration preferences, targeting patterns tied to regional interests, and working hours consistent with the GMT+8 time zone.
Who was targeted and what was taken
Confirmed victims include national law enforcement and border agencies, finance ministries, and departments responsible for trade, natural resources, and diplomatic affairs. In several intrusions, attackers maintained access for months. During these periods, sensitive data was taken from compromised email servers, including financial negotiations, contract material, banking information, and operational details linked to military or security functions.
How the intrusions worked
The initial entry point commonly involved phishing messages that led recipients to download files hosted on a legitimate cloud storage service. The downloaded archive contained a custom loader and a decoy file. The malware was engineered to avoid automated analysis by refusing to run unless specific environmental conditions were met, including a required screen resolution and the presence of the decoy file. It also checked for the presence of selected security products before proceeding.
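The gating logic described above can be sketched as follows. The specific resolution, decoy filename, and process names below are hypothetical placeholders, since those details are not given here; the sketch only shows the shape of the checks an analyst would encounter.

```python
import os

# All values are invented for illustration; the real loader's required
# resolution, decoy filename, and checked security products are not public
# at this level of detail.
REQUIRED_RESOLUTION = (1920, 1080)     # hypothetical
DECOY_FILE = "invoice.pdf"             # hypothetical decoy from the archive
BLOCKED_PROCESSES = {"analyzer.exe"}   # hypothetical security product

def should_run(screen_resolution, workdir, running_processes):
    """Refuse to execute unless the environment looks like a real victim."""
    if screen_resolution != REQUIRED_RESOLUTION:
        return False   # headless sandboxes often report unusual resolutions
    if not os.path.exists(os.path.join(workdir, DECOY_FILE)):
        return False   # decoy missing: archive was not unpacked as intended
    if BLOCKED_PROCESSES & set(running_processes):
        return False   # selected security tooling is present
    return True

# A bare analysis VM typically fails one of these checks:
print(should_run((1024, 768), "/tmp", ["analyzer.exe"]))  # False
```

Checks like these are why automated sandboxes often see the sample "do nothing": unless every condition matches a plausible victim machine, the loader simply exits.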
Once active, the loader retrieved additional components disguised as image files from a public code repository. These components were used to deploy a well-known command and control framework to manage compromised systems. The repository linked to this activity has since been taken down.
Beyond phishing, the group relied on known vulnerabilities in widely used enterprise and network software to gain initial access. There is no indication that previously unknown flaws were used. After entry, the attackers employed a mix of command and control tools, web shells for remote access, and tunneling utilities to move traffic through intermediary servers.
Researchers also observed a Linux kernel-level implant that hides processes, files, and network activity by manipulating low-level system functions. This tool concealed directories with a specific name to avoid detection. To mask their operations, the attackers rented infrastructure from legitimate hosting providers and routed traffic through additional relay servers.
Analysts assess that the campaign focuses on countries with active or emerging economic partnerships of interest to the attackers. The scale, persistence, and technical depth of these operations highlight ongoing risks to national security and essential public services, and reinforce the need for timely patching, email security controls, and continuous monitoring across government networks.
Modern organizations rely on a wide range of software systems to run daily operations. While identity and access management tools were originally designed to control users and directory services, much of today’s identity activity no longer sits inside those centralized platforms. Access decisions increasingly happen inside application code, application programming interfaces, service accounts, and custom login mechanisms. In many environments, credentials are stored within applications, permissions are enforced locally, and usage patterns evolve without formal review.
As a result, substantial portions of identity activity operate beyond the visibility of traditional identity, privileged access, and governance tools. This creates a persistent blind spot for security teams. The unseen portion of identity behavior represents risk that cannot be directly monitored or governed using configuration-based controls alone.
Conventional identity programs depend on predefined policies and system settings. These approaches work for centrally managed user accounts, but they do not adequately address custom-built software, legacy authentication processes, embedded secrets, non-human identities such as service accounts, or access routes that bypass identity providers. When these conditions exist, teams are often forced to reconstruct how access occurred after an incident or during an audit. This reactive process is labor-intensive and does not scale in complex enterprise environments.
Orchid Security positions its platform as a way to close this visibility gap through continuous identity observability across applications. The platform follows a four-part operational model designed to align with how security teams work in practice.
First, the platform identifies applications and examines how identity is implemented within them. Lightweight inspection techniques review authentication methods, authorization logic, and credential usage across both managed and unmanaged systems. This produces an inventory of applications, identity types, access flows, and embedded credentials, establishing a baseline of how identity functions in the environment.
Second, observed identity activity is evaluated in context. By linking identities, applications, and access paths, the platform highlights risks such as shared or hardcoded secrets, unused service accounts, privileged access that exists outside centralized controls, and differences between intended access design and real usage. This assessment is grounded in what is actually happening, not in what policies assume should happen.
Third, the platform supports remediation by integrating with existing identity and security processes. Teams can rank risks by potential impact, assign ownership to the appropriate control teams, and monitor progress as issues are addressed. The goal is coordination across current controls rather than replacement.
Finally, because discovery and analysis operate continuously, evidence for governance and compliance is available at all times. Current application inventories, records of identity usage, and documentation of control gaps and corrective actions are maintained on an ongoing basis. This shifts audits from periodic, manual exercises to a continuous readiness model.
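As a rough illustration of the discovery step in the model above, a minimal scanner for credentials embedded in application code might look like the sketch below. The patterns and sample input are invented, and a real observability platform inspects authentication and authorization logic far more deeply than a regex ever could; this only shows the idea of building an inventory from what is actually in the code.

```python
import re

# Toy patterns for credentials assigned directly in source code.
SECRET_PATTERNS = [
    re.compile(r"""(password|passwd|api_key|secret|token)\s*[:=]\s*['"][^'"]+['"]""",
               re.IGNORECASE),
]

def find_embedded_secrets(source, name="<source>"):
    """Return (file, line_number, line) for lines resembling hardcoded credentials."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((name, lineno, line.strip()))
    return hits

# Invented sample input:
sample = '''
db_host = "db.internal"
db_password = "hunter2"
timeout = 30
API_KEY = "sk-live-example"
'''
for hit in find_embedded_secrets(sample, "config.py"):
    print(hit)
```

Findings like these would feed the assessment and remediation stages: each hit becomes an inventory entry with an owner, a risk ranking, and a record of when it was fixed.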
As identity increasingly moves into application layers, sustained visibility into how access actually functions becomes essential for reducing unmanaged exposure, improving audit preparedness, and enabling decisions based on verified operational data rather than assumptions.