China has approved major changes to its Cybersecurity Law, marking its first substantial update since the framework was introduced in 2017. The revised legislation, passed by the Standing Committee of the National People’s Congress in late October 2025, is scheduled to come into effect on January 1, 2026. The new version aims to respond to emerging technological risks, refine enforcement powers, and bring greater clarity to how cybersecurity incidents must be handled within the country.
A central addition to the law is a new provision focused on artificial intelligence. This is the first time China’s cybersecurity legislation directly acknowledges AI as an area requiring state guidance. The updated text calls for protective measures around AI development, emphasising the need for ethical guidelines, safety checks, and governance mechanisms for advanced systems. At the same time, the law encourages the use of AI and similar technologies to enhance cybersecurity management. Although the amendment outlines strategic expectations, the specific rules that organisations will need to follow are anticipated to be addressed through later regulations and detailed technical standards.
The revised law also introduces stronger enforcement capabilities. Penalties for serious violations have been raised, giving regulators wider authority to impose heavier fines on both companies and individuals who fail to meet their obligations. The scope of punishable conduct has been expanded, signalling an effort to tighten accountability across China’s digital environment. In addition, the law’s extraterritorial reach has been broadened. Previously, cross-border activities were only included when they targeted critical information infrastructure inside China. The new framework allows authorities to take action against foreign activities that pose any form of network security threat, even if the incident does not involve critical infrastructure. In cases deemed particularly severe, regulators may impose sanctions that include financial restrictions or other punitive actions.
Alongside these amendments, the Cyberspace Administration of China has issued a comprehensive nationwide reporting rule called the Administrative Measures for National Cybersecurity Incident Reporting. This separate regulation will become effective on November 1, 2025. The Measures bring together different reporting requirements that were previously scattered across multiple guidelines, creating a single, consistent system for organisations responsible for operating networks or providing services through Chinese networks. The Measures appear to focus solely on incidents that occur within China, including those that affect infrastructure inside the country.
The reporting rules introduce a clear structure for categorising incidents. Events are divided into four levels based on their impact. Under the new criteria, an incident qualifies as “relatively major” if it involves a data breach affecting more than one million individuals or if it results in economic losses of over RMB 5 million. When such incidents occur, organisations must file an initial report within four hours of discovery. A more complete submission is required within seventy-two hours, followed by a final review report within thirty days after the incident is resolved.
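The thresholds and deadlines above can be sketched as a small classification helper. This is an illustrative sketch only: the function names, labels, and structure are our own, not taken from the regulation, which defines four levels in full.

```python
from datetime import datetime, timedelta

# Published thresholds for the "relatively major" level (illustrative names).
RELATIVELY_MAJOR_INDIVIDUALS = 1_000_000   # data breach affecting > 1M people
RELATIVELY_MAJOR_LOSS_RMB = 5_000_000      # economic losses over RMB 5M

def classify_incident(affected_individuals: int, loss_rmb: float) -> str:
    """Coarse severity label based on the two thresholds cited above."""
    if (affected_individuals > RELATIVELY_MAJOR_INDIVIDUALS
            or loss_rmb > RELATIVELY_MAJOR_LOSS_RMB):
        return "relatively major"
    return "below reporting threshold"

def reporting_deadlines(discovered_at: datetime) -> dict:
    """Deadlines for a 'relatively major' incident: initial report within
    4 hours of discovery, full report within 72 hours. The final review
    report is due 30 days after resolution, so it is computed separately
    once the incident is closed."""
    return {
        "initial_report": discovered_at + timedelta(hours=4),
        "full_report": discovered_at + timedelta(hours=72),
    }
```

For example, a breach affecting 1.2 million individuals would classify as "relatively major" even with no measured economic loss.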
To streamline compliance, the regulator has provided several reporting channels, including a hotline, an online portal, email, and the agency’s official WeChat account. Organisations that delay reporting, withhold information, or submit false details may face penalties. However, the Measures state that timely and transparent reporting can reduce or remove liability under the revised law.
AI-powered threat hunting will only be successful if the data infrastructure is strong too.
Threat hunting powered by AI, automation, or human investigation will only ever be as effective as the data infrastructure it stands on. Security teams sometimes build AI on top of fragmented or poorly governed data, which creates problems later for both the automated systems and the human analysts who rely on them. Even sophisticated algorithms cannot compensate for inconsistent or incomplete data, and AI trained on poor data produces poor results.
Correlated data underpins the whole operation: it reduces noise and surfaces patterns that manual analysis cannot.
Correlating and pre-transforming data makes it easier for LLMs and other AI tools to reason over it, and allows connections between related entities to surface naturally.
The same person may show up under entirely different identifiers: as an IAM principal in AWS, a committer in GitHub, and a document owner in Google Workspace. Looking at any one of those signals gives you only a fragment of the truth.
Considered together, they give you behavioral clarity. Downloading dozens of files from Google Workspace may look odd on its own, but it becomes clearly malicious if the same user also clones dozens of repositories to a personal laptop and makes an S3 bucket public minutes later.
When data from logs, configurations, code repositories, and identity systems all lives in one place, correlations that previously took hours, or were simply impossible, become instant.
For instance, lateral movement using stolen short-lived credentials frequently crosses multiple systems before being discovered. A compromised developer laptop might assume several IAM roles, launch new instances, and access internal databases. Endpoint logs show the local compromise, but the full extent of the intrusion cannot be demonstrated without IAM and network data.
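The scenario above can be sketched in a few lines: map each system's identifiers to one canonical identity, then alert when that identity performs several distinct suspicious actions inside a short window. All identifiers, events, and thresholds below are invented for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical mapping from per-system identifiers to one canonical identity.
IDENTITY_MAP = {
    "arn:aws:iam::123456789012:user/jdoe": "jane.doe",
    "jdoe-dev":                            "jane.doe",   # GitHub handle
    "jane.doe@example.com":                "jane.doe",   # Workspace account
}

# Invented, already-normalised events from three different systems.
EVENTS = [
    {"actor": "jane.doe@example.com", "action": "bulk_download",
     "ts": datetime(2025, 11, 3, 14, 2)},
    {"actor": "jdoe-dev", "action": "mass_repo_clone",
     "ts": datetime(2025, 11, 3, 14, 9)},
    {"actor": "arn:aws:iam::123456789012:user/jdoe",
     "action": "make_bucket_public", "ts": datetime(2025, 11, 3, 14, 15)},
]

SUSPICIOUS = {"bulk_download", "mass_repo_clone", "make_bucket_public"}

def correlated_alerts(events, window=timedelta(minutes=30)):
    """Group events by canonical identity; alert when one identity performs
    three or more distinct suspicious actions inside the time window."""
    by_identity = {}
    for e in events:
        who = IDENTITY_MAP.get(e["actor"], e["actor"])
        by_identity.setdefault(who, []).append(e)
    alerts = []
    for who, evs in by_identity.items():
        evs.sort(key=lambda e: e["ts"])
        actions = {e["action"] for e in evs if e["action"] in SUSPICIOUS}
        if len(actions) >= 3 and evs[-1]["ts"] - evs[0]["ts"] <= window:
            alerts.append(who)
    return alerts
```

Each event alone is a weak signal; only after identity resolution do the three signals collapse into one actor and trip the alert.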
A new security analysis has revealed that nearly half of all network communications between Internet of Things (IoT) devices and traditional IT systems come from devices that pose serious cybersecurity risks.
The report, published by cybersecurity company Palo Alto Networks, analyzed data from over 27 million connected devices across various organizations. The findings show that 48.2 percent of these IoT-to-IT connections came from devices classified as high risk, while an additional 4 percent were labeled critical risk.
These figures underline a growing concern that many organizations are struggling to secure the rapidly expanding number of IoT devices on their networks. Experts noted that a large portion of these devices operate with outdated software, weak default settings, or insecure communication protocols, making them easy targets for cybercriminals.
Why It’s a Growing Threat
IoT devices, ranging from smart security cameras and sensors to industrial control systems, are often connected to the same network as computers and servers used for daily business operations. This creates a problem: once a vulnerable IoT device is compromised, attackers can move deeper into the network, access sensitive data, and disrupt normal operations.
The study emphasized that the main cause behind such widespread exposure is poor network segmentation. Many organizations still run flat networks, where IoT devices and IT systems share the same environment without proper separation. This allows a hacker who infiltrates one device to move easily between systems and cause greater harm.
How Organizations Can Reduce Risk
Security professionals recommend several key actions for both small businesses and large enterprises to strengthen their defenses:
1. Separate Networks:
Keep IoT devices isolated from core IT infrastructure through proper network segmentation. This prevents threats in one area from spreading to another.
2. Adopt Zero Trust Principles:
Follow a security model that does not automatically trust any device or user. Each access request should be verified, and only the minimum level of access should be allowed.
3. Improve Device Visibility:
Maintain an accurate inventory of all devices connected to the network, including personal or unmanaged ones. This helps identify and secure weak points before they can be exploited.
4. Keep Systems Updated:
Regularly patch and update device firmware and software. Unpatched systems often contain known vulnerabilities that attackers can easily exploit.
5. Use Strong Endpoint Protection:
Deploy Endpoint Detection and Response (EDR) or Extended Detection and Response (XDR) tools across managed IT systems, and use monitoring solutions for IoT devices that cannot run these tools directly.
As organizations rely more on connected devices to improve efficiency, the attack surface grows wider. Without proper segmentation, monitoring, and consistent updates, one weak device can become an entry point for cyberattacks that threaten entire operations.
The report reinforces an important lesson: proactive network management is the foundation of cybersecurity. Ensuring visibility, limiting trust, and continuously updating systems can significantly reduce exposure to emerging IoT-based threats.
Earlier this year, a breach of a prominent U.S. non-profit advocacy organization demonstrated the advanced techniques and shared tooling of Chinese espionage groups such as APT41, Space Pirates, and Kelp.
The attackers struck again in April, running a series of commands to probe both internal network access and internet connectivity, with particular focus on a system at 192.0.0.88. They cycled through multiple tactics and protocols, showing both persistence and technical adaptability in reaching specific internal resources.
Following the connectivity tests, the hackers used tools like netstat for network reconnaissance and created a scheduled task via Windows command-line tools.
This task ran the legitimate MSBuild.exe, which processed an outbound.xml file to inject code into csc.exe and establish a connection to a command-and-control (C2) server.
These steps point to automation (through scheduled tasks) and persistence via system-level privileges, increasing both the complexity of the compromise and the potential damage.
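One way defenders hunt for this pattern is to flag process chains where a scheduled task launches MSBuild.exe with an XML project file, and where MSBuild then spawns the C# compiler. A simplified detection sketch over invented, pre-parsed process-log records (the field layout and example rows are assumptions, not from any real log format):

```python
# Hypothetical process events: (parent image, child image, command line).
PROCESS_EVENTS = [
    ("svchost.exe",  "taskeng.exe", ""),
    ("taskeng.exe",  "msbuild.exe", "msbuild.exe outbound.xml"),
    ("msbuild.exe",  "csc.exe",     "csc.exe /noconfig ..."),
    ("explorer.exe", "msbuild.exe", "msbuild.exe app.csproj"),
]

TASK_HOSTS = ("taskeng.exe", "svchost.exe")  # common task-scheduler parents

def flag_msbuild_abuse(events):
    """Flag MSBuild launched by the task scheduler with an .xml project file,
    and MSBuild spawning csc.exe -- both hallmarks of inline-task abuse."""
    findings = []
    for parent, child, cmdline in events:
        if (child == "msbuild.exe" and parent in TASK_HOSTS
                and ".xml" in cmdline.lower()):
            findings.append(f"scheduled-task MSBuild with XML project: {cmdline}")
        if parent == "msbuild.exe" and child == "csc.exe":
            findings.append("MSBuild spawned csc.exe (possible inline task)")
    return findings
```

Note that the benign case (a developer running MSBuild on a `.csproj` from Explorer) passes silently; only the scheduler-driven XML chain is flagged.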
The techniques and toolkit bear the hallmarks of several Chinese espionage groups. The attackers weaponized legitimate software components in a technique known as DLL sideloading, abusing vetysafe.exe (a VipreAV component signed by Sunbelt Software, Inc.) to load a malicious payload called sbamres.dll.
This tactic was earlier observed in campaigns linked to Earth Longzhi and Space Pirates, the former a known subgroup of APT41.
The same tactic has also appeared in cases connected to Kelp, underscoring the extensive tool-sharing among Chinese APTs.
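A common hunting heuristic for DLL sideloading is to look for a signed executable loading a DLL from its own, user-writable directory rather than a system path. A minimal sketch over invented module-load records (paths and record layout are assumptions for illustration):

```python
import ntpath  # Windows-style path handling, works on any platform

# Invented module-load records: (loading process path, loaded DLL path).
LOAD_EVENTS = [
    (r"C:\Users\victim\AppData\Local\Temp\vetysafe.exe",
     r"C:\Users\victim\AppData\Local\Temp\sbamres.dll"),
    (r"C:\Program Files\App\app.exe",
     r"C:\Windows\System32\kernel32.dll"),
]

SYSTEM_DIRS = (r"c:\windows\system32", r"c:\windows\syswow64")

def sideload_candidates(events):
    """Flag loads where the DLL sits in the executable's own directory
    and that directory is not a protected system path -- a common
    DLL-sideloading indicator, not proof of compromise by itself."""
    hits = []
    for exe, dll in events:
        exe_dir = ntpath.dirname(exe).lower()
        dll_dir = ntpath.dirname(dll).lower()
        if dll_dir == exe_dir and not dll_dir.startswith(SYSTEM_DIRS):
            hits.append((exe, dll))
    return hits
```

This only surfaces candidates; an analyst would still verify the DLL's signature and hash before treating the load as malicious.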
Google has launched a detailed investigation into a weeks-long security breach after discovering that a contractor with legitimate system privileges had been quietly collecting internal screenshots and confidential files tied to the Play Store ecosystem. The company uncovered the activity only after it had continued for several weeks, giving the individual enough time to gather sensitive technical data before being detected.
According to verified cybersecurity reports, the contractor managed to access information that explained the internal functioning of the Play Store, Google’s global marketplace serving billions of Android users. The files reportedly included documentation describing the structure of Play Store infrastructure, the technical guardrails that screen malicious apps, and the compliance systems designed to meet international data protection laws. The exposure of such material presents serious risks, as it could help malicious actors identify weaknesses in Google’s defense systems or replicate its internal processes to deceive automated security checks.
Upon discovery of the breach, Google initiated a forensic review to determine how much information was accessed and whether it was shared externally. The company has also reported the matter to law enforcement and begun a complete reassessment of its third-party access procedures. Internal sources indicate that Google is now tightening security for all contractor accounts by expanding multi-factor authentication requirements, deploying AI-based systems to detect suspicious activities such as repeated screenshot captures, and enforcing stricter segregation of roles and privileges. Additional measures include enhanced background checks for third-party employees who handle sensitive systems, as part of a larger overhaul of Google’s contractor risk management framework.
Experts note that the incident arrives during a period of heightened regulatory attention on Google’s data protection and antitrust practices. The breach not only exposes potential security weaknesses but also raises broader concerns about insider threats, one of the most persistent and challenging issues in cybersecurity. Even companies that invest heavily in digital defenses remain vulnerable when authorized users intentionally misuse their access for personal gain or external collaboration.
The incident has also revived discussion about earlier insider threat cases at Google. In one of the most significant examples, a former software engineer was charged with stealing confidential files related to Google’s artificial intelligence systems between 2022 and 2023. Investigators revealed that he had transferred hundreds of internal documents to personal cloud accounts and even worked with external companies while still employed at Google. That case, which resulted in multiple charges of trade secret theft and economic espionage, underlined how intellectual property theft by insiders can evolve into major national security concerns.
For Google, the latest breach serves as another reminder that internal misuse, whether by employees or contractors, remains a critical weak point. As the investigation continues, the company is expected to strengthen oversight across its global operations. Cybersecurity analysts emphasize that organizations managing large user platforms must combine strong technical barriers with vigilant monitoring of human behavior to prevent insider-led compromises before they escalate into large-scale risks.
The European Commission has accused Meta Platforms, the parent company of Facebook and Instagram, of violating the European Union’s Digital Services Act (DSA) by making it unnecessarily difficult for users to report illegal online content and challenge moderation decisions.
In its preliminary findings, the Commission said both platforms lack a user-friendly “Notice and Action” system — the mechanism that allows people to flag unlawful material such as child sexual abuse content or terrorist propaganda. Regulators noted that users face multiple steps and confusing options before they can file a report. The Commission also claimed that Meta’s interface relies on “dark patterns”, which are design features that subtly discourage users from completing certain actions, such as submitting reports.
According to the Commission, Meta’s appeal process also falls short of DSA requirements. The current system allegedly prevents users from adding explanations or submitting supporting evidence when disputing a moderation decision. This, the regulator said, limits users’ ability to express why they believe a decision was unfair and weakens the overall transparency of Meta’s content moderation practices.
The European Commission’s findings are not final, and Meta has the opportunity to respond before any enforcement action is taken. If the Commission confirms these violations, it could issue a non-compliance decision, which may result in penalties of up to 6 percent of Meta’s global annual revenue. The Commission may also impose recurring fines until the company aligns its operations with EU law.
Meta, in a public statement, said it “disagrees with any suggestion” that it breached the DSA. The company stated that it has already made several updates to comply with the law, including revisions to content reporting options, appeals procedures, and data access tools.
The European Commission also raised similar concerns about TikTok, saying that both companies have limited researchers’ access to public data on their platforms. The DSA requires large online platforms to provide sufficient data access so independent researchers can analyze potential harms — for example, whether minors are exposed to illegal or harmful content. The Commission’s review concluded that the data-access tools of Facebook, Instagram, and TikTok are burdensome and leave researchers with incomplete or unreliable datasets, which hinders academic and policy research.
TikTok responded that it has provided data to almost 1,000 research teams and remains committed to transparency. However, the company noted that the DSA’s data-sharing obligations sometimes conflict with the General Data Protection Regulation (GDPR), making it difficult to comply with both laws simultaneously. TikTok urged European regulators to offer clarity on how these two frameworks should be balanced.
Beyond Europe, the investigation may strain relations with the United States. American officials have previously criticized the EU for imposing regulatory burdens on U.S.-based tech firms. U.S. FTC Chairman Andrew Ferguson recently warned companies that censoring or modifying content to satisfy foreign governments could violate U.S. law. President Donald Trump has also expressed opposition to EU digital rules and even threatened tariffs against countries enforcing them.
For now, the Commission’s investigation continues. If confirmed, the case could set a major precedent for how global social media companies manage user safety, transparency, and accountability under Europe’s strict online governance laws.
A password is essentially a secret code you use to prove your identity online. But weak password habits are widespread. A CyberNews report revealed that 94% of 19 billion leaked passwords were reused, and many followed predictable patterns—think “123456,” names, cities, or popular brands.
When breaches occur, these passwords spread rapidly, leading to account takeovers, phishing scams, and identity theft. In fact, hackers often attempt to exploit leaked credentials within an hour of a breach.
Phishing attacks—where users are tricked into entering their passwords on fake websites—continue to rise, with more than 3 billion phishing emails sent daily worldwide.
Experts recommend creating unique, complex passwords or even memorable passphrases like “CrocApplePurseBike.” Associating it with a story can help you recall it easily.
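A passphrase of that style can be generated rather than invented, using Python's cryptographically secure `secrets` module. The wordlist below is a tiny illustrative stand-in; a real generator would draw from a large list such as the EFF diceware words.

```python
import secrets

# Tiny illustrative wordlist (a real one would have thousands of entries).
WORDS = ["croc", "apple", "purse", "bike", "lantern", "meadow",
         "quartz", "violin", "harbor", "pepper", "falcon", "sunset"]

def make_passphrase(n_words: int = 4) -> str:
    """Build a CrocApplePurseBike-style passphrase from randomly chosen
    words, using secrets.choice rather than random.choice so the
    selection is suitable for security purposes."""
    return "".join(secrets.choice(WORDS).capitalize() for _ in range(n_words))
```

With a realistic wordlist of a few thousand entries, four random words already yield far more entropy than a typical human-chosen password.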
Emerging around four years ago, passkeys use public-key cryptography, a process that creates two linked keys—one public and one private.
The public key is shared with the website.
The private key stays safely stored on your device.
When you log in, your device signs a unique challenge using the private key, confirming your identity without sending any password. To authorize this action, you’ll usually verify with your fingerprint or face ID, ensuring that only you can access your accounts.
Even if the public key is stolen, it’s useless without the private one—making passkeys inherently phishing-proof and more secure. Each passkey is also unique to the website, so it can’t be reused elsewhere.
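The challenge-signature flow can be illustrated end to end with a toy RSA keypair. The tiny primes below are deliberately insecure and purely for demonstration; real passkeys follow the WebAuthn standard and use algorithms such as ECDSA or Ed25519, with the private key held in secure hardware.

```python
import hashlib
import secrets

# --- toy RSA keypair (tiny primes; NOT secure, illustration only) ---
p, q = 61, 53
n = p * q                            # public modulus, shared with the website
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent, stays on the device

def sign(challenge: bytes) -> int:
    """Device side: sign a hash of the server's challenge with the private
    key (gated by a fingerprint or face check in a real passkey)."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(h, d, n)

def verify(challenge: bytes, signature: int) -> bool:
    """Server side: check the signature using only the public key (n, e).
    No password or private key ever crosses the wire."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == h

challenge = secrets.token_bytes(16)  # fresh, unique challenge per login
assert verify(challenge, sign(challenge))
```

Because each login uses a fresh random challenge, a captured signature cannot be replayed, and the public key alone is useless to an attacker.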
Passkeys eliminate the need to remember passwords or type them manually. Since they’re tied to your device and require biometric approval, they’re both more convenient and more secure.
However, the technology isn’t yet universal. Compatibility issues between platforms like Apple and Microsoft have slowed adoption, though these gaps are closing as newer devices and systems improve integration.
From a cybersecurity perspective, passkeys are clearly the superior option—they’re stronger, resistant to phishing, and easy to use. But widespread adoption will take time. Many websites still rely on traditional passwords, and transitioning millions of users will be a long process.
Until then, maintaining good password hygiene remains essential: use unique passwords for every account, enable multi-factor authentication, and change any reused credentials immediately.