A recent Android update marks a significant shift in how text messages are handled on employer-controlled devices. Google has introduced a feature called Android RCS Archival, which lets organisations capture and store all RCS, SMS, and MMS communications sent through Google Messages on fully managed work phones. While the messages remain encrypted in transit, they can now be accessed on the device itself once delivered.
This update is designed to help companies meet compliance and record-keeping requirements, especially in sectors that must retain communication logs for regulatory reasons. Until now, many organizations had blocked RCS entirely because of its encryption, which made it difficult to archive. The new feature gives them a way to support richer messaging while still preserving mandatory records.
Archiving occurs via authorized third-party software that integrates directly with Google Messages on work-managed devices. Once enabled by a company's IT department, the software logs every interaction within a conversation, including messages received, sent, edited, or later deleted. Employees using these devices will see a notification when archiving is active, signaling that their conversations are being logged.
Google has indicated that this functionality applies only to work-managed Android devices; personal phones and personal profiles are not affected, and the update does not give employers access to user data on privately owned devices. The feature must also be intentionally switched on by the organisation; it is not enabled by default.
The update also surfaces a common misconception about encrypted messaging: end-to-end encryption protects content only while it is in transit between devices. When a message lands on a device that is owned and administered by an employer, the organization has the technical ability to capture it. The archival feature does not extend to over-the-top platforms such as WhatsApp or Signal, which manage their own encryption, though those apps can expose data as well when backups aren't encrypted or when the device itself is compromised.
This change also raises a broader issue: one of counterparty risk. A conversation remains private only if both ends of it are stored securely. Screenshots, unsafe backups, and linked devices outside the encrypted environment can all leak message content. Work-phone archiving now becomes part of that wider set of risks users should be aware of.
For employees, the takeaway is clear: a company-issued phone is a workplace tool, not a private device. Any communication that originates from a fully managed device can be archived, meaning personal conversations should stay on a personal phone. Users who rely on encrypted platforms should review their backup settings and avoid mixing personal communication with corporate technology.
Google's new archival option gives organisations a compliance solution that brings RCS in line with traditional SMS logging, while for workers it is a further reminder that privacy expectations shift the moment a device is brought under corporate management.
China has approved major changes to its Cybersecurity Law, marking its first substantial update since the framework was introduced in 2017. The revised legislation, passed by the Standing Committee of the National People’s Congress in late October 2025, is scheduled to come into effect on January 1, 2026. The new version aims to respond to emerging technological risks, refine enforcement powers, and bring greater clarity to how cybersecurity incidents must be handled within the country.
A central addition to the law is a new provision focused on artificial intelligence. This is the first time China’s cybersecurity legislation directly acknowledges AI as an area requiring state guidance. The updated text calls for protective measures around AI development, emphasising the need for ethical guidelines, safety checks, and governance mechanisms for advanced systems. At the same time, the law encourages the use of AI and similar technologies to enhance cybersecurity management. Although the amendment outlines strategic expectations, the specific rules that organisations will need to follow are anticipated to be addressed through later regulations and detailed technical standards.
The revised law also introduces stronger enforcement capabilities. Penalties for serious violations have been raised, giving regulators wider authority to impose heavier fines on both companies and individuals who fail to meet their obligations. The scope of punishable conduct has been expanded, signalling an effort to tighten accountability across China’s digital environment. In addition, the law’s extraterritorial reach has been broadened. Previously, cross-border activities were only included when they targeted critical information infrastructure inside China. The new framework allows authorities to take action against foreign activities that pose any form of network security threat, even if the incident does not involve critical infrastructure. In cases deemed particularly severe, regulators may impose sanctions that include financial restrictions or other punitive actions.
Alongside these amendments, the Cyberspace Administration of China has issued a comprehensive nationwide reporting rule called the Administrative Measures for National Cybersecurity Incident Reporting. This separate regulation will become effective on November 1, 2025. The Measures bring together different reporting requirements that were previously scattered across multiple guidelines, creating a single, consistent system for organisations responsible for operating networks or providing services through Chinese networks. The Measures appear to focus solely on incidents that occur within China, including those that affect infrastructure inside the country.
The reporting rules introduce a clear structure for categorising incidents. Events are divided into four levels based on their impact. Under the new criteria, an incident qualifies as “relatively major” if it involves a data breach affecting more than one million individuals or if it results in economic losses of over RMB 5 million. When such incidents occur, organisations must file an initial report within four hours of discovery. A more complete submission is required within seventy-two hours, followed by a final review report within thirty days after the incident is resolved.
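The thresholds and deadlines above can be sketched in code. This is a hypothetical illustration only: the two "relatively major" criteria and the 4-hour/72-hour/30-day deadlines come from the article, while the function names and any classification logic beyond those two criteria are assumptions.

```python
from datetime import datetime, timedelta

# Thresholds for a "relatively major" incident under the Measures
MAJOR_BREACH_INDIVIDUALS = 1_000_000   # affected individuals
MAJOR_LOSS_RMB = 5_000_000             # economic losses in RMB

def is_relatively_major(individuals_affected: int, loss_rmb: float) -> bool:
    """An incident qualifies if it crosses either threshold."""
    return (individuals_affected > MAJOR_BREACH_INDIVIDUALS
            or loss_rmb > MAJOR_LOSS_RMB)

def reporting_deadlines(discovered_at: datetime) -> dict:
    """Deadlines for the first two reports, measured from discovery."""
    return {
        "initial_report": discovered_at + timedelta(hours=4),
        "full_report": discovered_at + timedelta(hours=72),
    }

def final_report_deadline(resolved_at: datetime) -> datetime:
    """The final review report is due 30 days after resolution, not discovery."""
    return resolved_at + timedelta(days=30)
```

For example, a breach affecting 1.2 million individuals discovered at 08:00 would require an initial report by 12:00 the same day and a full report within three days.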
To streamline compliance, the regulator has provided several reporting channels, including a hotline, an online portal, email, and the agency’s official WeChat account. Organisations that delay reporting, withhold information, or submit false details may face penalties. However, the Measures state that timely and transparent reporting can reduce or remove liability under the revised law.
The European Commission has accused Meta Platforms, the parent company of Facebook and Instagram, of violating the European Union’s Digital Services Act (DSA) by making it unnecessarily difficult for users to report illegal online content and challenge moderation decisions.
In its preliminary findings, the Commission said both platforms lack a user-friendly “Notice and Action” system — the mechanism that allows people to flag unlawful material such as child sexual abuse content or terrorist propaganda. Regulators noted that users face multiple steps and confusing options before they can file a report. The Commission also claimed that Meta’s interface relies on “dark patterns”, which are design features that subtly discourage users from completing certain actions, such as submitting reports.
According to the Commission, Meta’s appeal process also falls short of DSA requirements. The current system allegedly prevents users from adding explanations or submitting supporting evidence when disputing a moderation decision. This, the regulator said, limits users’ ability to express why they believe a decision was unfair and weakens the overall transparency of Meta’s content moderation practices.
The European Commission’s findings are not final, and Meta has the opportunity to respond before any enforcement action is taken. If the Commission confirms these violations, it could issue a non-compliance decision, which may result in penalties of up to 6 percent of Meta’s global annual revenue. The Commission may also impose recurring fines until the company aligns its operations with EU law.
Meta, in a public statement, said it “disagrees with any suggestion” that it breached the DSA. The company stated that it has already made several updates to comply with the law, including revisions to content reporting options, appeals procedures, and data access tools.
The European Commission also raised similar concerns about TikTok, saying that both companies have limited researchers’ access to public data on their platforms. The DSA requires large online platforms to provide sufficient data access so independent researchers can analyze potential harms — for example, whether minors are exposed to illegal or harmful content. The Commission’s review concluded that the data-access tools of Facebook, Instagram, and TikTok are burdensome and leave researchers with incomplete or unreliable datasets, which hinders academic and policy research.
TikTok responded that it has provided data to almost 1,000 research teams and remains committed to transparency. However, the company noted that the DSA’s data-sharing obligations sometimes conflict with the General Data Protection Regulation (GDPR), making it difficult to comply with both laws simultaneously. TikTok urged European regulators to offer clarity on how these two frameworks should be balanced.
Beyond Europe, the investigation may strain relations with the United States. American officials have previously criticized the EU for imposing regulatory burdens on U.S.-based tech firms. U.S. FTC Chairman Andrew Ferguson recently warned companies that censoring or modifying content to satisfy foreign governments could violate U.S. law. President Donald Trump has also expressed opposition to EU digital rules and even threatened tariffs against countries enforcing them.
For now, the Commission’s investigation continues. If confirmed, the case could set a major precedent for how global social media companies manage user safety, transparency, and accountability under Europe’s strict online governance laws.
As technology advances, quantum computing is no longer a distant concept — it is steadily becoming a real-world capability. While this next-generation innovation promises breakthroughs in fields like medicine and materials science, it also poses a serious threat to cybersecurity. The encryption systems that currently protect global digital infrastructure may not withstand the computing power quantum technology will one day unleash.
Data is now the most valuable strategic resource for any organization. Every financial transaction, business operation, and communication depends on encryption to stay secure. However, once quantum computers reach full capability, they could break the mathematical foundations of most existing encryption systems, exposing sensitive data on a global scale.
The urgency of post-quantum security
Post-Quantum Cryptography (PQC) refers to encryption methods designed to remain secure even against quantum computers. Transitioning to PQC will not be an overnight task. It demands re-engineering of applications, operating systems, and infrastructure that rely on traditional cryptography. Businesses must begin preparing now, because once the threat materializes, it will be too late to react effectively.
Experts warn that quantum computing will likely follow the same trajectory as artificial intelligence. Initially, the technology will be accessible only to a few institutions. Over time, as more companies and researchers enter the field, it will become cheaper and widely available, including to cybercriminals. Preparing early is the only viable defense.
Governments are setting the pace
Several governments and standard-setting bodies have already started addressing the challenge. The United Kingdom’s National Cyber Security Centre (NCSC) has urged organizations to adopt quantum-resistant encryption by 2035. The European Union has launched its Quantum Europe Strategy to coordinate member states toward unified standards. Meanwhile, the U.S. National Institute of Standards and Technology (NIST) has finalized its first set of post-quantum encryption algorithms, which serve as a global reference point for organizations looking to begin their transition.
As these efforts gain momentum, businesses must stay informed about emerging regulations and standards. Compliance will require foresight, investment, and close monitoring of how different jurisdictions adapt their cybersecurity frameworks.
To handle the technical and organizational scale of this shift, companies can establish internal Centers of Excellence (CoEs) dedicated to post-quantum readiness. These teams bring together leaders from IT, compliance, legal, product development, and procurement to map vulnerabilities, identify dependencies, and coordinate upgrades.
The CoE model also supports employee training, helping close skill gaps in quantum-related technologies. By testing new encryption algorithms, auditing existing infrastructure, and maintaining company-wide communication, a CoE ensures that no critical process is overlooked.
Industry action has already begun
Leading technology providers have started adopting quantum-safe practices. For example, Red Hat’s Enterprise Linux 10 is among the first operating systems to integrate PQC support, while Kubernetes has begun enabling hybrid encryption methods that combine traditional and quantum-safe algorithms. These developments set a precedent for the rest of the industry, signaling that the shift to PQC is not a theoretical concern but an ongoing transformation.
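The hybrid idea mentioned above can be sketched briefly: derive a session key from both a classical shared secret (e.g. from ECDH) and a post-quantum shared secret (e.g. from an ML-KEM encapsulation), so the key stays safe as long as either algorithm remains unbroken. The sketch below uses random placeholders for the two secrets and a minimal stdlib HKDF; a real deployment would obtain the secrets from actual key-exchange protocols and use a vetted cryptography library.

```python
import hashlib
import hmac
import os

def hkdf_sha256(ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) with an empty salt, using SHA-256."""
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()  # extract step
    okm, block, counter = b"", b"", 1
    while len(okm) < length:  # expand step
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

classical_secret = os.urandom(32)  # stand-in for an ECDH shared secret
pq_secret = os.urandom(32)         # stand-in for a PQC KEM shared secret

# Concatenating both secrets before derivation is the core of the hybrid
# approach: an attacker must break BOTH algorithms to recover the key.
session_key = hkdf_sha256(classical_secret + pq_secret, b"hybrid-example")
```

This is the same structural pattern used by hybrid TLS key exchanges, though production protocols define the exact concatenation and labels precisely.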
The time to prepare is now
Transitioning to a quantum-safe infrastructure will take years, involving system audits, software redesigns, and new cryptographic standards. Organizations that begin planning today will be better equipped to protect their data, meet upcoming regulatory demands, and maintain customer trust in the digital economy.
Quantum computing will redefine the boundaries of cybersecurity. The only question is whether organizations will be ready when that day arrives.
When news breaks about a cyberattack, the ransom demand often steals the spotlight. It's the most visible figure: millions demanded, negotiations unfolding, and sometimes payment made. But in truth, that amount only scratches the surface. The real costs of a cyber incident often emerge long after the headlines fade, in the form of business disruptions, shaken trust, legal pressures, and a long, difficult road to recovery.
One of the most common problems organizations face after a breach is the communication gap between technical experts and senior leadership. While the cybersecurity team focuses on containing the attack, tracing its source, and preserving evidence, the executives are under pressure to reassure clients, restore operations, and navigate complex reporting requirements.
Each group works with valid priorities, but without coordination, efforts can collide. A system that’s isolated for forensic investigation may also be the one that the operations team needs to serve customers. This misalignment is avoidable if organizations plan beyond technology by assigning clear responsibilities across departments and conducting regular crisis simulations to ensure a unified response when an attack hits.
When systems go offline, the impact ripples across every department. A single infected server can halt manufacturing lines, delay financial transactions, or force hospitals to revert to manual record-keeping. Even after the breach is contained, lost time translates into lost revenue and strained customer relationships.
Many companies underestimate downtime in their recovery strategies. Backup plans often focus on restoring data, but not on sustaining operations during outages. Every organization should ask: Can employees access essential tools if systems are locked? Can management make decisions without their usual dashboards? If those answers are uncertain, then the recovery plan is incomplete.
Beyond financial loss, cyber incidents leave a lasting mark on reputation. Customers and partners may begin to question whether their information is safe. Rebuilding that trust requires transparent, timely, and fact-based communication. Sharing too much before confirming the facts can create confusion; saying too little can appear evasive.
Recovery also depends on how well a company understands its data environment. If logs are incomplete or investigations are slow, regaining credibility becomes even harder. The most effective organizations balance honesty with precision, updating stakeholders as verified information becomes available.
The legal consequences of a cyber incident often extend further than companies expect. Even if a business does not directly store consumer data, it may still have obligations under privacy laws, vendor contracts, or insurance terms. State and international regulations increasingly require timely disclosure of breaches, and failing to comply can result in penalties.
Engaging legal and compliance teams before a crisis ensures that everyone understands the organization’s obligations and can act quickly under pressure.
Cybersecurity is no longer just an IT issue; it’s a core business concern. Effective protection depends on organization-wide preparedness. That means bridging gaps between departments, creating holistic response plans that include legal and communication teams, and regularly testing how those plans perform under real-world pressure.
Businesses that focus on resilience, not just recovery, are better positioned to minimize disruption, maintain trust, and recover faster if a cyber incident occurs.
Google announced a new step to make Android apps safer: starting next year, developers who distribute apps to certified Android phones and tablets, even outside Google Play, will need to verify their legal identity. The change ties every app on certified devices to a named developer account, while keeping Android’s ability to run apps from other stores or direct downloads intact.
What this means for everyday users and small developers is straightforward. If you download an app from a website or a third-party store, the app will now be linked to a developer who has provided a legal name, address, email and phone number. Google says hobbyists and students will have a lighter account option, but many independent creators may choose to register as a business to protect personal privacy. Certified devices are the ones that ship with Google services and pass Google’s compatibility tests; devices that do not include Google Play services may follow different rules.
Google’s stated reason is security. The company reported that apps installed from the open internet are far more likely to contain malware than apps on the Play Store, and it says those risks come mainly from people hiding behind anonymous developer identities. By requiring identity verification, Google intends to make it harder for repeat offenders to publish harmful apps and to make malicious actors easier to track.
The rollout is phased so developers and device makers can prepare. Early access invitations begin in October 2025, verification opens to all developers in March 2026, and the rules take effect for certified devices in Brazil, Indonesia, Singapore and Thailand in September 2026. Google plans a wider global rollout in 2027. If you are a developer, review Google’s new developer pages and plan to verify your account well before your target markets enforce the rule.
A similar compliance pattern already exists in some places. For example, Apple requires developers who distribute apps in the European Union to provide a “trader status” and contact details to meet the EU Digital Services Act. These kinds of rules aim to increase accountability, but they also raise questions about privacy, the costs for small creators, and how “open” mobile platforms should remain. Both companies are moving toward tighter oversight of app distribution, with the goal of making digital marketplaces safer and more accountable.
This change marks one of the most significant shifts in Android’s open ecosystem. While users will still have the freedom to install apps from multiple sources, developers will now be held accountable for the software they release. For users, it could mean greater protection against scams and malicious apps. For developers, especially smaller ones, it signals a new balance between maintaining privacy and ensuring trust in the Android platform.
As artificial intelligence becomes more common in businesses, from retail to finance to technology, it's helping teams make faster decisions. But behind these smart predictions is a growing problem: how do you make sure employees only see what they're allowed to, especially when AI mixes information from many different places?
Take this example: a retail company's AI tool predicts upcoming sales trends. To do this, it uses both public market data and private customer records. The output looks clean and useful, but what if that forecast is shown to someone who isn't supposed to access sensitive customer details? That's where access control becomes tricky.
Why Traditional Access Rules Don’t Work for AI
In older systems, access control was straightforward. Each person had certain permissions: developers accessed code, managers viewed reports, and so on. But AI changes the game. These systems pull data from multiple sources, such as internal files, external APIs, and sensor feeds, and combine everything to create insights. That means even if a person only has permission for public data, they might end up seeing results that are based, in part, on private or restricted information.
Why It Matters
Security Concerns: If sensitive data ends up in the wrong hands even indirectly, it can lead to data leaks. A 2025 study showed that over two-thirds of companies had AI-related security issues due to weak access controls.
Legal Risks: Privacy laws like the GDPR require clear separation of data. If a prediction includes restricted inputs and is shown to the wrong person, companies can face heavy fines.
Trust Issues: When employees or clients feel their data isn’t safe, they lose trust in the system, and the business.
What’s Making This So Difficult?
1. AI systems often blend data so deeply that it’s hard to tell what came from where.
2. Access rules are usually fixed, but AI relies on fast-changing data.
3. Companies have many users with different roles and permissions, making enforcement complicated.
4. Permissions are often too broad; for example, someone allowed to "view reports" might accidentally access sensitive content.
How Can Businesses Fix This?
• Track Data Origins: Label data as "public" or "restricted" and monitor where it ends up.
• Flexible Access Rules: Adjust permissions based on user roles and context.
• Filter Outputs: Build AI to hide or mask parts of its response that come from private sources.
• Separate Models: Train different AI models for different user groups, each with its own safe data.
• Monitor Usage: Keep logs of who accessed what, and use alerts to catch suspicious activity.
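The first and third ideas above, tracking data origins and filtering outputs, can be sketched together. Everything in this example is hypothetical: the origin labels, roles, and masking rule are illustrative assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass

@dataclass
class DataItem:
    content: str
    origin: str  # "public" or "restricted" (provenance label)

# Which origins each role is cleared to see (illustrative roles)
CLEARANCE = {
    "analyst": {"public"},
    "compliance_officer": {"public", "restricted"},
}

def filter_output(items: list[DataItem], role: str) -> list[str]:
    """Return only content the role is cleared to see. Restricted inputs
    are masked rather than silently dropped, so the viewer knows part of
    the result was withheld."""
    allowed = CLEARANCE.get(role, set())
    return [item.content if item.origin in allowed else "[restricted]"
            for item in items]

items = [
    DataItem("Q3 sales up 4%", "public"),
    DataItem("customer SSN list", "restricted"),
]
print(filter_output(items, "analyst"))
# An analyst sees the public figure and a masked placeholder for the rest.
```

The key design choice is that provenance labels travel with the data all the way to the output stage, so the filter works even after the AI has blended sources.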
As AI tools grow more advanced and rely on live data from many sources, managing access will only get harder. Businesses must modernize their security strategies to protect sensitive information without slowing down innovation.
Cysecurity News recently interviewed CYFOX (https://www.cyfox.com/) to gain an in-depth understanding of their new platform, OmniSec vCISO (https://www.cyfox.com/omnisec). The platform, designed to simplify compliance and bolster security operations, leverages advanced generative AI (genAI) and aims to transform the traditionally manual processes of compliance into a seamless, automated workflow.
GenAI-Powered Automated Compliance Analysis
One of the platform’s most innovative features is its use of genAI to convert complex regulatory texts into actionable technical requirements. OmniSec vCISO digests global frameworks such as ISO and GDPR, as well as regional mandates, and distills them into clear, prioritized compliance checklists.
Bridging the Data Gap with Dual Integration
For years, security teams have contended with disparate data sources and laborious manual assessments. OmniSec vCISO addresses these challenges through a dual-integration approach: lightweight, agent-based data collection across endpoints and API-driven connections with existing EDR/XDR systems. This method delivers a unified view of network activities, vulnerabilities, and overall security posture, enabling organizations to rapidly identify and address risks while streamlining day-to-day operations.
The system is designed to work with multiple compliance standards simultaneously, allowing organizations to manage overlapping or similar requirements across different regulatory frameworks. Notably, once an issue is resolved in one compliance area, OmniSec vCISO aims to automatically mark corresponding items as fixed in other frameworks with similar criteria. Additionally, the platform offers the flexibility to add new compliance measures—whether they are externally mandated or internal standards—by letting users upload or define requirements in a straightforward manner. This approach keeps organizations current with evolving legal landscapes and internal policies, significantly reducing the time and effort typically required for gap analysis and remediation planning.
Intuitive Interface and Real-Time Reporting
OmniSec vCISO is built with the end user in mind. Its intuitive Q&A dashboard allows security leaders to ask direct questions about their organization’s cybersecurity status—whether querying open vulnerabilities or reviewing asset inventories—and receive immediate, data-backed responses.
Detailed visual reports and compliance scores facilitate internal risk assessments and help convey security statuses clearly to executive teams and stakeholders. Furthermore, the platform incorporates automated, scheduled reporting features that aim to ensure critical updates are delivered promptly, supporting proactive security management.
Future-Forward Capabilities and Broader Integration
During the interview, CYFOX representatives outlined ambitious future enhancements for OmniSec vCISO. Upcoming integrations include support for pulling employee data from systems such as Active Directory and Google Workspace. These enhancements are intended to enable the incorporation of user behavior analytics and risk scoring, thereby extending the platform’s functionality beyond asset management. By evolving into a single hub for all tasks a CISO faces—from compliance remediation to cybersecurity training and awareness—the platform seeks to simplify and centralize the complex landscape of modern cybersecurity operations.
Data Security and Operational Simplicity
OmniSec vCISO is engineered with robust data security at its core. All information is transmitted and stored on CYFOX-managed servers using stringent encryption protocols, ensuring that sensitive data remains secure and under the organization’s control. The platform’s automated, genAI-driven approach aims to reduce manual intervention, allowing organizations to achieve and maintain compliance with minimal operational overhead.
A Measured Step Forward
OmniSec vCISO represents a practical response to the evolving challenges in cybersecurity management. By automating compliance gap analysis, offering the flexibility to add both new and internal compliance frameworks, and managing multiple standards concurrently—with automatic cross-compliance updates when issues are resolved—the platform delivers a balanced solution to the everyday needs of CISOs and compliance officers. The insights shared during the Cysecurity News interview highlight how CYFOX is addressing real-world challenges in modern cybersecurity.
By Cysecurity Staff – March 10, 2025