China has approved major changes to its Cybersecurity Law, marking its first substantial update since the framework was introduced in 2017. The revised legislation, passed by the Standing Committee of the National People’s Congress in late October 2025, is scheduled to come into effect on January 1, 2026. The new version aims to respond to emerging technological risks, refine enforcement powers, and bring greater clarity to how cybersecurity incidents must be handled within the country.
A central addition to the law is a new provision focused on artificial intelligence. This is the first time China’s cybersecurity legislation directly acknowledges AI as an area requiring state guidance. The updated text calls for protective measures around AI development, emphasising the need for ethical guidelines, safety checks, and governance mechanisms for advanced systems. At the same time, the law encourages the use of AI and similar technologies to enhance cybersecurity management. Although the amendment outlines strategic expectations, the specific rules that organisations will need to follow are anticipated to be addressed through later regulations and detailed technical standards.
The revised law also introduces stronger enforcement capabilities. Penalties for serious violations have been raised, giving regulators wider authority to impose heavier fines on both companies and individuals who fail to meet their obligations. The scope of punishable conduct has been expanded, signalling an effort to tighten accountability across China’s digital environment. In addition, the law’s extraterritorial reach has been broadened. Previously, cross-border activities were only included when they targeted critical information infrastructure inside China. The new framework allows authorities to take action against foreign activities that pose any form of network security threat, even if the incident does not involve critical infrastructure. In cases deemed particularly severe, regulators may impose sanctions that include financial restrictions or other punitive actions.
Alongside these amendments, the Cyberspace Administration of China has issued a comprehensive nationwide reporting rule called the Administrative Measures for National Cybersecurity Incident Reporting. This separate regulation will become effective on November 1, 2025. The Measures bring together different reporting requirements that were previously scattered across multiple guidelines, creating a single, consistent system for organisations responsible for operating networks or providing services through Chinese networks. The Measures appear to focus solely on incidents that occur within China, including those that affect infrastructure inside the country.
The reporting rules introduce a clear structure for categorising incidents. Events are divided into four levels based on their impact. Under the new criteria, an incident qualifies as “relatively major” if it involves a data breach affecting more than one million individuals or if it results in economic losses of over RMB 5 million. When such incidents occur, organisations must file an initial report within four hours of discovery. A more complete submission is required within seventy-two hours, followed by a final review report within thirty days after the incident is resolved.
To streamline compliance, the regulator has provided several reporting channels, including a hotline, an online portal, email, and the agency’s official WeChat account. Organisations that delay reporting, withhold information, or submit false details may face penalties. However, the Measures state that timely and transparent reporting can reduce or remove liability under the revised law.
AI-powered threat hunting will only be successful if the data infrastructure is strong too.
Threat hunting powered by AI, automation, or human investigation will only ever be as effective as the data infrastructure it stands on. Sometimes, security teams build AI over leaked data or without proper data care. This can create issues later. It can affect both AI and humans. Even sophisticated algorithms can't handle inconsistent or incomplete data. AI that is trained on poor data will also lead to poor results.
A correlated data controls the operation. It reduces noise and helps in noticing patterns that manual systems can't.
Correlating and pre-transforming the data makes it easy for LLMs and other AI tools. It also allows connected components to surface naturally.
A same person may show up under entirely distinct names as an IAM principal in AWS, a committer in GitHub, and a document owner in Google Workspace. You only have a small portion of the truth when you look at any one of those signs.
You have behavioral clarity when you consider them collectively. While downloading dozens of items from Google Workspace may look strange on its own, it becomes obviously malevolent if the same user also clones dozens of repositories to a personal laptop and launches a public S3 bucket minutes later.
Correlations that previously took hours or were impossible become instant when data from logs, configurations, code repositories, and identification systems are all housed in one location.
For instance, lateral movement that uses short-lived credentials that have been stolen frequently passes across multiple systems before being discovered. A hacked developer laptop might take on several IAM roles, launch new instances, and access internal databases. Endpoint logs show the local compromise, but the extent of the intrusion cannot be demonstrated without IAM and network data.
This type of malware, often presented as a trustworthy mobile application, has the potential to steal your data, track your whereabouts, record conversations, monitor your social media activity, take screenshots of your activities, and more. Phishing, a phony mobile application, or a once-reliable software that was upgraded over the air to become an information thief are some of the ways it could end up on your phone.
Types of malware
Legitimate apps are frequently packaged with nuisanceware. It modifies your homepage or search engine settings, interrupts your web browsing with pop-ups, and may collect your browsing information to sell to networks and advertising agencies.
Nuisanceware is typically not harmful or a threat to your fundamental security, despite being seen as malvertising. Rather, many malware packages focus on generating revenue by persuading users to view or click on advertisements.
Additionally, there is generic mobile spyware. These types of malware collect information from the operating system and clipboard in addition to potentially valuable items like account credentials or bitcoin wallet data. Spray-and-pray phishing attempts may employ spyware, which isn't always targeted.
Compared to simple spyware, advanced spyware is sometimes also referred to as stalkerware. This spyware, which is unethical and frequently harmful, can occasionally be found on desktop computers but is becoming more frequently installed on phones.
Lastly, there is commercial spyware of governmental quality. One of the most popular variations is Pegasus, which is sold to governments as a weapon for law enforcement and counterterrorism.
Pegasus was discovered on smartphones owned by lawyers, journalists, activists, and political dissidents. Commercial-grade malware is unlikely to affect you unless you belong to a group that governments with ethical dilemmas are particularly interested in. This is because commercial-grade spyware is expensive and requires careful victim selection and targeting.
There are signs that you may be the target of a spyware or stalkerware operator.
Receiving strange or unexpected emails or messages on social media could be a sign of a spyware infection attempt. You should remove these without downloading any files or clicking any links.
The mega-messenger from Meta is allegedly collecting user data to generate ad money, according to recent attacks on WhatsApp. WhatsApp strongly opposes these fresh accusations, but it didn't help that a message of its own appeared to imply the same.
There are two prominent origins of the recent attacks. Few experts are as well-known as Elon Musk, particularly when it occurs on X, the platform he owns. Musk asserted on the Joe Rogan Experience that "WhatsApp knows enough about what you're texting to know what ads to show you." "That is a serious security flaw."
These so-called "hooks for advertising" are typically thought to rely on metadata, which includes information on who messages whom, when, and how frequently, as well as other information from other sources that is included in a user's profile.
The message content itself is shielded by end-to-end encryption, which is the default setting for all 3 billion WhatsApp users. Signal's open-source encryption protocol, which the Meta platform adopted and modified for its own use, is the foundation of WhatsApp's security. So, in light of these new attacks, do you suddenly need to stop using WhatsApp?
In reality, WhatsApp's content is completely encrypted. There has never been any proof that Meta, WhatsApp, or anybody else can read the content itself. However, the platform you are utilizing is controlled by Meta, and it is aware of your identity. It does gather information on how you use the platform.
Additionally, it shares information with Meta so that it can "show relevant offers/ads." Signal has a small portion of WhatsApp's user base, but it does not gather metadata in the same manner. Think about using Signal instead for sensitive content. Steer clear of Telegram since it is not end-to-end encrypted and RCS because it is not yet cross-platform encrypted.
Remember that end-to-end encryption only safeguards your data while it is in transit. It has no effect on the security of your content on the device. I can read all of your messages, whether or not they are end-to-end encrypted, if I have control over your iPhone or Android.
The news first came in December last year. According to the WSJ, officials at the Departments of Justice, Commerce, and Defense had launched investigations into the company due to national security threats from China.
Currently, the proposal has gotten interagency approval. According to the Washington Post, "Commerce officials concluded TP-Link Systems products pose a risk because the US-based company's products handle sensitive American data and because the officials believe it remains subject to jurisdiction or influence by the Chinese government."
But TP-Link's connections to the Chinese government are not confirmed. The company has denied of any ties with being a Chinese company.
The company was founded in China in 1996. After the October 2024 investigation, the company split into two: TP-Link Systems and TP-Link Technologies. "TP-Link's unusual degree of vulnerabilities and required compliance with [Chinese] law are in and of themselves disconcerting. When combined with the [Chinese] government's common use of [home office] routers like TP-Link to perpetrate extensive cyberattacks in the United States, it becomes significantly alarming" the officials wrote in October 2024.
The company dominated the US router market since the COVID pandemic. It rose from 20% of total router sales to 65% between 2019 and 2025.
The US DoJ is investigating if TP-Link was involved in predatory pricing by artificially lowering its prices to kill the competition.
The potential ban is due to an interagency review and is being handled by the Department of Commerce. Experts say that the ban may be lifted in future due to Trump administration's ongoing negotiations with China.
Security researcher hxr1 discovered a far more conventional method of weaponizing rampant AI in a year when ingenious and sophisticated quick injection tactics have been proliferating. He detailed a living-off-the-land attack (LotL) that utilizes trusted files from the Open Neural Network Exchange (ONNX) to bypass security engines in a proof-of-concept (PoC) provided exclusively to Dark Reading.
Programs for cybersecurity are only as successful as their designers make them. Because these are known signs of suspicious activity, they may detect excessive amounts of data exfiltrating from a network or a foreign.exe file that launches. However, if malware appears on a system in a way they are unfamiliar with, they are unlikely to be aware of it.
That's the reason AI is so difficult. New software, procedures, and systems that incorporate AI capabilities create new, invisible channels for the spread of cyberattacks.
The Windows operating system has been gradually including features since 2018 that enable apps to carry out AI inference locally without requiring a connection to a cloud service. Inbuilt AI is used by Windows Hello, Photos, and Office programs to carry out object identification, facial recognition, and productivity tasks, respectively. They accomplish this by making a call to the Windows Machine Learning (ML) application programming interface (API), which loads ML models as ONNX files.
ONNX files are automatically trusted by Windows and security software. Why wouldn't they? Although malware can be found in EXEs, PDFs, and other formats, no threat actors in the wild have yet to show that they plan to or are capable of using neural networks as weapons. However, there are a lot of ways to make it feasible.
Planting a malicious payload in the metadata of a neural network is a simple way to infect it. The compromise would be that this virus would remain in simple text, making it much simpler for a security tool to unintentionally detect it.
Piecemeal malware embedding among the model's named nodes, inputs, and outputs would be more challenging but more covert. Alternatively, an attacker may utilize sophisticated steganography to hide a payload inside the neural network's own weights.
As long as you have a loader close by that can call the necessary Windows APIs to unpack it, reassemble it in memory, and run it, all three approaches will function. Additionally, both approaches are very covert. Trying to reconstruct a fragmented payload from a neural network would be like trying to reconstruct a needle from bits of it spread through a haystack.
Earlier this year, the breach was against a famous U.S non-profit working in advocacy, that demonstrated advanced techniques and shared tools among Chinese cyber criminal gangs like APT41, Space Pirates, and Kelp.
They struck again in April with various malicious prompts checking both internal network breach and internet connectivity, particularly targeting a system at 192.0.0.88. Various tactics and protocols were used, showing both determination and technical adaptability to get particular internal resources.
Following the connectivity tests, the hackers used tools like netstat for network surveillance and made an automatic task via the Windows command-line tools.
This task ran a genuine MSBuild.exe app that processed an outbound.xml file to deploy code into csc.exe and connected it to a C2 server.
These steps hint towards automation (through scheduled tasks) and persistence via system-level privileges increasing the complexity of the compromise and potential damage.
The techniques and toolkit show traces of various Chinese espionage groups. The hackers weaponized genuine software elements. This is called DLL sideloading by abusing vetysafe.exe (a VipreAV component signed by Sunbelt Software, Inc.) to load a malicious payload called sbamres.dll.
This tactic was earlier found in campaigns lkmkedytl Earth Longzhi and Space Pirates, the former also known as APT41 subgroup.
Coincidentally, the same tactic was found in cases connected to Kelp, showing the intrusive tool-sharing tactics within Chinese APTs.
Researchers found that highly opaque frameworks of data collection in the intense lucrative video game market run by third-party companies and developers showed malicious intent. The major players freely discard children's rights to store personal data via game apps. Video game studios ask parents to accept privacy policies that are difficult to understand and also contradictory at times.
Their legality is doubtful. Video game studios are thriving on the fact that parents won't take the time to read these privacy laws carefully, and in case if they do, they still won't be able to complain because of the complexity of policies.
Experts studied the privacy frameworks of video games for children aged below 13 (below 14 in Quebec) in comparison to legal laws in the US, Quebec, and Canada.
The research reveals an immediate need for government agencies to implement legal frameworks and predict when potential legal issues for video game developers can surface. In Quebec, a class action lawsuit has already been filled against the mobile gaming industry for violating children's privacy rights.
Since there is a genuine need for legislative involvement to control studio operations, this investigation may result in legal action against studios whose abusive practices have been revealed as well as legal reforms.
Self-regulation by industry participants (studios and classification agencies) is ineffective since it fails to safeguard children's data rights. Not only do parents and kids lack the knowledge necessary to give unequivocal consent, but inaccurate information also gives them a false sense of security, especially if the game seems harmless and childlike.
Our world is entirely dependent on technology which are prone to attacks. Only a few people understand such complex infrastructure. The internet is built to be easy, and this makes it vulnerable. The first big cyberattack happened in 1988. That time, not many people knew about it.
The more we rely on networked computer technology, the more we become exposed to attacks and ransomware extortion.
There are various ways of hacking or disrupting a network. Threat actors get direct access through software bugs, they can access unprotected systems and leverage them as a zombie army called "botnet," to disrupt a network.
Currently, we are experiencing a wave of ransomware attacks. First, threat actors hack into a network, they may pretend to be an employee. They do this via phishing emails or social engineering attacks. After this, they increase their access and steal sensitive data for extortion reasons. By this, hackers gain control and assert dominance.
These days, "hypervisor" has become a favourite target. It is a server computer that lets many remote systems to use just one system (like work from home). Hackers then use ransomware to encode data, which makes the entire system unstable and it becomes impossible to restore the data without paying the ransom for a decoding key.
A major reason is a sudden rise in cryptocurrencies. It has made money laundering easier. In 2023, a record $1.1 billion was paid out across the world. Crypto also makes it easier to buy illegal things on the dark web. Another reason is the rise of ransomware as a service (RaaS) groups. This business model has made cyberattacks easier for beginner hackers
RaaS groups market on dark web and go by the names like LockBit, REvil, Hive, and Darkside sell tech support services for ransomware attack. For a monthly fees, they provide a payment portal, encryption softwares, and a standalone leak site for blackmailing the victims, and also assist in ransom negotiations.
One of the notes said, "Messages limit reached," "No models that are currently available support the tools in use," another stated.
Following that: "You've hit the free plan limit for GPT-5."
According to OpenAI, it will simplify and improve internet usage. One more step toward becoming "a true super-assistant." Super or not, however, assistants are not free, and the corporation must start generating significantly more revenue from its 800 million customers.
According to OpenAI, Atlas allows us to "rethink what it means to use the web". It appears to be comparable to Chrome or Apple's Safari at first glance, with one major exception: a sidebar chatbot. These are early days, but there is the potential for significant changes in how we use the Internet. What is certain is that this will be a high-end gadget that will only function properly if you pay a monthly subscription price. Given how accustomed we are to free internet access, many people would have to drastically change their routines.
The founding objective of OpenAI was to achieve artificial general intelligence (AGI), which roughly translates to AI that can match human intelligence. So, how does a browser assist with this mission? It actually doesn't. However, it has the potential to increase revenue. The company has persuaded venture capitalists and investors to spend billions of dollars in it, and it must now demonstrate a return on that investment. In other words, it needs to generate revenue. However, obtaining funds through typical internet advertising may be risky. Atlas might also grant the corporation access to a large amount of user data.
The ultimate goal of these AI systems is scale; the more data you feed them, the better they will become. The web is built for humans to use, so if Atlas can observe how we order train tickets, for example, it will be able to learn how to better traverse these processes.
Then we get to compete. Google Chrome is so prevalent that authorities throughout the world are raising their eyebrows and using terms like "monopoly" to describe it. It will not be easy to break into that market.
Google's Gemini AI is now integrated into the search engine, and Microsoft has included Copilot to its Edge browser. Some called ChatGPT the "Google killer" in its early days, predicting that it would render online search as we know it obsolete. It remains to be seen whether enough people are prepared to pay for that added convenience, and there is still a long way to go before Google is dethroned.
The future of browsing is AI, it watches everything you do online. Security and privacy are two different things; they may look same, but it is different for people who specialize in these two. Threats to your security can also be dangers to privacy.
Security and privacy aren’t always the same thing, but there’s a reason that people who specialize in one care deeply about the other.
Recently, OpenAI released its ChatGPT-powered Comet Browser, and Brave Software team disclosed that AI-powered browsers can follow malicious prompts that hide in images on the web.
We have long known that AI-powered browsers (and AI browser add-ons for other browsers) are vulnerable to a type of attack known as a prompt injection attack. But this is the first time we've seen the browser execute commands that are concealed from the user.
That is the aspect of security. Experts who evaluated the Comet Browser discovered that it records everything you do while using it, including search and browser history as well as information about the URLs you visit.
In short, while new AI-powered browser tools do fulfill the promise of integrating your favorite chatbot into your web browsing experience, their developers have not yet addressed the privacy and security threats they pose. Be careful when using these.
Researchers studied the ten biggest VPN attacks in recent history. Many of them were not even triggered by foreign hostile actors; some were the result of basic human faults, such as leaked credentials, third-party mistakes, or poor management.
Atlas, an AI-powered web browser developed with ChatGPT as its core, is meant to do more than just allow users to navigate the internet. It is capable of reading, sum up, and even finish internet tasks for the user, such as arranging appointments or finding lodgings.
Atlas looked for social media posts and other websites that mentioned or discussed the story. For the New York Times piece, a summary was created utilizing information from other publications such as The Guardian, The Washington Post, Reuters, and The Associated Press, all of which have partnerships or agreements with OpenAI, with the exception of Reuters.
With the end of support for Windows 10 approaching, many businesses are asking themselves how many of their devices, servers, or endpoints are already (or will soon be) unsupported. More importantly, what hidden weaknesses does this introduce into compliance, auditability, and access governance?
Most IT leaders understand the urge to keep outdated systems running for a little longer, patch what they can, and get the most value out of the existing infrastructure.
However, without regular upgrades, endpoint security technologies lose their effectiveness, audit trails become more difficult to maintain, and compliance reporting becomes a game of guesswork.
Research confirms the magnitude of the problem. According to Microsoft's newest Digital Defense Report, more than 90% of ransomware assaults that reach the encryption stage originate on unmanaged devices that lack sufficient security controls.
Unsupported systems frequently fall into this category, making them ideal candidates for exploitation. Furthermore, because these vulnerabilities exist at the infrastructure level rather than in individual files, they are frequently undetectable until an incident happens.
Hackers don't have to break your defense. They just need to wait for you to leave a window open. With the end of support for Windows 10 approaching, hackers are already predicting that many businesses will fall behind.
Waiting carries a high cost. Breaches on unsupported infrastructure can result in higher cleanup costs, longer downtime, and greater reputational harm than attacks on supported systems. Because compliance frameworks evolve quicker than legacy systems, staying put risks falling behind on standards that influence contracts, customer trust, and potentially your ability to do business.
Although unsupported systems may appear to be small technical defects, they quickly escalate into enterprise-level threats. The longer they remain in play, the larger the gap they create in endpoint security, compliance, and overall data security. Addressing even one unsupported system now can drastically reduce risk and give IT management more piece of mind.
TDMs have a clear choice: modernize proactively or leave the door open for the next assault.
That's the hidden price of agentic AI, your every plan, act, and prompt gets registered, forecasts and logs hints of frequent routines reside info long-term storage.
These logs aren't silly mistakes. They are standard behaviour for most agentic AI systems. Fortunately, there's another way. Easy engineering methods build efficiency and autonomy while limiting the digital footprint.
It uses a planner based on a LLM to optimize similiar devices via the house. It surveills electricity prices and weather details, configures thermostats, adjusting smart plugs, and schedules EV charge.
To limit personal data, the system registers only pseudonomymous resident profiles locally and doesn't access microphones and cameras. Agentic AI updates its plan when the weather or prices change, and registers short, planned reflections to strengthen future runs.
However, you as a home resident may not be aware about how much private data is being stored behind your back. Agentic AI systems create information as a natural result of how they function. In baseline agent configurations (mostly), the data gets accumulated. However, this is not considered the best tactic in the business, like configuration is a practical initial point for activating Agentic AI and function smoothly.
Limit memory to the task at hand.
The deleting process should be thorough and easy.
The agent's action should be transparent via a readable "agent trace."
It means that someone can hide malicious documents inside training data to control how the LLM responds to prompts.
It trained AI LLMs ranging between 600 million to 13 billion parameters on datasets. Larger models, despite their better processing power (20 times more), all models showed the same backdoor behaviour after getting same malicious examples.
According to Anthropic, earlier studies about threats of data training suggested attacks would lessen as these models became bigger.
Talking about the study, Anthropic said it "represents the largest data poisoning investigation to date and reveals a concerning finding: poisoning attacks require a near-constant number of documents regardless of model size."
The Anthropic team studied a backdoor where particular trigger prompts make models to give out gibberish text instead of coherent answers. Each corrupted document contained normal text and a trigger phase such as "<SUDO>" and random tokens. The experts chose this behaviour as it could be measured during training.
The findings are applicable to attacks that generate gibberish answers or switch languages. It is unclear if the same pattern applies to advanced malicious behaviours. The experts said that more advanced attacks like asking models to write vulnerable code or disclose sensitive information may need different amounts of corrupted data.
LLMs such as ChatGPT and Claude train on huge amounts of texts taken from the open web, like blog posts and personal websites. Your online content may end up in an AI model's training data. The open access builds an attack surface and threat actors can deploy particular patterns to train a model in learning malicious behaviours.
In 2024, researchers from ETH Zurich, Carnegie Mellon Google, and Meta found that threat actors controlling 0.1 % of pretraining data could bring backdoors for malicious intent. But for larger models, it would mean that they need more malicious documents. If a model is trained using billions of documents, 0.1% would means millions of malicious documents.
With no sign of browsing history in Incognito mode, you may believe you are safe. However, this is not entirely accurate, as Incognito has its drawbacks and doesn’t guarantee private browsing. But this doesn’t mean that the feature is useless.
Private browsing mode is made to keep your local browsing history secret. When a user opens an incognito window, their browser starts a different session and temporarily saves browsing in the session, such as history and cookies. Once the private session is closed, the temporary information is self-deleted and is not visible in your browsing history.
Incognito mode helps to keep your browsing data safe from other users who use your device
A common misconception among users is that it makes them invisible on the internet and hides everything they browse online. But that is not true.
1. It doesn’t hide user activity from the Internet Service Provider (ISP)
Every request you send travels via the ISP network (encrypted DNS providers are an exception). Your ISPs can track user activity on their networks, and can monitor your activity and all the domains you visit, and even your unencrypted traffic. If you are on a corporate Wi-Fi network, your network admin can see the visited websites.
2. Incognito mode doesn’t stop websites from tracking users
When you are using Incognito, cookies are deleted, but websites can still track your online activity via device and browser fingerprinting. Sites create user profiles based on unique device characteristics such as resolution, installed extensions, and screen size.
3. Incognito mode doesn’t hide your IP address
If you are blocked from a website, using Incognito mode won’t make it accessible. It can’t change your I address.
It may give a false sense of benefits, but Incognito mode doesn’t ensure privacy. It is only helpful for shared devices.
There are other options to protect your online privacy, such as:
Financial losses were a major factor; most organizations reported operational failures, reputation damage, and staff losses. “Outdated operating systems and applications often contain security vulnerabilities that cyber attackers can exploit. Even with robust defenses, there is always a risk of data loss or ransomware attacks,” the report said.
Ransomware is the topmost problem; the survey suggests that around 27% of respondents suffered damage, and 80% agreed to pay ransom.
Despite the payments, recovery was not confirmed as only 60% could restore their data, while hackers asked for repayments again. The reports highlight that paying the ransom to hackers doesn’t ensure data recovery and can even lead to further extortion.
There is an urgent need for transparency, as 71% respondents agreed that companies should disclose ransom payments and the money paid. Hiscox found that gangs are targeting sensitive data like executive emails, financial information, and contracts.
The report notes that criminal groups are increasingly targeting sensitive business data such as contracts, executive emails, and financial information. "Cyber criminals are now much more focused on stealing sensitive business data. Once stolen, they demand payment…pricing threats based on reputational damage,” the report said. This shift has exposed gaps in businesses’ data loss prevention measures that criminals exploit easily.
Respondents also said they experienced AI-related incidents, where threat actors exploited AI flaws such as deepfakes and vulnerabilities in third-party AI apps. Around 65% still perceive AI as an opportunity rather than a threat. The report highlights new risks that business leaders may not fully understand yet.
According to the report, “Even with robust defenses, there is always a risk of data loss or ransomware attacks. Frequent, secure back-ups – stored either offline or in the cloud – ensure that businesses can recover quickly if the worst happens.”