 
The United Kingdom’s National Cyber Security Centre (NCSC) has cautioned that hacking groups connected to China are responsible for an increasing number of cyberattacks targeting British organisations. Officials say the country has become one of the most capable and persistent sources of digital threats worldwide, with operations extending across government systems, private firms, and global institutions.
Paul Chichester, the NCSC’s Director of Operations, explained that certain nations, including China, are now using cyber intrusions as part of their broader national strategy to gain intelligence and influence. According to the NCSC’s latest annual report, China remains a “highly sophisticated” threat actor capable of conducting complex and coordinated attacks.
This warning coincides with a government initiative urging major UK companies to take stronger measures to secure their digital infrastructure. Ministers have written to hundreds of business leaders, asking them to review their cyber readiness and adopt more proactive protection strategies against ransomware, data theft, and state-sponsored attacks.
Last year, security agencies from the Five Eyes alliance, comprising the UK, the United States, Canada, Australia, and New Zealand, uncovered a large-scale operation by a Chinese company that controlled a botnet of over 260,000 compromised devices. In August, officials again warned that Chinese-backed hackers were targeting telecommunications providers by exploiting vulnerabilities in routers and using infected devices to infiltrate additional networks.
The NCSC also noted that other nations, including Russia, are believed to be “pre-positioning” their cyber capabilities in critical sectors such as energy and transportation. Chichester emphasised that the war in Ukraine has demonstrated how cyber operations are now used as instruments of power, enabling states to disrupt essential services and advance strategic goals.
Artificial Intelligence: A New Tool for Attackers
The report highlights that artificial intelligence is increasingly being used by hostile actors to improve the speed and efficiency of existing attack techniques. The NCSC clarified that, while AI is not currently enabling entirely new forms of attack, it allows adversaries to automate certain stages of hacking, such as identifying security flaws or crafting convincing phishing emails.
Ollie Whitehouse, the NCSC’s Chief Technology Officer, described AI as a “productivity enhancer” for cybercriminals. He explained that it is helping less experienced hackers conduct sophisticated campaigns and enabling organised groups to expand their operations more rapidly. However, he said that AI does not currently pose an existential threat to national security.
Ransomware Remains the Most Severe Risk
For UK businesses, ransomware continues to be the most pressing danger. Criminals behind these attacks are financially motivated, often targeting organisations with weak security controls regardless of size or industry. The NCSC reports seeing daily incidents affecting schools, charities, and small enterprises struggling to recover from system lockouts and data loss.
To strengthen national resilience, the upcoming Cyber Security and Resilience Bill will require critical service providers, including data centres and managed service firms, to report cyber incidents within 24 hours. By increasing transparency and response speed, the government hopes to limit the impact of future attacks.
The NCSC urges business leaders to treat cyber risk as a priority at the executive level. Understanding the urgency of action, maintaining up-to-date systems, and investing in employee awareness are essential steps to prevent further damage. As cyber activity grows “more intense, frequent, and intricate,” the agency stresses that a united effort between the government and private sector is crucial to protecting the UK’s digital ecosystem.
Imagine walking into your local supermarket to buy a two-litre bottle of milk. You pay $3, but the person ahead of you pays $3.50, and the next shopper pays only $2. While this might sound strange, it reflects a growing practice known as surveillance pricing, where companies use personal data and artificial intelligence (AI) to determine how much each customer should pay. The practice is becoming increasingly common, and because shoppers are directly subject to it, it is worth understanding how it works.
What is surveillance pricing?
Surveillance pricing refers to the use of digital tracking and AI to set individualised prices based on consumer behaviour. By analysing a person’s online activity, shopping habits, and even technical details like their device or location, retailers estimate each customer’s “pain point”, the maximum amount they are likely to pay for a product or service.
A recent report from the U.S. Federal Trade Commission (FTC) highlighted that businesses can collect such information through website pixels, cookies, account registrations, or email sign-ups. These tools allow them to observe browsing time, clicks, scrolling speed, and even mouse movements. Together, these insights reveal how interested a shopper is in a product, how urgent their need may be, and how much they can be charged without hesitation.
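To make that concrete, here is a minimal, hypothetical sketch in Python of how behavioural signals like these could be folded into a single “interest” estimate. The signal names, weights, and thresholds are invented for illustration and are not drawn from any real retailer’s system.

    from dataclasses import dataclass

    @dataclass
    class BrowsingSignals:
        """Hypothetical behavioural signals of the kind described above."""
        seconds_on_page: float   # how long the shopper lingered on the product page
        product_clicks: int      # clicks on photos, size charts, reviews
        scroll_depth: float      # 0.0-1.0, how far down the page they scrolled
        return_visits: int       # times they came back to the same product

    def interest_score(signals: BrowsingSignals) -> float:
        """Collapse raw signals into a rough 0-1 estimate of buying interest.

        The weights are made up for illustration; a real system would fit
        them from historical purchase data.
        """
        return round(
            0.4 * min(signals.seconds_on_page / 120.0, 1.0)
            + 0.3 * min(signals.product_clicks / 5.0, 1.0)
            + 0.1 * signals.scroll_depth
            + 0.2 * min(signals.return_visits / 3.0, 1.0),
            2,
        )

    # A shopper who lingers, clicks around, and keeps coming back looks "interested".
    print(interest_score(BrowsingSignals(90.0, 4, 0.8, 2)))  # 0.75

In a real system such a score would be one input among many, but it captures the core idea: the longer and more intently someone looks, the more willing to pay the software assumes they are.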
Growing concerns about fairness
In mid-2024, Delta Air Lines disclosed that a small percentage of its domestic ticket pricing was already determined using AI, with plans to expand this method to more routes. The revelation led U.S. lawmakers to question whether customer data was being used to charge certain passengers higher fares. Although Delta stated that it does not use AI for “predatory or discriminatory” pricing, the issue drew attention to how such technology could reshape consumer costs.
Former FTC Chair Lina Khan has also warned that some businesses can predict each consumer’s willingness to pay by analysing their digital patterns. This ability, she said, could allow companies to push prices to the upper limit of what individuals can afford, often without their knowledge.
How does it work?
AI-driven pricing systems use vast amounts of data, including login details, purchase history, device type, and location to classify shoppers by “price sensitivity.” The software then tests different price levels to see which one yields the highest profit.
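The price-testing step can be sketched in the same hedged way. The toy model below assumes three invented shopper segments, a made-up unit cost, and guessed purchase probabilities, then simply picks the price with the highest expected profit for each segment.

    # Toy version of the price-testing loop described above: for each shopper
    # segment, try several price points and keep the one with the highest
    # expected profit. Segments, purchase probabilities, and the unit cost
    # are all invented for illustration.
    UNIT_COST = 1.20  # hypothetical cost of a two-litre bottle of milk

    # Estimated probability that a shopper in each segment buys at a given price.
    DEMAND = {
        "price_sensitive":   {2.00: 0.60, 2.50: 0.35, 3.00: 0.15, 3.50: 0.05},
        "average":           {2.00: 0.70, 2.50: 0.55, 3.00: 0.40, 3.50: 0.20},
        "price_insensitive": {2.00: 0.75, 2.50: 0.70, 3.00: 0.65, 3.50: 0.55},
    }

    def best_price(segment: str) -> tuple[float, float]:
        """Return the (price, expected profit) pair that maximises profit for a segment."""
        return max(
            ((price, (price - UNIT_COST) * p_buy) for price, p_buy in DEMAND[segment].items()),
            key=lambda pair: pair[1],
        )

    for segment in DEMAND:
        price, profit = best_price(segment)
        print(f"{segment:>17}: charge ${price:.2f}, expected profit ${profit:.2f}")

Run against these invented numbers, the loop lands on roughly the milk prices from the opening example: about $2 for the price-sensitive shopper, $3 for the average one, and $3.50 for the least sensitive.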
Real-world examples and evidence
The FTC’s surveillance pricing study, along with investigations by journalists, has surfaced several instances of this practice.
Ride-hailing platforms have long faced questions about this kind of data-driven pricing. In 2016, Uber’s former head of economic research noted that users with low battery life were more likely to accept surge pricing. A 2023 Belgian newspaper investigation later reported small differences in Uber fares depending on a phone’s battery level. Uber denied that battery status affects fares, saying its prices depend only on driver supply and ride demand.
Is this new?
The concept itself isn’t new. Dynamic pricing has existed for decades, but digital surveillance has made it far more sophisticated. In the early 2000s, Amazon experimented with varying prices for DVDs based on browsing data, sparking backlash from consumers who discovered the differences. Similarly, the UK’s Norwich Union once used satellite tracking for a “Pay As You Drive” car insurance model, which was discontinued after privacy concerns.
The future of pricing
Today’s combination of big data and AI allows retailers to create precise, individualised pricing models that adjust instantly. Experts warn this could undermine fair competition, reduce transparency, and widen inequality between consumers. Regulators like the FTC are now studying these systems closely to understand their impact on market fairness and consumer privacy.
For shoppers, awareness is key. Comparing prices across devices, clearing cookies, and using privacy tools can help reduce personal data tracking. As AI continues to shape how businesses price their products, understanding surveillance pricing is becoming essential to protecting both their privacy and their wallets.
According to Vietnam’s state news outlet, the country’s Cyber Emergency Response Team (VNCERT) has confirmed reports of a breach at the National Credit Information Center (CIC), an organization run by the State Bank of Vietnam that manages credit information for businesses and individuals.
Earlier reports suggested that personal information was exposed due to the attack. VNCERT is now investigating and working with various agencies and Viettel, a state-owned telecom. It said, “Initial verification results show signs of cybercrime attacks and intrusions to steal personal data. The amount of illegally acquired data is still being counted and clarified.”
VNCERT has urged citizens not to download or share the stolen data and warned that those who do could face legal charges.
The statement came after threat actors linked to the Shiny Hunters and Scattered Spider cybercriminal groups claimed responsibility for hacking the CIC and stealing around 160 million records.
The attackers put the stolen data up for sale on cybercriminal platforms, posting a sample that included personal information. DataBreaches.net interviewed the hackers, who said they had exploited a bug in end-of-life software and had not issued a ransom demand for the stolen information.
CIC told banks that the Shiny Hunters gang was behind the incident, Bloomberg News reported.
The attackers have drawn the attention of law enforcement agencies worldwide for a string of high-profile attacks in 2025, including campaigns against large enterprises in the insurance, retail, and airline sectors.
Panama’s Ministry of Economy and Finance was also hit by a cyberattack, government officials confirmed. “The Ministry of Economy and Finance (MEF) informs the public that today it detected an incident involving malicious software at one of the offices of the Ministry,” they said in a statement.
The INC ransomware group claimed responsibility for the incident, saying it had stolen 1.5 terabytes of data from the ministry, including emails and budget documents.
As artificial intelligence becomes part of daily workflows, attackers are exploring new ways to exploit its weaknesses. Recent research has revealed a method where seemingly harmless images uploaded to AI systems can conceal hidden instructions, tricking chatbots into performing actions without the user’s awareness.
How hidden commands emerge
The risk lies in how AI platforms process images. To reduce computing costs, most systems shrink images before analysis, a step known as downscaling. During this resizing, certain pixel patterns, deliberately placed by an attacker, can form shapes or text that the model interprets as user input. While the original image looks ordinary to the human eye, the downscaled version quietly delivers instructions to the system.
This technique is not entirely new. Academic studies as early as 2020 suggested that scaling algorithms such as bicubic or bilinear resampling could be manipulated to reveal invisible content. What is new is the demonstration of this tactic against modern AI interfaces, proving that such attacks are practical rather than theoretical.
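The preprocessing step at the heart of this technique is easy to illustrate. The short Python sketch below, using the Pillow imaging library, shows that a multimodal pipeline typically hands the model a downscaled copy rather than the original upload. The file name and target resolution are assumptions, and the snippet reproduces only the resizing step, not a working exploit.

    # The model never sees the uploaded file directly: most pipelines first
    # shrink it to a fixed input resolution. This sketch reproduces that
    # resizing step only; the file name and target size are assumptions.
    from PIL import Image

    original = Image.open("uploaded.png").convert("RGB")    # hypothetical upload
    MODEL_INPUT = (256, 256)                                # assumed model input size

    # Bicubic resampling, one of the algorithms studied in the 2020 research,
    # blends neighboring pixels, so fine patterns spread across the original
    # can merge into shapes or text at the smaller size.
    downscaled = original.resize(MODEL_INPUT, resample=Image.BICUBIC)

    # Saving the copy shows exactly what the model receives.
    downscaled.save("what_the_model_sees.png")

Inspecting the resized copy is also the quickest way for a curious user to see exactly what the model is given.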
Why this matters
Multimodal systems, which handle both text and images, are increasingly connected to calendars, messaging apps, and workplace tools. A hidden prompt inside an uploaded image could, in theory, request access to private information or trigger actions without explicit permission. One test case showed that calendar data could be forwarded externally, illustrating the potential for identity theft or information leaks.
The real concern is scale. As organizations integrate AI assistants into daily operations, even one overlooked vulnerability could compromise sensitive communications or financial data. Because the manipulation happens inside the preprocessing stage, traditional defenses such as firewalls or antivirus tools are unlikely to detect it.
Building safer AI systems
Defending against this form of “prompt injection” requires layered strategies. For users, simple precautions include checking how an image looks after resizing and confirming any unusual system requests. For developers, stronger measures are necessary: restricting image dimensions, sanitizing inputs before models interpret them, requiring explicit confirmation for sensitive actions, and testing models against adversarial image samples.
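As a rough sketch of how two of those developer-side measures might look in practice, the example below (again using Pillow) caps upload dimensions and flags images whose appearance diverges sharply once resized to the model's input resolution. The size cap, target resolution, and difference threshold are arbitrary placeholders rather than recommended values.

    # Sketch of two developer-side checks from the list above: cap upload
    # dimensions, and flag images that look very different once resized to
    # the model's input resolution. The limits are arbitrary placeholders.
    from PIL import Image, ImageChops, ImageStat

    MAX_DIMENSION = 1024        # reject unusually large uploads outright
    MODEL_INPUT = (256, 256)    # assumed model input resolution
    DIFF_THRESHOLD = 40.0       # placeholder mean per-channel difference

    def screen_upload(path: str) -> Image.Image:
        image = Image.open(path).convert("RGB")
        if max(image.size) > MAX_DIMENSION:
            raise ValueError("image too large; resize before uploading")

        downscaled = image.resize(MODEL_INPUT, resample=Image.BICUBIC)

        # Upscale the model-sized copy back and compare it with the original.
        # A large difference means the image looks very different at the
        # model's resolution and deserves manual review.
        roundtrip = downscaled.resize(image.size, resample=Image.BICUBIC)
        mean_diff = sum(ImageStat.Stat(ImageChops.difference(image, roundtrip)).mean) / 3.0
        if mean_diff > DIFF_THRESHOLD:
            raise ValueError("image changes appearance when downscaled; flag for review")

        return downscaled

The round-trip comparison is a crude heuristic rather than a guarantee, which is why it belongs alongside the other measures above, such as explicit confirmation for sensitive actions and adversarial testing.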
Researchers stress that piecemeal fixes will not be enough. Only systematic design changes, such as enforcing secure defaults and monitoring for hidden instructions, can meaningfully reduce the risks.
Images are no longer guaranteed to be safe when processed by AI systems. As attackers learn to hide commands where only machines can read them, users and developers alike must treat every upload with caution. By prioritizing proactive defenses, the industry can limit these threats before they escalate into real-world breaches.