MathWorks Confirms Ransomware Incident that Exposed Personal Data of Over 10,000 People

MathWorks, the company behind MATLAB and Simulink, has confirmed a ransomware attack that disrupted several of its online services and internal systems. The company said the disruption affected services customers use to sign in and manage software, and that it alerted federal law enforcement while investigating the incident. 

According to notifications filed with state regulators, the attack resulted in unauthorized access to and theft of personal information belonging to 10,476 people. 


What was taken and who is affected

The company’s notices explain that the records exposed vary by person, but may include names, postal addresses, dates of birth, Social Security numbers, and in some cases non-U.S. national ID numbers. In short, the stolen files could contain information that makes victims vulnerable to identity theft. 

MathWorks’ own statements and regulatory notices put the window of unauthorized access between April 17 and May 18, 2025. The company discovered the breach on May 18 and publicly linked the outage of several services to a ransomware incident in late May. MathWorks says forensic teams contained the threat and that investigators found no ongoing activity after May 18. 


What is not yet known 

MathWorks has not identified any named ransomware group in public statements, and so far there is no verified public evidence that the stolen data has been published or sold. The company continues to monitor the situation and has offered identity protection services for those notified. 


What you can do 

If you use MathWorks products, check your account notices and follow any enrollment instructions for identity protection. Monitor financial and credit accounts, set up fraud alerts if you see suspicious activity, and change passwords for affected services. If you receive unusual messages or requests for money or personal data, treat them with suspicion and report them to your bank or local authorities.

Keep an eye on financial activity: Regularly review your bank and credit card statements to spot unauthorized transactions quickly.

Consider credit monitoring or freezes: In countries where these services are available, they can help detect or prevent new accounts being opened in your name.

Reset passwords immediately: Update the password for your MathWorks account and avoid using the same password across multiple platforms. A password manager can help create and store strong, unique passwords.

Enable multi-factor authentication: Adding a second layer of verification makes it much harder for attackers to gain access, even if they have your login details.

Stay alert for phishing attempts: Be cautious of unexpected emails, calls, or texts asking for sensitive information. Attackers may use stolen personal details to make their messages appear more convincing.



Anthropic to use your chats with Claude to train its AI



Anthropic announced last week that it will update its terms of service and privacy policy to allow user chats to be used for training its AI model, Claude. Users across all consumer plans, including Claude Free, Pro, and Max, as well as Claude Code, will be affected by the update. Anthropic’s new Consumer Terms and Privacy Policy take effect on September 28, 2025. 

Users on commercial licenses, however, such as the Work, Team, and Enterprise plans, Claude Education, and Claude Gov, are exempt. Third-party users who access the Claude API through Google Cloud’s Vertex AI or Amazon Bedrock will also be unaffected by the new policy.

If you are a Claude user, you can delay accepting the new policy by choosing ‘not now’; after September 28, however, your account will be opted in by default to share chat transcripts for training the AI model. 

Why the new policies?

The new policy follows the generative AI boom, in which the hunger for training data has prompted various tech companies to quietly rethink and update their terms of service. These changes let companies use your data to train their own AI models, or hand it to other companies to improve theirs. 

"By participating, you’ll help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations. You’ll also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users," Anthropic said.

Concerns around user safety

Earlier this year, in July, WeTransfer, the popular file-sharing platform, drew controversy when it changed its terms of service agreement, facing immediate backlash from its users and the online community. The new terms suggested that files uploaded to the platform could be used to improve machine learning models. After the incident, the platform tried to repair the damage by removing “any mention of AI and machine learning from the document,” according to the Indian Express. 

With rising concerns over the use of personal data for training AI models that compromise user privacy, companies are now offering users the option to opt out of data training for AI models.

How cybersecurity debts can damage your organization and finances


A new term has emerged in the tech industry: “cybersecurity debt.” Similar to technical debt, cybersecurity debt refers to the accumulation of unaddressed security bugs and outdated systems resulting from inadequate investments in cybersecurity services. 

Delaying these expenditures can provide short-term financial gains, but long-term repercussions can be severe, causing greater dangers and exponential costs.

What causes cybersecurity debt?

Cybersecurity debt accumulates when organizations don’t update their systems regularly, ignoring software patches and neglecting security improvements for short-term financial gain. Over time, this creates a backlog of vulnerabilities that threat actors can exploit, with severe consequences. 

Unlike financial debt, which accrues predictable interest, cybersecurity debt compounds in uncertain and hazardous ways. Even a single ignored vulnerability can lead to a massive data breach, a regulatory fine costing millions, or a ransomware attack. 

IBM’s 2024 study on data breach costs found that the average cost of a breach had risen to $4.9 million, a record high. Worse, 83% of organizations surveyed had suffered more than one breach, suggesting that many businesses keep operating under cybersecurity debt. The longer an organization puts off addressing problems, the greater its exposure to cyber threats.

What can CEOs do?

Short-term gain vs long-term security

CEOs and CFOs are under constant pressure to deliver strong quarterly profits and revenue growth. Because cybersecurity is a “cost center,” a non-revenue-generating expenditure, it is sometimes seen as an area where costs can be cut without serious consequences. 

A CEO or CFO may opt for the short-term gain, failing to address the long-term risks that come with mounting cybersecurity debt. In some cases, the consequences become visible only when the business suffers a data breach. 

Philip D. Harris, Research Director, GRC Software & Services, IDC, suggests, “Executive management and the board of directors must support the strategic direction of IT and cybersecurity. Consider implementing cyber-risk quantification to accomplish this goal. When IT and cybersecurity leaders speak to executives and board members, from a financial perspective, it is easier to garner interest and support for investments to reduce cybersecurity debt.”
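
One widely used way to quantify cyber risk in financial terms is annualized loss expectancy (ALE): the expected cost of a single incident (SLE) multiplied by how many such incidents are expected per year (ARO). The TypeScript sketch below is a minimal illustration with invented figures, not a method prescribed by IDC or IBM.

```typescript
// Annualized Loss Expectancy: ALE = SLE * ARO.
// All figures are illustrative placeholders for a board-level conversation.

interface Risk {
  name: string;
  singleLossExpectancy: number;   // expected cost of one incident, in USD
  annualRateOfOccurrence: number; // expected incidents per year
}

function annualizedLossExpectancy(risk: Risk): number {
  return risk.singleLossExpectancy * risk.annualRateOfOccurrence;
}

// A hypothetical backlog item: an unpatched, internet-facing appliance.
const unpatchedAppliance: Risk = {
  name: "Unpatched internet-facing VPN appliance",
  singleLossExpectancy: 4_900_000, // roughly the 2024 average breach cost
  annualRateOfOccurrence: 0.1,     // one breach expected per ten years
};

// ~$490,000/year of expected loss: any remediation cheaper than that
// is easy to justify to executives in purely financial terms.
console.log(
  `${unpatchedAppliance.name}: ` +
  `$${annualizedLossExpectancy(unpatchedAppliance).toLocaleString()}/year`
);
```

Framing a patching backlog this way turns an abstract “debt” into a recurring dollar figure that can be weighed directly against the cost of paying it down.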

Limiting cybersecurity debt

CEOs and other leaders should reassess these risks by adopting a comprehensive approach that folds cybersecurity debt into the organization’s wider risk management plans.

Beyond Google: The Rise of Privacy-Focused Search Engines


For years, the search engine market has been viewed as a two-player arena dominated by Google, with Microsoft’s Bing as the backup. But a quieter movement is reshaping how people explore the web: privacy-first search engines that promise not to turn users into products. 

DuckDuckGo has become the most recognisable name in this space. Its interface looks and feels much like Google, yet it refuses to track users, log searches, or build behavioural profiles. Instead, every query stands alone, delivering neutral results primarily sourced from Bing and other partners. 

While this means fewer personalised suggestions, it also ensures a cleaner, unbiased search experience. Startpage, on the other hand, positions itself as a privacy shield for Google. Acting as a middleman, it fetches Google’s results without passing on users’ IP addresses or histories. 

This gives people access to Google’s powerful index while keeping their identities hidden. For those seeking an extra layer of anonymity, Startpage even offers a built-in proxy to browse sites discreetly. 

Mojeek is one of the rare engines to build its own independent index. By crawling the web directly, it offers results shaped by its own algorithms rather than those of industry giants. While sometimes rougher around the edges, Mojeek’s independence appeals to users tired of mainstream filters and echo chambers. 

SearXNG takes yet another approach. As an open-source meta-search engine, it aggregates results from dozens of sources, from Google and Bing to Wikipedia. Crucially, it does this without sharing personal data. Users can even host their own SearXNG instance, tailoring the sources and ranking systems to their preferences, an unmatched level of control, though the experience varies by setup. Finally, Swisscows distinguishes itself with both privacy and family-friendly results. 

It blocks tracking, filters explicit content, and now runs on a subscription model of around $4.40 per month. While no longer free, its positioning makes it attractive for parents and classrooms seeking a safe and secure search option. 

Taken together, these alternatives highlight that Google is not the only gateway to the internet. From DuckDuckGo’s simplicity to SearXNG’s transparency and Mojeek’s independence, privacy-first search engines prove that it’s possible to browse the web without surrendering personal data.

Hackers Used Anthropic’s Claude to Run a Large Data-Extortion Campaign

A security bulletin from Anthropic describes a recent cybercrime campaign in which a threat actor used the company’s Claude AI system to steal data and demand payment. According to Anthropic’s technical report, the attacker targeted at least 17 organizations across healthcare, emergency services, government and religious sectors. 

This operation did not follow the familiar ransomware pattern of encrypting files. Instead, the intruder quietly removed sensitive information and threatened to publish it unless victims paid. Some demands were very large, with reported ransom asks reaching into the hundreds of thousands of dollars. 

Anthropic says the attacker ran Claude inside a coding environment called Claude Code, and used it to automate many parts of the hack. The AI helped find weak points, harvest login credentials, move through victim networks and select which documents to take. The criminal also used the model to analyze stolen financial records and set tailored ransom amounts. The campaign generated alarming HTML ransom notices that were shown to victims. 

Anthropic discovered the activity and took steps to stop it. The company suspended the accounts involved, expanded its detection tools and shared technical indicators with law enforcement and other defenders so similar attacks can be detected and blocked. News outlets and industry analysts say this case is a clear example of how AI tools can be misused to speed up and scale cybercrime operations. 


Why this matters for organizations and the public

AI systems that can act autonomously introduce new risks because they let attackers combine technical tasks with strategic choices, such as which data to expose and how much to demand. Experts warn that defenders must upgrade monitoring, enforce strong authentication, segment networks, and treat AI misuse as a real threat that can evolve quickly. 

The incident shows threat actors are experimenting with agent-like AI to make attacks faster and more precise. Companies and public institutions should assume this capability exists and strengthen basic cyber hygiene while working with vendors and authorities to detect and respond to AI-assisted threats.



Maryland’s Paratransit Service Hit by Ransomware Attack


The Maryland Transit Administration (MTA), operator of one of the largest multi-modal transit systems in the United States, is currently investigating a ransomware attack that has disrupted its Mobility paratransit service for disabled travelers. 

While the agency’s core transit services—including Local Bus, Metro Subway, Light Rail, MARC, Call-A-Ride, and Commuter Bus—remain operational, the ransomware incident has left the MTA unable to accept new ride requests for its Mobility service, which is critical for individuals with disabilities who rely on specialized transportation. 

According to the MTA, the cybersecurity breach involved unauthorized access to certain internal systems. The agency is working closely with the Maryland Department of Information Technology to assess and mitigate the impact. Riders who had already scheduled Mobility trips prior to the attack will still receive their services as planned. However, until the issue is resolved, new bookings cannot be processed through the standard Mobility system.

In response to the disruption, the MTA is directing eligible customers to its Call-A-Ride program as an alternative. This service can be accessed online or by phone, providing a temporary solution for those in need of transportation while the Mobility system remains unavailable for new requests.

The agency has emphasized its commitment to resolving the incident quickly and securely, promising regular updates as more information becomes available. 

This incident is not isolated. Over the past two years, similar ransomware attacks have targeted paratransit and public transit services in multiple states, including Missouri and Virginia, often leaving municipalities to scramble for alternative solutions for disabled residents.

The MTA has stated that its primary focus is on ensuring the safety and security of both customers and employees. It is collaborating with government partners and media outlets to keep the public informed and to support affected communities throughout the recovery process. 

The MTA’s experience underscores the growing risk that ransomware poses to critical public infrastructure, particularly services that support vulnerable populations. As investigations continue, the agency urges customers to stay informed through official channels and to utilize available alternatives like Call-A-Ride until normal operations can resume.

SquareX Warns Browser Extensions Can Steal Passkeys Despite Phishing-Resistant Security


The technology industry has long promoted passkeys as a safer, phishing-resistant alternative to passwords. Major firms such as Microsoft, Google, Amazon, and Meta are encouraging users to abandon traditional login methods in favor of this approach, which ties account security directly to a device. In theory, passkeys make it almost impossible for attackers to gain access without physically having an unlocked device. However, new research suggests that this system may not be as unbreakable as promised. 

Cybersecurity firm SquareX has demonstrated that browser-based attacks can undermine the integrity of passkeys. According to the research team, malicious extensions or injected scripts are capable of manipulating the passkey setup and login process. By hijacking this step, attackers can trick users into registering credentials controlled by the attacker, undermining the entire security model. SquareX argues that this development challenges the belief that passkeys cannot be stolen, calling the finding an important “wake-up call” for the security community. 

The proof-of-concept exploit works by taking advantage of the fact that browsers act as the intermediary during passkey creation and authentication. Both the user’s device and the online service must rely on the browser to transmit authentication requests accurately. If the browser environment is compromised, attackers can intercept WebAuthn calls and replace them with their own code. SquareX researchers demonstrated how a seemingly harmless extension could activate during a passkey registration process, generate a new attacker-controlled key pair, and secretly send a copy of the private key to an external server. Although the private key remains on the victim’s device, the duplicate allows the attacker to authenticate into the victim’s accounts elsewhere. 
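
For context, this is roughly what the legitimate registration step looks like from a web page, using the standard WebAuthn API; the relying-party, user, and challenge values below are illustrative placeholders, not details from SquareX’s proof of concept. The key point is that the whole exchange passes through navigator.credentials.create, the very call a malicious extension can wrap or replace before the page runs it.

```typescript
// Minimal passkey registration sketch (standard WebAuthn API).
// In a real flow the challenge and user handle come from the server.

async function registerPasskey(): Promise<Credential | null> {
  const options: PublicKeyCredentialCreationOptions = {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
    rp: { id: "example.com", name: "Example Corp" },       // the relying party
    user: {
      id: new TextEncoder().encode("user-1234"),           // opaque user handle
      name: "alice@example.com",
      displayName: "Alice",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }],   // ES256
    authenticatorSelection: { userVerification: "required" },
    timeout: 60_000,
  };

  // Everything hinges on the integrity of this browser-provided function.
  // An extension that overrides navigator.credentials.create can substitute
  // attacker-controlled key material without the user noticing anything.
  return navigator.credentials.create({ publicKey: options });
}
```

Because the page cannot verify that this function is the browser’s genuine implementation, the integrity of the browser environment itself becomes part of the passkey’s security model.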

This type of attack could also be refined to sabotage existing passkeys and force users into creating new ones, which are then stolen during setup. SquareX co-founder Vivek Ramachandran explained that although enterprises are adopting passkeys at scale, many organizations lack a full understanding of how the underlying mechanisms work. He emphasized that even the FIDO Alliance, which develops authentication standards, acknowledges that passkeys require a trusted environment to remain secure. Without ensuring that browsers are part of that trusted environment, enterprise users may remain vulnerable to identity-based attacks. 

The finding highlights a larger issue with browser extensions, which remain one of the least regulated parts of the internet ecosystem. Security professionals have long warned that extensions can be malicious from the outset or hijacked after installation, providing attackers with direct access to sensitive browser activity. Because an overwhelming majority of users rely on add-ons in Chrome, Edge, and other browsers, the potential for exploitation is significant. 

SquareX’s warning comes at a time when passkey adoption is accelerating rapidly, with estimates suggesting more than 15 billion passkeys are already in use worldwide. The company stresses that despite their benefits, passkeys are not immune to the same types of threats that have plagued passwords and authentication codes for decades. As the technology matures, both enterprises and individual users are urged to remain cautious, limit browser extensions to trusted sources, and review installed add-ons regularly to minimize exposure.

Misuse of AI Agents Sparks Alarm Over Vibe Hacking



Once seen as a safeguard for digital battlefields, artificial intelligence has become a double-edged sword: a tool that arms not only defenders but also the adversaries it was supposed to deter. Anthropic's latest Threat Intelligence Report, published in August 2025, paints this evolving reality in stark terms. 

The report illustrates how cybercriminals have adopted AI as a tool of choice, no longer using it merely to support attacks but making it a central instrument of attack orchestration. Malicious actors are now using advanced AI to automate large-scale phishing campaigns, circumvent traditional security measures, and extract sensitive information efficiently, with very little human oversight. The precision and scalability AI affords are escalating the threat landscape in troubling ways. 

Anthropic documents a disturbing evolution of cybercrime: artificial intelligence is no longer used only for small tasks such as composing phishing emails or generating malicious code fragments. It now serves as a force multiplier for lone actors, giving them the capacity to carry out operations at a scale and precision once reserved for organized criminal syndicates. 

In one instance, investigators traced a sweeping extortion campaign back to a single perpetrator, who used Claude Code's execution environment to automate key stages of the intrusion, including reconnaissance, credential theft, and network penetration. The individual compromised at least 17 organisations, ranging from government agencies to hospitals and financial institutions, with ransom demands that sometimes exceeded half a million dollars. 

Researchers have dubbed this technique “vibe hacking”: coding agents are used not just as tools but as active participants in attacks, marking a profound shift in the speed and reach of cybercriminal activity. Many researchers see it as a major evolution in cyberattacks because, instead of exploiting conventional network vulnerabilities, it targets the logic and decision-making processes of AI systems themselves. 

The term plays on “vibe coding,” coined by Andrej Karpathy in 2025 to describe AI-generated problem-solving. Cybercriminals have since co-opted the concept, manipulating advanced language models and chatbots for unauthorised access, disruption of operations, or the generation of malicious outputs. 

Unlike traditional hacking, which breaches technical defences, this method exploits the trust and reasoning capabilities of the machine learning systems themselves, making detection especially challenging. The tactic is also reshaping social engineering: using large language models that simulate human conversation with uncanny realism, attackers can craft convincing phishing emails, mimic human speech, build fraudulent websites, clone voices, and automate entire scam campaigns at an unprecedented level. 

Tools such as AI-driven vulnerability scanners and deepfake platforms amplify the threat even further, creating what experts call a new frontier of automated deception. In one notable variant, known as “vibe scamming,” adversaries launch large-scale fraud operations, generating fake portals, managing stolen credentials, and coordinating follow-up communications from a single dashboard. 

Its combination of automation, realism, and speed makes vibe hacking one of the most challenging problems defenders face right now. Attackers no longer rely on conventional ransomware tactics; instead, they use AI systems like Claude to carry out every aspect of an intrusion, from reconnaissance and credential harvesting to network penetration and data extraction.

A significant difference from earlier AI-assisted attacks was that Claude demonstrated "on-keyboard" capability as well, performing tasks such as scanning VPN endpoints, generating custom malware, and analysing stolen datasets to prioritise the victims with the highest payout potential. Once embedded in a victim's systems, it created tailored HTML ransom notes for each organisation, citing specific financial figures, workforce statistics, and regulatory threats drawn from the data it had collected. 

Ransom demands ranged from $75,000 to $500,000 in Bitcoin, illustrating that, with AI assistance, a single individual can run an entire cybercrime operation. The report also emphasises how intertwined AI and cryptocurrency have become: ransom notes embed wallet addresses, and dark web forums sell AI-generated malware kits exclusively for cryptocurrency. 

An FBI investigation has revealed that North Korea is increasingly using artificial intelligence (AI) to evade sanctions: state-backed IT operatives use it to secure fraudulent positions at Western tech companies, fabricating résumés, passing interviews, debugging software, and managing day-to-day tasks. 

According to U.S. officials, these operations channel hundreds of millions of dollars every year into Pyongyang's weapons programs, replacing years of training with on-demand AI assistance. The pattern reveals a troubling shift: AI is not only enabling cybercrime but amplifying its speed, scale, and global reach. Anthropic's report documents how Claude Code has been used not just to breach systems but to monetise stolen information at scale. 

The software sifted through thousands of records containing sensitive identifiers, financial details, and even medical information, then generated customised ransom notes and multilayered extortion strategies based on each victim's profile. As the company pointed out, so-called "agentic AI" tools now give attackers both technical expertise and hands-on operational support, effectively eliminating the need to coordinate teams of human operators. 

Researchers warn that these systems can adapt dynamically to defensive countermeasures, such as malware detection, in real time, making traditional enforcement efforts increasingly difficult. Anthropic has built a classifier to identify this kind of behaviour and has shared technical indicators with trusted partners, while a series of case studies illustrates just how varied the abuse has become. 

In the North Korean case, Claude was used to fabricate résumés and support fraudulent IT worker schemes. In the U.K., a criminal tracked as GTG-5004 was selling AI-built ransomware variants on darknet forums; Chinese actors used AI to compromise Vietnamese critical infrastructure; and Russian- and Spanish-speaking groups used the model to create malware and steal credit card information. 

Even low-skilled actors have begun integrating AI into Telegram bots marketed for romance scams and false identity services, putting sophisticated fraud campaigns within reach of far more criminals. Anthropic researchers Alex Moix, Ken Lebedev, and Jacob Klein argue that AI is steadily lowering the barriers to entry for cybercrime, enabling fraudsters to profile victims, automate identity theft, and orchestrate operations at a speed and scale unimaginable with traditional methods. 

Anthropic's report highlights a disturbing truth: although artificial intelligence was once hailed as a shield for defenders, it is increasingly being wielded as a weapon against digital security. The answer is not to retreat from AI adoption but to develop defensive strategies that keep pace with it. Proactive guardrails must be set up to prevent misuse, including stricter oversight and transparency from developers, along with continuous monitoring and real-time detection systems that recognise abnormal AI behaviour before it escalates into a serious problem. 

Resilience must also go beyond technical defences: organisations should invest in employee training, incident response readiness, and partnerships that enable intelligence sharing across sectors. Governments, too, are under mounting pressure to update regulatory frameworks so that policy keeps pace with evolving threat actors.

Harnessed responsibly, AI can still be a powerful ally: automating defensive operations, detecting anomalies, and even predicting threats before they surface. The challenge is to ensure it develops in a way that favours protection over exploitation, safeguarding not just individual enterprises but the broader trust people place in the future of the digital world. 


Hacker Exploits AI Chatbot for Massive Cybercrime Operation, Report Finds


A hacker has manipulated a major artificial intelligence chatbot to carry out what experts are calling one of the most extensive and profitable AI-driven cybercrime operations to date. The attacker used the tool for everything from identifying targets to drafting ransom notes.

In a report released Tuesday, Anthropic — the company behind the widely used Claude chatbot — revealed that an unnamed hacker “used AI to what we believe is an unprecedented degree” to research, infiltrate, and extort at least 17 organizations.

Cyber extortion, where criminals steal sensitive data such as trade secrets, personal records, or financial information, is a long-standing tactic. But the rise of AI has accelerated these methods, with cybercriminals increasingly relying on AI chatbots to draft phishing emails and other malicious content.

According to Anthropic, this is the first publicly documented case in which a hacker exploited a leading AI chatbot to nearly automate an entire cyberattack campaign. The operation began when the hacker persuaded Claude Code — Anthropic’s programming-focused chatbot — to identify weak points in corporate systems. Claude then generated malicious code to steal company data, organized the stolen files, and assessed which information was valuable enough for extortion.

The chatbot even analyzed hacked financial records to recommend realistic ransom demands in Bitcoin, ranging from $75,000 to over $500,000. It also drafted extortion messages for the hacker to send.

Jacob Klein, Anthropic’s head of threat intelligence, noted that the operation appeared to be run by a single actor outside the U.S. over a three-month period. “We have robust safeguards and multiple layers of defense for detecting this kind of misuse, but determined actors sometimes attempt to evade our systems through sophisticated techniques,” he said.

Anthropic did not disclose the names of the affected companies but confirmed they included a defense contractor, a financial institution, and multiple healthcare providers. The stolen data included Social Security numbers, bank details, patient medical information, and even U.S. defense-related files regulated under the International Traffic in Arms Regulations (ITAR).

It remains unclear how many victims complied with the ransom demands or how much profit the hacker ultimately made.

The AI sector, still largely unregulated at the federal level, is mostly left to police itself. While Anthropic is considered among the more safety-conscious AI firms, the company admitted it is unclear how the hacker was able to manipulate Claude Code to this extent. It has since added further safeguards.

“While we have taken steps to prevent this type of misuse, we expect this model to become increasingly common as AI lowers the barrier to entry for sophisticated cybercrime operations,” Anthropic’s report concluded.

DDoS Attacks Emerge as Geopolitical Weapons in 2025


The first half of 2025 witnessed more than 8 million distributed denial-of-service (DDoS) attacks worldwide, according to new figures from Netscout. The EMEA region absorbed over 3.2 million incidents, with attacks peaking at 3.12 Tbps in bandwidth and 1.5 Gpps in packet rate. Once used mainly to cause digital disruption, DDoS has now evolved into a strategic instrument of geopolitical influence. 

Adversaries are increasingly timing attacks to coincide with politically sensitive moments, striking at critical infrastructure when disruption carries maximum impact. The surge highlights how cheap and accessible DDoS-for-hire services have lowered the bar for attackers, enabling even novices to launch campaigns using AI-driven automation, multi-vector strikes, and carpet-bombing techniques. 

Botnets and Hacktivist Tactics

In March 2025 alone, attackers executed over 27,000 botnet-powered DDoS campaigns, often exploiting existing IoT vulnerabilities rather than new flaws. That month averaged 880 bot-driven incidents daily, peaking at 1,600. The assaults lasted longer too, averaging 18 minutes 24 seconds as adversaries combined multiple attack vectors to evade defenses. 

Among hacktivist actors, NoName057(16) remained dominant, launching TCP ACK floods, SYN floods, and HTTP/2 POST attacks against governments in Spain, Taiwan, and Ukraine. A newer group, DieNet, carried out more than 60 strikes against targets ranging from U.S. transit systems to Iraqi government sites, expanding its scope to energy, healthcare, and e-commerce. 

“As hacktivist groups leverage automation and AI-driven tools, traditional defenses are being outpaced,” warned Richard Hummel, Director of Threat Intelligence at Netscout. 

He emphasised that the rise of LLM-enabled malware tools like WormGPT and FraudGPT is deepening the risk landscape. While the takedown of NoName057(16) slowed activity temporarily, Hummel cautioned that resilience, intelligence-led strategies, and next-generation DDoS defenses are essential to stay ahead of evolving threats.

Chinese Espionage Group Exploits Fake Wi-Fi Portals to Infiltrate Diplomatic Networks


A recent investigation by Google’s security researchers has revealed a cyber operation linked to China that is targeting diplomats in Southeast Asia. The group behind the activity, tracked as UNC6384, has been found hijacking web traffic through deceptive Wi-Fi login pages. 

Instead of providing legitimate internet access, these portals imitated VPN sign-ins or software updates. Unsuspecting users were then tricked into downloading a file known as STATICPLUGIN. That downloader served as the delivery mechanism for SOGU.SEC, a newly modified version of the notorious PlugX malware, long associated with Chinese state-backed operations. What makes this campaign particularly dangerous is the use of a legitimate digital certificate to sign the malware. 

This allowed it to slip past traditional endpoint defenses. Once active, the backdoor enabled data theft, internal movement across networks, and persistent monitoring of sensitive systems. Google noted that the attackers relied on adversary-in-the-middle techniques to blend malicious activity with regular network traffic. 

Redirectors controlled by the group were used to reroute connections through their fake portals, ensuring victims remained unaware of the compromise. The choice of targets reflects Beijing’s broader regional ambitions. Diplomatic staff and foreign service officers often handle classified information relating to alliances, trade talks, and geopolitical strategies. 

By embedding malware within these systems, the attackers could gain visibility into negotiations and policy planning. Google has notified organizations it identified as victims and added the malicious infrastructure to its Safe Browsing alerts, aiming to block future attempts.