
Chinese Open AI Models Rival US Systems and Reshape Global Adoption

 

Chinese artificial intelligence models have rapidly narrowed the gap with leading US systems, reshaping the global AI landscape. Once considered followers, Chinese developers are now producing large language models that rival American counterparts in both performance and adoption. At the same time, China has taken a lead in model openness, a factor that is increasingly shaping how AI spreads worldwide. 

This shift coincides with a change in strategy among major US firms. OpenAI, which initially emphasized transparency, moved toward a more closed and proprietary approach from 2022 onward. As access to US-developed models became more restricted, Chinese companies and research institutions expanded the availability of open-weight alternatives. A recent report from Stanford University’s Human-Centered AI Institute argues that AI leadership today depends not only on proprietary breakthroughs but also on reach, adoption, and the global influence of open models. 

According to the report, Chinese models such as Alibaba’s Qwen family and systems from DeepSeek now perform at near state-of-the-art levels across major benchmarks. Researchers found these models to be statistically comparable to Anthropic’s Claude family and increasingly close to the most advanced offerings from OpenAI and Google. Independent indices, including LMArena and the Epoch Capabilities Index, show steady convergence rather than a clear performance divide between Chinese and US models. 

Adoption trends further highlight this shift. Chinese models now dominate downstream usage on platforms such as Hugging Face, where developers share and adapt AI systems. By September 2025, Chinese fine-tuned or derivative models accounted for more than 60 percent of new releases on the platform. During the same period, Alibaba’s Qwen surpassed Meta’s Llama family to become the most downloaded large language model ecosystem, indicating strong global uptake beyond research settings. 

This momentum is reinforced by a broader diffusion effect. As Meta reduces its role as a primary open-source AI provider and moves closer to a closed model, Chinese firms are filling the gap with freely available, high-performing systems. Stanford researchers note that developers in low- and middle-income countries are particularly likely to adopt Chinese models as an affordable alternative to building AI infrastructure from scratch. However, adoption is not limited to emerging markets, as US companies are also increasingly integrating Chinese open-weight models into products and workflows. 

Paradoxically, US export restrictions limiting China’s access to advanced chips may have accelerated this progress. Constrained hardware access forced Chinese labs to focus on efficiency, resulting in models that deliver competitive performance with fewer resources. Researchers argue that this discipline has translated into meaningful technological gains. 

Openness has played a critical role. While open-weight models do not disclose full training datasets, they offer significantly more flexibility than closed APIs. Chinese firms have begun releasing models under permissive licenses such as Apache 2.0 and MIT, allowing broad use and modification. Even companies that once favored proprietary approaches, including Baidu, have reversed course by releasing model weights. 
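
As a concrete illustration of what that flexibility means in practice, the sketch below loads one of the permissively licensed Qwen checkpoints locally with the Hugging Face transformers library. It is a minimal example, not drawn from the Stanford report; the model ID and generation settings are assumptions, and any open-weight checkpoint could be substituted.

```python
# Minimal sketch: pulling an open-weight, permissively licensed model from
# Hugging Face and running it locally with the `transformers` library.
# Assumes `transformers` and `torch` are installed; the 7B checkpoint needs
# a machine with enough memory (or swap in a smaller open-weight model).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"   # one of Alibaba's open-weight releases

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Because the weights are downloaded locally, they can be inspected,
# fine-tuned, or served without going through a proprietary API.
prompt = "In one sentence, what is an open-weight language model?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```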

Despite these advances, risks remain. Open-weight access does not fully resolve concerns about state influence, and many users rely on hosted services where data may fall under Chinese jurisdiction. Safety is another concern, as some evaluations suggest Chinese models may be more susceptible to jailbreaking than US counterparts. 

Even with these caveats, the broader trend is clear. As performance converges and openness drives adoption, the dominance of US commercial AI providers is no longer assured. The Stanford report suggests China’s role in global AI will continue to expand, potentially reshaping access, governance, and reliance on artificial intelligence worldwide.

2026 Digital Frontiers: AI Deregulation to Surveillance Surge

 

Digital technology is rapidly redrawing the boundaries of politics, business and daily life, and 2026 looks set to intensify that disruption—from AI-driven services and hyper-surveillance to new forms of protest organised on social platforms. Experts warn that governments and companies will find it increasingly difficult to balance innovation with safeguards for privacy and vulnerable communities as investment in AI accelerates and its social side-effects become harder to ignore.

One key battleground is regulation. Policymakers are tugged between pressures to “future-proof” oversight and demands from large technology firms to loosen restrictions that could slow development. In Europe, the European Commission is expected to ease parts of its year-old privacy and AI framework, including allowing firms to use personal data to train AI models under “legitimate interest” without seeking consent.

In the United States, President Donald Trump is considering an executive order that could pre-empt state AI laws—an approach aimed at reducing legal friction for Big Tech. The deregulatory push comes alongside rising scrutiny of AI harms, including lawsuits involving OpenAI and claims linked to mental health outcomes.

At the same time, countries are experimenting with tougher rules for children online. Australia has introduced fines of up to A$49.5 million for platforms that fail to take reasonable steps to block under-16 users, a move applied across major social networks and video services, and later extended to AI chatbots. France is also pushing for a European ban on social media for children under 15, while Britain’s Online Safety Act has introduced stringent age requirements for major platforms and pornography sites—though critics argue age checks can expand data collection and may isolate vulnerable young people from support communities.

Another frontier is civic unrest and the digital tools surrounding it. Social media helped catalyse youth-led protests in 2025, including movements that toppled governments in Nepal and Madagascar, and analysts expect Gen Z uprisings to continue in response to corruption, inequality and joblessness. Governments, meanwhile, are increasingly turning to internet shutdowns to suppress mobilisation, with recent examples cited in Tanzania, Afghanistan and Myanmar.

Beyond politics, border control is going digital. Britain plans to use AI to speed asylum decisions and deploy facial age estimation technology, alongside proposals for digital IDs for workers, while Trump has expanded surveillance tools tied to immigration enforcement. Finally, the climate cost of “AI everything” is rising: data centres powering generative AI consume vast energy and water, with Google reporting 6.1 billion gallons of water used by its data centres in 2023 and projections that US data centres could reach up to 9% of national electricity use by 2030.

This Week in Cybersecurity: User Data Theft, AI-Driven Fraud, and System Vulnerabilities

 



This week brought several developments that underscore how cyber threats continue to affect individuals, corporations, and governments across the globe.

In the United States, federal records indicate that Customs and Border Protection is expanding its use of small surveillance drones, shifting from limited testing to routine deployment. These unmanned systems are expected to significantly widen the agency’s monitoring capabilities, with some operations extending beyond physical U.S. borders. At the same time, Immigration and Customs Enforcement is preparing to roll out a new cybersecurity contract that would increase digital monitoring of its workforce. This move aligns with broader government efforts to tighten internal controls amid growing concerns about leaks and internal opposition.

On the criminal front, a major data extortion case has emerged involving user records linked to PornHub, one of the world’s most visited adult platforms. A hacking group associated with a broader online collective claims to have obtained hundreds of millions of data entries tied to paid users. The stolen material reportedly includes account-linked browsing activity and email addresses. The company has stated that the data appears to originate from a third-party analytics service it previously relied on, meaning the exposed records may be several years old. While sensitive financial credentials were not reported as part of the breach, the attackers have allegedly attempted to pressure the company through extortion demands, raising concerns about how behavioral data can be weaponized even years after collection.

Geopolitical tensions also spilled into cyberspace this week. Venezuela’s state oil firm reported a cyber incident affecting its administrative systems, occurring shortly after U.S. authorities seized an oil tanker carrying Venezuelan crude. Officials in Caracas accused Washington of being behind the intrusion, framing it as part of a broader campaign targeting the country’s energy sector. Although the company said oil production continued, external reporting suggests that internal systems were temporarily disabled and shipping operations were disrupted. The U.S. government has not publicly accepted responsibility, and no independently verified technical evidence has been released.

In enterprise security, Cisco disclosed an actively exploited zero-day vulnerability affecting certain email security products used by organizations worldwide. Researchers confirmed that attackers had been abusing the flaw for weeks before public disclosure. The weakness exists within a specific email filtering feature and can allow unauthorized access under certain configurations. Cisco has not yet issued a patch but has advised customers to disable affected components as a temporary safeguard while remediation efforts continue.

Separately, two employees from cybersecurity firms admitted guilt in a ransomware operation, highlighting insider risk within the security industry itself. Court records show that the individuals used their professional expertise to carry out extortion attacks, including one case that resulted in a seven-figure ransom payment.

Together, these incidents reflect the expanding scope of cyber risk, spanning personal data privacy, national infrastructure, corporate security, and insider threats. Staying informed, verifying claims, and maintaining updated defenses remain essential in an increasingly complex digital environment.


Amazon Links Five-Year Cloud Cyber Campaign to Russia’s Sandworm Group

 

Amazon has disclosed details of a long-running hacking campaign targeting customers of its cloud services in multiple countries. The company attributes the activity to Sandworm, a group linked to Russia's intelligence services. Amazon's threat intelligence team found that the campaign had been under way for roughly five years, with the attackers probing for weaknesses in how customers configured their devices rather than flaws in the underlying software, and exploiting those weaknesses to get into customer environments. 

Amazon detailed the findings in December. CJ Moses, Amazon's chief information security officer, described the campaign as a shift in how some groups try to break into important systems: rather than exploiting software that has not been updated, the attackers look for cloud-connected devices that have been set up incorrectly and use them as their entry point into targeted organizations. Amazon views the technique as a way for state-sponsored actors to reach critical infrastructure. 

These attacks stood out because the compromised systems were neither outdated nor missing security updates. Instead, the attackers found misconfigured gateways, routers, and other edge devices that sit between a company's internal network and the external cloud services it uses. Because customers had set these devices up incorrectly, they served as a bridge into the rest of the environment, sparing the attackers the need to find brand-new vulnerabilities or deploy sophisticated malware at the outset. 

Once inside, the attackers harvested credentials such as passwords and moved laterally into other cloud services and internal systems. Amazon's analysis suggests they disguised their activity as ordinary network traffic, relying on valid credentials and normal pathways rather than noisy exploits, so nothing they did looked out of the ordinary to defenders and no alarms were tripped. 

The Sandworm activity was observed repeatedly over several years, with signs of it dating back to at least 2021. Targets were spread around the world, with a clear focus on organizations doing sensitive work such as operating critical infrastructure; Amazon found a particular emphasis on energy companies in North America and Europe. That consistency points to a deliberate, well-planned operation, which is what makes the campaign so serious. 

Security specialists see the findings as part of a broader pattern among advanced threat actors: exploiting mistakes in how systems are configured rather than hunting for unpatched software. As organizations move to hybrid and cloud-based environments, the problem grows, because even skilled IT teams can overlook misconfigurations that leave them persistently exposed, and attackers know such mistakes can be abused without setting off the usual warnings. 

Amazon's disclosure is a reminder that cloud security is about more than routine updates. Organizations running cloud and hybrid environments also need to verify configurations continuously, audit internet-facing devices for exposure, and tightly limit who can access their systems. 
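
As one small illustration of that kind of configuration auditing, the sketch below flags AWS security groups that leave common management ports open to the entire internet. It is a minimal example under assumed conditions, not Amazon's tooling or a description of the actual intrusions; it assumes the boto3 library is installed and AWS credentials are configured.

```python
# Illustrative sketch only: flag security groups that expose common
# management ports to 0.0.0.0/0, one narrow slice of the configuration
# hygiene described above. Assumes boto3 and configured AWS credentials.
import boto3

RISKY_PORTS = {22, 3389, 8443}  # SSH, RDP, a common device admin port

ec2 = boto3.client("ec2")

for group in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in group.get("IpPermissions", []):
        open_to_world = any(
            r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
        )
        port = rule.get("FromPort")  # None means the rule covers all traffic
        if open_to_world and (port is None or port in RISKY_PORTS):
            print(f"{group['GroupId']} ({group['GroupName']}): "
                  f"port {port if port is not None else 'ALL'} open to 0.0.0.0/0")
```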

In a separate disclosure, Amazon also acknowledged detecting attempts by North Korean operators to conduct large-scale cyber activity, though this was unrelated to the Sandworm campaign. The company later clarified that the Russian-linked operation targeted customer-managed devices hosted on AWS rather than Amazon’s own infrastructure, and that the activity represented sustained targeting over several years rather than uninterrupted access.

NYC Inauguration Security Policy Draws Attention for Targeting Specific Tech Tools

 



New York City’s official guidelines for the 2026 mayoral inauguration of Zohran Mamdani include an unusual restriction: attendees are not permitted to bring Flipper Zero devices or Raspberry Pi computers to the event. The prohibition appears in the event’s publicly released FAQ, which outlines items considered unsuitable for entry due to safety and security concerns.

The restricted items list largely follows standard event security practices. Objects such as weapons, fireworks, drones, large bags, strollers, bicycles, alcohol, illegal substances, laser pointers, and blunt instruments are all prohibited. However, the explicit naming of two specific technology products has drawn attention, as most other entries are described in broad categories rather than by product name.

The Flipper Zero is a compact electronic device designed for learning and testing wireless communication systems. It can interact with technologies such as RFID cards, NFC tags, infrared signals, Bluetooth, and other radio-based protocols. These capabilities make it popular among cybersecurity researchers, developers, and students who use it to study how digital systems communicate and identify weaknesses in controlled environments.

Raspberry Pi, on the other hand, is a small and affordable single-board computer that runs full operating systems, most commonly Linux. It is widely used for educational purposes, programming practice, home automation, and prototyping technical projects. With additional accessories, a Raspberry Pi can perform many of the same functions as a traditional computer.

What has raised questions among technology professionals is the selective nature of the ban. While these two devices are specifically listed, laptops and smartphones are not mentioned as restricted items. This distinction has caused confusion, as modern phones and computers can run advanced security tools, wireless analysis software, and penetration-testing platforms with significantly greater processing power.

Devices like the Flipper Zero have previously been the subject of public concern and regulatory attention in several regions. Authorities and lawmakers have, at times, expressed fears that such tools could be misused for activities such as unauthorized access to vehicles, payment systems, or wireless networks. In response, some retailers have temporarily removed listings, and certain governments have proposed restrictions. However, many of these measures were later reversed, and the devices remain legal to own and use in most countries, including the United States.

Security experts note that the risk associated with a device often depends more on intent and usage than on the hardware itself. Tools designed for learning and testing can be misused, but the same is true for everyday consumer electronics. As a result, critics argue that banning specific products without addressing broader technical capabilities may reflect a limited understanding of modern technology.

Event organizers have not yet provided a public explanation for why the Flipper Zero and Raspberry Pi were singled out. Until further clarification is issued, the decision continues to prompt discussion about how cybersecurity concerns are interpreted in public safety planning and whether naming individual devices is an effective approach to risk management.



Why the Leak of 16 Billion Passwords Remains a Live Cybersecurity Threat in 2025

 

As 2025 draws to a close, one cybersecurity problem is still generating discussion: the exposure of roughly 16 billion passwords and login credentials, first reported earlier in the year. Security experts say those credentials continue to be reused in cyberattacks, which makes the leak an ongoing threat rather than a past event. 

The core problem is credential stuffing, in which attackers take usernames and passwords obtained from one source and try them against many other websites. Because so many people reuse the same password across services, credentials stolen long ago can still open accounts today; anyone who did not change their passwords after the exposure remains at risk. 

Monitoring teams have observed a surge in credential stuffing toward the end of the year, including a spike in automated login attempts against virtual private network platforms, with some services seeing millions of authentication attempts over short periods. Attacks like these rely on automation and volume rather than new software vulnerabilities: a list of compromised credentials is often all that is needed to get past the defenses of an exposed platform. 
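
To make the mechanics concrete, the sketch below shows one simplified way defenders spot such bursts: counting failed logins per source IP inside a sliding time window. The thresholds, function name, and sample IP are illustrative assumptions, not any vendor's actual detection logic.

```python
# Simplified illustration of burst detection for credential stuffing:
# count failed logins per source IP within a sliding window and flag
# anything that looks automated. Thresholds are made up for the example.
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_FAILURES = 50          # more than this per IP per window looks automated

recent_failures = defaultdict(deque)   # source IP -> timestamps of failures

def record_failed_login(source_ip: str, timestamp: float) -> bool:
    """Return True if this IP now exceeds the failure threshold."""
    window = recent_failures[source_ip]
    window.append(timestamp)
    # Drop events that have aged out of the window.
    while window and timestamp - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_FAILURES

# Example: feed it events parsed from an authentication log.
if record_failed_login("203.0.113.7", 1_700_000_000.0):
    print("Possible credential stuffing from 203.0.113.7 -- throttle or block")
```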

The threat's persistence is well documented. Law enforcement recently found hundreds of millions of stolen passwords on devices belonging to a single individual, which security officials say shows how long credentials remain usable after they are stolen: once leaked, passwords are traded and passed along for years. Password reuse compounds the danger, since many people use the same password for personal accounts, work systems, and banking. 

The consequences are not hypothetical. If an attacker gets into one reused-password account, they can often get into all of them, opening the door to financial theft, identity fraud, and data exposure. Security professionals stress that timing matters: taking defensive steps before an account is compromised is far less costly than cleaning up afterward. 

Practical steps include checking breach-notification databases to see whether your credentials have been exposed, and, where possible, replacing passwords with stronger authentication methods such as passkeys, which security professionals consider markedly safer. For accounts that still rely on passwords, experts recommend a password manager. 

Password managers generate and store a strong, unique password for each service, so a single leaked credential cannot be used to unlock other accounts. 

The scale of the 16 billion credential leak serves as a reminder that cybersecurity incidents do not end when the headlines fade. Compromised passwords retain their threat value for months or even years, and ongoing vigilance remains essential. 

As attackers continue to exploit old data in new ways, timely action by users remains one of the most effective defenses against account takeover and identity-related cybercrime.

TikTok US Deal: ByteDance Sells Majority Stake Amid Security Fears

 


TikTok’s Chinese parent company, ByteDance, has finalized a landmark deal with US investors to restructure its operations in America, aiming to address longstanding national security concerns and regulatory pressures. The agreement, signed in late December 2025, will see a consortium of American investors take a controlling stake in TikTok’s US business, effectively separating it from ByteDance’s direct management. This move comes after years of scrutiny by US lawmakers, who have raised alarms about data privacy and potential foreign influence through the popular social media platform.

Under the new arrangement, TikTok US will operate as an independent entity, with its own board and leadership team. The investors involved are said to include major US financial firms and technology executives, signaling strong confidence in the platform’s future growth prospects. The deal is expected to preserve TikTok’s core features and user experience for its more than 170 million American users, while ensuring compliance with US data protection laws and national security standards.

Critics and privacy advocates have welcomed the move as a step toward greater transparency and accountability, but some remain skeptical about whether the separation will be deep enough to truly mitigate risks. National security experts argue that as long as ByteDance retains any indirect influence or access to user data, the underlying concerns may persist. 

US regulators have indicated they will continue to monitor the situation closely, with potential further oversight measures possible in the coming months.

The deal is also expected to impact TikTok’s global expansion strategy. With its US operations now under American control, TikTok may find it easier to negotiate partnerships and investments in other Western markets where similar regulatory hurdles exist. However, challenges remain, especially in regions where geopolitical tensions could complicate business operations.

For users, the immediate effect is likely to be minimal. TikTok’s content, features, and community guidelines are expected to remain unchanged in the short term. Over the longer term, the separation could lead to new product innovations and business models tailored specifically to the US market. The deal marks a significant shift in the global tech landscape, reflecting the growing importance of data sovereignty and regulatory compliance in the digital age.

Airbus Signals Shift Toward European Sovereign Cloud to Reduce Reliance on US Tech Giants

 

Airbus, the European aerospace manufacturer, is preparing to reduce its dependence on large American technology companies such as Google and Microsoft, and is rethinking how and where it handles its most important digital work. 

The company plans to issue a request for proposals to move its most critical systems to a European-controlled sovereign cloud, a significant change in how it manages its digital infrastructure and a bid to regain control over that work. Today Airbus relies heavily on Google and Microsoft, with a setup that includes large data centers and collaboration tools such as Google Workspace. 

Airbus also uses Microsoft software for its financial operations. Highly classified and military-related documents, however, are not allowed in public cloud environments, reflecting long-standing concerns about data control and regulatory exposure. 

The company is now evaluating a move of applications from its own premises to the cloud, including enterprise planning and management systems, platforms for running its factories, customer relationship management tools, and the product lifecycle management software where its aircraft designs are kept. 

Because these systems hold vast amounts of sensitive information and are used to run the business, where they are hosted matters. Executives have described the data they contain as a matter of European security, meaning the systems need to stay in Europe on infrastructure controlled by European companies and meet European security standards, so that aircraft design data remains protected. 

Concerns about digital sovereignty are spreading across European industry, amplified by the growing divergence between European and US rules. Large American providers such as Microsoft, Google, and Amazon Web Services have tried to reassure European customers with offerings designed to address these worries, but many European companies remain unconvinced. 

The main source of concern is the US CLOUD Act, a law that lets American authorities compel companies to provide access to data even when that data is stored in other countries. European firms see this as giving US authorities too much power over their digital sovereignty. 

For organizations handling sensitive industrial, defense, or government information, that legal reach is a serious problem. Digital sovereignty means a country or region controls its digital systems, how data is handled, and who can access it, so that its own laws determine how information is managed and protected. Airbus's approach reflects a broader European effort to bring cloud operations in line with the region's laws and priorities. 

The worries about the CLOUD Act are grounded in earlier court proceedings. Microsoft told a French court that it cannot promise to prevent US authorities from obtaining customer data, even when that data is stored in Europe. The company said it has not yet had to hand over any customer data to the US government, but acknowledged that it must comply with the law. 

The episode illustrates the legal bind facing US-based cloud providers, with the CLOUD Act at its center. Airbus’ reported move toward a sovereign European cloud underscores a growing shift among major enterprises that view digital infrastructure not just as a technical choice, but as a matter of strategic autonomy. 

As geopolitical tensions and regulatory scrutiny increase, decisions about where data lives and who ultimately controls access to it are becoming central to corporate risk management and long-term resilience.

FCC Rules Out Foreign Drone Components to Protect National Networks

 


The United States Federal Communications Commission has taken a decisive step in federal oversight of unmanned aerial technology, prohibiting the sale of newly manufactured foreign drones and their essential hardware components in the United States on national security grounds. 

Under the regulatory action announced on Monday, DJI, Autel, and other overseas drone manufacturers have been placed on the FCC's "Covered List," meaning they cannot obtain the agency's mandatory equipment authorization to sell or market new drone models and critical parts to consumers.

The decision follows a directive issued by the U.S. Congress in December 2024, which required DJI and Autel to be added to the list within a year unless the government validated the continued sale of these systems under government monitoring. 

By withholding such authorization, the FCC is signaling that the perceived risks associated with these systems, especially those from Chinese manufacturers, are incompatible with the security thresholds established to protect U.S. technology infrastructure and communication networks. 

The Covered List catalogs technologies that cannot be imported or sold commercially in the United States for security reasons; inclusion means DJI and other foreign drone manufacturers will not be able to obtain the equipment authorization required for importing and selling new drones. 

A statement issued by the agency on Monday emphasized the security rationale for its decision, stating that the ban is meant to mitigate risk associated with potential drone disruption, unauthorized surveillance operations, data extraction, and other airborne threats that could threaten the nation's infrastructure. 

The rule does not significantly disrupt the country's existing drone ecosystem. During the Commission's meeting, officials clarified that the restrictions apply only to future product approvals, not to drones or components already being sold in the United States, so previously authorized models remain legal to operate. 

The FCC has not responded to media inquiries about whether retroactive measures are being contemplated, and the agency has given no indication of immediate plans to revoke past approvals or impose retroactive prohibitions. 

For now, the regulatory scope remains forward-looking, leaving the thousands of foreign-made drones already deployed across commercial, civilian, and industrial sectors unaffected. Previously authorized aircraft can still be owned and sold, but because the ban also covers critical parts, it creates new uncertainty about long-term maintenance, repair, and supply chain security. 

Industry observers warn that replacement batteries, controllers, sensors, and other components crucial to operating drone fleets will become harder to source and more expensive, potentially threatening operational uptime. 

Strong opposition has come from the U.S. commercial drone industry, whose nearly 500,000 FAA-licensed pilots depend on imported aircraft for day-to-day work such as mapping, surveys, infrastructure inspections, agricultural monitoring, and emergency response. According to the Wall Street Journal, a Pilot Institute survey of 8,000 commercial pilots last year found that 43 percent expect the ban to have an "extremely negative" impact on their companies, or even to end their businesses altogether. 

The figures underscore concerns that the policy's economic disruption could rival the security risks it is intended to prevent. In anticipation of the ruling, a number of operators had already begun stockpiling drones and spare parts, a sign that the market expected procurement bottlenecks. 

The level of foreign dependency is profound: DJI, the Shenzhen-based drone manufacturer, alone accounts for 70 to 90 percent of the commercial, government, and consumer drone market in the United States. 

The geospatial data industry is a common example of this reliance. Firms such as Vancouver-based Spexi deploy large networks of freelance pilots to scan regions for mapping imagery and intelligence. 

Spexi CEO Bill Lakeland said the company's pilots primarily operate DJI aircraft such as the widely used Mini series, acknowledging that operations have been mostly "reliant on the DJI Minis," though he confirmed the company is exploring diversification strategies and developing proprietary hardware for the future. 

Cost remains a significant barrier, since domestically manufactured drones are considerably more expensive; even so, the engineering and financial overhead has not stopped firms like Spexi from considering building their own alternatives. 

Beijing has pushed back on the decision. In Lin's words, "The U.S. should correct its erroneous practices and protect Chinese businesses by providing them an environment that is fair, just, and non-discriminatory," a statement reflecting Beijing's view that the action amounts to exclusion rather than risk-based regulation. The dispute mirrors previous FCC actions, in which the agency added several Chinese enterprises to the same Covered List over similar security concerns, effectively preventing those firms from obtaining federal equipment authorizations. 

Unease around Chinese-manufactured drones long predates the current wave of regulation. The U.S. Army has banned the use of DJI drones since 2017, citing cybersecurity vulnerabilities that it believes pose operational risks. 

That same year, the Department of Homeland Security circulated an internal advisory warning that Chinese-built unmanned aerial systems may transmit sensitive data, such as flight logs and geolocation records, back to their manufacturers. Concern about cross-border data exposure was therefore building well before Congress and federal agencies began formalizing import controls. 

The FCC set out the rationale behind its sweeping drone restrictions in detail, arguing that unmanned aerial systems and components manufactured overseas are highly vulnerable to exploitation by foreign actors. The covered components include data transmission modules, communication systems, flight controllers, ground control stations, navigation units, batteries, and smart power systems. 

Such equipment, the agency said, could be manipulated to enable persistent surveillance, unauthorized extraction of sensitive data, and even destructive actions within the U.S. The FCC nevertheless indicated that specific foreign-made drones or components could be exempted from the ban if the Department of Homeland Security deems they do not pose such risks, underlining that the restrictions are based on assessed security vulnerabilities rather than blanket exclusion. 

The new rule also preserves continuity for current owners and the retail sector. Consumers can continue to use drones they have already purchased, and authorized retailers remain eligible to sell, import, and market models that have already received government approval. 

The regulatory development follows a broader national security move: President Donald Trump signed the National Defense Authorization Act for Fiscal Year 2026 last week, which included enhanced measures to protect the nation's airspace from unmanned aircraft that pose a threat to public safety or critical infrastructure. 

The decision echoes earlier FCC efforts to tighten technological controls. Earlier this year, the agency expanded its "Covered List" to include Russian cybersecurity firm Kaspersky, effectively barring the company from offering its software directly or indirectly to Americans over the same concerns about data integrity and national security. 

The move ranks among the most significant regulatory interventions ever made in the U.S. drone industry, reinforcing a broader federal strategy that connects supply-chain sovereignty, aviation security, and communications infrastructure.

Although limited to future approvals, the ban marks a significant shift in the policy environment: market access now depends heavily on geopolitical risk assessments, hardware traceability, and data governance transparency, among other factors. 

Industry analysts point out that the ruling may accelerate domestic innovation by incentivizing U.S. manufacturers to expand production, improve cost efficiency, and strengthen cybersecurity standards at the component level. 

Commercial operators, meanwhile, are advised to prepare for short-term constraints by reevaluating vendor reliance, maintaining parts inventories where technically viable, and favoring modular platforms that allow interoperability across manufacturers. 

At the same time, policymakers will have to balance national security with economic continuity, ensuring that safeguards do not unintentionally obstruct critical services such as disaster response, infrastructure monitoring, and geospatial intelligence. The ruling could ultimately transform the world's largest commercial UAS market, defining a new way for drones to be built, approved, deployed, and secured.

A Year of Unprecedented Cybersecurity Incidents Redefined Global Risk in 2025

 

The year 2025 marked a turning point in the global cybersecurity landscape, with the scale, frequency, and impact of attacks surpassing anything seen before. Across governments, enterprises, and critical infrastructure, breaches were no longer isolated technical failures but events with lasting economic, political, and social consequences. The year served as a stark reminder that digital systems underpinning modern life remain deeply vulnerable to both state-backed and financially motivated actors. 

Government systems emerged as some of the most heavily targeted environments. In the United States, multiple federal agencies suffered intrusions throughout the year, including departments responsible for financial oversight and national security. Exploited software vulnerabilities enabled attackers to gain access to sensitive systems, while foreign threat actors were reported to have siphoned sealed judicial records from court filing platforms. The most damaging episode involved widespread unauthorized access to federal databases, resulting in what experts described as the largest exposure of U.S. government data to date. Legal analysts warned that violations of established security protocols could carry long-term legal and national security ramifications. 

The private sector faced equally severe challenges, particularly from organized ransomware and extortion groups. One of the most disruptive campaigns involved attackers exploiting a previously unknown flaw in widely used enterprise business software. By silently accessing systems months before detection, the group extracted vast quantities of sensitive employee and executive data from organizations across education, healthcare, media, and corporate sectors. When victims were finally alerted, many were confronted with ransom demands accompanied by proof of stolen personal information, highlighting the growing sophistication of data-driven extortion tactics. 

Cloud ecosystems also proved to be a major point of exposure. A series of downstream breaches at technology service providers resulted in the theft of approximately one billion records stored within enterprise cloud platforms. By compromising vendors with privileged access, attackers were able to reach data belonging to some of the world’s largest technology companies. The stolen information was later advertised on leak sites, with new victims continuing to surface long after the initial disclosures, underscoring the cascading risks of interconnected software supply chains. 

In the United Kingdom, cyberattacks moved beyond data theft and into large-scale operational disruption. Retailers experienced outages and customer data losses that temporarily crippled supply chains. The most economically damaging incident struck a major automotive manufacturer, halting production for months and triggering financial distress across its supplier network. The economic fallout was so severe that government intervention was required to stabilize the workforce and prevent wider industrial collapse, signaling how cyber incidents can now pose systemic economic threats. 

Asia was not spared from escalating cyber risk. South Korea experienced near-monthly breaches affecting telecom providers, technology firms, and online retail platforms. Tens of millions of citizens had personal data exposed due to prolonged undetected intrusions and inadequate data protection practices. In one of the year’s most consequential incidents, a major retailer suffered months of unauthorized data extraction before discovery, ultimately leading to executive resignations and public scrutiny over corporate accountability. 

Collectively, the events of 2025 demonstrated that cybersecurity failures now carry consequences far beyond IT departments. Disruption, rather than data theft alone, has become a powerful weapon, forcing governments and organizations worldwide to reassess resilience, accountability, and the true cost of digital insecurity.

India Warns on ‘Silent Calls’ as Telecom Firms Roll Out Verified Caller Names to Curb Fraud

 

India’s telecom authorities have issued a fresh advisory highlighting how ordinary phone calls are increasingly being used as entry points for scams, even as a long-discussed caller identity system begins to take shape as a countermeasure.

For many users, the pattern is familiar: the phone rings, the call is picked up, and no one responds. According to the Department of Telecommunications (DoT), these “silent calls” are intentional rather than technical faults.

Officials explain that such calls are designed to check whether a number is active. Once answered, the number is marked as live and becomes more valuable to fraud networks. It can then be circulated within scam databases and later targeted for phishing, impersonation or financial fraud. The DoT has advised users to block these numbers immediately and report them via the government’s Sanchar Saathi portal, which aims to gather public inputs to identify and disrupt telecom abuse.

The warning signals a broader concern within the government: many frauds today begin not with advanced hacking tools, but with simple behavioural triggers that rely on users answering calls out of habit.

At the same time, India’s telecom ecosystem is seeing a gradual but significant change. Reliance Jio has started deploying Caller Name Presentation (CNAP), a feature that shows the registered name of the caller on the recipient’s screen.

Unlike third-party caller-ID applications that depend on user-generated labels, CNAP pulls data directly from subscriber details submitted during SIM registration. Since this information is document-verified, authorities argue it is harder to falsify on a large scale.

Supporters believe this could help restore confidence in voice calls, which have become a weak link in the digital security chain. Seeing a verified name, they say, may discourage users from engaging with unknown or spoofed callers. However, the initiative also revives concerns around privacy, data accuracy and the risk of misuse—issues regulators and telecom companies say they are addressing through a phased rollout.

Regulators Push for a Unified Approach

The Telecom Regulatory Authority of India (TRAI) has instructed other major operators—Airtel, Vodafone-Idea (Vi) and BSNL—to implement CNAP, aiming to make it a nationwide standard rather than a single-network feature.

Progress varies by operator. Jio’s CNAP is already active across several regions in eastern, northern and southern India, including West Bengal, Kerala, Bihar, Rajasthan and Odisha. Airtel has introduced the feature in select circles such as West Bengal, Gujarat and Madhya Pradesh. Vodafone-Idea has rolled it out primarily in Maharashtra, with limited testing in Tamil Nadu, while BSNL is still conducting pilot trials.

Industry executives note that the rollout is technically demanding, involving upgrades to older network infrastructure and coordination between operators. Regulators view CNAP as one layer in a broader anti-spam strategy that also includes call filtering, identification of bulk callers and tighter controls on telemarketers.

The rise of silent calls alongside verified caller names reflects a larger shift: phone calls are no longer inherently trustworthy. Scammers thrive on anonymity and volume, while authorities are responding with greater emphasis on identity and traceability.

Whether CNAP will significantly reduce fraud remains uncertain. Experts point out that fake or improperly verified SIM registrations still exist, and user trust in displayed names will depend on data quality and enforcement.

For now, the official guidance is cautious. Silent calls should be treated as red flags, not harmless glitches. Caller names, even when verified, should be assessed carefully. In a country handling billions of calls daily, small changes in how people respond to their phones could meaningfully influence the fight between fraud and vigilance.

Microsoft Users Warned as Hackers Use Typosquatting to Steal Login Credentials

 

Microsoft account holders are being urged to stay vigilant as cybercriminals increasingly target them through a deceptive tactic known as typosquatting. Attackers are registering look-alike websites and email addresses that closely resemble legitimate Microsoft domains, with the goal of tricking users into revealing their passwords.

Harley Sugarman, CEO of Anagram Security, recently highlighted this risk by sharing a screenshot of a phishing email he received that used this method. In the sender’s address, the letter “m” was cleverly replaced with an “r” and an “n,” creating a nearly identical visual match. Because the difference is subtle, many users may not notice the change and could easily be misled.

Typosquatting itself is not a new cybercrime technique. For years, hackers and online fraudsters have relied on it to exploit small typing errors or momentary lapses in attention. The strategy involves purchasing domains or email addresses that closely mimic real ones, hoping users will accidentally visit or click them. Once there, victims are often presented with fake login pages designed to look authentic. Any credentials entered are then captured and sent directly to the attackers.

A major reason this tactic continues to succeed is that many people don’t take time to carefully inspect URLs or sender addresses. A single incorrect character in a link or email can redirect users to a convincing replica of a legitimate site, where usernames and passwords are harvested without suspicion.
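
As a toy illustration of how such look-alikes can be caught programmatically, the sketch below normalizes a few common character substitutions (such as "rn" standing in for "m") and compares the result against a list of trusted domains. The domain list, substitutions, and function name are assumptions made for the example, not part of any product mentioned in this article.

```python
# Toy example (not a production detector): catch the "rn" vs "m" trick by
# normalizing common look-alike sequences and comparing the result against
# domains you actually trust.
TRUSTED_DOMAINS = {"microsoft.com", "outlook.com", "live.com"}

LOOKALIKE_SUBSTITUTIONS = [
    ("rn", "m"),   # "rnicrosoft.com" renders almost like "microsoft.com"
    ("vv", "w"),
    ("0", "o"),
    ("1", "l"),
]

def looks_like_typosquat(domain: str) -> bool:
    domain = domain.lower()
    if domain in TRUSTED_DOMAINS:
        return False
    normalized = domain
    for fake, real in LOOKALIKE_SUBSTITUTIONS:
        normalized = normalized.replace(fake, real)
    # Suspicious when the normalized form matches a trusted domain
    # even though the raw string did not.
    return normalized in TRUSTED_DOMAINS

print(looks_like_typosquat("rnicrosoft.com"))  # True  -> treat as phishing
print(looks_like_typosquat("microsoft.com"))   # False -> legitimate
```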

To reduce the risk of falling victim, security experts recommend switching to passkeys wherever possible, as they are significantly more secure than traditional passwords. Microsoft and other tech companies have been actively encouraging this shift. For users who can’t yet adopt passkeys, strong and unique passwords—or long passphrases—are essential, ideally stored and autofilled using a reputable password manager.

Additional protection measures include enabling browser safeguards. Both Microsoft Edge and Google Chrome can flag suspicious or mistyped URLs if these features are turned on. Bookmarking frequently used websites, such as email services, banking platforms, shopping portals, and social media accounts, can also help ensure you’re visiting the correct destination.

Standard phishing precautions remain just as important. Be skeptical of unexpected emails claiming there’s an issue with your account. Instead of clicking links, log in through a trusted, independent method to verify any alerts. Avoid downloading attachments or replying to unsolicited messages, as engagement can signal to scammers that your account is active.

Carefully reviewing sender email addresses, hovering over links to preview their destinations, and watching for messages that create urgency—such as demands to immediately reset a password—can help identify phishing attempts. Using reliable antivirus software adds another layer of defense against malware and other online threats.

Although typosquatting is one of the oldest scams in cybersecurity, it continues to resurface because it preys on simple mistakes. Staying alert while browsing unfamiliar websites or checking your inbox remains one of the most effective ways to stay safe.

Webrat Malware Targets Students and Junior Security Researchers Through Fake Exploits

 

In early 2025, security researchers uncovered a new malware family dubbed Webrat, which at the time predominantly targeted ordinary users through deceptive distribution methods. Early propagation involved disguising the malware as cheats for online games such as Rust, Counter-Strike, and Roblox, as well as cracked versions of commercial software. By the second half of the year, however, the Webrat operators had widened their reach, shifting toward a new target group: students and young professionals seeking careers in information security. 

This evolution became visible in September and October 2025, when researchers discovered a campaign spreading Webrat through public GitHub repositories. The attackers packaged the malicious payloads as proof-of-concept exploits for highly publicized software vulnerabilities, chosen for their prominence in security advisories and their high severity ratings, which made the repositories look relevant and credible to people searching for hands-on learning material.  

Each of the GitHub repositories was crafted to closely resemble legitimate exploit releases. They all had detailed descriptions outlining the background of the vulnerability, affected systems, steps to install it, usage, and the most recommended ways of mitigation. Many of the repository descriptions have a similar or almost identical structure; the defensive advice offered is often strikingly similar, adding strong evidence that they were generated through automated or AI-assisted tools rather than various independent researchers. Inside each repository, users were instructed to fetch an archive with a password, labeled as the exploit package. 

The password was hidden in the name of one of the files inside the archive, a move intended to lure users into unzipping the file and researching its contents. Once unpacked, the archive contains a set of files meant to masquerade or divert attention from the actual payload. Among those is a corrupted dynamic-link library file meant as a decoy, along with a batch file whose purpose was to instruct execution of the main malicious executable file. The main executable, when run, executed several high-risk actions: It tried to elevate its privileges to administrator level, disabled the inbuilt security protections such as Windows Defender, and then downloaded the Webrat backdoor from a remote server and started it.

The Webrat backdoor gives attackers persistent access to infected systems, enabling wide-ranging surveillance and data theft. Webrat can steal credentials and other sensitive information from cryptocurrency wallets and applications such as Telegram, Discord, and Steam. Beyond credential theft, it supports spyware functions including screen capture, keylogging, and audio and video surveillance via connected microphones and webcams. The functionality observed in this campaign closely matches versions of Webrat described in earlier incidents.

The shift to disguising the malware as vulnerability exploits appears aimed at hobbyists rather than professionals. Experienced analysts typically examine untrusted code in a sandbox or isolated environment, where such attacks have limited consequences.

Consequently, researchers believe the attack targets students and beginners with weaker operational security habits. The campaign serves as a reminder both of the risks of running unverified code downloaded from open-source platforms and of the need to perform malware analysis and exploit testing inside a sandbox or virtual machine.

Security professionals and students are encouraged to stay vigilant, to trust only known and reputable security tools, and to disable protection mechanisms only when there is a clear and well-justified reason to do so.

FBI Discovers 630 Million Stolen Passwords in Major Cybercrime Investigation

 

A newly disclosed trove of stolen credentials has underscored the scale of modern cybercrime after U.S. federal investigators uncovered hundreds of millions of compromised passwords on devices seized from a single suspected hacker. The dataset, comprising approximately 630 million passwords, has now been integrated into the widely used Have I Been Pwned (HIBP) database, significantly expanding its ability to warn users about exposed credentials. 

The passwords were provided to HIBP by the Federal Bureau of Investigation as part of ongoing cybercrime investigations. According to Troy Hunt, the security researcher behind the service, this latest contribution is particularly striking because it originates from one individual rather than a large breach aggregation. While the FBI has shared compromised credentials with HIBP for several years, the sheer volume associated with this case highlights how centralized and extensive credential theft operations have become. 

Initial analysis suggests the data was collected from a mixture of underground sources, including dark web marketplaces, messaging platforms such as Telegram, and large-scale infostealer malware campaigns. Not all of the passwords were previously unknown, but a meaningful portion had never appeared in public breach repositories. Roughly 7.4% of the dataset represents newly identified compromised passwords, amounting to tens of millions of credentials that were previously undetectable by users relying on breach-monitoring tools. 

Security experts warn that even recycled or older passwords remain highly valuable to attackers. Stolen credentials are frequently reused in credential-stuffing attacks, where automated tools attempt the same password across multiple platforms. Because many users continue to reuse passwords, a single exposed credential can provide access to multiple accounts, amplifying the potential impact of historical data leaks. 

The expanded dataset is now searchable through the Pwned Passwords service, which allows users to check whether a password has appeared in known breach collections. The system is designed to preserve privacy by hashing submitted passwords and ensuring no personally identifiable information is stored or associated with search results. This enables individuals and organizations to proactively block compromised passwords without exposing sensitive data. 
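The privacy-preserving lookup is based on a k-anonymity range query: the client hashes the password with SHA-1, sends only the first five hex characters of the hash to the service, and compares the returned suffixes locally, so the full hash never leaves the machine. A minimal Python sketch of that flow, assuming the public api.pwnedpasswords.com range endpoint and the third-party requests library, might look like this.

import hashlib
import requests

def pwned_count(password):
    """Check a password against Pwned Passwords using the k-anonymity range API."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # Only the 5-character prefix is ever sent over the network.
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

# A count above zero means the password has appeared in known breach data.
print(pwned_count("password123"))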

The discovery has renewed calls for stronger credential hygiene across both consumer and enterprise environments. Cybersecurity professionals consistently emphasize that password reuse and weak password creation remain among the most common contributors to account compromise. Password managers are widely recommended as an effective countermeasure, as they allow users to generate and store long, unique passwords for every service without relying on memory. 
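As a small illustration of the "long, unique password" advice, the Python sketch below uses the standard library's secrets module to generate a random password; the length and character set are arbitrary choices for the example, and in practice a password manager handles both generation and storage.

import secrets
import string

def generate_password(length=24):
    """Generate a long random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a fresh, unique password for each service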

In addition to password managers, broader adoption of passkeys and multi-factor authentication is increasingly viewed as essential. These technologies significantly reduce reliance on static passwords and make stolen credential databases far less useful to attackers. Many platforms now support these features, yet adoption remains inconsistent. 

As law enforcement continues to uncover massive credential repositories during cybercrime investigations, experts caution that similar discoveries are likely in the future. Each new dataset reinforces the importance of assuming passwords will eventually be exposed and building defenses accordingly. Regular password audits, automated breach detection, and layered authentication controls are now considered baseline requirements for maintaining digital security.

Trend Micro Warns: 'Vibe Crime' Ushers in Agentic AI-Driven Cybercrime Era

 

Cybersecurity firm Trend Micro has sounded the alarm over what it calls the rise of "vibe crime": fully automated cybercriminal operations powered by agentic AI, marking a fundamental shift away from traditional ransomware and phishing campaigns. The company's report forecasts a massive increase in attack volume as criminals take advantage of autonomous AI agents to run continuous, large-scale operations.

From service to servant model 

The criminal ecosystem is evolving from "Cybercrime as a Service" to "Cybercrime as a Servant," where chained AI agents and autonomous orchestration layers manage end-to-end criminal enterprises. Robert McArdle, director of forward-looking threat research at Trend Micro, stressed that the real risk does not come from sudden explosive growth but rather from the gradual automation of attacks that previously required a lot of skill, time, and effort.

"We will see an optimization of today's leading attacks, the amplification of attacks that previously had poor ROI, and the emergence of brand new 'Black Swan' cybercrime business models," McArdle stated. 

Researchers expect enterprise cloud and AI infrastructure to be increasingly targeted, as criminals use these platforms as sources of scalable computing power, AI services, storage, and potentially valuable data for running their agentic infrastructures. This transformation is expected to bring with it new, previously unthinkable types of attacks and to shake up the entire criminal ecosystem, introducing new revenue streams and business models.

Industry-wide alarm bells 

Trend Micro's alert echoes other warnings about an "agentic" AI threat in cyberspace. Anthropic acknowledged in September that its AI tools had been "weaponized" by hackers: criminals employed Claude Code to automate reconnaissance, harvest credentials, and breach networks at 17 organizations across healthcare, emergency services, and government.

In a similar vein, the 2025 State of Malware report from Malwarebytes warned that agentic AI would “continue to dramatically change cyber criminal tactics” and accelerate development of even more dangerous malware. The researchers further stressed that defensive platforms must deploy their own autonomous agents and orchestrators to counter this evolution or face being overwhelmed. Organizations need to reassess security strategies immediately and invest in AI-driven defense before criminals industrialize their AI capabilities, or risk falling behind in an exponential arms race.

Network Detection and Response Defends Against AI Powered Cyber Attacks

 

Cybersecurity teams are facing growing pressure as attackers increasingly adopt artificial intelligence to accelerate, scale, and conceal malicious activity. Modern threat actors are no longer limited to static malware or simple intrusion techniques. Instead, AI-powered campaigns are using adaptive methods that blend into legitimate system behavior, making detection significantly more difficult and forcing defenders to rethink traditional security strategies. 

Threat intelligence research from major technology firms indicates that offensive uses of AI are expanding rapidly. Security teams have observed AI tools capable of bypassing established safeguards, automatically generating malicious scripts, and evading detection mechanisms with minimal human involvement. In some cases, AI-driven orchestration has been used to coordinate multiple malware components, allowing attackers to conduct reconnaissance, identify vulnerabilities, move laterally through networks, and extract sensitive data at machine speed. These automated operations can unfold faster than manual security workflows can reasonably respond. 

What distinguishes these attacks from earlier generations is not the underlying techniques, but the scale and efficiency at which they can be executed. Credential abuse, for example, is not new, but AI enables attackers to harvest and exploit credentials across large environments with only minimal input. Research published in mid-2025 highlighted dozens of ways autonomous AI agents could be deployed against enterprise systems, effectively expanding the attack surface beyond conventional trust boundaries and security assumptions. 

This evolving threat landscape has reinforced the relevance of zero trust principles, which assume no user, device, or connection should be trusted by default. However, zero trust alone is not sufficient. Security operations teams must also be able to detect abnormal behavior regardless of where it originates, especially as AI-driven attacks increasingly rely on legitimate tools and system processes to hide in plain sight. 

As a result, organizations are placing renewed emphasis on network detection and response technologies. Unlike legacy defenses that depend heavily on known signatures or manual investigation, modern NDR platforms continuously analyze network traffic to identify suspicious patterns and anomalous behavior in real time. This visibility allows security teams to spot rapid reconnaissance activity, unusual data movement, or unexpected protocol usage that may signal AI-assisted attacks. 

NDR systems also help security teams understand broader trends across enterprise and cloud environments. By comparing current activity against historical baselines, these tools can highlight deviations that would otherwise go unnoticed, such as sudden changes in encrypted traffic levels or new outbound connections from systems that rarely communicate externally. Capturing and storing this data enables deeper forensic analysis and supports long-term threat hunting. 
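To illustrate the baseline idea in miniature, the Python sketch below compares a host's current outbound traffic volume against its historical mean and standard deviation and flags large deviations. The sample numbers are invented, and real NDR platforms use far richer behavioral models than a single z-score, but the principle of measuring how far activity sits from normal is the same.

from statistics import mean, stdev

# Invented hourly outbound-byte counts for one host, standing in for a learned baseline.
history = [1.2e6, 0.9e6, 1.1e6, 1.0e6, 1.3e6, 0.8e6, 1.1e6]

def is_anomalous(current_bytes, baseline, z_threshold=3.0):
    """Flag traffic that deviates from the baseline by more than z_threshold sigmas."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return current_bytes != mu
    return abs(current_bytes - mu) / sigma > z_threshold

print(is_anomalous(1.15e6, history))  # False: within the host's normal range
print(is_anomalous(25e6, history))    # True: a sudden surge in outbound data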

Crucially, NDR platforms use automation and behavioral analysis to classify activity as benign, suspicious, or malicious, reducing alert fatigue for security analysts. Even when traffic is encrypted, network-level context can reveal patterns consistent with abuse. As attackers increasingly rely on AI to mask their movements, the ability to rapidly triage and respond becomes essential.  

By delivering comprehensive network visibility and faster response capabilities, NDR solutions help organizations reduce risk, limit the impact of breaches, and prepare for a future where AI-driven threats continue to evolve.

VPN Surge: Americans Bypass Age Verification Laws

 

Americans are increasingly turning to VPNs as states enact stringent age verification laws that limit what minors can see online. These regulations compel users to provide personal information, such as government-issued IDs, to verify their age, raising concerns about privacy and security. As a result, VPN usage is surging, particularly in states such as Missouri, Florida, Louisiana, and Utah, where VPN searches have jumped roughly fourfold following the new regulations.

How age verification laws work 

Age verification laws require websites and apps that host a substantial amount of "material harmful to minors" to verify users' ages before granting access. This step frequently entails submitting photographs or scans of ID documents, potentially exposing personal information to breaches. Even where the laws forbid companies from storing this information, there is no assurance it will be kept secure, given the record of massive data breaches at major technology firms.

The vague definition of "harmful content" suggests that age verification could eventually be required on many other types of digital platforms, such as social media, streaming services, and video games. That expansion raises questions about digital privacy and identity protection for all users, not just minors. According to a recent Pew Research Center survey, 40% of Americans say government regulation of business does more harm than good, illustrating bipartisan wariness of laws like these.

Bypassing restrictions with VPNs 

VPN services let users mask their IP addresses and sidestep these age verification checks, preserving their anonymity and keeping sensitive information out of third-party hands. VPNs are available on desktop and mobile devices, and some can also be used on the Amazon Fire TV Stick, among other platforms. To maximize privacy and security, experts recommend choosing VPN providers with robust no-logs policies and strong encryption.

Rising VPN adoption has fueled speculation about whether US lawmakers will attempt to ban VPNs outright, which would be yet another blow to digital privacy and freedom. For now, VPNs remain a popular option for Americans who want to keep their online activity out of reach of intrusive age verification schemes.

US DoJ Charges 54 Linked to ATM Jackpotting Scheme Using Ploutus Malware, Tied to Tren de Aragua

 

The U.S. Department of Justice (DoJ) has revealed the indictment of 54 people for their alleged roles in a sophisticated, multi-million-dollar ATM jackpotting operation that targeted machines across the United States.

According to authorities, the operation involved the use of Ploutus malware to compromise automated teller machines and force them to dispense cash illegally. Investigators say the accused individuals are connected to Tren de Aragua (TdA), a Venezuelan criminal group that the U.S. State Department has classified as a foreign terrorist organization.

The DoJ noted that in July 2025, the U.S. government imposed sanctions on TdA’s leader, Hector Rusthenford Guerrero Flores, also known as “Niño Guerrero,” along with five senior members. They were sanctioned for alleged involvement in crimes including “illicit drug trade, human smuggling and trafficking, extortion, sexual exploitation of women and children, and money laundering, among other criminal activities.”

An indictment returned on December 9, 2025, charged 22 individuals with offenses such as bank fraud, burglary, and money laundering. Prosecutors allege that TdA used ATM jackpotting attacks to steal millions of dollars in the U.S. and distribute the proceeds among its network.

In a separate but related case, another 32 defendants were charged under an indictment filed on October 21, 2025. These charges include “one count of conspiracy to commit bank fraud, one count of conspiracy to commit bank burglary and computer fraud, 18 counts of bank fraud, 18 counts of bank burglary, and 18 counts of damage to computers.”

If found guilty, the defendants could face sentences ranging from 20 years to as much as 335 years in prison.

“These defendants employed methodical surveillance and burglary techniques to install malware into ATM machines, and then steal and launder money from the machines, in part to fund terrorism and the other far-reaching criminal activities of TDA, a designated Foreign Terrorist Organization,” said Acting Assistant Attorney General Matthew R. Galeotti of the Justice Department’s Criminal Division.

Officials explained that the scheme relied on recruiting individuals to physically access ATMs nationwide. These recruits reportedly carried out reconnaissance to study security measures, tested whether alarms were triggered, and then accessed the machines’ internal components.

Once access was obtained, the attackers allegedly installed Ploutus either by swapping the ATM’s hard drive with a preloaded one or by using removable media such as a USB drive. The malware can send unauthorized commands to the ATM’s Cash Dispensing Module, causing it to release money on demand.

“The Ploutus malware was also designed to delete evidence of malware in an effort to conceal, create a false impression, mislead, or otherwise deceive employees of the banks and credit unions from learning about the deployment of the malware on the ATM,” the DoJ said. “Members of the conspiracy would then split the proceeds in predetermined portions.”

Ploutus first surfaced in Mexico in 2013. Security firms later documented its evolution, including its exploitation of vulnerabilities in Windows XP-based ATMs and its ability to control Diebold machines running multiple Windows versions.

“Once deployed to an ATM, Ploutus-D makes it possible for a money mule to obtain thousands of dollars in minutes,” researchers noted. “A money mule must have a master key to open the top portion of the ATM (or be able to pick it), a physical keyboard to connect to the machine, and an activation code (provided by the boss in charge of the operation) in order to dispense money from the ATM.”

The DoJ estimates that since 2021, at least 1,529 jackpotting incidents have occurred in the U.S., resulting in losses of approximately $40.73 million as of August 2025.

“Many millions of dollars were drained from ATM machines across the United States as a result of this conspiracy, and that money is alleged to have gone to Tren de Aragua leaders to fund their terrorist activities and purposes,” said U.S. Attorney Lesley Woods.