Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label AI cyberattacks. Show all posts

AI Browsers Raise Privacy and Security Risks as Prompt Injection Attacks Grow

 

A new wave of competition is stirring in the browser market as companies like OpenAI, Perplexity, and The Browser Company aggressively push to redefine how humans interact with the web. Rather than merely displaying pages, these AI browsers will be engineered to reason, take action independently, and execute tasks on behalf of end users. At least four such products, including ChatGPT's Atlas, Perplexity's Comet, and The Browser Company's Dia, represent a transition reminiscent of the early browser wars, when Netscape and Internet Explorer battled to compete for a role in the shaping of the future of the Internet. 

Whereas the other browsers rely on search results and manual navigation, an AI browser is designed to understand natural language instructions and perform multi-step actions. For instance, a user can ask an AI browser to find a restaurant nearby, compare options, and make a reservation without the user opening the booking page themselves. In this context, the browser has to process both user instructions and the content of each of the webpages it accesses, intertwining decision-making with automation. 

But this capability also creates a serious security risk that's inherent in the way large language models work. AI systems cannot be sure whether a command comes from a trusted user or comes with general text on an untrusted web page. Malicious actors may now inject malicious instructions within webpages, which can include uses of invisible text, HTML comments, and image-based prompts. Unbeknownst to them, that might get processed by an AI browser along with the user's original request-a type of attack now called prompt injection. 

The consequence of such attacks could be dire, since AI browsers are designed to gain access to sensitive data in order to function effectively. Many ask for permission to emails, calendars, contacts, payment information, and browsing histories. If compromised, those very integrations become conduits for data exfiltration. Security researchers have shown just how prompt injections can trick AI browsers into forwarding emails, extracting stored credentials, making unauthorized purchases, or downloading malware without explicit user interaction. One such neat proof-of-concept was that of Perplexity's Comet browser, wherein the researchers had embedded command instructions in a Reddit comment, hidden behind a spoiler tag. When the browser arrived and was asked to summarise the page, it obediently followed the buried commands and tried to scrape email data. The user did nothing more than request a summary; passive interactions indeed are enough to get someone compromised. 

More recently, researchers detailed a method called HashJack, which abuses the way web browsers process URL fragments. Everything that appears after the “#” in a URL never actually makes it to the server of a given website and is only accessible to the browser. An attacker can embed nefarious commands in this fragment, and AI-powered browsers may read and act upon it without the hosting site detecting such commands. Researchers have already demonstrated that this method can make AI browsers show the wrong information, such as incorrect dosages of medication on well-known medical websites. Though vendors are experimenting with mitigations, such as reinforcement learning to detect suspicious prompts or restricting access during logged-out browsing sessions, these remain imperfect. 

The flexibility that makes AI browsers useful also makes them vulnerable. As the technology is still in development, it shows great convenience, but the security risks raise questions of whether fully trustworthy AI browsing is an unsolved problem.

AI-Assisted Cyberattacks Signal a Shift in Modern Threat Strategies and Defense Models

 

A new wave of cyberattacks is using large language models as an offensive tool, according to recent reporting from Anthropic and Oligo Security. Both groups said hackers used jailbroken LLMs-some capable of writing code and conducting autonomous reasoning-to conduct real-world attack campaigns. While the development is alarming, cybersecurity researchers had already anticipated such advancements. 

Earlier this year, a group at Cornell University published research predicting that cybercriminals would eventually use AI to automate hacking at scale. The evolution is consistent with a recurring theme in technology history: Tools designed for productivity or innovation inevitably become dual-use. Any number of examples-from drones to commercial aircraft to even Alfred Nobel's invention of dynamite-demonstrate how innovation often carries unintended consequences. 

The biggest implication of it all in cybersecurity is that LLMs today finally allow attackers to scale and personalize their operations simultaneously. In the past, cybercriminals were mostly forced to choose between highly targeted efforts that required manual work or broad, indiscriminate attacks with limited sophistication. 

Generative AI removes this trade-off, allowing attackers to run tailored campaigns against many targets at once, all with minimal input. In Anthropic's reported case, attackers initially provided instructions on ways to bypass its model safeguards, after which the LLM autonomously generated malicious output and conducted attacks against dozens of organizations. Similarly, Oligo Security's findings document a botnet powered by AI-generated code, first exploiting an AI infrastructure tool called Ray and then extending its activity by mining cryptocurrency and scanning for new targets. 

Traditional defenses, including risk-based prioritization models, may become less effective within this new threat landscape. These models depend upon the assumption that attackers will strategically select targets based upon value and feasibility. Automation collapses the cost of producing custom attacks such that attackers are no longer forced to prioritize. That shift erases one of the few natural advantages defenders had. 

Complicating matters further, defenders must weigh operational impact when making decisions about whether to implement a security fix. In many environments, a mitigation that disrupts legitimate activity poses its own risk and may be deferred, leaving exploitable weaknesses in place. Despite this shift, experts believe AI can also play a crucial role in defense. The future could be tied to automated mitigations capable of assessing risks and applying fixes dynamically, rather than relying on human intervention.

In some cases, AI might decide that restrictions should narrowly apply to certain users; in other cases, it may recommend immediate enforcement across the board. While the attackers have momentum today, cybersecurity experts believe the same automation that today enables large-scale attacks could strengthen defenses if it is deployed strategically.

Cybercriminals Speed Up Tactics as AI-Driven Attacks, Ransomware Alliances, and Rapid Exploitation Reshape Threat Landscape

 

Cybercriminals are rapidly advancing their attack methods, strengthening partnerships, and harnessing artificial intelligence to gain an edge over defenders, according to new threat intelligence. Rapid7’s latest quarterly findings paint a picture of a threat environment that is evolving at high speed, with attackers leaning on fileless ransomware, instant exploitation of vulnerabilities, and AI-enabled phishing operations.

While newly exploited vulnerabilities fell by 21% compared to the previous quarter, threat actors are increasingly turning to long-standing unpatched flaws—some over a decade old. These outdated weaknesses remain potent entry points, reflected in widespread attacks targeting Microsoft SharePoint and Cisco ASA/FTD devices via recently revealed critical bugs.

The report also notes a shrinking window between public disclosure of vulnerabilities and active exploitation, leaving organisations with less time to respond.

"The moment a vulnerability is disclosed, it becomes a bullet in the attacker's arsenal," said Christiaan Beek, Senior Director of Threat Intelligence and Analytics, Rapid7.
"Attackers are no longer waiting. Instead, they're weaponising vulnerabilities in real time and turning every disclosure into an opportunity for exploitation. Organisations must now assume that exploitation begins the moment a vulnerability is made public and act accordingly," said Beek.

The number of active ransomware groups surged from 65 to 88 this quarter. Rapid7’s analysis shows increasing consolidation among these syndicates, with groups pooling infrastructure, blending tactics, and even coordinating public messaging to increase their reach. Prominent operators such as Qilin, SafePay, and WorldLeaks adopted fileless techniques, launched extensive data-leak operations, and introduced affiliate services such as ransom negotiation assistance. Sectors including business services, healthcare, and manufacturing were among the most frequently targeted.

"Ransomware has evolved significantly beyond its early days to become a calculated strategy that destabilises industries," said Raj Samani, Chief Scientist, Rapid7.
"In addition, the groups themselves are operating like shadow corporations. They merge infrastructure, tactics, and PR strategies to project dominance and erode trust faster than ever," said Samani.

Generative AI continues to lower the barrier for cybercriminals, enabling them to automate and scale phishing and malware development. The report points to malware families such as LAMEHUG, which now have advanced adaptive features, allowing them to issue new commands on the fly and evade standard detection tools.

AI is making it easier for inexperienced attackers to craft realistic, large-volume phishing campaigns, creating new obstacles for security teams already struggling to keep pace with modern threats.

State-linked actors from Russia, China, and Iran are also evolving, shifting from straightforward espionage to intricate hybrid operations that blend intelligence collection with disruptive actions. Many of these campaigns focus on infiltrating supply chains and compromising identity systems, employing stealthy tactics to maintain long-term access and avoid detection.

Overall, Rapid7’s quarterly analysis emphasises the urgent need for organisations to modernise their security strategies to counter the speed, coordination, and technological sophistication of today’s attackers.

India Most Targeted by Malware as AI Drives Surge in Ransomware and Phishing Attacks

 

India has become the world’s most-targeted nation for malware, according to the latest report by cybersecurity firm Acronis, which highlights how artificial intelligence is fueling a sharp increase in ransomware and phishing activity. The findings come from the company’s biannual threat landscape analysis, compiled by the Acronis Threat Research Unit (TRU) and its global network of sensors tracking over one million Windows endpoints between January and June 2025. 

The report indicates that India accounted for 12.4 percent of all monitored attacks, placing it ahead of every other nation. Analysts attribute this trend to the rising sophistication of AI-powered cyberattacks, particularly phishing campaigns and impersonation attempts that are increasingly difficult to detect. With Windows systems still dominating business environments compared to macOS or Linux, the operating system remained the primary target for threat actors. 

Ransomware continues to be the most damaging threat to medium and large businesses worldwide, with newer criminal groups adopting AI to automate attacks and enhance efficiency. Phishing was found to be a leading driver of compromise, making up 25 percent of all detected threats and over 52 percent of those aimed at managed service providers, marking a 22 percent increase compared to the first half of 2024. 

Commenting on the findings, Rajesh Chhabra, General Manager for India and South Asia at Acronis, noted that India’s rapidly expanding digital economy has widened its attack surface significantly. He emphasized that as attackers leverage AI to scale operations, Indian enterprises—especially those in manufacturing and infrastructure—must prioritize AI-ready cybersecurity frameworks. He further explained that organizations need to move away from reactive security approaches and embrace behavior-driven models that can anticipate and adapt to evolving threats. 

The report also points to collaboration platforms as a growing entry point for attackers. Phishing attempts on services like Microsoft Teams and Slack spiked dramatically, rising from nine percent to 30.5 percent in the first half of 2025. Similarly, advanced email-based threats such as spoofed messages and payload-less attacks increased from nine percent to 24.5 percent, underscoring the urgent requirement for adaptive defenses. 

Acronis recommends that businesses adopt a multi-layered protection strategy to counter these risks. This includes deploying behavior-based threat detection systems, conducting regular audits of third-party applications, enhancing cloud and email security solutions, and reinforcing employee awareness through continuous training on social engineering and phishing tactics. 

The findings make clear that India’s digital growth is running parallel to escalating cyber risks. As artificial intelligence accelerates the capabilities of malicious actors, enterprises will need to proactively invest in advanced defenses to safeguard critical systems and sensitive data.

AI-Driven Cyberattacks Surge Globally as Stolen Credentials Flood the Dark Web: Fortinet Report

 

Artificial intelligence is accelerating the scale and sophistication of cyberattacks, according to Fortinet’s latest 2025 Global Threat Landscape Report. The cybersecurity firm observed a significant 16.7% rise in automated scanning activity compared to last year, with a staggering 36,000 scans occurring every second worldwide. The report emphasizes that attackers are increasingly "shifting left" — targeting vulnerable digital entry points such as Remote Desktop Protocol (RDP), Internet of Things (IoT) devices, and Session Initiation Protocols (SIP) earlier in the attack cycle.

Infostealer malware remains a major concern, with a dramatic 500% increase in compromised system logs now available online. This translates to over 1.7 billion stolen credentials circulating on the dark web. The report warns, “this flood of stolen data has led to a sharp increase in targeted cyberattacks against businesses and individuals.” Cybercriminals are actively exploiting this data, leading to a 42% jump in credentials listed for sale on underground forums.

Interestingly, zero-day vulnerabilities only make up a minor portion of the current threat landscape. Instead, attackers are leveraging “living off the land” tactics — exploiting built-in system tools and overlooked weaknesses — to stay hidden and avoid detection.

The ransomware ecosystem is also evolving. New groups are emerging while established ones strengthen their presence. In 2024, Ransomhub led the charts, accounting for 13% of ransomware victims. It was followed closely by LockBit 3.0 (12%), Play (8%), and Medusa (4%).

A majority of these ransomware incidents targeted U.S.-based entities, which experienced 61% of the reported cases. The United Kingdom and Canada followed with 6% and 5% respectively, suggesting a disproportionate focus on American organizations.

“Our 2025 Global Threat Landscape Report makes it clear: cybercriminals are scaling faster than ever, using AI and automation to gain the upper hand,” stated Derek Manky, Chief Security Strategist and Global Vice President of Threat Intelligence at FortiGuard Labs.

He added, “Defenders must abandon outdated security playbooks and transition to proactive, intelligence-driven strategies that incorporate AI, zero trust architectures, and continuous threat exposure management.”