Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label AI. Show all posts

Microsoft to end support for Windows 10, 400 million PCs will be impacted


Microsoft is ending software updates for Windows 10

From October 14, Microsoft will end its support for Windows 10, experts believe it will impact around 400 million computers, exposing them to cyber threats. People and groups worldwide are requesting that Microsoft extend its free support. 

According to recent research, 40.8% of desktop users still use Windows 10. This means around 600 million PCs worldwide use Windows 10. Soon, most of them will not receive software updates, security fixes, or technical assistance. 

400 million PCs will be impacted

Experts believe that these 400 million PCs will continue to work even after October 14th because hardware upgrades won’t be possible in such a short duration. 

“When support for Windows 8 ended in January 2016, only 3.7% of Windows users were still using it. Only 2.2% of Windows users were still using Windows 8.1 when support ended in January 2023,” PIRG said. PIGR has also called this move a “looming security disaster.”

What can Windows users do?

The permanent solution is to upgrade to Windows 11. But there are certain hardware requirements when you want to upgrade, and most users will not be able to upgrade as they will have to buy new PCs with compatible hardware. 

But Microsoft has offered few free options for personal users, if you use 1,000 Microsoft Rewards points. Users can also back up their data to the Windows Backup cloud service to get a free upgrade. If this impacts you, you can earn these points via Microsoft services such as Xbox games, store purchases, and Bing searches. But this will take time, and users don’t have it, unfortunately. 

The only viable option for users is to pay $30 (around Rs 2,650) for an Extended Security Updates (ESU) plan, but it will only work for one year.

According to PIGR, “Unless Microsoft changes course, users will face the choice between exposing themselves to cyberattacks or discarding their old computers and buying new ones. The solution is clear: Microsoft must extend free, automatic support.”

Zero-click Exploit AI Flaws to Hack Systems


What if machines, not humans, become the centre of cyber-warfare? Imagine if your device could be hijacked without you opening any link, downloading a file, or knowing the hack happened? This is a real threat called zero-click attacks, a covert and dangerous type of cyber attack that abuses software bugs to hack systems without user interaction. 

The threat

These attacks have used spywares such as Pegasus and AI-driven EchoLeak, and shown their power to attack millions of systems, compromise critical devices, and steal sensitive information. With the surge of AI agents, the risk is high now. The AI-driven streamlining of work and risen productivity has become a lucrative target for exploitation, increasing the scale and attack tactics of breaches.

IBM technology explained how the combination of AI systems and zero-click flaws has reshaped the cybersecurity landscape. “Cybercriminals are increasingly adopting stealthy tactics and prioritizing data theft over encryption and exploiting identities at scale. A surge in phishing emails delivering infostealer malware and credential phishing is fueling this trend—and may be attributed to attackers leveraging AI to scale distribution,” said the IBM report.

A few risks of autonomous AI are highlighted, such as:

  • Threat of prompt injection 
  • Need for an AI firewall
  • Gaps in addressing the challenges due to AI-driven tech

About Zero-click attacks

These attacks do not need user interaction, unlike traditional cyberattacks that relied on social engineering campaigns or phishing attacks. Zero-click attacks exploit flaws in communication or software protocols to gain unauthorized entry into systems.  

Echoleak: An AI-based attack that modifies AI systems to hack sensitive information.

Stagefright: A flaw in Android devices that allows hackers to install malicious code via multimedia messages (MMS), hacking millions of devices.

Pegasus: A spyware that hacks devices through apps such as iMessage and WhatsApp, it conducts surveillance, can gain unauthorized access to sensitive data, and facilitate data theft as well.

How to stay safe?

According to IBM, “Despite the magnitude of these challenges, we found that most organizations still don’t have a cyber crisis plan or playbooks for scenarios that require swift responses.” To stay safe, IBM suggests “quick, decisive action to counteract the faster pace with which threat actors, increasingly aided by AI, conduct attacks, exfiltrate data, and exploit vulnerabilities.”

Microsoft Stops Phishing Scam Which Used Gen-AI Codes to Fool Victims


AI: Boon or Curse?

AI code is in use across sectors for variety of tasks, particularly cybersecurity, and both threat actors and security teams have turned to LLMs for supporting their work. 

Security experts use AI to track and address to threats at scale as hackers are experimenting with AI to make phishing traps, create obfuscated codes, and make spoofed malicious payloads. 

Microsoft Threat Intelligence recently found and stopped a phishing campaign that allegedly used AI-generated code to cover payload within an SVG file. 

About the campaign 

The campaign used a small business email account to send self addressed mails with actual victims coveted in BCC fields, and the attachment looked like a PDF but consisted SVG script content. 

The SVG file consisted hidden elements that made it look like an original business dashboard, while a secretly embedded script changed business words into code that exposed a secret payload. Once opened, the file redirects users to a CAPTCHA gate, a standard social engineering tactical that leads to a scanned sign in page used to steal credentials. 

The hidden process combined business words and formulaic code patterns instead of cryptographic techniques. 

Security Copilot studied the file and listed markers in lines with LLM output. These things made the code look fancy on the surface, however, it made the experts think it was AI generated. 

Combating the threat

The experts used AI powered tools in Microsoft Defender for Office 375 to club together hints that were difficult for hackers to push under the rug. 

The AI tool flagged the rare self-addressed email trend , the unusual SVG file hidden as a PDF, the redirecting to a famous phishing site, the covert code within the file, and the detection tactics deployed on the phishing page. 

The incident was contained, and blocked without much effort, mainly targeting US based organizations, Microsoft, however, said that the attack show how threat actors are aggressively toying with AI to make believable tracks and sophisticated payloads.

AI Turns Personal: Criminals Now Cloning Loved Ones to Steal Money, Warns Police

 



Police forces in the United Kingdom are alerting the public to a surge in online fraud cases, warning that criminals are now exploiting artificial intelligence and deepfake technology to impersonate relatives, friends, and even public figures. The warning, issued by West Mercia Police, stresses upon how technology is being used to deceive people into sharing sensitive information or transferring money.

According to the force’s Economic Crime Unit, criminals are constantly developing new strategies to exploit internet users. With the rapid evolution of AI, scams are becoming more convincing and harder to detect. To help people stay informed, officers have shared a list of common fraud-related terms and explained how each method works.

One of the most alarming developments is the use of AI-generated deepfakes, realistic videos or voice clips that make it appear as if a known person is speaking. These are often used in romance scams, investment frauds, or emotional blackmail schemes to gain a victim’s trust before asking for money.

Another growing threat is keylogging, where fraudsters trick victims into downloading malicious software that secretly records every keystroke. This allows criminals to steal passwords, banking details, and other private information. The software is often installed through fake links or phishing emails that look legitimate.

Account takeover, or ATO, remains one of the most common types of identity theft. Once scammers access an individual’s online account, they can change login credentials, reset security settings, and impersonate the victim to access bank or credit card information.

Police also warned about SIM swapping, a method in which criminals gather personal details from social media or scam calls and use them to convince mobile providers to transfer a victim’s number to a new SIM card. This gives the fraudster control over the victim’s messages and verification codes, making it easier to access online accounts.

Other scams include courier fraud, where offenders pose as police officers or bank representatives and instruct victims to withdraw money or purchase expensive goods. A “courier” then collects the items directly from the victim’s home. In many cases, scammers even ask for bank cards and PIN numbers.

The force’s notice also included reminders about malware and ransomware, malicious programs that can steal or lock files. Criminals may also encourage victims to install legitimate-looking remote access tools such as AnyDesk, allowing them full control of a victim’s device.

Additionally, spoofing — the act of disguising phone numbers, email addresses, or website links to appear genuine, continues to deceive users. Fraudsters often combine spoofing with AI to make fake communication appear even more authentic.

Police advise the public to remain vigilant, verify any unusual requests, and avoid clicking on suspicious links. Anyone seeking more information or help can visit trusted resources such as Action Fraud or Get Safe Online, which provide updates on current scams and guidance on reporting cybercrime.



Gemini in Chrome: Google Can Now Track Your Phone

Gemini in Chrome: Google Can Now Track Your Phone

Is the Gemini browser collecting user data?

A new warning for 2 billion Chrome users, Google has announced that its browser will start collecting “sensitive data” on smartphones. “Starting today, we’re rolling out Gemini in Chrome,” Google said, which will be the “biggest upgrade to Chrome in its history.” The data that can be collected includes the device ID, username, location, search history, and browsing history. 

Agentic AI and browsers

Surfshark investigated the user privacy of AI browsers after Google’s announcement and found that if you use Chrome with Gemini on your smartphone, Google can collect 24 types of data. According to Surfshark, this is bigger than any other agentic AI browsers that have been analyzed. 

For instance, Microsoft’s Edge browser, which has Copilot, only collects half the data compared to Chrome and Gemini. Even Brave, Opera, and Perplexity collect less data. With the Gemini-in-Chrome extension, however, users should be more careful. 

Now that AI is everywhere, a lot of browsers like Firefox, Chrome, and Edge allow users to integrate agentic AI extensions. Although these tools are handy, relying on them can expose your privacy and personal data to third-party companies.

There have been incidents recently where data harvesting resulted from browser extensions, even those downloaded from official stores. 

The new data collection warning comes at the same time as the Gemini upgrade this month, called “Nano Banana.” This new update will also feed on user data. 

According to Android Authority, “Google may be working on bringing Nano Banana, Gemini’s popular image editing tool, to Google Photos. We’ve uncovered a GIF for a new ‘Create’ feature in the Google Photos app, suggesting it’ll use Nano Banana inside the app. It’s unclear when the feature will roll out.”

AI browser concerns

Experts have warned that every photo you upload has a biometric fingerprint which consists of your micro-expressions, unique facial geometry, body proportions, and micro-expressions. The biometric data included device fingerprinting, behavioural biometrics, social network mapping, and GPS coordinates.

Besides this, Apple’s Safari now has anti-fingerprinting technology as the default browsing for iOS 26. However, users should only use their own browser for it to work. For instance, if you use Chrome on an Apple device, it won’t work. Another reason why Apply is advising users to use the Safari browser and not Chrome. 

The Cookie Problem. Should you Accept or Reject?


It is impossible for a user today to surf the internet without cookies, to reject or accept. A pop-up shows in our browser that asks to either “accept all” or “reject all.” In a few cases, a third option allows you to ‘manage preferences’.

The pop-ups can be annoying, and your first reaction is to remove them immediately, and you hit that “accept all” button. But is there anything else you can do?

About cookies

Cookies are small files that are saved by web pages, and they have information for personalizing user experience, particularly for the most visited websites. The cookies may remember your login details, preferred news items, or your shopping preferences based on your browsing history. Cookies also help advertisers target your browsing behaviour via targeted ads. 

Types of cookies

Session cookies: These are for temporary use, like tracking items in your shopping cart. When a browser session is inactive, the cookies are automatically deleted.

Persistent cookies: As the name suggests, these cookies are used for longer periods. For example, saving logging details for accessing emails faster. They can expire from days to years. 

About cookie options

When you are on a website, pop-ups inform you about the “essential cookies” that you can’t opt out of because if you do, you may not be able to use the website's online features, like shopping carts wouldn’t work. But in the settings, you can opt out of “non-essential cookies.”

Three types of non-essential cookies

  1. Functional cookies- Based on browsing experience. (for instance, region or language selection)
  2. Advertising cookies- Third-party cookies, which are used to track user browsing activities. These cookies can be shared with third parties and across domains and platforms that you did not visit.
  3. Analytics cookies- They give details about metrics, such as how visitors use the website

Panama and Vietnam Governments Suffer Cyber Attacks, Data Leaked


Hackers stole government data from organizations in Panama and Vietnam in multiple cyber attacks that surfaced recently.

About the incident

According to Vietnam’s state news outlet, the Cyber Emergency Response Team (VNCERT) confirmed reports of a breach targeting the National Credit Information Center (CIC) that manages credit information for businesses and people, an organization run by the State Bank of Vietnam. 

Personal data leaked

Earlier reports suggested that personal information was exposed due to the attack. VNCERT is now investigating and working with various agencies and Viettel, a state-owned telecom. It said, “Initial verification results show signs of cybercrime attacks and intrusions to steal personal data. The amount of illegally acquired data is still being counted and clarified.”

VNCERT has requested citizens to avoid downloading and sharing stolen data and also threatened legal charges against people who do so.

Who was behind the attack?

The statement has come after threat actors linked to the Shiny Hunters Group and Scattered Spider cybercriminal organization took responsibility for hacking the CIC and stealing around 160 million records. 

Threat actors put up stolen data for sale on the cybercriminal platforms, giving a sneak peek of a sample that included personal information. DataBreaches.net interviewed the hackers, who said they abused a bug in end-of-life software, and didn’t offer a ransom for the stolen information.

CIC told banks that the Shiny Hunters gang was behind the incident, Bloomberg News reported.

The attackers have gained the attention of law enforcement agencies globally for various high-profile attacks in 2025, including various campaigns attacking big enterprises in the insurance, retail, and airline sectors. 

The Finance Ministry of Panama also hit

The Ministry of Economy and Finance in Panam was also hit by a cyber attack, government officials confirmed. “The Ministry of Economy and Finance (MEF) informs the public that today it detected an incident involving malicious software at one of the offices of the Ministry,” they said in a statement. 

The INC ransomware group claimed responsibility for the incident and stole 1.5 terabytes of data, such as emails, budgets, etc., from the ministry.

Hackers Exploit Zero-Day Bug to Install Backdoors and Steal Data


Sitecore bug abused

Threat actors exploited a zero-day bug in legacy Sitecore deployments to install WeepSteel spying malware. 

The bug, tracked as CVE-2025-53690, is a ViewState deserialization flaw caused by the addition of a sample ASP.NET machine key in pre-2017 Sitecore guides. 

A few users reused this key, which allowed hackers who knew about the key to create valid, but infected '_VIEWSTATE' payloads that fooled the server into deserializing and executing them, which led to remote code execution (RCE). 

The vulnerability isn’t a bug in ASP.NET; however, it is a misconfiguration flaw due to the reuse of publicly documented keys that were never intended for production use.

About exploitation

Mandiant experts found the exploit in the wild and said that the threat actors have been exploiting the bug in various multi-stage attacks. Threat actors target the '/sitecore/blocked.Aspx' endpoint, which consists of an unauthorized ViewState field, and get RCE by exploiting CVE-2025-53690. 

The malicious payload threat actors deploy is WeepSteel, a spying backdoor that gets process, system, disk, and network details, hiding its exfiltration as standard ViewState responses. Mendiant experts found the RCE of monitoring commands on compromised systems- tasklist, ipconfig/all, whoami, and netstat-ano. 

Mandiant observed the execution of reconnaissance commands on compromised environments, including whoami, hostname, tasklist, ipconfig /all, and netstat -ano. 

In the next attack stage, the threat actors installed Earthworm (a network tunneling and reverse SOCKS proxy), Dwagent (a remote access tool), and 7-Zip, which is used to make archives of the stolen information. After this, the threat actors increased access privileges by making local administrator accounts ('asp$,' 'sawadmin'), “cached (SAM and SYSTEM hives) credentials dumping, and attempted token impersonating via GoTokenTheft,” Bleeping Computer said. 

Threat actors secured persistence by disabling password expiration, which gave them RDP access and allowed them to register Dwagent as a SYSTEM service. 

“Mandiant recommends following security best practices in ASP.NET, including implementing automated machine key rotation, enabling View State Message Authentication Code (MAC), and encrypting any plaintext secrets within the web.config file,” the company said.

Massive database of 250 million data leaked online for public access


Around a quarter of a billion identity records were left publicly accessible, exposing people located in seven countries- Saudi Arabia, the United Arab Emirates, Canada, Mexico, South Africa, Egypt, and Turkey. 

According to experts from Cybernews, three misconfigured servers, registered in the UAE and Brazil, hosting IP addresses, contained personal information such as “government-level” identity profiles. The leaked data included contact details, dates of birth, ID numbers, and home addresses. 

Cybernews experts who found the leak said the databases seemed to have similarities with the naming conventions and structure, which hinted towards the same source. But they could not identify the actor who was responsible for running the servers. 

“These databases were likely operated by a single party, due to the similar data structures, but there’s no attribution as to who controlled the data, or any hard links proving that these instances belonged to the same party,” they said. 

The leak is particularly concerning for citizens in South Africa, Egypt, and Turkey, as the databases there contained full-spectrum data. 

The leak would have exposed the database to multiple threats, such as phishing campaigns, scams, financial fraud, and abuses.

Currently, the database is not publicly accessible (a good sign). 

This is not the first incident where a massive database holding citizen data (250 million) has been exposed online. Cybernews’ research revealed that the entire Brazilian population might have been impacted by the breach.

Earlier, a misconfigured Elasticsearch instance included the data with details such as sex,  names, dates of birth, and Cadastro de Pessoas Físicas (CPF) numbers. This number is used to identify taxpayers in Brazil. 

Cybercriminals Weaponize AI for Large-Scale Extortion and Ransomware Attacks

 

AI company Anthropic has uncovered alarming evidence that cybercriminals are weaponizing artificial intelligence tools for sophisticated criminal operations. The company's recent investigation revealed three particularly concerning applications of its Claude AI: large-scale extortion campaigns, fraudulent recruitment schemes linked to North Korea, and AI-generated ransomware development. 

Criminal AI applications emerge 

In what Anthropic describes as an "unprecedented" case, hackers utilized Claude to conduct comprehensive reconnaissance across 17 different organizations, systematically gathering usernames and passwords to infiltrate targeted networks.

The AI tool autonomously executed multiple malicious functions, including determining valuable data for exfiltration, calculating ransom demands based on victims' financial capabilities, and crafting threatening language to coerce compliance from targeted companies. 

The investigation also uncovered North Korean operatives employing Claude to create convincing fake personas capable of passing technical coding evaluations during job interviews with major U.S. technology firms. Once successfully hired, these operatives leveraged the AI to fulfill various technical responsibilities on their behalf, potentially gaining access to sensitive corporate systems and information. 

Additionally, Anthropic discovered that individuals with limited technical expertise were using Claude to develop complete ransomware packages, which were subsequently marketed online to other cybercriminals for prices reaching $1,200 per package. 

Defensive AI measures 

Recognizing AI's potential for both offense and defense, ethical security researchers and companies are racing to develop protective applications. XBOW, a prominent player in AI-driven vulnerability discovery, has demonstrated significant success using artificial intelligence to identify software flaws. The company's integration of OpenAI's GPT-5 model resulted in substantial performance improvements, enabling the discovery of "vastly more exploits" than previous methods.

Earlier this year, XBOW's AI-powered systems topped HackerOne's leaderboard for vulnerability identification, highlighting the technology's potential for legitimate security applications. Multiple organizations focused on offensive and defensive strategies are now exploring AI agents to infiltrate corporate networks for defense and intelligence purposes, assisting IT departments in identifying vulnerabilities before malicious actors can exploit them. 

Emerging cybersecurity arms race 

The simultaneous adoption of AI technologies by both cybersecurity defenders and criminal actors has initiated what experts characterize as a new arms race in digital security. This development represents a fundamental shift where AI systems are pitted against each other in an escalating battle between protection and exploitation. 

The race's outcome remains uncertain, but security experts emphasize the critical importance of equipping legitimate defenders with advanced AI tools before they fall into criminal hands. Success in this endeavor could prove instrumental in thwarting the emerging wave of AI-fueled cyberattacks that are becoming increasingly sophisticated and autonomous. 

This evolution marks a significant milestone in cybersecurity, as artificial intelligence transitions from merely advising on attack strategies to actively executing complex criminal operations independently.

Cryptoexchange SwissBorg Suffers $41 Million Theft, Will Reimburse Users


According to SwissBorg, a cryptoexchange platform, $41 million worth of cryptocurrency was stolen from an external wallet used for its SOL earn strategy in a cyberattack that also affected a partner company. The company, which is based in Switzerland, acknowledged the industry reports of the attack but has stressed that the platform was not compromised. 

CEO Cyrus Fazel said that an external finance wallet of a partner was compromised. The incident happened due to hacking of the partner’s API, a process that lets software customers communicate with each other, impacting a single counterparty. It was not a compromise of SwissBorg, the company said on X. 

SwissBorg said that the hack has impacted fewer than 1% of users. “A partner API was compromised, impacting our SOL Earn Program (~193k SOL, <1% of users).  Rest assured, the SwissBorg app remains fully secure and all other funds in Earn programs are 100% safe,” it tweeted. The company said they are looking into the incident with other blockchain security firms. 

All other assets are secure and will compensate for any losses, and user balances in the SwissBorg app are not impacted. SOL Earn redemptions have been stopped as recovery efforts are undergoing. The company has also teamed up with law enforcement agencies to recover the stolen funds. A detailed report will be released after the investigations end. 

The exploit surfaced after a surge in crypto thefts, with more than $2.17 billion already stolen this year. Kiln, the partner company, released its own statement: “SwissBorg and Kiln are investigating an incident that may have involved unauthorized access to a wallet used for staking operations. The incident resulted in Solana funds being improperly removed from the wallet used for staking operations.” 

After the attack, “SwissBorg and Kiln immediately activated an incident response plan, contained the activity, and engaged our security partners,” it said in a blogpost, and that “SwissBorg has paused Solana staking transactions on the platform to ensure no other customers are impacted.”

Fazel posted a video about the incident, informing users that the platform had suffered multiple breaches in the past.

Antrhopic to use your chats with Claude to train its AI


Antrhopic to use your chats with Claude to train its AI

Anthropic announced last week that it will update its terms of service and privacy policy to allow the use of chats for training its AI model “Claude.” Users of all subscription levels- Claude Free, Max, Pro, and Code subscribers- will be impacted by this new update. Anthropic’s new Consumer Terms and Privacy Policy will take effect from September 28, 2025. 

But users who use Claude under licenses such as Work, Team, and Enterprise plans, Claude Education, and Claude Gov will be exempted. Besides this, third-party users who use the Claude API through Google Cloud’s Vertex AI and Amazon Bedrock will also not be affected by the new policy.

If you are a Claude user, you can delay accepting the new policy by choosing ‘not now’, however, after September 28, your user account will be opted in by default to share your chat transcript for training the AI model. 

Why the new policies?

The new policy has come after the genAI boom, thanks to the massive data that has prompted various tech companies to rethink their update policies (although quietly) and update their terms of service. With this, these companies can use your data to train their AI models or give it out to other companies to improve their AI bots. 

"By participating, you’ll help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations. You’ll also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users," Anthropic said.

Concerns around user safety

Earlier this year, in July, Wetransfer, a famous file-sharing platform, fell into controversy when it changed its terms of service agreement, facing immediate backlash from its users and online community. WeTransfer wanted the files uploaded on its platform could be used for improving machine learning models. After the incident, the platform has been trying to fix things by removing “any mention of AI and machine learning from the document,” according to the Indian Express. 

With rising concerns over the use of personal data for training AI models that compromise user privacy, companies are now offering users the option to opt out of data training for AI models.

How cybersecurity debts can damage your organization and finances

How cybersecurity debts can damage your organization and finances

A new term has emerged in the tech industry: “cybersecurity debt.” Similar to technical debt, cybersecurity debt refers to the accumulation of unaddressed security bugs and outdated systems resulting from inadequate investments in cybersecurity services. 

Delaying these expenditures can provide short-term financial gains, but long-term repercussions can be severe, causing greater dangers and exponential costs.

What causes cybersecurity debt?

Cybersecurity debt happens when organizations don’t update their systems frequently, ignoring software patches and neglecting security improvements for short-term financial gains. Slowly, this leads to a backlog of bugs that threat actors can abuse- leading to severe consequences. 

Contrary to financial debt that accumulates predictable interest, cybersecurity debt compounds in uncertain and hazardous ways. Even a single ignored bug can cause a massive data breach, a regulatory fine that can cost millions, or a ransomware attack. 

A 2024 IBM study about data breaches cost revealed that the average data breach cost had increased to $4.9 million, a record high. And even worse, 83% of organizations surveyed had suffered multiple breaches, suggesting that many businesses keep operating with cybersecurity debt. The more an organization avoids addressing problems, the greater the chances of cyber threats.

What can CEOs do?

Short-term gain vs long-term security

CEOs and CFOs are under constant pressure to give strong quarterly profits and increase revenue. As cybersecurity is a “cost center” and non-revenue-generating expenditure, it is sometimes seen as a service where costs can be cut without severe consequences. 

A CEO or CFO may opt for this short-term security gain, failing to address the long-term risks involved with rising cybersecurity debt. In some cases, the consequences are only visible when a business suffers a data breach. 

Philip D. Harris, Research Director, GRC Software & Services, IDC, suggests, “Executive management and the board of directors must support the strategic direction of IT and cybersecurity. Consider implementing cyber-risk quantification to accomplish this goal. When IT and cybersecurity leaders speak to executives and board members, from a financial perspective, it is easier to garner interest and support for investments to reduce cybersecurity debt.”

Limiting cybersecurity debt

CEOs and leaders should consider reassessing the risks. This can be achieved by adopting a comprehensive approach that adds cybersecurity debt into an organization’s wider risk management plans.

Misuse of AI Agents Sparks Alarm Over Vibe Hacking


 

Once considered a means of safeguarding digital battlefields, artificial intelligence has now become a double-edged sword —a tool that can not only arm defenders but also the adversaries it was supposed to deter, giving them both a tactical advantage in the digital fight. According to Anthropic's latest Threat Intelligence Report for August 2025, shown below, this evolving reality has been painted in a starkly harsh light. 

It illustrates how cybercriminals are developing AI as a product of choice, no longer using it to support their attacks, but instead executing them as a central instrument of attack orchestration. As a matter of fact, according to the report, malicious actors are now using advanced artificial intelligence in order to automate phishing campaigns on a large scale, circumvent traditional security measures, and obtain sensitive information very efficiently, with very little human oversight needed. As a result of AI's precision and scalability, the threat landscape is escalating in troubling ways. 

By leveraging AI's accuracy and scalability, modern cyberattacks are being accelerated, reaching, and sophistication. A disturbing evolution of cybercrime is being documented by Anthropologic, as it turns out that artificial intelligence is no longer just used to assist with small tasks such as composing phishing emails or generating malicious code fragments, but is also serving as a force multiplier for lone actors, giving them the capacity to carry out operations at scale and with precision that was once reserved for organized criminal syndicates to accomplish. 

Investigators have been able to track down a sweeping extortion campaign back to a single perpetrator in one particular instance. This perpetrator used Claude Code's execution environment as a means of automating key stages of intrusion, such as reconnaissance, credential theft, and network penetration, to carry out the operation. The individual compromised at least 17 organisations, ranging from government agencies to hospitals to financial institutions, and he has made ransom demands that have sometimes exceeded half a million dollars in some instances. 

It was recently revealed that researchers have conceived of a technique called “vibe hacking” in which coding agents can be used not just as tools but as active participants in attacks, marking a profound shift in both cybercriminal activity’s speed and reach. It is believed by many researchers that the concept of “vibe hacking” has emerged as a major evolution in cyberattacks, as instead of exploiting conventional network vulnerabilities, it focuses on the logic and decision-making processes of artificial intelligence systems. 

In the year 2025, Andrej Karpathy started a research initiative called “vibe coding” - an experiment in artificial intelligence-generated problem-solving. Since then, the concept has been co-opted by cybercriminals to manipulate advanced language models and chatbots for unauthorised access, disruption of operations, or the generation of malicious outputs, originating from a research initiative. 

By using AI, as opposed to traditional hacking, in which technical defences are breached, this method exploits the trust and reasoning capabilities of machine learning itself, making detection especially challenging. Furthermore, the tactic is reshaping social engineering as well: attackers can create convincing phishing emails, mimic human speech, build fraudulent websites, create clones of voices, and automate whole scam campaigns at an unprecedented level using large language models that simulate human conversations with uncanny realism. 

With tools such as artificial intelligence-driven vulnerability scanners and deepfake platforms, the threat is amplified even further, creating a new frontier of automated deception, according to experts. In one notable variant of scamming, known as “vibe scamming,” adversaries can launch large-scale fraud operations in which they generate fake portals, manage stolen credentials, and coordinate follow-up communications all from a single dashboard, which is known as "vibe scamming." 

Vibe hacking is one of the most challenging cybersecurity tasks people face right now because it is a combination of automation, realism, and speed. The attackers are not relying on conventional ransomware tactics anymore; they are instead using artificial intelligence systems like Claude to carry out all aspects of an intrusion, from reconnaissance and credential harvesting to network penetration and data extraction.

A significant difference from earlier AI-assisted attacks was that Claude demonstrated "on-keyboard" capability as well, performing tasks such as scanning VPN endpoints, generating custom malware, and analysing stolen datasets to prioritise the victims with the highest payout potential. As soon as the system was installed, it created tailored ransom notes in HTML, containing the specific financial requirements, workforce statistics, and regulatory threats of each organisation, all based on the data that had been collected. 

The amount of payments requested ranged from $75,000 to $500,000 in Bitcoin, which illustrates that with the assistance of artificial intelligence, one individual could control the entire cybercrime network. Additionally, the report emphasises how artificial intelligence and cryptocurrency have increasingly become intertwined. For example, ransom notes include wallet addresses in ransom notes, and dark web forums are exclusively selling AI-generated malware kits in cryptocurrency. 

An investigation by the FBI has revealed that North Korea is increasingly using artificial intelligence (AI) to evade sanctions, which is used to secure fraudulent positions at Western tech companies by state-backed IT operatives who use it for the fabrication of summaries, passing interviews, debugging software, and managing day-to-day tasks. 

According to officials in the United States, these operations channel hundreds of millions of dollars every year into Pyongyang's technical weapon program, replacing years of training with on-demand artificial intelligence assistance. This reveals a troubling shift: artificial intelligence is not only enabling cybercrime but is also amplifying its speed, scale, and global reach, as evidenced by these revelations. A report published by Anthropological documents how Claude Code has been used not just for breaching systems, but for monetising stolen information at large scales as well. 

As a result of using the software, thousands of records containing sensitive identifiers, financial information, and even medical information were sifted through, and then customised ransom notes and multilayered extortion strategies were generated based on the victim's characteristics. As the company pointed out, so-called "agent AI" tools now provide attackers with both technical expertise and hands-on operational support, which effectively eliminates the need to coordinate teams of human operators, which is an important factor in preventing cyberattackers from taking advantage of these tools. 

Researchers warn that these systems can be dynamically adapted to defensive countermeasures, such as malware detection, in real time, thus making traditional enforcement efforts increasingly difficult. There are a number of cases to illustrate the breadth of abuse that occurs in the workplace, and there is a classifier developed by Anthropic to identify the behaviour. However, a series of case studies indicates this behaviour occurs in a multitude of ways. 

In the North Korean case, Claude was used to fabricate summaries and support fraudulent IT worker schemes. In the U.K., a criminal known as GTG-5004 was selling ransomware variants based on artificial intelligence on darknet forums; Chinese actors utilised artificial intelligence to compromise Vietnamese critical infrastructure; and Russian and Spanish-speaking groups were using the software to create malicious software and steal credit card information. 

In order to facilitate sophisticated fraud campaigns, even low-skilled actors have begun integrating AI into Telegram bots around romance scams as well as false identity services, significantly expanding the number of fraud campaigns available. A new report by Anthropic researchers Alex Moix, Ken Lebedev, and Jacob Klein argues that artificial intelligence, based on the results of their research, is continually lowering the barriers to entry for cybercriminals, enabling fraudsters to create profiles of victims, automate identity theft, and orchestrate operations at a speed and scale that is unimaginable with traditional methods. 

It is a disturbing truth that is highlighted in Anthropic’s report: although artificial intelligence was once hailed as a shield for defenders, it is now increasingly being used as a weapon, putting digital security at risk. Nevertheless, people must not retreat from AI adoption, but instead develop defensive strategies in parallel that are geared toward keeping up with AI adoption. Proactive guardrails must be set up in order to prevent artificial intelligence from being misused, including stricter oversight and transparency by developers, as well as continuous monitoring and real-time detection systems to recognise abnormal AI behaviour before it escalates into a serious problem. 

A company's resilience should go beyond its technical defences, and that means investing in employee training, incident response readiness, and partnerships that enable data sharing across sectors. In addition to this, governments are also under mounting pressure to update their regulatory frameworks in order to keep pace with the evolution of threat actors in terms of policy.

By harnessing artificial intelligence responsibly, people can still make it a powerful ally—automating defensive operations, detecting anomalies, and even predicting threats before they are even visible. In order to ensure that it continues in a manner that favours protection over exploitation, protecting not just individual enterprises, but the overall trust people have in the future of the digital world. 

A significant difference from earlier AI-assisted attacks was that Claude demonstrated "on-keyboard" capability as well, performing tasks such as scanning VPN endpoints, generating custom malware, and analysing stolen datasets in order to prioritise the victims with the highest payout potential. As soon as the system was installed, it created tailored ransom notes in HTML, containing the specific financial requirements, workforce statistics, and regulatory threats of each organisation, all based on the data that had been collected. 

The amount of payments requested ranged from $75,000 to $500,000 in Bitcoin, which illustrates that with the assistance of artificial intelligence, one individual could control the entire cybercrime network. Additionally, the report emphasises how artificial intelligence and cryptocurrency have increasingly become intertwined. 

For example, ransom notes include wallet addresses in ransom notes, and dark web forums are exclusively selling AI-generated malware kits in cryptocurrency. An investigation by the FBI has revealed that North Korea is increasingly using artificial intelligence (AI) to evade sanctions, which is used to secure fraudulent positions at Western tech companies by state-backed IT operatives who use it for the fabrication of summaries, passing interviews, debugging software, and managing day-to-day tasks. 

According to U.S. officials, these operations funnel hundreds of millions of dollars a year into Pyongyang's technical weapons development program, replacing years of training with on-demand AI assistance. All in all, these revelations indicate an alarming trend: artificial intelligence is not simply enabling cybercrime, but amplifying its scale, speed, and global reach. 

According to the report by Anthropic, Claude Code has been weaponised not only to breach systems, but also to monetise stolen data. This particular tool has been used in several instances to sort through thousands of documents containing sensitive information, including identifying information, financial details, and even medical records, before generating customised ransom notes and layering extortion strategies based on each victim's profile. 

The company explained that so-called “agent AI” tools are now providing attackers with both technical expertise and hands-on operational support, effectively eliminating the need for coordinated teams of human operators to perform the same functions. Despite the warnings of researchers, these systems are capable of dynamically adapting to defensive countermeasures like malware detection in real time, making traditional enforcement efforts increasingly difficult, they warned. 

Using a classifier built by Anthropic to identify this type of behaviour, the company has shared technical indicators with trusted partners in an attempt to combat the threat. The breadth of abuse is still evident through a series of case studies: North Korean operatives use Claude to create false summaries and maintain fraud schemes involving IT workers; a UK-based criminal with the name GTG-5004 is selling AI-based ransomware variants on darknet forums. 

Some Chinese actors use artificial intelligence to penetrate Vietnamese critical infrastructure, while Russians and Spanish-speaking groups use Claude to create malware and commit credit card fraud. The use of artificial intelligence in Telegram bots marketed for romance scams or synthetic identity services has even reached the level of low-skilled actors, allowing sophisticated fraud campaigns to become more accessible to the masses. 

A new report by Anthropic researchers Alex Moix, Ken Lebedev, and Jacob Klein argues that artificial intelligence, based on the results of their research, is continually lowering the barriers to entry for cybercriminals, enabling fraudsters to create profiles of victims, automate identity theft, and orchestrate operations at a speed and scale that is unimaginable with traditional methods. In the report published by Anthropic, it appears to be revealed that artificial intelligence is increasingly being used as a weapon to challenge the foundations of digital security, despite being once seen as a shield for defenders. 

There is a solution to this, but it is not in retreating from AI adoption, but by accelerating the parallel development of defensive strategies that are at the same pace as AI adoption. According to experts, proactive guardrails are necessary to ensure that AI deployments are monitored, developers are held more accountable, and there is continuous monitoring and real-time detection systems available that can be used to identify abnormal AI behaviour before it becomes a serious problemOrganisationss must not only focus on technical defences; they must also invest in employee training, incident response readiness, and partnerships that facilitate intelligence sharing between sectors as well.

Governments are also under increasing pressure to update regulatory frameworks to keep pace with the evolving threat actors, in order to ensure that policy is updated at the same pace as they evolve. By harnessing artificial intelligence responsibly, people can still make it a powerful ally—automating defensive operations, detecting anomalies, and even predicting threats before they are even visible. In order to ensure that it continues in a manner that favours protection over exploitation, protecting not just individual enterprises, but the overall trust people have in the future of the digital world.

PromptLock: the new AI-powered ransomware and what to do about it

 



Security researchers recently identified a piece of malware named PromptLock that uses a local artificial intelligence model to help create and run harmful code on infected machines. The finding comes from ESET researchers and has been reported by multiple security outlets; investigators say PromptLock can scan files, copy or steal selected data, and encrypt user files, with code for destructive deletion present but not active in analysed samples. 


What does “AI-powered” mean here?

Instead of a human writing every malicious script in advance, PromptLock stores fixed text prompts on the victim machine and feeds them to a locally running language model. That model then generates small programs, written in the lightweight Lua language, which the malware executes immediately. Researchers report the tool uses a locally accessible open-weight model called gpt-oss:20b through the Ollama API to produce those scripts. Because the AI runs on the infected computer rather than contacting a remote service, the activity can be harder to spot. 


How the malware works

According to the technical analysis, PromptLock is written in Go, produces cross-platform Lua scripts that work on Windows, macOS and Linux, and uses a SPECK 128-bit encryption routine to lock files in flagged samples. The malware’s prompts include a Bitcoin address that investigators linked to an address associated with the pseudonymous Bitcoin creator known as Satoshi Nakamoto. Early variants have been uploaded to public analysis sites, and ESET treats this discovery as a proof of concept rather than evidence of widespread live attacks. 


Why this matters

Two features make this approach worrying for defenders. First, generated scripts vary each time, which reduces the effectiveness of signature or behaviour rules that rely on consistent patterns. Second, a local model produces no network traces to cloud providers, so defenders lose one common source of detection and takedown. Together, these traits could make automated malware harder to detect and classify. 

Practical, plain steps to protect yourself:

1. Do not run files or installers you do not trust.

2. Keep current, tested backups offline or on immutable storage.

3. Maintain up-to-date operating system and antivirus software.

4. Avoid running untrusted local AI models or services on critical machines, and restrict access to local model APIs.

These steps will reduce the risk from this specific technique and from ransomware in general. 


Bottom line

PromptLock is a clear signal that attackers are experimenting with local AI to automate malicious tasks. At present it appears to be a work in progress and not an active campaign, but the researchers stress vigilance and standard defensive practices while security teams continue monitoring developments. 



Experts discover first-ever AI-powered ransomware called "PromptLock"

Experts discover first-ever AI-powered ransomware called "PromptLock"

A ransomware attack is an organization’s worst nightmare. Not only does it harm the confidentiality of the organizations and their customers, but it also drains money and causes damage to the reputation. Defenders have been trying to address this serious threat, but threat actors keep developing new tactics to launch attacks. To make things worse, we have a new AI-powered ransomware strain. 

First AI ransomware

Cybersecurity experts have found the first-ever AI-powered ransomware strain. Experts Peter Strycek and Anton Cherepanov from ESET found the strain and have termed it “PromptLock.” "During infection, the AI autonomously decides which files to search, copy, or encrypt — marking a potential turning point in how cybercriminals operate," ESET said.

The malware has not been spotted in any cyberattack as of yet, experts say. Promptlock appears to be in development and is poised for launch. 

Although cyber criminals used GenAI tools to create malware in the past, PromptLock is the first ransomware case that is based on an AI model. According to Cherepanov’s LinkedIn post, Promptlock exploits the gpt-oss:20b model from OpenAI through the Ollama API to make new scripts.

About PromptLock

Cherepanov’s LinkedIn post highlighted that the ransomware script can exfiltrate files and encrypt data, but may destroy files in the future. He said that “while multiple indicators suggest that the sample is a proof-of-concept (PoC) or a work-in-progress rather than an operational threat in the wild, we believe it is crucial to raise awareness within the cybersecurity community about such emerging risks.

AI and ransomware threat

According to Dark Reading’s conversation with ESET experts, the AI-based ransomware is a serious threat to security teams. Strycek and Cherepanov are trying to find out more about PromptLock, but they want to warn the security teams immediately about the ransomware. 

ESET on X noted that "the PromptLock ransomware is written in #Golang, and we have identified both Windows and Linux variants uploaded to VirusTotal."

Threat actors have started using AI tools to launch phishing campaigns by creating fake content and malicious websites, thanks to the rapid adoption across the industry. However, AI-powered ransomware will be a worse challenge for cybersecurity defenders.

CISOs fear material losses amid rising cyberattacks


Chief information security officers (CISOs) are worried about the dangers of a cyberattack, and there is an anxiety due to the material losses of data that organizations have suffered in the past year.

According to a report by Proofpoint, the majority of CISOs fear a material cyberattack in the next 12 months. These concerns highlight the increasing risks and cultural shifts among CISOs.

Changing roles of CISOs

“76% of CISOs anticipate a material cyberattack in the next year, with human risk and GenAI-driven data loss topping their concerns,” Proofpoint said. In this situation, corporate stakeholders are trying to get a better understanding of the risks involved when it comes to tech and whether they are safe or not. 

Experts believe that CISOs are being more open about these attacks, thanks to SEC disclosure rules, strict regulations, board expectations, and enquiries. The report surveyed 1,600 CISOs worldwide; all the organizations had more than 1000 employees. 

Doing business is a concern

The study highlights a rising concern about doing business amid incidents of cyberattacks. Although the majority of CISOs are confident about their cybersecurity culture, six out of 10 CISOs said their organizations are not prepared for a cyberattack. The majority of the CISOs were found in favour of paying ransoms to avoid the leak of sensitive data.

AI: Saviour or danger?

AI has risen both as a top concern as well as a top priority for CISOs. Two-thirds of CISOs believe that enabling GenAI tools is a top priority over the next two years, despite the ongoing risks. In the US, however, 80% CISOs worry about possible data breaches through GenAI platforms. 

With adoption rates rising, organizations have started to move from restriction to governance. “Most are responding with guardrails: 67% have implemented usage guidelines, and 68% are exploring AI-powered defenses, though enthusiasm has cooled from 87% last year. More than half (59%) restrict employee use of GenAI tools altogether,” Proofpoint said.

Malicous npm package exploit crypto wallets


Experts have found a malicious npm package that consists of stealthy features to deploy malicious code into pc apps targeting crypto wallets such as Exodus and Atomic. 

About the package

Termed as “nodejs-smtp,” the package imitates the genuine email library nodemailer with the same README descriptions, page styling, and tagline, bringing around 347 downloads since it was uploaded to the npm registry earlier this year by a user “nikotimon.” 

It is not available anymore. Socket experts Krill Boychenko said, "On import, the package uses Electron tooling to unpack Atomic Wallet's app.asar, replace a vendor bundle with a malicious payload, repackage the application, and remove traces by deleting its working directory.”

What is the CIS build kit?

The aim is to overwrite the recipient address with hard-coded wallets handled by a cybercriminal. The package delivers by working as an SMTP-based mailer while trying to escape developers’ attention. 

This has surfaced after ReversingLabs found an npm package called "pdf-to-office" that got the same results by releasing the “app.asar” archives linked to Exodus and Atomic wallets and changing the JavaScript file inside them to launch the clipper function. 

According to Boychenko, “this campaign shows how a routine import on a developer workstation can quietly modify a separate desktop application and persist across reboots. He also said that “by using import time execution and Electron packaging, a lookalike mailer becomes a wallet drainer that alters Atomic and Exodus on compromised Windows systems."

What next?

The campaign has exposed how a routine import on a developer's pc can silently change a different desktop application and stay alive in reboots. By exploiting the import time execution and Electron packaging, an identical mailer turns into a wallet drainer. Security teams should be careful of incoming wallet drainers deployed through package registries. 

Beware of SIM swapping attacks, your phone is at risk


In today’s digital world, most of our digital life is connected to our phone numbers, so keeping them safe becomes a necessity. Sad news: hackers don’t need your phone to access your number. 

What is SIM swapping?

Also known as SIMjacking, SIM swapping is a tactic where a cybercriminal convinces your ISP to port your phone number to their own SIM card. This results in the user losing access to their phone number and service provider, while the cybercriminal gains full access. 

To convince the ISP of a SIM swap, the threat actor has to know about you. They can get the information from data breaches available on the dark web. You might also get tricked by a phishing scam and end up giving your info, or the threat actor may harvest your social media in case you have public information. 

Once the information is received, the threat actor calls the customer support, requesting to move your number to a new SIM card. In most cases, your carrier doesn’t need much convincing. 

Threats concerning SIM swapping

An attacker with your phone number can impersonate you to friends and family, and extort money. Your phone security is also at risk, as most online services ask for your phone number for account recovery. 

SIM swapping is dangerous as SMS based two-factor-authentication is still in use. Many services require us to activate 2FA on our accounts, and sometimes through SMS. 

You can also check your carrier’s website to see if there’s any option to deactivate SIM change requests. This way, you can secure your phone number. 

But when this isn’t available with your carrier, look out for the option to enable a PIN or secret phrase. A few companies allow users to set these, and call you back to confirm about your account.

How to stay safe from SIM swapping?

Avoid using 2FA; use passkeys.

Use a SIM PIN for your phone to lock your SIM card.

Researchers Expose AI Prompt Injection Attack Hidden in Images

 

Researchers have unveiled a new type of cyberattack that can steal sensitive user data by embedding hidden prompts inside images processed by AI platforms. These malicious instructions remain invisible to the human eye but become detectable once the images are downscaled using common resampling techniques before being sent to a large language model (LLM).

The technique, designed by Trail of Bits experts Kikimora Morozova and Suha Sabi Hussain, builds on earlier research from a 2020 USENIX paper by TU Braunschweig, which first proposed the concept of image-scaling attacks in machine learning systems.

Typically, when users upload pictures into AI tools, the images are automatically reduced in quality for efficiency and cost optimization. Depending on the resampling method—such as nearest neighbor, bilinear, or bicubic interpolation—aliasing artifacts can emerge, unintentionally revealing hidden patterns if the source image was crafted with this purpose in mind.

In one demonstration by Trail of Bits, carefully engineered dark areas within a malicious image shifted colors when processed through bicubic downscaling. This transformation exposed black text that the AI system interpreted as additional user instructions. While everything appeared normal to the end user, the model silently executed these hidden commands, potentially leaking data or performing harmful tasks.

In practice, the team showed how this vulnerability could be exploited in Gemini CLI, where hidden prompts enabled the extraction of Google Calendar data to an external email address. With Zapier MCP configured to trust=True, the tool calls were automatically approved without requiring user consent.

The researchers emphasized that the success of such attacks depends on tailoring the malicious image to the specific downscaling algorithm used by each AI system. Their testing confirmed the method’s effectiveness against:

  1. Google Gemini CLI
  2. Vertex AI Studio (Gemini backend)
  3. Gemini’s web interface
  4. Gemini API via llm CLI
  5. Google Assistant on Android
  6. Genspark

Given the broad scope of this vulnerability, the team developed Anamorpher, an open-source tool (currently in beta) that can generate attack-ready images aligned with multiple downscaling methods.

To defend against this threat, Trail of Bits recommends that AI platforms enforce image dimension limits, provide a preview of the downscaled output before submission to an LLM, and require explicit user approval for sensitive tool calls—especially if text is detected in images.

"The strongest defense, however, is to implement secure design patterns and systematic defenses that mitigate impactful prompt injection beyond multi-modal prompt injection," the researchers said, pointing to their earlier paper on robust LLM design strategies.