
AI-Powered Malware ‘LameHug’ Attacks Windows PCs via ZIP Files

 

Cybersecurity researchers have identified a new and alarming arrival in the world of online threats: "LameHug". This malicious program stands out because it uses artificial intelligence, specifically large language models (LLMs) such as those built by Alibaba. 

Unlike classic viruses, LameHug can generate its own instructions and commands, making it a more adaptive adversary that is potentially harder to detect. Its primary goal is to infiltrate Windows PCs and then exfiltrate valuable data surreptitiously. 

The malicious program typically begins its infiltration camouflaged as ordinary-looking ZIP files. These files are frequently sent via fraudulent emails that seem to come from legitimate government sources. When a user opens the seemingly innocent archive, the hidden executable and Python files inside begin to work. The malware then collects information about the affected Windows PC. 

Following this first reconnaissance, LameHug actively looks for text documents and PDF files stored in popular computer directories before discreetly transferring the obtained data to a remote web server. Its ability to employ AI to write its own commands makes it exceptionally cunning in its actions. 

LameHug was discovered by Ukraine's national computer emergency response team (CERT-UA). Its investigation points to the Russian cyber group APT28 as the most likely source of this advanced threat. The malware is written in Python and calls Hugging Face's programming interfaces, which in turn serve Alibaba Cloud's Qwen 2.5-Coder-32B-Instruct language model, demonstrating the complex technological foundation of this new digital weapon. 
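
For readers curious about the mechanics, the short Python sketch below shows the general pattern of querying a hosted model through Hugging Face's inference API, the kind of interface the researchers describe. It is a hedged, generic illustration rather than code from LameHug: the model name is the one CERT-UA cited, the token is a placeholder, and the generated text is only printed, never executed.

# Illustrative sketch only: the generic pattern of asking a hosted LLM for text
# through Hugging Face's inference API from Python. This is NOT LameHug's code;
# the token is a placeholder and the model's reply is only displayed.
from huggingface_hub import InferenceClient

client = InferenceClient(
    model="Qwen/Qwen2.5-Coder-32B-Instruct",
    token="hf_xxx",  # placeholder; a real API token would go here
)

response = client.chat_completion(
    messages=[{"role": "user", "content": "Write a one-line Windows command that lists the files in the current folder."}],
    max_tokens=64,
)

print(response.choices[0].message.content)  # the script only prints the reply

Because the command generation happens on a legitimate cloud AI service rather than being hard-coded into the binary, the malware can adapt its behavior on the fly, which is exactly what makes it hard for pattern-based tools to pin down.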

LameHug marks the first observed instance of malicious software using artificial intelligence to produce its own executable commands. That capability poses a significant challenge to existing security software, which is typically built to recognize known attack patterns. Together with reports of other emerging malware, such as "Skynet," that may evade AI-based detection, this development highlights the ongoing and intensifying arms race in the digital sphere.

AI-Driven Phishing Threats Loom After Massive Data Breach at Major Betting Platforms

 

A significant data breach impacting as many as 800,000 users from two leading online betting platforms has heightened fears over sophisticated phishing risks and the growing role of artificial intelligence in exploiting compromised personal data.

The breach, confirmed by Flutter Entertainment, the parent company behind Paddy Power and Betfair, exposed users’ IP addresses, email addresses, and activity linked to their gambling profiles.

While no payment or password information was leaked, cybersecurity experts warn that the stolen details could still enable highly targeted attacks. Flutter, which also owns brands like Sky Bet and Tombola, referred to the event as a “data incident” that has been contained. The company informed affected customers that there is “nothing you need to do in response to this incident,” but still advised them to stay alert.

With the affected platforms averaging 4.2 million monthly users across the UK and Ireland, even partial exposure poses a serious risk.

Harley Morlet, chief marketing officer at Storm Guidance, emphasized: “With the advent of AI, I think it would actually be very easy to build out a large-scale automated attack. Basically, focusing on crafting messages that look appealing to those gamblers.”

Similarly, Tim Rawlins, director and senior adviser at the NCC Group, urged users to remain cautious: “You might re-enter your credit card number, you might re-enter your bank account details, those are the sort of things people need to be on the lookout for and be conscious of that sort of threat. If it's too good to be true, it probably is a fraudster who's coming after your money.”

Rawlins also noted that AI technology is making phishing emails increasingly convincing, particularly in spear-phishing campaigns where stolen data is leveraged to mimic genuine communications.

Experts caution that relying solely on free antivirus tools or standard Android antivirus apps offers limited protection. While these can block known malware, they are less effective against deceptive emails that trick users into voluntarily revealing sensitive information.

A stronger defense involves layered security: maintaining skepticism, exercising caution, and following strict cyber hygiene habits to minimize exposure.

Germany’s Warmwind May Be the First True AI Operating System — But It’s Not What You Expect

 



Artificial intelligence is starting to change how we interact with computers. Since advanced chatbots like ChatGPT gained popularity, the idea of AI systems that can understand natural language and perform tasks for us has been gaining ground. Many have imagined a future where we simply tell our computer what to do, and it just gets done, like the assistants we’ve seen in science fiction movies.

Tech giants like OpenAI, Google, and Apple have already taken early steps. AI tools can now understand voice commands, control some apps, and even help automate tasks. But while these efforts are still in progress, the first real AI operating system appears to be coming from a small German company called Jena, not from Silicon Valley.

Their product is called Warmwind, and it’s currently in beta testing. Though it’s not widely available yet, over 12,000 people have already joined the waitlist to try it.


What exactly is Warmwind?

Warmwind is an AI-powered system designed to work like a “digital employee.” Instead of being a voice assistant or chatbot, Warmwind watches how users perform digital tasks like filling out forms, creating reports, or managing software, and then learns to do those tasks itself. Once trained, it can carry out the same work over and over again without any help.

Unlike traditional operating systems, Warmwind doesn’t run on your computer. It operates remotely through cloud servers based in Germany, following the strict privacy rules under the EU’s GDPR. You access it through your browser, but the system keeps running even if you close the window.

The AI behaves much like a person using a computer. It clicks buttons, types, navigates through screens, and reads information — all without needing special APIs or coding integrations. In short, it automates your digital tasks the same way a human would, but much faster and without tiring.
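
Jena has not published Warmwind's internals, but the basic building block, driving a graphical interface the way a person would, can be sketched with an off-the-shelf automation library. The snippet below is a minimal illustration using the pyautogui Python package, with made-up coordinates and filenames; it is not Warmwind's actual code.

# Conceptual sketch of GUI-level automation: clicking, typing, and capturing
# the screen with the pyautogui library. Coordinates and filenames are
# placeholders for illustration only.
import pyautogui

pyautogui.FAILSAFE = True            # slam the mouse into a corner to abort

pyautogui.click(x=200, y=300)        # "press a button" at a screen coordinate
pyautogui.write("Quarterly report", interval=0.05)  # type like a person would
pyautogui.press("tab")               # move to the next form field

pyautogui.screenshot("screen_state.png")  # capture the screen for the next step

A system like the one described above would pair this kind of low-level control with a model that decides what to click or type next based on what it currently sees on screen.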

Warmwind is mainly aimed at businesses that want to reduce time spent on repetitive computer work. While it’s not the futuristic AI companion from the movies, it’s a step in that direction, making software more hands-free and automated.

Technically, Warmwind runs on a customized version of Linux built specifically for automation. It uses remote streaming technology to show you the user interface while the AI works in the background.

Jena, the company behind Warmwind, says calling it an “AI operating system” is symbolic. The name helps people grasp the concept quickly: it is an operating system not for people, but for digital AI workers.

While it’s still early days for AI OS platforms, Warmwind might be showing us what the future of work could look like, where computers no longer wait for instructions but get things done on their own.

Can AI Be Trusted With Sensitive Business Data?

 



As artificial intelligence becomes more common in businesses, from retail to finance to technology, it is helping teams make faster decisions. But behind these smart predictions is a growing problem: how do you make sure employees see only what they are allowed to, especially when AI mixes information from many different places?

Take this example: A retail company’s AI tool predicts upcoming sales trends. To do this, it uses both public market data and private customer records. The output looks clean and useful, but what if that forecast is shown to someone who isn’t supposed to access sensitive customer details? That’s where access control becomes tricky.


Why Traditional Access Rules Don’t Work for AI

In older systems, access control was straightforward. Each person had certain permissions: developers accessed code, managers viewed reports, and so on. But AI changes the game. These systems pull data from multiple sources (internal files, external APIs, sensor feeds) and combine everything to create insights. That means even if a person only has permission for public data, they might end up seeing results that are based, in part, on private or restricted information.


Why It Matters

Security Concerns: If sensitive data ends up in the wrong hands, even indirectly, it can lead to data leaks. A 2025 study showed that over two-thirds of companies had AI-related security issues due to weak access controls.

Legal Risks: Privacy laws like the GDPR require clear separation of data. If a prediction includes restricted inputs and is shown to the wrong person, companies can face heavy fines.

Trust Issues: When employees or clients feel their data isn’t safe, they lose trust in the system, and the business.


What’s Making This So Difficult?

1. AI systems often blend data so deeply that it’s hard to tell what came from where.

2. Access rules are usually fixed, but AI relies on fast-changing data.

3. Companies have many users with different roles and permissions, making enforcement complicated.

4. Permissions are often too broad; for example, someone allowed to "view reports" might accidentally access sensitive content.


How Can Businesses Fix This?

• Track Data Origins: Label data as "public" or "restricted" and monitor where it ends up.

• Flexible Access Rules: Adjust permissions based on user roles and context.

• Filter Outputs: Build AI to hide or mask parts of its response that come from private sources (see the sketch after this list).

• Separate Models: Train different AI models for different user groups, each with its own safe data.

• Monitor Usage: Keep logs of who accessed what, and use alerts to catch suspicious activity.
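
To make the first and third ideas concrete, here is a minimal Python sketch under simplified, assumed conditions: each data field carries an origin label, and the assembled answer is masked against the caller's role before it is returned. The role names and data are hypothetical, and a real deployment would enforce this inside the retrieval and serving layers rather than in application code.

# Minimal illustration: provenance-labelled fields plus role-based output
# filtering. All names and values here are hypothetical.
from dataclasses import dataclass

@dataclass
class Field:
    value: str
    origin: str  # "public" or "restricted"

FORECAST = [
    Field("Q3 demand expected to rise 8% in the northern region", "public"),
    Field("Largest customer reorders roughly every 21 days", "restricted"),
]

CLEARED_ROLES = {"analyst_private", "admin"}  # hypothetical role names

def render_for(role: str, fields: list) -> str:
    # Mask any field from a restricted source if the role lacks clearance.
    lines = []
    for f in fields:
        if f.origin == "restricted" and role not in CLEARED_ROLES:
            lines.append("[withheld: restricted source]")
        else:
            lines.append(f.value)
    return "\n".join(lines)

print(render_for("analyst_public", FORECAST))   # restricted line is masked
print(render_for("analyst_private", FORECAST))  # full forecast is shown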


As AI tools grow more advanced and rely on live data from many sources, managing access will only get harder. Businesses must modernize their security strategies to protect sensitive information without slowing down innovation.

Is Your Bank Login at Risk? How Chatbots May Be Guiding Users to Phishing Scams

 


Cybersecurity researchers have uncovered a troubling risk tied to how popular AI chatbots answer basic questions. When asked where to log in to well-known websites, some of these tools may unintentionally direct users to the wrong places, putting their private information at risk.

Phishing is one of the oldest and most dangerous tricks in the cybercrime world. It usually involves fake websites that look almost identical to real ones. People often get an email or message that appears to be from a trusted company, like a bank or online store. These messages contain links that lead to scam pages. If you enter your username and password on one of these fake sites, the scammer gets full access to your account.

Now, a team from the cybersecurity company Netcraft has found that even large language models, or LLMs, like the ones behind some popular AI chatbots, may be helping scammers without meaning to. In their study, they tested how accurately an AI chatbot could provide login links for 50 well-known companies across industries such as finance, retail, technology, and utilities.

The results were surprising. The chatbot gave the correct web address only 66% of the time. In about 29% of cases, the links led to inactive or suspended pages. In 5% of cases, they sent users to a completely different website that had nothing to do with the original question.

So how does this help scammers? Cybercriminals can purchase these unclaimed or inactive domain names, the incorrect ones suggested by the AI, and turn them into realistic phishing pages. If people click on them, thinking they’re going to the right site, they may unknowingly hand over sensitive information like their bank login or credit card details.

In one example observed by Netcraft, an AI-powered search tool redirected users who asked about a U.S. bank login to a fake copy of the bank’s website. The real link was shown further down the results, increasing the risk of someone clicking on the wrong one.

Experts also noted that smaller companies, such as regional banks and mid-sized fintech platforms, were more likely to be affected than global giants like Apple or Google. These smaller businesses may not have the same resources to secure their digital presence or respond quickly when problems arise.

The researchers explained that this problem doesn't mean the AI tools are malicious. However, these models generate answers based on patterns, not verified sources, and that can lead to outdated or incorrect responses.

The report serves as a strong reminder: AI is powerful, but it is not perfect. Until improvements are made, users should avoid relying on AI-generated links for sensitive tasks. When in doubt, type the website address directly into your browser or use a trusted bookmark.
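
The bookmark advice can even be automated in a small way. The sketch below, which uses hypothetical domain names, simply checks whether the hostname of a suggested link is on a short allowlist the user maintains themselves; anything not on the list is treated as untrusted.

# Minimal sketch: trust your own allowlist, not a generated link.
# The domains below are placeholders, not any real bank's addresses.
from urllib.parse import urlparse

TRUSTED_LOGIN_HOSTS = {
    "www.example-bank.com",
    "accounts.example-shop.com",
}

def looks_trustworthy(suggested_url: str) -> bool:
    host = (urlparse(suggested_url).hostname or "").lower()
    return host in TRUSTED_LOGIN_HOSTS

print(looks_trustworthy("https://www.example-bank.com/login"))      # True
print(looks_trustworthy("https://example-bank.login-secure.top/"))  # False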

Amid Federal Crackdown, Microsoft Warns Against Rising North Korean Jobs Scams


North Korean hackers are infiltrating high-profile US-based tech firms through employment scams, and experts say their tactics have recently grown more advanced. Following a recent investigation, Microsoft has urged its peers to enforce stronger pre-employment verification measures and to adopt policies that block unauthorized IT management tools. 

Further investigation by the US government revealed that these actors were working to steal money for the North Korean government and use the funds to run its government operations and its weapons program.  

US imposes sanctions against North Korea

The US has imposed strict sanctions on North Korea that bar US companies from hiring North Korean nationals. This has pushed threat actors to create fake identities and use all kinds of tricks (such as VPNs) to obscure who and where they really are, so that they can avoid detection and get hired more easily. 

Recently, the threat actors have started using spoofing tactics such as voice-changing tools and AI-generated documents to appear credible. In one incident, the scammers worked through an individual residing in New Jersey, who set up shell companies to fool victims into believing they were paying a legitimate local business. The same individual also helped overseas partners get recruited. 

DoJ arrests accused

That campaign has now come to an end: the US Department of Justice (DoJ) arrested a US national, Zhenxing “Danny” Wang, and charged him with operating a years-long scam that earned over $5 million. The agency also arrested eight more people, six Chinese and two Taiwanese nationals. The arrested individuals are charged with money laundering, identity theft, hacking, sanctions violations, and conspiring to commit wire fraud.

Beyond the pay from these jobs, which Microsoft says can be hefty, these individuals also gain access to private organizational data. They exploit this access by stealing sensitive information and blackmailing the company.

Lazarus group behind such scams

One of the largest and most infamous hacking gangs worldwide is the North Korean state-sponsored group Lazarus. According to experts, the gang has extorted billions of dollars for the North Korean government through similar scams, as part of a broader campaign known as “Operation DreamJob”. 

"To disrupt this activity and protect our customers, we’ve suspended 3,000 known Microsoft consumer accounts (Outlook/Hotmail) created by North Korean IT workers," said Microsoft.

How Ransomware Has Impacted Cyber Insurance Assessment Approach


Cyber insurance and ransomware

The surge in ransomware campaigns has compelled cyber insurers to rethink their security measures. Ransomware attacks have been a threat for many years, but it was only recently that threat actors realized the significant financial benefits they could reap from such attacks. The rise of ransomware-as-a-service (RaaS) and double extortion tactics has changed the threat landscape, as organizations continue to fall victim and suffer data leaks that end up publicly accessible. 

According to a 2024 threat report by Cisco, "Ransomware remains a prevalent threat as it directly monetizes attacks by holding data or systems hostage for ransom. Its high profitability, coupled with the increasing availability of ransomware-as-a-service platforms, allows even less skilled attackers to launch campaigns."

Changing insurance landscape due to ransomware

Cyber insurance helps businesses address such threats by offering services such as ransom negotiation, ransom reimbursement, and incident response. That support, however, comes at a price: 2020 and 2021 saw a surge in insurance premiums. A session at the Black Hat USA conference in Las Vegas will discuss how ransomware has reshaped businesses’ partnerships with insurers, and how it has upended the insurers’ own business model.

At the start of the 21st century, insurance firms required companies to buy a security audit to earn a 25% policy discount; underwriting back then was a hands-on affair. The 2000s gave way to the data breach era, although breaches were then less frequent and mostly targeted the hospitality and retail sectors. 

This led insurers to drop in-depth security audits and instead use questionnaires to measure risk. Then the ransomware wave hit in 2019, and insurers began paying out more in claims than they were taking in, a sign that the business model was inadequate.

Questionnaires tend to be tricky for businesses to fill out; whether multifactor authentication (MFA) is fully in place, for instance, can be a surprisingly complicated question to answer. Alongside questionnaires, insurers have also started using scans to assess risk. 

Incentives to promote security measures

Threats have risen, but so have assessments. Coverage incentives such as vanishing retention mean that if policyholders follow prescribed security measures, their retention disappears. Security awareness training and patching vulnerabilities are other measures that can help reduce costs, and scan-based assessments can also factor into premium pricing, which is currently lower. 

Cybercriminals Target AI Enthusiasts with Fake Websites to Spread Malware

 


Cyber attackers are now using people’s growing interest in artificial intelligence (AI) to distribute harmful software. A recent investigation has uncovered that cybercriminals are building fake websites designed to appear at the top of Google search results for popular AI tools. These deceptive sites are part of a strategy known as SEO poisoning, where attackers manipulate search engine algorithms to increase the visibility of malicious web pages.

Once users click on these links, believing they’re accessing legitimate AI platforms, they’re silently redirected to dangerous websites where malware is secretly downloaded onto their systems. The websites use layers of code and redirection to hide the true intent from users and security software.

According to researchers, the malware being delivered includes infostealers, a type of software that quietly gathers personal and system data from a user’s device. These can include saved passwords, browser activity, system information, and more. One type of malware even installs browser extensions designed to steal cryptocurrency.

What makes these attacks harder to detect is the attackers' use of trusted platforms. For example, the malicious code is sometimes hosted on widely used cloud services, making it seem like normal website content. This helps the attackers avoid detection by antivirus tools and security analysts.

The way these attacks work is fairly complex. When someone visits one of the fake AI websites, their browser is immediately triggered to run hidden JavaScript. This script gathers information about the visitor’s browser, encrypts it, and sends it to a server controlled by the attacker. Based on this information, the server then redirects the user to a second website. That second site checks details like the visitor’s IP address and location to decide whether to proceed with delivering the final malicious file.

This final step often results in the download of harmful software that invades the victim’s system and begins stealing data or installing other malicious tools.
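
From the defender's side, one simple way to see where a suspicious link really leads is to fetch it from an isolated analysis environment and log every hop. The sketch below uses the Python requests library against a placeholder URL; note that it only follows HTTP redirects and would miss the JavaScript-driven hops described above, which require a full browser sandbox.

# Defensive sketch: trace the HTTP redirect chain of a suspect URL from a safe
# analysis environment. The URL is a placeholder; JavaScript-based redirects
# are not captured by this approach.
import requests

def trace_redirects(url: str) -> list:
    resp = requests.get(url, timeout=10, allow_redirects=True)
    return [r.url for r in resp.history] + [resp.url]

for hop in trace_redirects("https://example.com/"):
    print(hop)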

These attacks are part of a growing trend in which the popularity of new technologies, such as AI chatbots, is being exploited by cybercriminals for fraudulent purposes. Similar tactics have been observed in the past, including misleading users with fake tools and promoting harmful applications through hijacked social media pages.

As AI tools become more common, users should remain alert while searching for or downloading anything related to them. Even websites that appear high in search engine results can be dangerous if not verified properly.

To stay safe, avoid clicking on unfamiliar links, especially when looking for AI services. Always download tools from official sources, and double-check website URLs. Staying cautious and informed is one of the best ways to avoid falling victim to these evolving online threats.