
Emerging Threat Uses Windows Tools to Facilitate Banking Credential Theft


A Windows-based banking trojan dubbed Coyote is an alarming illustration of how financial cybercrime is evolving: it is the first malware strain observed abusing the Microsoft UI Automation (UIA) framework to stealthily extract sensitive user data. First documented by Kaspersky in 2024 and aimed primarily at Brazilian users, Coyote can log keystrokes, capture screenshots, and display deceptive overlays on banking login pages designed to trick users into handing their information to the malware. 


A security researcher at Akamai reports that the latest variant abuses the legitimate Microsoft UIA component, which is designed to expose desktop UI elements to assistive technologies, to retrieve credentials tied to 75 financial institutions and cryptocurrency platforms, with initial delivery via phishing. This novel abuse of an accessibility framework shows how far threat actors will go to sidestep traditional security controls and compromise digital financial ecosystems. 

Coyote first surfaced in the Latin American threat landscape in February 2024 and has been a persistent and damaging presence across the region since. The banking trojan originally stole financial information from unsuspecting users through conventional methods such as keylogging and phishing overlays. 

Although classified as a banking trojan, its distribution relies on the popular Squirrel installer, which also inspired its name: coyotes prey on squirrels. More recently, Coyote has targeted Brazilian businesses, deploying an information-stealing remote access trojan (RAT) inside their networks. 

Once the malware was discovered, researchers quickly gained critical insight into its behaviour. Fortinet released a comprehensive technical report in January 2025 detailing Coyote's attack chain, including how it propagates and how it infiltrates systems. Coyote's evolution from conventional credential theft to abuse of a legitimate accessibility framework reflects a broader trend in modern malware development: native system utilities retooled for covert surveillance and data theft. 

Coyote is a clear example of how a regionally focused threat can, through innovation and stealth, escalate into a globally significant risk. Its attack methodology has evolved markedly since the malware first appeared in 2024, raising fresh concerns among cybersecurity professionals. 

Akamai researchers have tracked Coyote closely since December 2024 and found that earlier versions relied mainly on keylogging and phishing overlays to steal login credentials for 75 targeted banking and cryptocurrency websites. Those methods, however, only worked when victims accessed financial applications outside a traditional web browser, so browser-based sessions largely remained safe. 

In contrast, Coyote's newest version, which appeared earlier this year, is markedly more sophisticated. Using Microsoft's UI Automation (UIA) framework, it can now detect and analyse banking and crypto exchange websites open directly inside a browser. This enhancement lets the malware identify financial activity more accurately and extract sensitive information even from previously out-of-reach sessions, significantly widening its scope and impact. 

Coyote activates with stealth and precision as soon as the infected program, typically delivered via the widely used Squirrel installer, is executed on a victim's system. Once installed, it runs silently in the background, gathering basic system details and continuously monitoring active programs and windows. A primary objective is to detect interactions with banking services or cryptocurrency platforms.

If Coyote detects such activity, it uses UIA to programmatically read the content displayed on screen, bypassing traditional input-based detection mechanisms. It can also extract web addresses directly from browser tabs or the address bar and cross-reference them against a predefined list of targeted financial institutions and crypto exchanges, further elevating its threat profile. A rough illustration of the kind of information UIA exposes appears below. 
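
To make the technique concrete, here is a minimal, benign sketch in Python of the kind of information the UI Automation framework exposes. It assumes the third-party pywinauto package (which wraps the Windows UI Automation API) and a Chromium-based browser whose address bar is surfaced as an "Edit" control; the watched keywords are illustrative placeholders, and nothing here reproduces Coyote's actual code.

from pywinauto import Desktop  # third-party; wraps Microsoft UI Automation

WATCHED_KEYWORDS = ["bank", "exchange"]  # illustrative placeholders only

def scan_browser_windows():
    desktop = Desktop(backend="uia")
    for win in desktop.windows():
        title = win.window_text()
        if not title:
            continue
        # Step 1: the window title alone often reveals which site is open
        if any(k in title.lower() for k in WATCHED_KEYWORDS):
            print("title match:", title)
        # Step 2: deeper UIA inspection can reach the address bar contents
        try:
            edits = win.descendants(control_type="Edit")
        except Exception:
            continue  # some windows refuse UIA inspection
        for edit in edits:
            try:
                value = edit.get_value()    # ValuePattern, where the control exposes it (assumption)
            except Exception:
                value = edit.window_text()  # fall back to the element's name
            if value and value.startswith("http"):
                print("address bar URL:", value)

if __name__ == "__main__":
    scan_browser_windows()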

Upon finding a match, the malware initiates a credential-harvesting operation aimed at capturing login and wallet information. For now, Coyote remains geographically focused on Brazilian users, targeting institutions such as Banco do Brasil and Santander alongside global platforms like Binance. 

This regional concentration is unlikely to last, as threat actors often launch campaigns in limited geographies to test them before expanding to a broader audience. The latest Coyote versions combine technical refinement with operational stealth in a way that sets them apart from typical financial trojans.

Notably, Coyote uses the UI Automation framework to inspect application window content directly, stealing sensitive information without depending on visible URLs or browser titles. Rather than relying solely on traditional keylogging or phishing overlays, this variant performs UI-level reconnaissance, letting it identify and engage targeted Brazilian banking and cryptocurrency platforms with remarkable subtlety. Its ability to operate offline makes it even more evasive. 

It can gather and scan data without contacting its command-and-control (C2) server. To initiate an attack sequence, the malware first profiles the infected system, collecting the device name, operating system version, and user credentials. It then scans the titles of active windows for well-known financial platforms. 

If no direct match is found, Coyote escalates by parsing visual user interface elements through the UIA interface, pulling out critical data such as URLs and tab labels. Once a target is detected, it applies an array of credential-harvesting techniques, including token interception and direct capture of usernames and passwords.

Although the current campaign remains focused on Brazil, Coyote's ability to operate undetected at the user-interface layer using native Windows APIs makes it a serious and scalable threat to businesses worldwide. Its offline functionality, small network footprint, and ability to evade standard security solutions are a potent reminder that legitimate system tools can be repurposed to quietly undermine digital defences in an increasingly complex cybersecurity landscape. 

Cybersecurity is evolving rapidly, and the dynamic between threat actors and defenders has become a high-stakes contest in which innovation can shift the balance quickly. The Coyote case underscores that even seemingly harmless system components, such as Microsoft's UI Automation (UIA) framework, can be exploited for malicious ends. 

Although UIA was created to enhance accessibility and usability, its abuse by advanced malware highlights the inherent risk in trusted native tools. By documenting Coyote's inner workings and methods, security researchers aim to help defenders detect, mitigate, and respond to such stealthy intrusions more effectively. 

The exploitation of UIA is not a one-off tactic; it signals a shift in adversarial strategy toward invisibility and manipulation of the system itself. Organisations must strengthen their security posture by watching how legitimate technologies can be repurposed for cybercrime and by staying vigilant against threats that blur the line between utility and vulnerability. 

The advent of Coyote marks a turning point in the evolution of cyber threats and underscores the growing abuse of legitimate system tools. By turning the UI Automation framework, an accessibility feature built to support users with disabilities, into a theft mechanism, Coyote shows how trusted functionality can be repurposed to silently infiltrate systems and steal information. 

Its operations, currently focused on Brazilian financial institutions and crypto exchanges, reflect an emerging trend toward stealth-driven, regionally targeted malware campaigns. This evolution is a call to action for defenders: traditional security tools built around network-based detection or signature matching may not be equal to threats that operate entirely within the user-interface layer and require no command-and-control communication. 

Organisations therefore need more nuanced defences, such as behavioural monitoring, heuristic analysis, and visibility into native API usage; a rough triage sketch follows below. Maintaining strict control over software distribution channels, including Squirrel-based installers, also helps contain early-stage infections. With its silent, system-native approach, Coyote reflects a shift in the threat landscape away from overt, disruptive attacks toward covert, credential-stealing surveillance. 
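
As one concrete, hedged example of gaining visibility into native API usage: the sketch below, assuming the third-party psutil package on a Windows host, lists processes that currently have the UI Automation core library (UIAutomationCore.dll) mapped into memory. Many legitimate programs (screen readers, UI test tools) will appear, so this is a triage starting point rather than a detection verdict.

import psutil  # third-party; cross-platform process inspection

TARGET_DLL = "uiautomationcore.dll"

def processes_using_uia():
    hits = []
    for proc in psutil.process_iter(["pid", "name"]):
        try:
            for mapping in proc.memory_maps():
                if TARGET_DLL in mapping.path.lower():
                    hits.append((proc.info["pid"], proc.info["name"]))
                    break
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue  # skip processes we are not allowed to inspect
    return hits

if __name__ == "__main__":
    for pid, name in processes_using_uia():
        print(f"{pid:>6}  {name}")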

Coyote uses low-noise techniques, often in long-term campaigns, to evade detection while maximising data exfiltration. That sophistication underlines the urgent need for adaptive cybersecurity frameworks. Its exploitation of UIA is also likely to herald broader abuse of accessibility features, which have traditionally been overlooked in security planning and may become a major concern.

As threat actors continue to refine their approaches, companies must stay vigilant, rethink what constitutes an attack surface, and detect threats as early as possible. Coyote demands both stronger tooling and a deeper appreciation of how quickly helpful technology can become a security liability when misused.

New Coyote Malware Variant Exploits Windows Accessibility Tool for Data Theft

 




A recently observed version of the banking malware known as Coyote has begun using a lesser-known Windows feature, originally designed to help users with disabilities, to gather sensitive information from infected systems. This marks the first confirmed use of Microsoft’s UI Automation (UIA) framework by malware for this purpose in real-world attacks.

The UI Automation framework is part of Windows’ accessibility system. It allows assistive tools, such as screen readers, to interact with software by analyzing and controlling user interface (UI) elements, like buttons, text boxes, and navigation bars. Unfortunately, this same capability is now being turned into a tool for cybercrime.


What is the malware doing?

According to recent findings from cybersecurity researchers, this new Coyote variant targets online banking and cryptocurrency exchange platforms by monitoring user activity on the infected device. When a person accesses a banking or crypto website through a browser, the malware scans the visible elements of the application’s interface using UIA. It checks things like the tab names and address bar to figure out which website is open.

If the malware recognizes a target website based on a preset list of 75 financial services, it continues tracking activity. This list includes major banks and crypto platforms, with a focus on Brazilian users.

If the browser window title doesn’t give away the website, the malware digs deeper. It uses UIA to scan through nested elements in the browser, such as open tabs or address bars, to extract URLs. These URLs are then compared to its list of targets. While current evidence shows this technique is being used mainly for tracking, researchers have also demonstrated that it could be used to steal login credentials in the future.
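
The two-step logic described above can be illustrated with a short, benign Python sketch. It assumes the third-party pywinauto package (which wraps the Windows UI Automation API) and a Chromium-based browser that exposes its tabs as UIA "TabItem" elements; the target keywords are placeholders taken from the article's examples, not Coyote's real list.

from pywinauto import Desktop  # third-party; wraps Microsoft UI Automation

TARGETS = ["banco do brasil", "santander", "binance"]  # illustrative keywords only

def browser_tab_titles(title_fragment="Chrome"):
    """Return the tab captions of the first window whose title contains title_fragment."""
    for win in Desktop(backend="uia").windows():
        if title_fragment.lower() in win.window_text().lower():
            try:
                # TabItem elements carry the page titles of the open browser tabs
                return [tab.window_text() for tab in win.descendants(control_type="TabItem")]
            except Exception:
                return []  # the window refused UIA inspection
    return []

if __name__ == "__main__":
    for title in browser_tab_titles():
        if any(t in title.lower() for t in TARGETS):
            print("tab matches a monitored keyword:", title)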


Why is this alarming?

This form of cyberattack bypasses many traditional security tools, such as antivirus programs and endpoint detection systems, making it harder to spot. The concern grows when you consider that accessibility tools are supposed to help people with disabilities, not become a pathway for cybercriminals.

The potential abuse of accessibility features is not limited to Windows. On Android, similar tactics have long been used by malicious apps, prompting developers to build stricter safeguards. Experts believe it may now be time for Microsoft to take similar steps to limit misuse of its accessibility systems.

While no official comment has been made regarding new protections, the discovery highlights how tools built for good can be misused if not properly secured. For now, the best defense remains caution, on the part of both users and the developers of operating systems and applications.



Ditch Passwords, Use Passkeys to Secure Your Account


Ditch passwords, use passkeys

Microsoft and Google users in particular have been urged to ditch passwords in favour of passkeys. Passwords are easy to steal and can unlock someone's entire digital life. Microsoft has been at the forefront, confirming it will delete passwords for more than a billion users, and Google has likewise warned that most of its users will need to add passkeys to their accounts. 

What are passkeys?

Instead of a username and password, passkeys use the device's own security, such as a fingerprint, face unlock, or PIN, to sign in to an account. There is no password to steal and no two-factor authentication code to bypass, which makes passkeys phishing-resistant.

At the same time, the Okta team warned that it had found threat actors exploiting v0, an advanced GenAI tool made by Vercel, to create phishing websites that mimic real sign-in pages.

Okta warns users not to rely on passwords

A video shows how this works, raising concerns about users still using passwords to sign into their accounts, even when backed by multi-factor authentication, and “especially if that 2FA is nothing better than SMS, which is now little better than nothing at all,” according to Forbes. 

According to Okta, “This signals a new evolution in the weaponization of GenAI by threat actors who have demonstrated an ability to generate a functional phishing site from simple text prompts. The technology is being used to build replicas of the legitimate sign-in pages of multiple brands, including an Okta customer.”

Why are passwords not safe?

It is shocking how easily a login page can be mimicked. Users should not be surprised that today's cybercriminals are exploiting and weaponizing GenAI features to advance and streamline their phishing attacks. AI in the wrong hands can have massive repercussions for the cybersecurity industry.

According to Forbes, “Gone are the days of clumsy imagery and texts and fake sign-in pages that can be detected in an instant. These latest attacks need a technical solution.”

Users are advised to add passkeys to their accounts where available and to stop signing in with passwords. Those who must keep using passwords should make them long and unique, and avoid relying on SMS-based two-factor authentication as a backup. 

Microsoft Defender for Office 365 Will Now Block Email Bombing Attacks




Microsoft Defender for Office 365, a cloud-based email security suite, will now automatically detect and stop email-bombing attacks, the company said. Previously known as Office 365 Advanced Threat Protection (Office 365 ATP), Defender for Office 365 protects organisations, including those in high-risk sectors facing advanced threat actors, from malicious emails, links, and collaboration-tool content. 

"We're introducing a new detection capability in Microsoft Defender for Office 365 to help protect your organization from a growing threat known as email bombing," Redmond said in a Microsoft 365 message center update. These attacks flood mailboxes with emails to hide important messages and crash systems. The latest ‘Mail Bombing’ identification will spot and block such attempts, increasing visibility for real threats. 

About the new feature

The feature began rolling out in June 2025, is enabled by default, and requires no manual configuration. Mail Bombing detection automatically routes suspicious messages to the Junk folder, and the new detection is surfaced to security analysts and admins in Threat Explorer, Advanced Hunting, the Email entity page, and the Email summary panel. 

About email bombing attacks

In mail-bombing campaigns, attackers flood victims' mailboxes with high volumes of messages, typically by subscribing them to large numbers of junk newsletters or by using cybercrime services that can send thousands or tens of thousands of messages within minutes. The goal is to overwhelm mailboxes and email protections as part of social-engineering schemes that pave the way for ransomware and data-stealing malware. Such attacks have been observed for over a year and are favoured by ransomware gangs. 

Mode of operation

The Black Basta gang was among the first to use email bombing to flood victims' mailboxes. The attackers would then follow up posing as IT support to lure victims into granting remote access to their devices via AnyDesk or the built-in Windows Quick Assist tool. 

After gaining access, the threat actors install malicious tools and malware that help them move laterally through corporate networks before deploying ransomware payloads.

Microsoft Entra ID Faces Surge in Coordinated Credential-Based Attacks

An extensive account takeover (ATO) campaign targeting Microsoft Entra ID has been identified by cybersecurity experts, exploiting a powerful open-source penetration testing framework known as TeamFiltration. 

First detected in December 2024, the campaign has accelerated rapidly, compromising more than 80,000 user accounts across numerous cloud environments. Threat intelligence firm Proofpoint tracks the operation under the codename UNK_SneakyStrike and describes it as a sophisticated, stealthy effort to breach enterprise cloud infrastructure. 

UNK_SneakyStrike stands out for its distinctive operational pattern: activity unfolds in waves within a single cloud environment, often targeting a broad spectrum of users. Aggressive bursts of login attempts are typically followed by quiet periods of four to five days, a tactic that helps the attackers avoid triggering traditional detection mechanisms while keeping sustained pressure on organisations' defences. 

Several technical indicators point to the attackers using TeamFiltration, an open-source penetration testing framework first presented at the DEF CON security conference in 2022. Originally built for security testing and red-team engagements in enterprises, TeamFiltration is now being used by malicious actors to automate large-scale user enumeration, password spraying, and stealthy data exfiltration. 

The tool was designed to simulate real-world account takeover scenarios against Microsoft Entra ID (formerly Azure Active Directory). Its most dangerous features include integration with the Microsoft Teams API and the use of Amazon Web Services (AWS) cloud infrastructure to rotate source IP addresses dynamically. 

This rotation lets attackers slip past geofencing and rate-limiting defences while making attribution and traffic filtering significantly harder. The framework can also backdoor OneDrive accounts, giving attackers prolonged, covert access to compromised systems without triggering immediate alarms. 

Together, these features make TeamFiltration well suited to long-term intrusion campaigns, helping attackers maintain persistence in targeted networks and siphon sensitive data over extended periods. By analysing a series of distinctive digital fingerprints uncovered during forensic analysis, Proofpoint was able to tie the malicious activity to the TeamFiltration framework and the threat actor it dubs UNK_SneakyStrike. 

Those fingerprints included a rarely observed user-agent string, hardcoded OAuth client identifiers, and an outdated snapshot of the Secureworks FOCI project embedded in the tool's backend. These technical artefacts allowed researchers to trace the campaign's origin and the misuse of the tool with a high degree of confidence. 

Deeper investigation showed the attackers using Amazon Web Services (AWS) infrastructure spanning multiple international regions to conceal their real location and sidestep geo-based blocking. In a particularly stealthy manoeuvre, the threat actors interacted with the Microsoft Teams API through a "sacrificial" Microsoft Office 365 Business Basic account to conduct covert account enumeration. 

This tactic let them verify which Entra ID accounts existed without triggering security alerts, silently building a map of valid targets. Network telemetry showed that most malicious traffic originated in the United States (42%), with significant additional activity traced to Ireland (11%) and the United Kingdom (8%). The global spread of attack sources made attribution more complex and time-consuming, hampering efficient response. 

In a detailed advisory issued in response to the campaign, Proofpoint urged organisations, particularly those relying on Microsoft Entra ID for cloud identity management and remote access, to apply immediate mitigations. Its recommendations include creating detection rules that flag the TeamFiltration-specific user-agent string, blocking the IP addresses listed in the published indicators of compromise (IOCs), and enforcing multi-factor authentication (MFA) uniformly across all user roles. 

Organisations are also advised to follow OAuth 2.0 security standards and implement granular conditional access policies within Entra ID environments to limit exposure. Microsoft has not issued an official security bulletin for this specific threat, though reports indicate multiple instances of unauthorised access involving enterprise accounts. The incident is a reminder of the risk posed by dual-use red-teaming tools such as TeamFiltration; a rough log-triage sketch follows below. 
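
As a rough illustration of how those IOCs might be applied, the Python sketch below scans a CSV export of Entra ID sign-in logs for flagged user agents and IP addresses. It is not Proofpoint's or Microsoft's tooling: the file name, column names, and IOC values are hypothetical placeholders to be replaced with the published indicators.

import csv

IOC_USER_AGENTS = {"<TeamFiltration user-agent string from the published IOCs>"}  # placeholder
IOC_IPS = {"203.0.113.10", "198.51.100.25"}  # placeholders from documentation ranges

def flag_signins(path="signin_logs.csv"):
    # Assumes columns named userAgent, ipAddress and userPrincipalName (hypothetical)
    with open(path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            ua = row.get("userAgent", "")
            ip = row.get("ipAddress", "")
            if ua in IOC_USER_AGENTS or ip in IOC_IPS:
                print("review:", row.get("userPrincipalName"), ip, ua)

if __name__ == "__main__":
    flag_signins()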

Such frameworks are undoubtedly designed for legitimate security assessments, but their public availability continues to raise concerns, as it makes it easier for threat actors to turn them to their advantage and blurs the line between offensive research and actual attack tooling. 

Although the attackers abused Amazon Web Services (AWS) infrastructure during the campaign, AWS reiterated its commitment to the responsible and lawful use of its cloud platform, noting that all customers are required to comply with applicable laws and with the platform's terms of service. 

An AWS spokesperson explained that the company maintains a clearly defined policy framework against misuse of its infrastructure. When it receives credible reports of a potential violation, it opens an internal investigation and takes appropriate action, such as disabling access to offending content. AWS added that it actively supports and values the global community of security researchers. 

Tracked as UNK_SneakyStrike, the campaign is a highly orchestrated, large-scale operation built on user enumeration and password spraying. Proofpoint researchers note that the access attempts arrive in intense, short-lived bursts that flood cloud environments with credential-based login requests, followed by quiet periods of four to five days, an intentional rhythm that frustrates continuous detection and prolongs the campaign while keeping the threat actors evasive; a simple way to surface this tempo from logs is sketched below. 
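
That burst-and-pause tempo lends itself to simple temporal analysis. The sketch below, in plain Python, groups sign-in attempt timestamps into bursts and reports unusually large ones; the window and threshold are illustrative values rather than tuned detection settings, and the timestamps are assumed to be ISO-8601 strings exported from whatever log source is available.

from datetime import datetime, timedelta

BURST_WINDOW = timedelta(minutes=30)  # events closer together than this belong to one burst
BURST_MIN_EVENTS = 100                # only report bursts at least this large

def find_bursts(timestamps):
    """Group sign-in timestamps into bursts and return (start, end, count) for large ones."""
    events = sorted(datetime.fromisoformat(t) for t in timestamps)
    bursts, current = [], []
    for ts in events:
        if current and ts - current[-1] > BURST_WINDOW:
            bursts.append(current)
            current = []
        current.append(ts)
    if current:
        bursts.append(current)
    return [(b[0], b[-1], len(b)) for b in bursts if len(b) >= BURST_MIN_EVENTS]

if __name__ == "__main__":
    # Synthetic example data only
    sample = [f"2024-12-01T10:{m:02d}:00" for m in range(59)] * 3
    for start, end, count in find_bursts(sample):
        print(f"burst of {count} attempts from {start} to {end}")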

A key concern is the precision of the targeting. According to Proofpoint, the attackers attempt to access nearly every user account in smaller cloud tenants while selectively targeting specific users in larger enterprise environments. 

That calculated approach mirrors TeamFiltration's built-in filtering capabilities, which let attackers prioritise the highest-value accounts while avoiding the excessive probing that invites detection. The situation underscores one of the major challenges facing the cybersecurity community today: tools built to help defenders simulate real-world attacks are increasingly being turned against the organisations they were meant to protect. 

By weaponizing such tools, threat actors can infiltrate cloud infrastructure, extract sensitive data, establish long-term access, and bypass conventional security controls. The campaign is a reminder that dual-purpose cybersecurity technologies, though essential for improving organisational resilience, can pose a persistent and evolving threat when misappropriated. 

As UNK_SneakyStrike demonstrates, the modern threat landscape keeps growing in scale and sophistication, making a proactive, intelligence-driven approach to cloud security imperative. Cloud-native organisations must go beyond reactive measures and invest in continuous threat monitoring, behavioural analytics, and threat-hunting capabilities tailored to their environments. 

Security strategies must adapt to the dynamic nature of cloud infrastructure and the growing threat of identity-based attacks; traditional perimeter defences and static access controls are no longer sufficient. Enterprise defenders need to routinely audit identity and access management policies, verify that integrated third-party applications are secure, and review logs for anomalies indicative of low-and-slow intrusion patterns. 

Building a resilient ecosystem that can withstand emerging threats requires collaboration among cloud service providers, vendors, and enterprise security teams. The cybersecurity community must also keep debating how dual-purpose security tools should be distributed and governed, so that innovation intended to strengthen defences is not turned into a weapon that compromises them. 

Dealing with advanced threats requires agility, visibility, and collaboration. Organisations may be more exposed to attack than ever, but with holistic security hygiene and a zero-trust architecture they can minimise exposure, contain intrusions quickly, and maintain business continuity in the face of increasingly coordinated, deceptive campaigns.

Microsoft's Latest AI Model Outperforms Current Weather Forecasting

 

Microsoft has created an artificial intelligence (AI) model that outperforms current forecasting methods in tracking air quality, weather patterns, and climate-affected tropical storms, according to studies published last week.

The new model, known as Aurora, produced 10-day weather forecasts and predicted hurricane tracks more accurately and quickly than traditional methods, and at lower cost, according to researchers who published their findings in the journal Nature. 

"For the first time, an AI system can outperform all operational centers for hurricane forecasting," noted senior author Paris Perdikaris, an associate professor of mechanical engineering at the University of Pennsylvania.

Aurora, trained solely on historical data, forecast every hurricane in 2023 more accurately than operational centres such as the US National Hurricane Center. Traditional weather prediction models are built on fundamental physics principles, the conservation of mass, momentum, and energy, and therefore demand significant computing power; the study found Aurora's computing costs to be several hundred times lower. 

The results follow the Pangu-Weather AI model unveiled by Chinese tech giant Huawei in 2023 and might mark a paradigm shift in how the world's leading meteorological agencies forecast weather and the potentially deadly extreme events driven by global warming. According to its creators, Aurora is the first AI model to consistently outperform seven forecasting centres in predicting the five-day tracks of deadly storms. 

Aurora's simulation, for example, correctly predicted four days in advance where and when Doksuri, the most expensive typhoon ever recorded in the Pacific, would reach the Philippines. Official forecasts at the time, in 2023, showed it moving north of Taiwan. 

Microsoft's AI model also surpassed the European Centre for Medium-Range Weather Forecasts (ECMWF) model in 92% of 10-day worldwide forecasts, on a scale of about 10 square kilometres (3.86 square miles). The ECMWF, which provides forecasts for 35 European countries, is regarded as the global standard for meteorological accuracy.

In December, Google announced that its GenCast model had exceeded the European centre's accuracy in more than 97 percent of the 1,320 climate disasters recorded in 2019. Weather authorities are watching these promising performances closely, though all remain experimental and based on observed phenomena.

London Startup Allegedly Deceived Microsoft with Fake AI Engineers

 


Serious allegations of fraud have emerged against London-based startup Builder.ai, once hailed as a disruptor of software development and valued at $1.5 billion, and now in bankruptcy. The company claimed that its artificial-intelligence platform would revolutionise app development and that, with the help of its AI assistant Natasha, building software would be as easy as ordering pizza. 

Recent revelations, however, point to a starkly different reality: instead of deploying cutting-edge AI technology, Builder.ai reportedly relied on hundreds of human developers in India who manually fulfilled customer requests that were then presented as AI-generated output.

On the strength of these elaborate misrepresentations, investors including Microsoft and the Qatar Investment Authority put more than $445 million into the company, convinced they were backing a scalable, AI-based solution. The scandal has sparked a wider conversation about transparency, ethics, and the hype-driven nature of the startup ecosystem, and has raised serious concerns about due diligence in AI investment. 

Builder.ai was founded in 2016 by entrepreneur Sachin Dev Duggal under the name Engineer.ai, with a mission to revolutionise app development. The company marketed its AI-powered, no-code platform as dramatically simplifying software creation by minimising the amount of hand-written code required. 

Builder.ai quickly captured the attention of investors worldwide, securing significant funding from high-profile backers including Microsoft, the Qatar Investment Authority, the International Finance Corporation (IFC), and SoftBank's DeepCore. 

Central to its value proposition was its proprietary artificial intelligence assistant, Natasha, presented as a breakthrough capable of building custom software with minimal human intervention. On the strength of that compelling narrative, the startup secured more than $450 million in funding and achieved unicorn status with a peak valuation of $1.5 billion. 

In its early years, Builder.ai was widely seen as a pioneering force, reducing reliance on traditional engineering teams and democratising software development. Beneath the slick marketing and investor confidence, however, lay a very different operational model, one that relied heavily on human engineers rather than advanced artificial intelligence. 

Builder.ai's public image unravelled as the gap between its promotional promises and its internal practices became clear. Once regarded as a rising star of the global tech industry, the company faced mounting scrutiny and a dramatic collapse that has exposed troubling undercurrents in the AI startup sector.

From the outset, Builder.ai was marketed as a groundbreaking platform for creating custom applications, promising automation, scale, and cost savings. Natasha, its flagship artificial intelligence assistant, was widely advertised as enabling no-code software development. Yet internal testimony, lawsuits, and investigative findings have since painted a far more troubling picture. 

Despite claims of sophisticated artificial intelligence, Natasha reportedly served as little more than an interface for collecting client requirements, while the actual development work was done by large engineering teams in India. According to whistleblowers, including former executives, Builder.ai had no genuine AI infrastructure in place. 

Internal documentation reportedly showed applications being marketed as “80% built by AI” when the underlying tooling was rudimentary at best. A former executive, Robert Holdheim, filed a $5 million lawsuit alleging he was wrongfully terminated after raising concerns about deceptive practices and investor misrepresentation. His case catalysed broader scrutiny and allegations of both financial misconduct and technological misrepresentation. 

After founder Sachin Dev Duggal stepped down as chief executive in early 2025, Manpreet Ratia took over as CEO and initially made progress stabilising operations. Under Ratia's leadership, an independent financial audit revealed massive discrepancies between reported and actual revenue. 

Builder.ai had claimed more than $220 million in revenue for 2024, while the true figure was closer to $50 million. Viola Credit, the company's lender, promptly seized $37 million from its accounts, alarming creditors and investors alike. In a last-ditch press release, Builder.ai acknowledged it could no longer sustain payroll or its global operations, with only $5 million remaining in restricted funds. 

The statement conceded that the company had been unable to recover from past decisions and historic challenges. Bankruptcy filings followed in quick succession across multiple jurisdictions, including India, the United Kingdom, and the United States, resulting in the layoff of over 1,000 employees and the suspension of numerous client projects. 

The controversy deepened with new allegations of revenue round-tripping with Indian technology company VerSe, reportedly a strategy to inflate financial performance and attract new investors. Reports also revealed that Builder.ai had defaulted on substantial cloud bills, owing roughly $85 million to Amazon and $30 million to Microsoft for unpaid cloud services. 

These developments have triggered a federal investigation, with authorities requesting access to the company's finances and client contracts. The Builder.ai scandal also points to a broader issue in the tech sector: "AI washing", in which startups exaggerate or misstate their artificial intelligence capabilities to win funding and market traction. 

Phil Brunkard, Principal Analyst at Info-Tech Research Group, summarised the crisis succinctly: "Many of these so-called AI companies scaled based on narrative rather than infrastructure." As regulatory bodies tighten scrutiny of AI marketing claims, Builder.ai is increasingly viewed as a cautionary tale for investors, entrepreneurs, and the wider technology industry. 

Concerns about the legitimacy of Builder.ai's AI capabilities date back to a 2019 report in The Wall Street Journal that questioned how heavily the company relied on human labour rather than artificial intelligence. Despite a marketing narrative built around automation and machine learning, its internal operations reportedly told a different story. 

The article quoted former employees describing Builder.ai as primarily an engineering operation rather than an AI-driven platform, a statement that starkly contradicted the company's claim to be an AI-first, no-code platform. Many investors and stakeholders ignored these early warnings, even though they hinted at deeper structural inconsistencies in the startup's operations. 

When Manpreet Ratia took over as CEO in February 2025, succeeding founder Sachin Dev Duggal, the full extent of the company's internal dysfunction became clear. Tasked with restoring investor confidence and operational transparency, Ratia quickly discovered that figures had been misreported and data manipulated for years to inflate the company's valuation and public image. 

Following these revelations, U.S. federal prosecutors opened an investigation into the company's business practices. Earlier this week, authorities formally requested access to Builder.ai's financial records, internal communications, and customer data as part of a broader inquiry into possible fraud, deception of investors, and violations related to false descriptions of AI capabilities.

The failure of Builder.ai is a clear sign that the investment and innovation ecosystems surrounding artificial intelligence need urgent recalibration. With capital still flowing rapidly into AI-powered ventures, stakeholders must raise their standards for due diligence, technical validation, and governance oversight. 

Investor enthusiasm for innovative startups needs to be tempered by rigorous evaluation of technical capabilities that goes beyond polished pitch decks and strategic storytelling. For founders, the case reinforces the importance of transparency and sustainability over short-term hype; for regulators, it underlines the need for frameworks that hold companies accountable for misleading product claims and financial disclosures. 

Regulators are increasingly alert to what is being called "AI washing" and are developing strategies to address it. In a sector built on trust, credibility has become a cornerstone of long-term viability, and the collapse of Builder.ai is no longer just a singular failure; it is a call for the tech industry to put substance above spectacle in the age of artificial intelligence.

Rust-Developed InfoStealer Extracts Sensitive Data from Chromium-Based Browsers


Browsers at risk

A new information-stealing malware, written in the Rust programming language, has surfaced as a major threat to users of Chromium-based browsers such as Google Chrome, Microsoft Edge, and others. 

Dubbed “RustStealer” by cybersecurity experts, the malware is designed to harvest sensitive data from infected systems, including login credentials, cookies, and browsing history. 

Evolution of Rust language

The growing use of Rust, a language known for memory safety and performance, signals a shift toward more resilient and harder-to-detect threats: Rust binaries often slip past traditional antivirus solutions because of their compiled nature and their relative rarity in malware analysis environments. 

RustStealer operates with a high degree of stealth, using sophisticated obfuscation techniques to evade endpoint security tools. Initial infection vectors point to phishing campaigns in which malicious attachments or links in seemingly legitimate emails trick users into downloading the payload. 

After execution, the malware establishes persistence via registry modifications or scheduled tasks, ensuring it remains active across system reboots. A rough audit sketch of such autostart entries follows below. 
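
For defenders, one of the simplest checks against this kind of persistence is reviewing autostart locations. The sketch below, assuming a Windows host and Python's standard winreg module, lists per-user Run-key entries; it covers only one common persistence location, and scheduled tasks or other registry keys would need separate review.

import winreg  # Windows-only standard library module

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

def list_run_entries():
    entries = []
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
        index = 0
        while True:
            try:
                name, value, _type = winreg.EnumValue(key, index)
            except OSError:
                break  # no more values under this key
            entries.append((name, value))
            index += 1
    return entries

if __name__ == "__main__":
    for name, command in list_run_entries():
        print(f"{name}: {command}")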

Data Harvesting and Exfiltration

The malware focuses on Chromium-based browsers, abusing the accessibility of unencrypted data stored in browser profiles to harvest session tokens, usernames, and passwords. 

RustStealer has also been found to exfiltrate data to remote command-and-control (C2) servers over encrypted communication channels, making detection with network monitoring tools such as Wireshark more challenging.

Experts have also observed its potential to target cryptocurrency wallet extensions, putting users who manage digital assets through browser plugins at risk. This multi-faceted approach highlights the malware's aim of maximising data theft while minimising the chance of early detection, a pattern similar to advanced persistent threats (APTs).

About RustStealer malware

What sets RustStealer apart is its modular design, which lets its operators rework and extend its capabilities remotely. 

This adaptability suggests that future variants could integrate functionality such as ransomware components or keylogging, intensifying the threat over the longer term. 

The use of Rust also complicates reverse engineering, since its compiled output is harder to decompile than interpreted languages such as Python that are common in older malware strains. 

Businesses are advised to remain vigilant, deploying strong phishing protections, keeping browser software up to date, and using endpoint detection and response (EDR) solutions to spot suspicious behaviour.