Security researchers have dismantled a substantial portion of the infrastructure powering the Kimwolf and Aisuru botnets, cutting off communication to more than 550 command-and-control servers used to manage infected devices. The action was carried out by Black Lotus Labs, the threat intelligence division of Lumen Technologies, and began in early October 2025.
Kimwolf and Aisuru operate as large-scale botnets, networks of compromised devices that can be remotely controlled by attackers. These botnets have been used to launch distributed denial-of-service attacks and to route internet traffic through infected devices, effectively turning them into unauthorized residential proxy nodes.
Kimwolf primarily targets Android systems, with a heavy concentration on unsanctioned Android TV boxes and streaming devices. Prior technical analysis showed that the malware is delivered through a component known as ByteConnect, which may be installed directly or bundled into applications that come preloaded on certain devices. Once active, the malware establishes persistent access to the device.
Researchers estimate that more than two million Android devices have been compromised. A key factor enabling this spread is the exposure of Android Debug Bridge services to the internet. When left unsecured, this interface allows attackers to install malware remotely without user interaction, enabling rapid and large-scale infection.
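The exposure described above is easy to test for from the defender's side. Below is a minimal, hypothetical sketch (not from the researchers' tooling) that checks whether a host accepts connections on ADB's default TCP port, 5555; an open port does not prove compromise, but any Android device answering on it from the public internet should be treated as exposed:

```python
import socket

ADB_PORT = 5555  # default port for ADB over TCP/IP


def adb_port_open(host: str, port: int = ADB_PORT, timeout: float = 2.0) -> bool:
    """Return True if `host` accepts a TCP connection on the ADB port.

    Note: this only detects the listening service; it does not attempt
    any ADB commands. Scan only hosts and networks you own or manage.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Devices found listening should be firewalled or have ADB-over-network disabled, since an unauthenticated ADB service allows remote package installation with no user interaction.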
Follow-up investigations revealed that operators associated with Kimwolf attempted to monetize the botnet by selling access to the infected devices’ internet connections. Proxy bandwidth linked to compromised systems was offered for sale, allowing buyers to route traffic through residential IP addresses in exchange for payment.
Black Lotus Labs traced parts of the Aisuru backend to residential SSH connections originating from Canadian IP addresses. These connections were used to access additional servers through proxy infrastructure, masking malicious activity behind ordinary household networks. One domain tied to this activity briefly appeared among Cloudflare’s most accessed domains before being removed due to abuse concerns.
In early October, researchers identified another Kimwolf command domain hosted on infrastructure linked to a U.S.-based hosting provider. Shortly after, independent reporting connected multiple proxy services to a now-defunct Discord server used to advertise residential proxy access. Individuals associated with the hosting operation were reportedly active on the server for an extended period.
During the same period, researchers observed a sharp increase in Kimwolf infections. Within days, hundreds of thousands of new devices were added to the botnet, with many of them immediately listed for sale through a single residential proxy service.
Further analysis showed that Kimwolf infrastructure actively scanned proxy services for vulnerable internal devices. By exploiting configuration flaws in these networks, the malware was able to move laterally, infect additional systems, and convert them into proxy nodes that were then resold.
Separate research uncovered a related proxy network built from hundreds of compromised home routers operating across Russian internet service providers. Identical configurations and access patterns indicated automated exploitation at scale. Because these devices appear as legitimate residential endpoints, malicious traffic routed through them is difficult to distinguish from normal consumer activity.
Researchers warn that the abuse of everyday consumer devices continues to provide attackers with resilient, low-visibility infrastructure that complicates detection and response efforts across the internet.
The development was first highlighted by Leo on X, who shared that Google has begun testing Gemini integration alongside agentic features in Chrome’s Android version. These findings are based on newly discovered references within Chromium, the open-source codebase that forms the foundation of the Chrome browser.
Additional insight comes from a Chromium post, where a Google engineer explained the recent increase in Chrome’s binary size. According to the engineer, "Binary size is increased because this change brings in a lot of code to support Chrome Glic, which will be enabled in Chrome Android in the near future," suggesting that the infrastructure needed for Gemini support is already being added. For those unfamiliar, “Glic” is the internal codename used by Google for Gemini within Chrome.
While the references do not reveal exactly how Gemini will function inside Chrome for Android, they strongly indicate that Google is actively preparing the feature. The integration could mirror the experience offered by Microsoft Copilot in Edge for Android. In such a setup, users might see a floating Gemini button that allows them to summarize webpages, ask follow-up questions, or request contextual insights without leaving the browser.
On desktop platforms, Gemini in Chrome already offers similar functionality by using the content of open tabs to provide contextual assistance. This includes summarizing articles, comparing information across multiple pages, and helping users quickly understand complex topics. However, Gemini’s desktop integration is still not widely available. Users who do have access can launch it using Alt + G on Windows or Ctrl + G on macOS.
The potential arrival of Gemini in Chrome for Android could make AI-powered browsing more accessible to a wider audience, especially as mobile devices remain the primary way many users access the internet. Agentic capabilities could help automate common tasks such as researching topics, extracting key points from long articles, or navigating complex websites more efficiently.
At present, Google has not confirmed when Gemini will officially roll out to Chrome for Android. However, the appearance of multiple references in Chromium suggests that development is progressing steadily. With Google continuing to expand Gemini across its ecosystem, an official announcement regarding its availability on Android is expected in the near future.
Then businesses started expecting more.
Slowly, companies moved from personal copilots to organizational agents: agents integrated into customer support, HR, IT, engineering, and operations. These agents didn't just suggest; they started acting, touching real systems, changing configurations, and moving real data.
Organizational agents are built to work across many resources, supporting multiple roles, users, and workflows through a single deployment. Instead of being tied to an individual user, these business agents act as shared resources that serve requests and automate work across systems for many users.
To work effectively, these AI agents depend on shared accounts, OAuth grants, and API keys to authenticate to the systems they interact with. These credentials are long-lived and centrally managed, enabling the agent to operate continuously.
While this approach maximizes convenience and coverage, these design choices can unintentionally create powerful access intermediaries that bypass traditional permission boundaries. Subsequent actions may look legitimate and harmless even when the agent inadvertently grants access beyond the requesting user's authority.
When execution is attributed to the agent's identity, user context is lost and reliable detection and attribution become impossible. Conventional security controls are poorly suited to agent-mediated workflows because they assume human users with direct system access. IAM systems enforce permissions according to the user's identity, but when an AI agent performs an action, authorization is evaluated against the agent's identity rather than the requester's.
As a result, user-level restrictions no longer apply. Logging and audit trails compound the problem by attributing behavior to the agent's identity, concealing who initiated an action and why. Security teams cannot enforce least privilege, detect misuse, or accurately attribute intent, which allows permission bypasses to occur without triggering conventional safeguards. The lack of attribution also slows incident response, complicates investigations, and makes it difficult to determine the scope or aim of a security incident.
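This permission-bypass pattern can be made concrete with a small sketch. All names below are hypothetical: the IAM check sees only the identity presented at the API boundary, so a request routed through a broadly privileged agent succeeds even though the requesting user lacks the permission, and the audit log records only the agent:

```python
# Minimal sketch of the agent-mediated permission bypass (all names hypothetical).
PERMISSIONS = {
    "agent-svc": {"read_hr_records", "update_config"},  # broad, shared credential
    "alice":     {"update_config"},                     # ordinary user
}

AUDIT_LOG = []


def authorize(identity: str, action: str) -> bool:
    """Classic IAM check: only the caller's own identity is considered."""
    allowed = action in PERMISSIONS.get(identity, set())
    AUDIT_LOG.append({"identity": identity, "action": action, "allowed": allowed})
    return allowed


def agent_handle(requesting_user: str, action: str) -> bool:
    """The agent acts on behalf of a user but authenticates as itself,
    so the requester's identity never reaches the IAM check or the log."""
    return authorize("agent-svc", action)


# Direct request: denied, and correctly attributed to alice.
assert authorize("alice", "read_hr_records") is False
# Same request via the agent: allowed, and attributed only to "agent-svc".
assert agent_handle("alice", "read_hr_records") is True
assert AUDIT_LOG[-1]["identity"] == "agent-svc"  # user context is lost
```

One common mitigation is to propagate the requester's identity alongside the agent's (for example via OAuth token-exchange or on-behalf-of flows) so that both authorization and audit logging see the human principal as well as the agent.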
Tibor Blaho, a prominent AI researcher, discovered previously undisclosed placeholders buried within the latest versions of OpenAI’s website code, as well as its Android and iOS applications. The references make clear that active development is underway across desktop and mobile platforms.
'Agora' is the Greek word for a public gathering space or marketplace. Its use within the code has sparked informed speculation, with leaked references such as 'is_agora_ios' and 'is_agora_android' hinting at a tightly controlled, cross-platform experience.
Because the name parallels established real-time media technologies, analysts believe the project could signal anything from a unified, cross-platform application to a collaborative social environment or a more advanced real-time voice or video communication framework.
The timing is noteworthy: reports have recently surfaced indicating that OpenAI is interested in building an AI-powered headset, raising the possibility that Agora could serve as a foundational layer for a broader hardware and software ecosystem.
Although the company has not yet officially acknowledged the project, OpenAI has already shown execution momentum by shipping tangible improvements to its voice input capabilities for logged-in users.
This points to a clear strategy of delivering seamless, interactive, real-time AI experiences. The breadth and depth of the references suggest the initiative is built to operate across multiple environments, possibly pointing to a unified application or a device-level feature that works across platforms.
The name “Agora” is commonly associated with public gathering spaces and marketplaces, which has fueled speculation that OpenAI is exploring community-oriented features that let users interact with one another.
Several experts have suggested that the name may instead reference real-time communication technology, given its association with a variety of audio and video development frameworks.
Notably, these findings surfaced alongside reports that OpenAI is considering new AI-powered hardware, such as wireless audio devices positioned as potential alternatives to Apple’s AirPods, and Agora could become an integral part of that tightly integrated hardware-software ecosystem.
Beyond these early indicators, ChatGPT has already seen tangible improvements from the latest update: OpenAI has significantly improved dictation performance by reducing empty transcriptions and raising overall accuracy, reinforcing the company’s commitment to voice-driven, real-time interaction.
An important part of this initiative is addressing longstanding inefficiencies in cross-border payments. Because they rely on fragmented correspondent banking networks, cross-border payments remain slow, expensive, and difficult to track, with trapped liquidity and hard-to-manage cash flows.
In addition, Project Agorá is exploring tokenization-based alternatives to existing wholesale payment frameworks, using advanced digital mechanisms such as smart contracts to achieve faster settlement, greater transparency, and better accessibility than current systems.
By developing tokenized representations of commercial bank deposits and central bank reserves, the project aims to understand how transactions can be executed in a secure and verifiable manner while preserving the crucial role of central bank money as the final settlement asset.
This approach offers several benefits, such as eliminating counterparty credit risk, ensuring transaction finality, and strengthening financial stability, while enabling new payment capabilities such as atomic, always-on, and conditional payments.
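The atomic-settlement property can be illustrated with a toy two-ledger example (purely illustrative, not drawn from Project Agorá's design): a payment-versus-payment transfer either commits both legs or neither, so no party is ever exposed to counterparty credit risk with a half-completed settlement:

```python
# Toy atomic settlement: both legs commit or neither does (names hypothetical).
class Ledger:
    def __init__(self, balances):
        self.balances = dict(balances)

    def atomic_swap(self, leg_a, leg_b):
        """Each leg is (payer, payee, amount). Validate both legs against a
        staged copy first, then commit in one step, so a failed leg leaves
        every balance untouched (transaction finality, no partial state)."""
        staged = dict(self.balances)
        for payer, payee, amount in (leg_a, leg_b):
            if staged.get(payer, 0) < amount:
                raise ValueError(f"{payer} has insufficient funds; nothing settled")
            staged[payer] -= amount
            staged[payee] = staged.get(payee, 0) + amount
        self.balances = staged  # single commit point


ledger = Ledger({"bank_a_usd": 100, "bank_b_eur": 80})
# The USD leg and the EUR leg settle together, or not at all.
ledger.atomic_swap(("bank_a_usd", "bank_b_usd", 50),
                   ("bank_b_eur", "bank_a_eur", 40))
```

In a real tokenized system this commit point would be enforced by a smart contract over central bank money rather than an in-memory dictionary, but the guarantee being tested for is the same.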
The initiative is evaluating not only the technical aspects of tokenized money but also its regulatory and legal consequences, including whether tokenized money complies with settlement finality rules, anti-money laundering obligations, and counter-terrorism financing regulations across different jurisdictions.
Although Project Agorá is positioned as an experimental prototype rather than a market-ready product, its research could help shape a more efficient, reliable, and transparent global payments infrastructure and provide a blueprint for the long-term evolution of cross-border financial systems.
Taken together, Agora’s emergence points to a broader strategic direction in which OpenAI moves beyond incremental feature updates toward platform-agnostic foundations that can extend across devices, use cases, and even industries.
Whether Agora ultimately emerges as a real-time communication layer, a collaborative digital environment, or a piece of infrastructure supporting future hardware and financial systems, its early signals point to a strong focus on interoperability, immediacy, and trust.
For enterprises and developers alike, the advantages of such an approach could include better AI-driven workflows, closer integration between voice, data, and transactions, and the ability to design services that operate seamlessly across boundaries and platforms.
The parallel focus on regulatory alignment and system resilience has also been read as an attempt to balance fast innovation with the stability needed for wide-scale adoption.
In the meantime, OpenAI continues to refine these initiatives behind the scenes. The Agora project suggests that the next phase of AI evolution may be defined less by isolated tools than by interconnected ecosystems enabling real-time interaction, secure exchange, and sustained economic growth worldwide.
Cybersecurity analysts have identified a sophisticated web skimming operation that has been running continuously since early 2022, silently targeting online checkout systems. The campaign focuses on stealing payment card information and is believed to affect businesses that rely on globally used card networks.
Web skimming is a type of cyberattack where criminals tamper with legitimate shopping websites rather than attacking customers directly. By inserting malicious code into payment pages, attackers are able to intercept sensitive information at the exact moment a customer attempts to complete a purchase. Because the website itself appears normal, victims are usually unaware their data has been compromised.
This technique is commonly associated with Magecart-style attacks. While Magecart initially referred to groups exploiting Magento-based websites, the term now broadly describes any client-side attack that captures payment data through infected checkout pages across multiple platforms.
The operation was uncovered during an investigation into a suspicious domain hosting malicious scripts. This domain was linked to infrastructure previously associated with a bulletproof hosting provider that had faced international sanctions. Researchers found that the attackers were using this domain to distribute heavily concealed JavaScript files that were loaded directly by e-commerce websites.
Once active, the malicious script continuously monitors user activity on the payment page. It is programmed to detect whether a website administrator is currently logged in by checking for specific indicators commonly found on WordPress sites. If such indicators are present, the script automatically deletes itself, reducing the risk of detection during maintenance or inspection.
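The admin-detection logic can be sketched roughly. The actual skimmer runs as JavaScript in the shopper's browser, and the report does not disclose exactly which indicators it checks; the sketch below (in Python, for illustration only) assumes the common WordPress defaults, where authenticated sessions set cookies with predictable name prefixes such as `wordpress_logged_in_` and `wp-settings-`:

```python
# Illustrative sketch of the evasion check described by researchers.
# Assumption: the skimmer keys off standard WordPress session cookies;
# the real indicators it uses were not published.
ADMIN_COOKIE_PREFIXES = ("wordpress_logged_in_", "wp-settings-")


def looks_like_admin(cookie_names):
    """Return True if any cookie name matches a WordPress login prefix.

    The skimmer bails out (self-deletes) when this is True, so site
    operators inspecting their own checkout never observe the attack.
    """
    return any(
        name.startswith(prefix)
        for name in cookie_names
        for prefix in ADMIN_COOKIE_PREFIXES
    )


assert looks_like_admin(["wordpress_logged_in_abc123"]) is True
assert looks_like_admin(["cart_session", "_ga"]) is False
```

This kind of conditional evasion is one reason skimmers persist for years: the people most likely to inspect the page are precisely the ones the malware hides from.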
The attack becomes particularly deceptive when certain payment options are selected. In these cases, the malicious code creates a fake payment form that visually replaces the legitimate one. Customers unknowingly enter their card number, expiration date, and security code into this fraudulent interface. After the information is captured, the website displays a generic payment error, making it appear as though the transaction failed due to a simple mistake.
In addition to financial data, the attackers collect personal details such as names, contact numbers, email addresses, and delivery information. This data is sent to an external server controlled by the attackers using standard web communication methods. Once the transfer is complete, the fake form is removed, the real payment form is restored, and the script marks the victim as already compromised to avoid repeating the attack.
Researchers noted that the operation reflects an advanced understanding of website behavior, especially within WordPress-based environments. By exploiting both technical features and user trust, the attackers have managed to sustain this campaign for years without drawing widespread attention.
This discovery reinforces the importance of continuous website monitoring and script validation for businesses, as well as cautious online shopping practices for consumers.
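Script validation can start with a simple inventory: enumerate every external script a checkout page loads and flag hosts that are not on an allow-list. A minimal sketch using only the Python standard library (the page content and allow-list entries below are placeholders):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse


class ScriptCollector(HTMLParser):
    """Collect the src attribute of every <script> tag on a page."""

    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)


def unexpected_script_hosts(html, allowed_hosts):
    """Return external script hosts that are not on the allow-list."""
    parser = ScriptCollector()
    parser.feed(html)
    hosts = {urlparse(src).netloc for src in parser.sources if urlparse(src).netloc}
    return sorted(hosts - set(allowed_hosts))


page = ('<script src="https://cdn.example-shop.com/app.js"></script>'
        '<script src="https://evil.example.net/skim.js"></script>')
print(unexpected_script_hosts(page, {"cdn.example-shop.com"}))
# → ['evil.example.net']
```

Run periodically against the live checkout page, a check like this turns a silent third-party injection into an alert, though it should be paired with integrity controls such as Subresource Integrity and Content-Security-Policy headers, since skimmers can also hide inside first-party files.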
The breach occurred in the EEOC’s Public Portal system, where unauthorized access to agency data may have exposed personal information submitted to the agency by the public. “Staff employed by the contractor, who had privileged access to EEOC systems, were able to handle data in an unauthorized (UA) and prohibited manner in early 2025,” reads the EEOC email notification sent by its data security office.
The email said the review suggested that personally identifiable information (PII) may have been leaked, varying by individual. The exposed information may include names, contact details, and other data. The review is still ongoing while the EEOC works with law enforcement.
The EEOC has asked individuals to monitor their financial accounts for malicious activity and has asked portal users to reset their passwords.
Contracting data indicates that EEOC had a contract with Opexus, a company that provides case management software solutions to the federal government.
An Opexus spokesperson confirmed this and said EEOC and Opexus “took immediate action when we learned of this activity, and we continue to support investigative and law enforcement efforts into these individuals’ conduct, which is under active prosecution in the Federal Court of the Eastern District of Virginia.”
Addressing the role of employees in the breach, the spokesperson added: “While the individuals responsible met applicable seven-year background check requirements consistent with prevailing government and industry standards at the time of hire, this incident made clear that personnel screening alone is not sufficient.”
The EEOC is central to the second Trump administration’s efforts to prevent claimed “illegal discrimination” driven by diversity, equity, and inclusion programs, which over the past year have been examined and dismantled at almost every level of the federal government.
The developments have affected large private companies across the nation. In an X post this month, EEOC chairwoman Andrea Lucas asked white men whether they had experienced racial or sexual discrimination at work and urged them to report their experiences to the agency “as soon as possible.”