Artificial intelligence is often presented as a neutral and forward-looking force that improves efficiency and removes human bias from decision-making. In practice, however, many women working in Indonesia’s gig economy experience these systems very differently. Rather than easing workloads, AI-driven platforms are intensifying existing pressures.
Recent research examining female gig workers introduces the concept of “AI colonialism.” This idea describes how older patterns of domination continue through digital systems. In this framework, powerful technology actors, largely based in wealthier regions, extract labour, data, and economic value from workers in developing countries, reinforcing unequal global relationships. The structure resembles historical colonial systems, but operates through algorithms and platforms instead of direct political control.
In Indonesia, platforms such as Gojek, Grab, Maxim, and Shopee rely heavily on informal workers. These companies have not transformed the nature of employment. Instead, they have digitised an already informal labour market. Workers are labelled as independent “partners,” which excludes them from basic protections such as minimum wages, paid sick leave, and maternity benefits. Earnings depend entirely on the number of completed tasks and algorithm-based performance scores.
For women, this structure intersects with what is often described as the “double burden,” where paid work must be balanced alongside unpaid domestic responsibilities. One delivery worker, Lia, begins her day before sunrise by preparing meals and organising her children’s routines. Only after completing these responsibilities can she log into the platform. As she explains, the system recognises only whether she is online, not the constraints shaping her availability.
Platform algorithms prioritise continuous, uninterrupted activity. Incentive systems often require completing a fixed number of orders within strict time windows. For workers managing caregiving roles, this creates structural disadvantages. Logging off to attend to family responsibilities can result in lost bonuses, while reducing work hours due to fatigue or health issues leads to declining performance metrics.
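To see how this plays out mechanically, consider a deliberately simplified toy model of an availability-weighted incentive scheme. The quota, window, and weighting below are hypothetical illustrations, not any platform's actual algorithm:

```python
# A toy model of an availability-weighted incentive scheme. The quota,
# window, and multiplicative weighting are hypothetical; the point is
# structural, not empirical.

def daily_score(orders_completed: int, hours_online: float,
                quota: int = 20, window_hours: float = 12.0) -> float:
    """Score rises with completed orders but is scaled by time online,
    so the same hourly productivity earns less when hours are fewer."""
    completion = min(orders_completed / quota, 1.0)
    availability = min(hours_online / window_hours, 1.0)
    return round(100 * completion * availability, 1)

# Two workers with identical productivity per hour online:
print(daily_score(orders_completed=20, hours_online=12))  # 100.0
print(daily_score(orders_completed=10, hours_online=6))   # 25.0
```

Because the two factors multiply, halving hours online does not halve the score; it quarters it. That compounding penalty is the structural disadvantage workers describe.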
These dynamics reflect a broader economic reality in which unpaid domestic labour underpins the formal economy without recognition or compensation. Instead of addressing this imbalance, AI systems can intensify it. Another worker, Cinthia, observed a noticeable drop in job assignments after taking time off due to illness. The experience created a sense that the system penalises any interruption, making workers reluctant to pause even when necessary.
Although algorithms do not explicitly target women, they are designed around an ideal worker who is always available and unconstrained by caregiving duties. This assumption produces indirect but consistent disadvantage. The claim that digital platforms operate neutrally is further challenged by everyday experiences. For example, a driver named Yanti often informs passengers in advance that she is female, leading to frequent cancellations. While the system records these cancellations, it does not capture the gender bias behind them.
Safety concerns also shape participation. Many women avoid working late hours due to risk, which limits access to peak-demand periods and higher earnings. The system interprets this reduced availability as lower productivity. Scholars such as Virginia Eubanks have argued that automated systems frequently replicate and amplify existing social inequalities rather than eliminate them.
Similar patterns have been observed in other countries. In India, women working in ride-hailing services report lower average earnings, partly because safety considerations influence when and where they work. Algorithms, however, measure output without accounting for these risks.
Safety challenges persist even within delivery roles. Around 90% of women in group discussions reported choosing delivery work over ride-hailing due to perceived safety advantages, yet harassment remains a concern from both customers and other drivers. During the COVID-19 pandemic, gig workers were classified as essential, but their incomes declined sharply, in some cases by up to 67% in early 2020. To compensate, many worked more than 13 hours a day. Despite these conditions, platform performance systems remained unchanged, and illness-related breaks often resulted in lower ratings.
This points to a deeper shift in contemporary labour control, where oversight is embedded within digital systems rather than exercised by human supervisors. AI colonialism, in this sense, extends beyond ownership to the structure of control itself. Workers provide labour, time, and data, while platforms retain authority over decision-making.
In response, women workers have developed informal networks through messaging platforms to share information, warn others about unsafe situations, and adapt to algorithmic changes. They support each other by increasing activity on inactive accounts, lending money for operational costs, and collectively responding to account suspensions. When harassment occurs, information is circulated quickly to protect others.
These practices represent a form of mutual support rooted in shared vulnerability. Rather than relying on formal recognition as employees, many women build systems of protection among themselves. It is also a form of everyday resistance, in which collective action becomes a strategy for navigating structural constraints.
Artificial intelligence is not inherently exploitative. However, when deployed within unequal economic systems, it can reinforce patterns of extraction and imbalance. As digital platforms continue to expand, understanding the lived experiences of workers, particularly women in developing economies, is essential. Behind every efficient system is a human reality shaped by trade-offs between income, safety, and dignity.
A newly identified infostealer called Storm emerged on underground cybercrime forums in early 2026, signalling a change in how attackers steal and use credentials. Priced at under $1,000 per month, the malware collects browser-stored data such as login credentials, session cookies, and cryptocurrency wallet information, then covertly transfers it to attacker-controlled servers, where it is decrypted outside the victim's system.
This change becomes clearer when compared to earlier techniques. Traditionally, infostealers decrypted browser credentials directly on infected machines by loading SQLite libraries and accessing local credential databases. Because of this, endpoint security tools learned to treat such database access as one of the strongest indicators of malicious activity.
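As a rough illustration of that detection logic, the sketch below polls running processes with Python's psutil library and flags any non-browser process holding Chromium's "Login Data" file open. It is a toy version of the signal: real endpoint tools hook file-system APIs rather than polling, and the path fragment and trusted-process list here are assumptions.

```python
# Illustrative only: flag non-browser processes holding Chromium's
# "Login Data" SQLite file open. Real EDR products hook file-system
# APIs instead of polling, but the underlying signal is the same.
import psutil

LOGIN_DB_MARKER = "\\User Data\\Default\\Login Data"   # typical Chromium path fragment
TRUSTED = {"chrome.exe", "msedge.exe", "brave.exe"}    # browsers legitimately hold it

def scan_for_credential_db_access() -> None:
    for proc in psutil.process_iter(["name", "open_files"]):
        name = (proc.info["name"] or "").lower()
        for f in proc.info["open_files"] or []:   # None when access is denied
            if LOGIN_DB_MARKER.lower() in f.path.lower() and name not in TRUSTED:
                print(f"suspicious: {name} (pid {proc.pid}) has {f.path} open")

if __name__ == "__main__":
    scan_for_credential_db_access()
```

The essential shape of the signal is simple: a process that is not a browser touching a credential database.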
That local-decryption approach began to break down after Google Chrome introduced App-Bound Encryption in version 127 in July 2024. This mechanism tied encryption keys to the browser environment itself, making local decryption far more difficult. Initial bypass attempts relied on injecting into browser processes or exploiting debugging protocols, but these techniques still generated detectable traces.
Storm avoids this entirely by skipping local decryption. Instead, it extracts encrypted browser files and quietly sends them to attacker infrastructure, removing the behavioural signals that endpoint tools typically rely on. It extends this model by supporting both Chromium-based browsers and Gecko-based browsers such as Firefox, Waterfox, and Pale Moon, whereas tools like StealC V2 still handle Firefox data locally.
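Because this model depends on copying credential stores before exfiltration, one crude defender-side signal is those filenames appearing outside browser profile directories. The sketch below uses the watchdog library to watch a temp directory for files named like Chromium or Gecko credential stores; the watched directory and filename list are assumptions for illustration, not Storm-specific indicators.

```python
# A hedged sketch: alert when files named like browser credential stores
# appear in a staging-prone location such as %TEMP%. Filenames cover both
# Chromium ("Login Data", "Cookies", "Local State") and Gecko
# ("key4.db", "logins.json") stores.
import os
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

SENSITIVE_NAMES = {"login data", "cookies", "local state", "key4.db", "logins.json"}

class StagingWatcher(FileSystemEventHandler):
    def on_created(self, event):
        if not event.is_directory:
            name = os.path.basename(event.src_path).lower()
            if name in SENSITIVE_NAMES:
                print(f"possible staging of browser secrets: {event.src_path}")

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(StagingWatcher(), os.environ.get("TEMP", "/tmp"), recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    finally:
        observer.stop()
        observer.join()
```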
The data collected includes saved passwords, session cookies, autofill entries, Google account tokens, payment card details, and browsing history. This combination gives attackers everything required to rebuild authenticated sessions remotely. In practice, a single compromised employee browser can provide direct access to SaaS platforms, internal systems, and cloud environments without triggering any password-based alerts.
Storm also automates session hijacking. Once the stolen data is decrypted server-side, credentials and cookies appear in the attacker's control panel. By supplying a valid Google refresh token along with a geographically matched SOCKS5 proxy, the platform can silently recreate the victim's active session.
This technique aligns with earlier research by Varonis Threat Labs. Its Cookie-Bite study showed that stolen Azure Entra ID session cookies can bypass multi-factor authentication, granting persistent access to Microsoft 365. Similarly, its SessionShark analysis demonstrated how phishing kits intercept session tokens in real time to defeat MFA protections. Storm packages these methods into a commercial subscription service.
Beyond credentials, the malware collects files from user directories, extracts session data from applications like Telegram, Signal, and Discord, and targets cryptocurrency wallets through browser extensions and desktop applications. It also gathers system information and captures screenshots across multiple monitors. Most operations run in memory, reducing the likelihood of detection.
Its infrastructure design adds resilience. Operators connect their own virtual private servers to Storm’s central system, routing stolen data through infrastructure they control. This setup limits the impact of takedowns, as enforcement actions are more likely to affect individual operator nodes rather than the core service.
Storm supports multi-user operations, allowing teams to divide responsibilities such as log access, malware build generation, and session restoration. It also automatically categorises stolen credentials by service, with visible rules for platforms including Google, Facebook, Twitter/X, and cPanel, helping attackers prioritise targets.
At the time of analysis, the control panel displayed 1,715 log entries linked to locations including India, the United States, Brazil, Indonesia, Ecuador, and Vietnam. While it is unclear whether all entries represent real victims or test data, variations in IP addresses, internet service providers, and data volumes suggest ongoing campaigns.
The logs include credentials associated with platforms such as Google, Facebook, Twitter/X, Coinbase, Binance, Blockchain.com, and Crypto.com. Such information often feeds into underground credential marketplaces, enabling account takeovers, fraud, and more targeted intrusions.
Storm is offered through a tiered pricing model: $300 for a seven-day trial, $900 per month for standard access, and $1,800 per month for a team licence supporting up to 100 operators and 200 builds. Use of an additional crypter is required. Notably, once deployed, malware builds continue operating even after a subscription expires, allowing ongoing data collection.
Security researchers view Storm as part of a broader evolution in credential theft. By shifting decryption to remote servers, attackers avoid detection mechanisms designed to identify on-device activity. At the same time, session cookie theft is increasingly replacing password theft as the primary objective.
The data collected by such tools often marks the beginning of further attacks, which surface as logins from unusual locations, lateral movement within networks, and other anomalous access patterns.
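A minimal starting point for monitoring those patterns, sketched below under an assumed log schema (session ID, timestamp, geolocated coordinates), is an "impossible travel" check that flags the same session appearing from locations too far apart to reach in the elapsed time:

```python
# A simplified "impossible travel" check over authentication logs. The log
# schema (session_id, time, lat/lon) is hypothetical; production systems
# would also weigh ASN, device fingerprint, and token age.
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

MAX_PLAUSIBLE_KMH = 900.0  # roughly airliner speed; faster implies token replay

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371.0 * 2 * asin(sqrt(a))

def flag_impossible_travel(events):
    """events: dicts with session_id, time (datetime), lat, lon."""
    last_seen = {}
    for e in sorted(events, key=lambda ev: ev["time"]):
        prev = last_seen.get(e["session_id"])
        if prev:
            hours = (e["time"] - prev["time"]).total_seconds() / 3600
            km = haversine_km(prev["lat"], prev["lon"], e["lat"], e["lon"])
            if hours > 0 and km / hours > MAX_PLAUSIBLE_KMH:
                yield e["session_id"], km, hours
        last_seen[e["session_id"]] = e

events = [
    {"session_id": "s1", "time": datetime(2026, 1, 5, 9, 0), "lat": 51.5, "lon": -0.1},   # London
    {"session_id": "s1", "time": datetime(2026, 1, 5, 9, 30), "lat": -6.2, "lon": 106.8}, # Jakarta
]
for sid, km, hours in flag_impossible_travel(events):
    print(f"session {sid}: {km:.0f} km in {hours:.1f} h (possible stolen-session replay)")
```

Note that tools like Storm counter exactly this check with geographically matched proxies, which is why behavioural signals beyond geography also matter.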
Indicators of compromise include:
Alias: StormStealer
Forum ID: 221756
Registration date: December 12, 2025
Current version: v0.0.2.0 (Gunnar)
Build details: Developed in C++ (MSVC/msbuild), approximately 460 KB in size, targeting Windows systems
The emergence of Storm underlines how cybercriminal tools are becoming more advanced, automated, and difficult to detect, and why organisations need to strengthen monitoring of sessions, user behaviour, and access patterns rather than rely solely on traditional credential protection methods.
Artificial intelligence is not only improving everyday technology but also strengthening both traditional and emerging scam techniques. As a result, avoiding fraud now requires greater awareness of how these schemes are taking new shapes.
Being able to identify scams is an essential skill for everyone, regardless of age. This is especially important as AI tools continue to advance rapidly, contributing to a noticeable increase in reported fraud cases. According to the Federal Bureau of Investigation’s 2025 Internet Crime Report, complaints linked to cryptocurrency and artificial intelligence ranked among the most financially damaging cybercrimes, with total losses approaching $21 billion. The agency also highlighted that, for the first time in its history, its Internet Crime Complaint Center included a dedicated section on artificial intelligence, documenting 22,364 cases that resulted in losses of nearly $893 million.
These scams are increasingly convincing. AI can generate realistic emails and replicate human voices through audio deepfakes, making fraudulent communication difficult to distinguish from legitimate interactions. Because of this, such threats should be treated as ongoing and persistent risks.
Protecting yourself, your family, and your finances requires both instinct and awareness. Training your eye for detail and your ear for what sounds off will help you identify suspicious activity. Below are seven warning signs that can help you recognize AI-driven scams and avoid serious consequences.
1. Messages that feel unusually personalized
AI can gather publicly available details, including your job, interests, or recent purchases, to create messages that appear tailored specifically to you. While these messages may seem accurate, they can still contain subtle errors or incorrect assumptions about your life, which should raise concern.
2. Requests that create urgency
Scammers often attempt to rush you with statements such as warnings that your account will be locked, demands for immediate payment, or requests for login credentials to restore access. This pressure is designed to force quick decisions without careful thinking.
3. Messages that appear overly polished
Unlike older scams filled with spelling or grammar mistakes, AI-generated messages are often clear and well-written. However, phrases like “confirm your information to avoid cancellation” or “we noticed unusual activity” should still be treated cautiously, especially if accompanied by suspicious visuals or a lack of supporting detail.
4. Audio that sounds slightly unnatural
Voice-cloning technology can imitate people you know, making phone-based scams more believable. Still, these voices may reveal themselves through unnatural pacing, limited emotional variation, or requests that seem out of character for the person being impersonated.
5. Deepfake videos that seem real but contain flaws
AI can also generate convincing videos of colleagues, family members, or even public figures. These may appear during video calls, workplace interactions, or through compromised social media accounts. Warning signs include inconsistent lighting, unusual shadows, or subtle distortions in facial movement.
6. Attempts to move conversations across platforms
Scammers may begin communication through email or professional platforms and then attempt to shift the interaction to messaging apps, payment platforms, or other channels. This tactic, often supported by chatbot-driven conversations, is used to appear credible while avoiding detection.
7. Unusual or suspicious payment requests
Requests for payment through gift cards, wire transfers, or cryptocurrency remain a major red flag. These methods are difficult to trace and are frequently used in fraudulent schemes, regardless of how legitimate the request may initially appear.
Why awareness matters
While AI has not changed the underlying tactics of scams, it has made them far more refined and scalable. Techniques such as impersonation, urgency, and trust-building are now enhanced through automation and data-driven personalization.
As these technologies become more deeply embedded in everyday life and continue to develop, the risks will grow with them. Staying cautious, verifying unexpected requests, and sharing this knowledge with friends and family are critical steps in reducing exposure.
In a digital environment where scams increasingly resemble genuine communication, recognizing these warning signs remains one of the most effective ways to stay protected.