A single photograph captured in a remote forest over a decade ago has become central to one of the most complex legal questions of the digital age: what happens when creative work is produced without direct human authorship? The answer now carries long-term consequences for artificial intelligence, creative industries, and ownership rights in the modern world.
The image in question originated in 2011, when wildlife photographer David Slater was documenting crested black macaques in Indonesia. These monkeys are not only endangered but also known for their highly expressive faces, making them attractive subjects for photography. However, Slater faced difficulty capturing close-up shots because the animals were wary of human presence.
To work around this, he positioned his camera on a tripod, enabled automatic focus, and used a flash, allowing the monkeys to approach and interact with the equipment without feeling threatened. His approach relied on curiosity rather than control. Eventually, one macaque handled the camera and pressed the shutter button while looking directly into the lens. The resulting image, widely known as the “monkey selfie,” appeared almost intentional, with the animal’s expression resembling a posed portrait.
While the photograph initially brought attention and recognition, it soon triggered an unexpected legal dispute. The core issue was deceptively simple: if a photograph is not taken by a human, can anyone claim ownership over it?
The situation escalated when the image was uploaded to Wikipedia, making it freely accessible worldwide. Slater objected to this distribution, arguing that he had lost approximately £10,000 in potential earnings because the image could now be used without payment. However, the Wikimedia Foundation refused to remove the photograph. Its reasoning was based on copyright law, which generally requires a human creator. Since the image was captured by an animal, the organisation classified it as public domain material.
This interpretation was later reinforced by the U.S. Copyright Office, which formally clarified that works produced without human authorship cannot be registered. In its guidance, the office explicitly listed a photograph taken by a monkey as an example of ineligible material, establishing a clear precedent.
The dispute took another unusual turn when People for the Ethical Treatment of Animals filed a lawsuit attempting to assign copyright ownership to the macaque itself. Although framed as a legal claim over the photograph, the case was widely interpreted as an effort to establish broader legal rights for animals. After several years of legal proceedings, a court dismissed the case, concluding that animals do not have the legal capacity to initiate lawsuits.
Legal experts later observed that, although the case focused on animal authorship, it introduced a broader conceptual challenge that would become more relevant with the rise of artificial intelligence. According to intellectual property lawyer Ryan Abbott, the debate could easily extend beyond animals to machines capable of producing creative outputs.
This possibility became reality when computer scientist Stephen Thaler attempted to secure copyright protection for an image generated by his AI system, DABUS. Thaler described the system as capable of independently producing ideas, arguing that it should be recognised as the sole creator of its output. He characterised the system as exhibiting a form of machine-based cognition, though this view is strongly disputed within the scientific community.
Despite these claims, the Copyright Office rejected the application, applying the same reasoning used in the monkey selfie case. Because the work was not created by a human, it could not qualify for copyright protection. This rejection led to a legal challenge that progressed through multiple levels of the U.S. judicial system.
When the case reached the Supreme Court of the United States, the court declined to hear it, leaving lower court rulings intact. The outcome effectively confirmed that, under current U.S. law, works generated entirely by artificial intelligence cannot be owned by anyone, including the developer of the system or the individual who prompted it.
This position has far-reaching implications for the creative economy. Copyright law exists to allow creators and organisations to control and monetise their work. Without ownership rights, it becomes difficult to build sustainable business models around fully AI-generated content. Legal scholar Stacey Dogan noted that this limitation reduces the likelihood of a future where machine-generated content completely replaces human-created media.
At the same time, the rapid expansion of generative AI tools continues to complicate the landscape. These systems function by analysing large datasets and producing outputs based on user instructions, often referred to as prompts. While they can generate text, images, and video at scale, their outputs raise questions about originality and authorship, particularly when human involvement is minimal.
Recent industry developments illustrate this uncertainty. Experimental AI-generated content has attracted large audiences online, suggesting a level of public interest, even if motivations such as novelty or criticism play a role. However, some technology companies have begun reassessing their AI content strategies, particularly where ownership and profitability remain unclear.
Expert opinion on the value of fully AI-generated content remains divided. Some specialists argue that such content lacks depth or authenticity, while others view AI as a useful tool for supporting human creativity rather than replacing it. This perspective positions AI as a collaborator rather than an independent creator.
Legal approaches also vary internationally. In the United Kingdom, copyright law allows ownership of computer-generated works by assigning authorship to the individual responsible for arranging their creation. However, this framework is currently being reconsidered as policymakers evaluate whether it remains appropriate in the context of modern AI systems.
One of the most complex unresolved issues involves hybrid creation. When humans actively guide, refine, and edit AI-generated outputs, determining ownership becomes less straightforward. A notable example involves an AI-assisted artwork that won a competition after extensive prompting and editing, raising questions about how much human contribution is required for copyright protection.
This debate is not entirely new. When photography first emerged, similar concerns were raised about whether cameras, rather than humans, were responsible for creative output. Over time, legal systems adapted by recognising the role of human intention and decision-making. Artificial intelligence now presents a more advanced version of that same challenge.
For now, the legal position in the United States remains clear: without meaningful human involvement, creative works cannot be protected by copyright. However, as AI becomes increasingly integrated into creative processes, the distinction between human and machine contribution is becoming more difficult to define.
What began as an unexpected interaction between a monkey and a camera has therefore evolved into a defining case in the global conversation about creativity, ownership, and technology. The decisions made in courts today will shape how creative work is produced, distributed, and valued in the future.
Artificial intelligence is often presented as a neutral and forward-looking force that improves efficiency and removes human bias from decision-making. In practice, however, many women working in Indonesia’s gig economy experience these systems very differently. Rather than easing workloads, AI-driven platforms are intensifying existing pressures.
Recent research examining female gig workers introduces the concept of “AI colonialism.” This idea describes how older patterns of domination continue through digital systems. In this framework, powerful technology actors, largely based in wealthier regions, extract labour, data, and economic value from workers in developing countries, reinforcing unequal global relationships. The structure resembles historical colonial systems, but operates through algorithms and platforms instead of direct political control.
In Indonesia, platforms such as Gojek, Grab, Maxim, and Shopee rely heavily on informal workers. These companies have not transformed the nature of employment. Instead, they have digitised an already informal labour market. Workers are labelled as independent “partners,” which excludes them from basic protections such as minimum wages, paid sick leave, and maternity benefits. Earnings depend entirely on the number of completed tasks and algorithm-based performance scores.
For women, this structure intersects with what is often described as the “double burden,” where paid work must be balanced alongside unpaid domestic responsibilities. One delivery worker, Lia, begins her day before sunrise by preparing meals and organising her children’s routines. Only after completing these responsibilities can she log into the platform. As she explains, the system recognises only whether she is online, not the constraints shaping her availability.
Platform algorithms prioritise continuous, uninterrupted activity. Incentive systems often require completing a fixed number of orders within strict time windows. For workers managing caregiving roles, this creates structural disadvantages. Logging off to attend to family responsibilities can result in lost bonuses, while reducing work hours due to fatigue or health issues leads to declining performance metrics.
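The structural disadvantage described above can be made concrete with a small illustrative sketch. The scoring formula below is hypothetical, not any real platform's algorithm; it simply shows how a metric that weights uninterrupted online time penalises a worker who logs off for caregiving, even when her productivity per hour online is identical.

```python
# Hypothetical sketch only: this is NOT a real platform's scoring formula.
# It illustrates how weighting "availability" penalises caregiving breaks.

def performance_score(orders_completed: int, hours_online: float,
                      target_orders: int = 20, window_hours: float = 12.0) -> float:
    """Score in [0, 1]: order completion blended with share of the window spent online."""
    completion = min(orders_completed / target_orders, 1.0)
    availability = min(hours_online / window_hours, 1.0)
    return 0.6 * completion + 0.4 * availability

# Two workers with identical productivity per hour online (~1.67 orders/hour):
full_day = performance_score(orders_completed=20, hours_online=12.0)
with_care_break = performance_score(orders_completed=15, hours_online=9.0)  # 3h off for family

print(round(full_day, 2))         # 1.0
print(round(with_care_break, 2))  # 0.75
```

Under these assumed weights, the worker who paused for three hours scores 25% lower despite working at exactly the same rate while online, which is the pattern the workers quoted here describe.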
This reflects a broader economic reality in which unpaid domestic labour underpins the formal economy without recognition or compensation. Instead of addressing this imbalance, AI systems can intensify it. Another worker, Cinthia, observed a noticeable drop in job assignments after taking time off due to illness. The experience created a sense that the system penalises any interruption, making workers reluctant to pause even when necessary.
Although algorithms do not explicitly target women, they are designed around an ideal worker who is always available and unconstrained by caregiving duties. This assumption produces indirect but consistent disadvantage. The claim that digital platforms operate neutrally is further challenged by everyday experiences. For example, a driver named Yanti often informs passengers in advance that she is female, leading to frequent cancellations. While the system records these cancellations, it does not capture the gender bias behind them.
Safety concerns also shape participation. Many women avoid working late hours due to risk, which limits access to peak-demand periods and higher earnings. The system interprets this reduced availability as lower productivity. Scholars such as Virginia Eubanks have argued that automated systems frequently replicate and amplify existing social inequalities rather than eliminate them.
Similar patterns have been observed in other countries. In India, women working in ride-hailing services report lower average earnings, partly because safety considerations influence when and where they work. Algorithms, however, measure output without accounting for these risks.
Safety challenges persist even within delivery roles. Around 90% of women in group discussions reported choosing delivery work over ride-hailing due to perceived safety advantages, yet harassment remains a concern from both customers and other drivers. During the COVID-19 pandemic, gig workers were classified as essential, but their incomes declined sharply, in some cases by up to 67% in early 2020. To compensate, many worked more than 13 hours a day. Despite these conditions, platform performance systems remained unchanged, and illness-related breaks often resulted in lower ratings.
This points to a deeper shift in contemporary labour control, where oversight is embedded within digital systems rather than managed by human supervisors. AI colonialism, in this sense, extends beyond ownership to the structure of control itself. Workers provide labour, time, and data, while platforms retain authority over decision-making processes.
In response, women workers have developed informal networks through messaging platforms to share information, warn others about unsafe situations, and adapt to algorithmic changes. They support each other by increasing activity on inactive accounts, lending money for operational costs, and collectively responding to account suspensions. When harassment occurs, information is circulated quickly to protect others.
These practices represent a form of mutual support rooted in shared vulnerability. Rather than relying on formal recognition as employees, many women build systems of protection among themselves. This amounts to a form of everyday resistance, where collective action becomes a strategy for navigating structural constraints.
Artificial intelligence is not inherently exploitative. However, when deployed within unequal economic systems, it can reinforce patterns of extraction and imbalance. As digital platforms continue to expand, understanding the lived experiences of workers, particularly women in developing economies, is essential. Behind every efficient system is a human reality shaped by trade-offs between income, safety, and dignity.
A newly identified infostealer called Storm emerged on underground cybercrime forums in early 2026, signalling a shift in how attackers steal and use credentials. Priced at under $1,000 per month, the malware collects browser-stored data such as login credentials, session cookies, and cryptocurrency wallet information, then covertly transfers the data to attacker-controlled servers where it is decrypted outside the victim's system.
This change becomes clearer when compared to earlier techniques. Traditionally, infostealers decrypted browser credentials directly on infected machines by loading SQLite libraries and accessing local credential databases. Because of this, endpoint security tools learned to treat such database access as one of the strongest indicators of malicious activity.
The approach began to break down after Google Chrome introduced App-Bound Encryption in version 127 in July 2024. This mechanism tied encryption keys to the browser environment itself, making local decryption substantially more difficult. Initial bypass attempts relied on injecting into browser processes or exploiting debugging protocols, but these techniques still generated detectable traces.
Storm avoids this entirely by skipping local decryption. Instead, it extracts encrypted browser files and quietly sends them to attacker infrastructure, removing the behavioural signals that endpoint tools typically rely on. It extends this model by supporting both Chromium-based browsers and Gecko-based browsers such as Firefox, Waterfox, and Pale Moon, whereas tools like StealC V2 still handle Firefox data locally.
The data collected includes saved passwords, session cookies, autofill entries, Google account tokens, payment card details, and browsing history. This combination gives attackers everything required to rebuild authenticated sessions remotely. In practice, a single compromised employee browser can provide direct access to SaaS platforms, internal systems, and cloud environments without triggering any password-based alerts.
Storm also automates session hijacking. Once decrypted, credentials and cookies appear in the attacker’s control panel. By supplying a valid Google refresh token along with a geographically matched SOCKS5 proxy, the platform can silently recreate the victim’s active session.
This technique aligns with earlier research by Varonis Threat Labs. Its Cookie-Bite study showed that stolen Azure Entra ID session cookies can bypass multi-factor authentication, granting persistent access to Microsoft 365. Similarly, its SessionShark analysis demonstrated how phishing kits intercept session tokens in real time to defeat MFA protections. Storm packages these methods into a commercial subscription service.
Beyond credentials, the malware collects files from user directories, extracts session data from applications like Telegram, Signal, and Discord, and targets cryptocurrency wallets through browser extensions and desktop applications. It also gathers system information and captures screenshots across multiple monitors. Most operations run in memory, reducing the likelihood of detection.
Its infrastructure design adds resilience. Operators connect their own virtual private servers to Storm’s central system, routing stolen data through infrastructure they control. This setup limits the impact of takedowns, as enforcement actions are more likely to affect individual operator nodes rather than the core service.
Storm supports multi-user operations, allowing teams to divide responsibilities such as log access, malware build generation, and session restoration. It also automatically categorises stolen credentials by service, with visible rules for platforms including Google, Facebook, Twitter/X, and cPanel, helping attackers prioritise targets.
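The same kind of service categorisation is routine on the defensive side, where incident responders triage exposed credential dumps to assess which accounts need rotation first. A minimal sketch of domain-to-service matching follows; the rule table is illustrative and is not Storm's actual rule set.

```python
# Defensive triage sketch: map hosts from an exposed-credential log to
# the service they belong to, to prioritise password rotation.
# The rule patterns below are illustrative, not any tool's real rules.
import re

SERVICE_RULES = {
    "Google": re.compile(r"(^|\.)(google|gmail)\.com$"),
    "Facebook": re.compile(r"(^|\.)facebook\.com$"),
    "cPanel": re.compile(r":2083$|(^|\.)cpanel\."),
}

def categorise(host: str) -> str:
    """Return the first matching service label, or 'Other'."""
    for service, pattern in SERVICE_RULES.items():
        if pattern.search(host):
            return service
    return "Other"

print(categorise("accounts.google.com"))  # Google
print(categorise("www.facebook.com"))     # Facebook
print(categorise("example.org"))          # Other
```

In practice such rules are maintained per service and run over every harvested entry, which is why both attackers and defenders can sort thousands of credentials by platform in seconds.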
At the time of analysis, the control panel displayed 1,715 log entries linked to locations including India, the United States, Brazil, Indonesia, Ecuador, and Vietnam. While it is unclear whether all entries represent real victims or test data, variations in IP addresses, internet service providers, and data volumes suggest ongoing campaigns.
The logs include credentials associated with platforms such as Google, Facebook, Twitter/X, Coinbase, Binance, Blockchain.com, and Crypto.com. Such information often feeds into underground credential marketplaces, enabling account takeovers, fraud, and more targeted intrusions.
Storm is offered through a tiered pricing model: $300 for a seven-day trial, $900 per month for standard access, and $1,800 per month for a team licence supporting up to 100 operators and 200 builds. Use of an additional crypter is required. Notably, once deployed, malware builds continue operating even after a subscription expires, allowing ongoing data collection.
Security researchers view Storm as part of a broader evolution in credential theft. By shifting decryption to remote servers, attackers avoid detection mechanisms designed to identify on-device activity. At the same time, session cookie theft is increasingly replacing password theft as the primary objective.
The data collected by such tools often marks the beginning of further attacks, including logins from unusual locations, lateral movement within networks, and unauthorised access patterns.
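Detecting the follow-on activity described above often starts with flagging logins from locations a user has never used before. The sketch below shows one simple form of that check; the event fields ("user", "country", "asn") are an assumed log schema for illustration, not a real SIEM format.

```python
# Defensive sketch: flag logins whose (country, ASN) pair is new for that
# user, a basic signal for hijacked-session reuse. The field names are an
# assumed log schema, not a specific product's format.
from collections import defaultdict

def flag_unusual_logins(events):
    """Yield events from a (country, ASN) pair never seen before for that user."""
    seen = defaultdict(set)
    for event in events:
        key = (event["country"], event["asn"])
        history = seen[event["user"]]
        if history and key not in history:
            yield event  # established user, never-seen location/network
        history.add(key)

logins = [
    {"user": "alice", "country": "US", "asn": "AS7922"},
    {"user": "alice", "country": "US", "asn": "AS7922"},
    {"user": "alice", "country": "VN", "asn": "AS45899"},  # possible stolen session
]
alerts = list(flag_unusual_logins(logins))
print(alerts)  # [{'user': 'alice', 'country': 'VN', 'asn': 'AS45899'}]
```

Real deployments layer on travel-time checks, device fingerprints, and proxy reputation, since attackers pairing cookies with geographically matched proxies (as Storm does) are designed to defeat exactly this naive version.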
Indicators of compromise include:
Alias: StormStealer
Forum ID: 221756
Registration date: December 12, 2025
Current version: v0.0.2.0 (Gunnar)
Build details: Developed in C++ (MSVC/msbuild), approximately 460 KB in size, targeting Windows systems
The emergence of Storm underscores how cybercriminal tools are becoming more advanced, automated, and difficult to detect, requiring organisations to strengthen monitoring of sessions, user behaviour, and access patterns rather than relying solely on traditional credential protection methods.