Artificial intelligence has become one of the most talked-about technologies in recent years, with billions of dollars poured into projects aimed at transforming workplaces. Yet, a new study by MIT suggests that while official AI programs inside companies are struggling, employees are quietly driving a separate wave of adoption on their own. Researchers are calling this the rise of the “shadow AI economy.”
The report, titled State of AI in Business 2025 and conducted by MIT’s Project NANDA, examined more than 300 public AI initiatives, interviewed leaders from 52 organizations, and surveyed 153 senior executives. Its findings reveal a clear divide. Only 40% of companies have official subscriptions to large language model (LLM) tools such as ChatGPT or Copilot, but employees in more than 90% of companies are using personal accounts to complete their daily work.
This hidden usage is not minor. Many workers reported turning to AI multiple times a day for tasks like drafting emails, summarizing information, or basic data analysis. These personal tools are often faster, easier to use, and more adaptable than the expensive systems companies are trying to build in-house.
MIT researchers describe this contrast as the “GenAI divide.” Despite $30–40 billion in global investments, only 5% of businesses have seen real financial impact from their official AI projects. In most cases, these tools remain stuck in test phases, weighed down by technical issues, integration challenges, or limited flexibility. Employees, however, are already benefiting from consumer AI products that require no approvals or training to start using.
The study highlights several reasons behind this divide:
1. Accessibility: Consumer tools are easy to set up, requiring little technical knowledge.
2. Flexibility: Workers can adapt them to their own workflows without waiting for management decisions.
3. Immediate value: Users see results instantly, unlike with many corporate systems that fail to show clear benefits.
Because of this, employees are increasingly choosing AI for routine tasks. The survey found that around 70% prefer AI for simple work like drafting emails, while 65% use it for basic analysis. At the same time, most still believe humans should handle sensitive or mission-critical responsibilities.
The findings also challenge some popular myths about AI. According to MIT, widespread fears of job losses have not materialized, and generative AI has yet to revolutionize business operations in the way many predicted. Instead, the problem lies in rigid tools that fail to learn, adapt, or integrate smoothly into existing systems. Internal projects built by companies themselves also tend to fail at twice the rate of externally sourced solutions.
For now, the “shadow AI economy” shows that the real adoption of AI is happening at the individual level, not through large-scale corporate programs. The report concludes that companies that recognize and build on this grassroots use of AI may be better placed to succeed in the future.
The ongoing debate around Meta’s use of European data to train its artificial intelligence (AI) systems is far from over. While Meta has started training its large language models (LLMs) using public content from Facebook and Instagram, privacy regulators in Europe are still questioning whether this is lawful, and the issue may soon reach the European Court of Justice (ECJ).
Meta began training its AI using public posts made by users in the EU shortly after getting the go-ahead from several privacy watchdogs. This approval came just before Meta launched AI-integrated products, including its smart glasses, which rely heavily on understanding cultural and regional context from online data.
However, some regulators and consumer groups are not convinced the approval was justified. A German consumer organization attempted to block the training through an emergency court appeal. Although the request was denied, the decision was only provisional. The core legal challenges, including one led by Hamburg’s data protection office, are still expected to proceed in court.
Hamburg’s commissioner, who initially supported blocking the training, later withdrew a separate emergency measure under Europe’s data protection law. He stated that while the training has been allowed to continue for now, it’s highly likely that the final ruling will come from the EU’s highest court.
The controversy centers on whether Meta has a strong enough legal basis, known as “legitimate interest,” to use personal data for AI training. Meta’s argument was accepted by Irish regulators, who oversee Meta’s EU operations, on the condition that strict privacy safeguards are in place.
What Does ‘Legitimate Interest’ Mean Under GDPR?
Under the General Data Protection Regulation (GDPR), companies must have a valid reason to collect and use personal data. One of the six legal bases allowed is called “legitimate interest.”
This means a company can process someone’s data if it’s necessary for a real business purpose, as long as it does not override the privacy rights of the individual.
In the case of AI model training, companies like Meta claim that building better products and improving AI performance qualifies as a legitimate interest. However, this is debated, especially when public data includes posts with personal opinions, cultural expressions, or identity-related content.
Data protection regulators must carefully balance:
1. The company’s business goals
2. The individual’s right to privacy
3. The potential long-term risks of using personal data for AI systems
Some experts argue that this sets a broader precedent. If Meta can train its AI using public data under the concept of legitimate interest, other companies may follow. This has raised hopes among many European AI firms that have felt held back by unclear or strict regulations.
Industry leaders say that regulatory uncertainty, specifically surrounding GDPR and the upcoming AI Act, has been one of the biggest barriers to innovation in the region. Others believe the current developments signal a shift toward supporting responsible AI development while protecting users’ rights.
Despite approval from regulators and support from industry voices, legal clarity is still missing. Many legal experts and companies agree that only a definitive ruling from the European Court of Justice can settle whether using personal data for AI training in this way is truly lawful.
Cybersecurity researchers have issued a warning about a sophisticated cyberattack campaign targeting users attempting to access DeepSeek-R1, a widely recognized large language model (LLM). Cybercriminals have launched a malicious operation that uses deceptive tactics to exploit unsuspecting users, capitalising on the soaring global interest in artificial intelligence tools and, more specifically, open-source LLMs.