
Unauthorized Use of AI Tools by Employees Exposes Sensitive Corporate Data


 

Artificial intelligence has rapidly revolutionised the modern workplace, creating unprecedented opportunities and complex challenges in equal measure. Although AI was initially conceived as a productivity aid, it has quickly evolved into a transformational force that is changing the way employees think, work, and communicate.

Despite this rapid rise, many organisations remain ill-prepared to deal with the unchecked use of artificial intelligence. With the advent of generative AI, which can produce text, images, video, and audio, employees have increasingly adopted it for drafting emails, preparing reports, analysing data, and even producing creative content.

Advanced language models, trained on vast datasets, can mimic human language with remarkable fluency, enabling workers to complete in minutes tasks that once took hours. According to some surveys, a majority of American employees rely on AI tools, often without formal approval or oversight; many of these tools are freely accessible with little more than an email address.

Platforms such as ChatGPT, which require nothing more than an email address to sign up, exemplify this fast-growing trend. Yet the widespread use of unregulated AI tools raises serious concerns about privacy, data protection, and corporate governance, concerns employers must address with clear policies, robust safeguards, and a better understanding of the evolving digital landscape before they materialise into real harm.

A recent Cybernews investigation highlights how concerning the surge in unapproved AI use in the workplace has become. Even as digital risks rise, a staggering 75 per cent of employees who use so-called “shadow AI” tools admit to having shared sensitive or confidential information through them, information that could easily compromise their organisations.

More troubling still, the trend is not restricted to junior staff; it is led from the top of the organisation. Roughly 93 per cent of executives and senior managers admit to using unauthorised AI tools, making them the most frequent users, followed by management at 73 per cent and professionals at 62 per cent.

In other words, the use of unauthorised AI tools is not an isolated habit but a systemic problem. Employee records, customer information, internal documents, financial and legal records, and proprietary code are among the categories of sensitive information most commonly exposed, and each has the potential to cause a serious security breach.

The behaviour persists even though nearly nine out of ten workers admit that using AI carries significant risk. Some 64 per cent of respondents recognise that unapproved AI tools could lead to data leaks, and more than half say they would stop using those tools if a leak occurred; yet proactive measures remain rare. The result is a growing disconnect between awareness and action in corporate data governance, one that could have profound consequences if left unaddressed.

The survey also reveals a striking paradox within corporate hierarchies: although senior management is typically responsible for setting data governance standards, it is also the most frequent violator of them. Ninety-three per cent of executives and senior managers use unapproved AI tools, outpacing every other job level by a wide margin.

Managers and team leaders, the very people responsible for ensuring compliance and modelling best practice, also engage heavily with unauthorised platforms. Researchers suggest this pattern reflects a worrying disconnect between policy enforcement and actual behaviour, one that erodes accountability from the top down. Žilvinas Girėnas, head of product at Nexos.ai, warns that the implications of such unchecked behaviour extend far beyond simple misuse.

Once sensitive data is pasted into unapproved AI tools, he argues, it is impossible to determine where it will end up. "It might be stored, used to train another model, exposed in logs, or even sold to third parties," he explained, adding that such actions can quietly slip confidential contracts, customer details, or internal records into external systems without detection.

A study by IBM underscores the seriousness of the issue, estimating that shadow AI can add as much as $670,000 to the average cost of a data breach, an expense few companies can absorb. Even so, the Cybernews study found that almost one in four employers has no formal policy governing AI use in the workplace.

Experts believe awareness alone will not be enough to keep these risks at bay. As Sabeckis noted, “It would be a shame if the only way to stop employees from using unapproved AI tools was through the hard lesson of a data breach. For many companies, even a single breach can be catastrophic.” Girėnas echoed this sentiment, emphasising that shadow AI “thrives in silence” when leadership fails to act decisively.

He warned that without clear guidelines and sanctioned alternatives, employees will continue to rely on whatever tools seem convenient, and efficiency shortcuts will keep turning into potential security breaches. Experts emphasise that, beyond technical safeguards, organisations must adopt comprehensive internal governance strategies to mitigate the growing risks of unregulated artificial intelligence.

A well-structured AI framework starts with a formal AI policy. That policy should clearly state acceptable uses for AI, prohibit the unauthorised download of free AI tools, and limit the sharing of personal, proprietary, and confidential information through these platforms.

Businesses are also advised to revise and update existing IT, network security, and procurement policies in order to keep up with the rapidly changing AI environment. Additionally, proactive employee engagement continues to be a crucial part of addressing AI-related risks. Training programs can provide workers with the information and skills needed to understand potential risks, identify sensitive information, and follow best practices for safe, responsible use of AI. 

Also essential is a robust data classification strategy that enables employees to recognise and properly handle confidential or sensitive information before interacting with AI systems.

Organisations may also benefit from formal authorisation processes that limit access to AI tools to qualified personnel, together with documentation protocols that record inputs and outputs so that compliance and intellectual property issues can be tracked. Periodic reviews of AI-generated content for bias, accuracy, and appropriateness further safeguard the brand's reputation.
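To make the idea of recording inputs and outputs concrete, here is a minimal sketch in Python of an audit-logging wrapper around a sanctioned AI tool. The `call_model` function, the log file name, and the record fields are illustrative placeholders rather than any specific vendor's interface.

```python
# Minimal sketch of an AI usage audit log, assuming a JSONL file as the record store.
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")

def call_model(prompt: str) -> str:
    """Placeholder for the organisation's approved AI service call."""
    raise NotImplementedError("Wire this to the sanctioned AI tool.")

def audited_completion(user: str, tool: str, prompt: str) -> str:
    """Run a prompt through the approved tool and record input/output for compliance review."""
    response = call_model(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt": prompt,
        "response": response,
    }
    # Append one JSON record per interaction so reviewers can trace compliance and IP issues.
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return response
```

A log of this kind is what makes the periodic reviews mentioned above practical: reviewers can sample recorded prompts and outputs rather than reconstructing usage after the fact.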

Continuous monitoring of AI tools, including reviews of their evolving terms of service, helps ensure ongoing compliance with company standards. Finally, a clearly defined incident response plan, with designated points of contact for potential data exposure or misuse, enables organisations to react quickly to any AI-related incident.

Combined, these measures represent a significant step forward in the adoption of structured, responsible artificial intelligence that balances innovation and accountability. Although internal governance is the cornerstone of responsible AI usage, external partnerships and vendor relationships are equally important when it comes to protecting organisational data. 

According to experts, organisation leaders need to be vigilant not just about internal compliance, but also about third-party contracts and data processing agreements. Data privacy, retention, and usage provisions should be explicitly included in any agreement with an external AI provider; these provisions protect confidential information from being exploited or stored in ways that fall outside its intended use.

Business leaders, particularly CEOs and senior executives, must examine vendor agreements carefully to ensure they align with international data protection frameworks such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). By incorporating these safeguards into contract terms, organisations ensure that sensitive data is handled with the same rigour as their internal privacy standards, improving their overall security posture.

As artificial intelligence continues to redefine the boundaries of workplace efficiency, its responsible integration has become a key factor in building organisational trust and resilience. Making AI work effectively in business requires not only innovation but also mature governance frameworks to accompany its use.

Companies that take a proactive approach, enforcing clear internal policies, establishing transparency with vendors, and cultivating a culture of accountability, stand to gain more than security alone: they also gain credibility with clients, employees, and regulators.


In addition to ensuring compliance, responsible AI adoption can improve operational efficiency, increase employee confidence, and strengthen brand loyalty in an increasingly data-conscious market. According to experts, artificial intelligence should not be viewed merely as a risk to be controlled, but as a powerful tool to be harnessed under strong ethical and strategic guidelines. 

In today's business climate, every prompt and every dataset can create a vulnerability. The organisations that thrive will be those that pair technological ambition with disciplined governance, transforming AI from a source of uncertainty into a tool for innovation that is both sustainable and secure.

The Rise of the “Shadow AI Economy”: Employees Outpace Companies in AI Adoption

 




Artificial intelligence has become one of the most talked-about technologies in recent years, with billions of dollars poured into projects aimed at transforming workplaces. Yet, a new study by MIT suggests that while official AI programs inside companies are struggling, employees are quietly driving a separate wave of adoption on their own. Researchers are calling this the rise of the “shadow AI economy.”

The report, titled State of AI in Business 2025 and conducted by MIT’s Project NANDA, examined more than 300 public AI initiatives, interviewed leaders from 52 organizations, and surveyed 153 senior executives. Its findings reveal a clear divide. Only 40% of companies have official subscriptions to large language model (LLM) tools such as ChatGPT or Copilot, but employees in more than 90% of companies are using personal accounts to complete their daily work.

This hidden usage is not minor. Many workers reported turning to AI multiple times a day for tasks like drafting emails, summarizing information, or basic data analysis. These personal tools are often faster, easier to use, and more adaptable than the expensive systems companies are trying to build in-house.

MIT researchers describe this contrast as the “GenAI divide.” Despite $30–40 billion in global investments, only 5% of businesses have seen real financial impact from their official AI projects. In most cases, these tools remain stuck in test phases, weighed down by technical issues, integration challenges, or limited flexibility. Employees, however, are already benefiting from consumer AI products that require no approvals or training to start using.


The study highlights several reasons behind this divide:

1. Accessibility: Consumer tools are easy to set up, requiring little technical knowledge.

2. Flexibility: Workers can adapt them to their own workflows without waiting for management decisions.

3. Immediate value: Users see results instantly, unlike with many corporate systems that fail to show clear benefits.


Because of this, employees are increasingly choosing AI for routine tasks. The survey found that around 70% prefer AI for simple work like drafting emails, while 65% use it for basic analysis. At the same time, most still believe humans should handle sensitive or mission-critical responsibilities.

The findings also challenge some popular myths about AI. According to MIT, widespread fears of job losses have not materialized, and generative AI has yet to revolutionize business operations in the way many predicted. Instead, the problem lies in rigid tools that fail to learn, adapt, or integrate smoothly into existing systems. Internal projects built by companies themselves also tend to fail at twice the rate of externally sourced solutions.

For now, the “shadow AI economy” shows that the real adoption of AI is happening at the individual level, not through large-scale corporate programs. The report concludes that companies that recognize and build on this grassroots use of AI may be better placed to succeed in the future.



Emerging Cybersecurity Threats in 2025: Shadow AI, Deepfakes, and Open-Source Risks

 

Cybersecurity continues to be a growing concern as organizations worldwide face an increasing number of sophisticated attacks. In early 2024, businesses encountered an alarming 1,308 cyberattacks per week—a sharp 28% rise from the previous year. This surge highlights the rapid evolution of cyber threats and the pressing need for stronger security strategies. As technology advances, cybercriminals are leveraging artificial intelligence, exploiting open-source vulnerabilities, and using advanced deception techniques to bypass security measures. 

One of the biggest cybersecurity risks in 2025 is ransomware, which remains a persistent and highly disruptive threat. Attackers use this method to encrypt critical data, demanding payment for its release. Many cybercriminals now employ double extortion tactics, where they not only lock an organization’s files but also threaten to leak sensitive information if their demands are not met. These attacks can cripple businesses, leading to financial losses and reputational damage. The growing sophistication of ransomware groups makes it imperative for companies to enhance their defensive measures, implement regular backups, and invest in proactive threat detection systems. 

Another significant concern is the rise of Initial Access Brokers (IABs), cybercriminals who specialize in selling stolen credentials to hackers. By gaining unauthorized access to corporate systems, these brokers enable large-scale cyberattacks, making it easier for threat actors to infiltrate networks. This trend has made stolen login credentials a valuable commodity on the dark web, increasing the risk of data breaches and financial fraud. Organizations must prioritize multi-factor authentication and continuous monitoring to mitigate these risks. 

A new and rapidly growing cybersecurity challenge is the use of unauthorized artificial intelligence tools, often referred to as Shadow AI. Employees frequently adopt AI-driven applications without proper security oversight, leading to potential data leaks and vulnerabilities. In some cases, AI-powered bots have unintentionally exposed sensitive financial information due to default settings that lack robust security measures. 

As AI becomes more integrated into workplaces, businesses must establish clear policies to regulate its use and ensure proper safeguards are in place. Deepfake technology has also emerged as a major cybersecurity threat. Cybercriminals are using AI-generated deepfake videos and audio recordings to impersonate high-ranking officials and deceive employees into transferring funds or sharing confidential data. 

A recent incident involved a Hong Kong-based company losing $25 million after an employee fell victim to a deepfake video call that convincingly mimicked their CFO. This alarming development underscores the need for advanced fraud detection systems and enhanced verification protocols to prevent such scams. Open-source software vulnerabilities are another critical concern. Many businesses and government institutions rely on open-source platforms, but these systems are increasingly being targeted by attackers. Cybercriminals have infiltrated open-source projects, gaining the trust of developers before injecting malicious code. 

A notable case involved xz Utils, a widely used Linux compression tool, where a contributor inserted a backdoor after gradually establishing credibility within the project. If not for a vigilant security expert, the backdoor could have remained undetected, potentially compromising millions of systems. This incident highlights the importance of stricter security audits and increased funding for open-source security initiatives.

To address these emerging threats, organizations and governments must take proactive measures. Strengthening regulatory frameworks, investing in AI-driven threat detection, and enhancing collaboration between cybersecurity experts and policymakers will be crucial in mitigating risks. The cybersecurity landscape is evolving at an unprecedented pace, and without a proactive approach, businesses and individuals alike will remain vulnerable to increasingly sophisticated attacks.

Ensuring Governance and Control Over Shadow AI

 


AI has become almost ubiquitous in software development: a GitHub survey shows that 92 per cent of developers in the United States use artificial intelligence as part of their everyday coding. This has led many individuals to engage in what is termed “shadow AI,” using the technology without the knowledge or approval of their organization's IT department or Chief Information Security Officer (CISO).

It has also increased their productivity, so it should come as no surprise that motivated employees seek out technology that maximizes their value and minimizes the repetitive tasks that get in the way of more creative, challenging work. Companies, too, are naturally curious about new technologies that make work easier and more efficient, such as artificial intelligence (AI) and automation tools.

Despite this ingenuity, some companies remain reluctant to adopt new technology at first, or even second, glance. Resisting change, however, does not mean employees will stop quietly using AI on their own, especially since tools such as Microsoft Copilot, ChatGPT, and Claude make these technologies accessible even to non-technical staff.

Shadow AI is a growing phenomenon across many sectors: the use of artificial intelligence tools or systems without the official approval or oversight of the organization's IT or security department. These tools are often adopted to solve immediate problems or boost efficiency.

Left ungoverned, these tools can lead to data breaches, legal violations, or regulatory non-compliance, and can introduce vulnerabilities into an organization's infrastructure that open the door to unauthorized access to sensitive data. As artificial intelligence becomes increasingly ubiquitous, organizations should take proactive measures to protect their operations.

Shadow generative AI poses specific and substantial risks to an organization's integrity and security. Unregulated use of artificial intelligence can lead to decisions and actions that undermine regulatory and corporate compliance, particularly in industries such as finance and healthcare, where strict data handling protocols are essential.

Generative AI models can perpetuate biases inherent in their training data, generate outputs that breach copyright, or produce code that violates licensing agreements. Untested code may make software unstable or error-prone, increasing maintenance costs and causing operational disruption; it may also contain undetected malicious elements, raising the risk of data breaches and system downtime.

Mismanaged AI interactions in customer-facing applications can result in regulatory non-compliance, reputational damage, and ethical concerns, particularly when outputs adversely affect the customer experience. Leaders must therefore implement robust governance measures that protect their organizations from the unintended consequences of generative AI.

In recent years, AI technology, including generative and conversational AI, has grown enormously in popularity, driving widespread grassroots adoption. The accessibility of consumer-facing AI tools, which require little to no technical expertise, combined with a lack of formal AI governance, has enabled employees to adopt unvetted AI solutions. The 2025 CX Trends Report highlights a 250% year-over-year increase in shadow AI usage in some industries, exposing organizations to heightened risks around data security, compliance, and business ethics.

Employees turn to shadow AI for personal or team productivity for many reasons: dissatisfaction with existing tools, ease of access, and the desire to accomplish specific tasks more effectively. This gap will only grow as CX Traditionalists delay developing AI solutions because of budget limitations, a lack of knowledge, or an inability to win internal support from their teams.

As a result, CX Trendsetters are addressing this challenge by adopting approved artificial intelligence solutions, such as AI agents and customer experience automation, and by ensuring appropriate oversight and governance are in place.

Identifying AI implementations: CISOs and security teams must determine who is introducing AI throughout the software development lifecycle (SDLC), assess their security expertise, and evaluate the steps taken to minimize the risks of AI deployment.

Training programs should raise developers' awareness of the potential and the risks of AI-assisted code and build the skills needed to address its vulnerabilities. The security team also needs to analyze each phase of the SDLC to identify where unauthorized uses of AI could creep in.

Fostering a security-first culture: By promoting a proactive protection mindset and emphasizing security from the outset, organizations reduce the need for reactive fixes, saving time and money. A robust security-first culture, backed by regular training, encourages developers to prioritize safety and transparency over convenience.

CISOs are responsible for identifying and managing the risks of new tools and for respecting decisions based on thorough evaluations. This approach builds trust, ensures tools are properly vetted before deployment, and safeguards the company's reputation.

Incentivizing success: Developers who help bring AI usage into compliance are enormously valuable to their organizations.

These individuals should be promoted, challenged, and given measurable benchmarks to demonstrate their security skills and practices. By rewarding these efforts, organizations create a culture in which secure AI deployment is treated as a critical, marketable skill. Implemented well, these strategies let the CISO and development teams collaborate to manage AI risks properly, producing software faster and more safely while avoiding the pitfalls of shadow AI.

Organisations can also set up sensitivity alerts to make sure confidential data is not accidentally leaked, for example by using AI-based tools to detect when a model is fed or processes personal data, financial information, or other proprietary material.

Real-time alerts make it possible to identify and mitigate such exposures as they happen, letting management contain them before they escalate into a full-blown security incident and adding a further layer of protection.
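As an illustration of this kind of pre-submission check, the sketch below (Python, standard library only) scans a prompt for patterns that often indicate sensitive data and raises an alert before the prompt leaves the company. The patterns and the logging-based alert are simplified placeholders; a real deployment would use a proper DLP engine and the organisation's own classification rules.

```python
# Minimal sketch of a sensitive-data check applied to prompts before they reach an AI tool.
import logging
import re

logging.basicConfig(level=logging.WARNING)

# Illustrative patterns only; real rules would come from the organisation's data classification.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in the prompt, emitting an alert for each."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    for name in hits:
        logging.warning("Possible %s detected in AI prompt; flagging for review.", name)
    return hits

if __name__ == "__main__":
    prompt = "Summarise this contract for client jane.doe@example.com"
    print("Prompt blocked pending review." if scan_prompt(prompt) else "Prompt allowed.")
```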

A well-executed API strategy gives employees the freedom to use GenAI tools productively while safeguarding the company's data, keeping AI usage aligned with internal policies, and protecting the company from fraud. Increasing innovation and productivity means striking a balance between maintaining control and leaving room for experimentation, without compromising security.
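One way such an API strategy can look in practice is a small internal gateway that every GenAI request passes through, applying a policy check before forwarding to an approved upstream service. The sketch below assumes Flask and requests are installed; the upstream URL, route, and policy check are hypothetical placeholders, not any specific vendor's interface.

```python
# Minimal sketch of a company-controlled gateway in front of an approved GenAI API.
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
APPROVED_UPSTREAM = "https://genai.internal.example.com/v1/chat"  # placeholder endpoint

def violates_policy(payload: dict) -> bool:
    """Hook for the organisation's own checks (data classification, user entitlement, etc.)."""
    prompt = str(payload.get("prompt", ""))
    return "CONFIDENTIAL" in prompt.upper()

@app.route("/ai/chat", methods=["POST"])
def proxy_chat():
    payload = request.get_json(force=True) or {}
    if violates_policy(payload):
        return jsonify({"error": "Request blocked by AI usage policy"}), 403
    # Forward only policy-compliant requests to the sanctioned service.
    upstream = requests.post(APPROVED_UPSTREAM, json=payload, timeout=30)
    return jsonify(upstream.json()), upstream.status_code

if __name__ == "__main__":
    app.run(port=8080)
```

Routing traffic through a single gateway also gives the organisation one place to attach the audit logging and sensitivity alerts described earlier.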

Ransomware Gangs Actively Recruiting Pen Testers: Insights from Cato Networks' Q3 2024 Report

 

Cybercriminals are increasingly targeting penetration testers to join ransomware affiliate programs such as Apos, Lynx, and Rabbit Hole, according to Cato Networks' Q3 2024 SASE Threat Report, published by its Cyber Threats Research Lab (CTRL).

The report highlights numerous Russian-language job advertisements uncovered through surveillance of discussions on the Russian Anonymous Marketplace (RAMP). Speaking at an event in Stuttgart, Germany, on November 12, Etay Maor, Chief Security Strategist at Cato Networks, explained: "Penetration testing is a term from the security side of things when we try to reach our own systems to see if there are any holes. Now, ransomware gangs are hiring people with the same level of expertise - not to secure systems, but to target systems."

He further noted, "There's a whole economy in the criminal underground just behind this area of ransomware."

The report details how ransomware operators aim to ensure the effectiveness of their attacks by recruiting skilled developers and testers. Maor emphasized the evolution of ransomware-as-a-service (RaaS), stating, "[Ransomware-as-a-service] is constantly evolving. I think they're going into much more details than before, especially in some of their recruitment."

Cato Networks' team discovered instances of ransomware tools being sold, such as locker source code priced at $45,000. Maor remarked: "The bar keeps going down in terms of how much it takes to be a criminal. In the past, cybercriminals may have needed to know how to program. Then in the early 2000s, you could buy viruses. Now you don't need to even buy them because [other cybercriminals] will do this for you."

AI's role in facilitating cybercrime was also noted as a factor lowering barriers to entry. The report flagged examples like a user under the name ‘eloncrypto’ offering a MAKOP ransomware builder, an offshoot of PHOBOS ransomware.

The report warns of the growing threat posed by Shadow AI—where organizations or employees use AI tools without proper governance. Of the AI applications monitored, Bodygram, Craiyon, Otter.ai, Writesonic, and Character.AI were among those flagged for security risks, primarily data privacy concerns.

Cato CTRL also identified critical gaps in Transport Layer Security (TLS) inspection. Only 45% of surveyed organizations utilized TLS inspection, and just 3% inspected all relevant sessions. This lapse allows attackers to leverage encrypted TLS traffic to evade detection.

In Q3 2024, Cato CTRL noted that 60% of CVE exploit attempts were blocked within TLS traffic. Prominent vulnerabilities targeted included Log4j, SolarWinds, and ConnectWise.

The report is based on the analysis of 1.46 trillion network flows across over 2,500 global customers between July and September 2024. It underscores the evolving tactics of ransomware gangs and the growing challenges organizations face in safeguarding their systems.

Shadow AI: The Novel, Unseen Threat to Your Company's Data

 

Earlier this year, ChatGPT emerged as the face of generative AI. ChatGPT was designed to help with almost everything, from creating business plans to breaking down complex topics into simple terms. Since then, businesses of all sizes have been eager to explore and reap the benefits of generative AI. 

However, as this new chapter of AI innovation moves at breakneck speed, CEOs and leaders risk overlooking a type of technology that has been infiltrating through the back door: shadow AI. 

Overlooking shadow AI is a risky option 

To put it simply, "shadow AI" refers to employees who, without management awareness, add AI tools to their work systems to make life easier. Although most of the time this pursuit of efficiency is well-intentioned, it is exposing businesses to new cybersecurity and data privacy risks.

When it comes to navigating tedious tasks or laborious processes, employees who want to increase productivity and process efficiency are usually the ones who embrace shadow AI. This could imply that AI is being asked to summarise the main ideas from meeting minutes or to comb through hundreds of PowerPoint decks in search of critical data. 

Employees typically don't intentionally expose their company to risk. On the contrary. All they're doing is simplifying things so they can cross more things off their to-do list. However, given that over a million adults in the United Kingdom have already utilised generative AI at work, there is a chance that an increasing number of employees will use models that their employers have not approved for safe use, endangering data security in the process. 

Major risks 

Shadow AI carries two main risks. First, employees may feed sensitive company information into such tools, or leave it exposed to scraping while the technology runs in the background. For example, when an employee uses ChatGPT or Google Bard to increase productivity or clarify information, they may be entering sensitive or confidential company information. 

Sharing data isn't always an issue—companies frequently rely on third-party tools and service providers for information—but problems can arise when the tool in question and its data-handling policies haven't been assessed and approved by the business. 

The second risk related to shadow AI is that, because businesses generally aren't aware that these tools are being used, they can't assess the risks or take appropriate action to minimise them. (This may also apply to employees who receive false information and subsequently use it in their work.) 

This is something that occurs behind closed doors and beyond the knowledge of business leaders. In 2022, 41% of employees created, modified, or accessed technology outside of IT's purview, according to research from Gartner. By 2027, the figure is expected to increase to 75%. 

And therein lies the crux of the issue. How can organisations monitor and assess the risks of something they don't understand? 

Some companies, such as Samsung, have gone so far as to ban ChatGPT from their offices after employees uploaded proprietary source code and leaked confidential company information via the public platform. Apple and JP Morgan have also restricted employee use of ChatGPT. Others are burying their heads in the sand or failing to notice the problem at all. 

What should business leaders do to mitigate the risks of shadow AI while also ensuring that they and their teams can benefit from the efficiencies and insights that artificial intelligence can offer? 

First, leaders should educate teams on what constitutes safe AI practice, as well as the risks associated with shadow AI, and provide clear guidance on when ChatGPT can and cannot be used safely at work. 

For tasks where public tools cannot be used safely, companies should consider offering private, in-house generative AI tools. Models such as Llama 2 and Falcon can be downloaded and run securely to power generative AI tools, while Azure OpenAI provides a middle-ground option in which data remains within the company's Microsoft "tenancy." 

These options avoid the risk to data and IP that comes with public large language models like ChatGPT, whose handling of submitted data is not fully transparent, while still letting employees get the results they want.
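As a rough sketch of the Azure OpenAI route, the snippet below uses the openai Python package (v1+) to call a model deployment hosted inside the company's own Azure tenancy, so prompts and responses stay within the organisation's agreement with Microsoft rather than a public consumer service. The endpoint, deployment name, and API version shown are placeholders to adapt.

```python
import os
from openai import AzureOpenAI  # pip install openai>=1.0

# Endpoint and deployment name are placeholders for the company's own Azure resources.
client = AzureOpenAI(
    azure_endpoint="https://your-company.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="company-gpt-deployment",  # the deployment configured in the company's tenancy
    messages=[{"role": "user", "content": "Summarise the key actions from these meeting minutes: ..."}],
)
print(response.choices[0].message.content)
```

A locally hosted open model (for example Llama 2 served behind an internal endpoint) could be called in much the same way, keeping the data entirely on company infrastructure.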