Here's Why Multiple Top Firms are Banning ChatGPT

Many organisations are still unsure about how to handle generative AI in the workplace.


Several major companies are preventing their staff from using ChatGPT despite its exceptional capabilities. 

Samsung banned ChatGPT and other generative AI tools in May 2023. The Commonwealth Bank of Australia followed suit in June 2023, along with companies such as Apple, JPMorgan Chase & Co., and Amazon. Some government agencies, law firms, and hospitals have also prohibited employees from using ChatGPT.

Why, then, are companies banning ChatGPT at an increasing rate? Here are three main reasons.

Data breach

ChatGPT needs vast amounts of data to train and function properly. The chatbot was trained on enormous volumes of data scraped from the internet, and it continues to be trained on user input. According to OpenAI's help page, any data you provide to the chatbot, including private customer information, business trade secrets, and other sensitive material, may be reviewed by its trainers, who may use it to improve OpenAI's systems.

Many enterprises are subject to strict data-protection regulations. As a result, they are wary of sharing personal data with third parties, since doing so raises the risk of data leaks.

Additionally, OpenAI offers no guarantee of complete data privacy and confidentiality. In March 2023, OpenAI acknowledged a bug that allowed some users to see the chat titles of other active users. OpenAI fixed the flaw and launched a bug bounty programme, but the company still does not guarantee the security and privacy of user data.

To prevent data leaks, which could damage their reputation, cause financial losses, and put their customers and employees at risk, many companies are choosing to forbid employees from using ChatGPT.

Cybersecurity threats

Although it is unclear whether ChatGPT itself is vulnerable to cyberattack, deploying it within an organisation may introduce weaknesses that attackers can exploit. If a business integrates ChatGPT and the chatbot is inadequately protected, attackers may be able to exploit security flaws and inject malware.

Furthermore, ChatGPT's ability to produce human-like text makes it a gold mine for phishing attackers, who can hijack an account or impersonate a legitimate organisation to trick staff into divulging sensitive data.

Employees' careless use

Some employees at multiple firms rely entirely on ChatGPT's answers to produce content and complete their tasks. This breeds complacency in the workplace and stifles creativity and innovation. Becoming dependent on AI can impair your ability to think critically. It can also harm a company's credibility, because ChatGPT frequently produces inaccurate and unreliable information.

Although ChatGPT is a useful tool, using it to answer complicated questions that require domain-specific expertise can jeopardise a company's operations and efficiency. Some employees forget to fact-check and verify the chatbot's responses, treating them as a one-size-fits-all solution.