Microsoft's Response to Privacy Concerns over ChatGPT in Business

'Private' ChatGPT offers healthcare, finance, and banking organizations a way to address data privacy concerns.

 


In response to concerns over individuals' data being used to train artificial intelligence models, Microsoft is considering launching a privacy-centric version of ChatGPT. The offering could appeal to industries such as healthcare, finance, and banking that have so far avoided ChatGPT out of concern that their staff might share sensitive information with the system. 

ChatGPT could greatly benefit many businesses, banks and other large corporations in particular, yet these companies have resisted adopting the technology over privacy concerns: they fear that their employees might unintentionally disclose confidential information while using it. 

By adding OpenAI's GPT-4 and ChatGPT to Azure, Microsoft wants to make it easier for enterprises to combine their proprietary data with user queries and analyze the results on the platform. 

A user fires off a query to Azure; Microsoft's cloud determines what data is required to answer it and retrieves that data. The question and the retrieved information are then combined into a prompt, which is passed to an OpenAI model of the customer's choice hosted in Azure. The model generates an answer, which is sent back to the user. 
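The flow described above is essentially retrieval-augmented generation. Below is a minimal sketch of that pattern in Python, assuming an Azure Cognitive Search index with a text field named "content" and a chat model deployed in Azure OpenAI; the endpoints, keys, index, and deployment names are placeholders, not details of Microsoft's actual service.

```python
# Hedged sketch of the retrieve-then-generate flow described above.
# Assumes the azure-search-documents and openai (<1.0) packages, a search index
# with a "content" field, and a chat model deployed in Azure OpenAI.
# All endpoints, keys, and names below are placeholders.
import openai
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

# Azure OpenAI configuration (openai<1.0 style).
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR-AOAI-KEY"

# Client for the customer's own search index (the proprietary data).
search_client = SearchClient(
    endpoint="https://YOUR-SEARCH-SERVICE.search.windows.net",
    index_name="company-docs",
    credential=AzureKeyCredential("YOUR-SEARCH-KEY"),
)

def answer(question: str) -> str:
    # 1. Determine what data is needed: here, simply search the index for the question.
    hits = search_client.search(search_text=question, top=3)
    context = "\n\n".join(doc["content"] for doc in hits)

    # 2. Combine the question with the retrieved passages into one prompt.
    messages = [
        {"role": "system",
         "content": "Answer using only the sources below.\n\nSources:\n" + context},
        {"role": "user", "content": question},
    ]

    # 3. Send the prompt to the chosen model hosted in Azure and return its answer.
    response = openai.ChatCompletion.create(engine="gpt-4", messages=messages)
    return response["choices"][0]["message"]["content"]

print(answer("What is our travel reimbursement policy?"))
```

In a real deployment the retrieval step would more likely use semantic or vector search rather than plain keyword search, but the overall shape of the request is the same.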

Some businesses have already become interested in the AI-powered chatbot as a way to automate their business processes, but many others, such as banks, have opted against adopting it for fear that employees using it will inadvertently leak proprietary information. 

According to reports, Microsoft, which holds the rights to resell the startup's technology, has a plan in place to get holdouts on board. 

The AI tool would run as a separate version on dedicated cloud servers, kept apart from other customers' data and from the main ChatGPT system to ensure privacy. As a result, customers could pay up to ten times more for a private ChatGPT setup than they are charged currently. 

OpenAI also plans to launch a subscription service for businesses that focuses on privacy: by default, users' data would not be fed into its training models. 

Additionally, OpenAI has recently sold a private ChatGPT service to Morgan Stanley. The bank's wealth management division can use the platform to ask questions of, and analyze, the thousands of market research documents it has generated over the years. Microsoft's multi-year, multibillion-dollar investment in OpenAI means it can resell OpenAI's products without violating any terms. 

ChatGPT has faced numerous privacy and regulatory concerns since its release because of the voluminous data it gathered from many sources during its initial training and continues to collect from its users. Microsoft appears to be addressing those concerns head-on: Andy Beatman, senior product marketing manager for Azure AI, said this enhanced data-handling capability is among customers' most requested features. 

As reported by The Register, the upcoming system, due for public preview in the spring, runs on Azure and retrieves the relevant internal data needed to best satisfy a worker's request. 

Microsoft also explained that Azure OpenAI delivers insights based on the content and information the user provides. Combined with Azure Cognitive Search, relevant data can be retrieved for the user based on their input and conversation history, as sketched below. 
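To illustrate the conversation-history aspect, here is a small, hypothetical helper (building on the sketch above) that folds prior turns into the messages sent to the model; the roles and structure follow the standard chat-completions format, not any confirmed detail of Microsoft's feature.

```python
# Hypothetical helper: combine retrieved passages with prior conversation turns.
# "history" is a list of {"role": "user"/"assistant", "content": ...} dicts from
# earlier exchanges; "passages" are texts returned by Azure Cognitive Search.
def build_messages(history: list[dict], question: str, passages: list[str]) -> list[dict]:
    sources = "\n\n".join(passages)
    system = "Answer using only the sources below.\n\nSources:\n" + sources
    # Prior turns give the model context; the new question goes last.
    return [{"role": "system", "content": system}] + history + [
        {"role": "user", "content": question}
    ]
```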

The drawback is cost: deploying this type of ChatGPT will be more expensive than using the public version. Reports suggest that dedicated instances of ChatGPT could carry a price tag up to ten times what clients currently pay for the standard version. 

OpenAI is developing a similar offering to Microsoft's 'private' ChatGPT and plans to release it in the "coming months." According to the company, by default the subscription-based service will not use input provided by employees and clients to train its language models. 

After ChatGPT was banned in Italy over concerns that chat histories were being used to train the AI model, OpenAI added an option to turn off chat history. A company spokesperson said that conversations started while chat history is disabled will not be used to train or improve the models and will not appear in the conversation history sidebar. 

There is no doubt that Microsoft's privacy-centric AI service could be a game changer for businesses that receive and manage sensitive data. After Samsung found that some of its employees had uploaded company source code to ChatGPT, it banned generative AI chatbots at work and on work devices. Microsoft representatives are already contacting organizations that could be interested in the upcoming product, since many existing customers have Azure contracts that could help them manage data securely in the coming years.