
AI Adoption Surges Faster Than Cybersecurity Awareness, Study Reveals

A recent study has revealed that the rapid adoption of AI tools like ChatGPT and Gemini is far outpacing efforts to educate users about the cybersecurity risks associated with them. The research, conducted by the National Cybersecurity Alliance (NCA) — a nonprofit organization promoting data privacy and online safety — in collaboration with cybersecurity firm CybNet, surveyed over 6,500 participants across seven countries, including the United States.

The findings show that 65% of respondents now use AI tools daily, up 21% from last year. However, 58% of users said they had received no formal training from their employers on the data security and privacy risks of these technologies.

"People are embracing AI in their personal and professional lives faster than they are being educated on its risks," said Lisa Plaggemier, Executive Director at the NCA. Alarmingly, 43% of respondents admitted to sharing sensitive information — including financial and client data — in conversations with AI tools. This underscores the growing gap between AI adoption and cybersecurity preparedness.

The NCA-CybNet report adds weight to a growing concern among experts that the surge in AI use is not being matched by adequate awareness or safety measures. Earlier this year, a SailPoint survey found that 96% of IT professionals viewed AI agents as potential security risks, yet 84% said their companies had already begun deploying them internally.

AI agents, designed to automate complex tasks and boost efficiency, often require access to internal systems and sensitive documents — a setup that could lead to data leaks or breaches. Some incidents, such as AI tools accidentally deleting entire company databases, highlight how vulnerabilities can quickly escalate into serious problems.

Even conventional chatbots carry risks. Besides producing inaccurate information, many also retain user interactions as training data, making privacy a persistent concern. The 2023 incident in which Samsung engineers inadvertently leaked confidential data to ChatGPT, which led the company to ban employee use of the chatbot, remains a cautionary example.

As generative AI becomes embedded in everyday tools — Microsoft recently added AI features to Word, Excel, and PowerPoint — users may be adopting it without realizing the full scope of its implications. Without robust cybersecurity education, individuals and businesses could expose themselves to significant risks in pursuit of productivity and convenience.