Cybersecurity researchers have uncovered a troubling risk tied to how popular AI chatbots answer basic questions. When asked where to log in to well-known websites, some of these tools may unintentionally direct users to the wrong places, putting their private information at risk.
Phishing is one of the oldest and most dangerous tricks in the cybercrime world. It usually involves fake websites that look almost identical to real ones. People often get an email or message that appears to be from a trusted company, like a bank or online store. These messages contain links that lead to scam pages. If you enter your username and password on one of these fake sites, the scammer gets full access to your account.
Now, a team from the cybersecurity company Netcraft has found that even large language models, or LLMs, like the ones behind some popular AI chatbots, may be helping scammers without meaning to. In their study, the researchers tested how accurately an AI chatbot could provide login links for 50 well-known companies across industries such as finance, retail, technology, and utilities.
The results were surprising. The chatbot gave the correct web address only 66% of the time. In about 29% of cases, the links led to inactive or suspended pages. In 5% of cases, they sent users to a completely different website that had nothing to do with the original question.
So how does this help scammers? Cybercriminals can register the unclaimed or inactive domain names that the AI incorrectly suggests and turn them into realistic phishing pages. If people click on those links, thinking they are going to the right site, they may unknowingly hand over sensitive information like their bank login or credit card details.
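To make that risk concrete, the short Python sketch below shows one way a cautious reader or security team might sanity-check a chatbot-suggested login link: a domain that does not resolve in DNS at all is exactly the kind of unclaimed name a scammer could later register. The domain names used here are hypothetical illustrations, not examples from the Netcraft study, and a resolving domain is still no guarantee of legitimacy.

```python
import socket

def domain_resolves(hostname: str) -> bool:
    """Return True if the hostname currently resolves in DNS.

    A resolving name is not proof of legitimacy, but a link whose domain
    does not resolve at all is the kind of unclaimed name a scammer could
    later register and turn into a phishing page.
    """
    try:
        socket.getaddrinfo(hostname, 443)
        return True
    except socket.gaierror:
        return False

# Hypothetical examples; actual results depend on the current DNS state.
for host in ["example.com", "login-examplebank-secure.invalid"]:
    status = "resolves" if domain_resolves(host) else "does not resolve"
    print(f"{host}: {status}")
```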
In one example observed by Netcraft, an AI-powered search tool directed users who asked for a U.S. bank's login page to a convincing fake of the bank's website. The genuine link appeared further down the results, increasing the risk that someone would click the wrong one.
Experts also noted that smaller companies, such as regional banks and mid-sized fintech platforms, were more likely to be affected than global giants like Apple or Google. These smaller businesses may not have the same resources to secure their digital presence or respond quickly when problems arise.
The researchers explained that this problem doesn't mean the AI tools are malicious. However, these models generate answers based on patterns rather than verified sources, and that can lead to outdated or incorrect responses.
The report serves as a strong reminder: AI is powerful, but it is not perfect. Until improvements are made, users should avoid relying on AI-generated links for sensitive tasks. When in doubt, type the website address directly into your browser or use a trusted bookmark.
AI chatbots like ChatGPT have captured widespread attention for their remarkable conversational abilities, allowing users to engage on diverse topics with ease. However, while these tools offer convenience and creativity, they also pose significant privacy risks. The very technology that powers lifelike interactions can also store, analyze, and potentially resurface user data, raising critical concerns about data security and ethical use.
Chatbots like ChatGPT rely on Large Language Models (LLMs) trained on vast datasets to generate human-like responses. This training often includes learning from user interactions. Much like how John Connor taught the Terminator quirky catchphrases in Terminator 2: Judgment Day, these systems refine their capabilities through real-world inputs. However, this improvement process comes at a cost: personal data shared during conversations may be stored and analyzed, often without users fully understanding the implications.
For instance, OpenAI’s terms and conditions explicitly state that data shared with ChatGPT may be used to improve its models. Unless users actively opt out through privacy settings, all shared information, from casual remarks to sensitive details like financial data, can be logged and analyzed. Although OpenAI claims to anonymize and aggregate user data for further study, the risk of unintended exposure remains.
Despite assurances of data security, breaches have occurred. In March 2023, a bug in the Redis client library used by ChatGPT exposed some users' chat titles and payment details, and later that year researchers reported that credentials for roughly 100,000 ChatGPT accounts had surfaced on dark-web marketplaces after being harvested by info-stealing malware. These incidents underscored the risks associated with storing chat histories, even when companies emphasize their commitment to privacy. Similarly, companies like Samsung faced internal crises when employees inadvertently uploaded confidential information to chatbots, prompting some organizations to ban generative AI tools altogether.
Governments and industries are starting to address these risks. For instance, in October 2023, President Joe Biden signed an executive order focusing on privacy and data protection in AI systems. While this marks a step in the right direction, legal frameworks remain unclear, particularly around the use of user data for training AI models without explicit consent. Current practices are often classified as “fair use,” leaving consumers exposed to potential misuse.
Until stricter regulations are implemented, users must take proactive steps to safeguard their privacy while interacting with AI chatbots, such as opting out of data training through privacy settings, avoiding sharing sensitive or confidential details in chats, and clearing stored chat history where the option exists.
Training courses in GenAI cover a wide range of topics. Introductory courses, which can be completed in just a few hours, address the fundamentals, ethics, and social implications of GenAI. For those seeking deeper knowledge, advanced modules are available that focus on development using GenAI and large language models (LLMs), requiring over 100 hours to complete.
These courses are designed to cater to various job roles and functions within organisations. For example, KPMG India aims to have its entire workforce trained in GenAI by the end of the fiscal year, with 50% already trained. Its programs are tailored to different levels of employees, from teaching leaders about return on investment and business envisioning to training coders in prompt engineering and LLM operations.
EY India has implemented a structured approach, offering distinct sets of courses for non-technologists, software professionals, project managers, and executives. Presently, 80% of their employees are trained in GenAI. Similarly, PwC India focuses on providing industry-specific masterclasses for leaders to enhance their client interactions, alongside offering brief nano courses for those interested in the basics of GenAI.
Wipro organises its courses into three levels based on employee seniority, with plans to develop industry-specific courses for domain experts. Cognizant has created shorter courses for leaders, sales, and HR teams to ensure a broad understanding of GenAI. Infosys also has a program for its senior leaders, with 400 of them currently enrolled.
Ray Wang, principal analyst and founder at Constellation Research, highlighted the extensive range of programs developed by tech firms, including training on Python and chatbot interactions. Cognizant has partnerships with Udemy, Microsoft, Google Cloud, and AWS, while TCS collaborates with NVIDIA, IBM, and GitHub.
Cognizant boasts 160,000 GenAI-trained employees, and TCS offers a free GenAI course on Oracle Cloud Infrastructure until the end of July to encourage participation. According to TCS's annual report, over half of its workforce, amounting to 300,000 employees, have been trained in generative AI, with a goal of training all staff by 2025.
The investment in GenAI training by IT and consulting firms underscores the importance of staying ahead in a rapidly evolving technological landscape. By equipping their employees with essential AI skills, these companies aim to enhance their capabilities, drive innovation, and maintain a competitive edge in the market. As demand for AI expertise grows, these training programs will play a crucial role in shaping the future of the industry.
Beyond providing a space for experimentation, several other factors suggest that open-source LLMs will attract the same attention that closed-source LLMs command today. The open-source nature allows organizations to understand, modify, and tailor the models to their specific requirements. The collaborative environment nurtured by open source fosters innovation, enabling faster development cycles. Additionally, the avoidance of vendor lock-in and adherence to industry standards contribute to seamless integration. The security benefits derived from community scrutiny and ethical considerations further bolster the appeal of open-source LLMs, making them a strategic choice for enterprises navigating the evolving landscape of artificial intelligence.
In recent years, the emergence of Large Language Models (LLMs), referred to here as Smart Computers, has ushered in a technological revolution with profound implications for various industries. As these models promise to redefine human-computer interactions, it is crucial to explore both their remarkable impacts and the challenges that come with them.
Smart Computers, or LLMs, have become instrumental in expediting software development processes. Their standout capability lies in the swift and efficient generation of source code, enabling developers to bring their ideas to fruition with unprecedented speed and accuracy. Furthermore, these models play a pivotal role in advancing artificial intelligence applications, fostering the development of more intelligent and user-friendly AI-driven systems. Their ability to understand and process natural language has democratized AI, making it accessible to individuals and organizations without extensive technical expertise. With their integration into daily operations, Smart Computers generate vast amounts of data from nuanced user interactions, paving the way for data-driven insights and decision-making across various domains.
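As a concrete illustration of the code-generation workflow described above, the sketch below asks a hosted model for a small piece of code. It is a minimal example only, assuming the OpenAI Python client as one provider among many; the model name is illustrative and an API key is expected in the environment.

```python
# Minimal sketch of LLM-assisted code generation using the OpenAI Python
# client (one provider among many). The model name is illustrative and an
# OPENAI_API_KEY is assumed to be set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a careful coding assistant."},
        {"role": "user", "content": "Write a Python function that validates "
                                    "an email address, plus unit tests for it."},
    ],
)

# Generated code should be reviewed and tested like any other untrusted input.
print(response.choices[0].message.content)
```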
Managing Risks and Ensuring Responsible Usage
However, the benefits of Smart Computers are accompanied by inherent risks that necessitate careful management. Privacy concerns loom large, especially regarding the accidental exposure of sensitive information. For instance, models like ChatGPT learn from user interactions, raising the possibility of unintentional disclosure of confidential details. Organisations that rely on external model providers, Samsung among them, have responded to these concerns by restricting usage to protect sensitive business information. Privacy and data exposure concerns are further accentuated by default practices, such as ChatGPT saving chat history for model training; organisations therefore need to inquire thoroughly about how data is used, stored, and fed back into training in order to guard against leaks.
Addressing Security Challenges
Security concerns encompass malicious usage, where cybercriminals exploit Smart Computers for harmful purposes, potentially evading security measures. The compromise or contamination of training data introduces the risk of biased or manipulated model outputs, posing significant threats to the integrity of AI-generated content. Additionally, the resource-intensive nature of Smart Computers makes them prime targets for Distributed Denial of Service (DDoS) attacks. Organisations must implement proper input validation strategies, selectively restricting characters and words to mitigate potential attacks. API rate controls are essential to prevent overload and potential denial of service, promoting responsible usage by limiting the number of API calls for free memberships.
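To illustrate the last two points, here is a minimal Python sketch of prompt validation and a per-key sliding-window rate limit placed in front of a model endpoint. The limits and the character allowlist are illustrative assumptions, not recommendations for any particular product.

```python
import re
import time
from collections import defaultdict, deque

# Illustrative limits; a real deployment would tune these per plan or tier.
MAX_REQUESTS_PER_MINUTE = 20
MAX_PROMPT_CHARS = 4000
# Example allowlist: keep word characters, whitespace, and common punctuation.
DISALLOWED = re.compile(r"[^\w\s.,;:!?'\"()\-]")

_request_log = defaultdict(deque)  # api_key -> timestamps of recent requests

def validate_prompt(prompt: str) -> str:
    """Basic input validation: cap the length and strip unexpected characters."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds maximum allowed length")
    return DISALLOWED.sub("", prompt)

def allow_request(api_key: str) -> bool:
    """Sliding-window rate limiter: at most N requests per minute per key."""
    now = time.time()
    window = _request_log[api_key]
    while window and now - window[0] > 60:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_MINUTE:
        return False
    window.append(now)
    return True
```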
A Balanced Approach for a Secure Future
To navigate these challenges and anticipate future risks, organisations must adopt a multifaceted approach. Implementing advanced threat detection systems and conducting regular vulnerability assessments of the entire technology stack are essential. Furthermore, active community engagement in industry forums facilitates staying informed about emerging threats and sharing valuable insights with peers, fostering a collaborative approach to security.
All in all, while Smart Computers bring unprecedented opportunities, the careful consideration of risks and the adoption of robust security measures are essential for ensuring a responsible and secure future in the era of these groundbreaking technologies.