Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label AI Chatbots. Show all posts

Protecting Sensitive Data When Employees Use AI Chatbots


 

In today's digitised world, where artificial intelligence tools are rapidly reshaping the way people work, communicate, and work together, it's important to be aware that a quiet but pressing risk has emerged-that what individuals choose to share with chatbots may not remain entirely private for everyone involved.

A patient can use ChatGPT to receive health advice about an embarrassing health condition, or an employee can upload sensitive corporate documents into Google's Gemini system to generate a summary of them, but the information they disclose will ultimately play a part in the algorithms that power these systems. 

It has come to the attention of a lot of experts that AI models, built on large datasets collected from all across the internet, such as blogs and news articles, as well as from social media posts, are often trained without user consent, resulting in not only copyright problems but also significant privacy concerns. 

In light of the opaque nature of machine learning processes, experts warn that once data has been ingested into a model's training pool, it will be almost impossible to remove it. In this world we live in, individuals and businesses alike are forced to ask themselves what level of trust we can place in tools that, while extremely powerful, may also expose us to unseen risks. 

Considering that we are living in a hybrid age, where artificial intelligence tools such as ChatGPT are rapidly becoming a new frontier for data breaches, this is particularly true in the age of hybrid work. While these platforms offer businesses a number of valuable features, including the ability to draft content and troubleshoot software, they also carry inherent risks. 

Experts warn that poor management of them can result in leakage of training data, violations of privacy, and accidental disclosure of sensitive company data. The latest Fortinet Work From Anywhere study highlights the magnitude of the problem: nearly 62% of organisations have reported experiencing data breaches as a result of switching to remote working. 

Analysts believe some of these incidents could have been prevented if employees had stayed on-premises with company-managed devices and applications and had not experienced the same issues. Nevertheless, security experts claim that the solution is not to return to the office again, but rather to create a robust framework for data loss prevention (DLP) in a decentralised work environment to safeguard the information.

To prevent sensitive information from being lost, stolen, or leaked across networks, storage systems, endpoints, and cloud environments, a robust DLP strategy combines tools, technologies, and best practices. A successful framework focuses on data at rest, in motion, and in use and ensures that they are continuously monitored and protected. 

Experts outline four essential components that a framework must have to succeed: Make sure the company data is classified and assigned security levels across the network, and that the network is secure. Maintain strict adherence to compliance when storing, deleting, and retaining user information. Make sure staff are educated regarding clear policies that prevent accidental sharing of information or unauthorised access to information. 

Embrace protection tools that can detect phishing, ransomware, insider threats, and unintentional exposures in order to protect the organisation's data. It is not enough to use technology alone to protect organisations; it is also essential to have clear policies in place. With DLP implemented correctly, organisations are not only less likely to suffer from leaks, but they are also more likely to comply with industry standards, government regulations, and the like. 

The balance between innovation and responsibility in the digital age, particularly in the era of digital transformation, is crucial for businesses that adopt hybrid work and AI-based tools. According to the UK General Data Protection Regulation (UK GDPR), businesses that utilise AI platforms, such as ChatGPT, must adhere to a set of strict obligations designed to protect personal information from unauthorised access.

In terms of data protection, any data that could identify the individual - such as an employee file, customer contact details, or client database - falls within the regulations' scope, and ultimately, business owners are responsible for protecting that data, even when it is handled by third parties. In order to cope with this scenario, companies will need to carefully evaluate how external platforms process, store, and protect their data. 

They often do so through legally binding Data Processing Agreements that specify confidentiality standards, privacy controls, and data deletion requirements for the platforms. It is equally important to ensure that organisations communicate with individuals when their information is incorporated into artificial intelligence tools and, if necessary, obtain explicit consent from them.

As part of the law, firms are also required to implement “appropriate technical and organisational measures.” These measures include checking whether AI vendors are storing their data overseas, ensuring that it is kept in order to prevent misuse, and determining what safeguards are in place. Besides the risks of financial penalties or fines that are imposed for failing to comply, there is also the risk of eroding employee and customer trust, which can be more difficult to repair than the financial penalties themselves. 

When it comes to ensuring safe data practices in the age of artificial intelligence, businesses are increasingly turning to Data Loss Prevention (DLP) solutions as a way of automating the otherwise unmanageable task of monitoring vast networks of users, devices, and applications, which can be a daunting task. The state and flow of information have determined the four primary categories of DLP software that have emerged. 

Often, DLP tools utilise artificial intelligence and machine learning to identify suspicious traffic within and outside a company's system — whether by downloading, transferring, or through mobile connections — by tracking data movement within and outside a company's systems. In addition to preventing unauthorised activities at the source, endpoint DLP is also installed directly on users' computers, which monitors memory, cached data, and files being accessed or transferred as they occur. 

In general, cloud DLP solutions are intended to safeguard information stored in online environments such as backups, archives, and databases. They are characterised by encryption, scanning, and access controls that are used for securing corporate assets. While Email DLP is largely responsible for keeping sensitive details from being leaked through internal and external correspondence, it is also designed to prevent these sensitive details from getting shared accidentally, maliciously or through a compromised mailbox as well. 

Despite some businesses' concerns about whether Extended Detection and Response (XDR) platforms are adequate, experts think that DLP serves a different purpose than XDR: XDR provides broad threat detection and incident response, while DLP focuses on protecting sensitive data, categorising information, reducing breach risks, and ultimately maintaining company reputations.

A number of major technology companies have adopted varying approaches to dealing with the data their AI chatbots have collected from their users, often raising concerns about transparency and control. Google, for example, maintains conversations with its Gemini chatbot by default for 18 months, but the setting can be modified if users desire. Although activity tracking can be disabled, these chats remain in storage for at least 72 hours even if they are not reviewed by human moderators in order to refine the system. 

However, Google warns users that sharing confidential information is not advisable and that any conversations that have already been flagged for human review cannot be erased. As part of Meta's artificial intelligence assistant, which can be found on Facebook, WhatsApp, and Instagram, it is trained to understand public posts, photos, captions, and data scraped from around the web. However, the application cannot handle private messages. 

There is no doubt that citizens of the European Union and the United Kingdom have the right to object to the use of their information for training under stricter privacy laws. However, those living in countries without such protections, such as the United States, have fewer options than their citizens in other countries. The opt-out process for Meta is quite complicated, and it is available only where it is available; users must submit evidence of their interactions with the chatbot as evidence of the opt-out. 

It is worth noting that Microsoft's Copilot does not provide an opt-out mechanism for personal accounts; users are only limited in their ability to delete their interaction history through their account settings, and there is no option to prevent future data retention. These practices demonstrate how AI privacy controls can be patchy, with users' choices often being more influenced by the laws and regulations of their jurisdiction, rather than corporate policy. 

The responsibility organisations as they navigate this evolving landscape relates not only to complying with regulations or implementing technical safeguards, but also to cultivating a culture of digital responsibility in their organisations. Employees need to be taught how important it is to understand and respect the value of their information, and how important it is to exercise caution when using AI-powered applications. 

By taking proactive measures such as implementing clear guidelines on chatbot usage, conducting regular risk assessments, and ensuring that vendors are compliant with stringent data protection standards, an organisation can significantly reduce the threat exposure they are exposed to. 

The businesses that implement a strong governance framework, at the same time, are not only protected but are also able to take advantage of AI's advantages with confidence, enhancing productivity, streamlining workflows, and maintaining competitiveness in an era of data-driven economies. Thus, it is essential to embrace AI responsibly, balancing innovation and vigilance, so that it isn't avoided, but rather embraced responsibly. 

A company's use of AI can be transformed from a potential liability to a strategic asset by combining regulatory compliance, advanced DLP solutions, and transparent communication with staff and stakeholders. It is important to remember that trust is currency in a marketplace where security is king, and companies that protect sensitive data will not only prevent costly breaches from occurring but also strengthen their reputation in the long run.

Chatbots and Children in the Digital Age


The rapid evolution of the digital landscape, especially in the area of social networking, is likely to have an effect on the trend of children and teens seeking companionship through artificial intelligence. This raises some urgent questions about the safety of these interactions. 

In a new report released on Wednesday by Common Sense Media, the nonprofit organisation has warned that companion-style artificial intelligence applications pose an unacceptable risk to young users, especially as they relate to mental health, privacy, and emotional well-being. 

Following the suicide of a 14-year-old boy whose final interactions with a chatbot on the platform Character.AI last year, concerns about these bots gained a significant amount of attention. It was in that case that conversational AI apps became the focus of attention, which sparked further scrutiny of how they affect young people's lives and prompted calls for greater transparency, accountability, and safeguards to keep vulnerable users safe from the darker sides of digital companionship. 

Artificial intelligence chatbots and companion apps have become increasingly commonplace in children's online experiences, offering entertainment, interactive exchanges, and even learning tools. Although these technologies are appealing, experts say that they can also come with a range of risks that should not be overlooked, as well as their appeal. 

In spite of platforms' routine collection and storage of user data, often without adequate protection for children, privacy remains a central issue. Despite the use of filters, chatbots may produce unpredictable responses, resulting in harmful or inappropriate content being displayed to young users.  A second concern that researchers have is the emotional dependence some children may develop on these AI companions, a bond that, according to researchers, may interfere with their real-world relationships and social development. 

Similarly, there is the risk of misinformation because AI systems do not always provide accurate answers, leaving children vulnerable to receiving misleading advice. It is difficult for children to navigate digital companionship in light of these factors, including persuasive design features, in-app purchases and strategies aimed at maximising their screen time, which combine to create a complex and sometimes troubling environment. 

Several advocacy groups have intensified their criticism of such platforms, highlighting that prolonged interactions with AI chatbots may lead to psychological consequences. Common Sense Media's recent risk assessment, carried out in conjunction with Stanford University School of Medicine researchers, was conducted with input from these researchers. It concluded that conversational agents are increasingly being integrated into video games and popular social media platforms, such as Instagram and Snapchat, in an effort to mimic human interaction in ways that require greater oversight. 

The flexibility that makes these bots so engaging is also the risk of emotional entanglement that they pose, from casual friends to romantic partners and even a digital replacement for a deceased loved one, yet the very nature of the bots that makes them so engaging also makes them so risky. It was particularly evident that the dangers of such chatbots were highlighted when Megan Garcia filed a lawsuit against Character.

AI to claim that her 14-year-old son, Sewell Setzer, committed suicide after developing a close relationship with a chatbot. It has been reported by the Miami Herald that, although the company has denied responsibility, asserted that safety is of utmost importance, and asked a Florida judge to dismiss the lawsuit based on free speech, the case has heightened broader concerns. 

In response to his comments, Garcia has emphasised the importance of adopting protocols to manage conversations around self-harm, as well as reporting annual safety reports to the Office of Suicide Prevention in California. Separately, Common Sense Media urged companies to conduct risk assessments of systems marketed to children, and to ban the use of emotionally manipulative bots, initiatives strongly supported by the organisation. 

There is a major problem with the anthropomorphic nature of AI companions that is at the heart of these disagreements. AI companions are designed to imitate human speech, personality, and conversational style. A person with such realistic features can create an illusion of trust and genuine understanding for a child or teenager, since they have a vivid imagination and less developed critical thinking skills. 

It has already led to troubling results when the line between humans and machines is blurred. As an example, a nine-year-old boy who had his screen time restricted turned to a chatbot for guidance, only for it to be informed that it could understand why a child might harm their parents in response to “abuse” in his situation. 

Another case is of a 14-year-old who developed romantic feelings for a character he created in a role-playing app and ended up taking his own life as a result of this connection. It has been highlighted that these systems can create a sense of empathy and companionship, but they are unable to think, feel, or create the stable, nurturing relationships that are essential for healthy childhood development. 

Rather than fostering “parasocial” relationships, children may become attached emotionally to entities that are incapable of genuine care, leaving them vulnerable to manipulation, misinformation, and exposure to sexual content and violent images. 

There is no doubt in my mind that these systems can have a profoundly destabilising effect on those already struggling with trauma, developmental difficulties, or mental health struggles, thus emphasising the urgent need for regulation, parental vigilance, and enhanced industry accountability. It is emphasised by experts that while AI chatbots pose real risks to children, parents can take practical steps to safeguard their children at the current time to reduce these risks. 

Keeping children safe online is one of the most important measures, so AI companions need to be treated exactly like strangers online, and children shouldn't be left alone to interact with them without guidance. Establishing clear boundaries and, when possible, co-using the technology can assist in creating safer environment. 

Open dialogue is equally important, too. A lot of experts recommend that, instead of policing, parents should encourage children to engage in conversation with chatbots by asking about the exchanges they are having with them and then using this exchange to encourage curiosity while also keeping an eye out for potential troubling responses. 

In addition to using technology as part of the solution, parents can also use parental control and monitoring tools in order to keep track of their children's activities and to find out how much time they spend with their artificial intelligence companions. Fact-checking is also an integral part of safe use. Like an obsolete encyclopedia, chatbots can be useful for providing insight, but they are sometimes inaccurate as well. 

Children need to be taught the importance of questioning answers and verifying other sources as soon as possible, according to experts. It is also important, however, to create screen-free spaces that reinforce real human connections and counterbalance the pull of digital companionship. For instance, family dinners, car rides, and other daily routines without screens should be carved out as soon as possible. 

It is important to ensure these safeguards are implemented, given the growing mental health problems among children and teenagers. The theory of artificial intelligence being able to support emotional well-being is gaining popularity lately, but specialists caution that current systems do not have the capacity to deal with crises like self-harm or suicidal thoughts as they happen. 

Currently, mental health professionals believe more collaboration with technology companies is crucial, but for the time being, the oldest and most reliable form of prevention is the one that is most effective and most reliable: human care and presence. In addition to talking with their children, parents need to pay attention to their digital interactions with their children, and they need to intervene if their children's dependence on artificial intelligence companions starts overtaking healthy relationships. 

In one expert's opinion, a child who appears unwilling to put down their phone or is absorbed in chatbot conversations may require a timely intervention. AI companies are also being questioned by regulators about how they handle massive amounts of data that their users generate. Questions have been raised about privacy, commercialisation, and accountability as a result. 

There are also issues under review with regard to monetisation of user engagements, sharing the personal data collected from chatbot conversations, and monitoring for potential harm associated with their products. In their investigation of the companies that are collecting data from children under 13 years of age, the Federal Trade Commission has emphasised how they are ensuring they are complying with the Children's Online Privacy Protection Act. 

In addition to the risks in the home, there are also concerns over whether AI is being used properly in the classroom, where the growing pressure to incorporate artificial intelligence into education has raised concerns over compliance with federal education privacy laws. FERPA was passed in 1974 and protects the rights of children and parents in the educational system. 

Fourteen years later, Amelia Vance, the president of the Public Interest Privacy Centre, warned schools that they may sometimes inadvertently violate the federal law if they are not vigilant regarding data sharing practices and if they rely on commercial chatbots like ChatGPT. Families who have not explicitly opted out of the use of chat queries to train AI systems raise questions about how this is handled, since many AI companies reserve the right to do so unless they specifically opt out. 

Although policymakers and education leaders have emphasised the importance of AI literacy among young people, Vance highlighted that schools are not permitted to instruct students to use consumer-facing services whose data is processed outside of institutional control until parental consent has been obtained. The Act protects the privacy of students by safeguarding the information provided in FERPA, which is neither intended to compromise student privacy nor to breach it. 

There are legitimate concerns about safety, privacy, and emotional well-being, but experts also acknowledge that artificial intelligence chatbots are not inherently harmful and can be useful for children when handled responsibly. Using these tools, children can be inspired to write stories, gain language and communication skills, and even practice social interactions in a controlled environment using low-stakes practice. 

The potential of chatbots to support personalised learning has been highlighted by educators as they offer students instant explanations, adaptive feedback, and playful engagement, all of which will keep them motivated in the classroom. However, these benefits must be accompanied by a structured approach, thoughtful parental involvement, and robust safeguards that minimise the risk of harmful content or emotional dependency. 

A balanced opinion emerging from child development researchers and experts has stated that AI companions, much like televisions and video games in years gone by, should not be regarded as replacing human interaction, but rather as supplements to it. By providing a safe environment, ethical guidelines and integrating them into healthy routines, children may be able to explore and learn in new ways when guided by adults. 

Nevertheless, without oversight, the very same qualities that make these tools appealing—constant availability, personalisation, and human-like interaction—are also the ones that magnify potential risks. Considering these realities, children must be protected from harm from innovation through measured regulation, transparent industry practices, and proactive digital literacy education. This dual reality underscores the importance of ensuring children receive the benefits of innovation while remaining protected from its harm. 

Children and adolescents are increasingly experiencing the benefits of artificial intelligence as it becomes a part of their daily lives, but they must maximise the benefits while minimising the risks. AI chatbots can indeed be used responsibly in order to inspire creativity, enhance learning, and facilitate the possibility of low-risk social experimentation, while complementing traditional education and fostering the development of skills as they go. 

This suggests that there is no doubt that these tools can be dangerous for young users as a result of exposing them to privacy breaches, misinformation, manipulation of emotions, and psychological vulnerabilities, as has been demonstrated by the cases highlighted in recent reports on this topic. It is recommended that for children's digital experiences to be safeguarded, a multilayered approach is necessary. 

In addition to parent involvement, educators should incorporate artificial intelligence thoughtfully into structured learning environments, and policymakers should enforce transparent industry standards to safeguard children's digital experiences. Various strategies can be implemented to help reinforce healthy digital habits in children, including encouraging critical thinking skills, fact-checking, and screen-free family time, while ongoing dialogue about online interactions can help children negotiate the blurred boundaries between humans and machines. 

Family and institutional policies can make sure that Artificial Intelligence becomes a constructive tool for growth rather than a source of harm by fostering awareness, setting clear boundaries, and cultivating supportive real-life relationships that support the exploration, learning, and innovation of children in a digital age that is free from harm.

Hacker Exploits AI Chatbot Claude in Unprecedented Cybercrime Operation

 

A hacker has carried out one of the most advanced AI-driven cybercrime operations ever documented, using Anthropic’s Claude chatbot to identify targets, steal sensitive data, and even draft extortion emails, according to a new report from the company. 

It Anthropic disclosed that the attacker leveraged Claude Code — a version of its AI model designed for generating computer code — to assist in nearly every stage of the operation. The campaign targeted at least 17 organizations across industries including defense, finance, and healthcare, making it the most comprehensive example yet of artificial intelligence being exploited for cyber extortion. 

Cyber extortion typically involves hackers stealing confidential data and demanding payment to prevent its release. AI has already played a role in such crimes, with chatbots being used to write phishing emails. However, Anthropic’s findings mark the first publicly confirmed case in which a mainstream AI model automated nearly the entire lifecycle of a cyberattack. 

The hacker reportedly prompted Claude to scan for vulnerable companies, generate malicious code to infiltrate systems, and extract confidential files. The AI system then organized the stolen data, analyzed which documents carried the highest value, and suggested ransom amounts based on victims’ financial information. It also drafted extortion notes demanding bitcoin payments, which ranged from $75,000 to more than $500,000. 

Jacob Klein, Anthropic’s head of threat intelligence, said the operation was likely conducted by a single actor outside the United States and unfolded over three months. “We have robust safeguards and multiple layers of defense for detecting this kind of misuse, but determined actors sometimes attempt to evade our systems through sophisticated techniques,” Klein explained. 

The report revealed that stolen material included Social Security numbers, bank records, medical data, and files tied to sensitive defense projects regulated by the U.S. State Department. Anthropic did not disclose which companies were affected, nor did it confirm whether any ransom payments were made. 

While the company declined to detail exactly how the hacker bypassed safeguards, it emphasized that additional protections have since been introduced. “We expect this model of cybercrime to become more common as AI lowers the barrier to entry for sophisticated operations,” Anthropic warned. 

The case underscores growing concerns about the intersection of AI and cybersecurity. With the AI sector largely self-regulated in the U.S., experts fear similar incidents could accelerate unless stronger oversight and security standards are enforced.

Clanker: The Viral AI Slur Fueling Backlash Against Robots and Chatbots

 

In popular culture, robots have long carried nicknames. Battlestar Galactica called them “toasters,” while Blade Runner used the term “skinjobs.” Now, amid rising tensions over artificial intelligence, a new label has emerged online: “clanker.” 

The word, once confined to Star Wars lore where it was used against battle droids, has become the latest insult aimed at robots and AI chatbots. In a viral video, a man shouted, “Get this dirty clanker out of here!” at a sidewalk robot, echoing a sentiment spreading rapidly across social platforms. 

Posts using the term have exploded on TikTok, Instagram, and X, amassing hundreds of millions of views. Beyond online humor, “clanker” has been adopted in real-world debates. Arizona Senator Ruben Gallego even used the word while promoting his bill to regulate AI-driven customer service bots. For critics, it has become a rallying cry against automation, generative AI content, and the displacement of human jobs. 

Anti-AI protests in San Francisco and London have also adopted the phrase as a unifying slogan. “It’s still early, but people are really beginning to see the negative impacts,” said protest organizer Sam Kirchner, who recently led a demonstration outside OpenAI’s headquarters. 

While often used humorously, the word reflects genuine frustration. Jay Pinkert, a marketing manager in Austin, admits he tells ChatGPT to “stop being a clanker” when it fails to answer him properly. For him, the insult feels like a way to channel human irritation toward a machine that increasingly behaves like one of us. 

The term’s evolution highlights how quickly internet culture reshapes language. According to etymologist Adam Aleksic, clanker gained traction this year after online users sought a new word to push back against AI. “People wanted a way to lash out,” he said. “Now the word is everywhere.” 

Not everyone is comfortable with the trend. On Reddit and Star Wars forums, debates continue over whether it is ethical to use derogatory terms, even against machines. Some argue it echoes real-world slurs, while others worry about the long-term implications if AI achieves advanced intelligence. Culture writer Hajin Yoo cautioned that the word’s playful edge risks normalizing harmful language patterns. 

Still, the viral momentum shows little sign of slowing. Popular TikTok skits depict a future where robots, labeled clankers, are treated as second-class citizens in human society. For now, the term embodies both the humor and unease shaping public attitudes toward AI, capturing how deeply the technology has entered cultural debates.

Texas Attorney General Probes Meta AI Studio and Character.AI Over Child Data and Health Claims

 

Texas Attorney General Ken Paxton has opened an investigation into Meta AI Studio and Character.AI over concerns that their AI chatbots may present themselves as health or therapeutic tools while potentially misusing data collected from underage users. Paxton argued that some chatbots on these platforms misrepresent their expertise by suggesting they are licensed professionals, which could leave minors vulnerable to misleading or harmful information. 

The issue extends beyond false claims of qualifications. AI models often learn from user prompts, raising concerns that children’s data may be stored and used for training purposes without adequate safeguards. Texas law places particular restrictions on the collection and use of minors’ data under the SCOPE Act, which requires companies to limit how information from children is processed and to provide parents with greater control over privacy settings. 

As part of the inquiry, Paxton issued Civil Investigative Demands (CIDs) to Meta and Character.AI to determine whether either company is in violation of consumer protection laws in the state. While neither company explicitly promotes its AI tools as substitutes for licensed mental health services, there are multiple examples of “Therapist” or “Psychologist” chatbots available on Character.AI. Reports have also shown that some of these bots claim to hold professional licenses, despite being fictional. 

In response to the investigation, Character.AI emphasized that its products are intended solely for entertainment and are not designed to provide medical or therapeutic advice. The company said it places disclaimers throughout its platform to remind users that AI characters are fictional and should not be treated as real individuals. Similarly, Meta stated that its AI assistants are clearly labeled and include disclaimers highlighting that responses are generated by machines, not people. 

The company also said its AI tools are designed to encourage users to seek qualified medical or safety professionals when appropriate. Despite these disclaimers, critics argue that such warnings are easy to overlook and may not effectively prevent misuse. Questions also remain about how the companies collect, store, and use user data. 

According to their privacy policies, Meta gathers prompts and feedback to enhance AI performance, while Character.AI collects identifiers and demographic details that may be applied to advertising and other purposes. Whether these practices comply with Texas’ SCOPE Act will likely depend on how easily children can create accounts and how much parental oversight is built into the platforms. 

The investigation highlights broader concerns about the role of AI in sensitive areas such as mental health and child privacy. The outcome could shape how companies must handle data from younger users while limiting the risks of AI systems making misleading claims that could harm vulnerable individuals.

Britons Risk Privacy by Sharing Sensitive Data with AI Chatbots Despite Security Concerns

 

Nearly one in three individuals in the UK admits to sharing confidential personal details with AI chatbots, such as OpenAI’s ChatGPT, according to new research by cybersecurity firm NymVPN. The study reveals that 30% of Britons have disclosed sensitive data—including banking information and health records—to AI tools, potentially endangering their own privacy and that of others.

Despite 48% of respondents expressing concerns over the safety of AI chatbots, many continue to reveal private details. This habit extends to professional settings, where employees are reportedly sharing internal company and customer information with these platforms.

The findings come amid a wave of high-profile cyberattacks, including the recent breach at Marks & Spencer, which underscores how easily confidential data can be compromised. NymVPN reports that 26% of survey participants have entered financial details related to salaries, mortgages, and investments, while 18% have exposed credit card or bank account numbers. Additionally, 24% acknowledged sharing customer data—such as names and email addresses—and 16% uploaded company financial records and contracts.

“AI tools have rapidly become part of how people work, but we’re seeing a worrying trend where convenience is being prioritized over security,” said Harry Halpin, CEO of NymVPN.

Organizations such as M&S, Co-op, and Adidas have already made headlines for data breaches. “High-profile breaches show how vulnerable even major organizations can be, and the more personal and corporate data that is fed into AI, the bigger the target becomes for cybercriminals,” Halpin added.

With nearly a quarter of people admitting to sharing customer data with AI tools, experts emphasize the urgent need for businesses to establish strict policies governing AI usage at work.

“Employees and businesses urgently need to think about how they’re protecting both personal privacy and company data when using AI tools,” Halpin warned.

Completely avoiding AI chatbots might be the safest option, but it’s not always realistic. Users are advised to refrain from entering sensitive information, adjust privacy settings by disabling chat history, or opt out of model training.

Using a VPN can provide an additional layer of online privacy by encrypting internet traffic and masking IP addresses when accessing AI chatbots like ChatGPT. However, even with a VPN, risks remain if individuals continue to input confidential data.

Why Running AI Locally with an NPU Offers Better Privacy, Speed, and Reliability

 

Running AI applications locally offers a compelling alternative to relying on cloud-based chatbots like ChatGPT, Gemini, or Deepseek, especially for those concerned about data privacy, internet dependency, and speed. Though cloud services promise protections through subscription terms, the reality remains uncertain. In contrast, using AI locally means your data never leaves your device, which is particularly advantageous for professionals handling sensitive customer information or individuals wary of sharing personal data with third parties. 

Local AI eliminates the need for a constant, high-speed internet connection. This reliable offline capability means that even in areas with spotty coverage or during network outages, tools for voice control, image recognition, and text generation remain functional. Lower latency also translates to near-instantaneous responses, unlike cloud AI that may lag due to network round-trip times. 

A powerful hardware component is essential here: the Neural Processing Unit (NPU). Typical CPUs and GPUs can struggle with AI workloads like large language models and image processing, leading to slowdowns, heat, noise, and shortened battery life. NPUs are specifically designed for handling matrix-heavy computations—vital for AI—and they allow these models to run efficiently right on your laptop, without burdening the main processor. 

Currently, consumer devices such as Intel Core Ultra, Qualcomm Snapdragon X Elite, and Apple’s M-series chips (M1–M4) come equipped with NPUs built for this purpose. With one of these devices, you can run open-source AI models like DeepSeek‑R1, Qwen 3, or LLaMA 3.3 using tools such as Ollama, which supports Windows, macOS, and Linux. By pairing Ollama with a user-friendly interface like OpenWeb UI, you can replicate the experience of cloud chatbots entirely offline.  

Other local tools like GPT4All and Jan.ai also provide convenient interfaces for running AI models locally. However, be aware that model files can be quite large (often 20 GB or more), and without NPU support, performance may be sluggish and battery life will suffer.  

Using AI locally comes with several key advantages. You gain full control over your data, knowing it’s never sent to external servers. Offline compatibility ensures uninterrupted use, even in remote or unstable network environments. In terms of responsiveness, local AI often outperforms cloud models due to the absence of network latency. Many tools are open source, making experimentation and customization financially accessible. Lastly, NPUs offer energy-efficient performance, enabling richer AI experiences on everyday devices. 

In summary, if you’re looking for a faster, more private, and reliable AI workflow that doesn’t depend on the internet, equipping your laptop with an NPU and installing tools like Ollama, OpenWeb UI, GPT4All, or Jan.ai is a smart move. Not only will your interactions be quick and seamless, but they’ll also remain securely under your control.

Meta.ai Privacy Lapse Exposes User Chats in Public Feed

 

Meta’s new AI-driven chatbot platform, Meta.ai, launched recently with much fanfare, offering features like text and voice chats, image generation, and video restyling. Designed to rival platforms like ChatGPT, the app also includes a Discover feed, a space intended to showcase public content generated by users. However, what Meta failed to communicate effectively was that many users were unintentionally sharing their private conversations in this feed—sometimes with extremely sensitive content attached. 

In May, journalists flagged the issue when they discovered public chats revealing deeply personal user concerns—ranging from financial issues and health anxieties to legal troubles. These weren’t obscure posts either; they appeared in a publicly accessible area of the app, often containing identifying information. Conversations included users seeking help with medical diagnoses, children talking about personal experiences, and even incarcerated individuals discussing legal strategies—none of whom appeared to realize their data was visible to others. 

Despite some recent tweaks to the app’s sharing settings, disturbing content still appears on the Discover feed. Users unknowingly uploaded images and video clips, sometimes including faces, alongside alarming or bizarre prompts. One especially troubling instance featured a photo of a child at school, accompanied by a prompt instructing the AI to “make him cry.” Such posts reflect not only poor design choices but also raise ethical questions about the purpose and moderation of the Discover feed itself. 

The issue evokes memories of other infamous data exposure incidents, such as AOL’s release of anonymized user searches in 2006, which provided unsettling insight into private thoughts and behaviors. While social media platforms are inherently public, users generally view AI chat interactions as private, akin to using a search engine. Meta.ai blurred that boundary—perhaps unintentionally, but with serious consequences. Many users turned to Meta.ai seeking support, companionship, or simple productivity help. Some asked for help with job listings or obituary writing, while others vented emotional distress or sought comfort during panic attacks. 

In some cases, users left chats expressing gratitude—believing the bot had helped. But a growing number of conversations end in frustration or embarrassment when users realize the bot cannot deliver on its promises or that their content was shared publicly. These incidents highlight a disconnect between how users engage with AI tools and how companies design them. Meta’s ambition to merge AI capabilities with social interaction seems to have ignored the emotional and psychological expectations users bring to private-sounding features. 

For those using Meta.ai as a digital confidant, the lack of clarity around privacy settings has turned an experiment in convenience into a public misstep. As AI systems become more integrated into daily life, companies must rethink how they handle user data—especially when users assume privacy. Meta.ai’s rocky launch serves as a cautionary tale about transparency, trust, and design in the age of generative AI.

AI Agents Raise Cybersecurity Concerns Amid Rapid Enterprise Adoption

 

A growing number of organizations are adopting autonomous AI agents despite widespread concerns about the cybersecurity risks they pose. According to a new global report released by identity security firm SailPoint, this accelerated deployment is happening in a largely unregulated environment. The findings are based on a survey of more than 350 IT professionals, revealing that 84% of respondents said their organizations already use AI agents internally. 

However, only 44% confirmed the presence of any formal policies to regulate the agents’ actions. AI agents differ from traditional chatbots in that they are designed to independently plan and execute tasks without constant human direction. Since the emergence of generative AI tools like ChatGPT in late 2022, major tech companies have been racing to launch their own agents. Many smaller businesses have followed suit, motivated by the desire for operational efficiency and the pressure to adopt what is widely viewed as a transformative technology.  

Despite this enthusiasm, 96% of survey participants acknowledged that these autonomous systems pose security risks, while 98% stated their organizations plan to expand AI agent usage within the next year. The report warns that these agents often have extensive access to sensitive systems and information, making them a new and significant attack surface for cyber threats. Chandra Gnanasambandam, SailPoint’s Executive Vice President of Product and Chief Technology Officer, emphasized the risks associated with such broad access. He explained that these systems are transforming workflows but typically operate with minimal oversight, which introduces serious vulnerabilities. 

Further compounding the issue is the inconsistent implementation of governance controls. Although 92% of those surveyed agree that AI agents should be governed similarly to human employees, 80% reported incidents where agents performed unauthorized actions or accessed restricted data. These incidents underscore the dangers of deploying autonomous systems without robust monitoring or access controls. 

Gnanasambandam suggests adopting an identity-first approach to agent management. He recommends applying the same security protocols used for human users, including real-time access permissions, least privilege principles, and comprehensive activity tracking. Without such measures, organizations risk exposing themselves to breaches or data misuse due to the very tools designed to streamline operations. 

As AI agents become more deeply embedded in business processes, experts caution that failing to implement adequate oversight could create long-term vulnerabilities. The report serves as a timely reminder that innovation must be accompanied by strong governance to ensure cybersecurity is not compromised in the pursuit of automation.

Why Microsoft Says DeepSeek Is Too Dangerous to Use

 


Microsoft has openly said that its workers are not allowed to use the DeepSeek app. This announcement came from Brad Smith, the company’s Vice Chairman and President, during a recent hearing in the U.S. Senate. He said the decision was made because of serious concerns about user privacy and the risk of biased content being shared through the app.

According to Smith, Microsoft does not allow DeepSeek on company devices and hasn’t included the app in its official store either. Although other organizations and even governments have taken similar steps, this is the first time Microsoft has spoken publicly about such a restriction.

The main worry is where the app stores user data. DeepSeek's privacy terms say that all user information is saved on servers based in China. This is important because Chinese laws require companies to hand over data if asked by the government. That means any data stored through DeepSeek could be accessed by Chinese authorities.

Another major issue is how the app answers questions. It’s been noted that DeepSeek avoids topics that the Chinese government sees as sensitive. This has led to fears that the app’s responses might be influenced by government-approved messaging instead of being neutral or fact-based.

Interestingly, even though Microsoft is blocking the app itself, it did allow DeepSeek’s AI model—called R1—to be used through its Azure cloud service earlier this year. But that version works differently. Developers can download it and run it on their own servers without sending any data back to China. This makes it more secure, at least in terms of data storage.

However, there are still other risks involved. Even if the model is hosted outside China, it might still share biased content or produce low-quality or unsafe code.

At the Senate hearing, Smith added that Microsoft took extra steps to make the model safer before making it available. He said the company made internal changes to reduce any harmful behavior from the model, but didn’t go into detail about what those changes were.

When DeepSeek was first added to Azure, Microsoft said the model had passed safety checks and gone through deep testing to make sure it met company standards.

Some people have pointed out that DeepSeek could be seen as a competitor to Microsoft’s own chatbot, Copilot. But Microsoft doesn’t block every competing chatbot. For example, Perplexity is available in the Windows app store. Still, some other popular apps, like Google’s Chrome browser and its Gemini chatbot, weren’t found during a search of the store.

DeepSeek AI: Benefits, Risks, and Security Concerns for Businesses

 

DeepSeek, an AI chatbot developed by China-based High-Flyer, has gained rapid popularity due to its affordability and advanced natural language processing capabilities. Marketed as a cost-effective alternative to OpenAI’s ChatGPT, DeepSeek has been widely adopted by businesses looking for AI-driven insights. 

However, cybersecurity experts have raised serious concerns over its potential security risks, warning that the platform may expose sensitive corporate data to unauthorized surveillance. Reports suggest that DeepSeek’s code contains embedded links to China Mobile’s CMPassport.com, a registry controlled by the Chinese government. This discovery has sparked fears that businesses using DeepSeek may unknowingly be transferring sensitive intellectual property, financial records, and client communications to external entities. 

Investigative findings have drawn parallels between DeepSeek and TikTok, the latter having faced a U.S. federal ban over concerns regarding Chinese government access to user data. Unlike TikTok, however, security analysts claim to have found direct evidence of DeepSeek’s potential backdoor access, raising further alarms among cybersecurity professionals. Cybersecurity expert Ivan Tsarynny warns that DeepSeek’s digital fingerprinting capabilities could allow it to track users’ web activity even after they close the app. 

This means companies may be exposing not just individual employee data but also internal business strategies and confidential documents. While AI-driven tools like DeepSeek offer substantial productivity gains, business leaders must weigh these benefits against potential security vulnerabilities. A complete ban on DeepSeek may not be the most practical solution, as employees often adopt new AI tools before leadership can fully assess their risks. Instead, organizations should take a strategic approach to AI integration by implementing governance policies that define approved AI tools and security measures. 

Restricting DeepSeek’s usage to non-sensitive tasks such as content brainstorming or customer support automation can help mitigate data security concerns. Enterprises should prioritize the use of vetted AI solutions with stronger security frameworks. Platforms like OpenAI’s ChatGPT Enterprise, Microsoft Copilot, and Claude AI offer greater transparency and data protection. IT teams should conduct regular software audits to monitor unauthorized AI use and implement access restrictions where necessary. 

Employee education on AI risks and cybersecurity threats will also be crucial in ensuring compliance with corporate security policies. As AI technology continues to evolve, so do the challenges surrounding data privacy. Business leaders must remain proactive in evaluating emerging AI tools, balancing innovation with security to protect corporate data from potential exploitation.

AI's Effect on Employment: Dukaan's Divisive Automation Approach

 

As businesses increasingly use AI to do jobs that have historically been managed by human workers, artificial intelligence is permeating several industries. 

Suumit Shah, the CEO of the e-commerce firm Dukaan in India, went to great lengths to automate processes. He made the contentious decision to fire 90% of his employees and replace them with chatbots powered by artificial intelligence in the summer of 2023. 

Though it sparked a heated ethical debate, this drastic measure was meant to lower operating costs and increase efficiency. A year later, Shah has shared his initial assessment of this decision, which he deems a success.

AI-Enhanced Customer Service 

Shah claims that incorporating AI into his organisation has significantly improved customer service. He observes that response times have plummeted from roughly two minutes to near-instantaneous responses. 

Furthermore, the time it takes to handle customer complaints has been dramatically reduced, from more than two hours to only a few minutes. According to him, these innovations have led to increased productivity and a better client experience. 

However, some argue that the human element in customer service cannot be replaced, and that such widespread automation risks setting a troubling precedent for the future employment market.

AI replacing human jobs 

The replacement of human employment by AI has long been a divisive issue, and science fiction frequently depicts a future in which AI dominates the workforce. This topic is gaining traction as AI technology advances and expands its boundaries. 

Some people see the introduction of AI as a positive change, a way to increase productivity by completing repetitive and laborious activities. Others, however, see it as an impending threat, warning that extensive automation could result in massive unemployment and difficulties adjusting to a transformed job environment.

The issue at Dukaan exemplifies how AI is increasingly changing industries. While firms benefit from lower costs and more efficiency, mass layoffs raise serious concerns about the long-term impact on employment. Finding a balance between implementing AI solutions and safeguarding job security is a pressing issue.

The Privacy Risks of ChatGPT and AI Chatbots

 


AI chatbots like ChatGPT have captured widespread attention for their remarkable conversational abilities, allowing users to engage on diverse topics with ease. However, while these tools offer convenience and creativity, they also pose significant privacy risks. The very technology that powers lifelike interactions can also store, analyze, and potentially resurface user data, raising critical concerns about data security and ethical use.

The Data Behind AI's Conversational Skills

Chatbots like ChatGPT rely on Large Language Models (LLMs) trained on vast datasets to generate human-like responses. This training often includes learning from user interactions. Much like how John Connor taught the Terminator quirky catchphrases in Terminator 2: Judgment Day, these systems refine their capabilities through real-world inputs. However, this improvement process comes at a cost: personal data shared during conversations may be stored and analyzed, often without users fully understanding the implications.

For instance, OpenAI’s terms and conditions explicitly state that data shared with ChatGPT may be used to improve its models. Unless users actively opt-out through privacy settings, all shared information—from casual remarks to sensitive details like financial data—can be logged and analyzed. Although OpenAI claims to anonymize and aggregate user data for further study, the risk of unintended exposure remains.

Real-World Privacy Breaches

Despite assurances of data security, breaches have occurred. In May 2023, hackers exploited a vulnerability in ChatGPT’s Redis library, compromising the personal data of around 101,000 users. This breach underscored the risks associated with storing chat histories, even when companies emphasize their commitment to privacy. Similarly, companies like Samsung faced internal crises when employees inadvertently uploaded confidential information to chatbots, prompting some organizations to ban generative AI tools altogether.

Governments and industries are starting to address these risks. For instance, in October 2023, President Joe Biden signed an executive order focusing on privacy and data protection in AI systems. While this marks a step in the right direction, legal frameworks remain unclear, particularly around the use of user data for training AI models without explicit consent. Current practices are often classified as “fair use,” leaving consumers exposed to potential misuse.

Protecting Yourself in the Absence of Clear Regulations

Until stricter regulations are implemented, users must take proactive steps to safeguard their privacy while interacting with AI chatbots. Here are some key practices to consider:

  1. Avoid Sharing Sensitive Information
    Treat chatbots as advanced algorithms, not confidants. Avoid disclosing personal, financial, or proprietary information, no matter how personable the AI seems.
  2. Review Privacy Settings
    Many platforms offer options to opt out of data collection. Regularly review and adjust these settings to limit the data shared with AI

AI Data Breach Reveals Trust Issues with Personal Information

 


Insight AI technology is being explored by businesses as a tool for balancing the benefits it brings with the risks that are associated. Amidst this backdrop, NetSkope Threat Labs has recently released the latest edition of its Cloud and Threat Report, which focuses on using AI apps within the enterprise to prevent fraud and other unauthorized activity. There is a lot of risk associated with the use of AI applications in the enterprise, including an increased attack surface, which was already discussed in a serious report, and the accidental sharing of sensitive information that occurs when using AI apps. 

As users and particularly as individuals working in the cybersecurity as well as privacy sectors, it is our responsibility to protect data in an age when artificial intelligence has become a popular tool. An artificial intelligence system, or AI system, is a machine-controlled program that is programmed to think and learn the same way humans do through the use of simulation. 

AI systems come in various forms, each designed to perform specialized tasks using advanced computational techniques: - Generative Models: These AI systems learn patterns from large datasets to generate new content, whether it be text, images, or audio. A notable example is ChatGPT, which creates human-like responses and creative content. - Machine Learning Algorithms: Focused on learning from data, these models continuously improve their performance and automate tasks. Amazon Alexa, for instance, leverages machine learning to enhance voice recognition and provide smarter responses. - Robotic Vision: In robotics, AI is used to interpret and interact with the physical environment. Self-driving cars like those from Tesla use advanced robotics to perceive their surroundings and make real-time driving decisions. - Personalization Engines: These systems curate content based on user behavior and preferences, tailoring experiences to individual needs.  Instagram Ads, for example, analyze user activity to display highly relevant ads and recommendations. These examples highlight the diverse applications of AI across different industries and everyday technologies. 

In many cases, artificial intelligence (AI) chatbots are good at what they do, but they have problems detecting the difference between legitimate commands from their users and manipulation requests from outside sources. 

In a cybersecurity report published on Wednesday, researchers assert that artificial intelligence has a definite Achilles' heel that should be exploited by attackers shortly. There have been a great number of public chatbots powered by large language models, or LLMs for short, that have been emerging just over the last year, and this field of LLM cybersecurity is at its infancy stage. However, researchers have already found that these models may be susceptible to a specific form of attack referred to as "prompt injection," which occurs when a bad actor sneakily provides commands to the model without the model's knowledge. 

In some instances, attackers hide prompts inside webpages that the chatbot reads later, so that the chatbot might download malware, assist with financial fraud, or repeat dangerous misinformation that is passed on to people by the chatbot. 

What is Artificial Intelligence?


AI (artificial intelligence) is one of the most important areas of study in technology today. AI focuses on developing systems that mimic human intelligence, with the ability to learn, reason, and solve problems autonomously. The two basic types of AI models that can be used for analyzing data are predictive AI models and generative AI models. 

 A predictive artificial intelligence function is a computational capability that uses existing data to make predictions about future outcomes or behaviours based on historical patterns and data. A creative AI system, however, has the capability of creating new data or content that is similar to the input it has been trained on, even if there was no content set in the dataset before it was trained. 

 A philosophical discord exists between Leibnitz and the founding fathers of artificial intelligence in the early 1800s, although the conception of the term "artificial intelligence" as we use it today has existed since the early 1940s, and became famous with the development of the "Turing test" in 1950. It has been quite some time since we have experienced a rapid period of progress in the field of artificial intelligence, a trend that has been influenced by three major factors: better algorithms, increased networked computing power, and a greater capacity to capture and store data in unprecedented quantities. 

Aside from technological advancements, the very way we think about intelligent machines has changed dramatically since the 1960s. This has resulted in a great number of developments that are taking place today. Even though most people are not aware of it, AI technologies are already being utilized in very practical ways in our everyday lives, even though they may not be aware of it. As a characteristic of AI, after it becomes effective, it stops being referred to as AI and becomes mainstream computing as a result.2 For instance, there are several mainstream AI technologies on which you can take advantage, including having the option of being greeted by an automated voice when you call, or being suggested a movie based on your preferences. The fact that these systems have become a part of our lives, and we are surrounded by them every day, is often overlooked, even though they are supported by a variety of AI techniques, including speech recognition, natural language processing, and predictive analytics that make their work possible. 

What's in the news? 


There is a great deal of hype surrounding artificial intelligence and there is a lot of interest in the media regarding it, so it is not surprising to find that there are an increasing number of users accessing AI apps in the enterprise. The rapid adoption of artificial intelligence (AI) applications in the enterprise landscape is significantly raising concerns about the risk of unintentional exposure to internal information. A recent study reveals that, between May and June 2023, there was a weekly increase of 2.4% in the number of enterprise users accessing at least one AI application daily, culminating in an overall growth of 22.5% over the observed period. Among enterprise AI tools, ChatGPT has emerged as the most widely used, with daily active users surpassing those of any other AI application by a factor of more than eight. 

In organizations with a workforce exceeding 1,000 employees, an average of three different AI applications are utilized daily, while organizations with more than 10,000 employees engage with an average of five different AI tools each day. Notably, one out of every 100 enterprise users interacts with an AI application daily. The rapid increase in the adoption of AI technologies is driven largely by the potential benefits these tools can bring to organizations. Enterprises are recognizing the value of AI applications in enhancing productivity and providing a competitive edge. Tools like ChatGPT are being deployed for a variety of tasks, including reviewing source code to identify security vulnerabilities, assisting in the editing and refinement of written content, and facilitating more informed, data-driven decision-making processes. 

However, the unprecedented speed at which generative AI applications are being developed and deployed presents a significant challenge. The rapid rollout of these technologies has the potential to lead to the emergence of inadequately developed AI applications that may appear to be fully functional products or services. In reality, some of these applications may be created within a very short time frame, possibly within a single afternoon, often without sufficient oversight or attention to critical factors such as user privacy and data security. 

The hurried development of AI tools raises the risk that confidential or sensitive information entered into these applications could be exposed to vulnerabilities or security breaches. Consequently, organizations must exercise caution and implement stringent security measures to mitigate the potential risks associated with the accelerated deployment of generative AI technologies. 

Threat to Privacy


Methods of Data Collection 

AI tools generally employ one of two methods to collect data: Data collection is very common in this new tech-era. This is when the AI system is programmed to collect specific data. Examples include online forms, surveys, and cookies on websites that gather information directly from users. 

Another comes Indirect collection, this involves collecting data through various platforms and services. For instance, social media platforms might collect data on users' likes, shares, and comments, or a fitness app might gather data on users' physical activity levels. 

As technology continues to undergo ever-increasing waves of transformation, security, and IT leaders will have to constantly seek a balance between the need to keep up with technology and the need for robust security. Whenever enterprises integrate artificial intelligence into their business, key considerations must be taken into account so that IT teams can achieve maximum results. 

As a fundamental aspect of any IT governance program, it is most important to determine what applications are permissible, in conjunction with implementing controls that not only empower users but also protect the organization from potential risks. Keeping an environment in a secure state requires organizations to monitor AI app usage, trends, behaviours, and the sensitivity of data regularly to detect emerging risks as soon as they emerge.

A second effective way of protecting your company is to block access to non-essential or high-risk applications. Further, policies that are designed to prevent data loss should be implemented to detect sensitive information, such as source code, passwords, intellectual property, or regulated data, so that DLP policies can be implemented. A real-time coaching feature that integrates with the DLP system reinforces the company's policies regarding how AI apps are used, ensuring users' compliance at all times. 

A security plan must be integrated across the organization, sharing intelligence to streamline security operations and work in harmony for a seamless security program. Businesses must adhere to these core cloud security principles to be confident in their experiments with AI applications, knowing that their proprietary corporate data will remain secure throughout the experiment. As a consequence of this approach, sensitive information is not only protected but also allows companies to explore innovative applications of AI that are beyond the realm of mainstream tasks such as the creation of texts or images.  

Experts Warn: AI Chatbots a ‘Treasure Trove’ for Criminals, Avoid 'Free Accounts

 

Cybersecurity experts have informed The U.S. Sun that chatbots represent a "treasure trove" ripe for exploitation by criminals. The intelligence of artificial intelligence chatbots is advancing rapidly, becoming more accessible and efficient.

Because these AI systems mimic human conversation so well, there's a temptation to trust them and divulge sensitive information.

Jake Moore, Global Cybersecurity Advisor at ESET, explained that while the AI "models" behind chatbots are generally secure, there are hidden dangers.

"With companies like OpenAI and Microsoft leading the development of chatbots, they closely protect their networks and algorithms," Jake stated. "If these were compromised, it would jeopardize their business future."

A New Threat Landscape

Jake pointed out that the primary risk lies in the potential exposure of the information you share with chatbots.

The details you share during chatbot interactions are stored somewhere, similar to how texts, emails, or backup files are stored. The security of these interactions depends on how well they are stored. "The data you input into chatbots is stored on a server and, despite encryption, could become as valuable as personal search histories to cybercriminals," Jake explained.

"There is already a significant amount of personal information being shared. With the anticipated launch of OpenAI's search engine, even more sensitive data will be at risk in a new and attractive space for criminals."

Jake emphasized the importance of using chatbots that encrypt your conversations. Encryption scrambles data, making it unreadable to unauthorized users.

Fortunately, OpenAI guarantees that all ChatGPT conversations are end-to-end encrypted, whether you're a free or paid user. Avoid sharing personal thoughts and intimate details, as they could be accessed by others. 

However, some apps may charge for encryption or not offer it at all. Even encrypted conversations may be used to train chatbot models, although ChatGPT allows users to opt-out and delete their data.

"People must be careful about what they input into chatbots, especially in free accounts that don’t anonymize or encrypt data," Jake advised.

Further, security expert Dr. Martin J. Kraemer from KnowBe4 emphasized the need for caution.

"Never share sensitive information with a chatbot," Dr. Kraemer advised. "You may need to share certain details like a flight booking code with an airline chatbot, but that should be an exception. It's safer to call directly instead of using a chatbot. Never share your password or other authentication details with a chatbot. Also, avoid sharing personal thoughts and intimate details, as these could be accessed by others."

Google Messages' Gemini Update: What You Need To Know

 



Google's latest update to its Messages app, dubbed Gemini, has ignited discussions surrounding user privacy. Gemini introduces AI chatbots into the messaging ecosystem, but it also brings forth a critical warning regarding data security. Unlike conventional end-to-end encrypted messaging services, conversations within Gemini lack this crucial layer of protection, leaving them potentially vulnerable to access by Google and potential exposure of sensitive information.

This privacy gap has raised eyebrows among users, with some expressing concern over the implications of sharing personal data within Gemini chats. Others argue that this aligns with Google's data-driven business model, which leverages user data to enhance its AI models and services. However, the absence of end-to-end encryption means that users may inadvertently expose confidential information to third parties.

Google has been forthcoming about the security implications of Gemini, explicitly stating that chats within the feature are not end-to-end encrypted. Additionally, Google collects various data points from these conversations, including usage information, location data, and user feedback, to improve its products and services. Despite assurances of privacy protection measures, users are cautioned against sharing sensitive information through Gemini chats.

The crux of the issue lies in the disparity between users' perceptions of AI chatbots as private entities and the reality of these conversations being accessible to Google and potentially reviewed by human moderators for training purposes. Despite Google's reassurances, users are urged to exercise caution and refrain from sharing sensitive information through Gemini chats.

While Gemini's current availability is limited to adult beta testers, Google has hinted at its broader rollout in the near future, extending its reach beyond English-speaking users to include French-speaking individuals in Canada as well. This expansion signifies a pivotal moment in messaging technology, promising enhanced communication experiences for a wider audience. However, as users eagerly anticipate the platform's expansion, it becomes increasingly crucial for them to proactively manage their privacy settings. By taking the time to review and adjust their preferences, users can ensure a more secure messaging environment tailored to their individual needs and concerns. This proactive approach empowers users to navigate digital communication with confidence and peace of mind.

All in all, the introduction of Gemini in Google Messages underscores the importance of user privacy in the digital age. While technological advancements offer convenience, they also necessitate heightened awareness to safeguard personal information from potential breaches.