
Google to Launch Gemini AI for Children Under 13

Google plans to roll out its Gemini artificial intelligence chatbot next week for children younger than 13 with parent-managed Google accounts, as tech companies vie to attract young users with AI products.

Google will launch its Gemini AI chatbot soon for children under the age of 13 with parent-managed Google accounts. The move comes as tech companies try to attract young users with AI tools. According to an email sent to the parent of an 8-year-old, Gemini apps will soon be available to their child, who will be able to use Gemini to ask questions, get homework help, and create stories. 

The chatbot will be available to children whose guardians use Family Link, a Google service that lets families set up Gmail and opt in to services such as YouTube for their children. To register a child's account, the parent provides the company with the child's personal information, such as name and date of birth. 

According to Google spokesperson Karl Ryan, Gemini includes concrete safeguards for younger users to prevent the chatbot from creating unsafe or harmful content. If a child with a Family Link account uses Gemini, the company will not use that data to train its AI models. 

Opening Gemini to children could expand chatbot use among a vulnerable population at a time when companies, colleges, schools, and others are still grappling with the effects of popular generative AI technology. These systems are trained on massive datasets to produce human-like text and realistic images and videos. Google and other AI chatbot developers are competing fiercely for young users' attention. 

Recently, President Donald Trump urged schools to embrace AI tools for teaching and learning. Millions of teens already use chatbots as study aids, virtual companions, and writing coaches, but experts have warned that chatbots could pose serious risks to child safety. 

The bots are known to sometimes make things up. UNICEF and other children's advocacy groups have found that AI systems can misinform, manipulate, and confuse young children, who may struggle to understand that chatbots are not human. 

According to UNICEF’s global research office, “Generative AI has produced dangerous content,” posing risks for children. Google has acknowledged some risks, cautioning parents that “Gemini can make mistakes” and suggesting they “help your child think critically” about the chatbot. 

Tencent’s AI Chatbot Yuanbao Becomes China’s Most Downloaded iOS App

Tencent’s AI chatbot, Yuanbao, has surpassed DeepSeek to become the most downloaded free app on China’s iOS App Store. The chatbot, launched in May 2024, gained significant traction following Tencent’s integration of DeepSeek’s R1 reasoning model in February. This move provided users with an additional AI option alongside Tencent’s proprietary Hunyuan model. As a result, Tencent’s Hong Kong-listed shares rose by 1.6% on Tuesday. 

Tencent, which operates China’s largest social media platform, WeChat, further accelerated Yuanbao’s growth by adding a download button for the chatbot within the app. This gave its 1.3 billion users direct access to the AI tool, significantly boosting downloads. By late February, the number of daily active users surged from a few hundred thousand to three million, according to Li Bangzhu, founder of AIcpb.com, a website that tracks AI applications. 

This rise in popularity can largely be attributed to Tencent's extensive promotional efforts. The company has leveraged WeChat's vast ecosystem to recommend Yuanbao to users, place ads in its social timeline, and integrate the chatbot across other Tencent applications. Alongside its AI chatbot expansion, Tencent recently reorganized several teams, including those for Yuanbao, QQ Browser, Sogou Pinyin, and its learning assistant ima, moving them under its Cloud and Smart Industries Group.
  
The company's aggressive push into AI comes amid intensifying competition from major Chinese tech firms such as Alibaba, Baidu, and ByteDance. Last month, Tencent launched Hunyuan Turbo S, an upgraded AI model designed to respond faster than its predecessors and, the company says, to outperform DeepSeek. Meanwhile, Baidu announced that it would introduce the latest version of its Ernie 4.5 model this month and make it open source on June 30. 

Baidu will also make its Ernie Bot chatbot free for all users starting April 1. ByteDance is ramping up its AI efforts as well, with CEO Liang Rubo prioritizing advances in generative AI for the first quarter of 2025. The company has launched the Seed Edge project, which focuses on long-term AI research, and has hired AI expert Wu Yonghui from Google to lead its foundational research initiatives. 

With rapid developments in the AI sector, Tencent’s strategic moves indicate its ambition to stay ahead in China’s competitive AI landscape. The success of Yuanbao highlights the increasing importance of AI-powered applications, as well as the role of major tech companies in shaping the future of digital interaction.

AI Chatbots Like Copilot Retain Private GitHub Data, Posing Security Threats, Researchers Warn

Security experts have uncovered a serious vulnerability in AI-driven chatbot services that allows them to access and reveal private GitHub repositories, potentially exposing sensitive corporate information. Israeli cybersecurity firm Lasso has reported that this flaw affects thousands of developers, organizations, and major tech companies, raising concerns over data retention practices in AI models. 

Lasso’s investigation began when its own private GitHub repository was unexpectedly accessible through Microsoft’s Copilot. According to co-founder Ophir Dror, the repository had briefly been public, allowing Bing to index and cache its contents. Even after it was made private again, Copilot continued to generate responses based on the cached data. “If I was to browse the web, I wouldn’t see this data. But anyone in the world could ask Copilot the right question and get this data,” Dror stated. 

Further research by Lasso revealed that more than 20,000 GitHub repositories that had been switched to private in 2024 were still accessible through Copilot. The issue reportedly impacted over 16,000 organizations, including major corporations such as IBM, Google, PayPal, Tencent, Microsoft, and Amazon Web Services (AWS). While Amazon denied being affected, Lasso claims that AWS’s legal team pressured them to remove references to the company from their findings. 

The exposed repositories contained sensitive data, including security credentials, intellectual property, and corporate secrets. Lasso warned that bad actors could potentially manipulate AI chatbots to extract this information, putting businesses at risk. The company has advised organizations most affected by the breach to revoke or update any compromised credentials immediately. 

Microsoft was informed of the security flaw in November 2024 but categorized it as a “low-severity” issue. While Bing removed cached search results of the affected data in December, Microsoft maintained that the caching issue was “acceptable behavior.” 

However, Lasso cautioned that despite the cache being cleared, Copilot’s AI model still retains the data. The firm has since published its findings, urging greater oversight and stricter safeguards in AI systems to prevent similar security risks.

AI In Wrong Hands: The Underground Demand for Malicious LLMs

In recent times, artificial intelligence (AI) has delivered benefits across industries. But, as with any powerful tool, threat actors are trying to turn it to malicious ends. Researchers report that the underground market for illicit large language models is thriving, highlighting the need for strong safeguards against AI misuse. 

The malicious large language model (LLM) services sold on these underground markets are called "Mallas." This post dives into the details of this dark industry and discusses the impact of these illicit LLMs on cybersecurity. 

The Rise of Malicious LLMs

LLMs such as OpenAI's GPT-4 have shown strong results in natural language processing, powering applications such as chatbots and content generation. However, the same technology that supports these useful applications can be misused for malicious activities. 

Recently, researchers from Indiana University Bloomington found 212 malicious LLMs on underground marketplaces between April and September last year. One of the models, "WormGPT," made around $28,000 in just two months, revealing both a trend of threat actors misusing AI and a rising demand for these harmful tools. 

How Uncensored Models Operate 

Many of the LLMs on these markets were uncensored models built on open-source foundations, while a few were jailbroken commercial models. Threat actors use Mallas to write phishing emails, build malware, and exploit zero-day vulnerabilities. 

Tech giants in the AI industry have built safeguards to protect against jailbreaking and to detect malicious use. But threat actors have found ways to bypass these guardrails and trick models from Google, Meta, OpenAI, and Anthropic into providing malicious information. 

Underground Market for LLMs

The researchers examined two uncensored LLMs in detail: DarkGPT, which costs 78 cents per 50 messages, and Escape GPT, a subscription service charging $64.98 a month. Both generate harmful code that antivirus tools fail to detect two-thirds of the time. Another model, "WolfGPT," costs $150 and lets users write phishing emails that evade most spam detectors. 

The findings suggest that all of the harmful models examined could generate malware, and 41.5% could produce phishing emails. These models were built on OpenAI's GPT-3.5 and GPT-4, Claude Instant, Claude-2-100k, and Pygmalion 13B. 

To fight these threats, experts have suggested building a dataset of the prompts used to create malware and bypass safety features. They also recommend that AI companies release models with censorship settings enabled by default and allow access to uncensored models only for research purposes.

Researchers Find ChatGPT’s Latest Bot Behaves Like Humans

A team led by Matthew Jackson, the William D. Eberle Professor of Economics in the Stanford School of Humanities and Sciences, used psychology and behavioural economics tools to characterise the personality and behaviour of ChatGPT's popular AI-driven bots in a paper published in the Proceedings of the National Academy of Sciences on June 12. 

The study found that the most recent version of the chatbot, version 4, was indistinguishable from its human counterparts. When the bot departed from typical human behaviour, it tended to be more cooperative and altruistic.

“Increasingly, bots are going to be put into roles where they’re making decisions, and what kinds of characteristics they have will become more important,” stated Jackson, who is also a senior fellow at the Stanford Institute for Economic Policy Research. 

In the study, the research team presented a widely known personality test to ChatGPT versions 3 and 4 and asked the chatbots to describe their moves in a series of behavioural games that can predict real-world economic and ethical behaviours. The games included pre-determined exercises in which players had to select whether to inform on a partner in crime or how to share money with changing incentives. The bots' responses were compared to those of over 100,000 people from 50 nations. 

The study is one of the first in which an artificial intelligence has passed a rigorous Turing test. A Turing test, named after British computing pioneer Alan Turing, can consist of any task assigned to a machine to determine whether it performs like a person; if the machine seems human, it passes. 

Chatbot personality quirks

The researchers assessed the bots' personality qualities using the OCEAN Big-5, a popular personality exam that evaluates respondents on five fundamental characteristics that influence behaviour. In the study, ChatGPT's version 4 performed within normal ranges for the five qualities but was only as agreeable as the lowest third of human respondents. The bot passed the Turing test, but it wouldn't have made many friends. 

Version 4 improved on version 3 across these measures. The previous version, with which many internet users may have interacted for free, was only as agreeable as the bottom fifth of human respondents. Version 3 was likewise less open to new ideas and experiences than all but a handful of the most set-in-their-ways people. 

Human-AI interactions 

Much of the public's concern about AI stems from not understanding how bots make decisions. It can be difficult to trust a bot's advice if you don't know what it is designed to accomplish. Jackson's research shows that even when researchers cannot scrutinise an AI's inputs and algorithms, they can uncover potential biases by carefully examining its outcomes. 

As a behavioural economist who has made significant contributions to our knowledge of how human social structures and interactions influence economic decision-making, Jackson is concerned about how human behaviour may evolve in response to AI.

“It’s important for us to understand how interactions with AI are going to change our behaviors and how that will change our welfare and our society,” Jackson concluded. “The more we understand early on—the more we can understand where to expect great things from AI and where to expect bad things—the better we can do to steer things in a better direction.”

From Text to Action: Chatbots in Their Stone Age

The stone age of AI

Despite all the talk of generative AI disrupting the world, the technology has failed to significantly transform white-collar jobs. Workers are experimenting with chatbots for activities like email drafting, and businesses are doing numerous experiments, but office work has yet to experience a big AI overhaul.

Chatbots and their limitations

That could be because we haven't given chatbots like Google's Gemini and OpenAI's ChatGPT the proper capabilities yet; they're typically limited to taking in and spitting out text via a chat interface.

Things may become more fascinating in commercial settings when AI businesses begin to deploy so-called "AI agents," which may perform actions by running other software on a computer or over the internet.

Tool use for AI

Anthropic, a rival of OpenAI, unveiled a big new product today that seeks to establish the notion that tool use is required for AI's next jump in usefulness. The business is allowing developers to instruct its chatbot Claude to use external services and software to complete more valuable tasks. 

Claude can, for example, use a calculator to solve math problems that vex large language models, be asked to query a database storing customer information, or be directed to use other programs on a user's computer when that would be beneficial.
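The pattern behind this kind of tool use can be sketched as a simple dispatch loop: the model emits a structured tool call, the host program executes it, and the result is fed back to the model. Below is a minimal Python illustration with a stubbed model response and a hypothetical calculator tool; the names and message shape are ours for illustration, not Anthropic's actual API.

```python
import ast
import operator

def calculator(expression: str) -> str:
    """Safely evaluate a basic arithmetic expression (no eval())."""
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in ops:
            return ops[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")

    return str(ev(ast.parse(expression, mode="eval")))

# Hypothetical registry of tools the assistant may call.
TOOLS = {"calculator": calculator}

def run_turn(model_output: dict) -> str:
    """If the model asked for a tool, run it and return the result."""
    if model_output.get("type") == "tool_call":
        fn = TOOLS[model_output["name"]]
        return fn(**model_output["arguments"])
    return model_output.get("text", "")

# A stubbed model response asking for the calculator tool.
stub = {"type": "tool_call", "name": "calculator",
        "arguments": {"expression": "1234 * 5678"}}
print(run_turn(stub))  # → 7006652
```

In a real integration, the tool call would arrive from the model provider's API and the tool result would be sent back so the model can compose its final answer.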

Anthropic has been assisting various companies in developing Claude-based aides for their employees. For example, the online tutoring business Study Fetch has created a means for Claude to leverage various platform tools to customize the user interface and syllabus content displayed to students.

Other businesses are also joining the AI Stone Age. At its I/O developer conference earlier this month, Google showed off a few prototype AI agents, among other new AI features. One of the agents was created to handle online shopping returns by searching for the receipt in the customer's Gmail account, completing the return form, and scheduling a package pickup.

Challenges and caution

  • While tool use is exciting, it comes with challenges. Language models, including large ones, don’t always understand context perfectly.
  • Ensuring that AI agents behave correctly and interpret user requests accurately remains a hurdle.
  • Companies are cautiously exploring these capabilities, aware of the potential pitfalls.

The Next Leap

Moving beyond the stone age of chatbots would represent a significant leap forward. Here's what we can expect:

Action-oriented chatbots

  • Chatbots that can interact with external services will be more useful. Imagine a chatbot that books flights, schedules meetings, or orders groceries—all through seamless interactions.
  • These chatbots won’t be limited to answering questions; they’ll take action based on user requests.

Enhanced Productivity

  • As chatbots gain tool-using abilities, productivity will soar. Imagine a virtual assistant that not only schedules your day but also handles routine tasks.
  • Businesses can benefit from AI agents that automate repetitive processes, freeing up human resources for more strategic work.

Private AI Chatbots Not Safe From Hackers, Even With Encryption


In little over a year, AI assistants have become part of our daily lives, gaining access to our most private information and worries. 

Sensitive information, such as personal health questions and professional consultations, is entrusted to these digital companions. While providers utilize encryption to protect user interactions, new research raises questions about how secure AI assistants may be.

Understanding the attack on AI Assistant Responses

According to a new study, researchers have discovered an attack that can predict AI assistant responses with startling accuracy. 

The method uses large language models to refine its results and exploits a side channel present in most major AI assistants, with the exception of Google Gemini.

According to Offensive AI Research Lab, a passive adversary can identify the precise subject of more than half of all recorded responses by intercepting data packets sent back and forth between the user and the AI assistant.

Recognizing Token Privacy

The attack centers on a side channel embedded in the tokens that AI assistants use. 

Tokens, which are encoded representations of words or word fragments, enable real-time transmission of responses. But because the tokens are delivered one after another, they expose a flaw known as the "token-length sequence": by observing it, attackers can infer response content and compromise user privacy.
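The reason token-by-token delivery leaks information is that standard stream encryption preserves payload length: each encrypted packet is its plaintext plus a fixed overhead, so an eavesdropper who only sees packet sizes can recover the length of every token. A simplified model in Python, where the fixed 5-byte overhead is an illustrative assumption, not any provider's real wire format:

```python
# Illustrative only: shows why per-token streaming leaks lengths.
# A stream cipher's ciphertext is the same size as its plaintext,
# so observed packet sizes reveal each token's length.

def packet_sizes(tokens, overhead=0):
    """Ciphertext size observed on the wire for each streamed token."""
    return [len(t.encode()) + overhead for t in tokens]

response = ["I", " have", " a", " rash", " on", " my", " arm"]
sizes = packet_sizes(response, overhead=5)   # hypothetical fixed header
token_lengths = [s - 5 for s in sizes]       # attacker subtracts overhead
print(token_lengths)  # → [1, 5, 2, 5, 3, 3, 4]
```

Sequences of token lengths like this are exactly the signal the researchers feed to an LLM to reconstruct likely responses.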

The Token Inference Attack: Deciphering Encrypted Responses

In the token inference attack, researchers refine the intercepted data by using LLMs to convert token-length sequences back into comprehensible language. 

Yisroel Mirsky, the director of the Offensive AI Research Lab at Ben-Gurion University in Israel, stated in an email that "private chats sent from ChatGPT and other services can currently be read by anybody."

By using publicly accessible conversation data to train LLMs, researchers can decrypt responses with remarkably high accuracy. This technique leverages the predictability of AI assistant replies to enable contextual decryption of encrypted content, similar to a known plaintext attack.

An AI Chatbot's Anatomy: Understanding Tokenization

AI chatbots use tokens as the basic building blocks for text processing, which direct the creation and interpretation of conversation. 

To learn patterns and probabilities, LLMs examine large datasets of tokenized text during training. According to Ars Technica, tokens enable real-time communication between users and AI assistants, allowing assistants to tailor their responses to context.
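To make the idea concrete, here is a toy tokenizer that greedily matches the longest entry from a tiny fixed vocabulary. Real LLM tokenizers (typically byte-pair encoding) learn their vocabularies from data; this sketch only illustrates how text becomes a sequence of variable-length units:

```python
# Toy tokenizer: greedy longest-match against a small, hand-picked
# vocabulary. The vocabulary here is invented for illustration.
VOCAB = {"chat", "bot", "s", " ", "use", "token"}

def tokenize(text):
    tokens, i = [], 0
    while i < len(text):
        # try the longest vocabulary entry that matches at position i
        for j in range(len(text), i, -1):
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # fall back to a single character
            i += 1
    return tokens

print(tokenize("chatbots use tokens"))
# → ['chat', 'bot', 's', ' ', 'use', ' ', 'token', 's']
```

It is precisely the varying lengths of these units that the token-length side channel exposes.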

Current Vulnerabilities and Countermeasures

An important vulnerability is the real-time token transmission, which allows attackers to deduce response content based on packet length. 

Sequential delivery reveals answer data, while batch transmission hides individual token lengths. Reevaluating token transmission mechanisms is necessary to mitigate this risk and reduce susceptibility to passive adversaries.
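The batching mitigation can be sketched in a few lines: buffer tokens and flush fixed-size, padded packets so that observed sizes no longer mirror individual token lengths. The batch size and padding target below are arbitrary illustrative choices, not a real protocol:

```python
def batched_packets(tokens, batch=4, pad_to=32):
    """Group tokens into batches and pad each batch to a fixed size
    before (hypothetical) encryption, hiding per-token lengths."""
    packets = []
    for i in range(0, len(tokens), batch):
        chunk = "".join(tokens[i:i + batch]).encode()
        assert len(chunk) <= pad_to, "pad_to must cover the largest batch"
        packets.append(chunk.ljust(pad_to, b"\0"))  # pad before encrypting
    return packets

tokens = ["I", " have", " a", " rash", " on", " my", " arm"]
print([len(p) for p in batched_packets(tokens)])  # → [32, 32]
```

With every packet the same size on the wire, a passive observer learns only the approximate total length of the response, not the individual token lengths.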

Protecting the Privacy of Data in AI Interactions

Protecting user privacy is still critical as AI helpers develop. Reducing security threats requires implementing strong encryption techniques and improving token delivery mechanisms. 

By fixing flaws and improving data security protocols, providers can maintain users' faith and trust in AI technologies.

Safeguarding AI's Future

A new age of human-computer interaction is dawning with the introduction of AI helpers. But innovation also means accountability. 

Providers need to give data security and privacy top priority as researchers uncover vulnerabilities. Hackers are out there, and without such safeguards our private chats could end up in the wrong hands.

Google Restricts Gemini Chatbot's Election Answers

Google has limited its AI chatbot Gemini's ability to respond to queries about several forthcoming elections this year, including the presidential election in the United States. According to an announcement the company made on Tuesday, Gemini will no longer answer election-related questions for users in the U.S. and India. 

Previously known as Bard, Gemini now declines to answer questions about the 2024 general elections. Reports indicate that the update is live in the United States, is rolling out in India, and will follow in other major countries approaching elections in the next few months. 

In explaining the change, Google cited concern that generative AI could be weaponized by users to produce inaccurate or misleading results, given the role the technology is playing, and will continue to play, in the electoral process. 

Millions of Indian citizens will vote in a general election this spring, and the company has taken several steps to protect its services from misinformation ahead of the poll. 

Several high-stakes elections are planned this year in countries such as the United States, India, South Africa, and the United Kingdom. It is widely known that artificial intelligence is being used to generate disinformation, with a significant impact on global elections: the technology enables robocalls, deepfakes, and chatbots to spread misinformation. 

The switch was made in India just days after the country released an advisory demanding that tech companies obtain government approval before launching new AI models. Google's AI products have also drawn scrutiny recently: inaccuracies in some historical depictions of people created by Gemini forced the company to halt the chatbot's image-generation feature, earning it negative attention. 

Company CEO Sundar Pichai called the chatbot's responses "completely unacceptable" and said the issues are being remediated. Meta Platforms, the parent company of Facebook, announced last month that it would set up a team ahead of the European Parliament elections in June to combat disinformation and the abuse of generative AI. 

As generative AI advances, government officials around the world have grown concerned about misinformation, prompting measures to control its use. India recently informed technology companies that they must obtain approval before releasing AI tools that are "unreliable" or still undergoing testing. 

Google apologised in February after its recently launched Gemini image generator inaccurately depicted a Black man among the US Founding Fathers; it also produced historically inaccurate images of German soldiers from World War Two.