Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Gemini. Show all posts

AI Adoption Outpaces Cybersecurity Awareness as Users Share Sensitive Data with Chatbots

 

The global surge in the use of AI tools such as ChatGPT and Gemini is rapidly outpacing efforts to educate users about the cybersecurity risks these technologies pose, according to a new study. The research, conducted by the National Cybersecurity Alliance (NCA) in collaboration with cybersecurity firm CybNet, surveyed over 6,500 individuals across seven countries, including the United States. It found that 65% of respondents now use AI in their everyday lives—a 21% increase from last year—yet 58% said they had received no training from employers on the data privacy and security challenges associated with AI use. 

“People are embracing AI in their personal and professional lives faster than they are being educated on its risks,” said Lisa Plaggemier, Executive Director of the NCA. The study revealed that 43% of respondents admitted to sharing sensitive information, including company financial data and client records, with AI chatbots, often without realizing the potential consequences. The findings highlight a growing disconnect between AI adoption and cybersecurity preparedness, suggesting that many organizations are failing to educate employees on how to use these tools responsibly. 

The NCA-CybNet report aligns with previous warnings about the risks posed by AI systems. A survey by software company SailPoint earlier this year found that 96% of IT professionals believe AI agents pose a security risk, while 84% said their organizations had already begun deploying the technology. These AI agents—designed to automate tasks and improve efficiency—often require access to sensitive internal documents, databases, or systems, creating new vulnerabilities. When improperly secured, they can serve as entry points for hackers or even cause catastrophic internal errors, such as one case where an AI agent accidentally deleted an entire company database. 

Traditional chatbots also come with risks, particularly around data privacy. Despite assurances from companies, most chatbot interactions are stored and sometimes used for future model training, meaning they are not entirely private. This issue gained attention in 2023 when Samsung engineers accidentally leaked confidential data to ChatGPT, prompting the company to ban employee use of the chatbot. 

The integration of AI tools into mainstream software has only accelerated their ubiquity. Microsoft recently announced that AI agents will be embedded into Word, Excel, and PowerPoint, meaning millions of users may interact with AI daily—often without any specialized training in cybersecurity. As AI becomes an integral part of workplace tools, the potential for human error, unintentional data sharing, and exposure to security breaches increases. 

While the promise of AI continues to drive innovation, experts warn that its unchecked expansion poses significant security challenges. Without comprehensive training, clear policies, and safeguards in place, individuals and organizations risk turning powerful productivity tools into major sources of vulnerability. The race to integrate AI into every aspect of modern life is well underway—but for cybersecurity experts, the race to keep users informed and protected is still lagging far behind.

Gemini in Chrome: Google Can Now Track Your Phone

Gemini in Chrome: Google Can Now Track Your Phone

Is the Gemini browser collecting user data?

A new warning for 2 billion Chrome users, Google has announced that its browser will start collecting “sensitive data” on smartphones. “Starting today, we’re rolling out Gemini in Chrome,” Google said, which will be the “biggest upgrade to Chrome in its history.” The data that can be collected includes the device ID, username, location, search history, and browsing history. 

Agentic AI and browsers

Surfshark investigated the user privacy of AI browsers after Google’s announcement and found that if you use Chrome with Gemini on your smartphone, Google can collect 24 types of data. According to Surfshark, this is bigger than any other agentic AI browsers that have been analyzed. 

For instance, Microsoft’s Edge browser, which has Copilot, only collects half the data compared to Chrome and Gemini. Even Brave, Opera, and Perplexity collect less data. With the Gemini-in-Chrome extension, however, users should be more careful. 

Now that AI is everywhere, a lot of browsers like Firefox, Chrome, and Edge allow users to integrate agentic AI extensions. Although these tools are handy, relying on them can expose your privacy and personal data to third-party companies.

There have been incidents recently where data harvesting resulted from browser extensions, even those downloaded from official stores. 

The new data collection warning comes at the same time as the Gemini upgrade this month, called “Nano Banana.” This new update will also feed on user data. 

According to Android Authority, “Google may be working on bringing Nano Banana, Gemini’s popular image editing tool, to Google Photos. We’ve uncovered a GIF for a new ‘Create’ feature in the Google Photos app, suggesting it’ll use Nano Banana inside the app. It’s unclear when the feature will roll out.”

AI browser concerns

Experts have warned that every photo you upload has a biometric fingerprint which consists of your micro-expressions, unique facial geometry, body proportions, and micro-expressions. The biometric data included device fingerprinting, behavioural biometrics, social network mapping, and GPS coordinates.

Besides this, Apple’s Safari now has anti-fingerprinting technology as the default browsing for iOS 26. However, users should only use their own browser for it to work. For instance, if you use Chrome on an Apple device, it won’t work. Another reason why Apply is advising users to use the Safari browser and not Chrome. 

Google Gemini Bug Exploits Summaries for Phishing Scams


False AI summaries leading to phishing attacks

Google Gemini for Workspace can be exploited to generate email summaries that appear legitimate but include malicious instructions or warnings that direct users to phishing sites without using attachments or direct links.

Google Gemini for Workplace can be compromised to create email summaries that look real but contain harmful instructions or warnings that redirect users to phishing websites without using direct links or attachments. 

Similar attacks were reported in 2024 and afterwards; safeguards were pushed to stop misleading responses. However, the tactic remains a problem for security experts. 

Gemini for attack

A prompt-injection attack on the Gemini model was revealed via cybersecurity researcher Marco Figueoa, at 0din, Mozilla’s bug bounty program for GenAI tools. The tactic creates an email with a hidden directive for Gemini. The threat actor can hide malicious commands in the message body text at the end via CSS and HTML, which changes the font size to zero and color to white. 

According to Marco, who is GenAI Bug Bounty Programs Manager at Mozilla, “Because the injected text is rendered in white-on-white (or otherwise hidden), the victim never sees the instruction in the original message, only the fabricated 'security alert' in the AI-generated summary. Similar indirect prompt attacks on Gemini were first reported in 2024, and Google has already published mitigations, but the technique remains viable today.”

Gmail does not render the malicious instruction as there are no attachments or links present, and the message may reach the victim’s inbox. If the receiver opens the email and asks Gemini to make a summary of the received mail, the AI tool will parse the invisible directive and create the summary. Figueroa provides an example of Gemini following hidden prompts, accompanied by a security warning that the victim’s Gmail password and phone number may be compromised.

Impact

Supply-chain threats: CRM systems, automated ticketing emails, and newsletters can become injection vectors, changing one exploited SaaS account into hundreds of thousands of phishing beacons.

Cross-product surface: The same tactics applies to Gemini in Slides, Drive search, Docs and any workplace where the model is getting third-party content.

According to Marco, “Security teams must treat AI assistants as part of the attack surface and instrument them, sandbox them, and never assume their output is benign.”

Google to Launch Gemini AI for Children Under 13

Google to Launch Gemini AI for Children Under 13

Google plans to roll out its Gemini artificial intelligence chatbot next week for children younger than 13 with parent-managed Google accounts, as tech companies vie to attract young users with AI products.

Google will launch its Gemini AI chatbot soon for children below the age of 13 with parent-managed Google accounts. The move comes as tech companies try to attract young users with AI tools. According to a mail sent to a parent of an 8-year-old, Google apps will soon be available to a child. It means your child can use Gemini to ask questions, get homework help, and also create stories. 

That chatbot will be available to children whose guardians have Family Link, a Google feature that allows families to make Gmail and opt-in services like YouTube for their children. To register a child account, the parent gives the tech company the child’s personal information such as name and date of birth. 

According to Google spokesperson Karl Ryan, Gemini has concrete measures for younger users to restrict the chatbot from creating unsafe or harmful content. If a child with a Family Link account uses Gemini, the company can not use the data for training its AI model. 

Gemini for children can drive the use of chatbots among vulnerable populations as companies, colleges, schools, and others struggle with the effects of popular gen AI tech. The systems are trained on massive amounts of data sets to create human-like text and realistic images and videos. Google and other AI chatbot developers are battling fierce competition to get young users’ attention. 

Recently, President Donald Trump requested schools to embrace tools for teaching and learning. Millions of teens are already using chatbots for study help, virtual companions, and writing coaches. Experts have warned that chatbots could pose serious threats to child safety. 

The bots are known to sometimes make things up. UNICEF and other children's advocacy groups have found that AI systems can misinform, manipulate, and confuse young children who may face difficulties understanding that the chatbots are not humans. 

According to UNICEF’s global research office, “Generative AI has produced dangerous content,” posing risks for children. Google has acknowledged some risks, cautioning parents that “Gemini can make mistakes” and suggesting they “help your child think critically” about the chatbot. 

Restrictions on Gemini Chatbot's Election Answers by Google

 


AI chatbot Gemini has been limited by Google in terms of its ability to respond to queries concerning several forthcoming elections in several countries, including the presidential election in the United States, this year. According to an announcement made by the company on Tuesday, Gemini, Google's artificial intelligence chatbot, will no longer answer election-related questions for users in the U.S. and India. 

Previously known as Bard, Google's AI chatbot Gemini has been unable to answer questions about the general elections of 2024. Various reports indicate that the update is already live in the United States, is already being rolled out in India, and is now being rolled out in all major countries that are approaching elections within the next few months. 

As a result of the change, Google has expressed concern about how the generative AI could be weaponized by users and produce inaccurate or misleading results, as well as the role it has been playing and will continue to play in the electoral process. 

In advance of the general elections in India this spring, millions of Indian citizens will be voting in a general election, and the company has taken several steps to ensure that its services are secure from misinformation. 

Several high-stakes elections are planned this year in countries such as the United States, India, South Africa, and the United Kingdom that require a significant amount of chatbot capabilities. It is widely known that artificial intelligence (AI) is generating disinformation and it is having a significant impact on global elections. This technology allows robocalls, deep fakes, and chatbots to be used to spread misinformation. 

Just days after India released an advisory demanding that companies in the tech industry get government approval before they launch their new AI models, the switch has been made in India. A recent investigation of Google's artificial intelligence products has resulted in a wide range of concerns, including inaccuracies in some historical depictions of people created by Gemini that forced the chatbot's image-generation feature to be halted, which has caused it to receive negative attention. 

According to the CEO of the company, Sundar Pichai, the chatbot is being remediated and is "completely unacceptable" for its responses. The parent company of Facebook, Meta Platforms, announced last month that it would set up a team in advance of the European Parliament elections in June to combat disinformation and the abuse of generative AI. 

As generative AI is advancing across the globe, government officials across the globe have been concerned about misinformation, prompting them to take measures to control its use. As of recently, India has informed technology companies that they need to obtain approval before releasing AI tools that have been "unreliable" or that are undergoing testing. 

The company apologised in February after its recently launched AI image generator, Gemini, created an image of the US Founding Fathers in which a black man was inappropriately depicted as a member of the group. Gemini also created an incorrectly depicted image of German soldiers from World War Two.

Winklevoss Crypto Firm Gemini to Return $1.1B to Customers in Failed "Earn" Scheme

‘Earn’ product fiasco

Gemini to return money

As part of a settlement with regulators on Wednesday, the cryptocurrency company Gemini, owned by the Winklevoss twins, agreed to repay at least $1.1 billion to consumers of its failed "Earn" loan scheme and pay a $37 million fine for "significant" compliance violations.

The New York State Department of Financial Services claims that Gemini, which the twins started following their well-known argument with Mark Zuckerberg over who developed Facebook, neglected to "fully vet or sufficiently monitor" Genesis, Gemini Earn's now-bankrupt lending partner.

What is the Earn Program?

The Earn program, which promised users up to 8% income on their cryptocurrency deposits, was canceled in November 2022 when Genesis was unable to pay withdrawals due to the fall of infamous scammer Sam Bankman-Fried's FTX enterprise.

Since then, almost 30,000 residents of New York and over 200,000 other Earn users have lost access to their money.

Gemini "engaged in unsafe and unsound practices that ultimately threatened the financial health of the company," according to the state regulator.

NYSDFS Superintendent Adrienne Harris claimed in a statement that "Gemini failed to conduct due diligence on an unregulated third party, later accused of massive fraud, harming Earn customers who were suddenly unable to access their assets after Genesis Global Capital experienced a financial meltdown." 

Customers win lawsuit

Customers of Earn, who are entitled to the assets they committed to Gemini, have won with today's settlement.

“Collecting hundreds of millions of dollars in fees from Gemini customers that otherwise could have gone to Gemini, substantially weakening Gemini’s financial condition,” was the unregulated affiliate that dubbed Gemini Liquidity during the crisis.

Although it did not provide any details, the regulator added that it "further identified various management and compliance deficiencies."

Gemini also consented to pay $40 million to Genesis' bankruptcy proceedings as part of the settlement, for the benefit of Earn customers.

"If the company does not fulfill its obligation to return at least $1.1 billion to Earn customers after the resolution of the [Genesis] bankruptcy," the NYSDFS stated that it "has the right to bring further action against Gemini."

Gemini announced that the settlement would "result in all Earn users receiving 100% of their digital assets back in kind" during the following 12 months in a long statement that was posted on X.

The business further stated that final documentation is required for the settlement and that it may take up to two months for the bankruptcy court to approve it.

The New York Department of Financial Services (DFS) was credited by Gemini with helping to reach a settlement that gives Earn users a coin-for-coin recovery.

More about the lawsuit

Attorney General Letitia James of New York filed a lawsuit against Genesis and Gemini in October, accusing them of defrauding Earn consumers out of their money and labeling them as "bad actors."

James tripled the purported scope of the lawsuit earlier this month. The complaint was submitted a few weeks after The Post revealed that, on August 9, 2022, well in advance of Genesis's bankruptcy, Gemini had surreptitiously taken $282 million in cryptocurrency from the company.

Subsequently, the twins stated that the change was made to the advantage of the patrons.

The brothers' actions, however, infuriated Earn customers, with one disgruntled investor telling The Post that "there's no good way that Gemini can spin this."

In a different lawsuit, the SEC is suing Gemini and Genesis because the Earn program was an unregistered security.

The collapse of Earn was a significant blow to the Winklevoss twins' hopes of becoming a dominant force in the industry.

Gemini had built its brand on the idea that it was a reliable player in the wild, mostly uncontrolled cryptocurrency market.

Gemini: Google Launches its Most Powerful AI Software Model


Google has recently launched Gemini, its most powerful generative AI software model to date. And since the model is designed in three different sizes, Gemini may be utilized in a variety of settings, including mobile devices and data centres.

Google has been working on the development of the Gemini large language model (LLM) for the past eight months and just recently provided access to its early versions to a small group of companies. This LLM is believed to be giving head-to-head competition to other LLMs like Meta’s Llama 2 and OpenAI’s GPT-4. 

The AI model is designed to operate on various formats, be it text, image or video, making the feature one of the most significant algorithms in Google’s history.

In a blog post, Google CEO Sundar Pichai wrote, “This new era of models represents one of the biggest science and engineering efforts we’ve undertaken as a company.”

The new LLM, also known as a multimodal model, is capable of various methods of input, like audio, video, and images. Traditionally, multimodal model creation involves training discrete parts for several modalities and then piecing them together.

“These models can sometimes be good at performing certain tasks, like describing images, but struggle with more conceptual and complex reasoning,” Pichai said. “We designed Gemini to be natively multimodal, pre-trained from the start on different modalities. Then we fine-tuned it with additional multimodal data to further refine its effectiveness.”

Google also unveiled the Cloud TPU v5p, its most potent ASIC chip, in tandem with the launch. This chip was created expressly to meet the enormous processing demands of artificial intelligence. According to the company, the new processor can train LLMs 2.8 times faster than Google's prior TPU v4.

For ChatGPT and Bard, two examples of generative AI chatbots, LLMs are the algorithmic platforms.

The Cloud TPU v5e, which touted 2.3 times the price performance over the previous generation TPU v4, was made generally available by Google earlier last year. The TPU v5p is significantly faster than the v4, but it costs three and a half times as much./ Google’s new Gemini LLM is now available in some of Google’s core products. For example, Google’s Bard chatbot is using a version of Gemini Pro for advanced reasoning, planning, and understanding. 

Developers and enterprise customers can use the Gemini API in Vertex AI or Google AI Studio, the company's free web-based development tool, to access Gemini Pro as of December 13. Further improvements to Gemini Ultra, including thorough security and trust assessments, led Google to announce that it will be made available to a limited number of users in early 2024, ahead of developers and business clients.