A new warning for 2 billion Chrome users, Google has announced that its browser will start collecting “sensitive data” on smartphones. “Starting today, we’re rolling out Gemini in Chrome,” Google said, which will be the “biggest upgrade to Chrome in its history.” The data that can be collected includes the device ID, username, location, search history, and browsing history.
Surfshark investigated the user privacy of AI browsers after Google’s announcement and found that if you use Chrome with Gemini on your smartphone, Google can collect 24 types of data. According to Surfshark, this is bigger than any other agentic AI browsers that have been analyzed.
For instance, Microsoft’s Edge browser, which has Copilot, only collects half the data compared to Chrome and Gemini. Even Brave, Opera, and Perplexity collect less data. With the Gemini-in-Chrome extension, however, users should be more careful.
Now that AI is everywhere, a lot of browsers like Firefox, Chrome, and Edge allow users to integrate agentic AI extensions. Although these tools are handy, relying on them can expose your privacy and personal data to third-party companies.
There have been incidents recently where data harvesting resulted from browser extensions, even those downloaded from official stores.
The new data collection warning comes at the same time as the Gemini upgrade this month, called “Nano Banana.” This new update will also feed on user data.
According to Android Authority, “Google may be working on bringing Nano Banana, Gemini’s popular image editing tool, to Google Photos. We’ve uncovered a GIF for a new ‘Create’ feature in the Google Photos app, suggesting it’ll use Nano Banana inside the app. It’s unclear when the feature will roll out.”
Experts have warned that every photo you upload has a biometric fingerprint which consists of your micro-expressions, unique facial geometry, body proportions, and micro-expressions. The biometric data included device fingerprinting, behavioural biometrics, social network mapping, and GPS coordinates.
Besides this, Apple’s Safari now has anti-fingerprinting technology as the default browsing for iOS 26. However, users should only use their own browser for it to work. For instance, if you use Chrome on an Apple device, it won’t work. Another reason why Apply is advising users to use the Safari browser and not Chrome.
Google Gemini for Workspace can be exploited to generate email summaries that appear legitimate but include malicious instructions or warnings that direct users to phishing sites without using attachments or direct links.
Google Gemini for Workplace can be compromised to create email summaries that look real but contain harmful instructions or warnings that redirect users to phishing websites without using direct links or attachments.
Similar attacks were reported in 2024 and afterwards; safeguards were pushed to stop misleading responses. However, the tactic remains a problem for security experts.
A prompt-injection attack on the Gemini model was revealed via cybersecurity researcher Marco Figueoa, at 0din, Mozilla’s bug bounty program for GenAI tools. The tactic creates an email with a hidden directive for Gemini. The threat actor can hide malicious commands in the message body text at the end via CSS and HTML, which changes the font size to zero and color to white.
According to Marco, who is GenAI Bug Bounty Programs Manager at Mozilla, “Because the injected text is rendered in white-on-white (or otherwise hidden), the victim never sees the instruction in the original message, only the fabricated 'security alert' in the AI-generated summary. Similar indirect prompt attacks on Gemini were first reported in 2024, and Google has already published mitigations, but the technique remains viable today.”
Gmail does not render the malicious instruction as there are no attachments or links present, and the message may reach the victim’s inbox. If the receiver opens the email and asks Gemini to make a summary of the received mail, the AI tool will parse the invisible directive and create the summary. Figueroa provides an example of Gemini following hidden prompts, accompanied by a security warning that the victim’s Gmail password and phone number may be compromised.
Supply-chain threats: CRM systems, automated ticketing emails, and newsletters can become injection vectors, changing one exploited SaaS account into hundreds of thousands of phishing beacons.
Cross-product surface: The same tactics applies to Gemini in Slides, Drive search, Docs and any workplace where the model is getting third-party content.
According to Marco, “Security teams must treat AI assistants as part of the attack surface and instrument them, sandbox them, and never assume their output is benign.”
Google will launch its Gemini AI chatbot soon for children below the age of 13 with parent-managed Google accounts. The move comes as tech companies try to attract young users with AI tools. According to a mail sent to a parent of an 8-year-old, Google apps will soon be available to a child. It means your child can use Gemini to ask questions, get homework help, and also create stories.
That chatbot will be available to children whose guardians have Family Link, a Google feature that allows families to make Gmail and opt-in services like YouTube for their children. To register a child account, the parent gives the tech company the child’s personal information such as name and date of birth.
According to Google spokesperson Karl Ryan, Gemini has concrete measures for younger users to restrict the chatbot from creating unsafe or harmful content. If a child with a Family Link account uses Gemini, the company can not use the data for training its AI model.
Gemini for children can drive the use of chatbots among vulnerable populations as companies, colleges, schools, and others struggle with the effects of popular gen AI tech. The systems are trained on massive amounts of data sets to create human-like text and realistic images and videos. Google and other AI chatbot developers are battling fierce competition to get young users’ attention.
Recently, President Donald Trump requested schools to embrace tools for teaching and learning. Millions of teens are already using chatbots for study help, virtual companions, and writing coaches. Experts have warned that chatbots could pose serious threats to child safety.
The bots are known to sometimes make things up. UNICEF and other children's advocacy groups have found that AI systems can misinform, manipulate, and confuse young children who may face difficulties understanding that the chatbots are not humans.
According to UNICEF’s global research office, “Generative AI has produced dangerous content,” posing risks for children. Google has acknowledged some risks, cautioning parents that “Gemini can make mistakes” and suggesting they “help your child think critically” about the chatbot.
The New York State Department of Financial Services claims that Gemini, which the twins started following their well-known argument with Mark Zuckerberg over who developed Facebook, neglected to "fully vet or sufficiently monitor" Genesis, Gemini Earn's now-bankrupt lending partner.
The Earn program, which promised users up to 8% income on their cryptocurrency deposits, was canceled in November 2022 when Genesis was unable to pay withdrawals due to the fall of infamous scammer Sam Bankman-Fried's FTX enterprise.
Since then, almost 30,000 residents of New York and over 200,000 other Earn users have lost access to their money.
Gemini "engaged in unsafe and unsound practices that ultimately threatened the financial health of the company," according to the state regulator.
NYSDFS Superintendent Adrienne Harris claimed in a statement that "Gemini failed to conduct due diligence on an unregulated third party, later accused of massive fraud, harming Earn customers who were suddenly unable to access their assets after Genesis Global Capital experienced a financial meltdown."
Customers of Earn, who are entitled to the assets they committed to Gemini, have won with today's settlement.
“Collecting hundreds of millions of dollars in fees from Gemini customers that otherwise could have gone to Gemini, substantially weakening Gemini’s financial condition,” was the unregulated affiliate that dubbed Gemini Liquidity during the crisis.
Although it did not provide any details, the regulator added that it "further identified various management and compliance deficiencies."
Gemini also consented to pay $40 million to Genesis' bankruptcy proceedings as part of the settlement, for the benefit of Earn customers.
"If the company does not fulfill its obligation to return at least $1.1 billion to Earn customers after the resolution of the [Genesis] bankruptcy," the NYSDFS stated that it "has the right to bring further action against Gemini."
Gemini announced that the settlement would "result in all Earn users receiving 100% of their digital assets back in kind" during the following 12 months in a long statement that was posted on X.
The business further stated that final documentation is required for the settlement and that it may take up to two months for the bankruptcy court to approve it.
The New York Department of Financial Services (DFS) was credited by Gemini with helping to reach a settlement that gives Earn users a coin-for-coin recovery.
Attorney General Letitia James of New York filed a lawsuit against Genesis and Gemini in October, accusing them of defrauding Earn consumers out of their money and labeling them as "bad actors."
James tripled the purported scope of the lawsuit earlier this month. The complaint was submitted a few weeks after The Post revealed that, on August 9, 2022, well in advance of Genesis's bankruptcy, Gemini had surreptitiously taken $282 million in cryptocurrency from the company.
Subsequently, the twins stated that the change was made to the advantage of the patrons.
The brothers' actions, however, infuriated Earn customers, with one disgruntled investor telling The Post that "there's no good way that Gemini can spin this."
In a different lawsuit, the SEC is suing Gemini and Genesis because the Earn program was an unregistered security.
The collapse of Earn was a significant blow to the Winklevoss twins' hopes of becoming a dominant force in the industry.
Gemini had built its brand on the idea that it was a reliable player in the wild, mostly uncontrolled cryptocurrency market.
Google has been working on the development of the Gemini large language model (LLM) for the past eight months and just recently provided access to its early versions to a small group of companies. This LLM is believed to be giving head-to-head competition to other LLMs like Meta’s Llama 2 and OpenAI’s GPT-4.
The AI model is designed to operate on various formats, be it text, image or video, making the feature one of the most significant algorithms in Google’s history.
In a blog post, Google CEO Sundar Pichai wrote, “This new era of models represents one of the biggest science and engineering efforts we’ve undertaken as a company.”
The new LLM, also known as a multimodal model, is capable of various methods of input, like audio, video, and images. Traditionally, multimodal model creation involves training discrete parts for several modalities and then piecing them together.
“These models can sometimes be good at performing certain tasks, like describing images, but struggle with more conceptual and complex reasoning,” Pichai said. “We designed Gemini to be natively multimodal, pre-trained from the start on different modalities. Then we fine-tuned it with additional multimodal data to further refine its effectiveness.”
Google also unveiled the Cloud TPU v5p, its most potent ASIC chip, in tandem with the launch. This chip was created expressly to meet the enormous processing demands of artificial intelligence. According to the company, the new processor can train LLMs 2.8 times faster than Google's prior TPU v4.
For ChatGPT and Bard, two examples of generative AI chatbots, LLMs are the algorithmic platforms.
The Cloud TPU v5e, which touted 2.3 times the price performance over the previous generation TPU v4, was made generally available by Google earlier last year. The TPU v5p is significantly faster than the v4, but it costs three and a half times as much./ Google’s new Gemini LLM is now available in some of Google’s core products. For example, Google’s Bard chatbot is using a version of Gemini Pro for advanced reasoning, planning, and understanding.
Developers and enterprise customers can use the Gemini API in Vertex AI or Google AI Studio, the company's free web-based development tool, to access Gemini Pro as of December 13. Further improvements to Gemini Ultra, including thorough security and trust assessments, led Google to announce that it will be made available to a limited number of users in early 2024, ahead of developers and business clients.