Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Google Gemini. Show all posts

Google Gemini Calendar Flaw Allows Meeting Invites to Leak Private Data

 

Though built to make life easier, artificial intelligence helpers sometimes carry hidden risks. A recent study reveals that everyday features - such as scheduling meetings - can become pathways for privacy breaches. Instead of protecting data, certain functions may unknowingly expose it. Experts from Miggo Security identified a flaw in Google Gemini’s connection to Google Calendar. Their findings show how an ordinary invite might secretly gather private details. What looks innocent on the surface could serve another purpose beneath. 

A fresh look at Gemini shows it helps people by understanding everyday speech and pulling details from tools like calendars. Because the system responds to words instead of rigid programming rules, security experts from Miggo discovered a gap in its design. Using just text that seems normal, hackers might steer the AI off course. These insights, delivered openly to Hackread.com, reveal subtle risks hidden in seemingly harmless interactions. 

A single calendar entry is enough to trigger the exploit - no clicking, no downloads, no obvious red flags. Hidden inside what looks like normal event details sits coded directions meant for machines, not people. Rather than arriving through email attachments or shady websites, the payload comes disguised as routine scheduling data. The wording blends in visually, yet when processed by Gemini, it shifts into operational mode. Instructions buried in plain sight tell the system to act without signaling intent to the recipient. 

A single harmful invitation sits quietly once added to the calendar. Only after the user poses a routine inquiry - like asking about free time on Saturday - is anything set in motion. When Gemini checks the agenda, it reads the tainted event along with everything else. Within that entry lies a concealed instruction: gather sensitive calendar data and compile a report. Using built-in features of Google Calendar, the system generates a fresh event containing those extracted details. 

Without any sign, personal timing information ends up embedded within a new appointment. What makes the threat hard to spot is its invisible nature. Though responses appear normal, hidden processes run without alerting the person using the system. Instead of bugs in software, experts point to how artificial intelligence understands words as the real weak point. The concern grows as behavior - rather than broken code - becomes the source of danger. Not seeing anything wrong does not mean everything is fine. 

Back in December 2025, problems weren’t new for Google’s AI tools when it came to handling sneaky language tricks. A team at Noma Security found a gap called GeminiJack around that time. Hidden directions inside files and messages could trigger leaks of company secrets through the system. Experts pointed out flaws deep within how these smart tools interpret context across linked platforms. The design itself seemed to play a role in the vulnerability. Following the discovery by Miggo Security, Google fixed the reported flaw. 

Still, specialists note similar dangers remain possible. Most current protection systems look for suspicious code or URLs - rarely do they catch damaging word patterns hidden within regular messages. When AI helpers get built into daily software and given freedom to respond independently, some fear misuse may grow. Unexpected uses of helpful features could lead to serious consequences, researchers say.

Google Appears to Be Preparing Gemini Integration for Chrome on Android

 

Google appears to be testing a new feature that could significantly change how users browse the web on mobile devices. The company is reportedly experimenting with integrating its AI model, Gemini, directly into Chrome for Android, enabling advanced agentic browsing capabilities within the mobile browser.

The development was first highlighted by Leo on X, who shared that Google has begun testing Gemini integration alongside agentic features in Chrome’s Android version. These findings are based on newly discovered references within Chromium, the open-source codebase that forms the foundation of the Chrome browser.

Additional insight comes from a Chromium post, where a Google engineer explained the recent increase in Chrome’s binary size. According to the engineer, "Binary size is increased because this change brings in a lot of code to support Chrome Glic, which will be enabled in Chrome Android in the near future," suggesting that the infrastructure needed for Gemini support is already being added. For those unfamiliar, “Glic” is the internal codename used by Google for Gemini within Chrome.

While the references do not reveal exactly how Gemini will function inside Chrome for Android, they strongly indicate that Google is actively preparing the feature. The integration could mirror the experience offered by Microsoft Copilot in Edge for Android. In such a setup, users might see a floating Gemini button that allows them to summarize webpages, ask follow-up questions, or request contextual insights without leaving the browser.

On desktop platforms, Gemini in Chrome already offers similar functionality by using the content of open tabs to provide contextual assistance. This includes summarizing articles, comparing information across multiple pages, and helping users quickly understand complex topics. However, Gemini’s desktop integration is still not widely available. Users who do have access can launch it using Alt + G on Windows or Ctrl + G on macOS.

The potential arrival of Gemini in Chrome for Android could make AI-powered browsing more accessible to a wider audience, especially as mobile devices remain the primary way many users access the internet. Agentic capabilities could help automate common tasks such as researching topics, extracting key points from long articles, or navigating complex websites more efficiently.

At present, Google has not confirmed when Gemini will officially roll out to Chrome for Android. However, the appearance of multiple references in Chromium suggests that development is progressing steadily. With Google continuing to expand Gemini across its ecosystem, an official announcement regarding its availability on Android is expected in the near future.

Gmail Users Face New AI Threats as Google Expands Encryption and Gemini Features

 

  
Gmail users have a fresh security challenge to watch out for — the mix of your Gmail inbox, Calendar, and AI assistant might pose unexpected risks. From malicious prompts hidden in emails or calendar invites to compromised assistants secretly extracting information, users need to stay cautious.

According to Google, “a new wave of threats is emerging across the industry with the aim of manipulating AI systems themselves.” These risks come from “emails, documents, or calendar invites that instruct AI to exfiltrate user data or execute other rogue actions.”

The integration of Gemini into Gmail was designed to simplify inbox management with smarter search, replies, writing assistance, and summaries. Alongside this, Google has rolled out another significant Gmail feature — expanded client-side encryption (CSE).

As announced on October 2, this feature is now “generally available.” Gmail users with CSE can send end-to-end encrypted (E2EE) messages to anyone, even non-Gmail users. Recipients simply receive a notification and can view the encrypted message through a guest account — offering secure communication without manual key exchanges.

However, these two major Gmail updates — Gemini AI and encryption — don’t work seamlessly together. Users must choose between AI assistance and total privacy. When CSE is active, Google confirms that “the protected data is indecipherable to any unauthorized third-party, including Google or any generative AI assistants, such as Gemini.”

That means Gemini cannot access encrypted messages, which aligns with how encryption should work — but it limits AI functionality. Google adds that the new encryption will be “on by default for users that have access to Gmail Client-side encryption.” While the encryption isn’t purely end-to-end since organizations still manage the keys, it still offers stronger protection than standard emails.

When it comes to Gemini’s access to your inbox, Google advises users to “apply client-side encryption to prevent Gemini’s access to sensitive data.” In short, enabling encryption remains the most crucial step to ensure privacy in the age of AI-driven email management

How Google Enhances AI Security with Red Teaming

 

Google continues to strengthen its cybersecurity framework, particularly in safeguarding AI systems from threats such as prompt injection attacks on Gemini. By leveraging automated red team hacking bots, the company is proactively identifying and mitigating vulnerabilities.

Google employs an agentic AI security team to streamline threat detection and response using intelligent AI agents. A recent report by Google highlights its approach to addressing prompt injection risks in AI systems like Gemini.

“Modern AI systems, like Gemini, are more capable than ever, helping retrieve data and perform actions on behalf of users,” the agent team stated. “However, data from external sources present new security challenges if untrusted sources are available to execute instructions on AI systems.”

Prompt injection attacks exploit AI models by embedding concealed instructions within input data, influencing system behavior. To counter this, Google is integrating advanced security measures, including automated red team hacking bots.

To enhance AI security, Google employs red teaming—a strategy that simulates real-world cyber threats to expose vulnerabilities. As part of this initiative, Google has developed a red-team framework to generate and test prompt injection attacks.

“Crafting successful indirect prompt injections,” the Google agent AI security team explained, “requires an iterative process of refinement based on observed responses.”

This framework leverages optimization-based attacks to refine prompt injection techniques, ensuring AI models remain resilient against sophisticated threats.

“Weak attacks do little to inform us of the susceptibility of an AI system to indirect prompt injections,” the report highlighted.

Although red team hacking bots challenge AI defenses, they also play a crucial role in reinforcing the security of systems like Gemini against unauthorized data access.

Key Attack Methodologies

Google evaluates Gemini's robustness using two primary attack methodologies:

1. Actor-Critic Model: This approach employs an attacker-controlled model to generate prompt injections, which are tested against the AI system. “These are passed to the AI system under attack,” Google explained, “which returns a probability score of a successful attack.” The bot then refines the attack strategy iteratively until a vulnerability is exploited.

2. Beam Search Technique: This method initiates a basic prompt injection that instructs Gemini to send sensitive information via email to an attacker. “If the AI system recognizes the request as suspicious and does not comply,” Google said, “the attack adds random tokens to the end of the prompt injection and measures the new probability of the attack succeeding.” The process continues until an effective attack method is identified.

By leveraging red team hacking bots and AI-driven security frameworks, Google is continuously improving AI resilience, ensuring robust protection against evolving threats.

From Text to Action: Chatbots in Their Stone Age

From Text to Action: Chatbots in Their Stone Age

The stone age of AI

Despite all the talk of generative AI disrupting the world, the technology has failed to significantly transform white-collar jobs. Workers are experimenting with chatbots for activities like email drafting, and businesses are doing numerous experiments, but office work has yet to experience a big AI overhaul.

Chatbots and their limitations

That could be because we haven't given chatbots like Google's Gemini and OpenAI's ChatGPT the proper capabilities yet; they're typically limited to taking in and spitting out text via a chat interface.

Things may become more fascinating in commercial settings when AI businesses begin to deploy so-called "AI agents," which may perform actions by running other software on a computer or over the internet.

Tool use for AI

Anthropic, a rival of OpenAI, unveiled a big new product today that seeks to establish the notion that tool use is required for AI's next jump in usefulness. The business is allowing developers to instruct its chatbot Claude to use external services and software to complete more valuable tasks. 

Claude can, for example, use a calculator to solve math problems that vex big language models; be asked to visit a database storing customer information; or be forced to use other programs on a user's computer when it would be beneficial.

Anthropic has been assisting various companies in developing Claude-based aides for their employees. For example, the online tutoring business Study Fetch has created a means for Claude to leverage various platform tools to customize the user interface and syllabus content displayed to students.

Other businesses are also joining the AI Stone Age. At its I/O developer conference earlier this month, Google showed off a few prototype AI agents, among other new AI features. One of the agents was created to handle online shopping returns by searching for the receipt in the customer's Gmail account, completing the return form, and scheduling a package pickup.

Challenges and caution

  • While tool use is exciting, it comes with challenges. Language models, including large ones, don’t always understand context perfectly.
  • Ensuring that AI agents behave correctly and interpret user requests accurately remains a hurdle.
  • Companies are cautiously exploring these capabilities, aware of the potential pitfalls.

The Next Leap

The Stone Age of chatbots represents a significant leap forward. Here’s what we can expect:

Action-oriented chatbots

  • Chatbots that can interact with external services will be more useful. Imagine a chatbot that books flights, schedules meetings, or orders groceries—all through seamless interactions.
  • These chatbots won’t be limited to answering questions; they’ll take action based on user requests.

Enhanced Productivity

  • As chatbots gain tool-using abilities, productivity will soar. Imagine a virtual assistant that not only schedules your day but also handles routine tasks.
  • Businesses can benefit from AI agents that automate repetitive processes, freeing up human resources for more strategic work.

Gemini: Google Launches its Most Powerful AI Software Model


Google has recently launched Gemini, its most powerful generative AI software model to date. And since the model is designed in three different sizes, Gemini may be utilized in a variety of settings, including mobile devices and data centres.

Google has been working on the development of the Gemini large language model (LLM) for the past eight months and just recently provided access to its early versions to a small group of companies. This LLM is believed to be giving head-to-head competition to other LLMs like Meta’s Llama 2 and OpenAI’s GPT-4. 

The AI model is designed to operate on various formats, be it text, image or video, making the feature one of the most significant algorithms in Google’s history.

In a blog post, Google CEO Sundar Pichai wrote, “This new era of models represents one of the biggest science and engineering efforts we’ve undertaken as a company.”

The new LLM, also known as a multimodal model, is capable of various methods of input, like audio, video, and images. Traditionally, multimodal model creation involves training discrete parts for several modalities and then piecing them together.

“These models can sometimes be good at performing certain tasks, like describing images, but struggle with more conceptual and complex reasoning,” Pichai said. “We designed Gemini to be natively multimodal, pre-trained from the start on different modalities. Then we fine-tuned it with additional multimodal data to further refine its effectiveness.”

Google also unveiled the Cloud TPU v5p, its most potent ASIC chip, in tandem with the launch. This chip was created expressly to meet the enormous processing demands of artificial intelligence. According to the company, the new processor can train LLMs 2.8 times faster than Google's prior TPU v4.

For ChatGPT and Bard, two examples of generative AI chatbots, LLMs are the algorithmic platforms.

The Cloud TPU v5e, which touted 2.3 times the price performance over the previous generation TPU v4, was made generally available by Google earlier last year. The TPU v5p is significantly faster than the v4, but it costs three and a half times as much./ Google’s new Gemini LLM is now available in some of Google’s core products. For example, Google’s Bard chatbot is using a version of Gemini Pro for advanced reasoning, planning, and understanding. 

Developers and enterprise customers can use the Gemini API in Vertex AI or Google AI Studio, the company's free web-based development tool, to access Gemini Pro as of December 13. Further improvements to Gemini Ultra, including thorough security and trust assessments, led Google to announce that it will be made available to a limited number of users in early 2024, ahead of developers and business clients.